3. Model selection and evaluation
3.1. Cross-validation: evaluating estimator performance
3.1.1. Computing cross-validated metrics
3.1.2. Cross-validation iterators
3.1.3. A note on shuffling
3.1.4. Cross-validation and model selection
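As a quick taste of Section 3.1, a minimal sketch using scikit-learn's cross_val_score; the iris data, linear SVC and 5 folds are illustrative choices only:

>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.svm import SVC
>>> X, y = load_iris(return_X_y=True)
>>> # fit and score the model on 5 train/test splits, each fold held out in turn
>>> scores = cross_val_score(SVC(kernel='linear'), X, y, cv=5)
>>> mean_score = scores.mean()  # average accuracy over the 5 folds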
3.2. Tuning the hyper-parameters of an estimator
3.2.1. Exhaustive Grid Search
3.2.2. Randomized Parameter Optimization
3.2.3. Tips for parameter search
3.2.4. Alternatives to brute force parameter search
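Section 3.2 covers searching over hyper-parameter settings. A minimal GridSearchCV sketch, reusing X and y from the example above; the grid itself is an arbitrary illustration:

>>> from sklearn.model_selection import GridSearchCV
>>> from sklearn.svm import SVC
>>> param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}
>>> # exhaustively cross-validates all 6 parameter combinations
>>> search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
>>> best = search.best_params_  # the best-scoring combination found

RandomizedSearchCV has the same interface but samples a fixed number of candidates from parameter distributions instead of trying every combination.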
3.3. Model evaluation: quantifying the quality of predictions
3.3.1. The scoring parameter: defining model evaluation rules
3.3.2. Classification metrics
3.3.3. Multilabel ranking metrics
3.3.4. Regression metrics
3.3.5. Clustering metrics
3.3.6. Dummy estimators
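Section 3.3 explains the scoring parameter and the metric functions behind it. Sketching both routes, again reusing the objects from above (f1_macro is just one of the predefined scoring names):

>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.metrics import f1_score
>>> # select the evaluation rule by name via the scoring parameter...
>>> f1_scores = cross_val_score(SVC(), X, y, cv=5, scoring='f1_macro')
>>> # ...or call a metric function on predictions directly
>>> y_pred = search.predict(X)
>>> train_f1 = f1_score(y, y_pred, average='macro')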
3.4. Model persistence
3.4.1. Persistence example
3.4.2. Security & maintainability limitations
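Section 3.4 uses joblib to persist fitted models. A minimal round trip, reusing the data from above; 'model.joblib' is an arbitrary file name, and loading is only safe for files from a trusted source:

>>> from joblib import dump, load
>>> clf = SVC(kernel='linear').fit(X, y)
>>> dump(clf, 'model.joblib')   # write the fitted model to disk
['model.joblib']
>>> clf2 = load('model.joblib')  # reload; never load untrusted files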
3.5. Validation curves: plotting scores to evaluate models
3.5.1. Validation curve
3.5.2. Learning curve
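Section 3.5 plots scores as a function of a hyper-parameter (validation curve) or of the training-set size (learning curve). A minimal sketch of computing the scores for both; the gamma range and training sizes are illustrative:

>>> import numpy as np
>>> from sklearn.model_selection import validation_curve, learning_curve
>>> # train/validation scores as a function of one hyper-parameter
>>> train_scores, valid_scores = validation_curve(
...     SVC(), X, y, param_name='gamma',
...     param_range=np.logspace(-6, -1, 5), cv=5)
>>> # train/validation scores as a function of the number of training samples
>>> sizes, lc_train, lc_valid = learning_curve(
...     SVC(), X, y, train_sizes=[50, 80, 110], cv=5)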