diff --git a/doc/modules/model_evaluation.rst b/doc/modules/model_evaluation.rst
index dee5865bdd33ea56adf510add45f97330bc223ba..813a39339e848c40161b40c429d193e5b0872e85 100644
--- a/doc/modules/model_evaluation.rst
+++ b/doc/modules/model_evaluation.rst
@@ -212,8 +212,8 @@ the following two rules:
 
 .. _multimetric_scoring:
 
-Using mutiple metric evaluation
--------------------------------
+Using multiple metric evaluation
+--------------------------------
 
 Scikit-learn also permits evaluation of multiple metrics in ``GridSearchCV``,
 ``RandomizedSearchCV`` and ``cross_validate``.
diff --git a/examples/model_selection/plot_multi_metric_evaluation.py b/examples/model_selection/plot_multi_metric_evaluation.py
index 5f4491e51f49cb591b215ee16acfef484a3c1c31..ea7d60dc20da297da3e7d9ab6934de260966279d 100644
--- a/examples/model_selection/plot_multi_metric_evaluation.py
+++ b/examples/model_selection/plot_multi_metric_evaluation.py
@@ -1,4 +1,7 @@
-"""Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV
+"""
+============================================================================
+Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV
+============================================================================
 
 Multiple metric parameter search can be done by setting the ``scoring``
 parameter to a list of metric scorer names or a dict mapping the scorer names
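
For reference, the behaviour described in the example's docstring (passing ``scoring`` as a list of metric scorer names or as a dict mapping scorer names to callables) can be exercised with a minimal sketch along these lines; the dataset, estimator, and parameter grid below are illustrative assumptions and are not part of this patch::

    # Minimal sketch (not from this patch): multi-metric scoring with a list
    # of scorer names and with a dict of name -> scorer.
    from sklearn.datasets import make_classification
    from sklearn.metrics import accuracy_score, make_scorer
    from sklearn.model_selection import GridSearchCV, cross_validate
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, random_state=0)

    # ``scoring`` as a list of metric scorer names.
    results = cross_validate(DecisionTreeClassifier(random_state=0), X, y,
                             scoring=["accuracy", "roc_auc"], cv=5)
    print(sorted(results))  # includes 'test_accuracy' and 'test_roc_auc'

    # ``scoring`` as a dict mapping scorer names to callables (or names).
    scoring = {"acc": make_scorer(accuracy_score), "auc": "roc_auc"}
    grid = GridSearchCV(DecisionTreeClassifier(random_state=0),
                        param_grid={"min_samples_split": [2, 10]},
                        scoring=scoring, refit="auc", cv=5)
    grid.fit(X, y)
    print(grid.best_params_)  # best setting according to the 'auc' scorer

Note that when multiple metrics are passed to ``GridSearchCV``, ``refit`` must name one of the scorers (or be set to ``False``) for attributes such as ``best_params_`` and ``best_estimator_`` to be defined.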