diff --git a/doc/modules/model_evaluation.rst b/doc/modules/model_evaluation.rst
index a44e753fb28997887214635b100f25d5f7d1ddec..a720f4c0d4d6237106e10644edeae2421358c9bb 100644
--- a/doc/modules/model_evaluation.rst
+++ b/doc/modules/model_evaluation.rst
@@ -409,6 +409,15 @@ from the :ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py`
    :scale: 75
    :align: center
 
+For binary problems, we can get counts of true negatives, false positives,
+false negatives and true positives as follows::
+
+  >>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]
+  >>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
+  >>> tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
+  >>> tn, fp, fn, tp
+  (2, 1, 2, 3)
+
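+These four counts are simply the confusion matrix flattened in row-major
+order (the default for ``ravel``); as a quick check, the full matrix for
+the example above is::
+
+  >>> confusion_matrix(y_true, y_pred)
+  array([[2, 1],
+         [2, 3]])
+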
 .. topic:: Example:
 
   * See :ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py`
diff --git a/sklearn/metrics/classification.py b/sklearn/metrics/classification.py
index 2a7be716ee48da5139816da2bb1c7463799d3ce3..7821060a950eb9a46501240ddb8e3e8aa645ef87 100644
--- a/sklearn/metrics/classification.py
+++ b/sklearn/metrics/classification.py
@@ -186,6 +186,10 @@ def confusion_matrix(y_true, y_pred, labels=None, sample_weight=None):
     is equal to the number of observations known to be in group :math:`i` but
     predicted to be in group :math:`j`.
 
+    Thus in binary classification, the count of true negatives is
+    :math:`C_{0,0}`, false positives is :math:`C_{0,1}`, false negatives
+    is :math:`C_{1,0}` and true positives is :math:`C_{1,1}`.
+
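+    For instance, flattening the matrix of a small binary problem gives
+    back the four counts in the order true negatives, false positives,
+    false negatives, true positives (a minimal illustration):
+
+    >>> from sklearn.metrics import confusion_matrix
+    >>> confusion_matrix([0, 1, 0, 1], [1, 1, 1, 0]).ravel()
+    array([0, 2, 1, 1])
+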
     Read more in the :ref:`User Guide <confusion_matrix>`.
 
     Parameters