Average Precision (AP) is a metric that summarizes the precision-recall curve into a single value, providing an overall measure of a model’s performance across all classification thresholds. It calculates the area under the precision-recall curve by averaging precision values at different levels of recall.

Mathematically, it is often computed as:

AP = Σₙ (Rₙ − Rₙ₋₁) · Pₙ

Where:

  • Rₙ and Rₙ₋₁: Recall values at successive thresholds.
  • Pₙ: Precision corresponding to Rₙ.

Key Point: Higher AP indicates better performance, with a perfect model achieving an AP of 1. It is widely used in object detection and other tasks where precision-recall trade-offs matter.
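As a sketch of the summation above, the snippet below ranks predictions by confidence, computes precision and recall at each threshold, and weights each precision value by the increment in recall. The function name `average_precision` and the toy labels/scores are illustrative, not from any particular library.

```python
import numpy as np

def average_precision(y_true, scores):
    # Illustrative sketch: AP = sum over n of (R_n - R_{n-1}) * P_n,
    # evaluated at every ranked prediction treated as a threshold.
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(y_true)[order]
    tp = np.cumsum(y)                          # true positives so far
    precision = tp / np.arange(1, len(y) + 1)  # P_n at each cutoff
    recall = tp / max(int(y.sum()), 1)         # R_n at each cutoff
    prev_recall = np.concatenate(([0.0], recall[:-1]))
    return float(np.sum((recall - prev_recall) * precision))

# A perfectly ranked model reaches the maximum AP of 1.0:
print(average_precision([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # → 1.0
```

Note that each term contributes only when recall increases, i.e. only at ranks where a true positive appears, which is why imperfect rankings pull the AP below 1.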