Model Performance Metrics Calculator
Calculate accuracy, precision, recall, F1 score, MCC, AUC, and other classification performance metrics.
Inputs
Results
Accuracy
87.5%
Precision
89.47%
Recall (Sensitivity)
85%
F1 Score
87.18%
Matthews Correlation Coefficient
75.09
AUC (Area Under Curve)
87.5
How to Use This Calculator
- Start by filling in the input fields below. Results update instantly as you type, so you can experiment with different values to see how they affect the outcome.
- True Positives (TP) — correctly predicted positive cases. Minimum value: 0 (default: 85).
- True Negatives (TN) — correctly predicted negative cases. Minimum value: 0 (default: 90).
- False Positives (FP) — negative cases incorrectly predicted as positive. Minimum value: 0 (default: 10).
- False Negatives (FN) — positive cases incorrectly predicted as negative. Minimum value: 0 (default: 15).
- Once all inputs are set, review your results in the Results panel. Here's what each output means:
- Accuracy — shown as a percentage; the fraction of all predictions that are correct, (TP + TN) / (TP + TN + FP + FN).
- Precision — shown as a percentage; the fraction of predicted positives that are actually positive, TP / (TP + FP).
- Recall (Sensitivity) — shown as a percentage; the fraction of actual positives the model correctly identifies, TP / (TP + FN).
- F1 Score — shown as a percentage; the harmonic mean of precision and recall, useful when you need a single number balancing the two.
- Matthews Correlation Coefficient — shown as a numeric value; a balanced measure that uses all four confusion-matrix counts and ranges from −1 (total disagreement) to +1 (perfect prediction).
- AUC (Area Under Curve) — shown as a numeric value.
- Explore the related calculators below if you need deeper analysis or want to approach this topic from a different angle.
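The four counts above fully determine every metric in the Results panel. Here is a minimal Python sketch (the function name `classification_metrics` is my own) that reproduces the results for the default inputs. One caveat: a true ROC AUC requires ranked prediction scores, not just confusion-matrix counts; the sketch instead computes balanced accuracy, (recall + specificity) / 2, which matches the value this page displays for the defaults.

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Compute common classification metrics from confusion-matrix counts."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    # MCC uses all four cells; guard against a zero denominator in degenerate cases.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {
        "accuracy": accuracy,
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "mcc": mcc,
        "balanced_accuracy": (recall + specificity) / 2,
    }

# Default inputs from this page: TP=85, TN=90, FP=10, FN=15
m = classification_metrics(tp=85, tn=90, fp=10, fn=15)
print(f"Accuracy:  {m['accuracy']:.2%}")   # 87.50%
print(f"Precision: {m['precision']:.2%}")  # 89.47%
print(f"Recall:    {m['recall']:.2%}")     # 85.00%
print(f"F1 Score:  {m['f1']:.2%}")         # 87.18%
print(f"MCC:       {m['mcc'] * 100:.2f}")  # 75.09
```

Note that the MCC is printed scaled by 100 here only to match the page's display; the coefficient itself lies in [−1, 1].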
Formula
F1 = 2 × (Precision × Recall) / (Precision + Recall)
Related Calculators
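As a quick check of the formula, plugging in the default counts (TP=85, FP=10, FN=15, so precision = 85/95 and recall = 85/100) reproduces the F1 value shown in the Results panel:

```python
# Precision and recall from the default confusion-matrix counts.
precision, recall = 85 / 95, 85 / 100
f1 = 2 * (precision * recall) / (precision + recall)
print(f"{f1:.2%}")  # 87.18%
```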
Neural Network Parameters Calculator
Calculate total parameters, weights, biases, and memory requirements for neural network architectures.
Gradient Descent Calculator
Calculate gradient descent parameters, convergence rate, effective learning rate, and training time estimates.
Cross-Validation Calculator
Calculate k-fold cross-validation splits, train-test splits, and data utilization for machine learning.