
scicloj.metamorph.ml.metrics

Excellent metrics tools from the cortex project.

accuracy (clj)

(accuracy y y_hat)

Calculates the proportion of correct predictions.

`y` - Array of true class labels
`y_hat` - Array of predicted class values

Returns the accuracy as a float in [0, 1], where 1.0 indicates perfect
classification. Computed as 1.0 minus the error rate, equivalent to the
number of correct predictions divided by total predictions.

Arrays must have the same shape.

See also: `error-rate`, `precision`, `recall`
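The relationship between `accuracy`, `error-rate`, and `wrongs` described above can be sketched in Python (an illustration of the formulas, not the library's implementation):

```python
def wrongs(y, y_hat):
    # 1.0 where the prediction differs from ground truth, 0.0 where it matches
    return [0.0 if a == b else 1.0 for a, b in zip(y, y_hat)]

def error_rate(y, y_hat):
    # misclassifications divided by total predictions
    return sum(wrongs(y, y_hat)) / len(y)

def accuracy(y, y_hat):
    # 1.0 minus the error rate
    return 1.0 - error_rate(y, y_hat)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```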

AIC (clj)

(AIC model y yhat feature-count)

Calculates the Akaike Information Criterion (AIC) for model selection.

`model` - Trained model map
`y` - Actual target values
`yhat` - Predicted values
`feature-count` - Number of features used in the model

Returns AIC = 2k - 2L, where k = 2 + p (parameters) and L is the log-likelihood.
Lower AIC values indicate better model fit with complexity penalty.

See also: `scicloj.metamorph.ml.metrics/BIC`, `scicloj.metamorph.ml/loglik`

all-metrics (clj)

(all-metrics labels predictions label->class-fn iou-fn iou-threshold)

Returns global and per-class metrics for a given set of labels and predictions.

- `label->class-fn` should take a label or prediction and return the class as a string or keyword.
- `iou-fn` should take a label and prediction and return the intersection-over-union score.
- `iou-threshold` determines what IoU value constitutes a matching bounding box.

NOTE: If labels and predictions are produced from a sequence of images,
ensure that the bounding boxes are shifted in each image so that they do not overlap.

BIC (clj)

(BIC model y yhat sample-size feature-count)

Calculates the Bayesian Information Criterion (BIC) for model selection.

`model` - Trained model map
`y` - Actual target values
`yhat` - Predicted values
`sample-size` - Number of samples in the dataset
`feature-count` - Number of features used in the model

Returns BIC = -2L + k*ln(n), where L is the log-likelihood, k = 2 + p (parameters),
and n is the sample size. Lower BIC values indicate better model fit. BIC penalizes
model complexity more heavily than AIC for larger sample sizes.

See also: `scicloj.metamorph.ml.metrics/AIC`, `scicloj.metamorph.ml/loglik`
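The AIC and BIC formulas above can be sketched in Python. This is an illustration only: it assumes a Gaussian log-likelihood for the residuals, whereas the library delegates to `scicloj.metamorph.ml/loglik`:

```python
import math

def gaussian_loglik(y, yhat):
    # log-likelihood of the residuals under a Gaussian model
    # (an assumption here; the library uses scicloj.metamorph.ml/loglik)
    n = len(y)
    rss = sum((a - b) ** 2 for a, b in zip(y, yhat))
    return -0.5 * n * (math.log(2 * math.pi) + math.log(rss / n) + 1)

def aic(y, yhat, feature_count):
    k = 2 + feature_count              # k = 2 + p, per the docstrings
    return 2 * k - 2 * gaussian_loglik(y, yhat)

def bic(y, yhat, sample_size, feature_count):
    k = 2 + feature_count
    return -2 * gaussian_loglik(y, yhat) + k * math.log(sample_size)
```

For a fixed number of parameters, a model with smaller residuals scores lower (better) on both criteria; BIC's `k*ln(n)` term penalizes complexity more heavily than AIC's `2k` once `n > e^2 ≈ 7.4`.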

eer-accuracy (clj)

(eer-accuracy y y_est)
(eer-accuracy y y_est bins)

Calculates accuracy at the equal error rate (EER) operating point.

`y` - Array of true binary labels (0 or 1)
`y_est` - Array of continuous estimated probabilities or scores
`bins` - Number of threshold discretization levels (default: 100)

Returns a map with:
- `:accuracy` - Classification accuracy at the EER threshold
- `:threshold` - The threshold value where TPR and FPR are balanced

EER accuracy is the standard metric in biometric systems (e.g., facial
recognition) where false accept and false reject errors are equally weighted.

See also: `equal-error-point`, `accuracy`

equal-error-point (clj)

(equal-error-point y y_est)
(equal-error-point y y_est bins)

Finds the classification threshold that minimizes the difference between FPR and (1 - TPR).

`y` - Array of true binary labels (0 or 1)
`y_est` - Array of continuous estimated probabilities or scores (normalized)
`bins` - Number of threshold discretization levels (default: 100)

Returns the threshold value where false positive rate and false negative rate
are approximately equal. This is the equal error rate (EER) operating point,
commonly used in biometric verification systems.

See also: `eer-accuracy`, `roc-curve`
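The threshold search described above — scan `bins + 1` candidate thresholds and keep the one minimizing `|FPR - (1 - TPR)|` — can be sketched in Python (illustrative, not the library's code):

```python
def equal_error_point(y, y_est, bins=100):
    # find the threshold where FPR and FNR (= 1 - TPR) are closest
    best_t, best_gap = 0.0, float("inf")
    for i in range(bins + 1):
        t = i / bins
        y_hat = [1 if s >= t else 0 for s in y_est]
        tp = sum(a == 1 and b == 1 for a, b in zip(y, y_hat))
        fp = sum(a == 0 and b == 1 for a, b in zip(y, y_hat))
        fn = sum(a == 1 and b == 0 for a, b in zip(y, y_hat))
        tn = sum(a == 0 and b == 0 for a, b in zip(y, y_hat))
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        gap = abs(fpr - (1 - tpr))
        if gap < best_gap:
            best_t, best_gap = t, gap
    return best_t

# Scores that cleanly separate the classes: the EER threshold
# lands between the highest negative (0.4) and lowest positive (0.6).
print(equal_error_point([0, 0, 1, 1], [0.1, 0.4, 0.6, 0.9]))
```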

error-rate (clj)

(error-rate y y_hat)

Calculates the proportion of incorrect predictions.

`y` - Array of true class labels
`y_hat` - Array of predicted class values

Returns the error rate as a float in [0, 1], where 0 indicates perfect
classification and 1 indicates all predictions are wrong. Computed as the
number of misclassifications divided by total predictions.

Arrays must have the same shape.

See also: `accuracy`, `wrongs`

false-negatives (clj)

(false-negatives y y_hat)

Identifies false negative predictions in binary classification.

`y` - Array of true binary labels (0 or 1)
`y_hat` - Array of predicted binary values (0 or 1)

Returns an array with 1.0 for false negatives (predicted 0, actual 1) and 0.0
elsewhere. Arrays must have the same shape.

False negatives are also known as Type II errors.

See also: `false-positives`, `true-positives`, `true-negatives`, `fnr`

false-positives (clj)

(false-positives y y_hat)

Identifies false positive predictions in binary classification.

`y` - Array of true binary labels (0 or 1)
`y_hat` - Array of predicted binary values (0 or 1)

Returns an array with 1.0 for false positives (predicted 1, actual 0) and 0.0
elsewhere. Arrays must have the same shape.

False positives are also known as Type I errors.

See also: `false-negatives`, `true-positives`, `true-negatives`, `fpr`

fnr (clj)

(fnr y y_hat)

Calculates false negative rate for binary classification.

`y` - Array of true binary labels (0 or 1)
`y_hat` - Array of predicted binary values (0 or 1)

Returns FNR as a double in [0, 1], computed as 1 minus the true positive rate.
Uses the strict ROC definition.

FNR measures the proportion of actual positives incorrectly classified as
negative. Lower values are better. Arrays must have the same shape.

See also: `tpr`, `false-negatives`, `recall`

fpr (clj)

(fpr y y_hat)

Calculates false positive rate for binary classification.

`y` - Array of true binary labels (0 or 1)
`y_hat` - Array of predicted binary values (0 or 1)

Returns FPR as a double in [0, 1], computed as false positives divided by
total actual negatives (FP / (FP + TN)). Uses the strict ROC definition.

FPR measures the proportion of actual negatives incorrectly classified as
positive. Lower values are better. Arrays must have the same shape.

See also: `tpr`, `false-positives`, `precision`

precision (clj)

(precision y y_hat)

Calculates precision (positive predictive value) for binary classification.

`y` - Array of true binary labels (0 or 1)
`y_hat` - Array of predicted binary values (0 or 1)

Returns precision as a float in [0, 1], computed as true positives divided by
total predicted positives (TP / (TP + FP)). Precision measures the proportion
of positive predictions that were correct.

High precision means few false positives. Arrays must have the same shape.

See also: `recall`, `fpr`

recall (clj)

(recall y y_hat)

Calculates recall (sensitivity, true positive rate) for binary classification.

`y` - Array of true binary labels (0 or 1)
`y_hat` - Array of predicted binary values (0 or 1)

Returns recall as a double in [0, 1], computed as true positives divided by
total actual positives (TP / (TP + FN)). Recall measures the proportion of
actual positive cases that were correctly identified.

High recall means few false negatives. Arrays must have the same shape.

Also known as sensitivity, hit rate, or true positive rate.

See also: `precision`, `tpr`, `fnr`
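The TP / (TP + FP) and TP / (TP + FN) formulas for precision and recall can be illustrated in Python (a sketch of the formulas, not the library's code):

```python
def counts(y, y_hat):
    # confusion counts for binary 0/1 labels
    tp = sum(a == 1 and b == 1 for a, b in zip(y, y_hat))
    fp = sum(a == 0 and b == 1 for a, b in zip(y, y_hat))
    fn = sum(a == 1 and b == 0 for a, b in zip(y, y_hat))
    return tp, fp, fn

def precision(y, y_hat):
    tp, fp, _ = counts(y, y_hat)
    return tp / (tp + fp)   # proportion of positive predictions that were correct

def recall(y, y_hat):
    tp, _, fn = counts(y, y_hat)
    return tp / (tp + fn)   # proportion of actual positives that were found

y, y_hat = [1, 1, 0, 0, 1], [1, 0, 1, 0, 1]
print(precision(y, y_hat))  # 2/3: two of three positive predictions correct
print(recall(y, y_hat))     # 2/3: two of three actual positives found
```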

roc-curve (clj)

(roc-curve y y_est)
(roc-curve y y_est bins)

Computes an ROC (Receiver Operating Characteristic) curve for binary classification.

`y` - Array of true binary labels (0 or 1)
`y_est` - Array of estimated probabilities or scores
`bins` - Number of threshold discretization levels (default: 100)

Returns a sequence of [fpr tpr threshold] triplets, de-duplicated to include
only boundary points where FPR or TPR changes. Thresholds range from 0.0 to 1.0.

The ROC curve visualizes the trade-off between true positive rate and false
positive rate across different classification thresholds.

Note: This is a basic implementation. Consider using dedicated libraries for
production ROC analysis.

See also: `tpr`, `fpr`, `threshold`, `equal-error-point`, `eer-accuracy`
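The de-duplicated sweep over thresholds described above can be sketched in Python (illustrative only — as the docstring notes, use a dedicated library for production ROC analysis):

```python
def roc_curve(y, y_est, bins=100):
    # (fpr, tpr, threshold) triplets, keeping only boundary points
    # where FPR or TPR changes
    points = []
    for i in range(bins + 1):
        t = i / bins
        y_hat = [1 if s >= t else 0 for s in y_est]
        tp = sum(a == 1 and b == 1 for a, b in zip(y, y_hat))
        fp = sum(a == 0 and b == 1 for a, b in zip(y, y_hat))
        fn = sum(a == 1 and b == 0 for a, b in zip(y, y_hat))
        tn = sum(a == 0 and b == 0 for a, b in zip(y, y_hat))
        fpr = fp / (fp + tn) if fp + tn else 0.0
        tpr = tp / (tp + fn) if tp + fn else 0.0
        if not points or (points[-1][0], points[-1][1]) != (fpr, tpr):
            points.append((fpr, tpr, t))
    return points

print(roc_curve([0, 1], [0.2, 0.8]))
# [(1.0, 1.0, 0.0), (0.0, 1.0, 0.21), (0.0, 0.0, 0.81)]
```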

threshold (clj)

(threshold y_est thresh)

Creates a binary mask by thresholding estimated probabilities.

`y_est` - Array of estimated probabilities or scores
`thresh` - Threshold value for binarization

Returns an array where values >= thresh are true/1 and values < thresh are false/0.

Used to convert probability outputs into binary predictions for ROC curve
analysis and threshold optimization.

See also: `roc-curve`, `equal-error-point`
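The binarization rule above is a one-liner; a Python sketch (not the library's code):

```python
def threshold(y_est, thresh):
    # binary mask: 1 where score >= thresh, else 0
    return [1 if s >= thresh else 0 for s in y_est]

print(threshold([0.2, 0.5, 0.9], 0.5))  # [0, 1, 1]
```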

tpr (clj)

(tpr y y_hat)

Calculates true positive rate (sensitivity, recall) for binary classification.

`y` - Array of true binary labels (0 or 1)
`y_hat` - Array of predicted binary values (0 or 1)

Returns TPR as a double in [0, 1], computed as true positives divided by
total actual positives (TP / (TP + FN)). Uses the strict ROC definition.

TPR measures the proportion of actual positive cases correctly classified
as positive. Higher values are better. Arrays must have the same shape.

Also known as recall or sensitivity.

See also: `fpr`, `fnr`, `recall`, `true-positives`

true-negatives (clj)

(true-negatives y y_hat)

Identifies true negative predictions in binary classification.

`y` - Array of true binary labels (0 or 1)
`y_hat` - Array of predicted binary values (0 or 1)

Returns an array with 1.0 for true negatives (predicted 0, actual 0) and 0.0
elsewhere. Arrays must have the same shape.

True negatives represent correctly identified negative cases.

See also: `true-positives`, `false-positives`, `false-negatives`

true-positives (clj)

(true-positives y y_hat)

Identifies true positive predictions in binary classification.

`y` - Array of true binary labels (0 or 1)
`y_hat` - Array of predicted binary values (0 or 1)

Returns an array with 1.0 for true positives (predicted 1, actual 1) and 0.0
elsewhere. Arrays must have the same shape.

True positives represent correctly identified positive cases.

See also: `true-negatives`, `false-positives`, `false-negatives`, `tpr`
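The four indicator functions (`true-positives`, `true-negatives`, `false-positives`, `false-negatives`) all follow the same element-wise pattern; a Python sketch of two of them (illustrative, not the library's code):

```python
def true_positives(y, y_hat):
    # 1.0 where predicted 1 and actual 1, 0.0 elsewhere
    return [1.0 if a == 1 and b == 1 else 0.0 for a, b in zip(y, y_hat)]

def false_positives(y, y_hat):
    # 1.0 where predicted 1 but actual 0 (Type I error), 0.0 elsewhere
    return [1.0 if a == 0 and b == 1 else 0.0 for a, b in zip(y, y_hat)]

y, y_hat = [1, 0, 1, 0], [1, 1, 0, 0]
print(true_positives(y, y_hat))   # [1.0, 0.0, 0.0, 0.0]
print(false_positives(y, y_hat))  # [0.0, 1.0, 0.0, 0.0]
```

Summing an indicator array gives the corresponding confusion-matrix count, which is how the rate metrics (`tpr`, `fpr`, etc.) are built up.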

unit-space (clj)

(unit-space divs)

Generates evenly-spaced values in the unit interval [0.0, 1.0].

`divs` - Number of divisions (bins) to create

Returns an array with `divs + 1` values evenly distributed from 0.0 to 1.0,
inclusive. For example, `divs=4` produces [0.0 0.25 0.5 0.75 1.0].

Used internally for generating threshold values in ROC curve computation.

See also: `roc-curve`
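The `divs + 1` evenly spaced values described above, as a Python sketch (not the library's code):

```python
def unit_space(divs):
    # divs + 1 evenly spaced values from 0.0 to 1.0, inclusive
    return [i / divs for i in range(divs + 1)]

print(unit_space(4))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```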

wrongs (clj)

(wrongs y y_hat)

Identifies incorrect predictions in binary classification.

`y` - Array of ground truth labels
`y_hat` - Array of classifier predictions

Returns an array with 1.0 where predictions don't match ground truth, 0.0 where
they match. Arrays must have the same shape.

Useful for computing error rates and analyzing misclassification patterns.

See also: `error-rate`, `accuracy`
