(classification-metric y-true y-pred metric averaging)(classification-metric y-true y-pred metric averaging options)
Calculates various classification metrics, supporting binary and multiclass data.
Returns a single float number.
* `y-true` A TMD dataset containing the truth
* `y-pred` A TMD dataset containing the prediction
* `metric` A keyword; supports any metric from https://generateme.github.io/fastmath/clay/stats.html#binary-classification-metrics as well as :roc-auc
* `averaging` How the mostly binary metrics get averaged; supports :macro and :micro
* `options` Options for the :metric-fn

Multi-label data is not supported so far.
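The difference between :macro and :micro averaging can be illustrated independently of this library. The following is a hypothetical multiclass precision computation in plain Python, not this library's API:

```python
from collections import Counter

def precision_per_class(y_true, y_pred):
    # true positives and predicted counts per class
    tp = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    pred = Counter(y_pred)
    return {c: tp[c] / pred[c] for c in pred}

def macro_precision(y_true, y_pred):
    # :macro -- average the per-class precisions, each class weighted equally
    per_class = precision_per_class(y_true, y_pred)
    return sum(per_class.values()) / len(per_class)

def micro_precision(y_true, y_pred):
    # :micro -- pool all decisions: global true positives / all predictions
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    return tp / len(y_pred)

y_true = ["a", "a", "b", "b", "c", "c"]
y_pred = ["a", "b", "b", "b", "c", "a"]
print(macro_precision(y_true, y_pred))  # (0.5 + 2/3 + 1.0) / 3 ≈ 0.722
print(micro_precision(y_true, y_pred))  # 4 / 6 ≈ 0.667
```

The two averages diverge whenever classes are imbalanced: :macro gives rare classes the same weight as frequent ones, while :micro is dominated by the frequent classes.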
Both datasets need to have columns containing the appropriate column metadata
as expected by TMD, see https://techascent.github.io/tech.ml.dataset/tech.v3.dataset.column-filters.html, e.g.:

* :column-type being :prediction or :probability-distribution
* :inference-target true
* :categorical-map column metadata is explicitly supported and handled properly when present, so it gets taken into consideration when comparing columns
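To see why the :categorical-map matters, consider columns stored as integer codes: two columns can only be compared after their codes are mapped back to labels, since each column may use a different code assignment. A hypothetical Python sketch, not this library's implementation:

```python
def decode(codes, categorical_map):
    # categorical_map maps label -> integer code; invert it to
    # translate the stored codes back into their labels
    code_to_label = {v: k for k, v in categorical_map.items()}
    return [code_to_label[c] for c in codes]

# the two columns use *different* code assignments for the same labels
true_codes, true_map = [0, 1, 1], {"cat": 0, "dog": 1}
pred_codes, pred_map = [1, 0, 0], {"cat": 1, "dog": 0}

# comparing raw codes would report zero matches, but the
# decoded labels show perfect agreement
labels_true = decode(true_codes, true_map)
labels_pred = decode(pred_codes, pred_map)
print(labels_true == labels_pred)  # True
```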
The `ml/predict` fn produces this type of dataset.
The function validates various aspects and possibly rejects data which has:
* wrong column metadata
* missing values or NaNs
* non-discrete values in the :prediction column
* non-uniform datatypes
* multi-label data (having > 1 :inference-target column)
* mismatch in shape between `y-true` and `y-pred`
* others

This might depend on the concrete metric-fn used.
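The checks above could be sketched roughly like this. This is hypothetical Python over plain sequences, not the actual implementation, which operates on TMD datasets and their metadata:

```python
import math

def validate_predictions(y_true, y_pred):
    # mismatch in shape between y-true and y-pred
    if len(y_true) != len(y_pred):
        raise ValueError("mismatch in shape between y-true and y-pred")
    # missing values or NaNs
    for v in list(y_true) + list(y_pred):
        if v is None or (isinstance(v, float) and math.isnan(v)):
            raise ValueError("missing values or NaNs")
    # non-uniform datatypes
    if len({type(v) for v in y_pred}) > 1:
        raise ValueError("non-uniform datatypes")
    # non-discrete values in the prediction column (classification case)
    if any(isinstance(v, float) and not v.is_integer() for v in y_pred):
        raise ValueError("non-discrete values in :prediction column")
    return True

print(validate_predictions([0, 1, 1], [0, 1, 0]))  # True
```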
(insist x)(insist x message)
Evaluates expression x and throws an AssertionError with optional message if x does not evaluate to logical true.
Assertion checks are omitted from compiled code if '*assert*' is false.
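The behavior parallels Clojure's built-in `assert`: the check disappears when the `*assert*` dynamic var is false, much like Python's `assert` is stripped under the `-O` flag. A rough Python analogue (hypothetical, not the actual implementation):

```python
def insist(x, message=None):
    # __debug__ plays the role of Clojure's *assert* dynamic var:
    # it is False when Python runs with -O, so the check is skipped
    if __debug__ and not x:
        raise AssertionError(message if message is not None
                             else f"{x!r} did not evaluate to logical true")

insist(1 < 2)               # passes silently
# insist(1 > 2, "broken")   # would raise AssertionError: broken
```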
(regression-metric y-true y-pred metric-fn)
Calculates various regression metrics and returns a single float number.

* `y-true` A TMD dataset containing the truth
* `y-pred` A TMD dataset containing the prediction
* `metric` A keyword; supports any metric from https://generateme.github.io/fastmath/clay/stats.html#distance-and-similarity-metrics

Both datasets need to have columns containing the appropriate column metadata
as expected by TMD, see https://techascent.github.io/tech.ml.dataset/tech.v3.dataset.column-filters.html, e.g.:

* :column-type being :prediction
* :inference-target true

The `ml/predict` fn produces this type of dataset.
The function validates various aspects and possibly rejects data which has:

* wrong column metadata
* missing values or NaNs
* non-continuous values in the :prediction column
* non-uniform datatypes
* multi-label data (having > 1 :inference-target column)
* mismatch in shape between `y-true` and `y-pred`
* others

This might depend on the concrete metric-fn used.
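As an illustration of what such a metric computes over two aligned prediction columns, here are mean absolute error and root mean squared error in plain Python, independent of the fastmath implementations the function actually dispatches to:

```python
import math

def mae(y_true, y_pred):
    # mean absolute error: average of |truth - prediction|
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # root mean squared error: penalizes large deviations more strongly
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.5, 5.0, 4.0, 8.0]
print(mae(y_true, y_pred))   # (0.5 + 0 + 1.5 + 1.0) / 4 = 0.75
print(rmse(y_true, y_pred))  # sqrt((0.25 + 0 + 2.25 + 1.0) / 4) ≈ 0.935
```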