
zero-one.geni.ml.regression


aft-survival-regression (clj)

(aft-survival-regression params)

Fit a parametric survival regression model, the accelerated failure time (AFT) model (see Accelerated failure time model (Wikipedia)), based on the Weibull distribution of the survival time.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/AFTSurvivalRegression.html

Timestamp: 2020-10-19T01:55:51.453Z
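A minimal usage sketch. The kebab-case param-map keys and the ml/fit helper follow the conventions in Geni's README and are assumptions here, not part of this docstring; the column names are illustrative.

```clojure
(require '[zero-one.geni.ml :as ml])

;; AFT survival regression expects a censor column (1.0 = event occurred,
;; 0.0 = censored) alongside the usual features and label columns.
(def aft
  (ml/aft-survival-regression
    {:censor-col             "censor"
     :quantile-probabilities [0.3 0.6]
     :quantiles-col          "quantiles"}))

;; Fitting assumes a Spark session and a dataset with "features",
;; "label", and "censor" columns:
;; (def model (ml/fit survival-df aft))
```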


decision-tree-regressor (clj)

(decision-tree-regressor params)

Decision tree learning algorithm for regression. It supports both continuous and categorical features.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/DecisionTreeRegressor.html

Timestamp: 2020-10-19T01:55:52.001Z

Decision tree
learning algorithm for regression.
It supports both continuous and categorical features.


Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/DecisionTreeRegressor.html

Timestamp: 2020-10-19T01:55:52.001Z
sourceraw docstring

fm-regressor (clj)

(fm-regressor params)

Factorization Machines learning algorithm for regression. It supports normal gradient descent and the AdamW solver.

The implementation is based upon:

S. Rendle. "Factorization machines" 2010.

FM is able to estimate interactions even in problems with huge sparsity (such as advertising and recommender systems). The FM formula (reconstructed here from the linked Spark documentation) is:

$$\hat{y}(x) = w_0 + \sum_{i=1}^{d} w_i x_i + \sum_{i=1}^{d}\sum_{j=i+1}^{d} \langle v_i, v_j \rangle\, x_i x_j$$

The FM regression model uses MSE loss, which can be minimized by gradient descent; regularization terms such as L2 are usually added to the loss function to prevent overfitting.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/FMRegressor.html

Timestamp: 2020-10-19T01:55:52.555Z
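A hedged sketch of how this constructor might be used. The param-map keys mirror Spark's FMRegressor params in Geni's kebab-case convention and are assumptions, not taken from this docstring.

```clojure
(require '[zero-one.geni.ml :as ml])

(def fm
  (ml/fm-regressor {:factor-size 8        ; dimensionality of the factor vectors v_i
                    :step-size   0.01     ; initial learning rate
                    :reg-param   0.01     ; L2 regularization strength
                    :solver      "adamW"}))

;; Fitting assumes a dataset with "features" and "label" columns:
;; (def model (ml/fit training-df fm))
```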

Factorization Machines learning algorithm for regression.
It supports normal gradient descent and AdamW solver.

The implementation is based upon:

S. Rendle. "Factorization machines" 2010.

FM is able to estimate interactions even in problems with huge sparsity
(like advertising and recommendation system).
FM formula is:


FM regression model uses MSE loss which can be solved by gradient descent method, and
regularization terms like L2 are usually added to the loss function to prevent overfitting.


Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/FMRegressor.html

Timestamp: 2020-10-19T01:55:52.555Z
sourceraw docstring

gbt-regressor (clj)

(gbt-regressor params)

Gradient-Boosted Trees (GBTs) learning algorithm for regression. It supports both continuous and categorical features.

The implementation is based upon: J.H. Friedman. "Stochastic Gradient Boosting." 1999.

For notes on Gradient Boosting vs. TreeBoost, see the linked Spark documentation.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/GBTRegressor.html

Timestamp: 2020-10-19T01:55:53.108Z

Gradient-Boosted Trees (GBTs)
learning algorithm for regression.
It supports both continuous and categorical features.

The implementation is based upon: J.H. Friedman. "Stochastic Gradient Boosting." 1999.

Notes on Gradient Boosting vs. TreeBoost:

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/GBTRegressor.html

Timestamp: 2020-10-19T01:55:53.108Z
sourceraw docstring

generalised-linear-regression (clj)

(generalised-linear-regression params)

Fit a Generalized Linear Model (see Generalized linear model (Wikipedia)) specified by giving a symbolic description of the linear predictor (link function) and a description of the error distribution (family). It supports "gaussian", "binomial", "poisson", "gamma" and "tweedie" as family. Valid link functions for each family are listed in the linked Spark documentation; the first link function of each family is the default one.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.html

Timestamp: 2020-10-19T01:55:53.908Z
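A hedged sketch of picking a family/link pair. The kebab-case param keys and the ml/fit helper are assumed from Geni's README conventions.

```clojure
(require '[zero-one.geni.ml :as ml])

;; Poisson regression with a log link, a common choice for count data.
(def glm
  (ml/generalised-linear-regression
    {:family    "poisson"
     :link      "log"
     :max-iter  25
     :reg-param 0.01}))

;; Fitting assumes a dataset with "features" and "label" columns:
;; (def model (ml/fit counts-df glm))
```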

Fit a Generalized Linear Model
(see 
Generalized linear model (Wikipedia))
specified by giving a symbolic description of the linear
predictor (link function) and a description of the error distribution (family).
It supports "gaussian", "binomial", "poisson", "gamma" and "tweedie" as family.
Valid link functions for each family is listed below. The first link function of each family
is the default one.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.html

Timestamp: 2020-10-19T01:55:53.908Z
sourceraw docstring

generalized-linear-regression (clj)

(generalized-linear-regression params)

Fit a Generalized Linear Model (see Generalized linear model (Wikipedia)) specified by giving a symbolic description of the linear predictor (link function) and a description of the error distribution (family). It supports "gaussian", "binomial", "poisson", "gamma" and "tweedie" as family. Valid link functions for each family are listed in the linked Spark documentation; the first link function of each family is the default one.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.html

Timestamp: 2020-10-19T01:55:53.908Z

Fit a Generalized Linear Model
(see 
Generalized linear model (Wikipedia))
specified by giving a symbolic description of the linear
predictor (link function) and a description of the error distribution (family).
It supports "gaussian", "binomial", "poisson", "gamma" and "tweedie" as family.
Valid link functions for each family is listed below. The first link function of each family
is the default one.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.html

Timestamp: 2020-10-19T01:55:53.908Z
sourceraw docstring

glm (clj)

(glm params)

Fit a Generalized Linear Model (see Generalized linear model (Wikipedia)) specified by giving a symbolic description of the linear predictor (link function) and a description of the error distribution (family). It supports "gaussian", "binomial", "poisson", "gamma" and "tweedie" as family. Valid link functions for each family are listed in the linked Spark documentation; the first link function of each family is the default one.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.html

Timestamp: 2020-10-19T01:55:53.908Z

Fit a Generalized Linear Model
(see 
Generalized linear model (Wikipedia))
specified by giving a symbolic description of the linear
predictor (link function) and a description of the error distribution (family).
It supports "gaussian", "binomial", "poisson", "gamma" and "tweedie" as family.
Valid link functions for each family is listed below. The first link function of each family
is the default one.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.html

Timestamp: 2020-10-19T01:55:53.908Z
sourceraw docstring

isotonic-regression (clj)

(isotonic-regression params)

Isotonic regression.

Currently implemented using the parallelized pool adjacent violators algorithm. Only the univariate (single feature) algorithm is supported.

Uses org.apache.spark.mllib.regression.IsotonicRegression.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/IsotonicRegression.html

Timestamp: 2020-10-19T01:55:54.264Z

Isotonic regression.

Currently implemented using parallelized pool adjacent violators algorithm.
Only univariate (single feature) algorithm supported.

Uses org.apache.spark.mllib.regression.IsotonicRegression.


Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/IsotonicRegression.html

Timestamp: 2020-10-19T01:55:54.264Z
sourceraw docstring

linear-regression (clj)

(linear-regression params)

Linear regression.

The learning objective is to minimize the specified loss function, with regularization. This supports two kinds of loss: squaredError (the default) and huber.

This supports multiple types of regularization: none (ordinary least squares), L2 (ridge regression), L1 (Lasso), and L2 + L1 (elastic net).

The squared error objective function (reconstructed here from the linked Spark documentation) is:

$$\min_{w}\; \frac{1}{2n} \sum_{i=1}^{n} \bigl(w^{\top} x_i - y_i\bigr)^2 + \lambda\, R(w)$$

The huber objective function is:

$$\min_{w,\sigma}\; \frac{1}{2n} \sum_{i=1}^{n} \Bigl(\sigma + H_\epsilon\Bigl(\frac{w^{\top} x_i - y_i}{\sigma}\Bigr)\sigma\Bigr) + \frac{1}{2}\lambda \lVert w \rVert_2^2$$

where

$$H_\epsilon(z) = \begin{cases} z^2, & \text{if } |z| < \epsilon \\ 2\epsilon |z| - \epsilon^2, & \text{otherwise} \end{cases}$$

Note: Fitting with huber loss only supports none and L2 regularization.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/LinearRegression.html

Timestamp: 2020-10-19T01:55:54.848Z
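A sketch of the full fit/predict cycle, assuming ml/fit and ml/transform as in Geni's README and a dataset that already has "features" and "label" columns; the param keys are assumptions in Geni's kebab-case convention.

```clojure
(require '[zero-one.geni.core :as g]
         '[zero-one.geni.ml   :as ml])

(def lr
  (ml/linear-regression {:max-iter          100
                         :reg-param         0.01
                         :elastic-net-param 0.5}))  ; mix of L1 and L2

;; (def model (ml/fit training-df lr))
;; (-> test-df
;;     (ml/transform model)
;;     (g/select "label" "prediction")
;;     g/show)
```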

Linear regression.

The learning objective is to minimize the specified loss function, with regularization.
This supports two kinds of loss:

This supports multiple types of regularization:

The squared error objective function is:



The huber objective function is:



where



Note: Fitting with huber loss only supports none and L2 regularization.


Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/LinearRegression.html

Timestamp: 2020-10-19T01:55:54.848Z
sourceraw docstring

random-forest-regressor (clj)

(random-forest-regressor params)

Random Forest learning algorithm for regression. It supports both continuous and categorical features.

Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/RandomForestRegressor.html

Timestamp: 2020-10-19T01:55:55.394Z

Random Forest
learning algorithm for regression.
It supports both continuous and categorical features.


Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/RandomForestRegressor.html

Timestamp: 2020-10-19T01:55:55.394Z
sourceraw docstring
