(aft-survival-regression params)
Fit a parametric survival regression model, the accelerated failure time (AFT) model (see Accelerated failure time model (Wikipedia)), based on the Weibull distribution of the survival time.
Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/AFTSurvivalRegression.html
Timestamp: 2020-10-19T01:55:51.453Z
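For context, a minimal Scala sketch against the Spark 3.0.1 AFTSurvivalRegression estimator that the Source link documents. The toy survival data and column names are invented, and it is assumed (not confirmed here) that the wrapper's params map mirrors the same parameters (censorCol, quantileProbabilities, quantilesCol):

```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.regression.AFTSurvivalRegression
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local[*]").appName("aft-demo").getOrCreate()
import spark.implicits._

// Toy survival data: label = observed time, censor = 1.0 if the event occurred, 0.0 if censored.
val training = Seq(
  (1.218, 1.0, Vectors.dense(1.560, -0.605)),
  (2.949, 0.0, Vectors.dense(0.346,  2.158)),
  (3.627, 0.0, Vectors.dense(1.380,  0.231)),
  (0.273, 1.0, Vectors.dense(0.520,  1.151)),
  (4.199, 0.0, Vectors.dense(0.795, -0.226))
).toDF("label", "censor", "features")

val aft = new AFTSurvivalRegression()
  .setCensorCol("censor")
  .setQuantileProbabilities(Array(0.3, 0.6))
  .setQuantilesCol("quantiles")

val model = aft.fit(training)
println(s"coefficients=${model.coefficients} intercept=${model.intercept} scale=${model.scale}")
model.transform(training).show(truncate = false)
```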
(decision-tree-regressor params)
Decision tree learning algorithm for regression. It supports both continuous and categorical features.
Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/DecisionTreeRegressor.html
Timestamp: 2020-10-19T01:55:52.001Z
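A minimal Scala sketch against the Spark 3.0.1 DecisionTreeRegressor documented at the Source link; the toy data is invented and the parameter choices are illustrative only:

```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.regression.DecisionTreeRegressor
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local[*]").appName("dt-demo").getOrCreate()
import spark.implicits._

// Toy regression data: (feature vector, label).
val training = Seq(
  (Vectors.dense(0.0, 1.0), 1.0),
  (Vectors.dense(1.0, 0.0), 2.0),
  (Vectors.dense(2.0, 1.0), 3.0),
  (Vectors.dense(3.0, 0.0), 4.0)
).toDF("features", "label")

val dt = new DecisionTreeRegressor()
  .setMaxDepth(3)            // limit tree depth
  .setMinInstancesPerNode(1)

val model = dt.fit(training)
println(model.toDebugString)  // the learned tree structure
model.transform(training).select("features", "label", "prediction").show()
```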
(fm-regressor params)
Factorization Machines learning algorithm for regression. It supports standard gradient descent and the AdamW solver.
The implementation is based upon:
S. Rendle. "Factorization machines" 2010.
FM is able to estimate interactions even in problems with huge sparsity (like advertising and recommendation systems). The FM formula is:
$$\hat{y} = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle v_i, v_j \rangle x_i x_j$$
The FM regression model uses MSE loss, which can be minimized by gradient descent, and regularization terms such as L2 are usually added to the loss function to prevent overfitting.
Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/FMRegressor.html
Timestamp: 2020-10-19T01:55:52.555Z
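A minimal Scala sketch against the Spark 3.0.1 FMRegressor documented at the Source link. The toy data is invented (real FM use cases have very high-dimensional sparse features), and factorSize, solver, and the step size are illustrative values only:

```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.regression.FMRegressor
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local[*]").appName("fm-demo").getOrCreate()
import spark.implicits._

val training = Seq(
  (Vectors.dense(1.0, 0.0, 0.0), 1.0),
  (Vectors.dense(0.0, 1.0, 0.0), 2.0),
  (Vectors.dense(0.0, 0.0, 1.0), 3.0),
  (Vectors.dense(1.0, 1.0, 0.0), 2.5)
).toDF("features", "label")

val fm = new FMRegressor()
  .setFactorSize(4)     // dimensionality of the pairwise-interaction factors v_i
  .setSolver("adamW")   // "gd" (plain gradient descent) is also supported
  .setStepSize(0.01)
  .setMaxIter(50)

val model = fm.fit(training)
println(s"intercept=${model.intercept} linear=${model.linear} factors=${model.factors}")
model.transform(training).show()
```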
(gbt-regressor params)
Gradient-Boosted Trees (GBTs) learning algorithm for regression. It supports both continuous and categorical features.
The implementation is based upon: J.H. Friedman. "Stochastic Gradient Boosting." 1999.
Notes on Gradient Boosting vs. TreeBoost:
- This implementation is for Stochastic Gradient Boosting, not for TreeBoost.
- Both algorithms learn tree ensembles by minimizing loss functions.
- TreeBoost (Friedman, 1999) additionally modifies the outputs at tree leaf nodes based on the loss function, whereas the original gradient boosting method does not.
- When the loss is SquaredError, these methods give the same result, but they could differ for other loss functions.
Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/GBTRegressor.html
Timestamp: 2020-10-19T01:55:53.108Z
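A minimal Scala sketch against the Spark 3.0.1 GBTRegressor documented at the Source link; the toy data is invented and the boosting parameters are illustrative only:

```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.regression.GBTRegressor
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local[*]").appName("gbt-demo").getOrCreate()
import spark.implicits._

val training = Seq(
  (Vectors.dense(0.0, 1.0), 1.0),
  (Vectors.dense(1.0, 0.0), 2.0),
  (Vectors.dense(2.0, 1.0), 3.0),
  (Vectors.dense(3.0, 0.0), 4.0)
).toDF("features", "label")

val gbt = new GBTRegressor()
  .setMaxIter(20)          // number of boosting stages (trees)
  .setMaxDepth(3)
  .setStepSize(0.1)        // shrinkage / learning rate
  .setLossType("squared")  // "absolute" is also supported

val model = gbt.fit(training)
println(s"numTrees=${model.trees.length} featureImportances=${model.featureImportances}")
model.transform(training).select("features", "label", "prediction").show()
```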
(generalised-linear-regression params)
Fit a Generalized Linear Model (see Generalized linear model (Wikipedia)) specified by giving a symbolic description of the linear predictor (link function) and a description of the error distribution (family). It supports "gaussian", "binomial", "poisson", "gamma" and "tweedie" as family. Valid link functions for each family are listed below; the first link function of each family is the default one.
- "gaussian": "identity", "log", "inverse"
- "binomial": "logit", "probit", "cloglog"
- "poisson": "log", "identity", "sqrt"
- "gamma": "inverse", "identity", "log"
- "tweedie": power link function specified through "linkPower" (the default link power in the tweedie family is 1 - variancePower)
Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.html
Timestamp: 2020-10-19T01:55:53.908Z
(generalized-linear-regression params)
Fit a Generalized Linear Model (see Generalized linear model (Wikipedia)) specified by giving a symbolic description of the linear predictor (link function) and a description of the error distribution (family). It supports "gaussian", "binomial", "poisson", "gamma" and "tweedie" as family. Valid link functions for each family are listed below; the first link function of each family is the default one.
- "gaussian": "identity", "log", "inverse"
- "binomial": "logit", "probit", "cloglog"
- "poisson": "log", "identity", "sqrt"
- "gamma": "inverse", "identity", "log"
- "tweedie": power link function specified through "linkPower" (the default link power in the tweedie family is 1 - variancePower)
Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.html
Timestamp: 2020-10-19T01:55:53.908Z
(glm params)
Fit a Generalized Linear Model (see Generalized linear model (Wikipedia)) specified by giving a symbolic description of the linear predictor (link function) and a description of the error distribution (family). It supports "gaussian", "binomial", "poisson", "gamma" and "tweedie" as family. Valid link functions for each family are listed below; the first link function of each family is the default one.
- "gaussian": "identity", "log", "inverse"
- "binomial": "logit", "probit", "cloglog"
- "poisson": "log", "identity", "sqrt"
- "gamma": "inverse", "identity", "log"
- "tweedie": power link function specified through "linkPower" (the default link power in the tweedie family is 1 - variancePower)
Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.html
Timestamp: 2020-10-19T01:55:53.908Z
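generalised-linear-regression, generalized-linear-regression, and glm share the same docstring and appear to map to the same Spark estimator, so a single sketch covers all three. A minimal Scala example against the Spark 3.0.1 GeneralizedLinearRegression documented at the Source link; the toy data is invented and the family, link, and regularization values are illustrative only:

```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.regression.GeneralizedLinearRegression
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local[*]").appName("glm-demo").getOrCreate()
import spark.implicits._

// Toy data with a strictly positive label, suitable for a gamma family.
val training = Seq(
  (Vectors.dense(1.0, 2.0), 1.5),
  (Vectors.dense(2.0, 1.0), 2.3),
  (Vectors.dense(3.0, 0.5), 3.1),
  (Vectors.dense(4.0, 2.5), 4.8)
).toDF("features", "label")

val glr = new GeneralizedLinearRegression()
  .setFamily("gamma")  // error distribution
  .setLink("log")      // link function; must be valid for the chosen family
  .setMaxIter(10)
  .setRegParam(0.3)

val model = glr.fit(training)
println(s"coefficients=${model.coefficients} intercept=${model.intercept}")
val summary = model.summary
println(s"AIC=${summary.aic} dispersion=${summary.dispersion}")
```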
(isotonic-regression params)
Isotonic regression.
Currently implemented using a parallelized pool-adjacent-violators algorithm. Only the univariate (single feature) algorithm is supported.
Uses org.apache.spark.mllib.regression.IsotonicRegression.
Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/IsotonicRegression.html
Timestamp: 2020-10-19T01:55:54.264Z
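A minimal Scala sketch against the Spark 3.0.1 IsotonicRegression documented at the Source link; the single-feature toy data is invented:

```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.regression.IsotonicRegression
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local[*]").appName("iso-demo").getOrCreate()
import spark.implicits._

// Univariate data: one feature per row; the fitted function is non-decreasing in that feature.
val training = Seq(
  (Vectors.dense(1.0), 0.5),
  (Vectors.dense(2.0), 0.4),
  (Vectors.dense(3.0), 0.9),
  (Vectors.dense(4.0), 1.1),
  (Vectors.dense(5.0), 1.0),
  (Vectors.dense(6.0), 1.6)
).toDF("features", "label")

val iso = new IsotonicRegression().setIsotonic(true)  // false fits an antitonic (decreasing) function

val model = iso.fit(training)
println(s"boundaries=${model.boundaries}")
println(s"predictions=${model.predictions}")
model.transform(training).show()
```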
(linear-regression params)
Linear regression.
The learning objective is to minimize the specified loss function, with regularization. This supports two kinds of loss:
- squaredError, a.k.a. squared loss
- huber, a hybrid of squared error for relatively small errors and absolute error for relatively large ones (the scale parameter is estimated from the training data)
This supports multiple types of regularization:
- none (a.k.a. ordinary least squares)
- L2 (ridge regression)
- L1 (Lasso)
- L2 + L1 (elastic net)
The squared error objective function is:
$$\min_{w} \frac{1}{2n} \sum_{i=1}^{n} (X_i w - y_i)^2 + \lambda \left[ \frac{1-\alpha}{2} \|w\|_2^2 + \alpha \|w\|_1 \right]$$
The huber objective function is:
$$\min_{w,\sigma} \frac{1}{2n} \sum_{i=1}^{n} \left( \sigma + H_m\!\left(\frac{X_i w - y_i}{\sigma}\right)\sigma \right) + \frac{1}{2}\lambda \|w\|_2^2$$
where
$$H_m(z) = \begin{cases} z^2, & \text{if } |z| < \epsilon, \\ 2\epsilon|z| - \epsilon^2, & \text{otherwise} \end{cases}$$
Note: Fitting with huber loss only supports none and L2 regularization.
Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/LinearRegression.html
Timestamp: 2020-10-19T01:55:54.848Z
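A minimal Scala sketch against the Spark 3.0.1 LinearRegression documented at the Source link, using squared error with elastic-net regularization; regParam plays the role of lambda and elasticNetParam the role of alpha in the objective above. The toy data and parameter values are invented:

```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local[*]").appName("lr-demo").getOrCreate()
import spark.implicits._

val training = Seq(
  (Vectors.dense(0.0, 1.0), 1.0),
  (Vectors.dense(1.0, 0.5), 2.1),
  (Vectors.dense(2.0, 1.5), 2.9),
  (Vectors.dense(3.0, 0.0), 4.2)
).toDF("features", "label")

val lr = new LinearRegression()
  .setLoss("squaredError")  // "huber" only allows none/L2 regularization
  .setMaxIter(20)
  .setRegParam(0.3)         // lambda
  .setElasticNetParam(0.8)  // alpha: 0 = pure L2, 1 = pure L1

val model = lr.fit(training)
println(s"coefficients=${model.coefficients} intercept=${model.intercept}")
val summary = model.summary
println(s"RMSE=${summary.rootMeanSquaredError} r2=${summary.r2}")
```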
(random-forest-regressor params)
Random Forest learning algorithm for regression. It supports both continuous and categorical features.
Source: https://spark.apache.org/docs/3.0.1/api/scala/org/apache/spark/ml/regression/RandomForestRegressor.html
Timestamp: 2020-10-19T01:55:55.394Z
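As with the other tree learners, a minimal Scala sketch against the Spark 3.0.1 RandomForestRegressor documented at the Source link; the toy data is invented and the parameter choices are illustrative only:

```scala
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.regression.RandomForestRegressor
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local[*]").appName("rf-demo").getOrCreate()
import spark.implicits._

val training = Seq(
  (Vectors.dense(0.0, 1.0), 1.0),
  (Vectors.dense(1.0, 0.0), 2.0),
  (Vectors.dense(2.0, 1.0), 3.0),
  (Vectors.dense(3.0, 0.0), 4.0)
).toDF("features", "label")

val rf = new RandomForestRegressor()
  .setNumTrees(20)                       // number of trees in the forest
  .setMaxDepth(4)
  .setSubsamplingRate(0.8)               // fraction of data sampled per tree
  .setFeatureSubsetStrategy("onethird")  // features considered per split

val model = rf.fit(training)
println(s"featureImportances=${model.featureImportances}")
model.transform(training).select("features", "label", "prediction").show()
```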