
Gaussian Process Regression

Gaussian process regression models (kriging)

Regression Learner - Train regression models to predict data using supervised machine learning

Functions

fitrgp - Fit a Gaussian process regression (GPR) model
predict - Predict response of Gaussian process regression model
loss - Regression error for Gaussian process regression model
compact - Create compact Gaussian process regression model
crossval - Cross-validate Gaussian process regression model
plotPartialDependence - Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots
postFitStatistics - Compute post-fit statistics for the exact Gaussian process regression model
resubLoss - Resubstitution loss for a trained Gaussian process regression model
resubPredict - Resubstitution prediction from a trained Gaussian process regression model

Classes

RegressionGP - Gaussian process regression model class
CompactRegressionGP - Compact Gaussian process regression model class

Topics

Gaussian Process Regression Models

Gaussian process regression (GPR) models are nonparametric, kernel-based probabilistic models.
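As a minimal sketch of the basic workflow, fitrgp fits a GPR model and predict returns the predicted response along with its standard deviation; the synthetic 1-D data set below is an illustrative assumption, not part of the toolbox:

```matlab
% Synthetic 1-D regression data (illustrative assumption)
rng(0);                              % for reproducibility
x = linspace(0, 10, 50)';            % predictor values
y = sin(x) + 0.2*randn(50, 1);       % noisy response

gprMdl = fitrgp(x, y);               % fit a GPR model with default options
[ypred, ysd] = predict(gprMdl, x);   % predictions and standard deviations

L = resubLoss(gprMdl);               % resubstitution mean squared error
```

The standard deviations returned by predict quantify the model's pointwise predictive uncertainty, which is one of the main practical benefits of the probabilistic GPR framework.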

Kernel (Covariance) Function Options

In Gaussian processes, the covariance function expresses the expectation that points with similar predictor values will have similar response values.
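For example, the widely used squared exponential kernel encodes exactly this expectation, with covariance decaying as the predictor points move apart (notation follows the standard form; the hyperparameters are the signal standard deviation and the length scale):

```latex
k(x_i, x_j \mid \theta) = \sigma_f^2 \exp\!\left( -\frac{(x_i - x_j)^\top (x_i - x_j)}{2\,\sigma_l^2} \right)
```

Here \(\sigma_f\) is the signal standard deviation and \(\sigma_l\) is the characteristic length scale; nearby points (small \(\|x_i - x_j\|\)) get covariance close to \(\sigma_f^2\), and distant points get covariance close to zero.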

Exact GPR Method

Learn about parameter estimation and prediction in the exact GPR method.
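In the standard form of the exact method (ignoring any explicit basis term), the predictive mean at a new point \(x_*\) is obtained from the training data \((X, y)\), the kernel \(k\), and the noise variance \(\sigma^2\):

```latex
\hat{y}(x_*) = K(x_*, X)\,\bigl[ K(X, X) + \sigma^2 I \bigr]^{-1} y
```

The \(n \times n\) matrix inverse is what makes exact GPR expensive for large \(n\), and it motivates the approximation methods described below.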

Subset of Data Approximation for GPR Models

With large data sets, the subset of data approximation method can greatly reduce the time required to train a Gaussian process regression model.
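A minimal sketch of selecting this method in fitrgp via the 'FitMethod','sd' option; the data set and active set size below are illustrative assumptions:

```matlab
% Larger synthetic data set (illustrative assumption)
rng(0);
X = rand(2000, 3);
y = sum(sin(X), 2) + 0.1*randn(2000, 1);

% Estimate parameters using a subset of 200 of the 2000 observations
gprMdl = fitrgp(X, y, 'FitMethod', 'sd', 'ActiveSetSize', 200);
ypred = predict(gprMdl, X(1:5, :));
```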

Subset of Regressors Approximation for GPR Models

The subset of regressors approximation method replaces the exact kernel function with an approximation.
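In its standard form, the SR approximation is built from an active set \(A\) of the training points (a sketch of the usual low-rank construction, using the same kernel notation as above):

```latex
\hat{k}_{\mathrm{SR}}(x_i, x_j) = k(x_i, A)\, K(A, A)^{-1}\, k(A, x_j)
```

Because \(\hat{k}_{\mathrm{SR}}\) has rank at most \(|A|\), training and prediction scale with the active set size rather than the full data set size.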

Fully Independent Conditional Approximation for GPR Models

The fully independent conditional (FIC) approximation systematically approximates the true GPR kernel function in a way that avoids the predictive variance problem of the SR approximation while still maintaining a valid Gaussian process.
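In its standard form, FIC keeps the SR approximation off the diagonal but restores the exact kernel values on the diagonal, which is what repairs the SR predictive variances (a sketch, with \(\delta_{ij}\) the Kronecker delta):

```latex
\hat{k}_{\mathrm{FIC}}(x_i, x_j) = \hat{k}_{\mathrm{SR}}(x_i, x_j) + \delta_{ij}\,\bigl( k(x_i, x_j) - \hat{k}_{\mathrm{SR}}(x_i, x_j) \bigr)
```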

Block Coordinate Descent Approximation for GPR Models

Block coordinate descent approximation is another approximation method used to reduce computation time with large data sets.
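A minimal sketch of selecting BCD for prediction via the 'PredictMethod','bcd' option in fitrgp; the data set and block size below are illustrative assumptions:

```matlab
% Synthetic data set (illustrative assumption)
rng(0);
X = rand(5000, 2);
y = X(:,1).^2 + cos(X(:,2)) + 0.1*randn(5000, 1);

% Use block coordinate descent for prediction, with blocks of 1000 points
gprMdl = fitrgp(X, y, 'PredictMethod', 'bcd', 'BlockSizeBCD', 1000);
ypred = predict(gprMdl, X(1:10, :));
```

Note that with the 'bcd' prediction method, predict returns point predictions only; prediction standard deviations and intervals are not available.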