
Perform predictor variable selection for Bayesian linear regression models

To estimate the posterior distribution of a standard Bayesian linear regression model, see `estimate`.

`PosteriorMdl = estimate(PriorMdl,X,y)`

`PosteriorMdl = estimate(PriorMdl,X,y,Name,Value)`

`[PosteriorMdl,Summary] = estimate(___)`

`PosteriorMdl = estimate(PriorMdl,X,y)` returns the model that characterizes the joint posterior distributions of *β* and *σ*^{2} of a Bayesian linear regression model. Also, `estimate` performs predictor variable selection.

`PriorMdl` specifies the joint prior distribution of the parameters, the structure of the linear regression model, and the variable selection algorithm. `X` is the predictor data and `y` is the response data. `PriorMdl` and `PosteriorMdl` are not the same object type.

To produce `PosteriorMdl`, `estimate` "updates" the prior distribution with information about the parameters that it gleans from the data.

`NaN`s in the data indicate missing values, which `estimate` removes using list-wise deletion.
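List-wise deletion can be illustrated with a small hypothetical sketch (this is not `estimate`'s internal code; the data values are made up):

```
% Hypothetical illustration: list-wise deletion removes any row of
% [X y] that contains a NaN in either the predictors or the response.
X = [1 2; NaN 4; 5 6];
y = [10; 20; NaN];
keep = ~any(isnan([X y]),2);   % true only for fully observed rows
Xclean = X(keep,:)             % only the row [1 2] survives
yclean = y(keep)               % only 10 survives
```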

`PosteriorMdl = estimate(PriorMdl,X,y,Name,Value)` uses additional options specified by one or more `Name,Value` pair arguments. For example, `'Lambda',0.5` specifies that the shrinkage parameter value for Bayesian lasso regression is `0.5` for all coefficients except the intercept.

If you specify `Beta` or `Sigma2`, then `PosteriorMdl` and `PriorMdl` are equal.

`[PosteriorMdl,Summary] = estimate(___)` uses any of the input arguments in the previous syntaxes and also returns a table including, for each parameter, posterior estimates, standard errors, 95% credible intervals, and the posterior probability that the parameter is greater than 0.

Consider the multiple linear regression model that predicts U.S. real gross national product (`GNPR`) using a linear combination of the industrial production index (`IPI`), total employment (`E`), and real wages (`WR`):

$$\text{GNPR}_{t}={\beta}_{0}+{\beta}_{1}\text{IPI}_{t}+{\beta}_{2}\text{E}_{t}+{\beta}_{3}\text{WR}_{t}+{\varepsilon}_{t}.$$

For all *t*, *ε*_{t} is a series of independent Gaussian disturbances with a mean of 0 and variance *σ*^{2}.

Assume that the prior distributions are:

- For *k* = 0,...,3, *β*_{k} has a Laplace distribution with a mean of 0 and a scale determined by the shrinkage parameter *λ*. The coefficients are conditionally independent.
- *σ*^{2} has an inverse gamma distribution. *A* and *B* are the shape and scale, respectively, of the inverse gamma distribution.
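Written out, a sketch of the prior just described (the exact Laplace scale parameterization in terms of *λ* is an assumption here; see the `lassoblm` reference for the precise form):

$$\beta_k \mid \sigma^2 \sim \mathrm{Laplace}\left(0,\ \text{scale}(\lambda,\sigma^2)\right),\quad k=0,\dots,3; \qquad \sigma^2 \sim \mathrm{IG}(A,B).$$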

Create a prior model for Bayesian lasso regression. Specify the number of predictors, the prior model type, and variable names. Specify these shrinkages:

- `0.01` for the intercept
- `10` for `IPI` and `WR`
- `1e5` for `E` because it has a scale that is several orders of magnitude larger than the other variables

The order of the shrinkages follows the order of the specified variable names, but the first element is the shrinkage of the intercept.

```
p = 3;
PriorMdl = bayeslm(p,'ModelType','lasso','Lambda',[0.01; 10; 1e5; 10],...
    'VarNames',["IPI" "E" "WR"]);
```

`PriorMdl` is a `lassoblm` Bayesian linear regression model object representing the prior distribution of the regression coefficients and disturbance variance.

Load the Nelson-Plosser data set. Create variables for the response and predictor series.

```
load Data_NelsonPlosser
X = DataTable{:,PriorMdl.VarNames(2:end)};
y = DataTable{:,"GNPR"};
```

Perform Bayesian lasso regression by passing the prior model and data to `estimate`, that is, by estimating the posterior distribution of *β* and *σ*^{2}. Bayesian lasso regression uses Markov chain Monte Carlo (MCMC) to sample from the posterior. For reproducibility, set a random seed.

```
rng(1);
PosteriorMdl = estimate(PriorMdl,X,y);
```

```
Method: lasso MCMC sampling with 10000 draws
Number of observations: 62
Number of predictors:   4

           |   Mean      Std           CI95        Positive  Distribution
-------------------------------------------------------------------------
 Intercept | -1.3472    6.8160  [-15.169, 11.590]    0.427     Empirical
 IPI       |  4.4755    0.1646  [ 4.157,  4.799]     1.000     Empirical
 E         |  0.0001    0.0002  [-0.000,  0.000]     0.796     Empirical
 WR        |  3.1610    0.3136  [ 2.538,  3.760]     1.000     Empirical
 Sigma2    | 60.1452   11.1180  [42.319, 85.085]     1.000     Empirical
```

`PosteriorMdl` is an `empiricalblm` model object storing draws from the posterior distributions of *β* and *σ*^{2} given the data. `estimate` displays a summary of the marginal posterior distributions to the command window. Rows of the summary correspond to regression coefficients and the disturbance variance, and columns to characteristics of the posterior distribution. The characteristics include:

- `CI95`, which contains the 95% Bayesian equitailed credible intervals for the parameters. For example, the posterior probability that the regression coefficient of `IPI` is in [4.157, 4.799] is 0.95.
- `Positive`, which contains the posterior probability that the parameter is greater than 0. For example, the probability that the intercept is greater than 0 is `0.427`.

Plot the posterior distributions.

plot(PosteriorMdl)

Given the shrinkages, the distribution of the coefficient of `E` is fairly dense around 0, which suggests that `E` might not be an important predictor.

By default, `estimate` draws and discards a burn-in sample of size 5000. However, it is good practice to inspect a trace plot of the draws for adequate mixing and lack of transience. Plot a trace plot of the draws for each parameter. You can access the draws that compose the distribution, that is, the properties `BetaDraws` and `Sigma2Draws`, using dot notation.

```
figure;
for j = 1:(p + 1)
    subplot(2,2,j);
    plot(PosteriorMdl.BetaDraws(j,:));
    title(sprintf('%s',PosteriorMdl.VarNames{j}));
end
```

```
figure;
plot(PosteriorMdl.Sigma2Draws);
title('Sigma2');
```

The trace plots indicate that the draws seem to be mixing well, that is, there is no detectable transience or serial correlation, and the draws do not jump between states.

Consider the regression model in Select Variables Using Bayesian Lasso Regression.

Create a prior model for performing SSVS. Assume that *β* and *σ*^{2} are dependent (a conjugate mixture model). Specify the number of predictors, `p`, and the names of the regression coefficients.

```
p = 3;
PriorMdl = mixconjugateblm(p,'VarNames',["IPI" "E" "WR"]);
```

Load the Nelson-Plosser data set. Create variables for the response and predictor series.

```
load Data_NelsonPlosser
X = DataTable{:,PriorMdl.VarNames(2:end)};
y = DataTable{:,'GNPR'};
```

Implement SSVS by estimating the marginal posterior distributions of *β* and *σ*^{2}. Because SSVS uses Markov chain Monte Carlo for estimation, set a random number seed to reproduce the results.

```
rng(1);
PosteriorMdl = estimate(PriorMdl,X,y);
```

```
Method: MCMC sampling with 10000 draws
Number of observations: 62
Number of predictors:   4

           |   Mean      Std           CI95         Positive  Distribution  Regime
----------------------------------------------------------------------------------
 Intercept | -18.8333  10.1851  [-36.965,  0.716]     0.037     Empirical   0.8806
 IPI       |   4.4554   0.1543  [ 4.165,   4.764]     1.000     Empirical   0.4545
 E         |   0.0010   0.0004  [ 0.000,   0.002]     0.997     Empirical   0.0925
 WR        |   2.4686   0.3615  [ 1.766,   3.197]     1.000     Empirical   0.1734
 Sigma2    |  47.7557   8.6551  [33.858,  66.875]     1.000     Empirical      NaN
```

`PosteriorMdl` is an `empiricalblm` model object storing draws from the posterior distributions of *β* and *σ*^{2} given the data. `estimate` displays a summary of the marginal posterior distributions to the command window. Rows of the summary correspond to regression coefficients and the disturbance variance, and columns to characteristics of the posterior distribution. The characteristics include:

- `CI95`, which contains the 95% Bayesian equitailed credible intervals for the parameters. For example, the posterior probability that the regression coefficient of `E` is in [0.000, 0.002] is 0.95.
- `Regime`, which contains the marginal posterior probability of variable inclusion for each variable. For example, the posterior probability that `E` should be included in the model is 0.0925.

Assuming, arbitrarily, that variables having `Regime` < 0.1 should be removed from the model, the results suggest that you can exclude `E` (total employment) from the model.

By default, `estimate` draws and discards a burn-in sample of size 5000. However, it is good practice to inspect a trace plot of the draws for adequate mixing and lack of transience. Plot a trace plot of the draws for each parameter. You can access the draws that compose the distribution, that is, the properties `BetaDraws` and `Sigma2Draws`, using dot notation.

```
figure;
for j = 1:(p + 1)
    subplot(2,2,j);
    plot(PosteriorMdl.BetaDraws(j,:));
    title(sprintf('%s',PosteriorMdl.VarNames{j}));
end
```

```
figure;
plot(PosteriorMdl.Sigma2Draws);
title('Sigma2');
```

The trace plots indicate that the draws seem to be mixing well, that is, there is no detectable transience or serial correlation, and the draws do not jump between states.

Consider the regression model and prior distribution in Select Variables Using Bayesian Lasso Regression.

Create a Bayesian lasso regression prior model for 3 predictors and specify variable names. Specify the shrinkage values `0.01`, `10`, `1e5`, and `10` for the intercept and the coefficients of `IPI`, `E`, and `WR`, respectively.

```
p = 3;
PriorMdl = bayeslm(p,'ModelType','lasso','VarNames',["IPI" "E" "WR"],...
    'Lambda',[0.01; 10; 1e5; 10]);
```

Load the Nelson-Plosser data set. Create variables for the response and predictor series.

```
load Data_NelsonPlosser
X = DataTable{:,PriorMdl.VarNames(2:end)};
y = DataTable{:,"GNPR"};
```

Estimate the conditional posterior distribution of *β* given the data and that *σ*^{2} = 10. Return the estimation summary table.

```
rng(1); % For reproducibility
[Mdl,SummaryBeta] = estimate(PriorMdl,X,y,'Sigma2',10);
```

```
Method: lasso MCMC sampling with 10000 draws
Conditional variable: Sigma2 fixed at 10
Number of observations: 62
Number of predictors:   4

           |  Mean     Std          CI95         Positive  Distribution
------------------------------------------------------------------------
 Intercept | -8.0643  4.1992  [-16.384,  0.018]    0.025     Empirical
 IPI       |  4.4454  0.0679  [ 4.312,   4.578]    1.000     Empirical
 E         |  0.0004  0.0002  [ 0.000,   0.001]    0.999     Empirical
 WR        |  2.9792  0.1672  [ 2.651,   3.305]    1.000     Empirical
 Sigma2    | 10       0       [10.000,  10.000]    1.000     Empirical
```

`estimate` displays a summary of the conditional posterior distribution of *β*. Because *σ*^{2} was fixed at 10 during estimation, inferences on it are trivial.

Display `Mdl`.

Mdl

```
Mdl = 
  lassoblm with properties:

    NumPredictors: 3
        Intercept: 1
         VarNames: {4x1 cell}
           Lambda: [4x1 double]
                A: 3
                B: 1

           |  Mean     Std            CI95           Positive   Distribution
---------------------------------------------------------------------------
 Intercept | 0       100      [-200.000, 200.000]     0.500    Scale mixture
 IPI       | 0         0.1000 [  -0.200,   0.200]     0.500    Scale mixture
 E         | 0         0.0000 [  -0.000,   0.000]     0.500    Scale mixture
 WR        | 0         0.1000 [  -0.200,   0.200]     0.500    Scale mixture
 Sigma2    | 0.5000    0.5000 [   0.138,   1.616]     1.000    IG(3.00, 1)
```

Because `estimate` computes the conditional posterior distribution, it returns the model input `PriorMdl`, not the conditional posterior, in the first position of the output argument list.

Display the estimation summary table.

SummaryBeta

```
SummaryBeta = 5×6 table

                   Mean          Std               CI95                Positive    Distribution                             Covariances
                __________    __________    _______________________    ________    ____________    ____________________________________________________________
    Intercept      -8.0643        4.1992       -16.384     0.01837      0.0254     'Empirical'          17.633       0.17621    -0.00053724       0.11705     0
    IPI             4.4454      0.067949         4.312      4.5783      1          'Empirical'         0.17621     0.0046171    -1.4103e-06    -0.0068855     0
    E           0.00039896    0.00015673    9.4925e-05    0.00070697    0.9987     'Empirical'     -0.00053724   -1.4103e-06     2.4564e-08   -1.8168e-05     0
    WR              2.9792       0.16716        2.6506      3.3046      1          'Empirical'         0.11705    -0.0068855    -1.8168e-05      0.027943     0
    Sigma2              10             0            10          10      1          'Empirical'               0             0              0             0     0
```

`SummaryBeta` contains the conditional posterior estimates.

Estimate the conditional posterior distribution of *σ*^{2} given that *β* is the conditional posterior mean of *β* (stored in `SummaryBeta.Mean(1:(end - 1))`). Return the estimation summary table.

```
condPostMeanBeta = SummaryBeta.Mean(1:(end - 1));
[~,SummarySigma2] = estimate(PriorMdl,X,y,'Beta',condPostMeanBeta);
```

```
Method: lasso MCMC sampling with 10000 draws
Conditional variable: Beta fixed at -8.0643  4.4454  0.00039896  2.9792
Number of observations: 62
Number of predictors:   4

           |   Mean      Std          CI95        Positive  Distribution
------------------------------------------------------------------------
 Intercept | -8.0643   0.0000  [-8.064, -8.064]    0.000     Empirical
 IPI       |  4.4454   0.0000  [ 4.445,  4.445]    1.000     Empirical
 E         |  0.0004   0.0000  [ 0.000,  0.000]    1.000     Empirical
 WR        |  2.9792   0.0000  [ 2.979,  2.979]    1.000     Empirical
 Sigma2    | 56.8314  10.2921  [39.947, 79.731]    1.000     Empirical
```

`estimate` displays an estimation summary of the conditional posterior distribution of *σ*^{2} given the data and that *β* is `condPostMeanBeta`. In the display, inferences on *β* are trivial.

Consider the regression model in Select Variables Using Bayesian Lasso Regression.

Create a prior model for performing SSVS. Assume that *β* and *σ*^{2} are dependent (a conjugate mixture model). Specify the number of predictors, `p`, and the names of the regression coefficients.

```
p = 3;
PriorMdl = mixconjugateblm(p,'VarNames',["IPI" "E" "WR"]);
```

Load the Nelson-Plosser data set. Create variables for the response and predictor series.

```
load Data_NelsonPlosser
X = DataTable{:,PriorMdl.VarNames(2:end)};
y = DataTable{:,'GNPR'};
```

Implement SSVS by estimating the marginal posterior distributions of *β* and *σ*^{2}. Because SSVS uses Markov chain Monte Carlo for estimation, set a random number seed to reproduce the results. Suppress the estimation display, but return the estimation summary table.

```
rng(1);
[PosteriorMdl,Summary] = estimate(PriorMdl,X,y,'Display',false);
```

`PosteriorMdl` is an `empiricalblm` model object storing draws from the posterior distributions of *β* and *σ*^{2} given the data. `Summary` is a table with columns corresponding to posterior characteristics and rows corresponding to the coefficients (`PosteriorMdl.VarNames`) and disturbance variance (`Sigma2`).

Display the estimated parameter covariance matrix and regimes visited.

`Covariances = Summary(:,"Covariances")`

```
Covariances = 5×1 table

                                    Covariances
                 ______________________________________________________________________
    Intercept        103.74         1.0486     -0.0031629         0.6791         7.3916
    IPI              1.0486       0.023815    -1.3637e-05      -0.030387        0.06611
    E            -0.0031629    -1.3637e-05     1.3481e-07    -8.8792e-05    -0.00025044
    WR               0.6791      -0.030387    -8.8792e-05        0.13066       0.089039
    Sigma2           7.3916        0.06611    -0.00025044       0.089039         74.911
```

`Regime = Summary(:,"Regime")`

```
Regime = 5×1 table

                 Regime
                 ______
    Intercept    0.8806
    IPI          0.4545
    E            0.0925
    WR           0.1734
    Sigma2          NaN
```

`Regime` contains the marginal posterior probability of variable inclusion for each variable. For example, the posterior probability that `E` should be included in the model is 0.0925.

Assuming, arbitrarily, that variables having `Regime` < 0.1 should be removed from the model, the results suggest that you can exclude `E` (total employment) from the model.
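Because this example returns `Summary`, the threshold rule can also be applied programmatically. A hypothetical sketch (the 0.1 cutoff is arbitrary, as noted above):

```
% Hypothetical follow-up using the returned Summary table: flag
% coefficients whose inclusion probability falls below the arbitrary
% 0.1 threshold (Sigma2 has Regime = NaN, so exclude the last row).
regime = Summary.Regime(1:end-1);
names  = Summary.Properties.RowNames(1:end-1);
toDrop = names(regime < 0.1)   % variables to consider removing
```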

`PriorMdl` — Bayesian linear regression model for predictor variable selection
`mixconjugateblm` model object | `mixsemiconjugateblm` model object | `lassoblm` model object | ...

Bayesian linear regression model for predictor variable selection, specified as an object in this table.

Value | Description |
---|---|
`mixconjugateblm` | Dependent, Gaussian-mixture-inverse-gamma conjugate model for SSVS predictor variable selection, returned by `bayeslm` |
`mixsemiconjugateblm` | Independent, Gaussian-mixture-inverse-gamma semi-conjugate model for SSVS predictor variable selection, returned by `bayeslm` |
`lassoblm` | Bayesian lasso regression model, returned by `bayeslm` |

`X` — Predictor data
numeric matrix

Predictor data for the multiple linear regression model, specified as a `numObservations`-by-`PriorMdl.NumPredictors` numeric matrix. `numObservations` is the number of observations and must be equal to the length of `y`.

**Data Types: **`double`

`y` — Response data
numeric vector

Response data for the multiple linear regression model, specified as a numeric vector with `numObservations` elements.

**Data Types: **`double`

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

**Example: **`'Sigma2',2` specifies estimating the conditional posterior distribution of the regression coefficients given the data and that the specified disturbance variance is `2`.

`'Display'`

.`'Display'`

— Flag to display Bayesian-estimator summary to command line`true`

(default) | `false`

Flag to display Bayesian-estimator summary to the command line, specified as the
comma-separated pair consisting of `'Display'`

and a value in this
table.

Value | Description |
---|---|
`true` | `estimate` prints estimation information and a table to the command line summarizing the Bayesian estimators. |
`false` | `estimate` does not print to the command line. |

The estimation information includes the estimation method, any fixed parameters, number of observations, and number of predictors. The summary table contains estimated posterior means, standard deviations (square root of the posterior variance), 95% equal-tailed credible intervals, the posterior probability that the parameter is greater than 0, and, if known, a description of the posterior distribution. For models that perform SSVS, the display table includes a column for variable-inclusion probabilities.

If you specify one of `Beta` or `Sigma2`, then `estimate` includes your specification in the display, and the corresponding posterior estimates are trivial.

**Example: **`'Display',false`

**Data Types: **`logical`

`'Beta'` — Value of regression coefficients for conditional posterior distribution estimation of disturbance variance
empty array (`[]`) (default) | numeric column vector

Value of the regression coefficients for conditional posterior distribution estimation of the disturbance variance, specified as the comma-separated pair consisting of `'Beta'` and a (`PriorMdl.Intercept` + `PriorMdl.NumPredictors`)-by-1 numeric vector. That is, `estimate` estimates the characteristics of *π*(*σ*^{2}|`y`,`X`,`Beta`), where `y` is the response data, `X` is the predictor data, and `Beta` is the specified value of the regression coefficients.

If `PriorMdl.Intercept` is `true`, then `Beta(1)` corresponds to the model intercept. All other values correspond to the predictor variables composing the columns of `X`. `Beta` cannot contain any `NaN` values (that is, all coefficients must be known). You cannot specify `Beta` and `Sigma2` simultaneously.

By default, `estimate` does not compute characteristics of the conditional posterior of *σ*^{2}.

**Example: **`'Beta',1:3`

**Data Types: **`double`

`'Sigma2'` — Value of disturbance variance for conditional posterior distribution estimation of regression coefficients
empty array (`[]`) (default) | positive numeric scalar

Value of the disturbance variance for conditional posterior distribution estimation of the regression coefficients, specified as the comma-separated pair consisting of `'Sigma2'` and a positive numeric scalar. That is, `estimate` estimates characteristics of *π*(*β*|`y`,`X`,`Sigma2`), where `y` is the response data, `X` is the predictor data, and `Sigma2` is the specified value of the disturbance variance. You cannot specify `Beta` and `Sigma2` simultaneously.

By default, `estimate` does not compute characteristics of the conditional posterior of *β*.

**Example: **`'Sigma2',1`

**Data Types: **`double`

`'NumDraws'` — Monte Carlo simulation adjusted sample size
`1e5` (default) | positive integer

Monte Carlo simulation adjusted sample size, specified as the comma-separated pair consisting of `'NumDraws'` and a positive integer. `estimate` actually draws `BurnIn` + `NumDraws*Thin` samples and bases the estimates on `NumDraws` samples. For details on how `estimate` reduces the full Monte Carlo sample, see Algorithms.

**Example: **`'NumDraws',1e7`

**Data Types: **`double`

`'BurnIn'` — Number of draws to remove from beginning of Monte Carlo sample
`5000` (default) | nonnegative scalar

Number of draws to remove from the beginning of the Monte Carlo sample to reduce transient effects, specified as the comma-separated pair consisting of `'BurnIn'` and a nonnegative scalar. For details on how `estimate` reduces the full Monte Carlo sample, see Algorithms.

To help you specify the appropriate burn-in period size, determine the extent of the transient behavior in the Monte Carlo sample by specifying `'BurnIn',0`, simulating a few thousand observations using `simulate`, and then plotting the paths.

**Example: **`'BurnIn',0`

**Data Types: **`double`
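A hypothetical sketch of that diagnostic, assuming `PriorMdl`, `X`, and `y` from the earlier examples (treat the exact `simulate` options used here as an assumption; see the `simulate` reference page for its supported name-value pairs):

```
% Hypothetical diagnostic: simulate draws with no burn-in and plot the
% paths to see how long the transient lasts before choosing a burn-in
% size.
rng(1)
[BetaSim,Sigma2Sim] = simulate(PriorMdl,X,y,'NumDraws',2000,'BurnIn',0);
figure
plot(Sigma2Sim)
title('Sigma2 draws, BurnIn = 0')
```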

`'Thin'` — Monte Carlo adjusted sample size multiplier
`1` (default) | positive integer

Monte Carlo adjusted sample size multiplier, specified as the comma-separated pair consisting of `'Thin'` and a positive integer.

The actual Monte Carlo sample size is `BurnIn` + `NumDraws*Thin`. After discarding the burn-in, `estimate` discards every `Thin` – 1 draws and then retains the next. For details on how `estimate` reduces the full Monte Carlo sample, see Algorithms.

To reduce potentially large serial correlation in the Monte Carlo sample, or to reduce the memory consumption of the draws stored in `PosteriorMdl`, specify a large value for `Thin`.

**Example: **`'Thin',5`

**Data Types: **`double`
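The sample-reduction arithmetic can be sketched as follows (small, hypothetical values for illustration):

```
% Sketch of the Monte Carlo sample reduction: generate
% BurnIn + NumDraws*Thin draws, discard the burn-in, then keep every
% Thin-th draw so that exactly NumDraws draws remain.
BurnIn = 5000; NumDraws = 10; Thin = 3;
totalDraws = BurnIn + NumDraws*Thin;       % draws actually generated
kept = (BurnIn + Thin):Thin:totalDraws;    % indices retained after thinning
numel(kept)                                % equals NumDraws
```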

`'BetaStart'` — Starting values of regression coefficients for Markov chain Monte Carlo sample
numeric column vector

Starting values of the regression coefficients for the Markov chain Monte Carlo (MCMC) sample, specified as the comma-separated pair consisting of `'BetaStart'` and a numeric column vector with (`PriorMdl.Intercept` + `PriorMdl.NumPredictors`) elements. By default, `BetaStart` is the ordinary least squares (OLS) estimate.

It is good practice to run `estimate` many times using different parameter starting values. Verify that the solutions from each run converge to similar values.

**Example: **`'BetaStart',[1; 2; 3]`

**Data Types: **`double`
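A hypothetical sketch of that convergence check, assuming `PriorMdl`, `X`, `y`, and `p` from the earlier examples:

```
% Hypothetical convergence check: rerun the estimation from two
% different starting vectors and compare the posterior means.
Post1 = estimate(PriorMdl,X,y,'BetaStart',zeros(p + 1,1),'Display',false);
Post2 = estimate(PriorMdl,X,y,'BetaStart',10*ones(p + 1,1),'Display',false);
[mean(Post1.BetaDraws,2) mean(Post2.BetaDraws,2)]  % columns should agree closely
```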

`'Sigma2Start'` — Starting values of disturbance variance for MCMC sample
positive numeric scalar

Starting values of the disturbance variance for the MCMC sample, specified as the comma-separated pair consisting of `'Sigma2Start'` and a positive numeric scalar. By default, `Sigma2Start` is the residual mean squared error of the OLS estimator.

It is good practice to run `estimate` many times using different parameter starting values. Verify that the solutions from each run converge to similar values.

**Example: **`'Sigma2Start',4`

**Data Types: **`double`

`PosteriorMdl` — Bayesian linear regression model storing distribution characteristics
`mixconjugateblm` model object | `mixsemiconjugateblm` model object | `lassoblm` model object | `empiricalblm` model object

Bayesian linear regression model storing distribution characteristics, returned as a `mixconjugateblm`, `mixsemiconjugateblm`, `lassoblm`, `conjugateblm`, or `empiricalblm` model object.

- If you do not specify either `Beta` or `Sigma2` (that is, their values are `[]`), then `estimate` updates the prior model using the data likelihood to form the posterior distribution. `PosteriorMdl` characterizes the posterior distribution and is an `empiricalblm` model object. The information it stores or displays helps you decide whether predictor variables are important.
- If you specify one of `Beta` or `Sigma2`, then `PosteriorMdl` equals `PriorMdl` (that is, they are the same object storing the same property values). In other words, `estimate` does not update the prior model to form the posterior model. However, `Summary` stores conditional posterior estimates.

For more details on the display of `PosteriorMdl`, see `Summary`.

`Summary` — Summary of Bayesian estimators
table

Summary of Bayesian estimators, returned as a table. `Summary` contains the same information as the display of the estimation summary (`Display`). Rows correspond to parameters, and columns correspond to these posterior characteristics:

- `Mean` – Posterior mean
- `Std` – Posterior standard deviation
- `CI95` – 95% equal-tailed credible interval
- `Positive` – Posterior probability that the parameter is greater than 0
- `Distribution` – Description of the marginal or conditional posterior distribution of the parameter, when known
- `Covariances` – Estimated covariance matrix of the coefficients and disturbance variance
- `Regime` – For models that perform SSVS, variable-inclusion probabilities. Low probabilities indicate that the variable should be excluded from the model.

Row names are the names in `PriorMdl.VarNames`, and the name of the last row is `Sigma2`.

Alternatively, pass `PosteriorMdl` to `summarize` to obtain a summary of Bayesian estimators.

A *Bayesian linear regression model* treats the parameters *β* and *σ*^{2} in the multiple linear regression (MLR) model *y*_{t} = *x*_{t}*β* + *ε*_{t} as random variables.

For times *t* = 1,...,*T*:

- *y*_{t} is the observed response.
- *x*_{t} is a 1-by-(*p* + 1) row vector of observed values of *p* predictors. To accommodate a model intercept, *x*_{1t} = 1 for all *t*.
- *β* is a (*p* + 1)-by-1 column vector of regression coefficients corresponding to the variables composing the columns of *x*_{t}.
- *ε*_{t} is the random disturbance with a mean of zero and Cov(*ε*) = *σ*^{2}*I*_{T×T}, while *ε* is a *T*-by-1 vector containing all disturbances. These assumptions imply that the data likelihood is

$$\ell \left(\beta ,{\sigma}^{2}|y,x\right)={\displaystyle \prod _{t=1}^{T}\varphi \left({y}_{t};{x}_{t}\beta ,{\sigma}^{2}\right).}$$

*ϕ*(*y*_{t};*x*_{t}*β*,*σ*^{2}) is the Gaussian probability density with mean *x*_{t}*β* and variance *σ*^{2} evaluated at *y*_{t}.

Before considering the data, a *joint prior distribution* assumption
is imposed on (*β*,*σ*^{2}).
In a Bayesian analysis, the beliefs about the distribution of the
parameters are updated using information about the parameters gleaned
from the likelihood of the data. The result is the *joint
posterior distribution* of (*β*,*σ*^{2})
or the *conditional posterior distributions* of
the parameters.

Monte Carlo simulation is subject to variation. That is, if `estimate` uses Monte Carlo simulation, then estimates and inferences might vary when you call `estimate` multiple times under seemingly equivalent conditions. To reproduce estimation results, set a random number seed using `rng` before calling `estimate`.

This figure describes how `estimate` reduces the Monte Carlo sample using the values of `NumDraws`, `Thin`, and `BurnIn`. Rectangles represent successive draws from the distribution. `estimate` removes the white rectangles from the Monte Carlo sample. The remaining `NumDraws` black rectangles compose the Monte Carlo sample.
