Transform predictors into extracted features
z = transform(Mdl,x)
Create a feature transformation model with 100 features from the caltech101patches data.

rng('default') % For reproducibility
data = load('caltech101patches');
q = 100;
X = data.X;
Mdl = sparsefilt(X,q)
Warning: Solver LBFGS was not able to converge to a solution.
Mdl = 
  SparseFiltering
            ModelParameters: [1×1 struct]
              NumPredictors: 363
         NumLearnedFeatures: 100
                         Mu: []
                      Sigma: []
                    FitInfo: [1×1 struct]
           TransformWeights: [363×100 double]
    InitialTransformWeights: []
sparsefilt issues a warning because it stopped due to reaching the iteration limit, instead of reaching a step-size limit or a gradient-size limit. You can still use the learned features in the returned object by calling the transform function.
Transform the first five rows of the input data X to the new feature space.
y = transform(Mdl,X(1:5,:));
size(y)
ans = 1×2

     5   100
x — Predictor data
numeric matrix with p columns | table of numeric values with p columns

Predictor data, specified as a numeric matrix with p columns or as a table of numeric values with p columns. Here, p is the number of predictors in the model, which is Mdl.NumPredictors. Each row of the input matrix or table represents one data point to transform.
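As a sketch of the two accepted input forms (assuming a fitted model Mdl and predictor data X with Mdl.NumPredictors columns):

```matlab
% Transform rows supplied as a numeric matrix.
zMat = transform(Mdl,X(1:10,:));

% The same rows supplied as a table of numeric variables.
tbl = array2table(X(1:10,:));
zTbl = transform(Mdl,tbl);
```

Either form produces one transformed row per data point.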
transform converts data to predicted features by using the learned weight matrix W to map input predictors to output features.

For rica, input data X maps linearly to output features XW. See Reconstruction ICA Algorithm.

For sparsefilt, input data maps nonlinearly to output features (a nonlinear function of XW). See Sparse Filtering Algorithm.
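For a rica model, the linear mapping can be checked directly against the stored weight matrix. A minimal sketch, assuming a ReconstructionICA model fit with the default Standardize setting (so no centering or scaling is applied before the map):

```matlab
Mdl = rica(X,q);               % learn q features
z1 = transform(Mdl,X);         % features via transform
z2 = X * Mdl.TransformWeights; % explicit linear map X*W
% Because rica maps linearly, z1 and z2 should agree
% up to floating-point roundoff.
max(abs(z1(:) - z2(:)))
```

No such shortcut exists for sparsefilt, whose mapping is a nonlinear function of XW.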
The result of transform for sparse filtering depends on the number of data points. In particular, the result of applying transform to each row of a matrix separately differs from the result of applying transform to the entire matrix at once.
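You can observe this dependence directly. A sketch, assuming a fitted SparseFiltering model Mdl:

```matlab
% Transform five rows together, then the first row alone.
zAll  = transform(Mdl,X(1:5,:)); % all five rows at once
zRow1 = transform(Mdl,X(1,:));   % first row by itself
% zRow1 generally differs from zAll(1,:), because the sparse
% filtering nonlinearity normalizes across the data points
% passed in a single call.
norm(zAll(1,:) - zRow1)          % typically nonzero
```

For this reason, transform all rows that you want on a common feature scale in a single call.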