Can you provide suggestions on, or critique, my approach to this neural network fitting?
I am hoping to get some constructive suggestions on how to improve my code, whether there is a better way to approach what I am trying to do, or whether there is another area I should investigate. Much appreciated in advance.
What I am doing is creating a neural network to fit one variable as a function of 7 other variables. I have a gigantic tabulation of those 8 variables:
ANN=fitnet([50],'trainrp');
ANN=train(ANN,Input,Output);
Here Input is the tabulation of the 7 variables and Output is the tabulation of the dependent variable. The end goal is to give the ANN any combination of the 7 inputs and have it return an accurate estimate of the Output (linear interpolation). However, I am doing this process iteratively: after training the network on the data, I generate a new, different table and use the network to check whether it can predict the output values within 10% error. If it cannot, I take the rows where it did poorly, add them to the original table, and repeat the command:
ANN=train(ANN,Input,Output);
But now Input and Output are the original table plus the rows from the new table where the network did not do well. I keep repeating this process over and over (it is an automated process, not manual).
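For reference, a minimal sketch of the loop described above, assuming Input is 7 x N and Output is 1 x N with one sample per column; generateNewTable() and maxPasses are hypothetical placeholders for however the new verification tables are produced and how many passes are allowed:
ANN = fitnet(50,'trainrp');
ANN = train(ANN, Input, Output);
maxPasses = 20;                                 % assumed cap on retraining passes
for k = 1:maxPasses
    [newIn, newOut] = generateNewTable();       % hypothetical helper for the new table
    pred   = ANN(newIn);
    relErr = abs(pred - newOut) ./ abs(newOut); % relative prediction error
    bad = relErr > 0.10;                        % samples missed by more than 10%
    if ~any(bad)
        break;                                  % all predictions within tolerance
    end
    % Append the poorly predicted samples and retrain
    Input  = [Input,  newIn(:,bad)];
    Output = [Output, newOut(:,bad)];
    ANN = train(ANN, Input, Output);
end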
Accepted Answer
Mrutyunjaya Hiremath
17 Aug 2023
Try this:
% Load your data
% Example: load('your_data.mat');
% Assuming Input (7 x N) and Output (1 x N) are your data matrices,
% with one sample per column.

% Normalize inputs (zero mean, unit standard deviation per variable)
meanInput = mean(Input,2);
stdInput  = std(Input,0,2);
Input = (Input - meanInput) ./ stdInput;

% Split data into training and validation sets (70% / 30%)
[trainInd,valInd] = dividerand(size(Input,2),0.7,0.3,0);
trainInput  = Input(:,trainInd);
trainOutput = Output(:,trainInd);
valInput  = Input(:,valInd);
valOutput = Output(:,valInd);

% Initialize the neural network with multiple hidden layers,
% for instance one with 100 neurons and another with 50
hiddenLayers = [100, 50];
ANN = fitnet(hiddenLayers,'trainrp');

% Early-stopping split applied to the data passed to train()
ANN.divideParam.trainRatio = 0.7;
ANN.divideParam.valRatio   = 0.3;
ANN.divideParam.testRatio  = 0;

% Regularization (to help prevent overfitting)
ANN.performParam.regularization = 0.1;

% Convergence criteria
threshold = 0.02;      % stop if mean relative error is below this
max_iterations = 20;   % maximum number of retraining passes
prevError = inf;       % initialize with a high value
tolerance = 0.001;     % minimal change to consider convergence

for i = 1:max_iterations
    % Train (or continue training) the neural network
    [ANN, tr] = train(ANN, trainInput, trainOutput);

    % Validate on the held-out set
    predictions = ANN(valInput);
    relErr = abs(predictions - valOutput) ./ abs(valOutput); % relative error
    meanError = mean(relErr(:));

    % Check convergence
    if meanError < threshold || abs(prevError - meanError) < tolerance
        break; % convergence achieved
    end

    % If the error on a validation sample is high, add it to the training set
    highErrorIndices = find(relErr > 0.1);
    trainInput  = [trainInput,  valInput(:,highErrorIndices)];
    trainOutput = [trainOutput, valOutput(:,highErrorIndices)];

    prevError = meanError;
end
Change according to your input and output data.
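One note on using the trained network afterwards: because the script standardizes Input with meanInput and stdInput, any new query points must be standardized with those same statistics before being passed to the network. A minimal usage sketch (newInput here is just an illustrative placeholder for your new 7-variable samples):
% Hypothetical new query points: 7 x M matrix of the same 7 input variables
newInput = rand(7,5);

% Apply the SAME normalization statistics computed from the training data
newInputNorm = (newInput - meanInput) ./ stdInput;

% Predict with the trained network
newPredictions = ANN(newInputNorm);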
More Answers (0)