The function 'trainbr', which performs Bayesian regularization backpropagation, disables validation stops by default. The reasoning for this is that a validation stop is usually used as a form of regularization, but 'trainbr' has its own form of regularization built into the algorithm. More information on the 'trainbr' function can be found at the documentation link below:
If we have a network called "net", this behavior of validation stops is controlled via the parameter 'net.trainParam.max_fail', which 'trainbr' sets to 0 by default (see documentation link above). 'max_fail' denotes the maximum number of consecutive iterations in which the validation performance is allowed to fail to improve before training terminates.
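We can confirm this default in the Command Window. This is a minimal sketch; the network creation call ('feedforwardnet' with 10 hidden neurons) is just an assumed example:

```matlab
% Create an example network and select Bayesian regularization training.
% Assigning trainFcn replaces net.trainParam with that function's defaults.
net = feedforwardnet(10);
net.trainFcn = 'trainbr';
disp(net.trainParam.max_fail)  % displays 0: validation stops are disabled
```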
For example, if we set 'max_fail' to 5, training terminates as soon as we get 5 consecutive iterations in which the validation performance does not improve. If we want to obtain validation results without terminating training early, we can set 'max_fail' to a very large number like 10000.
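A sketch of that idea in code (the network setup and data-division ratios below are assumptions for illustration, not part of the original answer):

```matlab
% Train with 'trainbr' but keep a validation split that is monitored
% without ever triggering an early stop.
net = feedforwardnet(10);
net.trainFcn = 'trainbr';
net.divideFcn = 'dividerand';         % randomly split into train/val/test
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
net.trainParam.max_fail = 10000;      % effectively disables validation stops
```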
Unfortunately, 'max_fail' cannot be set from within the neural network app. However, we can use the app to generate a script for the network and then edit that script to set 'max_fail' to a large number. Running the edited script will then produce validation results. I have attached an example script generated from the app in which 'max_fail' is set to a large number (see line 37 of the attached script).
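For reference, the relevant portion of such a generated script typically looks like the sketch below; the variable names and ratios are the app's usual defaults, and the 'max_fail' line is the one edit we add:

```matlab
% Excerpt in the style of an app-generated fitting script (names assumed).
trainFcn = 'trainbr';                  % Bayesian regularization backpropagation
hiddenLayerSize = 10;
net = fitnet(hiddenLayerSize, trainFcn);
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio   = 15/100;
net.divideParam.testRatio  = 15/100;
net.trainParam.max_fail = 10000;       % our edit: keep validation results
[net, tr] = train(net, x, t);          % tr is the training record
```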
Note that the script assumes that two variables, 'simplefitInputs' and 'simplefitTargets', are set to the input data and target data, respectively. When I ran the attached script, I used the sample 'chemical_dataset' by executing the following in the MATLAB Command Window:
>> load chemical_dataset
>> simplefitInputs = chemicalInputs
>> simplefitTargets = chemicalTargets
After running the attached script, we can view the validation and regression plots and confirm that the validation data is now present.
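If you prefer to generate those plots from the Command Window rather than from the training tool, something like the following works, assuming the script left 'net', the training record 'tr', and the data variables in the workspace:

```matlab
% Visualize results after training (variable names assumed from the script).
y = net(simplefitInputs);                            % network outputs
figure; plotperform(tr);                             % train/val/test curves
figure; plotregression(simplefitTargets, y, 'All');  % regression over all data
```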