Discrete weights with Neural Network Toolbox

Views: 1 (last 30 days)
circuit_designer5172 on 18 Jun 2015
Commented: Jules BROCHARD on 25 Jan 2018
Hello, I am building a custom neural network. In the application I am attempting to model, only weights with the discrete values [-2, -1, 0, 1, 2] are possible. I want to train this network using the built-in functions, but I don't want to get back weights like 1.24345932 and then have to round them, sacrificing accuracy in the testing phase. I have found some documentation saying you can use the command net.inputs{1}.exampleInput = [...], but it doesn't recognize that I want the values to be discrete, and it resets the size of the inputs. Thank you!

Accepted Answer

Eric Lin on 19 Jun 2015
Constraining network weights is not possible with the built-in Neural Network Toolbox functions, as the training algorithms are all gradient-based. If you would like to implement your own training algorithm, consider using the intlinprog or ga functions, which perform mixed-integer optimization.
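If you go the ga route, a rough sketch of the idea might look like the following. This is only illustrative, not toolbox-provided behavior: it assumes a small feedforwardnet and training data x and t already in the workspace, and uses getwb/setwb to pack and unpack the weight vector. The helper weightFitness is a made-up name and would live in its own file (or as a local function in newer releases).

  % Assumed setup: x (inputs) and t (targets) are already defined.
  net = feedforwardnet(5);                 % hypothetical hidden layer size
  net = configure(net, x, t);              % size the weights for this data
  nvars = numel(getwb(net));               % total number of weights and biases

  fitness = @(w) weightFitness(w, net, x, t);
  lb = -2*ones(1, nvars);                  % restrict every weight to [-2, 2]
  ub =  2*ones(1, nvars);
  intcon = 1:nvars;                        % force every variable to be integer
  wOpt = ga(fitness, nvars, [], [], [], [], lb, ub, [], intcon);
  net = setwb(net, wOpt(:));               % load the discrete weights back

  function mse = weightFitness(w, net, x, t)
      netW = setwb(net, w(:));             % candidate integer weights
      y = netW(x);                         % network output for those weights
      mse = perform(netW, t, y);           % default performance measure (MSE)
  end

With lb, ub, and IntCon set this way, ga only ever evaluates candidates whose weights are integers in [-2, 2], so no rounding is needed afterwards.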
2 Comments
Alexandra Tzilivaki on 6 Nov 2017
Hello Eric. Is it possible, however, to have non-negative weights? If so, which is the best training function for non-negative weights?
Many thanks in advance
Jules BROCHARD on 25 Jan 2018
If you build your own transfer function, you can use a transformation, such as the exponential*, to map R into R+ before feeding the weights into your usual transfer function. In practice the stored weights can still be negative, but they will be used as positive numbers.
*: beware of the distortion of the space this induces. Oh, and don't forget to adjust the gradient accordingly :)
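As a toy illustration of that reparameterization idea outside the toolbox (all names below are made up for the example): fit t ≈ w*x with the weight kept non-negative by optimizing an unconstrained parameter u and using w = exp(u), so the chain rule adds a factor dw/du = exp(u) to the gradient.

  % Minimal sketch: keep the effective weight non-negative via w = exp(u).
  x = linspace(0, 1, 50);
  t = 3*x + 0.05*randn(1, 50);    % targets generated with true weight 3
  u = 0;                          % unconstrained parameter, w = exp(u)
  lr = 0.1;                       % learning rate
  for k = 1:500
      w = exp(u);                 % effective weight, always positive
      y = w*x;                    % model output
      dLdw = mean(2*(y - t).*x);  % gradient of the MSE with respect to w
      dLdu = dLdw*exp(u);         % chain rule: dL/du = dL/dw * dw/du
      u = u - lr*dLdu;            % gradient step on the unconstrained u
  end
  disp(exp(u))                    % learned weight, should be close to 3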


More Answers (0)
