Why does legacy code scale variables by 2^16?

Views: 1 (last 30 days)
KAE on 20 Nov 2017
Edited: KAE on 22 Nov 2017
I am trying to understand some legacy Matlab code, and was puzzled to note that many variables are multiplied or divided by 2^16. After some digging, I found that 2^16 = 65536 is the number of distinct values a 16-bit (unsigned) integer can represent; see here. I believe the code originated in C or C#. I have two questions: (1) Could the code author be using this factor to force the variable to be a float? (2) If so, can I delete all the 2^16 factors without affecting the values in Matlab?
Here is an example,
SOME_THRESHOLD = floor(0.010 * 65536); % Author's comment indicates this is supposed to represent 10%
SOME_THRESHOLD gets passed into a function where all the other variables have been multiplied by either 65536 or 2^16 before any arithmetic operations; floor and abs are also used.
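To give a sense of what I mean, here is a minimal sketch (the values and names are my own, not from the legacy code) of the fixed-point pattern I am describing, where real numbers are stored as integers after multiplying by 2^16 and products are divided by 2^16 to renormalize:
SCALE = 2^16;                              % 65536, i.e. 1.0 in this fixed-point scheme
a = 0.25;  b = 0.5;
a_fix = floor(a * SCALE);                  % 16384
b_fix = floor(b * SCALE);                  % 32768
% Multiplying two scaled values doubles the scale factor, so the product
% must be divided by 2^16 once to return to the same scale.
prod_fix = floor(a_fix * b_fix / SCALE);   % 8192
prod     = prod_fix / SCALE;               % 0.125, matches a*b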
7 Comments
Image Analyst on 21 Nov 2017
I don't see the example in your original/edited question. But good luck anyway.
Christoph F. on 22 Nov 2017
I think the original author tried to make the MATLAB script produce exactly the same numbers as the C code.


Accepted Answer

KAE on 22 Nov 2017
Edited: KAE on 22 Nov 2017
Just to close this question out: based on all the comments and the info at the links, it appears that the 2^16 factors are for binary scaling in the original C code. There could be numerical differences if the 2^16 factors were removed, but they are small for my application. Thanks for all your help!
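As a quick illustration of how small those differences are (the numbers here are my own, not taken from the legacy code), the 2^16 scaling followed by floor quantizes values to the nearest 1/65536:
SCALE = 2^16;
x = 0.010;
x_scaled   = floor(x * SCALE) / SCALE;   % 655/65536 = 0.00999450...
x_unscaled = x;                          % 0.010 as an ordinary double
difference = x_unscaled - x_scaled;      % about 5.5e-6, small but nonzero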

More Answers (0)
