After some experimentation, I now have a full grasp of the problem and its internal implementation. I wrote a simple MATLAB function for C code generation to understand how monoCamera and its vehicleToImage method work:
function imagePt = vehicleToImgPt(worldPt) %#codegen
focalLength = [309.4362 344.2161];    % [fx fy] in pixels
principalPoint = [318.9034 257.5352]; % [cx cy] in pixels
imageSize = [480 640];                % [rows cols]
height = 2.1798;                      % camera height above ground, in meters
pitch = 14;                           % camera pitch, in degrees
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);
sensor = monoCamera(intrinsics,height,'Pitch',pitch);
imagePt = vehicleToImage(sensor,worldPt);
end
Then I generated the main C code from it. The key point: in the generated code, b_a is a 3x4 array in [R,t] format, stored in row-major order. I reproduced it with the following experiment, and the resulting b_a is identical to the array in the C code.
R1 = rotx(pitch)*rotx(90)*rotz(90); % vehicle frame -> camera physical frame:
                                    % the vehicle axes are rotated -90 deg about Z,
                                    % then -90 deg about X, then -pitch about X
t1 = -R1*[0;0;height];              % corresponding translation
b_a = [R1,t1]

         0   -1.0000         0         0
   -0.2419         0   -0.9703    2.1151
    0.9703         0   -0.2419    0.5273
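As a cross-check, the same [R1,t1] can be rebuilt outside MATLAB. This is a numpy sketch (not the generated C code); the rotx/rotz helpers below are my own, written to match MATLAB's degree-based rotx/rotz:

```python
import numpy as np

def rotx(deg):
    # rotation about the X axis, angle in degrees (matches MATLAB rotx)
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rotz(deg):
    # rotation about the Z axis, angle in degrees (matches MATLAB rotz)
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

height, pitch = 2.1798, 14.0              # values from the MATLAB snippet above
R1 = rotx(pitch) @ rotx(90) @ rotz(90)    # vehicle frame -> camera physical frame
t1 = -R1 @ np.array([0.0, 0.0, height])   # camera sits `height` above the origin
b_a = np.hstack([R1, t1[:, None]])        # 3x4 [R | t], row-major in the C code
```

Printing b_a reproduces the matrix above to display precision.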
R1 is the rotation from the vehicle coordinate system to the camera's physical coordinate system, and t1 is the corresponding translation vector. With these, the projection can be computed in two equivalent ways:
- using the formula directly:
imgPt = intrinsics.IntrinsicMatrix'*[R1,t1]*[10;0;0;1];
imgPt = imgPt./imgPt(end) % perspective divide -> homogeneous pixel coordinates
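The same formula can be sketched in numpy (a sketch under the values above, not MATLAB itself; K is IntrinsicMatrix transposed, i.e. [fx 0 cx; 0 fy cy; 0 0 1], since MATLAB stores intrinsics in postmultiply form):

```python
import numpy as np

fx, fy = 309.4362, 344.2161
cx, cy = 318.9034, 257.5352
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])          # = intrinsics.IntrinsicMatrix'

# [R1 | t1] from the experiment above (rounded console values)
Rt = np.array([[ 0.0,    -1.0,  0.0,    0.0],
               [-0.2419,  0.0, -0.9703, 2.1151],
               [ 0.9703,  0.0, -0.2419, 0.5273]])

world = np.array([10.0, 0.0, 0.0, 1.0])  # vehicle point 10 m ahead, homogeneous
img = K @ Rt @ world
img = img / img[-1]                      # perspective divide
# img[:2] is the pixel coordinate of the projected point
```

Since the test point lies on the vehicle's X axis (Y = 0), the horizontal pixel coordinate stays at the principal point cx, and only the vertical coordinate is shifted by the pitch and height.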
- using the worldToImage function