I am trying to wrap my head around what sample time actually means in the real world. I'm using Simulink to write firmware for a new product my company is designing. It runs on an ARM Cortex-M3, so I'm having Simulink generate code for that target. My boss is working on a Simulink model that is essentially a state machine and is supposed to control the timing of everything. I've run some tests on hardware to work out the interfacing, but I can't figure out what actually controls the timing once the generated code is running on the processor.
There doesn't seem to be any interaction with a hardware timer, nor any other mechanism that controls when, or how often, a given piece of the model executes. The only mechanism I've been able to identify is that rt_OneStep() decides whether or not to call a particular function on a given step.
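For reference, this is roughly what I see in the generated ert_main.c, paraphrased and condensed from memory, so the exact names and comments may differ (the model_ prefix would really be my model's name):

```c
/* Paraphrase of the generated ert_main.c -- details depend on the model/target */
void rt_OneStep(void)
{
  static boolean_T OverrunFlag = false;

  if (OverrunFlag) {                       /* previous step still running? */
    rtmSetErrorStatus(model_M, "Overrun");
    return;
  }

  OverrunFlag = true;
  model_step();                            /* advance the model one base-rate step */
  OverrunFlag = false;
}

int main(void)
{
  model_initialize();

  /* The generated comment says to attach rt_OneStep() to a timer or ISR
   * running at the base sample period -- but nothing in the file actually
   * sets that up; the loop below just spins. */
  while (rtmGetErrorStatus(model_M) == (NULL)) {
    /* background tasks */
  }

  model_terminate();
  return 0;
}
```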
So, for example, if I set the default (base) sample time to 0.1 s and some S-Function's sample time to 0.2 s, then essentially the generated code is one big loop, and the slower function gets executed once for every two executions of the base-rate function. Do I have a correct grasp of the situation? Is there some other method of timekeeping at play here that I'm not aware of?
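To make my mental model concrete, this is how I picture the multirate scheduling working. This is illustrative only, not the actual generated code (which I believe uses task counters in the model's timing structure), and the function names here are made up:

```c
/* Illustrative sketch: with a 0.1 s base rate, rt_OneStep() is meant to be
 * called every 0.1 s, and a counter decides which slower tasks also run. */
void rt_OneStep(void)
{
    static unsigned int tick = 0U;

    step_base_rate();            /* 0.1 s task: runs on every call */

    if ((tick % 2U) == 0U) {
        step_0p2s_task();        /* 0.2 s task: runs on every second call */
    }

    tick++;
}
```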
If I do have the time-step idea right, what is the best way to use Simulink to control timing in an environment that is interrupt-driven and spends most of its time in a low-power mode with the processor asleep? (See the sketch below for what I have in mind.)
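Concretely, what I'm imagining is something like this: fire a hardware timer at the base rate, set a flag in the ISR, call rt_OneStep() from the main loop, and sleep in between. The SysTick_Handler name, the flag, and the overall structure are my own guesses, not anything Simulink generates. Is this the intended pattern, or is there a better-supported way?

```c
#include <stdbool.h>

extern void model_initialize(void);   /* generated; actual name depends on the model */
extern void rt_OneStep(void);         /* from the generated ert_main.c */

static volatile bool step_pending = false;

/* Hypothetical: a hardware timer (e.g. SysTick) configured elsewhere to
 * interrupt every 0.1 s, i.e. at the model's base sample rate. */
void SysTick_Handler(void)
{
    step_pending = true;
}

int main(void)
{
    model_initialize();
    /* ... clock and timer setup for the 0.1 s period would go here ... */

    for (;;) {
        if (step_pending) {
            step_pending = false;
            rt_OneStep();                 /* advance the model one base-rate step */
        } else {
            __asm volatile ("wfi");       /* Cortex-M: sleep until the next interrupt */
        }
    }
}
```

Thanks in advance for your help.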