Best way to analyze two waveform signals
I am trying to find the best way to analyze two waveforms that were collected for an experiment. The reference waveform was collected from a motion device and records the direction it moved in pitch and roll. Subjects were supposed to input their perception of pitch and roll with a joystick. The data is very messy, and RMS error, correlation values, and FFT are not great measures. Any suggestions would be helpful!
Adam Danz on 16 Jun 2022
Hello @Austin Bollinger, looks like @Jonas has made some good recommendations. I'll add another perspective.
My grad school research was similar to, but not exactly the same as, what you describe. Monkeys used a joystick (JS) to steer through virtual environments. The JS controlled angular velocity and they had to stay within a narrow path boundary, but the catch was that the path disappeared after 0.5 seconds, so they had to maintain an internal representation of the path and update their estimate of their location relative to the path boundaries by integrating optic flow and their joystick commands across time. No other visual features were present.
Here are three bits of advice from my experience with this.
It sounds like you are studying vestibular (or visual, or both?) signals and, based on the length of time (x-axis), it looks like the trials are long and the JS estimates are made in real time, which means you would expect the JS estimates to be similar to the real motion but with a temporal lag to account for perceptual and motor processing. But that's often not the case. Instead, we see periods of flat signal where the JS is maxed out in the positive or negative direction.
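A quick way to quantify that problem is to measure how often the JS sits pinned at its rails and where those runs occur. Here's a minimal sketch (Python, with a made-up toy signal and a hypothetical ±15 limit; none of these numbers come from your data):

```python
def saturation_runs(js, js_max, tol=1e-6):
    """Return (fraction of samples saturated, list of (start, length) runs at the rails)."""
    sat = [abs(abs(v) - js_max) < tol for v in js]
    runs, start = [], None
    for i, s in enumerate(sat):
        if s and start is None:
            start = i                      # a saturated run begins
        elif not s and start is not None:
            runs.append((start, i - start))  # run ended at sample i-1
            start = None
    if start is not None:                  # run extends to end of trial
        runs.append((start, len(js) - start))
    return sum(sat) / len(js), runs

# Toy trace pinned at +15 for samples 2-5 and at -15 for the last sample
js = [3.0, 14.0, 15.0, 15.0, 15.0, 15.0, 9.0, -15.0]
frac, runs = saturation_runs(js, 15.0)
print(frac, runs)  # -> 0.625 [(2, 4), (7, 1)]
```

If a large fraction of each trial is spent in those flat runs, the JS signal simply doesn't contain the information you need, no matter which metric you choose.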
Bummer, but not unusual. I had the same problem at first. To fix it, I increased the joystick gain so that the max/min was much greater than what was actually needed to do the task. This had 2 effects (keep in mind, this was with monkeys, so I couldn't just tell them not to do that):
- Much less frequent max-outs
- Much more sensitive JS control. This means that smaller errors had larger consequences and finer control of the JS was required (my monkeys were trained to steer for 1-2 years each).
If I'm reading your plots correctly, the max target signal is +/-15, so your max JS gains should be larger than +/-18, especially if your task is open-loop (i.e., subjects perceive something without the ability to predict the next move, so JS control is completely reactionary).
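Once saturation is under control, the perceptual/motor lag mentioned above can be estimated by cross-correlating the JS trace with the reference and taking the shift that maximizes the correlation. A sketch with a fabricated signal (in MATLAB, xcorr or finddelay does the same job):

```python
import math

def best_lag(ref, js, max_lag):
    """Shift (in samples) of js that best matches ref, by Pearson correlation."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
        return num / den if den else 0.0
    # Response should trail the stimulus, so only non-negative lags are scanned
    return max(range(max_lag + 1), key=lambda L: corr(ref[:len(ref) - L], js[L:]))

# Toy check: js is just ref delayed by 3 samples
ref = [math.sin(t / 5) for t in range(200)]
js = [0.0] * 3 + ref[:-3]
print(best_lag(ref, js, 20))  # -> 3
```

The recovered lag is also a sanity check on the data: if it comes out near zero or implausibly large, the subject probably wasn't tracking in the way the task assumes.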
Even if you work out the JS saturation problem, you may still see sudden jolts of the JS in the positive/negative directions within the velocity signal. Both of my monkeys independently adopted somewhat of a "bang-bang" control of the JS (which controlled angular velocity), meaning that to take a smooth turn, instead of smoothly controlling the JS they would often assert several jolts of the JS in the direction of the curve. Critically, though this is a nuisance to the data analyst, 1) it was a natural behavior (a monkey can't talk or understand verbal instructions) and 2) it had no effect on the task and resulted in what appeared to be smooth turns. If you're analyzing orientation such as pitch/roll (i.e., units = degrees) rather than angular velocity (units = deg/sec), and if the subjects are controlling angular velocity, any bang-bang control will integrate out and the signal should be smoother.
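To see why the jolts integrate out, here's a toy demonstration (the velocity values are invented, not from any real trial): a bursty, bang-bang-style angular-velocity trace turns into a fairly smooth orientation ramp once you accumulate it over time.

```python
def integrate(vel, dt):
    """Cumulatively integrate angular velocity (deg/sec) into orientation (deg)."""
    pos, out = 0.0, []
    for v in vel:
        pos += v * dt      # simple rectangular (Euler) integration
        out.append(pos)
    return out

# Bang-bang style control: bursts of full deflection instead of a smooth curve
vel = [10, 0, 10, 10, 0, 0, 10, 0, 10, 10]   # deg/sec, hypothetical
ori = integrate(vel, dt=0.1)
print(ori)  # a steadily rising ramp ending at 6.0 degrees
```

The jagged on/off pattern in vel becomes a monotonic, nearly linear trace in ori, which is why analyzing orientation can rescue data that looks hopeless at the velocity level.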
The take-home message is that what looks sub-optimal in the data may be a completely natural control system developed organically within the brain (or whatever is controlling your JSs).
This leads us to the final, most critical point.
Lots of data
Though I haven't seen your data, I can make a bet: there is a LOT of trial-to-trial variability. If the same subject does the same condition 10 times, there will not be any near-repeats of behavior across the entire 6-minute trials. This finding is frequently mentioned in the human steering literature. Again, a nuisance for data analysis but the result of a natural phenomenon.
To overcome this, you need lots of data -- lots and lots of it. I think I had something like 5000 repetitions of each condition, but if you're using humans, that number will obviously be much smaller. With lots and lots of data, I was able to fit the responses from all repetitions of the same condition to estimate the underlying timeseries, but that requires knowing the expected function that the behavior follows, which doesn't seem to be the case from your description. Nevertheless, with lots and lots of data, you can eliminate the noise and access the underlying system.
Example: the gray lines below are single trials (hundreds of them) showing the displacement of the monkey from the center of a curved path. Variance grows with time because the monkey had no visual feedback of the path. But if I bin the data and measure the median within each bin, I see the underlying trend, and the distribution is normal within each bin, which tells me the monkey was generally following the same strategy across time (months).
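The binning idea above can be sketched in a few lines. This simulation uses a fabricated linear trend and fabricated noise (not my monkey data): no single trial looks clean, yet per-time-bin medians pooled across trials recover the trend.

```python
from statistics import median
import random

random.seed(0)
n_trials, n_samples, bin_width = 200, 60, 10
true = [0.5 * t for t in range(n_samples)]        # hypothetical underlying trend
# Each "trial" is the trend plus heavy noise (stand-in for trial-to-trial mess)
trials = [[v + random.gauss(0, 5) for v in true] for _ in range(n_trials)]

bin_medians = []
for b0 in range(0, n_samples, bin_width):
    # Pool every sample from every trial that falls inside this time bin
    pooled = [tr[t] for tr in trials for t in range(b0, b0 + bin_width)]
    bin_medians.append(median(pooled))
print([round(m, 1) for m in bin_medians])  # close to 2.25, 7.25, 12.25, ...
```

With 2000 pooled samples per bin, the medians land within a small fraction of a unit of the true bin centers even though each raw sample has a noise SD of 5 -- the same effect as the gray-lines figure.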
Take-home message: Law of Large Numbers: fitting, averaging, or computing something from lots and lots of data from the same condition will eliminate noise (but trial-to-trial variability is often the interesting part and should not be ignored).
To answer your question...
Sorry, I got carried away there. There is no "best way" to analyze the waveforms. This is where the art of data analytics comes in. Before we can recommend a good metric, we need the precise question being asked, and sometimes that question needs to be adapted to the data.