Instead, I’m keeping an open mind (aka a shotgun or fishing-expedition approach) and considering a wide array of candidate models for validation in Part 1 of the study. The issue then is how to go about declaring winners without making a priori declarations about the targets beforehand.
One method is illustrated by the figures above. Here I used the phytoplankton data in MARSS as a mock-up for some of the potential parameters that may come out of candidate models (humor me and just go along with this).
As you can see in the first figure, we have several years of data with what looks like a combination of seasonal variation and multi-year trends. One assumption that can be made is that all of the observed parameters are slightly different views of a smaller number of underlying hidden states.
One way to estimate the number of hidden states is to use dynamic factor analysis (DFA). The second plot shows that for this data set there are two underlying hidden states.
What we can also see is that the parameters load differently relative to these hidden states. Pmax and W1 load heavily on hidden state 1 while Tau1 and CP load heavily on hidden state 2. W2 loads heavily on both.
From this factor analysis we have an indication that, when it comes to performance (in this mock-up), there are really just two hidden states that need to be monitored, compressed down from the five potential parameters.
With the dynamic factor analysis done, there is a higher level of confidence that the hidden states represent orthogonal parameters that can then be used to monitor training response.
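MARSS fits DFA models in R by maximum likelihood; as a rough stand-in for the idea (assuming a Python environment with NumPy, and made-up loadings rather than anything fitted to the phytoplankton data), here is a sketch of five observed series driven by only two hidden states, with PCA used as a crude factor-analysis substitute to recover the dimensionality:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the five monitored parameters
# (Pmax, W1, W2, Tau1, CP in the mock-up): five observed series
# driven by only two hidden random-walk states.
n_t = 200
hidden = np.cumsum(rng.normal(size=(2, n_t)), axis=1)  # 2 hidden states

# Made-up loadings: rows are the observed parameters, columns the states
Z = np.array([[1.0, 0.0],   # "Pmax" loads on state 1
              [0.9, 0.1],   # "W1" loads on state 1
              [0.6, 0.7],   # "W2" loads on both
              [0.1, 1.0],   # "Tau1" loads on state 2
              [0.0, 0.9]])  # "CP" loads on state 2
obs = Z @ hidden + rng.normal(scale=0.5, size=(5, n_t))  # observation noise

# PCA as a crude factor-analysis stand-in: eigenvalues of the
# covariance of the demeaned observations
X = obs - obs.mean(axis=1, keepdims=True)
eigvals = np.linalg.eigvalsh(np.cov(X))[::-1]  # descending
explained = np.cumsum(eigvals) / eigvals.sum()
print(f"variance explained by 2 components: {explained[1]:.2f}")
```

With two genuine hidden states, the first two eigenvalues dominate, which is the same signal a DFA model-selection exercise would pick up.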
Noise, probability, and deterministic skeletons in a simple exhaustion simulation
Reading about extinction risk metrics in http://cran.r-project.org/web/packages/MARSS/vignettes/UserGuide.pdf gives me some thoughts on modeling time to exhaustion in a way that is probably a bit more mature than the straight deterministic t = W’/(P − CP).
The first issue is to start the move from a deterministic view to a probabilistic one. In general, what people have been doing so far is to take a deterministic approach, i.e., if you deplete W’ you will become exhausted, or something along those lines. From a probabilistic perspective, you would instead ask: given the amount of W’ depleted, what is the probability of exhaustion?
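For reference, the deterministic skeleton t = W’/(P − CP) is a one-liner (the W’ and CP values below are hypothetical, just for illustration):

```python
def time_to_exhaustion(w_prime_j, power_w, cp_w):
    """Deterministic critical-power model: seconds until W' is fully
    depleted when riding at a constant power above CP."""
    if power_w <= cp_w:
        return float("inf")  # at or below CP, W' is never depleted
    return w_prime_j / (power_w - cp_w)

# Hypothetical rider: W' = 20 kJ, CP = 300 W, riding at 350 W
t = time_to_exhaustion(20_000, 350, 300)
print(t)  # 20000 J / 50 W = 400 s
```

The model answers "when" with a single number and no notion of uncertainty, which is exactly the limitation discussed next.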
For performance modelling, I think the probabilistic perspective will ultimately prove more useful, both because of the techniques it opens up mathematically and because it is conceptually more useful.
As a quick example, consider how a stage hunter and a GC rider approach a final climb. In a deterministic view of the world you might say that, given W’ and CP, this climb can be done at 5.9 W/kg. This might work reasonably well for someone hunting stage wins, since if they try it enough times they will hit the top about as fast as they usually can; if they happen to explode before the top of the climb, no major loss, they try again tomorrow. For the GC rider, on the other hand, cracking on one climb is potentially the end of their race.
Now looking at it from a probabilistic perspective, one can say that at 5.8 W/kg the chance of premature exhaustion might only be 10%, at 5.9 W/kg 20%, at 6.0 W/kg 50%, and so on. In this case, it is clear why a GC rider would be taking a huge risk trying to follow an attack at 6.0 W/kg. If they were going for a stage win, by contrast, it would make perfect sense to ride at 6.0 W/kg, since only winning matters and how much time they lose if they crack does not.
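One simple way to get numbers like those above is to treat the W’ cost of the climb as uncertain. This sketch assumes a normally distributed cost with made-up means and spread (nothing here is fitted to real data):

```python
import math

def p_exhaustion(w_req_mean_j, w_req_sd_j, w_prime_j):
    """P(W' required exceeds W' available), assuming the requirement
    is normally distributed. All numbers are hypothetical."""
    z = (w_prime_j - w_req_mean_j) / w_req_sd_j
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))  # 1 - normal CDF

# Hypothetical climb: W' cost grows with pace; rider has W' = 20 kJ
for pace, cost_j in [(5.8, 17_000), (5.9, 18_500), (6.0, 20_000)]:
    print(f"{pace} W/kg -> P(crack) = {p_exhaustion(cost_j, 2_000, 20_000):.2f}")
```

At a pace whose expected cost exactly equals the available W’, the crack probability is 50% by construction; the deterministic model would call that pace "exactly sustainable."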
Possibly more important, though, is that thinking in terms of probability opens up a lot of interesting possibilities to use Bayesian, hidden Markov, state-space, and dynamical-systems methods.
For example, the figure above shows cumulative distribution plots of the probability of exhaustion (remaining W’ < 0.01·W’) starting at W’ and decreasing W’ by 0.075 at each time step (the deterministic skeleton), following the documentation for the MARSS package.
The black circles are the true probability of exhaustion generated by the deterministic skeleton. Note, however, that the probability is a sigmoid rather than a square wave because of the inclusion of uncertainty in knowing the true state of W’.
Simulations are then run by creating data with the skeleton and including both random process noise as well as random observation noise.
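A Monte Carlo version of that simulation can be sketched quickly (the process-noise scale below is illustrative, not the value from the MARSS user guide; observation noise enters later, at the fitting stage):

```python
import numpy as np

rng = np.random.default_rng(1)

# Deterministic skeleton from the text: W' (normalized to 1.0) decreases
# by 0.075 per step; "exhaustion" when remaining W' < 0.01.
def exhaustion_cdf(n_sims=2000, n_steps=40, drift=0.075, proc_sd=0.02):
    w = np.ones(n_sims)
    alive = np.ones(n_sims, dtype=bool)
    cdf = np.empty(n_steps)
    for t in range(n_steps):
        w = w - drift + rng.normal(scale=proc_sd, size=n_sims)
        alive &= w >= 0.01            # first passage below the threshold
        cdf[t] = 1.0 - alive.mean()   # fraction exhausted by step t
    return cdf

cdf = exhaustion_cdf()
# Deterministic crossing time is (1.0 - 0.01) / 0.075 ≈ 13 steps, so the
# empirical CDF rises from ~0 to ~1 around there as a sigmoid.
```

The sigmoid shape comes entirely from the process noise: the more noise, the wider the spread of first-passage times around the deterministic crossing.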
The green line is the result of fitting the data using the Dennis method, which assumes no observation noise (effectively what we have been doing when treating power meter or ergometer data as an observation of truth), placing all of the noise estimate into process, or model, noise. The red line is the fit using a Kalman filter, where both process and observation noise are allowed.
In the upper left-hand corner is the average from 9 simulations, and the other panels show the individual fits. As you can see (since I left in a lot of noise), some of the individual fits crash and burn a bit. In the averaged plot, the Kalman filter fit regresses to the truth, as it is an unbiased estimate. The Dennis method, which places all error onto the process, does not, as it introduces bias.
One of the insights that comes out of these simulations is that the effective application of any model depends not just on how much noise the model process introduces but ALSO on how much noise the observation introduces.
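That effect can be demonstrated with a minimal scalar Kalman filter. This is not MARSS's EM machinery: the drift and noise variances are assumed known here (MARSS would estimate them by maximum likelihood), and the noise scales are made up. It is purely a filtering demo on the same random-walk-with-drift skeleton:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate the skeleton with both noise sources: the true state drifts
# down by 0.075 with process noise and is seen through a noisy
# observation (e.g. a power-meter-derived W' estimate).
n, drift, q_sd, r_sd = 200, -0.075, 0.02, 0.05
w = np.empty(n)
w[0] = 1.0
for t in range(1, n):
    w[t] = w[t - 1] + drift + rng.normal(scale=q_sd)   # process noise
y = w + rng.normal(scale=r_sd, size=n)                 # observation noise

# Minimal scalar Kalman filter with known drift and noise variances
def kalman_1d(y, drift, q_var, r_var, x0=1.0, p0=1.0):
    x, p, est = x0, p0, []
    for obs in y:
        x, p = x + drift, p + q_var            # predict
        k = p / (p + r_var)                    # Kalman gain
        x, p = x + k * (obs - x), (1 - k) * p  # update with observation
        est.append(x)
    return np.array(est)

est = kalman_1d(y, drift, q_sd**2, r_sd**2)
rmse_raw = np.sqrt(np.mean((y - w) ** 2))   # observations taken as truth
rmse_kf = np.sqrt(np.mean((est - w) ** 2))  # filtered state estimate
print(f"RMSE, raw observations: {rmse_raw:.3f}; Kalman filter: {rmse_kf:.3f}")
```

Because the filter splits the noise budget between process and observation instead of trusting the meter outright, its state estimate tracks the truth more closely than the raw observations do, which is the Dennis-versus-Kalman contrast in miniature.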