regressive model


Some week this was: the regressed Final Fantasy XV was featured on over 60 gaming sites around the world, IGN and Kotaku among them (talk about 15 minutes of fame!). But back to business: I may go back and make box art for notable regressions that I never made any for, between newer model/commission postings.

  • Final Fantasy XV (here for posts and descriptions)
  • Mirror’s Edge 64 (here)
  • The Laugh of Us (here)
  • Ice Climber 64 (here)
  • Tomb Raider (here)

Be sure to check out the “demake” tag or any of the notable tags in the description section. Thanks again!


P.S. Is there anything that I have already regressed that you would like to see given the box art treatment, like Kingdom Hearts or Assassin’s Creed, to name a few?


Ice Climbers 64, Nintendo 64, Cancelled

Other Smash Bros Characters: Mario, Link, Wii Fit Trainer, Villager, Rosalina, Little Mac, Samus, Ice Climbers

Fake Description: The first 3D Ice Climbers game, Ice Climber 64, was to be a dramatic reimagining/reboot of the series. The player would control Nana or Popo, with the AI controlling the other. The goal of the game is simple: get to the top of the mountain. It was to be very simple and focused, and most of all, completely devoid of time-wasting filler content.

The journey up the mountain takes place in real time, meaning there were no time skips or gaps throughout the climb. The Climbers would have to contend with low visibility, unstable footing, deep chasms, and mysterious eggplants that cause disorienting hallucinations resulting in both of them meeting a tragic end, their ghosts continuing to climb the seemingly never-ending mountain… I mean, and Nana’s optional photo-ops!

-The Regressor

P.S. I have way too much time to think about potential game ideas at work ><
Accepting commissions: starting at $15! More info here
Software used: Autodesk Maya 2009, Photoshop CS4, After Effects CS4
Made while watching: The Clone Wars, Scooby Doo Mystery Inc., Cosmos, The Simpsons, Family Guy, American Dad.


Amy Rose

Sonic, Shadow. Other Nintendo/Smash Bros Characters

I know I said I would do Tails this week, but there was no time to model/rig a fox, so here is a quick Amy. The actual Amy looks even less like a hedgehog than Sonic does, if that’s possible lol.

-The Regressor
Accepting commissions: starting at $15! More info here
Software used: Autodesk Maya 2009, Photoshop CS4, After Effects CS4
Made while watching: Ben 10, Brynhildr in the Darkness


Sonic the Hedgehog: Sonic 64, Nintendo 64, Cancelled

Other Smash Bros Characters: Mario, Link, Wii Fit Trainer, Villager, Rosalina, Little Mac, Samus, Ice Climbers, Greninja, Maroizard, Mega Man

Back in the Nintendo 64 era, it was preposterous to even THINK of Sonic and Mario gracing the same console, and here we are several Olympic games later, now prepping for their 2nd Smash bout >:]

-The Regressor

P.S. How does Sonic even remotely resemble an actual hedgehog? He is so stylized that he could easily have been a made-up creature lol.

Accepting commissions: starting at $15! More info here
Software used: Autodesk Maya 2009, Photoshop CS4, After Effects CS4
Made while watching: X-Files


Shadow the Hedgehog: Sonic 64, Nintendo 64, Cancelled

Other Smash Bros posts: Mario, Link, Wii Fit Trainer, Villager, Rosalina, Little Mac, Samus, Ice Climbers, Greninja, Maroizard, Mega Man, Sonic

I’d really like a Shadow color swap for Sonic in the new Smash Bros; honestly, all of Sonic’s Brawl colors were too similar to one another. Will probably do Tails next, sometime next week.

-The Regressor
Accepting commissions: starting at $15! More info here
Software used: Autodesk Maya 2009, Photoshop CS4, After Effects CS4
Made while watching: Warehouse 13


You talking sh!t Samus?

Smash Bros. tags: Mario, Link, Wii Fit Trainer, Villager, Rosalina, Little Mac, Samus

Sam should know this, being a 4-time veteran: size means NOTHING in the world of SMASH :P

-The Regressor

And yes, I made this Samus so that she and her suit share the same proportions/rig, because that is one thing that has always irked me about her in the Prime games: there is no way the same body could have those proportions inside the Power Suit. It just never made sense to me spatially. Rant over…
Accepting commissions: starting at $15! More info here
Software used: Autodesk Maya 2009, Photoshop CS4, After Effects CS4
Made while watching: Another, Batman Beyond, X Files


Stephen Hawking has crunched the numbers, here’s what England has to do to win the World Cup

Using a general logistic regression model, the 72-year-old theoretical physicist analyzed the 45 matches England has played in the tournament since its first and only World Cup triumph in 1966. He also examined the 204 penalties taken in shootouts, in which England has famously come up short on the international stage over the years.

Read more | Follow policymic

The predicted value in a linear regression model is designated ŷ, pronounced ‘y hat’, and if you think you don’t like math, remember that mathematicians are the kind of people who go, “yeah, it’s like a y with a hat on it” and make that a technical term. How can you not want to be one of those people?


Mirror’s Edge 64

Fake Description: “Mirror's Edge 64 was to be a revolutionary first-person game well ahead of its time. Taking advantage of the awkward 64 controller, the developers intended to give players the most realistic immersion possible on the 64. The control set-up allowed for independent control of not only the head and body, but also manual aiming of Faith’s eyes for a greater range of view.

The character model of Faith was an achievement in itself: she had real-time articulated and animated fingers! Sadly, due to too many technical and financial constraints, Mirror’s Edge 64 was scrapped.”

Personally, I’d love to play a game like Mirror’s Edge in stereoscopic 3D, or even better, on the Oculus Rift XD

Software used ~Maya, Photoshop, After Effects~
Commissions starting at $20! More info page here, post here

So I got bored at work today (yay, both bosses being on vacation) and decided to do some statistics with my blog. I did a little bit of this in the past when the boob sweater meme was going around, but I hadn’t done anything for a while.

So I took my last 500 or so posts and counted the notes on each, whether cutegirlsdoingcutethings reblogged it, whether the content was Owari no Seraph, whether it was Kancolle, and whether it was Love Live. From there I imported the data set into R and ran a linear regression on it. The results were quite interesting. I’m only going to show the interesting results and not the whole iterative process with the model, since I can only upload so many pictures.

The initial model with just Kancolle as an explanatory variable surprised me. 

So, a little on how to read this output if you aren’t familiar with regression analysis. The Estimate is the estimated marginal effect the variable has on the dependent variable. In this case the estimate for Kancolle is -20.147, meaning that if a post is tagged as Kancolle, I can expect it to have about 20 fewer notes than a post that is not tagged as Kancolle. Now this seems mighty interesting. Does this mean I should not post Kancolle if I want those sweet, sweet notes?

No, it doesn’t. The main reason is that by using a t-test, or looking at the Pr(>|t|) column, we can see that this value is only significant at the 29% level. This means there is a 29% chance of making a Type I error, or in simpler terms, a 29% chance of rejecting a true null hypothesis. In this case our null hypothesis is that the coefficient for Kancolle is equal to zero, and our alternative hypothesis is that it isn’t. In standard statistical practice, 5% significance is generally used. So my conclusion from this model is that it doesn’t really tell us anything useful at all. The R squared is also somewhat worrying. In general it’s not good to use the R squared as your only indicator of goodness of fit, but it is a good indicator of how much of the variance is explained by your model; that is exactly what the R squared tells you. Here the R squared has a value of .002029, which means an incredibly small amount of the variance is explained. This is worrisome, but we have already concluded that this model is mostly useless.
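To make the "Estimate" and "R squared" ideas concrete, here is a minimal sketch in Python/numpy (rather than the R used above), with entirely invented note counts standing in for my real data. For a regression on a single 0/1 dummy, the estimated coefficient is exactly the difference between the two group means:

```python
import numpy as np

# Toy stand-in for the blog's data: note counts for 20 posts,
# with a 0/1 dummy marking whether a post is tagged Kancolle.
# (Numbers are invented for illustration, not the real data.)
rng = np.random.default_rng(0)
kancolle = np.array([0] * 10 + [1] * 10)
notes = np.where(kancolle == 1, 100.0, 120.0) + rng.normal(0, 30, 20)

# Design matrix with an intercept column, fitted by ordinary least squares.
X = np.column_stack([np.ones_like(kancolle, dtype=float), kancolle])
beta, *_ = np.linalg.lstsq(X, notes, rcond=None)
intercept, estimate = beta  # estimate = marginal effect of the dummy

# For a single dummy, the OLS estimate is just the difference in group means.
diff_in_means = notes[kancolle == 1].mean() - notes[kancolle == 0].mean()

# R squared: the share of the variance in notes the model explains.
resid = notes - X @ beta
r_squared = 1 - resid.var() / notes.var()

print(round(estimate, 3), round(diff_in_means, 3), round(r_squared, 4))
```

The t-test column in the R output then asks whether that estimate is far enough from zero relative to its standard error, which is exactly what the Kancolle dummy fails.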

To save room for some pictures: in the single-variable models, CGDCT had an estimate of 334 with a Pr(>|t|) of basically 0, Owari had an estimate of 80 with a Pr(>|t|) of .0619, and Love Live had an estimate of 34 with a Pr(>|t|) of .0436. The naive model had an intercept of 122, meaning that if nothing was specified we would expect a post to have 122 notes on average.

Now this is fine and all, but multivariate regression is where it’s at. So, smashing them all together, we get

The interesting thing here is that Kancolle’s estimate is not significantly different from zero. So we can take it out of the model and see how things look.

Now that is much better. All of our variables are happily below the 5% significance level, and even below 3%, which is really quite good. It’s not very often that that actually happens. We can see that a reblog from CGDCT is worth 335 notes, a post with Owari no Seraph content is worth 106 extra notes, and Love Live posts are worth a modest 33 extra notes. Very interesting.
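The multivariate fit above can be sketched the same way: one design matrix with an intercept plus all the dummies at once. This Python/numpy version uses simulated data whose "true" effects are set to the coefficients reported above (335, 106, 33), so it only illustrates the mechanics, not the real data:

```python
import numpy as np

# Hypothetical per-post data: dummies for a CGDCT reblog, Owari no Seraph
# content, and Love Live content, plus simulated note counts.
rng = np.random.default_rng(1)
n = 400
cgdct = rng.integers(0, 2, n)
owari = rng.integers(0, 2, n)
lovelive = rng.integers(0, 2, n)
notes = 120 + 335 * cgdct + 106 * owari + 33 * lovelive + rng.normal(0, 80, n)

# Multivariate OLS: one intercept plus all three explanatory dummies.
X = np.column_stack([np.ones(n), cgdct, owari, lovelive])
beta, *_ = np.linalg.lstsq(X, notes, rcond=None)
print(np.round(beta, 1))  # [intercept, CGDCT, Owari, Love Live] effects
```

With enough posts, the recovered coefficients land close to the effects baked into the simulation, which is the multivariate analogue of reading the Estimate column.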

So it seems everything is all good and the model is great, but we need to test a few things. How well does this model fit? First we can take a look at the R squared. It’s sitting at 19%, which is not great, but it’s also not terrible. Ideally we would want the R squared to be higher, since it would explain more of the variance, but that’s ok. You can have a really powerful, meaningful model that has a low R squared.

How about a likelihood ratio test?

That is quite the result. The quick and simple version of this test is that it checks whether your model is significantly different from a model with no explanatory variables, i.e. the naive model. Here we can see that there is a 2.2e-16 chance that we are making a mistake. To type that out, that is a 0.000000000000022% chance that our model is actually no different from the naive model. I’m willing to take those odds :p
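For linear models, the same "is the full model better than the naive one?" comparison can be done by hand with an F statistic on the residual sums of squares (the classical nested-model test, closely related to the likelihood ratio test R reports). A sketch in numpy with invented data:

```python
import numpy as np

# Hypothetical data in the same spirit as the post: notes vs three dummies.
rng = np.random.default_rng(2)
n = 300
x = rng.integers(0, 2, (n, 3))
notes = 120 + x @ np.array([335, 106, 33]) + rng.normal(0, 80, n)

# Residual sum of squares for the naive model (intercept only)...
rss_naive = np.sum((notes - notes.mean()) ** 2)

# ...and for the full model with all three explanatory variables.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, notes, rcond=None)
rss_full = np.sum((notes - X @ beta) ** 2)

# F statistic for the nested comparison: k = 3 added parameters.
k, df_resid = 3, n - 4
f_stat = ((rss_naive - rss_full) / k) / (rss_full / df_resid)
print(round(f_stat, 1))  # a large F means: reject "no explanatory power"
```

A huge F statistic maps to a vanishingly small p-value, which is what that 2.2e-16 in the R output is saying.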

Another good check for goodness of fit with linear regressions is looking at the residuals. A histogram of the residuals looks like

This is where we start to see that things might not be so great. Ideally you want your residuals to be tightly clustered around zero, because the residual is the difference between the observed value and the estimated value. We also want roughly the same number of residuals above zero as below zero; in other words, we want the residuals to be roughly normally distributed.

Here we can see that that is not quite the case. If you squint and make smaller bins, it kinda sorta looks like a normal distribution, but it’s a stretch. We can also see that there are a lot more residuals below zero than above zero. This means that our model is over-predicting. Why would this happen? Well, there are a lot of possible reasons. The main one is likely outliers. In this data set there were quite a few posts with over 500 notes and a few in the thousands, but for each one of those there were 30 more with fewer than 50 notes. The outliers skew the data and make it difficult for the model to predict accurately.
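The outlier effect described above is easy to reproduce: a handful of huge values drags the model's prediction up, leaving most residuals negative. A tiny numpy sketch with made-up note counts and the naive (mean-only) model:

```python
import numpy as np

# Hypothetical note counts: many small posts plus a few huge outliers,
# mimicking the skew described in the post (invented numbers).
rng = np.random.default_rng(3)
notes = np.concatenate([rng.integers(0, 50, 450),      # lots of small posts
                        rng.integers(500, 3000, 50)])  # a few outliers

# Residuals of the naive (mean-only) model: observed minus predicted.
resid = notes - notes.mean()

# The diagnostic described above: residuals should be balanced around zero.
below = int((resid < 0).sum())
above = int((resid > 0).sum())
print(below, above)  # the outliers pull the mean up, so most residuals are < 0
```

Here every one of the 450 small posts sits below the prediction and only the 50 outliers sit above it, which is exactly the lopsided histogram pattern.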

Another look at the residuals is

We can see a little better that a good chunk of our residuals are concentrated below zero. The distribution is actually fairly consistent across the board, but there are some outliers. In linear regression we want the variance of the error terms to be fairly constant; this is one of the classical assumptions, and it is being violated in this model. So we have a nice case of heteroskedasticity.

One of the main consequences of heteroskedasticity is that your standard errors will be inflated, and thus the significance of your variables will tend to look lower than it ought to be. In this case the variables had very clearly defined effects on the number of notes a post got, so there wasn’t too much of a problem.

But you are probably wondering: how do you deal with this problem? The best way is to respecify your model. All the other things you can do are kind of like band-aids for your model: they help mask the problem, but they aren’t solving the underlying issues. You can use studentized residuals (residuals divided by their standard error), robust standard errors, or you can run a weighted least squares (WLS) regression and hope that it works. The first two give you better estimates of the significance of your variables because they account for the heteroskedasticity. The WLS method can be quite effective, but it is quite difficult to accurately assign weights to your observations. It is a rather advanced technique that I don’t have much experience with, so I’m not going to mess with it.
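As a rough illustration of the WLS idea (in Python/numpy rather than R, with entirely made-up data, and cheating by treating the true weights as known, which is exactly the hard part mentioned above): scale each observation by the square root of its weight, then run ordinary least squares on the transformed system.

```python
import numpy as np

# Invented heteroskedastic data: noise grows with x, as in the post's
# residual plots where big posts have big errors.
rng = np.random.default_rng(4)
n = 200
x = rng.uniform(0, 10, n)
y = 5 + 3 * x + rng.normal(0, 1 + x, n)  # error sd increases with x

X = np.column_stack([np.ones(n), x])

# Plain OLS for comparison.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# WLS: down-weight the noisy observations via inverse-variance weights,
# then solve the rescaled least-squares problem.
w = 1.0 / (1 + x) ** 2
sw = np.sqrt(w)[:, None]
beta_wls, *_ = np.linalg.lstsq(X * sw, y * sw.ravel(), rcond=None)

print(np.round(beta_ols, 2), np.round(beta_wls, 2))
```

Both fits recover a slope near the true value of 3; the payoff of WLS is tighter standard errors, not a different unbiased estimate. In practice the variance structure (the weights) has to be guessed or modeled, which is why it gets called an advanced technique.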

All that aside, I’m fairly happy with this model as just something I did for funsies. I don’t want to put too much work into fixing it, so I’m just going to take it as it is. In the future, if I do more of this sort of thing, I will have much better defined data, so these sorts of problems should come up less.

Anyways I hope you enjoyed this somewhat lengthy post about statistics and stuff. I really like doing these sorts of things. 

But most importantly, all hail the glorious note fountain that is cutegirlsdoingcutethings reblogs. 


Moth commissioned by battlerager (Commission-0010)

The character Moth, created by lazymoth, is ready for another adventure after his turn in Fluttery Moth (go play it)! What awaits this cute, dauntless Moth next?

-The Regressor

Software used: Autodesk Maya 2009, Photoshop CS4, After Effects CS4
Made while watching: The X-Files, Man of Steel