Demystifying Analytics – Part 6 – Is the Model Any Good?

Dec 21, 2012

First and foremost, you must realise that no model is perfect! Next to getting stuck in 'analysis paralysis' in the exploring phase, the temptation to create a perfect model is the surest way to use up time with nothing to show for it.
 
What you want is a model that's better than what's currently being done. If there is no model in use, then you want your model to perform better than a reasonable guess, such as assuming each option is equally likely to occur, or predicting the average for a continuous target.
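As a concrete illustration, one simple check is to compare a model's accuracy against a "reasonable guess" baseline that always predicts the most common outcome. The data and predictions below are made up purely for illustration:

```python
# Hypothetical binary outcomes and model predictions (illustrative data only).
actual = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
model_preds = [1, 0, 0, 1, 1, 0, 0, 0, 0, 0]

def accuracy(preds, actual):
    """Fraction of predictions that match the actual outcome."""
    return sum(p == a for p, a in zip(preds, actual)) / len(actual)

# The "reasonable guess" baseline: always predict the most common class.
majority = max(set(actual), key=actual.count)
baseline_preds = [majority] * len(actual)

model_acc = accuracy(model_preds, actual)        # 0.8
baseline_acc = accuracy(baseline_preds, actual)  # 0.7
```

If the model's accuracy doesn't clear the baseline, it isn't adding value over doing nothing at all.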
 
You need to look at the performance measures on the validation data set as well as on the training data; however, base your decisions on the validation results. The most common performance measures include:

  • Cumulative lift chart – tells you how many times better it is to use the model compared to not using a model at all.
  • ROC chart – plots the true positive rate against the false positive rate. It contains a 45-degree "do nothing" (random chance) reference line to compare against.
  • Misclassification rate – used in decision-type predictive modelling (e.g. will a customer default?). The best model is the one with the smallest misclassification rate.
  • Average square error – used in predictive models with a numeric target (e.g. energy consumption). The best model is the one with the smallest average square error.
  • Kolmogorov-Smirnov statistic – also used in decision-type models. This time the best model is the one with the largest Kolmogorov-Smirnov statistic.
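A few of these measures are simple enough to compute by hand. The sketch below, using entirely made-up validation scores and labels, shows the misclassification rate, average square error, and the Kolmogorov-Smirnov statistic (the maximum separation between the cumulative score distributions of the two classes):

```python
# Hypothetical validation-set scores and labels (illustrative data only).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    0,   1,   0]

def misclassification_rate(labels, scores, threshold=0.5):
    """Fraction of cases the model classifies incorrectly at a cutoff."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)

def average_square_error(actual, predicted):
    """Mean of squared differences, for numeric-target models."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def ks_statistic(labels, scores):
    """Largest gap between the cumulative distributions of scores
    for the positive and negative classes (bigger is better)."""
    pos = sum(labels)
    neg = len(labels) - pos
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tpr = fpr = best = 0.0
    for i in order:
        if labels[i] == 1:
            tpr += 1 / pos
        else:
            fpr += 1 / neg
        best = max(best, tpr - fpr)
    return best

mis = misclassification_rate(labels, scores)          # 0.375
ks = ks_statistic(labels, scores)                     # 0.5
ase = average_square_error([10.0, 12.0, 9.0],
                           [11.0, 11.0, 9.5])         # 0.75
```

In practice a tool such as SAS Enterprise Miner or scikit-learn reports these for you; the point of the sketch is just to show what the numbers mean.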

 
However, no matter how good a model's performance measure is, it must align with the business outcome/objective. For example, if the objective requires the model to be easy to interpret and a neural network looks to be the best, then you'll need to look at the next best, as neural networks are difficult to interpret. The bonus of having the neural network as one of your model options is that you know how good the next best one is relative to the best model!
 
Next, run your model against the test data set. What you are checking for here is that the model works well on completely unseen data. Look at the performance measures again and compare them to the training and validation ones. You will see that the model's performance reduces from training to validation to test; however, the drop in performance between the validation and test data sets should be similar to the drop between the training and validation data sets.
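The check described above can be sketched in a few lines. The misclassification figures here are invented purely to illustrate a healthy pattern, where each step down shows a modest and roughly similar drop:

```python
# Hypothetical misclassification rates at each stage (illustrative numbers).
misclassification = {"train": 0.10, "validation": 0.13, "test": 0.16}

drop_train_to_valid = misclassification["validation"] - misclassification["train"]
drop_valid_to_test = misclassification["test"] - misclassification["validation"]

# If the validation-to-test drop is much larger than train-to-validation,
# the model has likely overfit to the data it was built on.
if drop_valid_to_test > 2 * drop_train_to_valid:
    print("Warning: performance degrades sharply on unseen data")
```

The factor of 2 is an arbitrary rule of thumb for the sketch; what matters is comparing the two drops rather than looking at any one figure in isolation.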
 
Unfortunately, if your chosen model doesn't perform well on the test data, or you are unable to find a model with good performance measures that aligns with the business objective, then it's back to the exploring stage!
 
And that’s it. Happy modelling…
