S3_Modelling_and_Simulations_Validity

Definition/Theory (Class Summary)
The validity of a model is closely related to its accuracy and reliability: how good are the model's predictions, and how accurately does it represent the real world? Because models are so important, especially in the field of prediction, they need to be tested to determine how valid they are.

When thinking about a model's validity, we first need to consider what the model represents. A model built to represent climate change in Africa will not necessarily work if we use it to study changes in Europe. Models are so complex that most of the time they are built to represent constrained real-life scenarios, and if we take them outside those constraints they do not necessarily behave correctly. Beyond that, we also need to check the model's accuracy against what happens in real life before we can speak of its validity: the more closely the model resembles the real-life scenario, the more valid it becomes.

We must also always take the GIGO effect into account. Models work on data, so if the input data is incorrect (Garbage In), the output will also be incorrect (Garbage Out). This GIGO (Garbage In, Garbage Out) effect occurs in most processes involving computers, as they have no common sense with which to decide whether a piece of information should be used or not.
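One way to make the accuracy check concrete is to compare the model's predictions against real observations and compute an error measure. This is a minimal sketch: the toy linear model, its parameters, and the "observed" numbers are all made up for illustration, not taken from any real study.

```python
import math

# Hypothetical toy model: predicts a quantity as a linear trend in time.
# The model form and parameter values here are illustrative assumptions.
def model(t, slope=1.5, intercept=10.0):
    return intercept + slope * t

# Made-up "real-world" observations for the same time points.
observed = [10.2, 11.4, 13.1, 14.3, 16.2]
predicted = [model(t) for t in range(len(observed))]

# Root-mean-square error: the smaller it is, the closer the model
# tracks reality, and the stronger the case for its validity.
rmse = math.sqrt(
    sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)
)
print(round(rmse, 3))
```

In practice the error threshold that counts as "valid enough" depends on what the model will be used for, which is again a question for subject-matter experts.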

For all of this there are some specific techniques. First, you need to check the model's assumptions, which is best done together with subject-matter experts. Modelling disciplines such as System Dynamics also help a lot by representing assumptions in a visual diagram that can easily be verified.

The model's behaviour also needs to be checked so that the results match the expected behaviour. To do this, programmers use the following techniques:
 * Extreme condition testing: run the simulation with parameters set to extreme levels.
 * Sensitivity testing: run the simulation multiple times, varying each parameter a bit higher and a bit lower, looking for parameters that cause the results to change significantly.
 * Calibration and optimization: use automated tools that apply algorithms such as hill climbing or genetic algorithms to adjust each parameter until the result matches a predetermined value or time series.
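The sensitivity-testing step above can be sketched as follows. The `simulate` function is a made-up placeholder standing in for a real simulation run, and the parameter names and ±10% perturbation size are illustrative assumptions.

```python
# Sensitivity testing sketch: nudge each parameter a bit lower and a bit
# higher, and see which ones move the result the most.
def simulate(params):
    # Toy stand-in "simulation": depends strongly on 'growth', weakly on 'noise'.
    return params["base"] + 100 * params["growth"] + 0.1 * params["noise"]

baseline = {"base": 50.0, "growth": 0.8, "noise": 5.0}
base_result = simulate(baseline)

sensitivity = {}
for name in baseline:
    for factor in (0.9, 1.1):  # a bit lower, a bit higher
        perturbed = dict(baseline, **{name: baseline[name] * factor})
        change = abs(simulate(perturbed) - base_result)
        sensitivity[name] = max(sensitivity.get(name, 0.0), change)

# Parameters ranked by how much a 10% change shifts the output:
# the top entries are the ones whose assumptions deserve the closest scrutiny.
for name, change in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(name, round(change, 2))
```

Parameters that barely move the output can often be left rough; the highly sensitive ones are where validation effort pays off.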
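Calibration by hill climbing can likewise be sketched in a few lines: propose a small random change to one parameter and keep it only if it brings the model output closer to the target. The model, target value, and step size below are all illustrative assumptions, not a real calibration tool.

```python
import random

# Hill-climbing calibration sketch against a single target value.
def simulate(params):
    # Toy placeholder model; a real tool would run the full simulation here.
    return params["a"] * 2.0 + params["b"]

def error(params, target):
    return abs(simulate(params) - target)

random.seed(42)  # fixed seed so the sketch is reproducible
target = 10.0
params = {"a": 0.0, "b": 0.0}

for _ in range(2000):
    name = random.choice(list(params))
    step = random.uniform(-0.5, 0.5)
    candidate = dict(params, **{name: params[name] + step})
    if error(candidate, target) < error(params, target):
        params = candidate  # keep the change only if it improves the fit

print(round(error(params, target), 3))  # should end up close to 0
```

Genetic algorithms follow the same "propose and keep the better fit" idea but maintain a whole population of candidate parameter sets instead of a single one.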

(Determining the Validity of Simulation Models; Anonymous; http://forio.com/resources/article/validity/)