The standard level of significance used to justify a claim of a statistically significant effect, for example a relationship between two variables, is 0.05. For better or worse, the term statistically significant has become synonymous with P ≤ 0.05. It was R.A. Fisher (1925) in his Statistical Methods for Research Workers (SMRW) who suggested giving 0.05 its special status. Page 44 of the 13th edition of SMRW, describing the standard normal distribution, states:
The value for which P=0.05, or 1 in 20, is 1.96 or nearly 2; it is convenient to take this point as a limit in judging whether a deviation ought to be considered significant or not. Deviations exceeding twice the standard deviation are thus formally regarded as significant. Using this criterion we should be led to follow up a false indication only once in 22 trials, even if the statistics were the only guide available. Small effects will still escape notice if the data are insufficiently numerous to bring them out, but no lowering of the standard of significance would meet this difficulty.
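Fisher's two figures can be checked directly against the standard normal distribution. The sketch below, using only Python's standard library, recovers both the 1.96 critical value for P = 0.05 and the "once in 22 trials" rate that follows from rounding the criterion up to two standard deviations (the variable names are illustrative, not from the source):

```python
# Verify the numbers Fisher cites for the standard normal distribution.
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, standard deviation 1

# Two-tailed critical value for P = 0.05: the deviation exceeded by
# chance only 1 time in 20.
crit = z.inv_cdf(1 - 0.05 / 2)
print(round(crit, 2))  # 1.96, "or nearly 2"

# Probability of a deviation exceeding twice the standard deviation,
# i.e. Fisher's rounded criterion of 2 rather than 1.96.
p_two_sd = 2 * (1 - z.cdf(2.0))
print(round(1 / p_two_sd))  # about 22: a false indication "once in 22 trials"
```

The slight mismatch between "1 in 20" and "1 in 22" is simply the effect of rounding 1.96 up to 2: the probability of exceeding two standard deviations is about 0.0455, or roughly 1/22, rather than exactly 0.05.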
But what does it really mean? A probability of 0.05 means that there is only a 5 percent chance that the data in your table occurred by chance alone; such a result is termed statistically significant. A probability of 0.05 or smaller therefore means you can be at least 95 percent certain that the relationship between your two variables did not occur by chance factors alone.