In a well-constructed study, researchers cannot simply conclude that an observed difference between two groups was caused by the manipulation of the independent variable. No matter how well a researcher designs the study, there is always some degree of error in the results. This error can arise from individual differences both within and between experimental groups, or from systematic differences within the researcher's sample. Irrespective of its source, this error acts as a kind of noise in the data: it affects participants' scores on study measures even though it is not the variable of interest.

Statistical significance testing is aimed at determining the probability that the observed result of a study was due to the influence of the independent variable rather than to chance. A result is statistically significant at a certain level; for example, a result might be significant at p < .05. Here p represents the probability that the result was due to chance, and .05 represents a 5% probability that the result was due to chance. Therefore, p < .05 means that inferential statistical analysis indicates less than a 5% probability that the observed result was produced by chance alone. The 5% cutoff is generally regarded as the standard for most scientific research. Note that it is theoretically impossible to be entirely certain that one's results are not due to chance, as the nature of science is one of falsification, not immutable proof.
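One way to make the logic of a p-value concrete is a permutation test: if the independent variable had no effect, group labels would be arbitrary, so shuffling them many times shows how often a difference as large as the observed one arises by chance alone. The sketch below uses only the Python standard library; the two sets of scores are hypothetical values invented for illustration, not data from any real study.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

# Hypothetical scores for two experimental groups (illustrative only)
group_a = [24, 27, 31, 29, 26, 30, 28, 25]   # e.g., treatment group
group_b = [21, 23, 25, 22, 24, 20, 26, 23]   # e.g., control group

observed_diff = statistics.mean(group_a) - statistics.mean(group_b)

def permutation_p_value(a, b, n_permutations=10_000):
    """Estimate a p-value by shuffling the pooled scores repeatedly.

    The p-value is the fraction of random relabelings whose group
    difference is at least as extreme as the one actually observed.
    """
    pooled = a + b
    observed = abs(statistics.mean(a) - statistics.mean(b))
    count = 0
    for _ in range(n_permutations):
        random.shuffle(pooled)
        new_a = pooled[:len(a)]
        new_b = pooled[len(a):]
        if abs(statistics.mean(new_a) - statistics.mean(new_b)) >= observed:
            count += 1
    return count / n_permutations

p = permutation_p_value(group_a, group_b)
print(f"observed difference: {observed_diff:.2f}")
print(f"estimated p-value:   {p:.4f}")
# A p-value below .05 would conventionally be called statistically significant.
```

Note that even a very small p-value does not prove the effect is real; it only says that chance alone would rarely produce a difference this large, which is exactly the hedged, falsification-based claim described above.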