
1.13:

Significance Testing: Overview

JoVE Core
Analytical Chemistry

Is the difference between the two values due to an unexplainable random error or a systematic error that can be rationalized by a hypothetical model?

A significance test is a statistical analysis used to determine whether the difference between two values can be explained by indeterminate errors.

The null hypothesis assumes that the values compared are the same and any difference stems from indeterminate errors. The alternate hypothesis states that the difference must be real and cannot be explained by indeterminate errors.

A significance level, denoted by α, sets the confidence level used to judge the validity of the null hypothesis. The null hypothesis is rejected when a value falls outside the corresponding confidence interval.

The significance test is called one-tailed if rejection occurs for values at only one end of the normal distribution curve. In two-tailed significance tests, the rejection can occur for values falling at either end of the normal distribution curve.


Significance testing is a set of statistical methods used to test whether a claim about a parameter is valid. In analytical chemistry, significance testing is used primarily to determine whether the difference between two values comes from determinate or random errors. A change in the measurement protocol, the analyst, or the sample itself can cause a deviation from the expected result. When a deviation or outlier is suspected, we need to confirm mathematically that it comes from a determinate source before the observation can be justifiably omitted from the analysis.

Two hypotheses are used as criteria for significance testing. The null hypothesis (H0) states that the values being compared do not differ from each other significantly. In other words, if any difference exists between two values, it is ascribed to an indeterminate error. The alternate hypothesis (HA) states that the compared values are not equal, and the difference is more significant than can be explained by indeterminate error.

Before the test is performed, the hypotheses need to be stated, and a significance level (α) needs to be set. The test statistic, based on the sample mean and standard deviation, is then calculated and compared to the tabulated critical value for the chosen significance level and for a one- or two-tailed test. If the calculated test statistic exceeds the critical value, the null hypothesis is rejected, and we state that the difference between the two values cannot be explained by random, indeterminate error.
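The procedure above can be sketched in a few lines of Python. The replicate measurements, the accepted value, and the tabulated critical value here are all illustrative assumptions, not data from the text; the critical value 2.571 is the standard two-tailed t-table entry for ν = 5 degrees of freedom at α = 0.05.

```python
import math
import statistics

# Hypothetical replicate results (e.g., % purity) and an accepted reference
# value; these numbers are illustrative, not from the text.
measurements = [98.9, 99.2, 99.5, 98.8, 99.1, 99.0]
mu = 99.5  # accepted (expected) value

n = len(measurements)
x_bar = statistics.mean(measurements)
s = statistics.stdev(measurements)  # sample standard deviation

# Test statistic for comparing a sample mean with an accepted value
t_calc = (x_bar - mu) / (s / math.sqrt(n))

# Two-tailed critical value from a standard t table (nu = 5, alpha = 0.05)
t_crit = 2.571

# Reject the null hypothesis if the statistic exceeds the critical value
reject_null = abs(t_calc) > t_crit
```

With these assumed data the calculated |t| exceeds the critical value, so the null hypothesis would be rejected and the difference judged significant.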

In one-tailed significance testing, the alternative hypothesis can specify that the observed value is either higher or lower than the expected value, but not both. In two-tailed significance testing, the alternative hypothesis can simply state that the observed value is not equal to the expected value, with no regard to the direction.
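The practical consequence of the one- versus two-tailed choice is a different critical value at the same α. The following sketch uses hypothetical numbers and standard t-table entries for ν = 5 at α = 0.05 (2.015 one-tailed, 2.571 two-tailed) to show that the same calculated statistic can be significant in a one-tailed test but not in a two-tailed one.

```python
# Hypothetical calculated t statistic and standard t-table critical values
t_calc = 2.30        # illustrative value, not from the text
t_crit_one = 2.015   # one-tailed critical value, nu = 5, alpha = 0.05
t_crit_two = 2.571   # two-tailed critical value, nu = 5, alpha = 0.05

# One-tailed: H_A specifies the direction (here, observed value is higher)
reject_one_tailed = t_calc > t_crit_one

# Two-tailed: H_A only states the values are not equal, either direction
reject_two_tailed = abs(t_calc) > t_crit_two
```

Here the one-tailed test rejects the null hypothesis while the two-tailed test does not, which is why the choice of tails must be made before the test is run.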

Significance testing can be used on different statistical parameters of one or more data sets. Tests are given different names depending on the parameters or purpose. Significance testing is frequently applied to compare an observed value with the mean or compare two means from two different data sets. These tests are known as t-tests. Significance tests can also be performed on the variance of two data sets. In this case, the test is known as an F-test. If a significance test is used to identify outliers, the test is called a Q-test.
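As one concrete example of these named tests, Dixon's Q-test for outliers reduces to a single ratio: Q = gap / range, where the gap is the distance between the suspect point and its nearest neighbor. The sketch below uses hypothetical data; the critical value 0.710 is the commonly tabulated Q value for n = 5 at the 95% confidence level.

```python
def q_test(values, q_crit):
    """Dixon's Q-test for a single suspected outlier at either end.

    Q = gap / range; the suspect point is rejected if Q > q_crit.
    Returns (suspect value, Q, reject?).
    """
    v = sorted(values)
    gap_low = v[1] - v[0]     # gap if the lowest value is the suspect
    gap_high = v[-1] - v[-2]  # gap if the highest value is the suspect
    rng = v[-1] - v[0]
    if gap_low >= gap_high:
        suspect, q = v[0], gap_low / rng
    else:
        suspect, q = v[-1], gap_high / rng
    return suspect, q, q > q_crit

# Hypothetical replicate data with one suspiciously low result;
# 0.710 is the tabulated Q_crit for n = 5 at 95% confidence.
suspect, q, reject = q_test([0.189, 0.167, 0.187, 0.183, 0.186], 0.710)
```

For these assumed data Q slightly exceeds the critical value, so the low result would be rejected as an outlier at the 95% confidence level.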