critical value
|
The critical value is a threshold that determines the boundary for rejecting the null hypothesis (H0) in a hypothesis test. It is a point on the probability distribution of the test statistic beyond which the null hypothesis is rejected. The critical value is chosen based on the significance level (α) of the test, which represents the probability of making a Type I error (i.e., rejecting a true null hypothesis).
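As an illustration (not part of the definition), the critical value for a two-sided z-test at α = 0.05 can be computed from the inverse CDF of the standard normal distribution using only Python's standard library; the value of α here is an assumed example:

```python
from statistics import NormalDist

# For a two-sided z-test at significance level alpha, the critical value
# is the point beyond which the null hypothesis is rejected.
alpha = 0.05  # assumed significance level for illustration
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # upper-tail critical value
print(round(z_crit, 3))  # → 1.96
```

The familiar value 1.96 is simply the 97.5th percentile of the standard normal distribution, since a two-sided test splits α across both tails.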
|
hypothesis
|
In statistics, a hypothesis is a formal statement about a population parameter or a relationship between variables. Hypotheses guide statistical tests, which determine whether the data support or refute them. The hypothesis suggesting an effect or difference is called the alternative hypothesis; it is always paired with a null hypothesis (H0), which states that there is no effect or difference.
|
p-value
|
The p-value indicates the probability of obtaining a result equal to, or more extreme than, the observed value, assuming the null hypothesis is true. Common thresholds for significance are 0.05, 0.01, and 0.001. A smaller p-value suggests stronger evidence against the null hypothesis. The p-value is denoted as p.
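As a sketch, a two-sided p-value for a z statistic can be computed from the standard normal CDF; the observed value 2.1 below is a hypothetical example, not from the text:

```python
from statistics import NormalDist

z_obs = 2.1  # hypothetical observed z statistic
# Two-sided p-value: probability of a result at least this extreme
# under the null hypothesis.
p = 2 * (1 - NormalDist().cdf(abs(z_obs)))
```

For this example p falls just below 0.05, so the result would be called significant at the 0.05 threshold but not at 0.01.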
|
significance test
|
A significance test is a statistical method used to determine whether observed data provide enough evidence to reject a null hypothesis. It calculates the probability of observing data as extreme as, or more extreme than, the actual sample results, assuming the null hypothesis is true.
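A minimal sketch of a complete significance test, assuming a one-sample two-sided z-test with known population standard deviation; the sample, mu0, and sigma are invented illustrative values:

```python
from statistics import NormalDist, mean
from math import sqrt

# Illustrative data: H0 says the population mean is mu0 = 100,
# with known population standard deviation sigma = 15.
sample = [102, 110, 98, 105, 112, 101, 99, 108, 104, 107]
mu0, sigma = 100, 15
n = len(sample)

z = (mean(sample) - mu0) / (sigma / sqrt(n))  # test statistic
p = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value

alpha = 0.05
reject = p < alpha
```

With this particular sample the p-value is roughly 0.33, so the null hypothesis is not rejected at α = 0.05.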
|
test statistic
|
A test statistic is a value calculated from the sample data that is used to decide whether to reject the null hypothesis (H0) in a hypothesis test. It quantifies the degree to which the observed data diverges from what is expected under the null hypothesis. In a t-test, the test statistic is a t-value, which measures the distance between the sample mean and the (hypothesised) population mean, expressed in units of standard errors.
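The t-value described above can be sketched directly from its definition; the sample and hypothesised mean below are assumed example values:

```python
from statistics import mean, stdev
from math import sqrt

# Illustrative sample; hypothesised population mean mu0 = 5.0.
sample = [5.1, 4.9, 5.3, 5.2, 4.8, 5.4]
mu0 = 5.0
n = len(sample)

# Distance between sample mean and hypothesised mean,
# expressed in units of the standard error (stdev / sqrt(n)).
t = (mean(sample) - mu0) / (stdev(sample) / sqrt(n))
```

A larger |t| means the sample mean lies further from the hypothesised mean, relative to the sampling variability, and thus gives stronger evidence against H0.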
|
Type I Error
|
A Type I Error occurs when a null hypothesis (H0) that is actually true is incorrectly rejected. It is also known as a false positive error, as it suggests that an effect or difference exists when, in fact, it does not. The probability of committing a Type I Error is given by the significance level (α) of the test, which is typically set before conducting the test (e.g. α = 0.05). At α = 0.05, there is a 5% chance of rejecting a true null hypothesis.
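The claim that α equals the Type I Error rate can be checked with a small Monte Carlo sketch: repeatedly sample under a true null hypothesis and count how often the test rejects. All parameter values here are assumed for illustration:

```python
import random
from statistics import NormalDist, mean
from math import sqrt

random.seed(1)
alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)
n, sigma, mu0 = 30, 1.0, 0.0  # H0 is true: samples really have mean mu0

trials, rejections = 2000, 0
for _ in range(trials):
    sample = [random.gauss(mu0, sigma) for _ in range(n)]
    z = (mean(sample) - mu0) / (sigma / sqrt(n))
    if abs(z) > z_crit:  # false positive: H0 is true but rejected
        rejections += 1

rate = rejections / trials  # empirical Type I Error rate, close to alpha
```

Over many trials the rejection rate converges to α, which is exactly what the significance level promises.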
|
Type II Error
|
A Type II Error occurs when a null hypothesis (H0) that is actually false is incorrectly accepted (or not rejected). It is also known as a false negative error, as it suggests that no effect or difference exists when, in fact, there is one.
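For a two-sided z-test with known standard deviation, the Type II Error rate (β) at a specific true mean can be computed analytically as the probability that the test statistic lands inside the acceptance region; the values of α, σ, n, mu0, and mu1 below are assumed examples:

```python
from statistics import NormalDist
from math import sqrt

alpha, sigma, n = 0.05, 1.0, 25
mu0, mu1 = 0.0, 0.4  # H0 mean vs. the actual (different) true mean

nd = NormalDist()
z_crit = nd.inv_cdf(1 - alpha / 2)
shift = (mu1 - mu0) * sqrt(n) / sigma  # true effect in standard-error units

# beta: probability the statistic stays inside [-z_crit, z_crit]
# even though the true mean is mu1 (a false negative).
beta = nd.cdf(z_crit - shift) - nd.cdf(-z_crit - shift)
power = 1 - beta  # probability of correctly rejecting H0
```

Here β is roughly 0.48, so even with a real effect the test misses it about half the time; increasing the sample size n shrinks β and raises the power.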
|