Introduction

In the previous section, ‘When to use an independent samples t-test and when to apply a paired samples test?’, we saw how to compare two groups/populations using t-tests and z-tests. However, these tests become impractical when we are dealing with more than two groups.

Why don't multiple t-tests work?

Suppose we want to compare the means of four groups. We could perform six different t-tests, one for each pair of groups. But then Alpha, the chance of a Type I error, goes beyond the limit we intend to maintain.

As we discussed earlier, Alpha, or the Type I error rate, represents the chance of drawing a sample that leads us to reject the null hypothesis when it is actually true. When we run many t-tests on the same samples, the overall (familywise) Alpha for the study grows well beyond the per-test level.
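To see how quickly the error rate inflates, here is a rough back-of-the-envelope sketch: with four groups, six pairwise t-tests at Alpha = 0.05 and, for simplicity, the tests treated as independent, the chance of at least one false rejection is already around 26%.

```python
from math import comb

alpha = 0.05    # per-test Type I error rate
k = 4           # number of groups
m = comb(k, 2)  # number of pairwise t-tests: 6

# Probability of at least one false rejection across all m tests,
# assuming (for simplicity) that the tests are independent.
familywise_alpha = 1 - (1 - alpha) ** m
print(f"{m} pairwise tests, familywise alpha ~ {familywise_alpha:.3f}")  # ~0.265
```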

This is the main reason we use ANOVA (Analysis of variance) to compare multiple groups or populations while controlling the Alpha/Type I error rate.

Assumptions for the test

  • Each group or population from which the samples are drawn is normally (or approximately normally) distributed
  • The samples are independent of each other
  • The variances of all groups are equal (the normality and equal-variance checks are sketched in the code below)
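The first and third assumptions can be checked empirically. Below is a minimal sketch, assuming hypothetical data for three groups, that uses SciPy's Shapiro–Wilk test for normality and Levene's test for equality of variances; the group names and numbers are made up purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical scores for three independent groups
group_a = rng.normal(70, 10, size=30)
group_b = rng.normal(72, 10, size=30)
group_c = rng.normal(75, 10, size=30)

# Normality check for each group (Shapiro-Wilk)
for name, g in [("A", group_a), ("B", group_b), ("C", group_c)]:
    stat, p = stats.shapiro(g)
    print(f"Group {name}: Shapiro-Wilk p = {p:.3f}")

# Equal-variance check across groups (Levene's test)
stat, p = stats.levene(group_a, group_b, group_c)
print(f"Levene's test p = {p:.3f}")
```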

When to use ANOVA?

Simply put, ANOVA is used for comparing the means of multiple groups, which gives it scores of applications across the branches of science. For example, suppose we want to compare the effectiveness of a placebo vs Drug1 vs Drug2: we want to check the impact of the placebo and the drugs on the patients' disease. Here the impact is a continuous variable, while the treatment (Drug/Placebo) is a categorical variable.

Dependent and independent variables in ANOVA

When the dependent variable is on a continuous scale and the independent variable is categorical, we can use ANOVA to test the impact of the independent variable on the outcome (dependent) variable.

[Figure: ANOVA]

Another example is a study of the impact of the duration of a training program on the performance of employees. Suppose a company has tried 3-month, 6-month and 12-month training programs; here, employee performance is the continuous (dependent) variable and the duration of training is the categorical (independent) variable.
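To make the data layout concrete, here is a small sketch of how such a study might be arranged in long format, one row per employee, with a categorical independent column and a continuous dependent column; the numbers are hypothetical.

```python
import pandas as pd

# Hypothetical long-format data: one row per employee.
# 'duration' is the categorical independent variable,
# 'performance' is the continuous dependent variable.
df = pd.DataFrame({
    "duration":    ["3m", "3m", "3m", "6m", "6m", "6m", "12m", "12m", "12m"],
    "performance": [62.0, 65.5, 59.8, 70.2, 68.9, 72.4, 75.1, 78.3, 74.6],
})

print(df.groupby("duration")["performance"].agg(["count", "mean", "var"]))
```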

Two way ANOVA and MANOVA

One way ANOVA is used when we have only one independent variable; two way ANOVA is used when the study involves two independent variables, while MANOVA (multivariate ANOVA) is used when there are two or more dependent variables. For example, if the company wants to study the impact of the duration of the training program as well as the educational level of employees on performance, two way ANOVA should be used.
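As an illustration, a two way ANOVA for this example could be run with the statsmodels formula API; the column names and simulated scores below are assumptions made purely for the sketch.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
durations = ["3m", "6m", "12m"]
educations = ["graduate", "postgraduate"]

# Hypothetical balanced design: 10 employees per duration/education cell
rows = []
for d in durations:
    for e in educations:
        scores = rng.normal(70, 8, size=10)
        rows += [{"duration": d, "education": e, "performance": s} for s in scores]
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of duration and education plus their interaction
model = smf.ols("performance ~ C(duration) * C(education)", data=df).fit()
print(anova_lm(model, typ=2))
```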

Why the name Analysis of variance?

If our objective is to compare the means of multiple groups, why is the test called analysis of variance?

To understand this, let us take a look at how the test works.

Null Hypothesis:

µ1 = µ2 = µ3 (the means of all groups are equal; there is no significant difference between them)

Alternate Hypothesis:

Not all means are equal (at least one group has a mean different from the other groups)

If the null hypothesis is true, the means (averages) of all groups are equal. We also assume for this test that the variances of all groups are equal. Together, these imply that, under the null hypothesis, the independent groups or populations can be treated as subsets of one single large population or group.

So, essentially, we are comparing the variance within the groups with the variance across the groups. We express this comparison as the F ratio.

[Figure: Sources of variation]

[Figure: Analysis of Variance]

F ratio

F statistic or F ratio = Variance across groups / Variance within the groups*

*The variance within each group is approximately the same (refer to the assumptions for this test).

Quick look at F distribution and the F statistic

Below is an example of an F distribution. The distribution is right-skewed and is characterised by a pair of degrees of freedom (df): one df for the numerator and one df for the denominator.

[Figure: F distribution]

For the F statistic in ANOVA, the df for the numerator is k − 1 and the df for the denominator is n − k,

where k is the number of groups/populations and

n is the total sample size of all groups put together.

How to calculate the F statistic?

  • Sum of squares for the numerator (between groups) =

∑ n_k × (mean of group k − overall mean of all data points across all samples)^2

  • Sum of squares for the denominator (within groups) =

∑ (data point − mean of its group)^2  (do this for each group and add the values for all groups)

  • F statistic = (SS of numerator / (k − 1)) / (SS of denominator / (n − k))
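Putting the three steps together, here is a minimal sketch that computes the F statistic by hand for three hypothetical groups and cross-checks it against scipy.stats.f_oneway; the data are made up for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical samples from three groups
groups = [
    np.array([62.0, 65.5, 59.8, 64.1]),
    np.array([70.2, 68.9, 72.4, 69.5]),
    np.array([75.1, 78.3, 74.6, 76.0]),
]

k = len(groups)                              # number of groups
n = sum(len(g) for g in groups)              # total sample size
grand_mean = np.concatenate(groups).mean()   # overall mean of all data points

# Sum of squares for the numerator (between groups)
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# Sum of squares for the denominator (within groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"Manual F = {f_stat:.3f}")

# Cross-check with SciPy's one-way ANOVA
f_scipy, p_value = stats.f_oneway(*groups)
print(f"scipy.stats.f_oneway: F = {f_scipy:.3f}, p = {p_value:.4f}")
```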

How to draw inferences from the F statistic?

If the variance across groups is approximately the same as the variance within groups, the data are consistent with all the groups being subsets of one master group, and hence with the means of all groups being approximately the same. In that case, the null hypothesis holds.

This is the case when the F statistic is close to 1. That is why the critical F values at the 95% or 99% confidence levels, beyond which we reject the null hypothesis, lie well above 1.

The F statistic is characterised by the combination of degrees of freedom for the numerator and the denominator.

Depending on whether the observed F value exceeds the critical value for a particular confidence level, the null hypothesis is either rejected or retained.
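For example, the critical F value (and the p-value of an observed F) can be looked up from the F distribution; the sketch below uses SciPy's F distribution with hypothetical degrees of freedom and a hypothetical observed F value.

```python
from scipy import stats

alpha = 0.05        # significance level (95% confidence)
k, n = 3, 12        # hypothetical: 3 groups, 12 observations in total
df_num, df_den = k - 1, n - k

# Critical value: reject H0 when the observed F exceeds this
f_crit = stats.f.ppf(1 - alpha, df_num, df_den)
print(f"Critical F({df_num}, {df_den}) at alpha={alpha}: {f_crit:.3f}")

# p-value for an observed F statistic (right-tail area)
f_observed = 4.8
p_value = stats.f.sf(f_observed, df_num, df_den)
print(f"p-value for F={f_observed}: {p_value:.4f}")
```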

If the null hypothesis is rejected, it means not all groups are equal (in terms of means). We then need to perform additional tests, such as the Scheffé or Tukey post hoc tests, to find out which groups differ from the others.

The ANOVA test tells us whether or not all group means are equal, but which groups differ has to be found out using the post hoc tests mentioned above.
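As an illustration, Tukey's HSD post hoc test is available in SciPy (version 1.8 or later); the groups below are hypothetical and mirror the earlier sketch.

```python
import numpy as np
from scipy.stats import tukey_hsd

# Hypothetical samples from three groups
group_a = np.array([62.0, 65.5, 59.8, 64.1])
group_b = np.array([70.2, 68.9, 72.4, 69.5])
group_c = np.array([75.1, 78.3, 74.6, 76.0])

# Tukey's HSD compares every pair of groups while controlling
# the familywise Type I error rate
result = tukey_hsd(group_a, group_b, group_c)
print(result)  # pairwise mean differences, confidence intervals and p-values
```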

Hope you have enjoyed this article! Let us know your feedback and suggestions.