You can view the advanced test analytics on the same screen you use to check and grade an exam.

You can navigate to that screen via the Teacher environment. Once there, click on the Test tab. You can select the correct class using the drop-down menu in the top right of the screen.

You can see how many students have submitted their answers under Results. Click on the number to check and grade the test.

You will then see five tabs. Click on the one labelled Advanced Analytics.

# Test Analytics

## Discrimination and Difficulty Count

These values give the total number of questions with good, marginal, and poor discrimination, and the total number of questions with easy, medium, and hard difficulty. More information on these values can be found in the Discrimination and Difficulty sections on this page.

## Test Average

This is the average score out of 100 for this test.

## Test Standard Deviation

The standard deviation tells you how spread out the scores for the test were. About 68 percent of students' scores fall within one standard deviation of the average score. A low standard deviation means that most students scored about the same on this test; a higher standard deviation indicates more variation in the test scores.
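As a rough illustration (the scores below are made up, not from the product), the standard deviation and the share of scores within one standard deviation of the average can be computed like this:

```python
import statistics

# Hypothetical test scores (out of 100), for illustration only
scores = [55, 62, 70, 74, 78, 81, 85, 90]

mean = statistics.mean(scores)
std_dev = statistics.pstdev(scores)  # population SD over all submitted scores

# Share of scores within one standard deviation of the average
within_one_sd = sum(1 for s in scores if abs(s - mean) <= std_dev) / len(scores)

print(f"average: {mean:.1f}, standard deviation: {std_dev:.1f}")
print(f"within one SD of the average: {within_one_sd:.0%}")
```

With a normally distributed class the "within one SD" share would be close to 68 percent; small classes can deviate from that noticeably.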

## Test Duration Average

The average amount of time students took to complete this test.

## Test Reliability Coefficient

We have used Cronbach’s α (alpha) to measure the test reliability, i.e. whether or not the test is a reliable measuring tool. Cronbach’s alpha is defined by the following equation:

$$\alpha = \frac{K}{K - 1}\left(1 - \frac{\sum_{i=1}^{K} \sigma_{y_i}^2}{\sigma_x^2}\right)$$

where $K$ is the number of questions on the test, $\sigma_{y_i}^2$ is the variance of the scores on question $i$, and $\sigma_x^2$ is the variance of the total test scores (variance = (standard deviation)²). Cronbach’s alpha can range from 0 (unreliable) to 1 (maximum reliability). The ranges are defined as follows:

0.9 and up = good/very good

0.8 - 0.9 = satisfactory/good

0.7 - 0.8 = middling/satisfactory

below 0.7 = poor/middling
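Cronbach's alpha can be computed from a matrix of per-question scores. The sketch below uses hypothetical 0/1 (wrong/right) scores for five students on four questions; it is an illustration of the formula, not the product's implementation:

```python
import statistics

# Hypothetical 0/1 item scores: rows are students, columns are questions
item_scores = [
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
]

K = len(item_scores[0])                     # number of questions
totals = [sum(row) for row in item_scores]  # each student's total score

# Variance of each question's scores, and of the total scores
item_variances = [statistics.pvariance(col) for col in zip(*item_scores)]
total_variance = statistics.pvariance(totals)

alpha = (K / (K - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```

Population variances are used consistently for both the items and the totals; using sample variances throughout gives the same alpha.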

# Graphs

## Test Score Distribution

This graph shows how the students' grades are distributed. Each bar on the graph represents the number of students whose grade for the test falls within that range of test scores.
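The bars of such a distribution can be reproduced by bucketing grades into score ranges. A sketch with made-up grades and bins of width 10:

```python
from collections import Counter

# Hypothetical grades out of 100, bucketed into bins of width 10
grades = [48, 55, 61, 64, 68, 72, 75, 77, 81, 84, 90]

# Map each grade to the lower bound of its bin (e.g. 72 -> 70)
bins = Counter((g // 10) * 10 for g in grades)
for low in sorted(bins):
    print(f"{low}-{low + 9}: {'#' * bins[low]} ({bins[low]})")
```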

## Difficulty vs Rit Plot

This graph gives a good overview of which test items need to be evaluated. The green box indicates the area where questions have an acceptable difficulty (p-value) and Rit value. Dots plotted below this box indicate questions with an undesirable Rit value; dots plotted to the left or right of the green box indicate questions with an undesirable p-value.

# Question Statistics Table

## Difficulty

The question difficulty, called the p-value, is defined by the following equation:

$$p = \frac{\text{number of students who answered the question correctly}}{\text{total number of students who answered the question}}$$

The p-value can range from 0 (hard) to 1 (easy). The ranges we have chosen to use are as follows:

0.8 and up = easy

0.3 - 0.8 = medium (target range)

0.3 and under = hard
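A minimal sketch of the p-value calculation and the classification above; the counts and helper names are hypothetical:

```python
def difficulty(num_correct, num_students):
    """p-value: fraction of students who answered the question correctly."""
    return num_correct / num_students

def difficulty_label(p):
    # Thresholds taken from the ranges above
    if p >= 0.8:
        return "easy"
    if p >= 0.3:
        return "medium"
    return "hard"

# Hypothetical question: 18 of 24 students answered it correctly
p = difficulty(18, 24)
print(p, difficulty_label(p))  # 0.75 medium
```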

## Discrimination

The discrimination index measures how well a question distinguishes between students who did well on the test overall and students who did not. It is defined by the following equation:

$$D = \frac{U - L}{N}$$

where $U$ and $L$ are the numbers of students in the upper and lower groups who answered the question correctly, $N$ is the number of students in each group, and the upper and lower groups are the top 27% and bottom 27% of test-takers respectively. The discrimination index can range from -1 to 1. The ranges are defined as follows:

0.4 and up = good

0.3 - 0.39 = satisfactory

0.2 - 0.29 = middling

0.19 and below = poor
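The calculation can be sketched as follows; the student data and the `discrimination_index` helper are illustrative, not part of the product:

```python
def discrimination_index(results, fraction=0.27):
    """results: (total test score, answered this question correctly) per student."""
    ranked = sorted(results, key=lambda r: r[0], reverse=True)
    n = max(1, round(len(ranked) * fraction))  # group size (27% by default)
    upper_correct = sum(1 for _, ok in ranked[:n] if ok)
    lower_correct = sum(1 for _, ok in ranked[-n:] if ok)
    return (upper_correct - lower_correct) / n

# Hypothetical data: (total score, did the student get this question right?)
students = [(95, True), (90, True), (85, True), (80, False), (75, True),
            (70, False), (65, True), (60, False), (55, False), (50, False)]
print(discrimination_index(students))  # 1.0
```

Here every top-27% student answered correctly and no bottom-27% student did, giving the maximum value of 1.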

## Rit and Rir Values

Rit (item-test correlation) and Rir (item-rest correlation) are also used to measure the discriminating power of each question. They indicate the correlation between a student's score on the question and their total exam score. Rit and Rir differ in that Rir subtracts the question score from the total test score before computing the correlation.

They are defined by the following equation:

$$R = \frac{\operatorname{cov}(y_i, x)}{\sigma_{y_i}\,\sigma_x}$$

where $y_i$ is the score on question $i$ and $x$ is the total test score (for Rir, the question score is first subtracted from the total). The ranges are defined as follows:

0.35 and up = good

0.25 - 0.35 = satisfactory

0.15 - 0.25 = middling

below 0.15 = poor
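Both values are ordinary Pearson correlations; a sketch with hypothetical per-student scores:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-student scores on one question and on the whole test
question_scores = [1, 1, 0, 1, 0, 0]
total_scores = [9, 8, 5, 7, 4, 3]

rit = pearson(question_scores, total_scores)
# Rir: remove this question's score from each total before correlating
rir = pearson(question_scores,
              [t - q for q, t in zip(question_scores, total_scores)])
print(f"Rit: {rit:.2f}, Rir: {rir:.2f}")
```

Rir is typically a little lower than Rit, because the question's own score no longer contributes to the total it is correlated against.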