Reliability: How? When? What?

 

Jaison Jacob

Assistant Professor, Department of Mental Health Nursing, Laxmi Memorial College of Nursing,

Mangaluru-575002, Dakshina Kannada, Karnataka, India

*Corresponding Author E-mail: nov8525@gmail.com

 

ABSTRACT:

Reliability testing is one of the important steps in research. It helps in assessing the quality of a tool in terms of consistency, stability and equivalence. Various methods are available, such as test-retest, inter-rater, intra-rater and split-half, along with formulas such as the Kuder-Richardson formulas, Cronbach's alpha and Karl Pearson's correlation coefficient. Knowledge of these formulas, and of the criteria for choosing a particular method, is essential for a researcher; it is the basis for properly checking the quality of a questionnaire or tool.

 

KEY WORDS: Reliability, Correlation, Internal Consistency.

 

 


INTRODUCTION:

Reliability has different layers. The methods are well known, but which method and formula to use depends on factors such as the type of variable and the type of research. Reliability testing is done so that the tool or questionnaire we intend to use is good enough to assess the variable under study.

 

Definition: Reliability is the accuracy and consistency of a measuring tool in any given situation.

 

It is the extent to which the scores obtained from an individual or tool remain nearly the same on repeated measurements.

 

The correlation coefficient used to estimate reliability ranges from -1 to +1, where +1 indicates a perfect positive relationship and -1 a perfect negative relationship. Graphically, the relationship can be represented by a scatter diagram.

 

Methods of reliability:

1.    Stability

2.    Equivalence

3.    Internal Consistency

 

Reliability of Stability:

The stability of a measure refers to the extent to which the same results are obtained on repeated administrations of the instrument. The ways to assess stability are:

 

1.    Test-retest method:

The researcher administers the same tool to the same sample on two occasions and compares the scores obtained. This is used for variables which are not expected to change over time, such as attitude, or variables like height and weight, which remain constant for a period. The formulas used are:

 

 

 

Karl Pearson's (product-moment) correlation formula:

For continuous (interval/ratio) variables like height and weight:

r = Σ(xi - x̄)(yi - ȳ) / √[Σ(xi - x̄)² Σ(yi - ȳ)²]

Where: xi is the first observation, yi is the second observation, and x̄ and ȳ are their respective means

 

Spearman's rank-order correlation formula:

For ordinal variables like attitude:

ρ = 1 - [6Σd² / n(n² - 1)]

Where: d = difference between the two ranks of each observation, n = number of observations
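
To make the test-retest computation concrete, here is a minimal Python sketch; the scores and sample size are made up for illustration:

from scipy.stats import pearsonr, spearmanr

# Hypothetical scores of five subjects measured on two occasions
occasion_1 = [62, 70, 55, 81, 66]   # first administration
occasion_2 = [60, 72, 57, 79, 68]   # second administration

# Pearson's r for continuous (interval/ratio) variables
r, _ = pearsonr(occasion_1, occasion_2)

# Spearman's rho for ordinal variables such as attitude
rho, _ = spearmanr(occasion_1, occasion_2)

print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")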

 

2.    Reliability of Equivalence:

It refers to the extent to which a tool yields consistent results across different researchers or across different tools. It is also used to compare a new tool with an already existing standardized tool. It is most commonly used for assessing skill/practice when observation is the method of data collection.

 

This method is used in two conditions:

a.     When different observers or researchers use an instrument to measure the same phenomenon at the same time

b.    When two similar instruments are administered to the same individuals at about the same time (parallel forms)

 

Methods of assessment (when one tool/questionnaire is used):

a.    Intra-rater reliability:

In this method one researcher uses one tool to make more than one observation on the same sample.

Formula used: Karl Pearson's correlation formula

 

b.    Inter-rater reliability:

In this method two researchers use one tool to assess a variable, and the scores of the two observers are used to check the reliability using Karl Pearson's correlation formula or Cohen's kappa coefficient formula:

 

k = (Pr(a) - Pr(e)) / (1-Pr(e))

 

Where,

Pr (a) - Relative observed agreement,

Pr (e) - Hypothetical probability of chance agreement,

k - Cohen's kappa index value

Observed agreement = (a + d)/N

Expected agreement = (Expected a + Expected d)/N

 

Note: Prepare a 2x2 table of the two raters' ratings; the cell in which both raters agree positively is labelled a, and the cell in which both agree negatively is labelled d.
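
As an illustration, a minimal Python sketch of the kappa calculation from such a 2x2 table; the cell counts are hypothetical:

# Cohen's kappa from a 2x2 agreement table of two raters
# a = both raters say "yes", d = both say "no";
# b and c are the two disagreement cells
a, b, c, d = 20, 5, 10, 15
N = a + b + c + d

pr_a = (a + d) / N                      # observed agreement
# chance agreement: both say "yes" by chance plus both say "no" by chance
p_yes = ((a + b) / N) * ((a + c) / N)
p_no = ((c + d) / N) * ((b + d) / N)
pr_e = p_yes + p_no

kappa = (pr_a - pr_e) / (1 - pr_e)
print(f"kappa = {kappa:.2f}")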

 

Method of assessment for parallel forms:

Also called equivalent-forms reliability, this uses one set of questions divided into two equivalent sets ("forms"), where both sets contain questions that measure the same construct, knowledge or skill. The two sets are given to the same sample of people within a short period of time, and an estimate of reliability is calculated from the two sets of scores.

 

E.g., you want to find the reliability of a test of mathematics comprehension, so you create a set of 100 questions that measure that construct. You randomly split the questions into two sets of 50 (set A and set B) and administer them to the same group of students a week apart.
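
A minimal Python sketch of this procedure; the answer matrix here is random placeholder data, so the resulting correlation will be near zero, whereas real administrations would supply genuine scores:

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Placeholder: 30 students answering 100 dichotomous questions
scores = rng.integers(0, 2, size=(30, 100))

# Randomly split the 100 questions into form A and form B (50 each)
items = rng.permutation(100)
form_a = scores[:, items[:50]].sum(axis=1)   # total score on form A
form_b = scores[:, items[50:]].sum(axis=1)   # total score on form B

r, _ = pearsonr(form_a, form_b)              # parallel-forms reliability estimate
print(f"parallel-forms r = {r:.2f}")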

 

3.    Reliability of Internal Consistency:

This estimates the extent to which the items in the tool measure the same attribute and nothing else. Methods used:

 

a.    Split-half method:

Mostly used for multiple choice tests.

In this method we divide the tool into two equal halves (odd and even items) and check the reliability using Karl Pearson's correlation formula. However, this gives the reliability of only half the tool; to get the reliability of the complete tool we use the Spearman-Brown prophecy formula:

 

R = 2r/(1 + r)

 

Where, r = split-half correlation coefficient

Note: Since the tool is split into two equal halves, the total number of questions should be an even number
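
A minimal Python sketch of the split-half procedure with the Spearman-Brown correction; the item scores are made up:

import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores: rows = respondents, columns = items
scores = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
])

odd_half = scores[:, 0::2].sum(axis=1)    # totals on items 1, 3, 5
even_half = scores[:, 1::2].sum(axis=1)   # totals on items 2, 4, 6

r, _ = pearsonr(odd_half, even_half)      # reliability of half the tool
R = 2 * r / (1 + r)                       # Spearman-Brown corrected reliability
print(f"half-test r = {r:.2f}, full-test R = {R:.2f}")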

 

b.    Cronbach's alpha:

Used when a survey/questionnaire contains multiple Likert-type questions.

It measures the homogeneity of the tool, i.e. the extent to which the different subparts of an instrument are equivalent in terms of measuring the critical attribute.

Formula:

α = [k/(k - 1)] x [1 - (ΣVi)/VT]

Where,

Vi = variance of the scores of each question

VT = total variance of the overall scores

k = number of items/questions

 

Note: Two forms of the variance formula are needed:

i.       Total variance: VT = Σ(X - X̄)²/(N - 1), where X is each respondent's total score and N is the number of respondents

ii.      Item variance: Vi = Σ(x - x̄)²/(N - 1), computed over each respondent's score x on the item in question
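
A minimal Python sketch of the alpha computation on a hypothetical respondents-by-items matrix of Likert scores (sample variance, ddof=1, is assumed):

import numpy as np

# Hypothetical Likert responses: rows = respondents, columns = items
scores = np.array([
    [4, 3, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [3, 3, 2, 3],
    [4, 4, 5, 4],
])

k = scores.shape[1]                          # k: number of items
item_vars = scores.var(axis=0, ddof=1)       # Vi for each item
total_var = scores.sum(axis=1).var(ddof=1)   # VT of the total scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")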

 

c.     Kuder-Richardson formula 20 (KR-20):

Used for dichotomous scoring where items vary in difficulty (MCQ, short answer, fill in the blanks)

 

KR20 = [n/(n - 1)] x [1 - (Σpq)/Var]

 

Where,

Σpq = sum of the products pq over all n items

p = proportion of people passing the item

q = proportion of people failing the item (or 1-p)

Var = variance of whole test

 

d.    Kuder-Richardson formula 21 (KR-21):

Used for dichotomous scoring where items are of the same difficulty (checklist, true or false)

KR21 = [k/(k - 1)] x [1 - M(k - M)/(k x s²)]

Where, k = number of items

M = mean score of the test

s² = variance of the whole test
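
A minimal Python sketch computing both KR-20 and KR-21 on the same hypothetical 0/1 answer matrix; sample variance is used here, though some texts use the population variance:

import numpy as np

# Hypothetical dichotomous answers: rows = examinees, columns = items
scores = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
])

n = scores.shape[1]                  # number of items
totals = scores.sum(axis=1)
var = totals.var(ddof=1)             # variance of the whole test

p = scores.mean(axis=0)              # proportion passing each item
q = 1 - p                            # proportion failing each item
kr20 = (n / (n - 1)) * (1 - (p * q).sum() / var)

M = totals.mean()                    # mean test score
kr21 = (n / (n - 1)) * (1 - M * (n - M) / (n * var))

print(f"KR-20 = {kr20:.2f}, KR-21 = {kr21:.2f}")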

 

Interpreting the scores:

The reliability score lies between 0 and 1; the nearer the score is to 1, the more reliable the tool. Many researchers recommend a value of 0.7 or higher as acceptable, treat 0.35 to 0.7 as indicating the tool needs modification, and discard tools scoring below 0.35.

 


 

 

 

 

 

 

 
