The concepts of reliability and validity explained with examples


Reliability and validity are key concepts in the field of psychometrics, which is the study of theories and techniques involved in psychological measurement or evaluation.

The science of psychometrics forms the basis of psychological tests and evaluations, which involve obtaining an objective and standardized measure of the behavior and personality of the individual being examined. It is a comprehensive process that examines and measures many aspects of an individual’s identity. The data obtained through this process are integrated to form a rounded profile of the individual. Such profiles are routinely created by various professionals: doctors build medical and lifestyle profiles of patients to diagnose and treat health disorders; career counselors use a similar approach to identify the field most suited to an individual; and courts construct such profiles to give context and justification to legal cases, so that they can be resolved quickly, judiciously, and efficiently. However, to formulate accurate profiles, the evaluation method used must be precise, impartial, and relatively error-free. To ensure these qualities, every method or technique must possess certain essential properties.

◉ Standardization: all tests must be carried out under consistent and uniform parameters to avoid introducing any erroneous variation in the results of the test.

◉ Objectivity: the evaluation of the test must be carried out in an objective manner, so that no bias, neither of the examiner nor of the examinee, is introduced or reflected in the data obtained.

◉ Norms: each test must be designed in such a way that its results can be interpreted in a relative manner; that is, it must establish a frame of reference or point of comparison so that the attributes of two or more individuals can be compared in a given setting.

◉ Reliability: the test must produce the same result each time it is administered to a particular entity or individual; that is, the results of the test must be consistent.

◉ Validity: the test must measure what it intends to measure; that is, the results must be in accordance with the objectives of the test.

Concept of reliability

It refers to the consistency and reproducibility of the data produced by a given method, technique, or experiment. A form of evaluation is said to be reliable if it repeatedly produces stable and similar results under consistent conditions. Consistency is partially guaranteed if the attribute being measured is stable and does not change suddenly. However, errors can be introduced by factors such as the physical and mental state of the examinee, inadequate attention, distraction, responses to visual and sensory stimuli in the environment, and so on. When estimating the reliability of a measure, the examiner must be able to distinguish between errors produced by inefficient measurement and genuine variability in the true score. The true score is that subset of the measured data that would be reproduced consistently across several test instances in the absence of errors. Therefore, the overall score produced by a test is a composite of the true score and measurement error.

Types of reliability

Test-retest Reliability
This is a measure of the consistency of test results when the test is administered to the same individual twice, with the two administrations separated by a specific period of time, using the same instruments and test conditions. The two scores are then evaluated to determine the true score and the stability of the test.
This type is used for attributes that are not expected to change within the given period of time. It works well for measuring physical entities, but for psychological constructs it presents some drawbacks that can introduce errors into the score. First, the quality under study may have changed between the two administrations. Second, the experience of taking the test again could alter the way the examinee performs. And finally, if the time interval between the two tests is too short, the individual may simply repeat answers remembered from the previous attempt.

Example

Medical monitoring of “critical” patients works according to this principle, since the patient’s vital statistics are compared and correlated at specific time intervals, to determine if the patient’s health is improving or deteriorating. Depending on which, the medication and the patient’s treatment are modified.
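In practice, test-retest reliability is typically estimated as the correlation between the two sets of scores. The following Python sketch illustrates this idea; the scores for the five examinees, like the two-week interval, are purely hypothetical:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical scores for five examinees tested twice, two weeks apart.
first_run = [82, 90, 75, 68, 95]
second_run = [80, 92, 74, 70, 93]

r = pearson_r(first_run, second_run)
print(round(r, 3))  # a value close to 1 indicates high test-retest reliability
```

A coefficient near 1 means the examinees’ relative standings barely changed between administrations; a low coefficient would suggest that the attribute itself changed, or that practice and memory effects contaminated the second run.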

Parallel-forms reliability
This measures reliability by administering two similar forms of the same test, or by performing the same test in two similar settings. Despite the variation, both versions should focus on the same aspect of the individual’s ability, personality, or intelligence. The two scores obtained are compared and correlated to determine whether the results remain consistent despite the introduction of an alternative version of the test or environment. However, this raises the question of whether the two similar but alternative forms are truly equivalent.

Example
If one is testing an individual’s problem-solving skills, one could generate a large set of appropriate questions, separate them into two groups of equal difficulty, and administer these as two different tests. Comparing the scores of both tests would help eliminate errors, if any.

Inter-rater reliability
It measures the consistency of the scores given by the test’s evaluators. It is important because not all individuals will perceive and interpret responses in the same way; the accuracy attributed to the answers will therefore vary according to the person evaluating them. This helps to refine and eliminate any errors that may be introduced by an evaluator’s subjectivity. If the majority of the evaluating judges agree on the answers, the test is accepted as reliable. But if there is no consensus among the judges, the test is not reliable and has not truly probed the desired quality. The evaluation must be carried out without the influence of any personal bias; in other words, judges should not rate an examinee favorably or unfavorably based on their personal perception of that person.

Example
Often, this is put into practice in the form of a panel of outstanding professionals, and can be observed in various contexts, such as the evaluation of a beauty contest, the performance of a job interview, a scientific symposium, etc.
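A simple way to quantify such agreement is the average pairwise agreement between judges (chance-corrected statistics such as Cohen’s kappa are common refinements, though not discussed above). A sketch with hypothetical pass/fail verdicts from three judges:

```python
from itertools import combinations

def pairwise_agreement(ratings):
    """Average, over all pairs of raters, of the fraction of items
    on which the two raters give the same rating."""
    fractions = []
    for a, b in combinations(ratings, 2):
        same = sum(1 for x, y in zip(a, b) if x == y)
        fractions.append(same / len(a))
    return sum(fractions) / len(fractions)

# Hypothetical verdicts from three judges on six candidates.
judge_a = ["pass", "pass", "fail", "pass", "fail", "pass"]
judge_b = ["pass", "pass", "fail", "fail", "fail", "pass"]
judge_c = ["pass", "fail", "fail", "pass", "fail", "pass"]

agreement = pairwise_agreement([judge_a, judge_b, judge_c])
print(round(agreement, 2))  # → 0.78: the judges agree on most, but not all, verdicts
```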

Internal consistency reliability
This refers to the ability of different parts of the test to probe the same aspect or construct of an individual. If two similar questions are posed to the examinee, similar answers imply that the test shows internal consistency. If the answers differ, the test is not consistent and needs to be refined. It is a statistical approach to determining reliability, and it comes in two types.

► Average inter-item correlation
Take all the questions that probe the same construct, separate them into individual pairs, and calculate the correlation coefficient for each pair of questions. Finally, average all the correlation coefficients to obtain the final value of the average inter-item correlation. In other words, this determines the correlation between every pair of questions in the test.
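A sketch of this computation in Python, using hypothetical per-question scores for five examinees on three questions assumed to probe the same construct:

```python
import statistics
from itertools import combinations

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def average_inter_item_correlation(items):
    """Average of the correlation coefficients over every pair of items."""
    rs = [pearson_r(a, b) for a, b in combinations(items, 2)]
    return sum(rs) / len(rs)

# Hypothetical scores: each list holds five examinees' answers to one question.
q1 = [4, 5, 2, 3, 5]
q2 = [3, 5, 1, 3, 4]
q3 = [4, 4, 2, 2, 5]

# A high average suggests the items probe the same construct.
print(round(average_inter_item_correlation([q1, q2, q3]), 2))
```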

► Split-half reliability
The questions that probe the same construct are divided into two sets of equal size, and the data obtained from the two sets are compared and correlated to determine the correlation, if any, between them.
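A common refinement, the Spearman-Brown correction, adjusts the half-test correlation upward because each half contains only half the questions; that correction is standard practice, though not mentioned in the text above. A Python sketch with hypothetical right/wrong (1/0) scores on a six-question test:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half_reliability(score_rows):
    """Correlate each examinee's total on the odd-numbered items with the
    total on the even-numbered items, then apply the Spearman-Brown correction."""
    odd = [sum(row[0::2]) for row in score_rows]
    even = [sum(row[1::2]) for row in score_rows]
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)

# Hypothetical data: rows are examinees, columns are six questions (1 = correct).
scores = [
    [1, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
]
print(round(split_half_reliability(scores), 3))
```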

Concept of validity
It refers to the ability of the test to measure data that satisfy and support the objectives of the test, that is, the extent to which the concept applies to the real world rather than only to an experimental setting. With respect to psychometrics, it is known as test validity and can be described as the degree to which the evidence supports a given theory. It is important because it helps researchers determine which test to implement in order to develop a measure that is ethical, efficient, and cost-effective, and that actually tests and measures the construct in question. Other, non-psychological forms of validity include experimental validity and diagnostic validity. Experimental validity refers to whether a test is supported by statistical evidence and whether the test or theory has some application in real life. Diagnostic validity, on the other hand, belongs to the context of clinical medicine and refers to the validity of diagnostic and screening tests.

Types of validity

Construct validity
It refers to the ability of the test to measure the construct or quality it intends to measure; that is, if a test is intended to evaluate intelligence, it is valid only if it really tests the intelligence of the individual. It involves a statistical analysis of the internal structure of the test and its examination by a panel of experts to determine the suitability of each question. It also examines the relationship between responses to the test questions and the individual’s ability to understand the questions and provide adequate answers. For example, suppose a test is prepared with the intention of evaluating a subject’s knowledge of science, but the language used to present the problems is highly sophisticated and difficult to understand. In that case, the test, instead of measuring knowledge, ends up testing mastery of the language, and is therefore not a valid construct for measuring the student’s knowledge of the subject.

► Convergent validity
This type of construct validity measures the degree to which two hypothetically related concepts are actually related in real life. For example, if a test designed to demonstrate a correlation between two related emotional constructs does so by providing robust data, the test is said to have convergent validity.
► Discriminant validity
It is a measure of the degree to which two hypothetically unrelated concepts are genuinely unrelated in real life, as evidenced by the observed data. For example, if a certain test is designed to show that happiness and despair are unrelated, and this is borne out by the data obtained from the test, the test is said to have discriminant validity.
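Both checks reduce to comparing correlations against expectations: high for hypothetically related measures, low for hypothetically unrelated ones. In the sketch below, all scores are invented, and “contentment” and “shoe size” stand in for an arbitrary related and unrelated variable, respectively:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical scores for six individuals.
happiness = [7, 9, 4, 6, 8, 3]      # the construct under study
contentment = [6, 9, 5, 6, 9, 2]    # hypothetically related construct
shoe_size = [9, 10, 8, 11, 9, 10]   # hypothetically unrelated attribute

convergent = pearson_r(happiness, contentment)   # expected to be high
discriminant = pearson_r(happiness, shoe_size)   # expected to be near zero
print(round(convergent, 2), round(discriminant, 2))
```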

Content validity

It is a form of non-statistical validity that involves examining the content of the test to determine whether it polls and measures all aspects of the given domain; that is, if a specific domain has four subtypes, then the test questions must probe all four subtypes with equal intensity. This type of validity must be taken into account when formulating the test itself, after an exhaustive study of the construct to be measured. For example, if a test is designed to assess learning in the biology department, it must cover, or at least appear to cover, all aspects of biology, including its various branches, such as zoology, botany, microbiology, biotechnology, genetics, ecology, and so on.
► Validity of representation
It is also known as translation validity and refers to the degree to which an abstract theoretical concept can be translated and implemented as a practical, verifiable construct. For example, if one designed a test to determine whether comatose patients could communicate through some type of signal, and the test worked and produced adequate supporting results, the test would have representation validity.
► Face validity
It is an estimate of whether a particular test appears to measure a construct. It does not in any way indicate whether the test really measures the construct; it merely reflects what the test seems to do. If a test appears to measure what it is supposed to, it has high face validity; if not, it has low face validity. It is the least sophisticated form of validity and is also known as surface validity. Thus, if an intelligence test appears, to an observer, to be testing the intelligence of individuals, the test has face validity.

Criterion-related validity
It measures the correlation between the results of a test for a construct and the results of pre-established tests that examine the individual criteria making up the overall construct. In other words, if a given construct has three criteria, the results of the test are correlated with the results of the tests for each individual criterion that are already established as valid. For example, if a company gives a job applicant an IQ test and compares the result with the applicant’s previous academic record, any correlation observed is an example of criterion-related validity. Depending on the type of correlation, this validity is of two types.
► Concurrent validity
It refers to the degree to which the results of a test correlate well with the results obtained from a related test that has already been validated. The two tests are taken at the same time and provide a correlation between events on the same temporal plane (the present). For example, if a group of students is given an evaluative test, and on the same day their teachers are asked to rate each of these students, any correlation observed between the two sets of results demonstrates concurrent validity.

► Predictive validity
It refers to the degree to which the results of a test correlate with the results of a related test administered in the future. The difference in time between the administration of the two tests gives the correlation a predictive quality. For example, if an assessment test intended to measure students’ intelligence is administered, and the students with high scores later achieve academic success while those with low scores do not do well academically, the test is said to have predictive validity.
Although both concepts are essential for a precise psychological evaluation, they are not interdependent. A test can be reliable without being valid, and vice versa. This can be explained with the example of a weighing machine. If one places a 500 g weight on the machine and it shows any value other than 500 g, it is not giving a valid measurement. However, it can still be considered reliable if, every time the weight is placed on it, the machine shows the same reading of, say, 250 g. Thus, in terms of measurement, validity describes accuracy, while reliability describes precision.
If a test is valid but not reliable, classical test theory gives the examiner or researcher options and ways to improve the reliability of that test.
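The weighing-machine analogy can be made concrete: validity corresponds to small bias (accuracy), while reliability corresponds to small spread (precision). A sketch with hypothetical readings from the faulty machine described above:

```python
import statistics

# A 500 g weight is placed on the machine five times; it consistently reads ~250 g.
true_weight = 500.0
readings = [250.1, 249.9, 250.0, 250.2, 249.8]

bias = statistics.mean(readings) - true_weight  # large magnitude -> not valid (inaccurate)
spread = statistics.stdev(readings)             # small value -> still reliable (precise)
print(round(bias, 1), round(spread, 3))
```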
