Measuring the Dependability and Consistency of a Construct
Reliability is the degree to which the measure of a construct is consistent and dependable. If we use the same scale to measure the same construct repeatedly, do we get essentially the same result every time? An example of an unreliable measurement is having people guess your weight. Very likely, different people will guess differently, the various measures will be inconsistent, and the "guessing" procedure of measurement is therefore unreliable.
A more reliable measurement would be to use a weight scale, where you are likely to get the same value each time you step on the scale, unless your weight has actually changed between measurements. Note that reliability implies consistency but not accuracy. If the weight scale is calibrated incorrectly, it will not measure your true weight and is therefore not a valid measure. Still, the miscalibrated weight scale will give you the same weight every time, and hence the scale is reliable.
Sources of Unreliable Observation
- Observer's Subjectivity
- Imprecise Questions
- Unfamiliarity
Observer's Subjectivity
If employee morale in a firm is measured by watching whether employees smile at each other, whether they make jokes, and so forth, then different observers may infer different levels of morale depending on whether they are watching the employees on a very busy day or a light day. Two observers may also infer different levels of morale on the same day, depending on what they regard as a joke and what they do not.
Imprecise Questions
If you ask people what their pay is, different respondents may interpret the question differently as monthly pay, annual pay, or an hourly wage, and hence the resulting observations will likely be quite different and unreliable.
Unfamiliarity
Asking questions about issues that respondents are not very familiar with or do not care much about, for example, asking someone whether they are satisfied with their country's relationship with another country.
Approaches to Verify Reliability
- Inter-rater Reliability
- Test-retest Reliability
- Split-half Reliability
- Internal Consistency Reliability
Inter-rater Reliability
Inter-rater reliability is a measure of consistency between two or more independent raters of the same construct. If the measure is categorical, a set of all categories is defined, raters check which category each observation falls into, and the percentage of agreement between the raters is an estimate of inter-rater reliability.
For example, if two raters classify 100 observations into one of three possible categories and their ratings match for 75% of the observations, then inter-rater reliability is 0.75. If the measure is interval or ratio scaled, then a simple correlation between the measures from the two raters can also serve as an estimate of inter-rater reliability, as sketched below.
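As a rough illustration, here is a minimal sketch in Python (using numpy and entirely made-up ratings, not real data) of how both estimates could be computed: percent agreement for a categorical measure, and a simple correlation for an interval-scaled one.

```python
import numpy as np

# Hypothetical ratings: two raters classify 100 observations into
# one of three categories (1, 2, or 3).
rng = np.random.default_rng(0)
rater_a = rng.integers(1, 4, size=100)
rater_b = rater_a.copy()
disagree = rng.choice(100, size=25, replace=False)  # inject some disagreement
rater_b[disagree] = rng.integers(1, 4, size=25)

# Percent agreement: share of observations placed in the same category.
percent_agreement = np.mean(rater_a == rater_b)
print(f"Inter-rater reliability (percent agreement): {percent_agreement:.2f}")

# For interval- or ratio-scaled measures, a simple correlation between
# the two raters' scores serves the same purpose.
scores_a = rng.normal(50, 10, size=100)
scores_b = scores_a + rng.normal(0, 5, size=100)  # rater B with some noise
correlation = np.corrcoef(scores_a, scores_b)[0, 1]
print(f"Inter-rater reliability (correlation): {correlation:.2f}")
```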
Test-retest Reliability
Test-retest reliability is a measure of consistency between two measurements of the same construct administered to the same sample at two different points in time. If the observations have not changed substantially between the two tests, then the measure is reliable. The correlation between observations from the two tests is an estimate of test-retest reliability. Note here that the time interval between the two tests is critical: the longer the time gap, the greater the chance that the observations may change during this time, and the lower the test-retest reliability will be.
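A minimal sketch of the idea, assuming synthetic Python/numpy scores rather than real survey data: the same respondents are measured twice, and the correlation between the two administrations estimates test-retest reliability.

```python
import numpy as np

# Hypothetical data: the same scale administered to the same 30 respondents
# at time 1 and again a few weeks later at time 2.
rng = np.random.default_rng(1)
time1 = rng.normal(3.5, 0.8, size=30)
time2 = time1 + rng.normal(0, 0.3, size=30)  # small changes between administrations

# Test-retest reliability is estimated as the correlation between the two tests.
test_retest = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: {test_retest:.2f}")
```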
Split-half reliability
Split-half reliability is estimated by taking the items that measure a given construct, splitting them into two halves, and administering the entire instrument to a sample of respondents. Compute the total score for each half for each respondent; the correlation between the total scores of the two halves is a measure of split-half reliability. The longer the instrument, the more likely it is that the two halves of the measure will be similar.
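The computation can be sketched as follows, again with made-up Python/numpy responses to a hypothetical ten-item scale. The Spearman-Brown formula at the end is a standard adjustment for the fact that each half is only half as long as the full instrument.

```python
import numpy as np

# Hypothetical data: 200 respondents answering a 10-item scale (responses 1-5).
rng = np.random.default_rng(2)
latent = rng.normal(3, 0.7, size=(200, 1))
items = np.clip(np.round(latent + rng.normal(0, 0.8, size=(200, 10))), 1, 5)

# Split the items into two halves (odd vs. even items) and total each half.
half1 = items[:, 0::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)

# The correlation between the two half-scores estimates split-half reliability.
r_halves = np.corrcoef(half1, half2)[0, 1]
print(f"Split-half correlation: {r_halves:.2f}")

# Spearman-Brown adjusts the estimate up to the full-length instrument.
spearman_brown = (2 * r_halves) / (1 + r_halves)
print(f"Spearman-Brown adjusted reliability: {spearman_brown:.2f}")
```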
Internal Consistency Reliability
Internal consistency reliability is a measure of consistency between the different items of the same construct. If a multiple-item construct measure is administered to respondents, the extent to which respondents rate those items in a similar manner is a reflection of internal consistency. This reliability can be assessed in terms of the average inter-item correlation or the average item-to-total correlation. For instance, if you have a scale with six items, you will have fifteen different item pairings and fifteen correlations between these six items.
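A minimal sketch, assuming simulated Python/numpy responses to a six-item scale, showing the average inter-item correlation over the fifteen pairings and the average item-to-total correlation:

```python
import numpy as np

# Hypothetical data: 200 respondents answering a six-item scale, giving
# C(6, 2) = 15 item pairings and 15 inter-item correlations.
rng = np.random.default_rng(3)
latent = rng.normal(3, 0.7, size=(200, 1))
items = latent + rng.normal(0, 0.8, size=(200, 6))

# Average inter-item correlation: mean of the 15 off-diagonal correlations.
corr = np.corrcoef(items, rowvar=False)
upper = corr[np.triu_indices_from(corr, k=1)]
print(f"Average inter-item correlation: {upper.mean():.2f} ({upper.size} pairings)")

# Average item-to-total correlation: each item correlated with the total score.
total = items.sum(axis=1)
item_total = [np.corrcoef(items[:, i], total)[0, 1] for i in range(6)]
print(f"Average item-to-total correlation: {np.mean(item_total):.2f}")
```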