Scale Classification Bases

Scales can be classified on the following bases.

  1. Subject orientation: Here, a scale may be designed to measure characteristics of the respondent who completes it, or to judge the stimulus object that is presented to the respondent.
  2. Response form: In this, the scales can be classified as categorical or comparative. Categorical scales (rating scales) are used when a respondent scores some object without direct reference to other objects. Comparative scales (ranking scales) are used when the respondent is asked to compare two or more objects.
  3. Degree of subjectivity: In this, the scale data is based on whether we measure subjective personal preferences or just make non-preference judgements. In the former case, the respondent is asked to select which person or solution he favors to be employed, whereas in the latter case he is simply asked to judge which person or solution will be more effective without reflecting any personal preference.
  4. Scale properties: In this, the scales can be classified as nominal, ordinal, interval and ratio scales. Nominal scales merely classify without indicating order, distance or unique origin. Ordinal scales indicate magnitude relationships of ‘more than’ or ‘less than’, but indicate no distance or unique origin. Interval scales have both order and distance values, but no unique origin. Ratio scales possess all of these features.
  5. Number of dimensions: In this, the scales are classified as ‘uni-dimensional’ or ‘multi-dimensional’. In the former, only one attribute of the respondent or object is measured, whereas multi-dimensional scaling recognizes that an object might be described better by using the concept of an attribute space of ‘n’ dimensions, rather than a single-dimension continuum.
  6. Scale construction techniques: This can be developed by the following five techniques.
  • Arbitrary approach: In this, the scales are developed on an ad hoc basis. It is the most widely used approach.
  • Consensus approach: In this, a panel of judges evaluates the items chosen for inclusion in the instrument regarding whether they are relevant to the topic area and unambiguous in implication.
  • Item analysis approach: In this, a number of individual items are developed into a test that is given to a group of respondents. After the test is administered, total scores are calculated, and the individual items are analyzed to determine which of them discriminate between persons or objects with high and low total scores.
  • Cumulative scales: These are chosen on the basis of their conforming to some ranking of items with ascending and descending discriminating power.
  • Factor scales: These are constructed on the basis of inter-correlations of items, which indicate that a common factor accounts for the relationships between the items.
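The item analysis approach above can be sketched in code. The following is a minimal Python illustration (all data hypothetical): respondents answer a set of items, total scores are computed, and each item's discriminating power is estimated by comparing the proportion of correct answers in the high-scoring and low-scoring groups.

```python
# Item analysis sketch: which items separate high scorers from low scorers?
# All data below are hypothetical illustrations.

# Each row is one respondent's scores on five items (1 = correct/agree, 0 = not).
responses = [
    [1, 1, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
]

totals = [sum(r) for r in responses]
order = sorted(range(len(responses)), key=lambda i: totals[i])
k = len(responses) // 3                     # size of the high and low groups
low, high = order[:k], order[-k:]

for item in range(len(responses[0])):
    p_high = sum(responses[i][item] for i in high) / k
    p_low = sum(responses[i][item] for i in low) / k
    # Discrimination index: proportion correct in high group minus low group.
    print(f"item {item + 1}: discrimination = {p_high - p_low:+.2f}")
```

Items with a high positive index discriminate well; an item whose index is near zero or negative would be dropped or revised.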

 

Technique of Developing Measurement Tools

a)  Concept development: This is the first step: the researcher must have a thorough understanding of all the important concepts relevant to his study. This step is more applicable to theoretical studies than to practical studies, where the basic concepts are usually established beforehand.

b)  Specification of concept dimensions: Here, the researcher is required to specify the dimensions of the concepts, which were developed in the first stage. This is achieved either by adopting an intuitive approach or by an empirical correlation of the individual dimensions with that concept and/or other concepts.

c)  Indicator selection: In this step, the researcher develops the indicators that help in measuring the elements of the concept. These indicators include questionnaires, scales, and other devices that measure the respondent's opinion, mindset, knowledge, etc. Using more than one indicator lends stability to the scores and improves their validity.

d)  Index formation: Here, the researcher combines the different indicators into an index. When a concept has several dimensions, the researcher needs to combine them into a single index.
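The index-formation step can be illustrated numerically. A minimal Python sketch follows (the indicators and their values are hypothetical, and standardizing each indicator to z-scores before averaging is one common convention, not the only possible weighting scheme):

```python
# Index formation sketch: combine several indicators into a single index.
# Standardizing to z-scores before averaging is one common convention
# (an assumption here, not the only possible weighting scheme).
from statistics import mean, pstdev

def zscores(values):
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

# Hypothetical indicators for five respondents (e.g. three attitude items).
indicator_a = [3, 4, 2, 5, 1]
indicator_b = [10, 12, 8, 14, 6]
indicator_c = [1, 1, 0, 1, 0]

standardized = [zscores(ind) for ind in (indicator_a, indicator_b, indicator_c)]
index = [mean(person) for person in zip(*standardized)]
print([round(x, 2) for x in index])
```

Each respondent's index is the average of his standardized scores on the separate indicators, so a respondent who scores high on all three dimensions ends up with the highest index value.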


Test of Practicality of a measuring instrument

The practicality of a measuring instrument can be judged in terms of its economy, convenience and interpretability. From the operational point of view, the measuring instrument needs to be practical; in other words, it should be economical, convenient and interpretable.

Economy considerations suggest a trade-off between the ideal research project and what the budget can afford. The length of the measuring instrument is one area where economic pressures are quickly felt. Even though more items give greater reliability, in the interest of limiting interview or observation time we may have to take only a few items for the study. Similarly, the choice of data-collection method sometimes depends on economic factors.

Convenience test suggests that the measuring instrument should be easily manageable. For this purpose, one should pay proper attention to the layout of the measuring instrument. For example, a questionnaire with clear instructions and illustrated examples is comparatively more effective and easier to complete than the questionnaire that lacks these features. Interpretability consideration is especially important when persons other than the designers of the test are to interpret the results. In order to be interpretable, the measuring instrument must be supplemented by the following:

  1. detailed instructions for administering the test,
  2. scoring keys,
  3. evidence about the reliability, and
  4. guides for using the test and interpreting results.

Test of Reliability

Reliability is an essential element of test quality. A measuring instrument is reliable if it provides consistent results. A reliable instrument, however, need not be valid: a clock that runs consistently a few minutes fast gives reproducible readings, so it is reliable, but it does not show the correct time. Reliability deals with consistency, or the reproducibility of results: if a reliable test is administered on two occasions, the same conclusions are reached both times, whereas a test with poor reliability will produce markedly different scores for the same examinee on each administration.

If a test is valid, it must be reliable, but the converse is not true. Although reliability is not as valuable as validity, it is easier to assess. Reliability has two key aspects: stability and equivalence. The degree of stability can be determined by comparing the results of repeated measurements with the same respondent and the same instrument. Equivalence concerns how much error may be introduced by different investigators, or by different samples of items, when the test is repeated. A good way to test for equivalence is to have two investigators compare their observations of the same events. Reliability can be improved in the following ways:

(i) By standardizing the conditions of measurement so as to reduce external sources of variation such as boredom, fatigue, etc.; this improves stability.

(ii) By giving detailed directions for measurement, using trained and motivated persons to conduct the research, and broadening the sample of items used; this improves equivalence.
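The two aspects of reliability can be illustrated numerically. Below is a minimal Python sketch (all scores hypothetical): stability is assessed by correlating scores from two administrations of the same test to the same examinees, and equivalence by correlating scores recorded by two investigators observing the same events.

```python
# Reliability sketch: stability (test-retest) and equivalence (inter-rater).
# All scores below are hypothetical illustrations.
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Stability: the same examinees tested on two occasions.
first = [12, 15, 9, 20, 14]
second = [13, 14, 10, 19, 15]
print(f"stability (test-retest r) = {pearson(first, second):.2f}")

# Equivalence: two investigators scoring the same events.
rater_a = [3, 4, 2, 5, 4]
rater_b = [3, 5, 2, 4, 4]
print(f"equivalence (inter-rater r) = {pearson(rater_a, rater_b):.2f}")
```

Coefficients close to 1 indicate high reliability; markedly lower values signal that the scores are not reproducible across occasions or investigators.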


Tests of Sound Measurement

While evaluating a measurement tool, three major considerations must be taken into account: validity, reliability and practicality. A sound measurement should fulfill all of these tests.

Test of Validity

It is the most important criterion. It indicates the degree to which an instrument measures what it is supposed to measure. There are three types of validity: Content validity, Criterion-related validity, and Construct validity.

Content validity refers to the extent to which a measuring instrument adequately covers the topic under study. Its determination is mainly judgmental and intuitive and cannot be expressed in numerical terms. It can also be assessed by a panel of persons who judge how well the measuring instrument meets the required standards.

Criterion-related validity refers to our ability to predict some outcome or estimate the existence of some current condition. It reflects the success of measures used for empirical estimating purposes. Criterion-related validity is expressed as the coefficient of correlation between the test scores and the criterion measure. The criterion concerned must possess the following characteristics:

  • Relevance: When a criterion is defined in terms judged to be the proper measures, it is known to be relevant.
  • Unbiased: When the criterion provides each subject an equal opportunity to score, it is unbiased.
  • Reliability: When a criterion is stable or reproducible, it is considered as reliable.
  • Availability: The information specified by the criterion should be easily available.

Construct validity is the most complex and abstract type. It is the extent to which the scores can be accounted for by the explanatory constructs of a sound theory. Determining it requires associating a set of other propositions with the results obtained from using the measurement instrument. If the measurements correlate with the other propositions as predicted, it can be concluded that there is some degree of construct validity.

If the above criteria are met, we may conclude that our measuring instrument is valid and provides correct measurement; if not, we may have to look for more information and/or depend on judgment.


Sources of Error in Measurement

Measurement should be precise and unambiguous in an ideal research study. However, this objective is often not fully met. The researcher must therefore be aware of the sources of error in measurement, the main ones being the following.

a) Respondent: At times the respondent may be reluctant to express strong negative feelings or it is just possible that he may have very little knowledge, but may not admit his ignorance. All this reluctance is likely to result in an interview of ‘guesses.’ Transient factors like fatigue, boredom, anxiety, etc. may limit the ability of the respondent to respond accurately and fully.

b) Situation: Situational factors may also come in the way of correct measurement. Any condition which places a strain on the interview can have serious effects on the interviewer-respondent rapport. For example, if someone else is present, he can distort responses by joining in or merely by being present. If the respondent feels that anonymity is not assured, he may be reluctant to express certain feelings.

c) Measurer: The interviewer can distort responses by rewording or reordering questions. His behavior, style and looks may encourage or discourage certain replies from respondents. Careless mechanical processing may distort the findings. Errors may also creep in because of incorrect coding, faulty tabulation and/or statistical calculations, particularly in the data-analysis stage.

d) Instrument: Error may arise because of the defective measuring instrument. The use of complex words, beyond the comprehension of the respondent, ambiguous meanings, poor printing, inadequate space for replies, response choice omissions, etc. are a few things that make the measuring instrument defective and may result in measurement errors.

Hence, the researcher must know that correct measurement depends on successfully handling all of the issues mentioned above. He must, as far as possible, try to eliminate, neutralize or otherwise deal with all possible sources of error so that the final results are not contaminated.


Measurement Scales

The most commonly used measurement scales are: (i) Nominal scale; (ii) Ordinal scale; (iii) Interval scale; and (iv) Ratio scale.

(i) Nominal scale: In this scale, numbers are assigned to symbols, events or attributes simply in order to identify them. The numbers are merely convenient labels and have no quantitative significance. Nominal scales are convenient ways to track people, objects and events. Although the nominal scale is the least powerful level of measurement, it is very useful and is routinely used in surveys and ex-post-facto research for classifying major sub-groups of the population.

(ii) Ordinal scale: The ordinal scale ranks events, objects or attitudes rather than measuring them quantitatively. The scale measures qualitative phenomena and ranks them from highest to lowest. Ordinal measures have no absolute values, and the real differences between adjacent ranks may not be equal. The use of an ordinal scale implies ‘greater than’ or ‘less than’ without our being able to state how much greater or less. Measures of statistical significance are restricted to non-parametric methods.

(iii) Interval scale: In an interval scale, the intervals between scale points are equal, but the zero point is arbitrary rather than absolute. The Fahrenheit temperature scale and clock time are examples of interval scales.

(iv) Ratio scale: Ratio scales are those scales of measurement that have an absolute or true zero. A ratio scale therefore has both equal intervals and a true zero. Mass, length, duration and energy are examples of ratio scales.
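The practical difference between the interval and ratio levels can be demonstrated with temperature versus mass. Ratios of Fahrenheit readings are not meaningful because the zero is arbitrary, whereas ratios of masses are, since mass has a true zero. A small Python illustration:

```python
# Interval vs. ratio scales: why ratios require a true zero.
# 80 degrees F is NOT "twice as hot" as 40 degrees F: converting both to
# Celsius (the same physical temperatures) changes the apparent ratio.
def f_to_c(f):
    return (f - 32) * 5 / 9

print(80 / 40)                      # 2.0, but physically meaningless
print(f_to_c(80) / f_to_c(40))      # a different ratio for the same pair

# Mass is a ratio scale: 80 kg really is twice 40 kg, and the ratio
# survives a change of units (kilograms to pounds).
print(80 / 40, (80 * 2.20462) / (40 * 2.20462))
```

Because the ratio of two masses is invariant under a change of units while the ratio of two Fahrenheit readings is not, only the ratio scale supports statements like "twice as much".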


Measurement in Research

The core of any research is measurement. It can be defined as the process of assigning numbers to things, and it is essential in research because observations must ultimately be reduced to numbers for analysis.

Assigning numbers to the physical properties of things is easy; in other cases it is quite difficult. Measuring social conformity or intelligence is much more complex than measuring weight, age or financial assets, which can be measured directly with some standard unit of measurement. Measurement tools for abstract or qualitative concepts are not standardized, and their results are not very accurate.

A clear understanding of the level of measurement of the variables is important in research because the level determines what type of statistical analysis can be conducted. The collected data can be classified into distinct categories: variables with a limited number of categories are known as discrete variables, while those whose values form an unbroken range are continuous variables. The nominal level of measurement describes categorical variables. Nominal variables include demographic properties like sex, race, religion, etc. This is considered the most basic level of measurement; no ranking or hierarchy is present at this level.

Variables that can be placed in some order of importance are described by the ordinal level. Opinion and attitude scales or indexes in the social sciences are ordinal in nature, for example: upper, middle, and lower class. In this case, the order is known; however, the interval between the values is not meaningful.

Variables that have more or less equal intervals are described by the interval level of measurement. Crime rates come under this measurement level, and temperature is also an interval variable. Here, the intervals between values can be interpreted, but ratios are not meaningful.

Ratio level describes variables that have equal intervals and an absolute zero point. Measurements of physical dimensions such as weight, height, distance, etc. fall under this level.
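Since the level of measurement determines which statistics are meaningful, the standard mapping from level to permissible statistics can be captured in a small lookup. The following Python sketch uses the conventional (Stevens) assignments; the code organization itself is just an illustrative convention:

```python
# Level of measurement determines which statistics are meaningful.
# The mapping below follows the standard (Stevens) levels; the lookup
# structure itself is just an illustrative convention.
PERMISSIBLE = {
    "nominal":  {"mode", "frequency counts", "chi-square"},
    "ordinal":  {"mode", "median", "percentiles", "rank correlation"},
    "interval": {"mode", "median", "mean", "standard deviation", "Pearson r"},
    "ratio":    {"mode", "median", "mean", "standard deviation",
                 "geometric mean", "coefficient of variation"},
}

def allowed(level, statistic):
    """Is the given statistic meaningful at this level of measurement?"""
    return statistic in PERMISSIBLE[level]

print(allowed("ordinal", "median"))   # meaningful for ranked data
print(allowed("nominal", "mean"))     # not meaningful for mere labels
```

Each higher level inherits the statistics of the levels below it, which is why the ratio row contains the richest set.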


Steps for Sample Design

The researcher must keep in mind the following points while preparing a sample design.

(i) Universe: While preparing a sample design, the first requirement is to define the set of objects to be studied, technically known as the Universe, which can be finite or infinite. In a finite universe the number of items is limited, whereas in an infinite universe the number of items is limitless.

(ii) Sampling unit: It is necessary to decide a sampling unit before selecting a sample. It can be a geographical one (state, district, village, etc.), a construction unit (house, flat, etc.), a social unit (family, club, school, etc.), or an individual.

(iii) Source list: Also called the ‘sampling frame’, this is the list from which the sample is drawn. It comprises the names of all items of a universe (for a finite universe only). If the source list/sampling frame is unavailable, the researcher has to prepare it himself.

(iv) Sample size: This is the number of items selected from the universe to constitute a sample. The sample size should be neither too large nor too small, but optimum: an optimum sample fulfils the requirements of efficiency, representativeness, reliability and flexibility.

(v) Parameters of interest: While determining a sample design, it is required to consider the question of the specific population parameters of interest. For example, we may like to estimate the proportion of persons with some specific attributes in the population, or we may also like to know some average or other measure concerning the population.
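For the common case mentioned in (v), estimating a population proportion, the required sample size for a chosen confidence level and margin of error follows the standard formula n = z²·p·(1−p)/e², rounded up. A minimal Python sketch (the confidence level and margin chosen below are illustrative, not prescribed by the text):

```python
# Sample size for estimating a population proportion:
#   n = z^2 * p * (1 - p) / e^2   (rounded up)
# z = standard-normal value for the confidence level, p = expected
# proportion (0.5 is the conservative worst case), e = margin of error.
from math import ceil

def sample_size(z, p, e):
    return ceil(z * z * p * (1 - p) / (e * e))

# 95% confidence (z ~ 1.96), worst-case p = 0.5, margin of error of 5%.
print(sample_size(1.96, 0.5, 0.05))   # the familiar n = 385
```

Tightening the margin of error from 5% to 3% roughly triples the required sample, which is exactly where the budgetary constraint in (vi) begins to bite.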

(vi) Budgetary constraint: Practically, cost considerations have a major impact upon the decisions concerning not only the sample size but also the sample type. In fact, this can even lead to the use of a non-probability sample.

(vii) Sampling procedure: Finally, the researcher decides the technique to be used in selecting the items for the sample. In fact, this technique or procedure stands for the sample design itself. Obviously, the design selected should be one which, for a given sample size and cost, has the smallest sampling error.

 


Different Research Designs

Research Design in Exploratory Research Studies

Exploratory research studies, also known as formulative research studies, are conducted when there are few or no earlier studies to refer to. In this type of research design, a vaguely defined problem is taken up and exploratory research is used to arrive at a working hypothesis. It lays emphasis on the discovery of ideas and possible insights that help in identifying areas for future experimentation.

Purpose

1) It provides information to form a more precise problem definition or hypothesis.

2) It establishes research priorities.

3) It gives the researcher a feel of the problem situation and familiarizes him with the problem.

4) It collects information about possible problems in carrying out the research using specific collection tools and specific techniques for analysis.

In exploratory studies, the following three methods are generally used:

1) Survey of relevant literature

2) Survey of experienced individuals

3) Analysis of selected examples

Survey of Relevant Literature

Published literature is a very good source for hypothesis generation and problem definition. Much published and unpublished data is available through books, journals, newspapers, periodicals, government publications, individual research projects, and data collected by trade associations. Some of it could be relevant to the given problem situation. An analysis of existing literature may not provide the solution to the research problem, but it surely gives a direction to the research process.

Survey of Experienced Individuals

Talking to individuals who have expertise and ideas about the research subject can be very useful for the study. Attempt should be made to gather all possible information about the subject of research from people who have specific knowledge about it. In this case, the experimenter must prepare a systematic interview schedule to collect information from the respondents. The success of this survey depends upon the freedom of response given to the respondent, expertise and communication skills of the respondents, and the conversational skills of the experimenter in extracting maximum information from the respondents.

Analysis of Selected Examples

This method involves the selection of examples, which reflect the problem situation. A thorough analysis of the examples is conducted. In certain cases, such type of study helps in identifying the possible relationships that exist between the variables. The relationships, their extent, and direction are then measured using conclusive research designs.
