Scale Classification Bases

Scales can be classified on the following bases:

  1. Subject orientation: Here, a scale is designed either to measure the characteristics of the respondent who completes it or to judge the stimulus object that is presented to the respondent.
  2. Response form: In this, the scales can be classified as categorical or comparative. Categorical scales (rating scales) are used when a respondent scores some object without direct reference to other objects. Comparative scales (ranking scales) are used when the respondent is asked to compare two or more objects.
  3. Degree of subjectivity: In this, the scale data is based on whether we measure subjective personal preferences or just make non-preference judgements. In the former case, the respondent is asked to select which person or solution he favors to be employed, whereas in the latter case he is simply asked to judge which person or solution will be more effective without reflecting any personal preference.
  4.  Scale properties: In this, the scales can be classified as nominal, ordinal, interval and ratio scales. Nominal scales merely classify without indicating order, distance or unique origin. Ordinal scales indicate magnitude relationships of ‘more than’ or ‘less than’, but indicate no distance or unique origin. Interval scales have both order and distance, but no unique origin. Ratio scales possess all of these features.
  5. Number of dimensions: In this, the scales are classified as ‘uni-dimensional’ or ‘multi-dimensional’. In the former, only one attribute of the respondent or object is measured, whereas multi-dimensional scaling recognizes that an object might be described better by using the concept of an attribute space of ‘n’ dimensions, rather than a single-dimension continuum.
  6. Scale construction techniques: This can be developed by the following five techniques.
  • Arbitrary approach: In this, the scales are developed on an ad hoc basis. It is the most widely used approach.
  • Consensus approach: In this, a panel of judges evaluates the items chosen for inclusion in the instrument regarding whether they are relevant to the topic area and unambiguous in implication.
  • Item analysis approach: In this, a number of individual items are developed into a test that is given to a group of respondents. After administering the test, total scores are calculated, and the individual items are analyzed to determine which items discriminate between persons or objects with high and low total scores (a brief sketch of this screening follows the list).
  • Cumulative scales: These are chosen on the basis of their conforming to some ranking of items with ascending and descending discriminating power.
  • Factor scales: This can be constructed on the basis of inter-correlations of items indicating a common factor accounts for the relationship between items.
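
Of these techniques, the item analysis approach is the most mechanical and the easiest to illustrate. The following is a minimal sketch, using hypothetical dichotomous response data and illustrative names (item_discrimination, data, none of which come from the text), of how items can be screened by comparing their mean scores in the high and low total-score groups.

```python
# Minimal item-analysis sketch (hypothetical data): items that separate
# high scorers from low scorers are kept; weak discriminators are dropped.

def item_discrimination(responses):
    """responses: list of per-respondent item scores, e.g. [[1, 0, 1], ...]."""
    totals = [sum(r) for r in responses]
    order = sorted(range(len(responses)), key=lambda i: totals[i])
    k = max(1, len(responses) // 3)           # bottom and top thirds
    low, high = order[:k], order[-k:]
    n_items = len(responses[0])
    indices = []
    for j in range(n_items):
        high_mean = sum(responses[i][j] for i in high) / k
        low_mean = sum(responses[i][j] for i in low) / k
        indices.append(high_mean - low_mean)  # larger = better discriminator
    return indices

# Example: 6 respondents answering 3 dichotomous items.
data = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [0, 1, 0], [0, 0, 1], [0, 0, 0]]
print(item_discrimination(data))
```

Items with a discrimination index near zero (the second item in this toy example) would be candidates for removal.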

 


Scaling techniques for researchers

We face problems of measurement during research, especially when the concepts we want to measure are complex and abstract and no standardized measurement tools are available. Likewise, when we measure something prone to subject bias, such as attitudes and opinions, valid measurement becomes a problem. A similar difficulty may arise, to a lesser degree, when measuring physical or institutional concepts. Knowledge of procedures that enable accurate measurement of abstract concepts is therefore essential.

Scaling techniques are immensely beneficial for a researcher.

Scaling is the process of assigning numbers to various degrees of attitudes, preferences, opinions, and other concepts. Scaling is defined as a procedure for the assignment of numbers (or other symbols) to a property of objects in order to impart some of the characteristics of numbers to the properties in question.

Scaling can be done in two ways: (i) making a judgement about an individual's characteristics and then placing him on a scale that is defined in terms of that characteristic, and (ii) constructing questionnaires in which individuals' response scores assign them a place on a scale. A scale is a continuum consisting of a highest point and a lowest point, along with several intermediate points between these two extremities. These scale-point positions are hierarchically related to each other. Numbers measuring the degree of difference in attitudes or opinions are assigned to individuals corresponding to their positions on the scale. Therefore, the term ‘scaling’ implies procedures for determining quantitative measures of subjective abstract concepts.
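
As a concrete illustration of the second approach, here is a minimal sketch, assuming a hypothetical five-point agreement questionnaire (the function scale_position, the answers, and the reverse-keyed item are all illustrative), of how summing a respondent's item ratings places him at a point on the scale continuum.

```python
# Hypothetical sketch: placing a respondent on an attitude continuum by
# summing five-point agreement ratings (1 = strongly disagree ... 5 = strongly agree).

def scale_position(ratings, reverse_items=()):
    """Sum item ratings; reverse-keyed items are flipped before summing."""
    score = 0
    for i, r in enumerate(ratings):
        score += (6 - r) if i in reverse_items else r
    return score

# A respondent's answers to four statements; the statement at index 2 is negatively worded.
answers = [4, 5, 2, 4]
print(scale_position(answers, reverse_items={2}))   # higher = more favourable attitude
```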


Technique of Developing Measurement Tools

a)  Concept development: This is the first step. The researcher should have a complete understanding of all the important concepts relevant to his study. This step is more applicable to theoretical studies than to practical studies, where the basic concepts are already established.

b)  Specification of concept dimensions: Here, the researcher is required to specify the dimensions of the concepts, which were developed in the first stage. This is achieved either by adopting an intuitive approach or by an empirical correlation of the individual dimensions with that concept and/or other concepts.

c)  Indicator selection: In this step, the researcher has to develop indicators that help in measuring the elements of the concept. These indicators include questionnaires, scales, and other devices that help to measure the respondent's opinions, mindset, knowledge, etc. Using more than one indicator lends stability and improves the validity of the scores.

d)  Index formation: Here, the researcher combines the different indicators into an index. If a concept has several dimensions, the researcher needs to combine them into a single index.
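
A minimal sketch of index formation follows, assuming three hypothetical indicators with known measurement ranges (build_index and the values are illustrative, not from the text). Each indicator is rescaled to the 0-1 range and the index is taken as their simple average; in practice, the weighting of indicators is a design decision.

```python
# Minimal index-formation sketch (hypothetical indicators): each indicator is
# rescaled to 0-1 and the index is their simple average; weights could be added.

def build_index(indicator_values, ranges):
    """indicator_values: raw scores; ranges: (min, max) for each indicator."""
    normalised = [(v - lo) / (hi - lo) for v, (lo, hi) in zip(indicator_values, ranges)]
    return sum(normalised) / len(normalised)

# Three indicators of a concept, each with its own measurement range.
print(build_index([7, 40, 3], [(0, 10), (0, 100), (1, 5)]))
```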


Test of Practicality of a measuring instrument

The practicality of a measuring instrument can be judged in terms of its economy, convenience and interpretability. From the operational point of view, the measuring instrument needs to be practical. In other words, it should be economical, convenient and interpretable.

The economy consideration suggests that a trade-off is needed between the ideal research project and what the budget can afford. The length of the measuring instrument is an important area where economic pressures are quickly felt. Even though more items give better reliability, in the interest of limiting interview or observation time we often have to use only a few items. Similarly, the choice of data-collection method sometimes depends on economic factors.

Convenience test suggests that the measuring instrument should be easily manageable. For this purpose, one should pay proper attention to the layout of the measuring instrument. For example, a questionnaire with clear instructions and illustrated examples is comparatively more effective and easier to complete than the questionnaire that lacks these features. Interpretability consideration is especially important when persons other than the designers of the test are to interpret the results. In order to be interpretable, the measuring instrument must be supplemented by the following:

  1. detailed instructions for administering the test,
  2. scoring keys,
  3. evidence about the reliability, and
  4. guides for using the test and interpreting results.

Test of Reliability

Reliability is an essential element of test quality. A measuring instrument is reliable if it provides consistent results. A reliable instrument, however, need not be valid: a clock that consistently runs a few minutes fast is reliable, but it does not show the correct time. Reliability deals with consistency, or the reproducibility of results by the same test subject: if a reliable test is administered on two occasions, the same conclusions are reached both times, whereas a test with poor reliability will produce markedly different scores each time with the same test and the same examinee.

If a test is valid, it has to be reliable, but the converse is not true. Although reliability is not as valuable as validity, it is easier to assess. Reliability has two key aspects: stability and equivalence. The degree of stability can be determined by comparing the results of repeated measurements with the same candidate and the same instrument. Equivalence concerns the amount of error introduced by different investigators or by different samples of items when the test is repeated. A good way to test reliability is for two investigators to compare their observations of the same events. Reliability can be improved in the following ways:

(i) By standardizing the measurement conditions so that external sources of variation such as boredom and fatigue are reduced; this leads to stability.

(ii) By providing detailed directions for measurement that can be applied consistently, by using trained and motivated persons to conduct the research, and by broadening the sample of items used; this leads to equivalence.
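
A minimal sketch of checking stability follows, assuming hypothetical scores from the same six respondents tested on two occasions; the test-retest correlation is used here as a simple indicator (statistics.correlation requires Python 3.10 or later).

```python
# Minimal test-retest sketch (hypothetical scores): stability is estimated as the
# Pearson correlation between two administrations of the same test.
from statistics import correlation   # available in Python 3.10+

first_run  = [12, 15, 9, 20, 17, 14]   # scores on occasion 1
second_run = [13, 14, 10, 19, 18, 13]  # same respondents, occasion 2
print(round(correlation(first_run, second_run), 3))  # close to 1.0 => stable
```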


Measurement in Research

The core of any research is measurement. It can be defined as the process of assigning numbers to things or to their properties. It is essential in research because observations have to be reduced to numbers before they can be analyzed.

Assigning numbers to physical properties of things is easy; in other cases it is quite difficult. Measuring social conformity or intelligence is much more complex than measuring weight, age or financial assets, which can be measured directly with some standard unit of measurement. Measurement tools for abstract or qualitative concepts are not standardized, and the results are less accurate.

A clear understanding of the level of measurement of variables is important in research because the level of measurement determines what type of statistical analysis can be conducted. Collected data can be classified into distinct categories. If there is a limited number of categories, the variables are known as discrete variables; if the possible values are unlimited, they are known as continuous variables. The nominal level of measurement describes categorical variables. Nominal variables include demographic properties such as sex, race and religion. This is considered the most basic level of measurement; no ranking or hierarchy is present at this level.

The variables that can be sequenced in some order of importance can be described by the ordinal level. Opinions and attitude scales or indexes in the social sciences are ordinal in nature. Ex.: Upper, middle, and lower class. In this case, the order is known; however, the interval between the values is not meaningful.

Variables that have more or less equal intervals are described by the interval level of measurement. Crime rates come under this measurement level; temperature is also an interval variable. Here, the intervals between values can be interpreted, but ratios are not meaningful.

The ratio level describes variables that have equal intervals and a true zero point (a unique origin). Measurement of physical dimensions such as weight, height, distance, etc. falls under this level.
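
The distinction between the interval and ratio levels can be made concrete with a small worked example. The sketch below uses hypothetical values: a ratio of weights is meaningful regardless of unit, whereas a ratio of Celsius temperatures changes when the same temperatures are expressed in Fahrenheit, because interval scales lack a true zero.

```python
# Illustration (hypothetical values): ratios are meaningful for ratio-level data
# but not for interval-level data, because interval scales lack a true zero point.

weight_a, weight_b = 30.0, 60.0             # kilograms (ratio level)
print(weight_b / weight_a)                  # 2.0 -- "twice as heavy" is meaningful

temp_a_c, temp_b_c = 10.0, 20.0             # degrees Celsius (interval level)
temp_a_f = temp_a_c * 9 / 5 + 32            # 50.0 degrees Fahrenheit
temp_b_f = temp_b_c * 9 / 5 + 32            # 68.0 degrees Fahrenheit
print(temp_b_c / temp_a_c, temp_b_f / temp_a_f)  # 2.0 vs 1.36 -- the ratio depends on the unit
```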


Random Sample From an Infinite Universe

It is relatively difficult to explain the concept of a random sample from an infinite population. However, a few examples will show the basic characteristics of such a sample. Suppose we consider 10 throws of a fair die as a sample from the hypothetically infinite population that consists of the results of all possible throws of the die. If the probability of getting a particular number, say 1, is the same for each throw and the 10 throws are all independent, then we say that the sample is random. Similarly, sampling with replacement from a finite population amounts to sampling from an infinite population, and our sample would be considered a random sample if in each draw all elements of the population have the same probability of being selected and successive draws are independent. In brief, the selection of each item in a random sample from an infinite population is controlled by the same probabilities, and successive selections are independent of one another.
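
A minimal simulation of the die example follows; each throw is independent and every face is equally likely, which is what makes the 10 observed values a random sample from the (hypothetically infinite) population of all possible throws.

```python
# Minimal sketch of sampling from a (hypothetically) infinite population:
# ten independent throws of a fair die, each face equally likely on every throw.
import random

sample = [random.randint(1, 6) for _ in range(10)]  # independent, identical probabilities
print(sample)
```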

In some cases, however, such as taking a sample of grain from a bag, it is not possible to assign a number to each grain or particle constituting the universe, and as such the methods of constructing a card population or of using random sampling numbers cannot be applied. In such cases a thorough mixing of the grain may be done, and by dividing and sub-dividing the lot into parts, a sample of adequate size can be obtained. The contents of the bag, after thorough mixing, may be divided into two equal parts, of which one is selected; this part may be further divided into two after mixing. The process can be continued until one of the sub-divisions is approximately equal to the size of the desired sample.


Complex Random Sampling Designs

Complex random sampling designs are probability sampling done with restricted sampling techniques. They are also called mixed sampling designs as they tend to combine probability and non-probability sampling procedures during sample selection.

Some of the popular complex random sampling designs are as follows:

(i) Systematic sampling: Researchers sometimes select every ith item from a list; this is known as systematic sampling. The first unit is chosen at random, and from the next unit onwards items are selected at the same fixed interval (a short sketch of this selection follows the list).

(ii) Stratified sampling: Stratified sampling is used in a very diverse universe: the population is divided into several groups (strata) that are internally more homogeneous, and items are then selected from each stratum by simple random sampling to form the sample. The choice of strata is a subjective decision of the researcher, based on his experience and judgement.

(iii) Cluster sampling: In cluster sampling, the population is divided into a number of small, relatively homogeneous subdivisions (clusters) of similar units, and some of these clusters are then randomly selected as the sample. Cluster sampling is highly economical. The difference between stratified sampling and cluster sampling is that in stratified sampling a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are studied.

(iv) Area sampling: In area sampling, a large area is divided into smaller parts and samples are then selected from them at random. This is a type of cluster sampling in which the clusters of units are based on geographic area.

(v) Multi-stage sampling: Multi-stage sampling is a more complex form of cluster sampling. It is used in research where the universe is very large, for example an entire country, and the researcher selects samples at several successive stages. After selecting clusters from the whole universe, the researcher randomly selects elements from each chosen cluster. This type of sampling is cost effective and easy to administer.

(vi) Probability proportional to size (PPS) sampling: Sometimes the cluster sampling units do not contain an equal number of elements; in such cases the researcher uses a random selection process in which the probability of selecting each cluster is proportional to the size of that cluster. The numbers drawn then indicate which clusters are to be included rather than individual elements. PPS sampling avoids under-representation of any one group.

(vii) Sequential sampling: This is a complex sampling design in which the size of the sample is not fixed in advance but is determined according to the needs of the researcher. The researcher carries out the study on one sample and, if not satisfied, takes another sample unit, and so on. The researcher keeps fine-tuning the experiment and decides only in the course of the experiment whether more samples are needed.
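
As promised under (i), here is a minimal sketch of systematic selection, assuming a hypothetical frame of 1,000 households (systematic_sample and the frame are illustrative names, not from the text): a random first unit is followed by every ith element at a fixed interval.

```python
# Minimal systematic-sampling sketch (hypothetical frame): random start, then
# every i-th element at a fixed interval.
import random

def systematic_sample(frame, n):
    interval = len(frame) // n                    # the fixed sampling interval i
    start = random.randrange(interval)            # random first unit
    return [frame[start + k * interval] for k in range(n)]

households = list(range(1, 1001))                 # a frame of 1000 units
print(systematic_sample(households, 10))          # e.g. every 100th household
```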


Selecting a Random Sample

Random sampling is the most basic sampling method. Its main advantage is that each member of the group has an equal chance of being chosen, so the statistical conclusions drawn from a random sample are deemed valid. Though it sounds easy, the process of selecting a random sample can be quite involved.

Lottery Method: This is the most commonly used method. Every member is assigned a unique number. These numbers are put in a jar and thoroughly mixed. The researcher then picks some numbers without looking, and the people corresponding to those numbers are included in the study.

Random Number Table: This table consists of a series of digits (0-9) that are generated randomly. The numbers are arranged in rows and columns and can be read in any direction. All the digits are equally probable.

Computer: For a large population, selecting random samples manually becomes tedious and very time-consuming. In such cases, computer software is used to generate random numbers. This process is very fast and easy.

With and Without Replacement: When a population element is given the chance to be chosen more than once, it is known as sampling with replacement; when it can be chosen only once, it is known as sampling without replacement.
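
The contrast between the two schemes can be shown with Python's standard library; the sketch below assumes a small universe of 20 numbered members (as in the lottery method) and is purely illustrative.

```python
# Sketch: drawing a random sample with and without replacement from a small universe.
import random

members = list(range(1, 21))                       # 20 numbered members

with_replacement = random.choices(members, k=5)    # a member can appear more than once
without_replacement = random.sample(members, k=5)  # each member can appear at most once

print(with_replacement)
print(without_replacement)
```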


Types of Sample Designs

Basically, there are two different types of sample designs, namely, non-probability sampling and probability sampling. Each of the two is described below.

(1) Non-probability sampling: This type of sampling is also known as deliberate sampling, purposive sampling, or judgement sampling. In this sampling procedure, the organisers of the inquiry deliberately choose particular units of the universe to compose the sample, on the basis that the small mass selected out of a large one will represent the whole. For example, if the economic conditions of the population living in a state are to be studied, a few cities and towns can be deliberately selected for intensive study on the principle that they represent the entire state. However, the investigator may select a sample yielding results favorable to his point of view; if that happens, the entire inquiry may be vitiated. Thus, there is a danger of bias entering into this type of sampling technique. If the investigators are impartial, work without bias and have the experience necessary to exercise sound judgement, the results obtained from an analysis of a deliberately selected sample may be tolerably reliable.

Quota sampling is also an example of non-probability sampling. In this type of sampling the interviewers are simply given quotas to be filled from the different strata, with some instructions regarding filling up the quotas. Moreover, this type of sampling is relatively inexpensive and quite convenient.

(2) Probability sampling: This type of sampling is also known as random sampling or chance sampling. Under this procedure, every element in the population has an equal chance of being selected for the sample, and all choices are independent of one another. The results obtained from probability sampling can be assessed in terms of probability; in other words, we can measure the errors of estimation or the significance of results obtained from a random sample. For this very reason, probability sampling is superior to deliberate sampling. Probability sampling also rests on the law of Statistical Regularity, which states that if the sample chosen is a random one, it will, on average, have the same composition and characteristics as the universe. Hence, probability sampling is more or less the best technique for selecting a representative sample.
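
The practical consequence can be shown with a tiny simulation; the sketch below is a hypothetical example (the universe is simulated and the names are illustrative) showing that the mean of a simple random sample tends to sit close to the mean of the universe, which is what makes error estimation possible.

```python
# Hypothetical sketch: with simple random sampling, a sample's composition tends
# to mirror the universe, so estimates (here, the mean) can be assessed probabilistically.
import random

universe = [random.gauss(50, 10) for _ in range(100_000)]   # stand-in for a large universe
sample = random.sample(universe, 500)                       # each element equally likely

pop_mean = sum(universe) / len(universe)
sample_mean = sum(sample) / len(sample)
print(round(pop_mean, 2), round(sample_mean, 2))            # the two should be close
```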
