
Monday, August 15, 2011

Research designs

Research design is concerned with turning a research question into a testable project. The best design depends on the research question, and every design has strengths and weaknesses. A research design has been described as a "blueprint" for research, dealing with at least four problems: what questions to study, what data are relevant, what data to collect, and how to analyze the results.

Research designs can be divided into fixed and flexible designs (Robson, 1993). Others refer to this distinction as one between quantitative and qualitative research designs; however, fixed designs need not be quantitative, and flexible designs need not be qualitative. In a fixed design, the design of the study is fixed before the main stage of data collection takes place. Fixed designs are normally theory-driven; otherwise it is impossible to know in advance which variables need to be controlled and measured. Often these variables are quantitative. Flexible designs allow more freedom during data collection. One reason for using a flexible design is that the variable of interest, such as culture, is not quantitatively measurable. In other cases, theory may not be available before the research begins.

Descriptive research

Although some people dismiss descriptive research as 'mere description', good description is fundamental to the research enterprise, and it has added immeasurably to our knowledge of the shape and nature of our society. Descriptive research encompasses much government-sponsored research, including the population census, the collection of a wide range of social indicators, and economic information such as household expenditure patterns, time-use studies, and employment and crime statistics. Descriptions can be concrete or abstract. A relatively concrete description might describe the ethnic mix of a community, the changing age profile of a population or the gender mix of a workplace. Alternatively, the description might ask more abstract questions such as 'Is the level of social inequality increasing or declining?', 'How secular is society?' or 'How much poverty is there in this community?' Accurate descriptions of the level of unemployment or poverty have historically played a key role in social policy reforms (Marsh, 1982). By demonstrating the existence of social problems, competent description can challenge accepted assumptions about the way things are and can provoke action.

Good description provokes the 'why' questions of explanatory research. If we detect greater social polarization over the last 20 years (i.e. the rich are getting richer and the poor are getting poorer), we are forced to ask 'Why is this happening?' But before asking 'why?' we must be sure about the fact and dimensions of the phenomenon of increasing polarization. It is all very well to develop elaborate theories as to why society might be more polarized now than in the recent past, but if the basic premise is wrong (i.e. society is not becoming more polarized), then attempts to explain a non-existent phenomenon are silly. Of course, description can degenerate into mindless fact gathering, or what C.W. Mills (1959) called 'abstracted empiricism'. There are plenty of examples of unfocused surveys and case studies that report trivial information and fail to provoke any 'why' questions or provide any basis for generalization. However, this is a failing of inconsequential descriptions rather than an indictment of descriptive research itself.

Explanatory research

Explanatory research focuses on 'why' questions. For example, it is one thing to describe the crime rate in a country, to examine trends over time or to compare the rates in different countries. It is quite a different thing to develop explanations about why the crime rate is as high as it is, why some types of crime are increasing or why the rate is higher in some countries than in others. The way in which researchers develop research designs is fundamentally affected by whether the research question is descriptive or explanatory: it affects what information is collected. For example, if we want to explain why some people are more likely to be apprehended and convicted of crimes, we need to have hunches about why this is so. We may have many, possibly incompatible, hunches and will need to collect information that enables us to see which hunches work best empirically. Answering the 'why' questions involves developing causal explanations. Causal explanations argue that phenomenon Y (e.g. income level) is affected by factor X (e.g. gender). Some causal explanations will be simple while others will be more complex. For example, we might argue that there is a direct effect of gender on income (i.e. simple gender discrimination). Alternatively, we might argue for a causal chain, such as that gender affects choice of field of training, which in turn affects income.
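
To make the distinction between a direct effect and a causal chain concrete, here is a minimal simulation sketch in Python. All numbers and the two-field setup are invented purely for illustration; they are not taken from the post or any real data:

import random

random.seed(0)

def simulate_person():
    # Hypothetical causal chain: gender -> field of training -> income.
    gender = random.choice(["F", "M"])
    # Gender affects the chance of entering a high-paying field...
    high_paying_field = random.random() < (0.3 if gender == "F" else 0.6)
    # ...and income depends only on the field, not directly on gender.
    income = 70000 if high_paying_field else 45000
    return gender, income

people = [simulate_person() for _ in range(10000)]
for g in ("F", "M"):
    incomes = [inc for gender, inc in people if gender == g]
    print(g, round(sum(incomes) / len(incomes)))
# Average incomes differ by gender even though the whole effect flows
# through field of training: a causal chain rather than a direct effect.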

Experimental Design

Experimental designs are often touted as the most "rigorous" of all research designs, or as the "gold standard" against which all other designs are judged. In one sense, they probably are. If you can implement an experimental design well (and that is a big "if" indeed), then the experiment is probably the strongest design with respect to internal validity. Why? Recall that internal validity is at the center of all causal or cause-effect inferences. When you want to determine whether some program or treatment causes some outcome or outcomes to occur, then you are interested in having strong internal validity. Essentially, you want to assess the proposition:

If X, then Y

or, in more colloquial terms:

If the program is given, then the outcome occurs

Unfortunately, it's not enough just to show that when the program or treatment occurs the expected outcome also happens. That's because there may be lots of reasons, other than the program, for why you observed the outcome. To really show that there is a causal relationship, you have to simultaneously address the two propositions:

If X, then Y

and

If not X, then not Y

Or, once again more colloquially:

If the program is given, then the outcome occurs

and

If the program is not given, then the outcome does not occur

If you are able to provide evidence for both of these propositions, then you've in effect isolated the program from all of the other potential causes of the outcome. You've shown that when the program is present the outcome occurs and when it's not present, the outcome doesn't occur. That points to the causal effectiveness of the program.
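
Stated formally (a standard logical rendering added here, not something the original spells out), establishing both propositions amounts to establishing a biconditional between program and outcome:

\[
(X \Rightarrow Y) \ \land \ (\neg X \Rightarrow \neg Y) \ \equiv \ (X \Leftrightarrow Y)
\]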

Think of all this like a fork in the road. Down one path, you implement the program and observe the outcome. Down the other path, you don't implement the program and the outcome doesn't occur. But, how do we take both paths in the road in the same study? How can we be in two places at once? Ideally, what we want is to have the same conditions -- the same people, context, time, and so on -- and see whether when the program is given we get the outcome and when the program is not given we don't. Obviously, we can never achieve this hypothetical situation. If we give the program to a group of people, we can't simultaneously not give it! So, how do we get out of this apparent dilemma?

Perhaps we just need to think about the problem a little differently. What if we could create two groups or contexts that are as similar as we can possibly make them? If we could be confident that the two situations are comparable, then we could administer our program in one (and see if the outcome occurs) and not give the program in the other (and see if the outcome doesn't occur). And, if the two contexts are comparable, then this is like taking both forks in the road simultaneously! We can have our cake and eat it too, so to speak.

That's exactly what an experimental design tries to achieve. In the simplest type of experiment, we create two groups that are "equivalent" to each other. One group (the program or treatment group) gets the program and the other group (the comparison or control group) does not. In all other respects, the groups are treated the same. They have similar people, live in similar contexts, have similar backgrounds, and so on. Now, if we observe differences in outcomes between these two groups, then the differences must be due to the only thing that differs between them -- that one got the program and the other didn't.

OK, so how do we create two groups that are "equivalent"? The approach used in experimental design is to assign people randomly from a common pool of people into the two groups. The experiment relies on this idea of random assignment to groups as the basis for obtaining two groups that are similar. Then, we give one the program or treatment and we don't give it to the other. Finally, we measure the same outcomes in both groups.
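
As a concrete sketch (hypothetical Python, not from the original post), random assignment from a common pool can look like this:

import random

random.seed(42)  # fixed seed only so the illustration is reproducible

pool = [f"person_{i}" for i in range(200)]  # the common pool of people
random.shuffle(pool)                        # randomize the order
treatment_group = pool[:100]                # these people get the program
control_group = pool[100:]                  # these people do not
# The same outcome measure is then collected from both groups.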

The key to the success of the experiment is in the random assignment. In fact, even with random assignment we never expect that the groups we create will be exactly the same. How could they be, when they are made up of different people? We rely on the idea of probability and assume that the two groups are "probabilistically equivalent" or equivalent within known probabilistic ranges.

So, if we randomly assign people to two groups, and we have enough people in our study to achieve the desired probabilistic equivalence, then we may consider the experiment to be strong in internal validity and we probably have a good shot at assessing whether the program causes the outcome(s).
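
A small, purely hypothetical simulation can show what probabilistic equivalence buys you: randomly assigned groups match on a baseline measure up to sampling error, so a later difference in outcomes can be read as the program's effect. The 5-point "effect" below is invented for illustration:

import random
from statistics import mean

random.seed(1)

# A baseline score with plenty of person-to-person variation.
baseline = [random.gauss(50, 10) for _ in range(1000)]
random.shuffle(baseline)  # random assignment to two groups
treatment, control = baseline[:500], baseline[500:]

# Probabilistic equivalence: the group means agree up to chance.
print(round(mean(treatment), 1), round(mean(control), 1))

# Suppose the program raises the outcome by 5 points (hypothetical).
outcome_treatment = [score + 5 for score in treatment]
outcome_control = control

# The difference in mean outcomes estimates the program's effect (~5).
print(round(mean(outcome_treatment) - mean(outcome_control), 1))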

But there are lots of things that can go wrong. We may not have a large enough sample. Or, we may have people who refuse to participate in our study or who drop out part way through. Or, we may be challenged successfully on ethical grounds (after all, in order to use this approach we have to deny the program to some people who may be just as deserving of it as others). Or, we may get resistance from the staff in our study who would like some of their "favorite" people to get the program. Or, the mayor might insist that her daughter be put into the new program in an educational study because it may mean she'll get better grades.

The bottom line here is that experimental design is intrusive and difficult to carry out in most real-world contexts. And, because an experiment is often an intrusion, you are to some extent setting up an artificial situation so that you can assess your causal relationship with high internal validity. If so, then you are limiting the degree to which you can generalize your results to real contexts where you haven't set up an experiment. That is, you have reduced your external validity in order to achieve greater internal validity.

In the end, there is just no simple answer (no matter what anyone tells you!). If the situation is right, an experiment can be a very strong design to use. But it isn't automatically so. My own personal guess is that randomized experiments are probably appropriate in no more than 10% of the social research studies that attempt to assess causal relationships.

Experimental design is a fairly complex subject in its own right. I've been discussing the simplest of experimental designs -- a two-group program versus comparison group design. But there are lots of experimental design variations that attempt to accomplish different things or solve different problems. In this section you'll explore the basic design and then learn some of the principles behind the major variations.
