Click versus Bic #2

on Saturday, June 13, 2009

Evaluation is a key component of interactive learning. Do these new strategies and techniques:

  • work,
  • yield better outcomes than other methods, and
  • meet the learning objectives for the activity/course?
In other words, is online learning better?

The research methods used to answer this question could have a dramatic impact on the results. Do survey methods matter? Is there a response bias to answering an online survey in an online course? If a learner is using a learning management system (LMS), do they perceive anonymity? Are there other ways to assess online learning?

In a meta-analysis of response distortion, Richman, Kiesler, Weisband, and Drasgow (1999) found that using a computer instrument had no consistent effect on distortion and that other factors, such as being alone, moderated the relationship. In their concluding remarks, they speculated about whether web-based anonymous surveys would affect response distortion. Socially desirable responding (SDR) is a tendency of individuals to answer questions in a more desirable manner than they would under other conditions; that is, individuals engage in behaviors to intentionally look better (impression management). Studies of response distortion with computerized versus traditional instruments have reported conflicting findings -- less distortion (Evan & Miller, 1969; Kiesler & Sproull, 1986), more distortion (Lautenschlager & Flaherty, 1990), and no differences (Booth-Kewley, Edwards, & Rosenfeld, 1992; Whitener & Klein, 1995; Pinsoneault). Various explanations have been offered for the nonequivalence of results, such as the ability to scan (Lautenschlager & Flaherty, 1990), differences in the inputs/outputs of the interface (Moon, 1998), social context cues (Kiesler & Sproull, 1986), reduced feelings of privacy (Whitener & Klein, 1995), and the Big Brother syndrome (Rosenfeld, Booth-Kewley, Edwards, & Thomas, 1996).

Survey format

Tourangeau and Smith (1996) argued that computerized surveys foster a greater sense of privacy, thereby increasing respondents’ willingness to report sensitive information. Wright, Aquilino, and Supple (1998) found that respondent characteristics, such as age, sex, and attitudes, also moderated the mode effect, along with several significant interaction effects. Wright and colleagues suggested a possible sex difference in responses because of differences between males and females in enthusiasm for computers. In addition, they suggested other possible moderators of any mode differences in response distortion, such as respondents’ general trust in others, attitudes toward surveys, perceptions of privacy, and attitudes toward computers.

Beyond respondent characteristics, several researchers concluded that the survey format (paper or computerized) affects response distortion because of the ability to “scan, preview, review, skip or change items on a test” (Whitener & Klein, 1995, p. 67). Respondents in paper surveys are able to scan ahead and change answers, while this is not always the case with computerized surveys. Computerized surveys can be programmed to force a response before continuing to the next item, show only one item at a time, and administer different items based on previous responses.
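
As a rough illustration of that kind of item control, here is a minimal Python sketch of a one-item-at-a-time flow with a forced response and simple branching. The item wording, response options, and branching rule are purely hypothetical and not tied to any particular survey tool or LMS.

# Minimal sketch: one item at a time, a forced (non-skippable) response,
# and branching so different respondents see different items.
def ask(prompt, choices):
    """Show a single item and refuse to advance until a valid choice is given."""
    while True:
        answer = input(f"{prompt} {choices}: ").strip().lower()
        if answer in choices:
            return answer
        print("A response is required before continuing.")  # forced response

def run_survey():
    responses = {}
    responses["used_lms"] = ask("Did you use the course LMS this week?", ["yes", "no"])
    # Branching: only respondents who used the LMS see the follow-up item.
    if responses["used_lms"] == "yes":
        responses["lms_hours"] = ask("Roughly how many hours did you spend in it?", ["<1", "1-3", ">3"])
    responses["satisfied"] = ask("Were you satisfied with the course so far?", ["yes", "no"])
    return responses

if __name__ == "__main__":
    print(run_survey())

The forced-response loop and single-item display are exactly the features that remove the paper respondent's ability to scan, skip, or revisit items, which is the nonequivalence the studies above debate.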

In addition, with advancements in web technologies, web surveys can be formatted to look more similar to paper-based surveys. Earlier computer-based surveys had awkward text-based interfaces, which could be perceived as more of an interview format (question, response, question, response) rather than a survey in which the respondent is able to scan, skip, or change items. Richman and colleagues argued that “the effect will depend on how the interface makes respondents feel; the more a computer instrument resembles a traditional instrument, the more the two instruments should produce similar responses” (p. 756).

So does it matter?

The point of perceived differences in survey methods may be moot. For many online courses, the students are at a distance, and an online survey is the only method available to collect feedback. All self-report methods involve some bias error. Fortunately, surveys are not the only method available for answering the questions posed at the start of this post.

Another method for evaluating online courses is web log analysis. Similar to consumer market research, web log analysis can yield crucial information on whether learners are actually using the activities/course work being implemented. I have long felt the combination of surveys with an analysis of usage patterns is key to evaluating the effectiveness of online learning courses. Metrics such as time spent, return visits, and time per page can provide instructors and instructional developers with a wealth of information on usage patterns. Some might argue too much information. Thus distilling the major trends and key themes is quite important when faced with a mountain of data.
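
To make that concrete, here is a minimal Python sketch of the kind of distillation I have in mind, run over a toy clickstream of (user, page, timestamp) records. The record layout, the 30-minute session cutoff, and the sample rows are all assumptions for illustration, not the export format of any real LMS.

from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical clickstream rows: (user, page, timestamp).
# A real LMS export would have its own schema; this structure is illustrative only.
log = [
    ("amy", "module1", "2009-06-01 10:00"),
    ("amy", "quiz1",   "2009-06-01 10:12"),
    ("amy", "module1", "2009-06-03 09:00"),
    ("ben", "module1", "2009-06-02 14:00"),
    ("ben", "module2", "2009-06-02 14:25"),
]

def parse(rows):
    return [(user, page, datetime.strptime(ts, "%Y-%m-%d %H:%M")) for user, page, ts in rows]

def usage_metrics(rows, session_gap=timedelta(minutes=30)):
    """Per-user return visits, page views, and rough time spent between clicks."""
    by_user = defaultdict(list)
    for user, page, ts in sorted(parse(rows), key=lambda r: (r[0], r[2])):
        by_user[user].append((page, ts))
    report = {}
    for user, hits in by_user.items():
        stamps = [ts for _, ts in hits]
        visits, time_spent = 1, timedelta(0)
        for prev, cur in zip(stamps, stamps[1:]):
            if cur - prev > session_gap:
                visits += 1                # a long gap counts as a new (return) visit
            else:
                time_spent += cur - prev   # time between clicks within one session
        report[user] = {
            "visits": visits,
            "page_views": len(hits),
            "distinct_pages": len({page for page, _ in hits}),
            "time_spent": time_spent,
        }
    return report

print(usage_metrics(log))

Even this toy report surfaces the usage patterns that matter for evaluation (who returns, how long they stay, how many pages they touch) without anyone having to wade through raw log lines.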

References:

Booth-Kewley, S., Edwards, J. E., & Rosenfeld, P. (1992). Impression management, social desirability, and computer administration of attitude questionnaires: Does the computer make a difference? Journal of Applied Psychology, 77, 562-566.

Evan, W. M., & Miller, J. R. (1969). Differential effects on response bias of computer vs. conventional administration of a social science questionnaire: An exploratory methodological experiment. Behavioral Science, 14, 215-227.

Kiesler, S., & Sproull, L. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50, 402-413.

Lautenschlager, G. J., & Flaherty, V. L. (1990). Computer administration of questions: More desirable or more social desirability? Journal of Applied Psychology, 75, 310-314.

Moon, Y. (1998). Impression management in computer-based interviews: The effects of input modality, output modality, and distance. Public Opinion Quarterly, 62, 610-622.

Paulhus, D. L. (1991). Measurement and control of response bias. In J. P. Robinson, P. R. Shaver, & L. S. Wrightsman (Eds.), Measures of personality and social psychological attitudes (pp. 17-59). San Diego, CA: Academic Press.

Rosenfeld, P., Booth-Kewley, S., & Edwards, J. E. (1993). Computer-administered surveys in organizational settings. American Behavioral Scientist, 36, 485-511.

Rosenfeld, P., Booth-Kewley, S., Edwards, J. E., & Thomas, M. (1996). Responses on computer surveys: Impression management, social desirability, and the Big Brother syndrome. Computers in Human Behavior, 12, 263-274.

Tourangeau, R., & Smith, T. (1996). Asking sensitive questions: The impact of data collection mode, question format, and question context. Public Opinion Quarterly, 60, 275-304.

Weisband, S., & Kiesler, S. (1996). Self disclosure on computer forms: Meta-analysis and implications. Proceedings of CHI ’96.

Whitener, E. M., & Klein, H. J. (1995). Equivalence of computerized and traditional research methods: The role of scanning, social environment, and social desirability. Computers in Human Behavior, 11, 65-75.