
1.1: What are Web Experiments?

Web experiments are experiments that are accessed via the World Wide Web (WWW) and conducted in the participant’s web browser. In recent years researchers have increasingly begun to conduct web experiments, a trend reflected in the emergence of web sites such as Psychological Research on the Net, Web Experimental Psychology Lab, Online Social Psychology Studies and Online Psychology Research UK, which provide links to a growing number of web experiments (see Reips, 2001). The use of the WWW as a means of conducting psychology experiments is chiefly due to the increased availability of internet access, faster internet connections (e.g., broadband) and the development of web-based technologies that allow a greater degree of interaction between the user and the web browser.


1.2: What are the Advantages of Web Experiments?

The main advantages of conducting web experiments can be classified as follows: sample size, sample diversity, sample specificity, participants’ motivation and experimenter’s absence.

Sample Size

Web experiments enable much larger numbers of participants to be run than lab experiments (Birnbaum, 2001; Hewson, 2003; Reips, 2000); this is especially true when the lab experiment is conducted at a small institution (Smith & Leigh, 1997). The reason is that the population from which web participants are drawn is vastly larger than the pool from which lab participants are recruited. Other factors, such as the ability to run participants simultaneously and at any time of day, further contribute to larger sample sizes in web experiments. Also, unlike web participants, lab participants may not always be available (e.g., during the undergraduates’ summer recess).

Sample Diversity

The demographics of participants in web experiments are more diverse than those of participants in lab experiments (e.g., see Krantz, Ballard & Scher, 1997). This is because lab participants are normally undergraduate psychology students, who tend to be predominantly female, to share the same educational background and to be of similar age (Smart, 1966; Schultz, 1972). Furthermore, the prevailing use of undergraduate psychology students may bias experimental findings, especially those of social psychology experiments, since undergraduates may have weaker attitudes, a less formulated sense of self, stronger cognitive skills and a stronger tendency to comply with authority than other adults (see Sears, 1986).

Sample Specificity

The demographics of participants in a web experiment depend partly upon the web sites and newsgroups used to recruit them. Hence participants from specific target populations can easily be obtained (Schmidt, 1997). For example, if you were investigating rape victims’ attitudes towards men, you could post a message requesting participants to newsgroups for rape victims (e.g., talk.rape).

Participants’ Motivation

Participants in web experiments tend to be more highly motivated than participants in lab experiments. This is because participation in lab experiments is not entirely voluntary: participation is often required to obtain course credit or to earn money. Web participants, whose participation is entirely voluntary, are therefore more likely to be “good participants” who actively engage in the experiment. However, it has been argued that the generalizability of findings from web experiments may in some cases be reduced, since participants in web experiments are exclusively volunteers, and volunteers have been found to differ from non-volunteers on some personality factors, such as agreeableness and openness (see Dollinger & Leong, 1993).

Experimenter’s Absence

In web experiments no experimenter is present, and hence the data collected is free from experimenter bias (Birnbaum, 2001; Piper, 1998; Reips, 2000). That is to say, the data obtained in web experiments will not be biased by the subtle cues to behave in accordance with the experimenter’s expectations that experimenters inadvertently give to participants (Rosenthal, 1966). From a financial point of view, the experimenter’s absence makes web experiments more cost-effective than lab experiments, especially when a large sample size is required, since an experimenter need not be paid to conduct each experimental session (see Reips, 2000). Additionally, the absence of an experimenter increases participants’ perceived anonymity, which may reduce the tendency to give socially desirable answers (Joinson, 1999). Finally, it may also facilitate the collection of personal information of a highly sensitive nature (Fawcett & Buhle, 1995).


1.3: What are the Disadvantages of Web Experiments?

The main disadvantages of conducting web experiments can be classified as follows: environmental variance, technical variance, dropout, multiple submission and hacking.

Environmental and Technical Variance

In web experiments the environment in which the experiment is conducted cannot be controlled or even, for that matter, known. Hence environmental factors such as background noise, lighting conditions, viewing angle and the presence of distractions will vary between participants (Hecht, Oesker, Kaiser, Civelek & Stecker, 1999). Technical factors, such as internet connection speed, may also vary between participants. Fortunately, the effects of differences in connection speed can be reduced by pre-loading experimental materials (McGraw, Tew & Williams, 2000) and by minimizing the file size of those materials (Hewson, Yule, Laurent & Vogel, 2002). However, other differences in hardware and software configuration may not be so easily resolved. As a result, the accuracy of reaction time measurements (Eichstaedt, 2001) and the precision of visual stimulus presentation (Schmidt, 2001) may vary with participants’ hardware and software configurations. Consequently, conducting a web experiment that employs response times as a dependent measure may seem problematic. However, in most cases measurement of response times to the nearest millisecond is not required. Moreover, the results of McGraw et al. (2000) suggest that an effect measured in tens of milliseconds can be successfully replicated in a web experiment.
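By way of illustration, here is a minimal browser-side sketch of pre-loading, written in TypeScript (the stimulus file names and the startExperiment entry point are hypothetical placeholders, not taken from any cited source):

```typescript
// Pre-load all stimulus images before the first trial, so that download
// time does not contaminate stimulus onset or response time measurements.
const stimulusUrls: string[] = ["stim1.png", "stim2.png", "stim3.png"]; // hypothetical stimuli

function preloadImages(urls: string[]): Promise<void[]> {
  return Promise.all(
    urls.map(
      (url) =>
        new Promise<void>((resolve, reject) => {
          const img = new Image();
          img.onload = () => resolve(); // image is now in the browser's cache
          img.onerror = () => reject(new Error(`failed to load ${url}`));
          img.src = url; // assigning src triggers the download
        })
    )
  );
}

function startExperiment(): void {
  // Present the first trial here; every stimulus is already cached locally.
}

// Begin only once every stimulus has been fetched.
preloadImages(stimulusUrls).then(startExperiment);
```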

Dropout

Unlike in lab experiments, participants frequently drop out of web experiments (Piper, 1998), and this can have an adverse effect upon the data obtained. First, and most obviously, dropout reduces the sample size and hence decreases the statistical power to detect an effect. However, since large sample sizes are easily obtained in web experiments, the reduction in sample size due to dropout is not in itself a serious problem. Second, Birnbaum (2001) and Reips (2000, 2002) maintain that the effect of dropout can easily mask the effect under investigation. For example, in an experiment with a between-subjects design, selective dropout in one condition could undermine the results. Birnbaum (2001) also argues that even if the overall dropout rates are the same across conditions, dropout can still hinder the detection of an effect if participants are dropping out of different conditions for different reasons. Third, the generalizability of the experiment’s findings will be compromised if participants with specific traits, characteristics or attitudes selectively drop out. Consequently, experiments employing within-subjects designs may also be adversely affected by dropout.

Multiple Submission and Hacking

It may be possible for participants in a web experiment to submit data repeatedly, and such multiple submissions may impair the quality of the data obtained. Fortunately, there are ways of preventing and detecting multiple submission. For instance, cookies (small files stored by the user’s web browser) can be used to record whether a user has already participated, so that the user can be prevented from resubmitting data (Reips, 2000). Alternatively, or additionally, participants’ IP addresses can be monitored so that only one submission per IP address is accepted (Klauer, Musch & Naumer, 2000). Although these techniques help to monitor and prevent multiple submission, they are by no means infallible and can be rendered ineffective by an individual who is determined to make multiple submissions. However, where a web experiment offers no financial incentive to participate (e.g., no lottery of cash prizes), there is little incentive to participate more than once and therefore little incentive to make multiple submissions intentionally.
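As a rough TypeScript sketch of the cookie technique (the cookie name and notice page are hypothetical; the complementary IP-address check must be performed on the server, by accepting only one submission per originating address):

```typescript
// Use a long-lived cookie to record that this browser has already
// taken part, and redirect repeat visitors to a notice page.
const COOKIE_NAME = "participated"; // hypothetical cookie name

function hasParticipated(): boolean {
  return document.cookie
    .split("; ")
    .some((entry) => entry.startsWith(`${COOKIE_NAME}=`));
}

function markParticipated(): void {
  const oneYearInSeconds = 60 * 60 * 24 * 365;
  // path=/ makes the cookie visible to every page on the site.
  document.cookie = `${COOKIE_NAME}=1; max-age=${oneYearInSeconds}; path=/`;
}

if (hasParticipated()) {
  window.location.href = "already-participated.html"; // hypothetical notice page
} else {
  markParticipated(); // or set the cookie on final submission instead
  // ...run the experiment...
}
```

Of course, a participant who clears the browser’s cookies defeats this check, which is why it is best combined with server-side monitoring.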

Precautions can also be taken to prevent hacking, for example by preventing public read and write access to the directory in which the experiment resides (Hewson et al., 2002; Reips, 2002). Other, more rigorous precautions can be taken, but again these methods are by no means infallible, and a determined hacker may be able to compromise the system. However, it seems rather unlikely that web experiments would be targeted by hackers, especially when they offer no financial incentive to participate and do not involve collecting personal information of a sensitive nature. Consequently, to date, hackers have not posed a problem for conducting web experiments (Musch & Reips, 2000).


1.4: Are Web Experiments Valid?

Since web experiments are conducted in less controlled environments than lab experiments, researchers tend to believe that the data obtained from web experiments is less valid than the data obtained from lab experiments. Consequently, several researchers (e.g., Birnbaum, 2001; Buchanan & Smith, 1999; Krantz et al., 1997; Reips, 2002) recommend that web experiments be validated by comparing the data they yield with data obtained in lab experiments. To date, the findings of research comparing web and lab data have been encouragingly positive. For instance, a survey by Musch and Reips (2000) reported that the data from 18 web experiments and their lab replications were highly consistent.

However, there is still need for caution, since relatively few studies (e.g., Pagani & Lombardi, 2000; Klauer et al., 2000; Krantz et al., 1997) have directly and statistically compared data obtained from a web experiment with data obtained from a lab experiment. Moreover, the studies that have made direct statistical comparisons have typically done so using correlational analyses. These analyses merely correlate the web experiment’s condition means with the lab experiment’s condition means and take no account of differences in variance between the two data sets. Hence using correlational analyses to compare web and lab data is inadvisable, because they may mask considerable differences in variability: for example, if the web and lab condition means happened to coincide, the correlation would be perfect even if the web data were several times more variable than the lab data.

A statistically more powerful technique for comparing data obtained in a web experiment with data obtained in a lab experiment is to compute a factorial ANOVA (or a hierarchical loglinear (HILOG) analysis if the dependent measure is categorical) in which study type (web or lab) is treated as a between-subjects variable. If the factorial ANOVA shows no statistically significant interaction between the experimental variable and the study type variable, then the data from the web and lab experiments can be regarded as equivalent. But because equivalence is assumed when the null hypothesis is accepted, there remains the distinct possibility that equivalence could be falsely assumed owing to a lack of statistical power to detect the interaction. Hence validating web experiments may prove problematic, especially when the experimental effect under investigation is small (see Brand & Hahn, 2003).
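To make the logic explicit, the comparison amounts to fitting the standard two-factor between-subjects model (a sketch, assuming a single experimental factor crossed with study type):

$$Y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk}$$

where $\alpha_i$ is the effect of level $i$ of the experimental variable, $\beta_j$ is the effect of study type $j$ (web or lab), $(\alpha\beta)_{ij}$ is their interaction and $\varepsilon_{ijk}$ is random error. Equivalence is claimed when the test of $H_0\colon (\alpha\beta)_{ij} = 0$ for all $i$ and $j$ fails to reach significance, which is precisely why low power to detect the interaction makes the claim risky.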

So in some cases effective validation of web experiments may be unfeasible, but it would be imprudent not to conduct a web experiment simply because it could not be effectively validated. Moreover, given the drawbacks of data obtained in lab experiments, in particular the severe lack of statistical power (see Sedlmeier & Gigerenzer, 1989), the limited participant demography and participants’ general lack of motivation, we might wonder why the lab experiment should be the gold standard in the first place!


1.5: References

Birnbaum, M. H. (2001). Introduction to Behavioral Research on the Internet. Upper Saddle River, NJ: Prentice Hall.

Brand, A., & Hahn, U. (2003). Evaluating the validity of internet experiments: Verbal overshadowing as a case study. Unpublished manuscript.

Buchanan, T., & Smith, J. L. (1999). Research on the internet: Validation of a World-Wide Web mediated personality scale. Behavior Research Methods, Instruments and Computers, 31, 565-571.

Dollinger, S. J., & Leong, F. T. (1993). Volunteer bias and the five-factor model. Journal of Psychology, 127, 29-36.

Eichstaedt, J. (2001). An inaccurate-timing filter for reaction time measurement by JAVA applets implementing Internet-based experiments. Behavior Research Methods, Instruments and Computers, 33, 179-186.

Fawcett, J., & Buhle, E. L. (1995). Using the internet for data collection: An innovative electronic strategy. Computers in Nursing, 13, 273-279.

Hecht, H., Oesker, M., Kaiser, A., Civelek, H., & Stecker, T. (1999). A perception experiment with time-critical graphics animation on the World-Wide Web. Behavior Research Methods, Instruments and Computers, 31, 439-445.

Hewson, C., Yule, P., Laurent, D., & Vogel, C. (2002). Internet Research Methods: A Practical Guide for the Social and Behavioural Sciences. London: Sage Publications Ltd.

Hewson, C. (2003). Conducting research on the Internet. Psychologist, 16, 290-293.

Joinson, A. (1999). Social desirability, anonymity, and Internet-based questionnaires. Behavior Research Methods, Instruments and Computers, 31, 433-438.

Klauer, K. C., Musch, J., & Naumer, B. (2000). On belief bias in syllogistic reasoning. Psychological Review, 107, 852-884.

Krantz, J. H., Ballard, J., & Scher, J. (1997). Comparing the results of laboratory and World-Wide Web samples on the determinants of female attractiveness. Behavior Research Methods, Instruments and Computers, 29, 264-269.

McGraw, K. O., Tew, M. D., & Williams, J. E. (2000). The integrity of web-delivered experiments: Can you trust the data? Psychological Science, 11, 502-506.

Musch, J., & Reips, U. D. (2000). A brief history of web experimenting. In M. H. Birnbaum (Ed.), Psychological experiments on the internet (pp. 61-87). San Diego, CA: Academic Press.

Pagani, D., & Lombardi, L. (2000). An intercultural examination of facial features communicating surprise. In M. H. Birnbaum (Ed.), Psychological experiments on the internet (pp. 169-194). San Diego, CA: Academic Press.

Piper, A. I. (1998). Conducting social science laboratory experiments on the World Wide Web. Library & Information Science Research, 20, 5-21.

Reips, U. D. (2000). The web experiment method: Advantages, disadvantages and solutions. In M. H. Birnbaum (Ed.), Psychological experiments on the internet (pp. 89-117). San Diego, CA: Academic Press.

Reips, U. D. (2001). The Web Experimental Psychology Lab: Five years of data collection on the Internet. Behavior Research Methods, Instruments and Computers, 33, 201-211.

Reips, U. D. (2002). Standards for Internet-based experimenting. Experimental Psychology, 49, 243-256.

Rosenthal, R. (1966). Experimenter effects in behavioral research. New York: Appleton-Century-Crofts.

Schmidt, W. C. (1997). World-Wide Web survey research made easy with WWW Survey Assistant. Behavior Research Methods, Instruments and Computers, 29, 303-305.

Schmidt, W. C. (2001). Presentation accuracy of web animation methods. Behavior Research Methods, Instruments and Computers, 33, 187-200.

Schultz, D. P. (1972). The human subject in psychological research. In C. L. Sheridan (Ed.), Readings for experimental psychology (pp. 263-282). New York: Holt.

Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow data base on social psychology's view of human nature. Journal of Personality and Social Psychology, 51, 515-530.

Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105, 309-316.

Smart, R. G. (1966). Subject selection bias in psychological research. Canadian Psychologist, 7, 115-121.

Smith, M. A., & Leigh, B. (1997). Virtual subjects: Using the Internet as an alternative source of subjects and research environment. Behavior Research Methods, Instruments and Computers, 29, 496-505.


