Questioning the Questions of
Instructional Technology Research

Thomas C. Reeves, Ph.D.
Instructional Technology
607 Aderhold Hall
The University of Georgia
Athens, GA 30602-7144 USA
E-Mail: treeves@moe.coe.uga.edu

My First Flame

In April 1994, I was "flamed" on the Internet, a '90s phenomenon that has been portrayed in publications as diverse as The New Yorker (Seabrook, 1994) and The Chronicle of Higher Education (Lemisch, 1995). Although what exactly constitutes a "flame" in the rapidly evolving "metaverse" (Stephenson, 1993) is a matter of much debate, I can vividly recall the feelings of shock and anger that swept through me when I read the note calling me a "jerk" on a "listserv" shared by hundreds of members around the world.

It all began last spring when I read two queries from doctoral students on the Qualitative Research for the Human Sciences listserv. Both students came from large public institutions of higher education, one in the USA and the other in Canada. The first student wrote that she intended to focus her dissertation research on the quality of "discourse" that takes place in cafes and coffee shops located inside bookstores. She complained that she had found no "literature" on this topic and asked the listserv participants for some guidance. The second student announced that he was preparing a dissertation prospectus centered on the question of how people learned about opportunities to take SCUBA diving lessons and what motivated them to register for such courses. He also sought directions to relevant literature and advice from the listserv membership.

After pondering these queries, I posted a message asking whether faculty members at taxpayer-supported universities have a moral responsibility to guide their students toward "socially responsible" research questions. In my posting, I suggested that in the face of problems such as adult illiteracy, attacks on public education, "at-risk" students, homelessness, AIDS, and the like, faculty members should attempt to inspire in students a dedication to research that would "make a difference."

Shortly after posting my note, the graduate student who had sought help with his SCUBA query "flamed" me with his "jerk" note, in which he went on to criticize my "attack" on his freedom to address whatever research questions interested him, especially given that he was a taxpayer as well. A small grass fire of flames then erupted as several listserv members castigated the student for calling me a jerk, some agreed with my critique, and others defended the perspective that the social relevance of doctoral dissertation research (or any educational research) was irrelevant. No resolution of this issue was reached on the listserv, but I was especially impressed by the response of an education professor from a large land grant university in the USA who agreed with my criticism, but went on to suggest that much of the research he had read in the field of instructional technology could be subjected to a similar critique. This prompted me to ponder the social relevance of research in our field.

Is Instructional Technology Research Socially Relevant?

Social relevance is an issue that is obviously subject to much debate. One's age, race, gender, socioeconomic status, education, religion, political allegiance, and many other factors are likely to influence one's interpretation of the social relevance of any given research study. Nevertheless, for the sake of this analysis, I will attempt to define social relevance with respect to scientific inquiry. My definition is based upon the following principles that guide scientific research (derived from Casti, 1989):

  • Science is an ideology that consists of a cognitive structure concerning the nature of reality together with processes of inquiry, verification, and peer review.

  • Views of reality differ according to one's philosophy of science, e.g., realism maintains that an objective reality exists, instrumentalism asserts that reality is the readings noted on measuring instruments, and relativism claims that reality is what the community says it is.

  • Scientific research is a social activity that has certain standards and norms, e.g., it should not intentionally harm humans, and its findings must be replicable by other researchers.

  • Socially responsible research in education adheres to the basic principles listed above while at the same time it addresses problems that detract from the quality of life for individuals and groups in society, especially those problems related to learning and human development.

In the view of some, instructional technology research might lay claim to a blanket imprimatur with respect to being "socially responsible." After all, at some level, all instructional technology research can be said to focus on questions of how people learn and perform, especially with respect to how learning and performance are influenced, supported, or perhaps even caused by technology. As long as research is focused on learning and performance problems, and adheres to the principles listed above, it would seem to be socially responsible.

Others in the research community argue that concern for the social responsibility of research in instructional technology or any other field is ludicrous. They maintain that the goal of research is knowledge in and of itself, and that whether research is socially responsible is a question that lies outside the bounds of science (cf., Carroll, 1973). In my experience, researchers in the natural sciences such as biology and chemistry do not often concern themselves with the relevance question, but this is a debate that has raged for decades among educational researchers.1 For example, as reported by Farley (1982), Nate Gage, a past president of the American Educational Research Association (AERA), has been a staunch defender of the notion that the goal of basic research in education is simply "more valid and more positive conclusions" (p. 12), whereas another past president of AERA, Robert Ebel, proclaimed:

"....the value of basic research in education is severely limited, and here is the reason. The process of education is not a natural phenomenon of the kind that has sometimes rewarded scientific investigation. It is not one of the givens in our universe. It is man-made, designed to serve our needs. It is not governed by any natural laws. It is not in need of research to find out how it works. It is in need of creative invention to make it work better."

In my opinion, Ebel's stance (with which I agree) is directly relevant to the issue of socially responsible research in instructional technology. I believe that the social relevance of research questions that are largely focused on understanding "how" education works without substantial concern for how this understanding makes education better is weak. On the other hand, the social relevance of research questions that are largely focused on making education better, and which in the process may also help us understand more about how education works, is strong.

Most of the research in instructional technology is conducted on the basis of the assumption that education is governed by natural laws and therefore can be studied in a manner similar to other natural sciences such as chemistry and biology. As my students can attest, I often question this assumption in my teaching and advising; I have done likewise in my published scholarship (cf. Reeves, 1986; Reeves, 1993). As instructional technologists, we have made and continue to make the wrong assumptions about the nature of the phenomena we study and hence we ask the wrong questions.

Of course, I am not the first person in the field to express this point of view. I adopted the title of this paper from one published twenty-seven years ago by Keith Mielke (1968) titled "Questioning the Questions of ETV Research." Other critics of the questions and methods of research in instructional technology include Lumsdaine (1963), Schramm (1977), Clark (1983), and Salomon (1991). The debate about the nature of reality and the conduct of research in our field continues as evidenced by the recent spate of articles focused on the question of "Does media influence learning?" (Clark, 1994a, b; Jonassen, Campbell, & Davidson, 1994; Kozma, 1994a, b; Morrison, 1994; Reiser, 1994; Ross, 1994a, b; Shrock, 1994; Tennyson, 1994; Ullmer, 1994). However, few critics have dealt directly with questions of whether instructional technology research is, can be, or should be socially responsible. That is the major purpose of this paper.

The State of Instructional Technology Research

Before returning to the issue of the social relevance of instructional technology research, it is necessary to examine the state of research in the field today. To accomplish this, I reviewed the contents of two of the primary research journals in the field, the Educational Technology Research and Development (ETR&D) journal and the Journal of Computer-Based Instruction (JCBI), over the periods 1989-94 for ETR&D and 1988-93 for JCBI.2 For this review, I originally intended to use a research article classification scheme developed by Dick and Dick (1989) (see Figure 1), but my initial attempts to categorize articles using that scheme led to several difficulties, especially in terms of classifying studies that were primarily interpretivist in intent and naturalistic in method, e.g., Neuman (1991).

FIGURE 1

After reflection and consultations with several research experts, I modified the classification scheme.3 This new classification scheme represents an effort to distinguish the goals of research from the methods of research. First, I propose that most research studies in instructional technology can be classified according to the six research goals represented in Figure 2. This scheme is partially based upon discussions of research "paradigms" that have dominated educational research literature in recent years. For example, according to Soltis (1992), there are currently "three major paradigms, or three different ways of investigating important aspects of education" (p. 620) used in educational research: 1) the positivist or quantitative paradigm, 2) the interpretivist or qualitative paradigm, and 3) the critical theory or neomarxist paradigm. Although the "paradigm debate" literature is fascinating, I do not feel that the three categories presented by Soltis (1992) and others (e.g., Schubert & Schubert, 1990) capture the full breadth of research goals in the field of instructional technology.

FIGURE 2

Second, given the aforementioned desire to separate the goals of research studies from the methodologies employed in them, I propose the methodology classification scheme represented in Figure 3. Of course, there are numerous methods available to researchers in instructional technology (cf., Driscoll, 1995), but for the sake of simplicity, these five methodological groupings provide sufficient discrimination to allow the analysis represented below.

FIGURE 3

The combination of the goal classification and the methods classification schemes yields a matrix of research goals by research methods. Figure 4 presents my analysis of the research articles published in ETR&D (1989-1994). There were one hundred and four articles published in the research section of ETR&D in the six years from 1989 through 1994.4 Not every article could be classified according to the classification matrix illustrated in Figure 4. Six "methodological articles" (presenting a new method or procedure for carrying out research) and three "professional articles" (analyzing the state of the profession of instructional technology) are not included in Figure 4.

FIGURE 4

Figure 5 presents my analysis of the research articles published in JCBI (1988-1993). There were one hundred and twenty-nine articles published in JCBI from 1988 through 1993. Five "methodological articles" and one "professional article" are not included in Figure 5.

FIGURE 5

There are some obvious trends in the articles that appeared in ETR&D and JCBI during the respective review periods. First, the most common type of article in either publication is empirical in intent and quantitative in method. Thirty-nine articles (38% of the total 104) in ETR&D and fifty-six articles (43% of the total 129) in JCBI fall into the "empirical-quantitative" cell of the matrix.
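
To make the arithmetic behind these percentages concrete, the sketch below (in Python, my own illustration rather than part of the original analysis) tallies per-article classifications into the goal-by-method matrix and reports each cell as a count and a percentage. The goal and method labels are inferred from the prose, since Figures 2 and 3 are not reproduced in this text, and the article codings are illustrative stand-ins for the actual data.

    from collections import Counter

    # Sketch of the goal-by-method tabulation behind Figures 4 and 5. The
    # category labels are inferred from the prose; the codings below are
    # illustrative stand-ins, not the actual data.

    def tally(classifications):
        """Count (goal, method) pairs; report each cell as n (% of total)."""
        counts = Counter(classifications)
        total = len(classifications)
        for (goal, method), n in counts.most_common():
            print(f"{goal}-{method}: {n} articles ({n / total:.0%} of {total})")

    # Example mirroring the ETR&D numbers quoted above: 39 of 104 articles
    # were empirical-quantitative and 9 were evaluative in intent; the
    # remaining cell here is made up to complete the 104.
    etrd = ([("empirical", "quantitative")] * 39
            + [("evaluation", "quantitative")] * 9
            + [("theoretical", "literature review")] * 56)
    tally(etrd)

Run on these codings, the sketch prints "empirical-quantitative: 39 articles (38% of 104)", matching the figures quoted above.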

The next largest subset of articles in these publications can be classified as theoretical in intent and employing literature review as the primary method. I was liberal in my classification of articles into this category. For example, I assigned all of the aforementioned media debate articles into this classification (cf., Clark, 1994a, b; Kozma, 1994a, b). The extent to which literature review methods were actually used in these articles varies greatly.

Another trend that stands out is the paucity of interpretivist articles (one in ETR&D and three in JCBI) during this time. This seems surprising given the numerous applications of the "Constructivist-Hermeneutic-Interpretivist-Qualitative Paradigm" in other fields of education (cf., Eisner, 1991). Although Neuman (1989), Driscoll (1995), Robinson (1995) and others promote interpretivist approaches to research in instructional technology, interpretivist research reports rarely find their way into our publications.

Developmental research studies are also scarce in each of these publications. With respect to ETR&D, it may be that most developmental research studies appear in the development section of the journal, but this is a hypothesis that has not been investigated. Other possible explanations are that instructional technologists rarely conduct developmental research, that those who do have too little time to report it, or that the review panels for the journals do not recognize this approach as legitimate research.

The complete absence of any articles in these journals that are postmodern in intent or that employ critical theory as a methodology is disappointing, but not too surprising. First, Hlynka and Belland's (1991) volume on the application of postmodern criticism to instructional technology may not be widely known. Second, the gatekeepers of ETR&D and JCBI appear to have strong preferences for empirical research employing quantitative methods. They may be unwilling or unable to entertain such radical departures from standard research methods as have been proposed by Yeaman (1994) and other critical theorists.

An interesting difference between the two journals is the percentage of articles that are evaluative in intent. Only nine (9%) of the articles in ETR&D were evaluation reports during this period whereas thirty-seven (29%) of the articles in JCBI were evaluations. This difference may be explained by evaluation articles in ETR&D being primarily relegated to the development section of the journal. As above, this hypothesis has not been investigated.

The Problem of Pseudoscience

A deeper analysis of those studies published in ETR&D and JCBI which are empirical in intent and quantitative in method yields a dismal picture of the quality of contemporary research in our field. In an earlier article published in the now defunct JCBI (Reeves, 1993), I presented an analysis of five studies published in refereed journals from the literature on learner control (Arnone & Grabowski, 1992; Kinzie & Sullivan, 1989; López & Harper-Meriniak, 1989; McGrath, 1992; Ross, Morrison, & O'Dell, 1989). I characterized the research reported in these articles as pseudoscience. Figure 6 summarizes the characteristics of pseudoscience in the field of instructional technology.

FIGURE 6

Ironically, the learner control articles analyzed in Reeves (1993) are hardly the worst examples of pseudoscience in our field. My analysis of recent volumes of ETR&D and JCBI indicates that pseudoscience continues to dominate research in the field of instructional technology. A conservative review of the thirty-nine "empirical-quantitative" studies reported in ETR&D indicates that twenty-eight of them (72%) can be identified as examples of pseudoscience in that they possess two or more of the characteristics in Figure 6. In JCBI, thirty-four (61%) of the fifty-six "empirical-quantitative" studies published during this period suffer two or more signs of pseudoscience. This analysis is evidence of a research malaise of epidemic proportions.
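
The screening rule applied here is simply a threshold count: a study is flagged when it exhibits at least two of the Figure 6 characteristics. A minimal sketch follows (again my own Python illustration); since Figure 6 is not reproduced in this text, the characteristic names below are hypothetical stand-ins, and only the threshold arithmetic reflects the analysis.

    # Minimal sketch of the "two or more characteristics" screen. The
    # characteristic names are hypothetical; Figure 6 is not shown here.

    def pseudoscience_share(studies, threshold=2):
        """Count the studies exhibiting at least `threshold` characteristics
        and return that count together with its share of all studies."""
        flagged = sum(1 for flags in studies if len(flags) >= threshold)
        return flagged, flagged / len(studies)

    # Illustrative codings for the 39 ETR&D empirical-quantitative studies:
    # 28 with two or more flagged characteristics, 11 with fewer.
    studies = ([{"weak theoretical grounding", "very brief treatment"}] * 28
               + [{"weak theoretical grounding"}] * 11)
    count, share = pseudoscience_share(studies)
    print(f"{count} of {len(studies)} studies ({share:.0%}) flagged")

With these codings the sketch prints "28 of 39 studies (72%) flagged", the ETR&D proportion reported above.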

The question inevitably arises with respect to how so many pseudoscience studies get published. At least part of the answer rests in the incestuous nature of the relationships among the people conducting these studies and the people charged with peer review of these submissions. The review boards of these journals include many of the same people whose research studies exemplify pseudoscience. Not only does the insular nature of the review process assure these researchers of a venue for their pseudoscience reports, but it also at least partially explains the underrepresentation of alternative approaches to inquiry.

A Question of Relevance

The relevance of pseudoscience research studies is a moot point. Even if the researchers themselves subscribe to the highest ideals of scientific inquiry, research so flawed has little relevance for anyone other than the people who conduct and publish it. To understand the steady flow of pseudoscience in instructional technology, it is necessary to look at its source. Most of it emanates from colleges and schools of education that have graduate programs in instructional technology. As Kramer (1991) points out in Ed School Follies, these institutions are "intent on proving that education is an academic discipline with its own subject matter worthy of a place alongside other university schools and departments" (p. 8). The faculty in these programs are subject to the same "publish or perish" pressure as their colleagues in arts and sciences. They quickly learn that it is the number of refereed publications they can amass, not the relevance or value of their research, that really matters when they come up for tenure and promotion.

Needless to say, this problem is hardly limited to instructional technology programs. Colleges and schools of education reward pseudoscience in every discipline from early childhood education through vocational education. A new report issued by the Holmes Group, "Tomorrow's Schools of Education," calls for tenure and promotion guidelines to be revamped so that professors are rewarded less for research and publication and more for work in the public schools (Nicklin, 1995). If such a radical shift in the reward structure could be accomplished, I cannot believe that we would continue to conduct pseudoscience when we could be rewarded for making a difference in the schools where the needs are so great.

Frankly, the likelihood of changing the reward structure within universities seems at best remote. However, as instructional technologists, we do not have to wait for such a change to occur. Another way of increasing the relevance of instructional technology would be to call a moratorium on our efforts to find out how instructional technology can affect learning through empirical research. Instead, we should turn our attention to making education work better. As Cronbach (1975) pointed out two decades ago, our empirical research may be doomed to failure because we simply cannot pile up generalizations fast enough to adapt our instructional treatments to the myriad of variables inherent in any given instance of instruction. It would seem that we stand a better chance of having a positive influence on educational practice if we engage in developmental research situated in schools with real problems.

Can reports of developmental research be published? Of course! After all, as noted above, the same people who conduct the research are the gatekeepers who determine what is accepted for publication in our most important journals. We are all in this together, and if we want to fundamentally change the nature of our game we can. At the same time, we can still meet the frustrating, but practical, requirements of the larger academic game by providing our scholars with an outlet in refereed publications, albeit ones that have been radically improved in terms of goals, methods, and relevance.

Steps Toward Socially Responsible Research

It is not enough to criticize research in instructional technology as characterized by pseudoscience and social irrelevance. Alternatives to the old ways must be found. Some may demur, believing that instructional technologists are incapable of conducting valid, socially relevant research, and that they should stick to instructional design and evaluation, leaving educational research to cognitive psychologists or practitioners better equipped to conduct it. I disagree. I think we can and will conduct meaningful research provided we acknowledge the sterility of our existing research base and build anew from a foundation of sound learning theory and a rededicated concern for the social impact of our research. What would be the nature of a new socially relevant research agenda? Two recent studies that represent a change in direction toward developmental research are the dissertation study conducted by Idit Harel (1991) at M.I.T. and the ongoing research of Richard Lehrer (1993) and his associates at the University of Wisconsin.

Harel's (1991) Instructional Software Design Project (ISDP) represents a unique effort to use programming as a cognitive tool within a software design context. Harel's ISDP combines Papert's "constructionist" theory (1993) with Perkins's "knowledge as design" pedagogy (1986). In her dissertation research, seventeen fourth grade students used Logo for a semester to create software products that were intended to teach fractions to third grade students. Her study combined quantitative, qualitative, and comparative research methods to investigate the effects of this "learners as designers" approach.

Harel reports that the fourth grade students spent an average of seventy hours working on their software design projects. The actual nature of the software the students designed was open, but there were two requirements for students in the program: 1) writing in a "Designer's Notebook" every day, and 2) attending periodic "Focus Sessions" about software design, Logo programming, and fractions. A teacher and the researcher were available at all times to help the students with their design efforts. Although each of the students produced a separate software product, collaboration among the students was encouraged.

Harel compared the differences in Logo skills and fractions knowledge between the seventeen students in the ISDP and thirty-four other students in two classes who were studying Logo and fractions via "a traditional teaching method" (p. 263). No significant differences were found in pretests among the three classes. Harel reports that "In general, the 17 children of the experimental class did better than the other 34 children on all posttests (Fractions and Logo)" (p. 272). Although not all differences were statistically significant, the general trend was quite positive in terms of specific learning outcomes as measured by multiple measures including paper-and-pencil tests, computer exercises, video-taped observations, and interviews.

The major part of Harel's (1991) study is a detailed description of the activities and metacognition of one student, "Debbie," over the four-month period of the project. Harel wrote that her detailed analysis of Debbie's work, as well as her observations of other students, indicated that "Throughout ISDP, the students were constantly involved in metacognitive acts: learning by explaining, creating, and discussing knowledge representations, finding design strategies, and reflecting on all of the above" (p. 59). In addition to positive cognitive effects in terms of metacognition, Harel concluded that the ISDP students acquired enhanced cognitive flexibility, better control over their problem-solving, and greater confidence in their thinking abilities. She notes, however, that the study did not include any direct measures of thinking skills; her conclusions rest on her own interpretations of the students' metacognition and problem-solving processes, based upon observations and analysis of documentation such as their Designer's Notebooks.

Lehrer (1993) describes the development, use, and results of a hypermedia construction tool called HyperAuthor that eighth graders used to design their own lessons about the American Civil War. This approach is based upon the cognitive learning theory that knowledge is a process of design and not something to be transmitted from teacher to student (Perkins, 1986). Lehrer's students were engaged in "hyper-composition" by designing their own hypermedia. In this mode, learners transform information into dimensional representations, determine what is important and what is not, segment information into nodes, link the information segments by semantic relationships, and decide how to represent ideas. This is a highly motivating process because authorship results in ownership of the ideas in the hypermedia (Jonassen, in press).

Lehrer's subjects were high and low ability eighth graders who worked at the hypermedia construction tasks for one class period of 45 minutes each day over a period of several months. The students worked in the school's media center where they had access to a color Macintosh computer, scanner, sound digitizer, HyperAuthor software, and print and non-print resources about the Civil War. An instructor was available to coach students in the conceptualization, design, and production of hypermedia. Students created programs reflecting their unique interests and individual differences. For example, they created hypermedia about the role of women in the Civil War, the perspectives of slaves toward the war, and "not-so-famous people" from that period.

According to Lehrer, "The most striking finding was the degree of student involvement and engagement" (p. 209). Both high and low ability students became very task-oriented, increasingly so as they gained more autonomy and confidence with the mindtools. At the end of the study, students in the hypermedia group and a control group of students who had studied the Civil War via traditional classroom methods during the same period of time were given an identical teacher-constructed test of knowledge. No significant test differences were found. Lehrer conjectured that "these measures were not valid indicators of the extent of learning in the hypermedia design groups, perhaps because much of what students developed in the design context was not anticipated by the classroom teacher" (p. 218). However, a year later, when students in the design and control groups were interviewed by an independent interviewer unconnected with the previous year's work, important differences were found. Students in the control group could recall almost nothing about the historical content, whereas students in the design group displayed elaborate concepts and ideas that they had extended to other areas of history. Most importantly, although students in the control group defined history as the record of the facts of the past, students in the design class defined history as a process of interpreting the past from different perspectives. In short, the hypermedia "design approach led to knowledge that was richer, better connected, and more applicable to subsequent learning and events" (p. 221).

A New Beginning

What a contrast exists between the Harel (1991) and Lehrer (1993) studies and the morass of pseudoscience endemic in our field! In the former, pedagogical models grounded in robust learning theories have been identified, and subsequently, powerful technologies have been used to implement these models. In the latter, the power of various forms of technology to instruct is assumed, and reductionist experiments are conducted to detect its effects. Salomon's (1991) landmark paper about analytic and systemic approaches to educational research highlights this contrast. Salomon argues that the contrast transcends the "basic versus applied" or "quantitative versus qualitative" arguments that so often dominate debates about the relevance of educational research.

Salomon (1991) concludes that the analytic and systemic approaches are complementary, arguing that "the analytic approach capitalizes on precision while the systemic approach capitalizes on authenticity" (p. 16). While I agree with this in theory, the dominance of pseudoscience in instructional technology invalidates this complementarity in practice. The ugly truth is that those of us who engage in analytic research approaches consistently violate many of the basic premises of this paradigm, especially with respect to the testing of meaningful hypotheses derived from strong theory (Reeves, 1993). Although we may eventually be able to conduct valid, socially responsible analytic studies in instructional technology, that time has not yet arrived.

Is instructional technology research socially responsible? At the present time, it is not. Are we asking the wrong questions? For the most part, yes. Can we change this sad state of affairs? Of course, if we have the will! Again, Salomon (1991) points the way. A major benefit of systemic research in education is that it yields new questions and nurtures the development of new theory. The aforementioned moratorium on analytic studies in our field could give us the theoretical foundations for a socially relevant analytic research agenda early in the 21st Century. There are hopeful signs as indicated by the studies of Harel (1991) and Lehrer (1993) and the methodological prescriptions of Neuman (1989), Newman (1990), and Salomon (1991).

Part of our problem stems from the "mindlessness" that is endemic in so much of our professional and personal lives as we near the 21st Century. The social psychologist Ellen Langer documents the terrible costs of mindless behavior in education, health care, and business in her 1989 book, Mindfulness. She writes:

"When we are behaving mindlessly, that is to say, relying on categories drawn in the past, endpoints to development seem fixed. We are then like projectiles moving along a predetermined course. When we are mindful, we see all sorts of choices and generate new endpoints. Mindful involvement in each episode of development makes us freer to map our own course. (pp. 96-97)"

The demise of JCBI, the recurring "influence of media" debate, and the prevalence of pseudoscience in our field are all signals that we need to become more mindful about our research. If we continue as before, mindlessly conducting pseudoscience, the obsolescence of our field per se is a likely outcome. Already, the most exciting learning and performance environments are not coming out of Departments of Instructional Technology (cf., Cognition and Technology Group at Vanderbilt, 1992). On the other hand, as Langer emphasizes, mindfulness opens up all kinds of possibilities. Let us seize this opportunity to stop being pawns in "someone else's costly construction of reality" (p. 28) and realize that we, and we alone, can assure the validity and social relevance of research in instructional technology.

References

Arnone, M. P., & Grabowski, B. L. (1992). Effects on children's achievement and curiosity of variations in learner control over an interactive video lesson. Educational Technology Research and Development, 40(1), 15-27.

Carroll, J. B. (1973). Basic and applied research in education: Definitions, distinctions, and implications. In H. S. Broudy, R. H. Ennis, & L. I. Krimerman (Eds.), Philosophy of educational research (pp. 108-121). New York: John Wiley & Sons.

Casti, J. L. (1989). Paradigms lost: Images of man in the mirror of science. New York: William Morrow.

Clark, R. E. (1983). Reconsidering research on learning with media. Review of Educational Research, 53(4), 445-459.

Clark, R. E. (1994a). Media will never influence learning. Educational Technology Research and Development, 42(2), 21-29.

Clark, R. E. (1994b). Media and method. Educational Technology Research and Development, 42(3), 7-10.

Cognition and Technology Group at Vanderbilt (1992). The Jasper experiment: An exploration of issues in learning and instructional design. Educational Technology Research and Development, 40(1), 65-80.

Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116-126.

Dick, W., & Dick, W. D. (1989). Analytical and empirical comparisons of the Journal of Instructional Development and Educational Communication and Technology Journal. Educational Technology Research and Development, 37(1), 81-87.

Driscoll, M. P. (1995). Paradigms for research in instructional systems. In G. L. Anglin (Ed.), Instructional technology: Past, present, and future (pp. 322-329). Englewood, CO: Libraries Unlimited.

Eisner, E. W. (1991). The enlightened eye: Qualitative inquiry and the enhancement of educational practice. New York: Macmillan.

Farley, F. H. (1982). The future of educational research. Educational Researcher, 11(8), 11-19.

Harel, I. (Ed.). (1991). Children designers: Interdisciplinary constructions for learning and knowing mathematics in a computer-rich school. Norwood, NJ: Ablex Publishing.

Hlynka, D., & Belland, J. C. (Eds.). (1991). Paradigms regained: The uses of illuminative, semiotic and post-modern criticism as modes of inquiry in educational technology: A book of readings. Englewood Cliffs, NJ: Educational Technology Publications.

Jonassen, D. H. (in press). Mindtools for schools. New York: Macmillan.

Jonassen, D. H., Campbell, J. P., & Davidson, M. E. (1994). Learning with media: Restructuring the debate. Educational Technology Research and Development, 42(2), 31-39.

Kinzie, M. B., & Sullivan, H. J. (1989). Continuing motivation, learner control, and CAI. Educational Technology Research and Development, 37(2), 5-14.

Kozma, R. B. (1994a). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.

Kozma, R. B. (1994b). A reply: Media and methods. Educational Technology Research and Development, 42(3), 11-14.

Kramer, R. (1991). Ed school follies. New York: The Free Press.

Langer, E. J. (1989). Mindfulness. Reading, MA: Addison-Wesley.

Leatherman, C. (1995, February 3). Credentials on trial. The Chronicle of Higher Education, pp. A14, A16.

Lehrer, R. (1993). Authors of knowledge: Patterns of hypermedia design. In S. P. Lajoie & S. J. Derry (Eds.), Computers as cognitive tools (pp. 197-227). Hillsdale, NJ: Lawrence Erlbaum.

Lemisch, J. (1995, January 20). The First Amendment is under attack in cyberspace. The Chronicle of Higher Education, p. A56.

López, C. L., & Harper-Meriniak, M. (1989). The relationship between learner control of CAI and locus of control among Hispanic students. Educational Technology Research and Development, 37(4), 19-28.

Lumsdaine, A. A. (1963). Instruments and media of instruction. In N. Gage (Ed.), Handbook of research on teaching. Chicago: Rand McNally.

McGrath, D. (1992). Hypertext, CAI, paper, or program control: Do learners benefit from choices? Journal of Research on Computing in Education, 24(4), 513-532.

Mielke, K. W. (1968). Questioning the questions of ETV research. Educational Broadcasting Review, 2, 6-15.

Morrison, G. R. (1994). The media effects question: "Unresolvable" or asking the right question. Educational Technology Research and Development, 42(2), 41-44.

Neuman, D. (1991). Learning disabled students' interactions with commercial courseware: A naturalistic study. Educational Technology Research and Development, 39(1), 31-49.

Neuman, D. (1989). Naturalistic inquiry and computer-based instruction: Rationale, procedures, and potential. Educational Technology Research and Development, 37(3), 39-51.

Newman, D. (1990). Opportunities for research on the organizational impact of school computers. Educational Researcher, 19(3), 8-13.

Nicklin, J. L. (1995, February 3). Education-school group issues scathing, self-critical report. The Chronicle of Higher Education, p. A17.

Perkins, D. N. (1986). Knowledge as design. Hillsdale, NJ: Lawrence Erlbaum.

Reeves, T. C. (1993). Pseudoscience in computer-based instruction: The case of learner control research. Journal of Computer-Based Instruction, 20(2), 39-46.

Reeves, T. C. (1986). Research and evaluation models for the study of interactive video. Journal of Computer-Based Instruction, 13, 102-106.

Reiser, R. A. (1994). Clark's invitation to the dance: An instructional designer's response. Educational Technology Research and Development, 42(2), 45-48.

Robinson, R. S. (1995). Qualitative research - A case for case studies. In G. L. Anglin (Ed.), Instructional technology: Past, present, and future (pp. 330-339). Englewood, CO: Libraries Unlimited.

Ross, S. M. (1994a). Delivery trucks or groceries? More food for thought on whether media (will, may, can't) influence learning. Educational Technology Research and Development, 42(2), 5-6.

Ross, S. M. (1994b). From ingredients to recipes ... and back: It's the taste that counts. Educational Technology Research and Development, 42(3), 5-6.

Ross, S. M., Morrison, G. R., & O'Dell, J. K. (1989). Uses and effects of learner control of context and instructional support in computer-based instruction. Educational Technology Research and Development, 37(4), 29-39.

Salomon, G. (1991). Transcending the qualitative-quantitative debate: The analytic and systemic approaches to educational research. Educational Researcher, 20(6), 10-18.

Schramm, W. (1977). Big media, little media. Beverly Hills, CA: Sage Publications.

Schubert, W. H., & Schubert, A. L. (1990). Alternative paradigms in curriculum inquiry. In H. J. Walberg & G. D. Haertel (Eds.), The international encyclopedia of educational evaluation (pp. 157-162). New York: Pergamon Press.

Seabrook, J. (1994, June 6). My first flame. The New Yorker, pp. 70-79.

Shrock, S. A. (1994). The media influences debate: Read the fine print, but don't lose sight of the big picture. Educational Technology Research and Development, 42(2), 49-53.

Soltis, J. F. (1992). Inquiry paradigms. In M. C. Alkin (Ed.), Encyclopedia of educational research (pp. 620-622). New York: Macmillan.

Stephenson, N. (1993). Snow crash. New York: Bantam Books.

Tennyson, R. D. (1994). The big wrench vs. integrated approaches: The great media debate. Educational Technology Research and Development, 42(3), 15-28.

Ullmer, E. J. (1994). Media and learning: Are there two kinds of truth? Educational Technology Research and Development, 42(1), 21-32.

Yeaman, A. R. J. (1994). Deconstructing modern educational technology. Educational Technology, 34(2), 15-24.

1 The belief that biologists and other natural scientists don't have to be as concerned about the social relevance of their research as social scientists is being tested in the courts (Leatherman, 1995). A female biology professor denied tenure at Vassar College sued on the grounds that the research of the male professors who voted on her tenure decision was less important than hers. The judge agreed, finding that her research on skin differentiation might have "important implications" for cancer research whereas the research of one of her male colleagues on spider behavior was "narrow" and subject to ridicule (p. A14).

2 Although I would have preferred to examine research publications in both journals during an identical six year period, this was not possible. ETR&D began its new format in 1989, and JCBI ceased publication at the end of 1993.

3 I am grateful to Marcy Driscoll, Don Ely, Kent Gustafson, Mike Hannafin, John Hedberg, and Walter Dick for their generous guidance in the development of this revised classification scheme. Of course, I take full responsibility for the flaws that will no doubt be revealed in its organization.

4 ETR&D is a product of the integration of two journals previously published by the Association for Educational Communications and Technology (Educational Communications and Technology Journal and Journal of Instructional Development). ETR&D is divided into two sections, a research section and a development section. This analysis only considered the research section of ETR&D.

Cite this document as:
Reeves, Thomas C. Questioning the Questions of Instructional Technology Research. [Online] Available http://www.hbg.psu.edu/bsed/intro/docs/dean/, February 15, 1995.