An article by Ira Socol, “Stop Chasing High-Tech Cheaters,” itself a response to a recent NYT article, “Colleges Chase as Cheats Shift to Higher Tech,” presents a very interesting issue concerning evaluation procedures in institutions of higher ed. Perhaps these different perspectives represent a changing of the guard in teaching methods: the younger generation of grad students like Socol embraces methods of “research”/“cheating” enabled by newer technology, while the older generation cringes at the thought that students might use spell check on in-class exams (prompting one journalism professor to have his students write their computer-aided exams with screens facing him). In any case, I am sure that my colleagues and friends, with grading season in full force, are forced to ponder: how should we evaluate our students?
Socol’s argument isn’t perfect, but I do like the perspective it offers. I had a wonderful high school biology teacher, Dr. King, who used to say that you shouldn’t have to learn anything you could find out by looking it up. In our Google-ized world, it may seem that there is very little left to know that cannot be “looked up” instantaneously. I sympathize with the remark, cited by Socol, “Why aren’t colleges teaching students how to research, organize and evaluate the information that is out there?” instead of continuing to demand from them rote memorization of facts that could so easily be found online.
Well, I agree that what we need to teach is effective research methods. But what are those methods? And might they include some of the mechanical practices that students have been tested on for ages? For instance, I have my students write in-class exams because I feel that one important skill in conducting research is the capacity to organize thoughts quickly and spontaneously, and then to write them out coherently. I also think that handwriting is important, even in our world of the emerging hand-held PC, because handwritten notes cannot be eliminated when organizing research. I also personally find that there is something added by physically altering what one is working on (though that may itself be passé). However, the proposition I submit to my students is this: if your thoughts are not legible, they are not intelligible, and so they are not useful for research. In short, there certainly are some “old-fashioned” practices, such as memorization, good spelling, etc., that remain indispensable, even in a high-tech world.
There is, of course, the issue of citation, which is an important research skill that I find very few college students have mastered. But I don’t see why this couldn’t be integrated with the use of new technologies. Websites ought to be cited just as books or articles ought to be cited. And perhaps technologically related techniques (such as word-searching an electronic database) ought to be acknowledged in certain cases.
The thing that always struck me about the outrage against cheaters is that it seems to be fueled by a kind of moral taboo: that there are certain practices which ought to be absolutely forbidden. For my part, I prefer a more aesthetic judgment: poor work is poor work, and students who cheat generally do poor work. Apart from the wholesale purchasing of pre-written papers, something that I have never personally encountered, any case of cheating ought to be fairly recognizable: there are invariably sudden shifts in subject matter and diction, or students use obscure material not covered in the course, or they introduce an example or fact without the requisite development and understanding of it. But these stylistic indicators of a “cheater” are also simply indications of poorly written papers. In my view they should be graded accordingly.
What is more, if paper topics and exam questions are geared toward evaluating what students have learned from the course, then the issue of cheating should effectively be moot. That is to say, Googled research or purchased external papers are obviously not going to pertain directly to what particular instructors have taught in their particular courses. So these students will fail to demonstrate that they have actually internalized the teaching from the course. But herein lies the rub: how do we really evaluate the learning process? How do we judge whether students have internalized the information and methods that we are trying to convey to them?
Maybe the moral outrage against “cheating” channels a deeper frustration about how to approach the very difficult task that is education.