
TECHNOLOGY AS A RECORDING, ANALYSIS, AND COMMUNICATION TOOL

The jury is still out on how two powerful educational trends—the rise of constructivist teaching practices and the increased demand for accountability as expressed through high-stakes testing—will find ways to coexist in educational practice. For now, society still expects student progress to be translated into numerical form for easier understanding and comparison. Data-driven decision strategies have become a central focus for teachers working to improve their school's adequate yearly progress (AYP) (Gamble-Risley, 2006). Standardized tests are required not only of students who wish to pursue postsecondary study, but also of students every year in most K–12 classrooms. While we know that learner-centered and problem-solving activities help students make sense of the world, local standards and curricula still place great value on the memorization of discrete facts and the demonstration of other lower-order skills. It is therefore important that educators understand how the power of technology can contribute to the organization and accuracy of test taking, from sophisticated college entrance exams to simpler tests and quizzes that teachers can create themselves and use in their classrooms with students of all ages.

Objective tests assess knowledge that can be demonstrated through answers that are simply right or wrong. Other assessment tasks, such as mathematical computation problems or longer written works, do not lend themselves as well to electronic assessment. Using a computer to deliver a test offers testing on demand, the inclusion of multimedia elements to create real-life testing scenarios, and fast, accurate scoring and reporting of results (Straetmans & Eggen, 1998). Teachers can use this information to make immediate adjustments to instruction, and test questions can be edited and updated at any time to reflect instructional variation (Bushweller, 2000).

The most sophisticated computerized tests, computer adaptive tests, are "smart" enough to change their form in response to a test taker's input. More traditional paper-and-pencil tests assess all students at the same level, meaning all students must take a longer test consisting of items at every level of difficulty, some too difficult and some too easy. Computer adaptive tests attempt to mimic a human examiner by adjusting to a level of difficulty appropriate to each individual user (Straetmans & Eggen, 1998). During the examination, the computer selects an item from a large item bank and presents it to the user. If the user responds correctly, a more difficult item is presented next; if incorrectly, the computer presents an easier item.

The goal is to assess student knowledge accurately in the quickest sequence of test items. An additional benefit of computer adaptive testing is that each test taker is essentially given a unique test, decreasing the potential for cheating (Center for Advanced Research on Language Acquisition, 1999). Computer adaptive formats are now offered for the Graduate Record Exam (see GRE at http://www.gre.com), and computerized versions of the SAT and ACT are being considered. Software is also available for districts to create computer adaptive exams based on local standards (e.g., Northwest Evaluation Association, www.nwea.org/cat-int.htm).
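For illustration, the item-selection loop described above can be sketched in a few lines of Python. The item bank, difficulty levels, scoring rule, and stopping rule below are invented for demonstration only; operational adaptive tests rely on psychometric models such as item response theory rather than this simple up-one/down-one rule.

import random

# Illustrative sketch of an adaptive item-selection loop: present an item,
# move to a harder item after a correct answer and an easier item after an
# incorrect one. All content here is hypothetical.
ITEM_BANK = {
    # difficulty level -> list of (question, correct_answer) pairs
    1: [("2 + 2 = ?", "4")],
    2: [("12 x 11 = ?", "132")],
    3: [("What is 15% of 240?", "36")],
    4: [("Solve for x: 3x - 7 = 20", "9")],
    5: [("What is the derivative of x^2?", "2x")],
}

def adaptive_quiz(num_items=4, start_level=3):
    level = start_level
    correct_count = 0
    for _ in range(num_items):
        question, answer = random.choice(ITEM_BANK[level])
        response = input(f"[Level {level}] {question} ")
        if response.strip().lower() == answer.lower():
            correct_count += 1
            level = min(level + 1, max(ITEM_BANK))   # harder item next
        else:
            level = max(level - 1, min(ITEM_BANK))   # easier item next
    print(f"Ending difficulty level: {level}; correct: {correct_count}/{num_items}")

if __name__ == "__main__":
    adaptive_quiz()

Because each test taker follows a different path through the item bank, no two students see exactly the same sequence of questions, which is the property that reduces opportunities for cheating.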

Computer-based tests are direct electronic versions of their paper-and-pencil counterparts. The computerized format is attractive because of the ease and accuracy of scoring large quantities of tests, leaving teachers time for more academic tasks, such as planning and instruction. Students benefit from a motivating test format; they can focus on one item at a time (Kingsbury, 2002), take tests when and where they want, receive individual and immediate feedback, and return to previous answers for further deliberation.

Results on the effectiveness of computer-based tests are mixed, with some studies demonstrating higher student accuracy on computer-based exams than on the same paper versions (e.g., Bocij & Greasley, 1999) and others showing computerized and standard formats to be equivalent (e.g., Haaf, Duncan, Skarakis-Doyle, Carew, & Kapitan, 1999). Results on computer-based tests may favor African American and Hispanic students (Gallagher, Bridgeman, & Cahalan, 2002). Results for computerized assessment of more subjective responses, such as essay writing, are even more inconclusive. Essay grading software matches the scores of human graders 50 percent of the time on the Graduate Management Admission Test (GMAT) (Bushweller, 2000). This type of software assesses features such as passage length, redundancy, and spelling, but is less effective at assessing the cleverness or organization of a written piece (see, for example, the Intelligent Essay Assessor, http://www.knowledge-technologies.com).
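As a rough illustration of the kinds of surface features such software can compute, the short Python sketch below counts words and measures vocabulary repetition in a sample passage. The feature names and sample text are invented for demonstration; they are not the actual measures used by the Intelligent Essay Assessor or other commercial systems, which rely on far more sophisticated techniques such as latent semantic analysis.

import re

# Illustrative sketch: simple surface features of an essay response
# (passage length and a crude redundancy measure). Hypothetical example only.
def surface_features(essay: str) -> dict:
    words = re.findall(r"[A-Za-z']+", essay.lower())
    unique = set(words)
    return {
        "word_count": len(words),                                          # passage length
        "unique_word_ratio": len(unique) / len(words) if words else 0.0,   # low ratio suggests redundancy
        "avg_word_length": sum(len(w) for w in words) / len(words) if words else 0.0,
    }

if __name__ == "__main__":
    sample = ("Testing with computers lets teachers score tests quickly. "
              "Quick scoring of tests gives teachers results quickly.")
    print(surface_features(sample))

Counting words or repeated vocabulary is easy to automate; judging the cleverness or organization of an argument is not, which is why human and machine scores still diverge on subjective responses.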

Teachers also find computers a useful tool for creating their own informal electronic exams and quizzes, whether for grading or for review purposes in the classroom. Electronic tests can be created by entering questions and answers into test-creation software (e.g., HotPotatoes, web.uvic.ca/hrd/halfbaked). Some websites (e.g., WebAssign, http://www.webassign.net) allow teachers to take advantage of ready-made tests or to create exams based on item banks that can be given and scored online. Other sites give teachers the freedom to easily create exams and quizzes in a variety of formats, such as multiple choice or essay, and have student results emailed directly to them (e.g., FunBrain's Quiz Lab, www.FunBrain.com, and Quia, http://www.quia.com). These sites also feature databases of ready-made quizzes searchable by content area or grade level (see Figure 15.1). Some experts advise attention to security to protect the integrity of test data collected or stored through the Internet (Shermis & Averitt, 2002).
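To make the idea of entering questions and answers into test-creation software concrete, the Python sketch below stores a two-item multiple-choice quiz as simple data and scores a hypothetical student's responses. The quiz content, function names, and student answers are invented for illustration and do not reflect how HotPotatoes, WebAssign, Quia, or FunBrain's Quiz Lab store or score items internally.

# Illustrative sketch: a minimal question-and-answer item bank and scorer.
# All quiz content and responses are hypothetical.
QUIZ = [
    {"question": "Which planet is closest to the sun?",
     "choices": ["A. Venus", "B. Mercury", "C. Mars"], "answer": "B"},
    {"question": "What is 7 x 8?",
     "choices": ["A. 54", "B. 58", "C. 56"], "answer": "C"},
]

def score_responses(responses):
    """Return (score, per-item feedback) for a list of letter responses."""
    feedback = []
    score = 0
    for item, response in zip(QUIZ, responses):
        correct = response.strip().upper() == item["answer"]
        score += correct
        feedback.append((item["question"], response, correct))
    return score, feedback

if __name__ == "__main__":
    student_answers = ["B", "A"]   # hypothetical student submission
    score, feedback = score_responses(student_answers)
    print(f"Score: {score}/{len(QUIZ)}")
    for question, response, correct in feedback:
        print(f"  {question} -> {response} ({'correct' if correct else 'incorrect'})")

Storing items as simple data in this way is what lets quiz sites score large numbers of submissions instantly and return immediate feedback to each student.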

FIGURE 15.1  FunBrain.com.

