A Study of the Effect of Some Variables on the Performance of Eleventh-Grade Students in the Schools of the State of Kuwait on Computerized Tests

Authors

  • Fahad Abdullah Alkhezzi, Kuwait University

Abstract

This study, administered to 521 male and female eleventh-grade students in public-education schools in the State of Kuwait, aimed to test the effect of several variables (gender, academic track, ownership of a personal computer, the nature of the subject matter, and the ability to review and change answers) on performance on computerized tests. Data were collected through three achievement tests: Arabic language, English language, and statistics. Analysis of the data with a set of appropriate statistical methods indicated differences in performance on computerized tests attributable to two variables: the nature of the subject matter and the ability to review and change answers. The study concluded with a set of recommendations concerning the computerization and wider adoption of tests, the training of teachers and test developers in their use, and the conduct of further research in this area.


Author Biography

Fahad Abdullah Alkhezzi, Kuwait University

College of Education



Published

2011-06-01

How to Cite

Alkhezzi, F. A. (2011). A Study of the Effect of Some Variables on the Performance of Eleventh-Grade Students in the Schools of the State of Kuwait on Computerized Tests. مجلة العلوم الإنسانية, (35), 7–35. Retrieved from http://revue.umc.edu.dz/index.php/h/article/view/471

Issue

Section

Articles