Universitas Terbuka (UT) uses an online examination system (sistem ujian online, SUO) for its end-of-semester examination (ujian akhir semester, UAS), alongside the paper-and-pencil (P&P) test. To improve efficiency, adaptive testing should be analyzed as an alternative to the present UAS system. The aim of this research was to compare the efficiency and estimation accuracy of a computerized adaptive testing (CAT) design against the conventional test administered both as a P&P test and through the SUO. The research was conducted as a simulation study. The item bank for the simulation consisted of 404 test items calibrated under an item response theory (IRT) model, and algorithms for both the CAT design and the P&P test were developed. Efficiency was measured by the number of items the CAT design required, while estimation accuracy was measured by comparing the bias and standard error of measurement of the two designs. The simulation showed that (1) the CAT design was more efficient, since it required only half the number of items used in the P&P test to estimate an examinee's ability, and (2) the CAT design estimated examinee ability more accurately than the P&P design, since it produced lower bias and standard error of measurement than the conventional test design. Therefore, the CAT design could be applied in UT's UAS system, provided the balance of content across modules is maintained.
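The abstract does not specify the authors' exact CAT algorithm, but the general procedure it describes — select the most informative remaining item at the current ability estimate, score the response, re-estimate ability, and stop once the standard error is small enough — can be sketched as follows. This is a minimal illustration under assumed choices (a 2PL IRT model, maximum-information item selection, Newton–Raphson maximum-likelihood ability estimation, and a hypothetical SE stopping rule of 0.3), not the study's actual implementation.

```python
import math
import random

def information(theta, a, b):
    """Fisher information of a 2PL item (discrimination a, difficulty b) at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(theta, bank, used):
    """Pick the unused item with maximum information at the current ability estimate."""
    best, best_info = None, -1.0
    for i, (a, b) in enumerate(bank):
        if i not in used:
            info = information(theta, a, b)
            if info > best_info:
                best, best_info = i, info
    return best

def update_theta(theta, responses):
    """Newton-Raphson steps on the 2PL log-likelihood, clamped to [-4, 4]."""
    for _ in range(10):
        d1 = d2 = 0.0
        for (a, b), u in responses:
            p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
            d1 += a * (u - p)          # first derivative of log-likelihood
            d2 -= a * a * p * (1 - p)  # second derivative (negative information)
        if abs(d2) < 1e-9:
            break
        theta = max(-4.0, min(4.0, theta - d1 / d2))
    return theta

def run_cat(true_theta, bank, max_items=20, se_target=0.3, rng=random):
    """Simulate one examinee: administer items until the SE target or item cap is hit."""
    theta, used, responses = 0.0, set(), []
    for _ in range(max_items):
        i = select_item(theta, bank, used)
        used.add(i)
        a, b = bank[i]
        # Simulate the examinee's response from the true ability.
        p_correct = 1.0 / (1.0 + math.exp(-a * (true_theta - b)))
        u = 1 if rng.random() < p_correct else 0
        responses.append(((a, b), u))
        theta = update_theta(theta, responses)
        total_info = sum(information(theta, a2, b2) for (a2, b2), _ in responses)
        if total_info > 0 and 1.0 / math.sqrt(total_info) < se_target:
            break
    return theta, len(used)
```

Running `run_cat` over many simulated examinees and comparing the item counts, bias, and standard error against a fixed-form administration of the same bank mirrors the comparison the study reports.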


Keywords: computerized adaptive testing, item response theory, paper and pencil test




Published Dec 3, 2017
How to Cite
EFISIENSI DAN AKURASI COMPUTERIZED ADAPTIVE TESTING PADA SISTEM UJIAN AKHIR SEMESTER UNIVERSITAS TERBUKA (Efficiency and Accuracy of Computerized Adaptive Testing in Universitas Terbuka's End-of-Semester Examination System). Jurnal Pendidikan, v. 17, n. 2, pp. 130-145, Dec. 2017. ISSN 2443-3586.