Sonnleitner, Philipp. E-print/Working paper (2012).

Keller, Ulrich. Software (2012).

Sonnleitner, Philipp. Report (2012).

Sonnleitner, Philipp. Software (2012).

Sonnleitner, Philipp. In Psychological Test and Assessment Modeling (2012), 54.

Computer-based problem solving scenarios or “microworlds” are contemporary assessment instruments frequently used to assess students’ complex problem solving behavior – a key aspect of today’s educational curricula and assessment frameworks. Surprisingly, almost nothing is known about their (1) acceptance or (2) psychometric characteristics in student populations. This article introduces the Genetics Lab (GL), a newly developed microworld, and addresses this lack of empirical data in two studies. Findings from Study 1, with a sample of 61 ninth graders, show that acceptance of the GL was high and that the internal consistencies of the scores obtained were satisfactory. In addition, meaningful intercorrelations between the scores supported the instrument’s construct validity. Study 2 drew on data from 79 ninth graders in differing school types. Large to medium correlations with figural and numerical reasoning scores provided evidence for the instrument’s construct validity. In terms of external validity, substantial correlations were found between academic performance and scores on the GL, most of which were higher than those observed between academic performance and the reasoning scales administered. In sum, this research closes an important empirical gap by (1) proving acceptance of the GL and (2) demonstrating satisfactory psychometric properties of its scores in student populations.
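The internal consistencies reported in this abstract are not further specified. As a generic, hedged illustration of how such a reliability estimate can be obtained, the sketch below computes coefficient alpha for a matrix of item scores; the data and the 12-item subscale are hypothetical and do not reflect the actual Genetics Lab scoring.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_persons, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_of_item_variances = items.var(axis=0, ddof=1).sum()
    variance_of_total_scores = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - sum_of_item_variances / variance_of_total_scores)

# Hypothetical data: 61 ninth graders answering 12 dichotomous items of one subscale.
rng = np.random.default_rng(0)
item_scores = rng.integers(0, 2, size=(61, 12))
print(f"alpha = {cronbach_alpha(item_scores):.2f}")
```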
Sonnleitner, Philipp. Scientific Conference (2012).

Computer-based problem solving scenarios (so-called microworlds) have often been suggested as promising alternative assessment instruments of intelligence. Potential benefits compared to traditional paper-pencil tests involve tracking students’ mental representations of the problems as well as their problem solving strategies by means of behavioral data. However, it is still a topic of ongoing debate whether the skills assessed by such microworlds are distinct from or identical to the construct of intelligence as measured by conventional reasoning tests. To address this issue, we thoroughly investigated the construct and incremental validity of a recently developed microworld, the Genetics Lab (Sonnleitner et al., 2011). We obtained data from a multilingual and representative Luxembourgish student sample (N = 563) who completed the Genetics Lab and three reasoning scales of an established intelligence test battery. Results of a confirmatory factor analysis suggest that the construct assessed by the Genetics Lab is largely identical to the construct of intelligence as measured by traditional reasoning scales. Incremental validity was found with respect to performance in a national assessment of students’ competencies and in the PISA 2009 study. Thus, the notion that microworlds are a valuable measure of intelligence is supported.
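The kind of factor-analytic comparison described in this abstract (problem solving scores and reasoning scales loading on a single factor versus two correlated factors) can be sketched roughly as follows. The semopy package, the variable names, and the data file are assumptions made purely for illustration; they are not the software or scores used in the study.

```python
import pandas as pd
import semopy

# Hypothetical data set: one row per student, two problem solving scores and
# three reasoning scales as columns.
data = pd.read_csv("scores.csv")

one_factor = """
g =~ gl_knowledge + gl_control + reasoning_figural + reasoning_numerical + reasoning_verbal
"""

two_factors = """
cps =~ gl_knowledge + gl_control
gf  =~ reasoning_figural + reasoning_numerical + reasoning_verbal
cps ~~ gf
"""

for label, description in [("one factor", one_factor), ("two correlated factors", two_factors)]:
    model = semopy.Model(description)
    model.fit(data)
    print(label)
    print(semopy.calc_stats(model).T)  # fit indices such as CFI, RMSEA, AIC
```

If the two-factor model fits no better than the one-factor model, or if the latent correlation between the two factors approaches unity, the conclusion reported above follows: both instruments measure largely the same construct.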
Sonnleitner, Philipp. Scientific Conference (2011, September 26).

Sonnleitner, Philipp. Scientific Conference (2011, April 27).

Sonnleitner, Philipp. Scientific Conference (2011).

Assessments of intelligence by means of paper-pencil tests have faced several critiques that point to their lack of face validity, insufficient coverage of the definition of intelligence, their sensitivity to the emotional state of the test taker, and the danger of becoming outdated. The present paper discusses to what extent these limitations can be overcome by computer-based problem solving scenarios, so-called microworlds. Generally speaking, microworlds are supposed to be highly accepted by test takers, to provide process measures by directly tracing problem solving behavior, and to realize game-like characteristics that may increase test motivation and reduce test anxiety. To capitalize on these potential advantages, we developed the microworld Genetics Lab, which was completed by a large, heterogeneous sample of more than 600 Luxembourgish students. Performance scores were derived for students’ problem solving strategies as well as their mental problem representations, important cognitive data that are not accessible with typical paper-pencil tests. Analyses of the psychometric characteristics of the Genetics Lab empirically underscored the construct validity of the derived performance scores. For example, process-oriented measures of strategy use were found to possess discriminant validity with respect to grades. Further, acceptance of the Genetics Lab and the test anxiety it induced were explored relative to a paper-pencil measure of intelligence. Our results show that the Genetics Lab is a reliable and valid assessment instrument and emphasize the benefits of using microworlds for assessing intelligence.

Sonnleitner, Philipp. Scientific Conference (2011).

In recent years, computer-based assessment has undergone substantive change. Test developers as well as test users have become aware of the fact that computers can do more than administer traditional (paper-pencil) item formats such as multiple choice. More complex computer-based item types allow tracking test takers’ mental representations of the problem or even their problem solving strategies by means of behavioral data. An example of such a modern item type is the microworld, a dynamically changing problem solving scenario with which the test taker has to interact. However, with the advent of complex item types, new challenges arise. First, usability is at stake: test takers do not intuitively know what to do or how to interact with complex tasks. Second, a massive load of data is produced, and it becomes difficult for the test developer to decide on relevant scores. Third, today’s students, so-called “digital natives”, grew up with computers and therefore set high quality standards for software applications. They may quickly lose trust and interest in tests with old-fashioned design, cumbersome handling, or even malfunctioning software. On the basis of the Genetics Lab, a microworld developed to assess general mental ability, these challenges of modern computer-based assessment are discussed. Three consecutive small-scale studies were carried out to investigate usability issues, validate scoring algorithms, and ensure acceptance among students. The results demonstrate the importance of considering usability during the test development process, particularly with regard to scoring. A modification of conventional test development procedures for modern computer-based assessment is suggested. Moreover, possibilities for satisfying even a critical target population are presented.

Keller, Ulrich. Scientific Conference (2011).

et al. In Educational Research and Evaluation (2011), 17(6), 483-495.

In large-scale assessments, it usually does not occur that every item of the applicable item pool is administered to every examinee. Within item response theory (IRT), in particular the Rasch model (Rasch, 1960), this is not really a problem because item calibration works nevertheless. The different test booklets only need to be conceptualized according to a connected incomplete block design. Yet connectedness of such a design should ideally be fulfilled severalfold, since the deletion of some items may become necessary in the course of the item pool’s IRT calibration. The real challenge, however, is to meet constraints determined by numerous moderator variables such as different response formats and several content topics, all the more so if several ability dimensions are under consideration, the testing duration is strictly limited, or individual scoring and feedback is an issue. In this article, we report how we dealt with the resulting problems. Experience is based on the governmental project of the Austrian Educational Standards (Kubinger et al., 2007).
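The connectedness requirement mentioned in this abstract can be checked mechanically: treat item blocks as nodes and link two blocks whenever they appear in the same booklet; the design is connected if every block is reachable from every other. The sketch below uses a hypothetical four-booklet layout, not the actual design of the Austrian Educational Standards project.

```python
from collections import defaultdict

def is_connected(booklets: dict[str, set[str]]) -> bool:
    """True if all item blocks are linked through shared booklet membership."""
    blocks = set().union(*booklets.values())
    neighbours = defaultdict(set)
    for contents in booklets.values():
        for block in contents:
            neighbours[block] |= contents - {block}
    seen, frontier = set(), [next(iter(blocks))]
    while frontier:
        block = frontier.pop()
        if block not in seen:
            seen.add(block)
            frontier.extend(neighbours[block] - seen)
    return seen == blocks

# Hypothetical design: four booklets, each containing two of the item blocks A-D.
design = {
    "booklet_1": {"A", "B"},
    "booklet_2": {"B", "C"},
    "booklet_3": {"C", "D"},
    "booklet_4": {"D", "A"},
}
print(is_connected(design))  # True: the blocks form one linked cycle
```

One way to read the recommendation that connectedness be fulfilled severalfold is to verify that the design remains connected even after individual blocks or booklets are removed, since items may have to be dropped during calibration.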
et al. Scientific Conference (2011).

Sonnleitner, Philipp. Scientific Conference (2011).

Computer-based problem solving scenarios (so-called microworlds) are contemporary educational assessment instruments of intelligence that offer several benefits compared to traditional paper-pencil tests. These include tracking students’ mental representations of the problems as well as their problem solving strategies by means of behavioral data, which provides key information for educational interventions. Moreover, microworlds realize game-like characteristics that may increase test motivation and reduce test anxiety. In the present study, the Genetics Lab, a newly developed microworld, was completed by a representative sample of more than 800 Luxembourgish students. Students chose among three different languages (German, French, and English) in which the problem content of the Genetics Lab was presented. The present paper analyzes the psychometric properties of the various performance scores derived for the Genetics Lab with respect to their relations to school grades and their measurement invariance across gender, chosen test language, and migration background. Moreover, a direct comparison with traditional measures of intelligence demonstrated the construct validity of the performance scores of the Genetics Lab. In sum, the results obtained for the Genetics Lab show the benefits of behavioral data obtained from computer-based problem solving scenarios and support the notion that microworlds are a valuable measure of intelligence.

Keller, Ulrich. Scientific Conference (2011).

Sonnleitner, Philipp. Poster (2010).

Fundamental change in the school sector raises the legitimate question of whether innovations in intelligence assessment are needed to preserve its high predictive validity and, with it, its central role in school placement decisions. A stronger emphasis on cross-curricular competencies, the growing integration of computers into teaching and everyday life, and the increasing complexity of the world of work create new challenges and revive old criticism. In this context, computer-based problem solving scenarios have repeatedly been named as a promising approach to overcoming the weaknesses of traditional intelligence tests. Their advantages are seen, among other things, in the dynamic test format, a more complete coverage of the construct of intelligence, and increased face validity. Whereas previous studies have focused on the empirical relationship between performance measures from such scenarios and traditional measures of intelligence, this contribution deliberately seeks to complement the discussion with theoretical considerations. An analysis of the requirements that contemporary intelligence assessment must meet is followed by a systematic comparison of traditional intelligence test formats with computer-based problem solving scenarios. Concrete examples as well as current data from the piloting of such a scenario for intelligence assessment round off the discussion.

Sonnleitner, Philipp. Scientific Conference (2009, September 06).

Sonnleitner, Philipp. In Rudinger, G.; Hörsch, K. (Eds.), Self-Assessment an Hochschulen: Von der Studienfachwahl zur Profilbildung (2009).

Sonnleitner, Philipp. Presentation (2009).

Sonnleitner, Philipp. Scientific Conference (2009).
Due to inconclusive findings concerning the components responsible for the difficulty of reading comprehension items, this paper attempts to set up an item-generating system using hypothesis-driven modeling of item complexity, applying Fischer’s (1973) linear logistic test model (LLTM) to a German reading comprehension test. This approach guarantees an evaluation of the postulated item-generating system; moreover, the construct validity of the administered test is investigated. Previous findings in this field are considered; additionally, some text features are introduced to this debate and their impact on item difficulty is discussed. Results once more show a strong influence of formal components (e.g., the number of response options presented in a multiple-choice format), but also indicate how this effect can be minimized.
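For readers unfamiliar with the model, the LLTM referred to above is a Rasch model in which each item difficulty is constrained to be a weighted sum of basic parameters; in common notation:

```latex
P(X_{vi} = 1 \mid \theta_v) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)},
\qquad
\beta_i = \sum_{j=1}^{m} q_{ij}\,\eta_j + c
```

Here θ_v is the ability of person v, β_i the difficulty of item i, q_ij the hypothesized weight of design component j in item i (e.g., the number of response options or a text feature), η_j the estimated difficulty contribution of that component, and c a normalization constant. The postulated item-generating system is typically evaluated by comparing the LLTM-implied difficulties against those of the unconstrained Rasch model.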