Modeling Competence Data in Large-Scale Educational Assessments
Haberkorn, Kerstin (2016): Modeling Competence Data in Large-Scale Educational Assessments, Bamberg: opus.
Author:
Haberkorn, Kerstin
Publisher Information:
Bamberg: opus
Year of publication:
2016
Year of first publication:
2015
Language:
English
Remark:
Bamberg, University, cumulative doctoral dissertation, 2015
Abstract:
Over the last decades, large-scale assessments focusing on the skills and knowledge of individuals have expanded considerably in order to obtain information about the conditions for and consequences of competence acquisition. To provide valid and accurate scores for the subjects under investigation, it is necessary to thoroughly check the psychometric properties of the administered competence instruments and to choose appropriate scaling models for the competence data. This thesis addressed various challenges in modeling competence data that arose from several recently developed competence tests in large-scale assessments. The different tests posed specific demands on the scaling of the data, such as dealing with multidimensionality, incorporating different response formats, or linking competence scores. By investigating the challenges associated with each of these competence tests, the thesis aimed to derive implications for the specification of scaling models for competence data.
First, a new metacognitive knowledge test for early elementary school children was investigated. Because earlier findings in secondary school had pointed to empirically distinguishable components of metacognitive knowledge, the dimensionality of the newly developed test was of particular interest. Uni- and multidimensional models were therefore applied to the competence data and their model fit was compared. In addition, multidimensional latent-change models were used to examine the homogeneity of change as a further indicator of dimensionality. Overall, the new test instrument exhibited good psychometric properties, including item fairness across various subgroups. In accordance with previous studies in other age groups, the results indicated a multidimensional structure of the newly developed instrument. The discussion compiles theoretical as well as empirical arguments that should be considered when choosing a uni- or multidimensional model for the metacognitive knowledge data.
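As a minimal illustration of the model-fit comparison described above, the following Python sketch computes AIC, BIC, and a likelihood-ratio test from the log-likelihoods of already-fitted uni- and multidimensional IRT models. All numbers are placeholders rather than results from the thesis, and the chi-square reference for the likelihood-ratio test is only approximate here, since the unidimensional model lies on the boundary of the multidimensional one (dimension correlations fixed at 1).

```python
import math
from scipy.stats import chi2

def compare_fit(loglik_uni, k_uni, loglik_multi, k_multi, n_persons):
    """Compare nested uni- vs. multidimensional IRT models.

    AIC/BIC reward fit while penalizing parameters; the likelihood-ratio
    test applies because the unidimensional model is nested in the
    multidimensional one (chi-square reference is approximate at the
    boundary of the parameter space).
    """
    aic = lambda ll, k: -2 * ll + 2 * k
    bic = lambda ll, k: -2 * ll + k * math.log(n_persons)

    lr = 2 * (loglik_multi - loglik_uni)   # likelihood-ratio statistic
    df = k_multi - k_uni                   # extra parameters in the larger model
    p = chi2.sf(lr, df)

    return {
        "AIC_uni": aic(loglik_uni, k_uni), "AIC_multi": aic(loglik_multi, k_multi),
        "BIC_uni": bic(loglik_uni, k_uni), "BIC_multi": bic(loglik_multi, k_multi),
        "LR": lr, "df": df, "p": p,
    }

# Hypothetical values for illustration only.
print(compare_fit(loglik_uni=-10250.3, k_uni=30,
                  loglik_multi=-10180.7, k_multi=33, n_persons=1200))
```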
The next objective of the thesis was to study a series of reading competence tests intended to measure the same latent trait across a large age span. The different reading competence tests, developed in a longitudinal large-scale study, were based on the same conceptual framework and were administered from fifth grade to adulthood. We specifically investigated whether the test scores were comparable across such a large age span, which would allow change to be interpreted across time. The analyses showed that coherence of measurement could not be fully ensured across the wide age range. Strict linking models, which allow for the interpretation of developmental progress, seemed justified within secondary school, but not between secondary school and adulthood.
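To make the linking idea concrete, here is a hedged sketch of mean/mean linking under a Rasch model, where ability estimates from a newer test form are shifted onto an older form's scale via items common to both forms. This is a generic textbook procedure, not the thesis's actual linking method, and all values are invented.

```python
import numpy as np

def mean_mean_link(b_anchor_old, b_anchor_new, theta_new):
    """Place ability estimates from a new test form on an old form's scale.

    Under a Rasch model the slope is fixed at 1, so linking reduces to a
    constant shift estimated from the anchor items' difficulty estimates.
    """
    b_anchor_old = np.asarray(b_anchor_old, dtype=float)
    b_anchor_new = np.asarray(b_anchor_new, dtype=float)
    shift = b_anchor_old.mean() - b_anchor_new.mean()   # linking constant
    return np.asarray(theta_new, dtype=float) + shift

# Invented anchor-item difficulties from two calibrations of the same items.
b_old = [-0.4, 0.1, 0.9, 1.3]
b_new = [-0.7, -0.2, 0.6, 1.0]   # new form looks ~0.3 logits "easier"
print(mean_mean_link(b_old, b_new, theta_new=[0.0, 0.5, 1.2]))
```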
The last purpose of the thesis was to determine how to adequately incorporate different response formats into a scaling model. Multiple-choice (MC) and complex multiple-choice (CMC) items were considered, as they are the formats most frequently used in large-scale assessments. Specifically, we explored whether the two response formats form distinct empirical dimensions and which a priori scoring schemes for the two formats appropriately model the competence data. The results demonstrated that the response formats formed a unidimensional measure across domains, studies, and age cohorts, justifying the use of a unidimensional scale score. A differentiated scoring of the CMC items yielded better discrimination between persons and was therefore preferred. The a priori weighting scheme that gives each subtask of a CMC item half the weight of an MC item described the empirical competence data well.
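The preferred weighting scheme can be illustrated with a small sketch: each CMC subtask contributes half the weight of an MC item to a person's score. In the thesis the weighting entered the IRT scaling model itself; the raw-score version below merely shows the arithmetic, and all data are hypothetical.

```python
import numpy as np

def weighted_score(mc_responses, cmc_responses):
    """Total score with each CMC subtask weighted at half an MC item.

    mc_responses:  (n_persons, n_mc) array of 0/1 scores.
    cmc_responses: (n_persons, n_subtasks) array of 0/1 subtask scores,
                   pooled over all CMC items.
    """
    mc = np.asarray(mc_responses, dtype=float)
    cmc = np.asarray(cmc_responses, dtype=float)
    return mc.sum(axis=1) + 0.5 * cmc.sum(axis=1)

# Two hypothetical test-takers: 3 MC items, one CMC item with 4 subtasks.
mc = [[1, 0, 1],
      [1, 1, 1]]
cmc = [[1, 1, 0, 1],
       [0, 1, 1, 1]]
print(weighted_score(mc, cmc))   # -> [3.5 4.5]
```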
GND Keywords: Wissen; Metakognition; Lesekompetenz; Grundschulkind; Test; Vergleich; Multiple-Choice-Verfahren; Skalierung
Type:
Doctoral thesis
Activation date:
April 6, 2016
Permalink
https://fis.uni-bamberg.de/handle/uniba/40176