Last modified: 2012-05-11
Abstract
Background
Crucial to the design of favourable learning environments is an understanding of how students perceive various dimensions of educational environments and sub-environments (1-4). The use of questionnaires to evaluate environments is methodologically problematic and under debate (5). However, such instruments can suggest areas for further exploration and development and can be combined with other methods. One instrument identified as valid and reliable for measuring educational environments in medical education is the Postgraduate Hospital Educational Environment Measure (PHEEM) (6). The instrument has also been used in undergraduate clerkships across countries. However, attempts to evaluate educational environments for undergraduate students in clinical rotations with PHEEM in Sweden indicate that these settings require measurement of additional and different aspects of the environment. The aim of our study was to develop an instrument suitable for undergraduate clinical settings and to evaluate its psychometric properties. The ultimate aim was a quality-assessment tool that could be combined with qualitative methods in action research projects.
Method
A 38-item inventory was developed based on original and revised PHEEM items, on a student focus-group discussion, and on current research on work-based learning and medical students' learning in the clinical workplace (7-9). To determine content validity we used a combination of ratings of item relevance, written comments on content and wording from students and 15 other stakeholders (clinical teachers, health professionals and educational developers), and additional focus-group discussions. The inventory was distributed in a pilot study including 77 students, followed by a main study including 465 students (response rate 76%). In agreement with the authors of PHEEM, we named the questionnaire the Undergraduate Clinical Education Environment Measure (UCEEM).
Results
The instrument is currently being tested for construct validity and reliability through analyses of item response rates, floor and ceiling effects, corrected item-total correlations, factor structure and tests of internal consistency. Analyses of the pilot data support construct validity and reliability. Final results of the main study will be presented.
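Two of the quantities named above, internal consistency (commonly Cronbach's alpha) and corrected item-total correlations, are standard computations. As an illustration only (not the study's actual analysis or data), a minimal sketch in Python with NumPy, run here on simulated 5-point Likert responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Correlation of each item with the sum of the *remaining* items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Simulated data: 200 respondents, 6 items driven by one latent trait
# plus noise, rounded and clipped to a 1-5 Likert scale.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
raw = trait + rng.normal(scale=0.8, size=(200, 6))
likert = np.clip(np.round(raw * 1.2 + 3), 1, 5)

alpha = cronbach_alpha(likert)
citc = corrected_item_total(likert)
```

With a common latent trait built in, the simulated scale shows a high alpha and positive corrected item-total correlations; on real questionnaire data, low values for either would flag items for revision or removal.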
References
1. Wilson K, Fowler J. Assessing the impact of learning environments on students' approaches to learning: comparing conventional and action learning designs. Assessment and Evaluation in Higher Education. 2005;30(1):87-101.
2. Genn JM. Curriculum, environment, climate, quality and change in medical education - a unifying perspective. AMEE Medical Education Guide No. 23 (Part 1). Medical Teacher. 2001;23(4):337-44.
3. Genn JM. Curriculum, environment, climate, quality and change in medical education - a unifying perspective. AMEE Medical Education Guide No. 23 (Part 2). Medical Teacher. 2001;23(5):445-54.
4. Harden RM. The learning environment and the curriculum. Medical Teacher. 2001;23:335-6.
5. Hane M, Wennberg B-Å. Frågeformulärens kris [The crisis of the questionnaires]. Göteborg: Samarbetsdynamik AB; 2002.
6. Soemantri D, Herrera C, Riquelme A. Measuring the educational environment in health professions studies: a systematic review. Medical Teacher. 2010;32(12):947-52.
7. Eraut M. Informal learning in the workplace. Studies in Continuing Education. 2004;26(2):247-73.
8. Wenger E. Communities of Practice: Learning, Meaning, and Identity. Cambridge: Cambridge University Press; 1998.
9. Yardley S, Teunissen PW, Dornan T. Experiential learning: AMEE Guide No. 63. Medical Teacher. 2012;34(2):e102-15.