The proposed model aims to provide a comprehensive and multifaceted approach to assessing competencies and learning outcomes through both formative and summative methods. Formatively, students would receive ongoing feedback throughout their learning experience to help identify areas of strength and areas needing improvement. Summatively, assessments would evaluate the level of competency achieved at important milestones.
Formative assessments could include techniques like self-assessments, peer assessments, and process assessments conducted by instructors. Self-assessments would ask students to periodically reflect on and rate their own progress on various dimensions of each target competency. Peer assessments would involve students providing feedback to one another on collaborative work or competency demonstrations. Process assessments by instructors could include observations of student performances in class with rubric-based feedback on skills displayed.
Formative assessments would not be high-stakes evaluations but would instead be geared towards guidance and improvement. Feedback from self, peer, and instructor sources would be compiled routinely in an individualized competency development plan for each student. This plan would chart progress over time and highlight areas still requiring focus. Instructors could then tailor learning activities, projects, or supplemental instruction accordingly to best support competency growth.
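The competency development plan described above could be modeled as a simple record of multi-source feedback. This is a minimal sketch, assuming a 1–4 rating scale and a flagging threshold of 2.5; the class names, fields, and threshold are illustrative, not part of the proposal:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class FeedbackEntry:
    source: str        # "self", "peer", or "instructor"
    competency: str
    rating: int        # assumed scale: 1 (beginning) .. 4 (exemplary)

@dataclass
class DevelopmentPlan:
    student: str
    entries: list = field(default_factory=list)

    def record(self, entry: FeedbackEntry) -> None:
        """Compile feedback from any source into the running plan."""
        self.entries.append(entry)

    def progress(self, competency: str) -> float:
        """Average rating across all feedback sources for one competency."""
        ratings = [e.rating for e in self.entries if e.competency == competency]
        return mean(ratings) if ratings else 0.0

    def focus_areas(self, threshold: float = 2.5) -> list:
        """Competencies whose average rating falls below the threshold."""
        comps = {e.competency for e in self.entries}
        return sorted(c for c in comps if self.progress(c) < threshold)
```

An instructor reviewing such a plan would call `focus_areas()` to see which competencies still need targeted activities, while `progress()` charts movement on any single competency over time.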
Summative assessments would serve to benchmark achievement at key transition points. For example, capstone courses at the end of degree programs could entail comprehensive competency demonstrations and evaluations. These demonstrations might take the form of student portfolios containing samples of their best work mapped to the targeted outcomes. Students could also participate in simulations, case studies, or practicum experiences closely mirroring real-world scenarios in their fields.
Evaluators for summative assessments would utilize detailed rubrics to rate student performances across multiple dimensions of each competency. Rubrics would contain clear criteria and gradations of competency level: exemplary, proficient, developing, or beginning. Evaluators would consider all available evidence from the student’s learning experience and aim for inter-rater reliability. Students would receive individualized scored reports indicating strengths and any remaining gaps requiring remediation.
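Rubric-based scoring with multiple evaluators might look like the following sketch. The four level names come from the rubric gradations above; the agreement measure is an assumption (simple pairwise percent agreement, not any specific published reliability statistic):

```python
from itertools import combinations
from statistics import mean

# Ordered from lowest to highest, per the rubric gradations.
LEVELS = ["beginning", "developing", "proficient", "exemplary"]

def score_performance(ratings_by_evaluator: dict) -> dict:
    """Combine per-criterion level ratings from several evaluators.

    ratings_by_evaluator maps evaluator -> {criterion: level_name}.
    Returns the mean numeric score per criterion (0..3) plus a crude
    inter-rater agreement figure: the share of evaluator pairs that
    assigned the same level, averaged over criteria.
    """
    evaluators = list(ratings_by_evaluator)
    criteria = ratings_by_evaluator[evaluators[0]].keys()
    report = {}
    agreements = []
    for crit in criteria:
        levels = [ratings_by_evaluator[e][crit] for e in evaluators]
        report[crit] = mean(LEVELS.index(lvl) for lvl in levels)
        pairs = list(combinations(levels, 2))
        if pairs:
            agreements.append(sum(a == b for a, b in pairs) / len(pairs))
    report["agreement"] = mean(agreements) if agreements else 1.0
    return report
```

A low `agreement` value would prompt evaluators to norm their interpretations of the rubric criteria before scores are released to students.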
Assessment results would be aggregated both at the individual student level and at the program level, disaggregated by factors like gender, race, or academic exposure. This aggregation allows identification of systemic issues or biases that program improvements could address. It also permits benchmarking against outcomes at peer institutions. Student learning outcomes and competency achievements could be dynamically updated based on this ongoing review process.
For competencies spanning multiple levels of complexity, layered assessments may measure attainment of basic, intermediate, and advanced levels over the course of a degree. As students gain experience and sophistication in their fields, evaluations would shift focus to higher orders of application, synthesis, and creativity. Mastery of advanced competencies may also incorporate components like student teaching, research contributions, or externship performance reviews by employers.
Upon degree completion, graduates could undertake capstone exams, licensure/certification exams, or portfolio reviews mapped to the final programmatic competency framework. This would provide a final verification of readiness to perform independently at entry-level standards in their disciplines. It would also allow ongoing refinement and alignment of curriculum to ensure graduation of competent, career-ready professionals.
By combining varied formative and summative assessments, mapped to clearly defined competencies, this proposed framework offers a comprehensive, evidence-based approach to evaluating student learning outcomes. Its multi-rater feedback and emphasis on competency growth over time also address critiques of high-stakes testing. When implemented with rigor and ongoing review, it could help ensure postsecondary education meaningfully prepares graduates for their careers and lifelong learning.