Qualitative Assessment Pilot Project Update
As we reminded everyone in March, the CDC has used the newly revised CDL Debater Development Benchmarks to design professional development seminar agendas, write curriculum and exercises, and plan site visits. Over the past month we have been implementing a Qualitative Assessment Pilot Project, intended to teach us as much about the process of assessing debater development and the attainment of academic debate learning objectives as about the status of debaters’ understanding of, and competency with, specific academic debating skills.
We have worked with 14 evaluators, all experienced coaches, 8 of whom have undertaken fairly intensive training and have done the preponderance of the assessments. CDC staff also performed assessments. Trainings emphasized that assessments must closely and carefully follow the relevant Qualitative Assessment Project Rubric for debaters in:
[Be sure to download these documents — they won’t be as clear when you click on the “preview” version of them.]
This project, we’ve underlined, is not like judging debate rounds. It is a process whereby the evaluator carefully isolates indicators from in-round performance, and from review of debaters’ files and flows, in order to assess how well debaters have attained a series of specific learning objectives (64 in all, 20–22 at each of three levels of experience) that the Debater Development Benchmarks identify as the essential academic content of competitive debate. Evaluators perform an educational testing service, rather than arbitrating an academic competition. And perhaps unlike the regular classroom these days, competitive debate has few if any academic assessments of this kind, despite being a program whose essential purpose is improving academic performance and college readiness.
Evaluators have performed in-round assessments on 141 CDL debaters from 36 high schools: 102 first-year debaters, 15 second-year debaters, and 24 third/fourth-year debaters. A total of 2,226 ratings were given to debaters across the 64 benchmarks. We have preliminarily established a rating of 6 (out of a possible 10) as the threshold for attaining standard on each of the benchmarks. Here are summary points from the aggregated assessments.
- Overall, across all 141 assessments, students attained standard at a 70% rate and the average rating was 6.59
- First-year students attained standard at a 69% rate, with an average rating of 6.61
- Second-year students attained standard at a 63% rate, with an average rating of 5.90
- Third/fourth-year students attained standard at a 90% rate, with an average rating of 7.54
We will be sending each school its debaters’ individual assessments, along with school averages by experience level. We will also be soliciting feedback on how to roll out qualitative assessment more systematically in 2012/13. Our thinking now is that we will try to assess each Varsity debater very early in the season and again late in the season, to isolate their learning gains relative to the benchmarks. And we will do a larger sampling assessment of first-year debaters, possibly assessing at least four debaters from each school, late in the season (assuming a first-year baseline of zero prior to the season).
And as we mentioned in the last post on the QAP, we encourage you to closely and carefully review the Qualitative Assessment Rubrics. We think that they can be highly valuable to you in your own understanding of the CDL benchmarks, of what students should be learning each year they participate in the CDL, and of what and even how you can be teaching your students competitive academic debate.
We also encourage your feedback: on the benchmarks (which we solicited and received last year, and would like more of, if you have it), on the assessment rubrics, and on our evolving process of assessment.