Reveling in our IDEA results: A gift we gave to our students and each other

We spend a lot of time talking about the things that we would like to do better.  It’s a natural disposition for educators – continually looking for ways to perfect what is, at its core, a fundamentally imperfect enterprise.  As long as we keep in mind that our efforts to perfect are really about improvement and not about literal perfection, this mindset can cultivate a healthy environment for demonstrably increasing our educational effectiveness.

However – and I admit that I’m probably a repeat offender here – I don’t think we spend enough time reveling in our success.  Often we seem to jump from brushfire to brushfire – sometimes almost frantically so.  Though this might come from a genuinely honorable sense of urgency, I think it tends to make our work more exhausting than gratifying.  Conversely, taking the time to examine and celebrate our successes does two things.  First, it bolsters our confidence in our ability to identify a problem, analyze its cause(s), and implement a successful solution – a confidence that is vital to a culture of perpetual improvement.  Second, it helps us more naturally approach problems through a problem-solving lens.  There is a lot of evidence to show that examining the nature of a successful effort can be more beneficial than simply understanding every painful detail of how we screwed up.

So this last week before Christmas break, I want to celebrate one such success.  If I could hang mistletoe over the campus, I’d likely start doling out kisses (the chocolate kind, of course).  In the four terms since we implemented the IDEA Center course feedback process, you have significantly increased the degree to which students report learning in their courses.  Between fall of 2011 and fall of 2012, the average Progress on Relevant Objectives (PRO) score for a course increased from 3.8 to 4.1.  In addition, on 10 of the 12 individual IDEA learning objectives, students in Augustana courses during the fall of 2012 (last term) reported higher average learning progress scores than students from the overall IDEA database.  Furthermore, the average learning gains from our own courses last term were higher than our overall Augustana average from the previous three terms on 10 out of 12 IDEA learning objectives.

Looking deeper into the data, the evidence continues to support the conclusion that our faculty have steadily improved their teaching.  Over four terms, faculty have reduced the number of objectives they select and narrowed the gap (i.e., variance – for those of you jonesing for statistical parlance) between progress on individual objectives chosen for a given course.  This narrowing likely indicates an increasing clarity of educational intent on the part of our faculty.  Moreover, this reduction in selected learning objectives has not come at the expense of higher-order thinking objectives that might be considered more difficult to teach.  On the contrary, the selection of individual learning objectives remains similarly distributed – and equally effective – across surface and deep learning objectives.  In addition, students’ responses to the questions regarding “excellent teacher” and “excellent course” went up from 4.2 to 4.3 and from 3.9 to 4.0, respectively.  Finally, when asked whether “as a result of this course, I have more positive feelings about this field of study,” students’ average responses increased from 3.9 to 4.0.

Are there some reasons to challenge my conclusions?  Maybe.  While last year’s participation in the IDEA course feedback process was mandated for all faculty in an effort to develop institutional norms, only about 75% of courses participated this fall.  So it’s possible that the courses that didn’t participate in the fall would have pulled down our overall averages.  Or maybe our faculty have just learned how to manipulate the system, and the increases in PRO scores, individual learning objectives, and teaching methods and styles are nothing more than evidence of our improved ability to game the system.

To both of these counter-arguments, in the spirit of the holiday I say (respectfully) . . . humbug.  First of all, although older faculty are traditionally least likely to employ course evaluations (as was the case this fall), I think it is highly unlikely that these faculty are also our worst instructors.  On the contrary, many of them are master teachers who found long ago that they needed to develop other methods of gathering course feedback that matched their own approach to teaching.  Moreover, even if there were some courses taught by senior faculty in which students would have reported lesser degrees of learning, there were courses with lower PRO scores taught by faculty from all classifications.  Second, while there might be some potential for gaming the IDEA system, what I have seen some people refer to as “gaming” has actually been nothing but intentionally designed teaching.  If a faculty member decides to select objective 11, “learning to analyze and critically evaluate ideas, arguments, and points of view,” then tells the students that this is a focus of the course, asks them to develop this skill through a series of assignments, discussions, projects, or papers, and then explains to them when and how they are making progress on this objective . . . that all sounds to me like plain ol’ good teaching.  So if that is gaming the system or teaching to the test, then (in the words of every kid who has ever played football in the street), “GAME ON!”

Are there other data points in last term’s IDEA aggregate report that we ought to examine and seek to improve?  Sure.  But let’s have that conversation later – maybe in January.  Right now, let’s revel in the knowledge that we now have evidence to show the fruits of our labor to improve our teaching.  You made the commitment to adopt the IDEA course feedback system knowing that it might require us to step up our game.  It did, and you responded in kind.  Actually, you didn’t just meet the challenge – you rose up and proved yourselves to be better than advertised.  So congratulations.  You thoroughly deserve it.  Merry Christmas.

Make it a great day,

Mark

Grades and Assessing Student Learning (can’t we all just get along?)

During a recent conversation about the value of comprehensive student learning assessment, one faculty member asked, “Why should we invest time, money, and effort to do something that we are essentially already doing every time we assign grades to student work?”  Most educational assessment zealots would respond by launching into a long explanation of the differences between tracking content acquisition and assessing skill development, the challenges of comparing general skill development across disciplines,  the importance of demonstrating gains on student learning outcomes across an entire institution, blah blah blah (since these are my peeps, I can call it that).  But from the perspective of an exhausted professor who has been furiously slogging through a pile of underwhelming final papers, I think the concern over a substantial increase in faculty workload is more than reasonable.  Why would an institution or anyone within it choose to be redundant?

If a college wants to know whether its students are learning a particular set of knowledge, skills, and dispositions, it makes good sense to track the degree to which that is happening.  But we make a grave mistake when we require additional processes and responsibilities from those “in the trenches” without thinking carefully about the potential for diminishing returns in the face of added workload (especially if that work appears to be frivolous or redundant).  So it would seem to me that any conversation about assessing student learning should emphasize the importance of efficiency so that faculty and staff can continue to fulfill all the other roles expected of them.

This brings me back to what I perceive to be an odd disconnect between grading and outcomes assessment on most campuses.  It seems to me that if grading and assessment are both intended to measure learning, then there ought to be a way to bring them closer together.  Moreover, if we want assessment to be truly sustainable (i.e., not kill our faculty), then we need to find ways to link, if not unify, these two practices.

What might this look like?  For starters, it would require conceptualizing content learned in a course as the delivery mechanism for skill and disposition development.  Traditionally, I think we’ve envisioned this relationship in reverse order – that skills and dispositions are merely the means for demonstrating content acquisition – with content acquisition becoming the primary focus of grading.  In this context, skills and dispositions become a sort of vaguely mysterious red-headed stepchild (with apologies to stepchildren, redheads, and the vaguely mysterious).  More importantly, if we are now focusing on skills and dispositions, this traditional context necessitates an additional process of assessing student learning.

However, if we reconceptualize our approach so that content becomes the raw material with which we develop skills and dispositions, we could apply our existing grading practices directly to that end.  One would assign a proportion of the overall grade to the necessary content acquisition and the rest of the overall grade (apportioned as the course might require) to the development of the various skills and dispositions intended for that course.  This also means that, in addition to articulating which skills and dispositions each course would develop and the progress thresholds expected of students in each course, we would have to be much more explicit about the degree to which a given course is intended to foster improvement in students (such as a freshman-level writing course) as opposed to having students demonstrate competence (such as a senior-level capstone in accounting procedures).  At an even more granular level, instructors might design individual assignments within a given course so that some are graded for improvement earlier in the term and others are graded for competence later in the term.

I recognize that this proposal flies in the face of some deeply rooted beliefs about academic freedom, namely that faculty, as experts in their fields, should be allowed to teach and grade as they see fit.  When courses were about attaining a specific slice of content, every course was an island.  17th-century British literature?  Check.  The sociology of crime?  Check.  Cell biology?  Check.  In this environment, it’s entirely plausible that faculty grading practices would be as different as the topography of each island.  But if courses are expected to function collectively to develop a set of skills and/or dispositions (e.g., complex reasoning, oral and written communication, intercultural competence), then what happens in each course is inextricably tied to what happens in previous and subsequent courses.  And it follows that the “what” and “how” of grading would be a critical element in creating a smooth transition for students between courses.

In the end it seems to me that we already have all of the mechanisms in place to embed robust learning outcomes assessment into our work without adding any new processes or responsibilities to our workload.  However, to make this happen we need to 1) embrace all of the implications of focusing on the development of skills and dispositions while shifting content acquisition from an end to a means to a greater end, and 2) accept that the educational endeavor in which we are all engaged is a fundamentally collaborative one and that our chances of success are best when we focus our individual expertise toward our collective mission of learning.

Make it a good day,

Mark