In recent years I’ve heard a lot of higher ed talking heads imploring colleges and universities to adopt a “culture of assessment.” As far as I can tell (at least from a couple of quick Google searches), the phrase has been around for almost two decades and varies considerably in what it actually means. Some folks seem to think it describes a place where everyone uses evidence (some folks use the more slippery term “facts”) to make decisions, while others seem to think that a culture of assessment describes a place where everyone measures everything all the time.
There is a pretty entertaining children’s book called Magnus Maximus, A Marvelous Measurer that tells the story of a guy who gets so caught up measuring everything that he ultimately misses the most important stuff in life. In the end he learns “that the best things in life are not meant to be measured, but treasured.” While there are some pretty compelling reasons to think twice about the book’s supposed life lesson (although I dare anyone to float even the most concise post-modern pushback to a five-year-old at bedtime and see how that goes), the book delightfully illustrates the absurdity of spending one’s whole life focused on measuring if the sole purpose of that endeavor is merely measuring.
In the world of assessment in higher education, I fear that we have made the very mistake that we often tell others they shouldn’t make: confusing the ultimate goal of improvement with the act of measuring. The goal – or “intended outcome” if you want to use the eternally awkward assessment parlance – is that we actually get better at educating every one of our students so that they are more likely to thrive in whatever they choose to do after college. Even in the language of those who argue that assessment is primarily needed to validate that higher education institutions are worth the money (be it public or private money), there is always a final suggestion that institutions will use whatever data they gather to get better somehow. Of course, the “getting better” part seems to always be mysteriously left to someone else. Measuring, in any of its forms, is almost useless if that is where most or all of the time and money is invested. If you don’t believe me, just head on down to your local Institutional Research Office and ask to see all of the dusty three-ring binders of survey reports and data books from the last two decades. If they aren’t stacked on a high shelf, they’re probably in a remote storage room somewhere.
Measuring is only one ingredient of the recipe that gets us to improvement. In fact, given the myriad moving parts that educators routinely deal with (only some of which educators and institutions can actually control), I’m not sure that robust measuring is even the most important ingredient. An institution has no more achieved improvement just because it measures things than a chef bakes a cake by throwing a bag of flour in an oven (yes, I know there are such things as flourless tortes … that is kind of my point). Without cultivating and sustaining an organizational culture that genuinely values and prioritizes improvement, measurement is just another thing that we do.
Genuinely valuing improvement means explicitly dedicating the time and space to think through any evidence of mission fulfillment (be it gains on learning outcomes, participation in experiences that should lead to learning outcomes, or the degree to which students’ experiences are thoughtfully integrated toward a realistic whole), rewarding the effort to improve regardless of success or failure, and perpetuating an environment in which everyone cares enough to continually seek out things that might be done just a little bit better.
Peter Drucker is purported to have said that “culture eats strategy for lunch.” Other strategic planning gurus talk about the differences between strategy and tactics. If we want our institutions to actually improve and continually demonstrate that, no matter how much the world changes, we can prepare our students to take adult life by the horns and thrive no matter what they choose to do, then we can’t let ourselves mistakenly think that maniacal measurement magically perpetuates a culture of anything. If anything, we are likely to just make a lot more work for quantitative geeks (like me) while excluding those who aren’t convinced that statistical analysis is the best way to get at “truth.” And we definitely will continue to tie ourselves into all sorts of knots if we pursue a culture of assessment instead of a culture of improvement.
Make it a good day,