The Holiday Wish List for a Measurement Geek

Sincere apologies to anyone who tried to find a new post on my blog yesterday. Apparently our server went “walk-about” over the weekend, and our IT folks have been working day and night to restore everything that went down.  I think that we are in the clear today, so I’ll put this post up a day late.

______________________________________________________________________

This is the week where I can’t help but overhear all the talk of the holiday gifts that people are getting for their spouses, partners, kids, friends, or in-laws.  And it struck me that there aren’t nearly enough suggestions for measurement folks who need to just own their geekdom and go big with it.  So here are a few ideas, discoveries, and possibilities.

  • Statistics ties.  Any formula, pie chart, or stats pun on a tie.  Because nothing bludgeons humor to death better than a stupid stats pun.
  • The children’s book Magnus Maximus, a Marvelous Measurer.  It’s a pretty fun book with wonderful illustrations.  And it’s never too early to stereotype your profession.
  • The world’s largest slide rule.  Of course, it’s located in Texas.
  • The complete DVD set of the TV show NUMB3RS. This show managed to tease my people with the hope that someday complex math skills could really save a life. And yet, to this day I’ve never been in a public venue where someone suddenly yelled frantically, “Is there a statistician in the house!?”
  • A Digicus. Sharp, the electronics company, made them in the late 1970s and early 1980s. Apparently many Japanese consumers were suspicious of the digital calculator when it was first introduced, so the Digicus was created to let people check their calculator results against an abacus. And you thought higher ed types were skeptical of change???
  • And last but not least, anything by the band Big Data. Yes, there is a band called Big Data. They describe themselves as a “paranoid electronic music project from the internet.”  Okey dokey.

Make it a good holiday break,

Mark

For the want of a response, the data was crap

Any time I hear someone use data from one of the new freshman, senior, or recent graduate surveys to advocate for a particular idea, I can’t help but smile a little.  It is deeply gratifying to see faculty and administrators comfortably use our data to evaluate new policy, programming, and strategic direction ideas.  Moreover, we can all point to a growing list of data-driven decisions that we know have directly improved student learning.

So it might seem odd, but that smile slips away almost as quickly as it appears. Because underneath this pervasive use of data lies a deep trust in the veracity of those numbers. And the quality of our data depends almost entirely upon the participation of 18-to-22-year-olds who are . . . let’s just say “still developing.”  Data quality is like milk – it can turn on you overnight. If students begin to think that survey questions don’t really apply to them, or start to suspect that the results aren’t valued by the college, they’ll breeze through the questions without giving them much thought or blow off the survey entirely. If that happens on a grand scale . . . I shudder to think about it.  So you could say that I was “mildly concerned” when, while organizing fall IDEA course feedback forms for processing a few weeks ago, I noticed several where the only bubbles colored in were “fives.”  A few minutes later I found several where the only darkened bubbles were “ones.”

Fortunately, a larger sampling of students’ IDEA forms put my mind at ease.  I found that on most forms the distribution of darkened circles varied and, as best as I could tell, students’ responses to the individual questions reflected at least a minimal effort to answer truthfully.  Still, this momentary heart attack got me wondering: to what degree might students’ approach to our course feedback process affect the quality of the data we get?  This is how I ended up in front of Augustana’s student government (SGA) earlier this week, talking about our course feedback process, the importance of good data, the reality of students’ perceptions of and experiences with these forms, and ways we might convince more students to take the process seriously.
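(A side note for the measurement geeks from last week’s post: if you ever wanted to run this sort of eyeball screen systematically, the logic fits in a few lines of code. What follows is purely a hypothetical sketch – the form IDs, the 1-to-5 scale, and the data layout are my own assumptions for illustration, not our actual IDEA processing pipeline.)

```python
# Hypothetical sketch: flag forms where every answered item shares one value
# ("straight-lining") -- the all-fives and all-ones pattern that caught my eye.
# Assumes each form is a list of 1-5 responses; blanks are None and ignored.

def is_straight_lined(responses):
    """Return True if every answered item on the form has the same value."""
    answered = [r for r in responses if r is not None]
    return len(answered) > 1 and len(set(answered)) == 1

forms = {
    "form_001": [5, 5, 5, 5, 5, 5],      # all fives -- suspicious
    "form_002": [1, 1, 1, None, 1, 1],   # all ones -- suspicious
    "form_003": [4, 5, 3, 4, 2, 5],      # varied -- looks like real effort
}

flagged = [fid for fid, resp in forms.items() if is_straight_lined(resp)]
print(f"{len(flagged)} of {len(forms)} forms flagged: {flagged}")
```

A form with plausible variation passes; an all-fives or all-ones form gets set aside for a closer look. Of course, a flag like this only tells you where to look – a student could legitimately rate everything the same – which is exactly why the larger sampling mattered.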

During this conversation, I learned three things that I hope you’ll take to heart.  First, our students really come alive when they feel they are active participants in making Augustana the best place it can be.  However, they start to slip into the role of passive bystanders when they don’t know the “why” behind processes in which they are expected to be key contributors.  When they become bystanders, they are much less likely to invest their own emotional energy in providing accurate data.  Many of the students honestly didn’t think that the IDEA data they provided on the student form was used very often – if ever. If the data doesn’t really matter anyway, so their thinking goes, the effort they put into providing it doesn’t matter all that much either.

Second, students often felt that the questions about how much progress they made on specific objectives didn’t apply equally in every class.  As I explained how the IDEA data analysis worked, and how the information faculty receive is designed to connect the objectives of the course with the students’ sense of what they learned, I could almost hear the light bulbs popping on over their heads.  They were accustomed to satisfaction-type surveys in which an ideal class would elicit a high score on every question.  Once they realized that they were expected to give lower scores to questions that didn’t fit the course (and that this data would be useful as well), their concern about the applicability of the form, and all of the accompanying frustration, disappeared.

Third, even though we – faculty, staff, and administrators – know exactly what we mean when we talk about learning outcomes, our students still don’t really know that their success in launching their life after college is not just a function of their major and all the stuff they’ve listed on their resume.  On numerous occasions, students expressed confusion about the learning objectives because they didn’t understand how they applied to the content of the course.  Although they may have seen the lists of skills that employers and graduate schools look for, it seems that our students think these skills are largely set in stone long before they get to college, and that college is mostly about learning content knowledge and building a network of friends and “connections.”  So when they see learning objectives on the IDEA forms, unless they have been clued in that these are skills the course is designed to develop, they are likely to be confused by the very idea of learning objectives above and beyond content knowledge.

Although SGA and I plan to work together to help students better understand the value of the course feedback process and its impact on the quality of their own college experience, we – faculty, staff, and administrators – need to do a much better job of making sure that our students understand the IDEA course feedback process.  From the beginning of the course, students need to know that they will be learning more than content.  They need to know exactly what the learning goals are for the course. Students need to know that faculty want to know how much their students learned and what worked best in each class to fuel that learning, and that satisfaction doesn’t always equate to learning.  And students need to know how faculty have used course feedback data in the past to alter or adapt their classes.  If you demonstrate to your students how this data benefits the quality of their learning experience, I think they will be much more willing to genuinely invest in providing you with good data.

Successfully creating an evidence-based culture of perpetual improvement that results in a better college requires faculty, staff, and administrators to take great care with the sources of our most important data.  I hope you will take just a few minutes to help students understand the course feedback process.  Because in the end, not only will they benefit from it, but so will you.

Make it a good day,

Mark
Could a focus on learning outcomes unwittingly sacrifice process for product?

A central tenet of the learning outcomes movement is that higher education institutions must articulate a specific set of skills, traits, and/or dispositions that all of their students will learn before graduation. Then, through legitimate means of measurement, institutions must assess and publicize the degree to which their students make gains on each of these outcomes. Although many institutions have yet to implement this concept fully (especially regarding the thorough assessment of institutional outcomes), this idea is more than a suggestion. Each of the regional accrediting bodies now requires institutions to identify specific learning outcomes and demonstrate evidence of outcomes assessment as a standard of practice.

This approach to educational design seems at the very least reasonable. All students, regardless of major, need a certain set of skills and aptitudes (things like critical thinking, collaborative leadership, intercultural competence) to succeed in life as they take on additional professional responsibilities, embark (by choice or by circumstance) on a new career, or address a daunting civic or personal challenge. In light of the educational mission our institutions espouse, committing ourselves to a set of learning outcomes for all students seems like what we should have been doing all along.

Yet too often the outcomes that institutions select to represent the full scope of their educational mission, and the way those institutions choose to assess gains on those outcomes, unwittingly limit their ability to fulfill the mission they espouse. For when institutions narrow their educational vision to a discrete set of skills and dispositions that can be presented, performed, or produced at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision of cultivating in students a self-sustaining approach to learning. What we measure dictates the focus of our efforts to improve. As such, it’s easy to imagine a scenario in which the educational structure that currently produces majors and minors in content areas is simply replaced by one that produces majors and minors in some newly chosen learning outcomes. Instead of redesigning the college learning experience to alter the lifetime trajectory of an individual, we allow the whole to be nothing more than the sum of the parts – because all we have done is swap one collection of parts for another. Although there may be value in establishing and implementing a threshold of competence for a bachelor’s degree (for which a major serves a legitimate purpose), limiting ourselves to this framework fails to account for the deeply held belief that a college experience should approach learning as a process – one that is cumulative, iterative, multi-dimensional, and, most importantly, self-sustaining long beyond graduation.

The disconnect between our conception of a college education as a process and our tendency to track learning as a finite set of productions (outcomes) is particularly apparent in the way we assess our students’ development as life-long learners. Typically, we measure this construct with a pre-test and a post-test that track learning gains between the ages of 18 and 22 – hardly a lifetime (the fact that a few institutions gather data from alumni five and ten years after graduation doesn’t invalidate the larger point). Under these conditions, trying to claim empirically that (1) an individual has developed and maintained a perpetual interest in learning throughout their life, and that (2) this life-long approach is directly attributable to one’s undergraduate education, probably borders on the delusional. The complexity of life, even under the most mundane of circumstances, makes such a hypothesis deeply suspect. Yet we all know of students who experienced college as a process through which they found a direction that excited them and a momentum that carried them down a purposeful path extending far beyond commencement.

I am by no means suggesting that institutions should abandon assessing learning gains on a given set of outcomes. On the contrary, we should expect no less of ourselves than substantial growth in all of our students as a result of our efforts. Designed appropriately, a well-organized sequence of outcomes assessment snapshots can provide information vital to tracking student learning over time and potentially increasing institutional effectiveness. However, because the very act of learning occurs (as the seminal developmental psychologist Lev Vygotsky would describe it) in a state of perpetual social interaction, taking stock of the degree to which we foster a robust learning process is at least as important as taking snapshots of learning outcomes if we hope to gather information that helps us improve.

If you think that assessing learning outcomes effectively is difficult, then assessing the quality of the learning process ought to send chills down even the most skilled assessment coordinator’s spine. Defining and measuring the nature of process requires a very different conception of assessment – and, for that matter, a substantially more complex understanding of learning outcomes. Instead of merely measuring what is already in the rearview mirror (i.e., whatever has already been acquired), assessing the college experience as a process requires a look at the road ahead, emphasizing the connection between what has already occurred and what is yet to come. In other words, assessment of the learning that results from a given experience would include the degree to which a student is prepared or “primed” to make the most of a future learning experience (either one intentionally designed to follow immediately, or one likely to occur somewhere down the road). Ultimately, this approach would substantially improve our ability to determine the degree to which we are preparing students to approach life in a way that is thoughtful, proactively adaptable, and even nimble in the face of both unforeseen opportunity and sudden disappointment.

Of course, this idea runs counter to the way we typically organize our students’ postsecondary educational experience. For if we are going to track the degree to which a given experience “primes” students for subsequent experiences – especially subsequent experiences that occur during college – then the educational experience can’t be so loosely constructed that the number of potential variations in the ordering of different students’ experiences virtually equals the number of students enrolled at our institution. This doesn’t mean we return to the days in which every student took the same courses at the same time in the same order. But it does require an increased level of collective commitment to the intentional design of the student experience – a commitment to student-centered learning that will likely come at the expense of individual instructors’ or administrators’ preferences for which courses they teach or programs they lead, and when those might be offered.

The other serious challenge is the act of operationalizing a concept of assessment that attempts to directly measure an individual’s preparation to make the most of a subsequent educational experience. But if we want to demonstrate the degree to which a college experience is more than just a collection of gains on disparate outcomes – whether these outcomes are somehow connected or entirely independent of each other – then we have to expand our approach to include process as well as product.  Only then can we actually demonstrate that the whole is greater than the sum of the parts, that in fact the educational process is the glue that fuses those disparate parts into a greater – and qualitatively distinct – whole.

Make it a good day,

Mark