From “what we have” to “what we do with it”

We probably all have a good example of a time when we decided to make a change – maybe drastic, maybe minimal – only to realize later the full ramifications of that change (“Yikes! Now I remember why I grew a beard.”).  This is the problem with change – our own little lives aren’t as discretely organized as we’d like to think, and there are always unintended consequences and surprise effects.

 

When Augustana decided to move from measuring itself based on the quality of what we have (incoming student profile, endowment, number of faculty, etc.) to assessing our effectiveness based on what we do (student learning and development, educational improvement and efficiency, etc.), I don’t think we fully realized the ramifications of that decision.  Although the shift is impacting our work in numerous ways, I’d like to talk specifically about its implications for institutional data collection and reporting.

 

First, let’s clarify two terms.  When I say “outcomes” I mean the learning that results from educating.  When I say “experiences” I mean the experiences that students have during the course of their college career.  These could be described simply by participation in a particular activity (e.g., majoring in philosophy) or more ambiguously, as in the quality of a student’s interactions with faculty.  Either way, the idea is – and has always been – that student experiences should lead to gains on educational outcomes.

 

I remember an early meeting during my first few months at Augustana College where one senior administrator turned to me and said, “We need outcomes.  What have you got?”  At many institutions, the answer would be something like, “I’ll get back to you in four years,” because that is how long it takes to gather dependable data.  Surveying students at any given point only tells you where they are at that point – it doesn’t tell you how much they’ve changed as a result of our efforts.  Although we have some outcome data from several studies that we happened to join, we still have to gather outcome data on everything else that we need to measure – and that will take time.
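
To make that point concrete, here is a minimal sketch (in Python, with made-up numbers rather than actual Augustana data) of the difference between a single-point snapshot and a measure of change:

    # A minimal sketch using hypothetical scores for the same five
    # students, measured as freshmen and again as seniors (made-up numbers).
    freshman_scores = [52, 61, 48, 70, 55]
    senior_scores = [68, 64, 60, 71, 73]

    # A single senior snapshot tells us only where students are right now.
    snapshot = sum(senior_scores) / len(senior_scores)

    # Pairing each senior score with that same student's freshman score
    # lets us estimate change, which is what we actually want to attribute
    # to our efforts.
    gains = [post - pre for pre, post in zip(freshman_scores, senior_scores)]
    average_gain = sum(gains) / len(gains)

    print(f"Senior average (snapshot): {snapshot:.1f}")            # 67.2
    print(f"Average freshman-to-senior gain: {average_gain:.1f}")  # 10.0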

 

But the other problem is one of design.  Ideally, you choose what you want to measure, and then you start measuring it.  In our case, although we have measured some outcomes, we don’t have measures on other outcomes that are equally important.  And there isn’t a very strong centering framework for what we have measured, what we have not, and why.  This is why we are having the conversation about identifying college-wide outcomes.  The results of that conversation will tell us exactly what to measure.

 

The second issue is in some ways almost more important for our own purposes.  We need to know what we should do to improve student learning – not just whether our students are learning (or not).  As we should know by now, learning doesn’t happen by magic.  There are specific experiences that accelerate learning, and certain experiences that grind it to a halt.  Once we’ve identified the outcomes that define Augustana, then we can track the experiences that precede them.  It is amazing how many times we have found that, despite the substantial amount of data we have on our students, the precise data on a specific experience is nowhere to be found because we never knew we were going to need it.  This is the primary reason for the changes I made in the senior survey this year.

 

This move from measuring what we have to assessing what we do is not a simple one and it doesn’t happen overnight.  And that is just the data collection side of the shop.  Just wait until I start talking about what we do with the data once we get it! (Cue evil laughter soundtrack!)

 

Make it a good day!

 

Mark

Student learning as I see it

At a recent faculty forum, discussion of the curricular realignment proposal turned to the question of student learning.  As different people weighed in, it struck me that, even though many of us have been using the term “student learning” for years, some of us may have different concepts in mind.  So I thought it would be a good idea, since I think I say the phrase “student learning” at least once every hour, to explain what I mean and what I think most assessment folks mean when we say “student learning.”

 

Traditionally, “student learning” was a phrase that defined itself – it referred to what students learned.  However, the intent of college teaching was primarily to transmit content and disciplinary knowledge – the stuff that we normally think of when we think of an expert in a field or a Jeopardy champion.  So the measure of student learning was the amount of content that a student could regurgitate – both in the short term and the long term.

 

Fortunately or unfortunately, the world in which we live has completely changed since the era in which American colleges and universities hit their stride.  Today, every time you use your smart phone to get directions, look up a word, or find some other byte of arcane data, it becomes painfully clear that memorizing all of that information yourself would be sort of pointless and maybe even a little silly.  Today, the set of tools necessary to succeed in life and contribute to society goes far beyond the content itself.  Now, it’s what you can do with the content.  Can you negotiate circumstances to solve difficult problems?  Can you manage an organization in the midst of uncertainty?  Can you put together previously unrelated concepts to create totally new ideas?  Can you identify the weakness in an argument and how that weakness might be turned to your advantage?

 

It has become increasingly apparent that colleges and universities need to develop in students the set of skills needed to answer “yes” to those questions.  So when people like me use the phrase “student learning,” we are referring to the development of the skill sets necessary to make magic out of content knowledge.  That has powerful implications for the way that we envision a general education or major curriculum.  It also holds powerful implications for how we think about integrating traditional classroom and out-of-class experiences in order to firmly develop those skills in students.

 

I would encourage all of us to reflect on what we think we mean when we say “student learning.”  First, let’s make sure we are all referring to the same thing when we talk about it.  Second, let’s move away from emphasizing content acquisition as the primary reflection of our educational effectiveness.  Yes, content is necessary, but it’s no longer sufficient.  Yes, content is foundational to substantive student learning, but very few people look at a completed functioning house and say, “Wow, what an amazing foundation.”  I’m just sayin’ . . .

 

Make it a good day!

 

Mark

The law of diminishing returns

Welcome back from the short holiday weekend.  I hope you got your fill of celebratory dinner and dessert and, most importantly, put the rest of your work life away to spend quality time with family and friends.

 

A lot of the discussion in my office recently has been about data gathering through surveys.  After all, it’s nearing the end of the academic year, and there are many who sincerely want to know if our students experienced Augustana College as we hoped, whether they learned what we intended them to learn, and if any one of the myriad moving parts that make up a college experience has slipped in some way that requires readjustment.

 

In the process of collecting data for one such survey – the Wabash National Study of Liberal Arts Education – we’ve seen an almost perfect example of the law of diminishing returns, which basically says that, all else being equal, each additional unit of effort yields a smaller return than the one before it.  As many seniors are conducting senior inquiry projects that involve surveys, I thought it might be of some interest to share our experience gathering data for the Wabash National Study so far, talk about what it means for gathering survey data on campus, and propose some suggestions for folks planning to collect data in the future from students, faculty, staff, or alumni.

 

As you have likely seen in some format or another, I’ve been pumping the Wabash National Study to students, faculty, and staff over the last few months because of its potential to provide key guidance on a host of questions regarding our efforts to improve student learning.  We also were able to acquire $25 gift cards as rewards for those who participate in one of our data collection events.  I’ve listed below the participation rates for each of the four data collection dates.

 

Date of Data Collection      Number of Participants
Mon, March 12                78
Mon, March 26                35
Thurs, March 29              18
Mon, April 2                 10

 

With only slight variation, participation dropped by about half with each subsequent data collection date.  This occurred despite repeated promotion, coverage in the Observer, additional promotion solicited from faculty and staff, and a consistently healthy incentive for those who participated.
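
For the data-minded, here is a quick Python sketch of the arithmetic behind that claim, using the counts from the table above:

    # Participation counts from the four Wabash data collection dates.
    counts = [78, 35, 18, 10]

    # Compare each session's turnout to the session before it.
    for previous, current in zip(counts, counts[1:]):
        print(f"{previous} -> {current}: {current / previous:.0%} of the prior turnout")

    # Prints 45%, 51%, and 56%: each session drew roughly half the
    # turnout of the one before it.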

 

It’s one thing to hear cautionary tales about this pattern – it’s another to see it so clearly play out right in front of you.  In our case, we are going to continue to host several more data collections during the month of April, but will shift from holding them at night to holding them in the middle of the morning during the convocation time.  I hope you’ll help promote these events to your seniors as you see them announced.

 

So I would strongly encourage those of you who are gathering data yourselves or guiding students in their senior inquiry projects: come up with multiple ways to gather your data, and expect that no matter what you do, your participation will slip as you continue to promote your survey.  This means that you really have one shot to get it right, and everything you can do to incentivize initial participation is worth the effort in the long run.

 

Make it a good day!

 

Mark

Moving from satisfaction to experiences – a new senior survey

One of the “exciting” parts of my job is building surveys.  I’ve worked with many of you over the past two years to construct new surveys to answer all sorts of questions.  On the one hand, it’s a pretty interesting challenge to navigate all of the issues inherent in designing what amounts to a real life “research study.”  At the same time, it can be an exhausting project because there are so many things you just can’t be sure of until you field test the survey a few times and find all of the unanticipated flaws.  But in the end, if we get good data from the new survey and learn things we didn’t know before that help us do what we do just a little bit better, it’s a pretty satisfying feeling.

As many of you already know, Augustana College has been engaged in a major change over the last several years in terms of how we assess ourselves.  Instead of determining our quality as an institution based on what we have (student incoming profile, endowment amount, etc.), we are trying to shift to determining our quality based on what we do with what we have.  Amazingly, this puts us in a very different position than many higher education institutions.  Unfortunately, it also means that there aren’t many examples on which we might model our efforts.

One of the implications of this shift involves the nature of the set of institutional data points that we collect.  Although many of the numbers we have traditionally gathered continue to be important, the measure of ourselves that we are hoping to capture is what we do with those traditional numbers.  And while we have long maintained pretty robust ways of obtaining the numbers you would see in our traditional dashboard, our mechanisms for gathering data that would help us assess what we do with what we have are not yet robust enough.

So over the last few months, I have been working with the Assessment for Improvement Committee and my student assistants to build a new senior survey.  While the older version had served its purpose well over more than a decade, it was ready for an update, if not an overhaul.

The first thing we’ve done is move from a survey of satisfaction to a survey of experiences.  Satisfaction can sometimes give you a vague sense of customer happiness, but it often falls flat in trying to figure out how to make a change – not to mention the fact that good educating can produce customer dissatisfaction if that customer had unrealistic expectations or didn’t participate in their half of the educational relationship.

The second thing we’ve done is build the senior survey around the educational and developmental outcomes of the entire college.  If our goal is to develop students holistically, then our inquiry needs to be comprehensive.

Finally, the third thing we’ve done is “walk back” our thinking from the outcomes of various aspects of the college to the way that students would experience our efforts to produce those outcomes.  So, for example, if the outcome is intercultural competence, then the question we ask is how often students had serious conversations with people who differed from them in race/ethnicity, culture, social values, or political beliefs.  We know this is a good question to ask because a host of previous research shows that the degree to which students engage in these experiences predicts their growth on intercultural competence.
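
In case it helps to see that “walking back” logic laid out explicitly, here is a simplified Python sketch.  The outcome labels and question wordings below are illustrative paraphrases, not the actual survey items:

    # An illustrative map from college-wide outcomes to the experience
    # questions that prior research links to growth on each outcome.
    # The labels and wordings here are hypothetical, not the real survey.
    outcome_to_experiences = {
        "intercultural competence": [
            "How often did you have serious conversations with people who "
            "differed from you in race/ethnicity or culture?",
            "How often did you have serious conversations with people whose "
            "social values or political beliefs differed from yours?",
        ],
        "critical thinking": [
            "How often did an assignment require you to evaluate the "
            "strength of an argument rather than simply summarize it?",
        ],
    }

    # Every question on the survey should trace back to an outcome;
    # any question that doesn't is a candidate for cutting.
    for outcome, questions in outcome_to_experiences.items():
        print(f"{outcome}: {len(questions)} experience question(s)")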

If you want to see the new senior survey, please don’t hesitate to ask.  I am always interested in your feedback.  In the meantime . . .

 

Make it a good day!

 

Mark