Making student work work

This is it.  The end of another year and my last post for a while.  Yes, I know.  I’ll miss you, too.

I guess I’m feeling a little sentimental because my two temporary partners in crime are graduating next weekend and going on to graduate school.  Cameron and Emma have been wonderfully helpful over the last two years, and Kimberly and I will miss them both!

Since we’ve been talking about the value of experiential learning opportunities over the last several years, I decided to ask Cameron and Emma if they’d like to write something short about their work experience at Augustana and its impact on their learning and development.  They jumped at the chance, so I’m gonna check out of here a little early and let them have the last word of the 12/13 academic year.

Cameron’s thoughts . . .

The ability to work on campus has become a crucial piece of my educational puzzle. Not only has it helped me support myself financially, but it has become a cornerstone of who I am. Through my work I discovered my career and future plans. The experience gained over the past two years allowed me to explore my interests in a way I could not have otherwise. Like many other positions on campus, my job allowed me to more closely align myself with a career.  The hands-on experience also gave me an edge over others as well as valuable resources I can always come back to for support.

Understandably, certain positions may not lend themselves to the same level of career planning clarity, but even on the smallest level, working on campus offers a new community of people to turn to and a way to feel more connected to the college. Without my position I would not have met nearly as many wonderful people who have helped me through the challenges I’ve experienced while at Augustana. So even if the job is as simple as that of a cashier in the bookstore or an assistant in food services, it is still beneficial.  Student work positions can be focused on being a learning experience instead of just a job.

Emma’s thoughts . . .

Having a student job here at Augustana has been beneficial to my educational and academic progression. Personally, this progression has stemmed from my ability to integrate the theoretical knowledge I have gained in classes into real-world studies and research. Instead of simply learning how to build a survey, I’ve been able to actually construct one. I did not just learn the theory behind calculating a logistic regression; I actually performed one. Student jobs should aim to teach students the possibilities, frustrations, and benefits that come from real-world work or research in their field. Because I have been able to use the knowledge gained from my courses, I am much more confident in my ability to perform research studies in graduate school and in my career field. Student positions should not be a series of tasks to provide students with a paycheck. Instead, they should encourage and push students toward tackling projects that have implications for either the practical or academic world.

Along with integrating what is learned in the classroom into the workplace, student workers should be encouraged to bring their personal and unique knowledge and experiences into their work. This chance to share my perspective as an Augustana student was very valuable to my identity and confidence as an academic and a researcher. Student workers should be given the opportunity to share their opinions, experiences, and knowledge and to see these unique contributions bring value to the discussions and work happening around us every day. Learning to vocalize our opinions, findings, and observations is essential in preparing undergraduate students for the next stage of their careers, whether that is graduate school or a job.

While my involvement as a student worker in Institutional Research has increased my skills and knowledge in many areas of statistics, research, writing, etc., it is these experiences of academic integration that stand out as the most beneficial to my growth as a student and a researcher. In the future, more student positions should implement this hands-on, practical application approach. Integrating knowledge from the classroom to the real world is an essential part of the learning process and student growth.

There’s no question that I got pretty lucky in hiring both of these students.  They’ve jumped into the deep and murky waters of college impact research and survived to tell the tale.  Moreover, they’ve made contributions that genuinely made Augustana a better place for future students.

So congratulations to Cameron and Emma.  And congrats to all of our graduates.

One piece of advice – and this goes to everyone who is expected to walk up onto the stage next Sunday – don’t trip!  It will be caught on camera by someone and end up on YouTube!

Make it a good summer,

Mark

 

Compete with MOOCs?! Why not co-opt them instead?

Since I won’t write another blog post until the beginning of spring term, I thought I’d write something a little different.  Instead of a traditional data-filled post, I am going to weigh in with a suggestion – an opinion that is merely my own, not to be confused with some broader administrative position.  I’ve been mulling this one over since the explosion of Massive Open Online Courses (MOOCs) last year, but it really came to a boil last week when I read about Scott Young and his MIT Challenge.

At first glance, Scott Young’s MIT Challenge smells like the arrogant prank of an affluent Silicon Valley prodigy.  A recent university graduate who fancies himself a blogger, writer, and “holistic learner” decides to see if he can complete the entire MIT curriculum for a computer science major in a year without enrolling in any MIT classes.  Instead, he plans to download all course materials – including lectures, homework assignments, and final exams – from MIT’s open courseware site and MIT’s edX.  He’ll only spend money on textbooks and internet access, which he estimates will cost about $2000 over the course of the entire curriculum (a paltry sum compared to the cost of attending MIT for one year – $57,010 in 2012/13).

Well, he did it (that little @$#&!).  From September 2011 to September 2012, Mr. Young completed and passed all of the course work expected of MIT students to earn a major in computer science.  And just in case you think it a braggart’s hoax, he posted all of his course work, exams, and projects to verify that he actually pulled it off.  Essentially, if he had been a paying MIT student, he would now be considered one of their alums.  He might not have graduated cum laude, but you know what they call the person who graduates last in his class from Harvard Medical School (for those of you who haven’t heard the joke, the answer is “doctor”).

My point isn’t to celebrate the accomplishments of a brash, albeit intriguing, young man from Manitoba (wouldn’t you know it, this guy turns out to be Canadian!).  In the context of the academic tendencies we all too often see in students, his feat suggests more that he is an outlier among young adults than that a tsunami of self-directed learners is headed our way.

Rather, the simple fact that the full curriculum of a computer science degree from MIT is already freely available online should blow up any remaining notion that we, or any other small liberal arts college, can continue to act as if we are the lone gatekeepers of postsecondary content knowledge.  The ubiquitous availability of this kind of content knowledge delivered freely in educationally viable ways makes many a small college’s course catalogue seem like a quaint relic of a nostalgic past.  Moreover, if any major we offer is merely, or even mostly, an accumulation of content-heavy survey courses and in-depth seminars, we make ourselves virtually indistinguishable from an exponentially expanding range of educational options – except for our exorbitant cost.  And though we might stubbornly argue that our classes are smaller, our faculty more caring, or the expectations more demanding (all of which may indeed be so!), if the education we offer appears to prospective students as if it differs little from far less expensive educational content providers (e.g., general education is designed to provide content introductions across a range of disciplines, majors are organized around time periods, major theoretical movements, or subfields, students earn majors or minors in content-heavy areas), we increase the likelihood that future students will choose the less expensive option – even as they may whole-heartedly agree that we are marginally better.  And if those less expensive providers happen to be prestigious institutions like MIT, we are definitely in trouble.  For even if there is a sucker born every minute, I doubt there will be many who are willing to borrow gargantuan sums of money to pay for the same content knowledge that they can acquire for 1/100th of the cost – especially when they can supplement it on their own as needed.

Admittedly, I am trying to be provocative.  But please note that I haven’t equated “content knowledge” with “an education.”  Because in the end, the bulk of what Mr. Young acquired was content knowledge.  He’d already earned an undergraduate degree in a traditional setting and, by all indications, seems to have benefited extensively from that experience.  At Augustana, our educational mission has always been about much more than content knowledge.  This reality is clearly articulated in the composition of our new student learning outcomes.  We have recognized that content knowledge is a necessary but by no means sufficient condition of a meaningful education.  With this perspective, I’d like to suggest that we explicitly cast ourselves in this light: as guides who help students evaluate, process, and ultimately use that knowledge.  This doesn’t mean that we devalue content knowledge.  Rather, it means that we deliberately position content as a means to a greater end, more explicitly designing every aspect of our enterprise to achieve it.  Incidentally, this also gives us a way to talk about the educational value of our co-curricular experiences that directly ties them to our educational outcomes and makes them less susceptible to accusations of edu-tainment, extravagance, or fluff.

To date, the vast majority of successful MOOCs and online programs focus on traditional content knowledge delivery or skill development specific to a given profession.  The research on the educational effectiveness of online courses suggests that while online delivery can be at least as effective as face-to-face courses in helping students develop and retain content knowledge and lower-order thinking skills, face-to-face courses tend to be more effective in developing higher-order thinking skills.  So if our primary focus is on showing students how to use the knowledge they have acquired to achieve a deeper educational goal rather than merely delivering said content to them, then . . . .

What if, instead of fearing the “threat” of MOOCs and online learning, we chose to see them as a wonderful cost- and time-saving opportunity?  What if we were to co-opt the power and efficiency of MOOCs and other online content delivery mechanisms to allow us to focus more of our time and face-to-face resources on showing students how to use that knowledge?  I don’t begin to claim to have a fully fleshed-out model of what all of this would look like (in part because I don’t think there is a single model of how an institution might pull this off), but it seems to me that if we choose to see the explosion of online learning possibilities as a threat, we drastically shorten our list of plausible responses (i.e., ignore them and hope they go away or try to compete without a glimmer of the resources necessary to do so).  On the other hand, if we co-opt the possibilities of online learning and find ways to fit them into our current educational mission, our options are as broad as the possibilities are endless.  I guess I’d rather explore an expanding horizon.  Enjoy your break.

Make it a good day,

Mark

 

Big Data, Intuition, and the Potential of Improvisation

Welcome back to the second half of winter term!  As nice as it is to walk across campus in the quiet calm of a fresh new year (ignoring the giant pounding on top of the library for the moment), it’s a comfort to see faculty and students bustling between buildings again and feel the energy of the college reignited by everyone’s return.

Over the last several weeks, I’ve been trying to read the various higher ed opinionators’ perspectives on MOOCs (Massive Open Online Courses) and the implications they foresee for colleges like Augustana.  Based on what I’ve read so far, we are either going to 1) thrive without having to change a thing, 2) shrivel up and die a horrible death sometime before the end of the decade, or 3) see lots of changes that will balance each other out and leave us somewhere in the middle.  In other words – no one has a clue.  But this hasn’t stopped many a self-appointed Nostradamus (Nostradami? Nostradamuses?) from rattling off a slew of statistics to make their case: the increasing number of students taking online courses, the number of schools offering online courses, the hundreds of thousands of people who sign up for MOOCs, the shifting demographics of college students, blah blah blah.  After all, as these prognosticators imply, historical trends predict the future.

Except when they don’t.  A recent NYT article, Sure, Big Data Is Great, But So Is Intuition, highlights the fundamental weakness in thinking that a massive collection of data gathered from individual behaviors (web-browsing, GPS tracking, social network messaging, etc.) inevitably holds the key to a brighter future.  As the article puts it, “The problem is that a math model, like a metaphor, is a simplification. This type of modeling came out of the sciences, where the behavior of particles in a fluid, for example, is predictable according to the laws of physics.”  The article goes on to point out the implications of abiding by this false presumption, such as the catastrophic failure of financial modeling to predict the world-wide economic collapse of 2008.  I particularly like the way that the article summarizes this cautionary message.  “Listening to the data is important, they [experts interviewed for the article] say, but so is experience and intuition.  After all, what is intuition at its best but large amounts of data of all kinds filtered through a human brain rather than a math model?”

This is where experience and intuition intersect with my particular interest in improvisation.  When done well, improvisation is not merely random actions.  Instead, good improvisation occurs when the timely distillation of experience and observation coalesces through intuition to emerge in an action that both resolves a dilemma and introduces opportunity.  Improvisation is the way that we discover a new twist in our teaching that magically “just seemed to work.”  Those moments aren’t about luck; they materialize when experience meets intuition meets trust meets action.  Only after reflecting on what happened are we able to figure out the “why” and the “how” in order to replicate the new innovation onto which we have stumbled.  Meanwhile, back in the moment, it feels like we are just “in a zone.”

Of course, improvisation is no more a guarantee of perfection than predictive modeling.  That is because the belief that one can somehow achieve perfection in educating is just as flawed as the fallacy of predictive modeling.  Statisticians are taught to precede findings with the phrase “all else remaining constant . . . ”  But in education, that has always been the supremely ironic problem.  Nothing remains constant.  So situating evidence of a statistically significant finding within the real and gnarly world of teaching and learning requires sophisticated thinking borne of extensive experience and keen intuition.

Effective improvising emerges when we are open to its possibilities – individually and collectively.  It’s just a matter of letting our experience morph into intuition in a context of trust that spurs us to act.  Just because big data isn’t the solution that some claim it to be doesn’t mean that we batten down the hatches, pretend that MOOCs and every other innovation in educational technology don’t exist, and keep doing what we’ve always done (only better, faster, smarter, more, more, more . . . ).  Effective improvising is always preceded by intuition that is informed by some sort of data analysis.  When asked why they did what they did, successful improvisers can often explain in detail the thought processes that spurred them to take a particular action or utter a particular line.  In the same way, we know a lot about how our students learn and what seems to work well in extending their learning.  Given that information, I believe that we have all of the experience and knowledge to improvise successfully.  We just need to flip the switch (“Lights, Action, Improv!”).

Early in the spring term, I’ll host a Friday Conversation where I’ll teach some ways to apply the principles of improvisation to our work.  Some of you may remember that I did a similar session last year – although you may have repressed that memory if you were asked to volunteer for one of the improv sketches.

In the meantime, I hope you’ll open yourself up to the potential of improvisation.  Enjoy your return to the daily routine.  It’s good to have you back.

Make it a good day,

Mark

 

Grades and Assessing Student Learning (can’t we all just get along?)

During a recent conversation about the value of comprehensive student learning assessment, one faculty member asked, “Why should we invest time, money, and effort to do something that we are essentially already doing every time we assign grades to student work?”  Most educational assessment zealots would respond by launching into a long explanation of the differences between tracking content acquisition and assessing skill development, the challenges of comparing general skill development across disciplines,  the importance of demonstrating gains on student learning outcomes across an entire institution, blah blah blah (since these are my peeps, I can call it that).  But from the perspective of an exhausted professor who has been furiously slogging through a pile of underwhelming final papers, I think the concern over a substantial increase in faculty workload is more than reasonable.  Why would an institution or anyone within it choose to be redundant?

If a college wants to know whether its students are learning a particular set of knowledge, skills, and dispositions, it makes good sense to track the degree to which that is happening.  But we make a grave mistake when we require additional processes and responsibilities from those “in the trenches” without thinking carefully about the potential for diminishing returns in the face of added workload (especially if that work appears to be frivolous or redundant).  So it would seem to me that any conversation about assessing student learning should emphasize the importance of efficiency so that faculty and staff can continue to fulfill all the other roles expected of them.

This brings me back to what I perceive to be an odd disconnect between grading and outcomes assessment on most campuses.  It seems to me that if grading and assessment are both intent on measuring learning, then there ought to be a way to bring them closer together.  Moreover, if we want assessment to be truly sustainable (i.e. not kill our faculty), then we need to find ways to link, if not unify, these two practices.

What might this look like?  For starters, it would require conceptualizing the content learned in a course as the delivery mechanism for skill and disposition development.  Traditionally, I think we’ve envisioned this relationship in reverse order – that skills and dispositions are merely the means for demonstrating content acquisition – with content acquisition becoming the primary focus of grading.  In this context, skills and dispositions become a sort of vaguely mysterious red-headed stepchild (with apologies to stepchildren, redheads, and the vaguely mysterious).  More importantly, if we are now focusing on skills and dispositions, this traditional context necessitates an additional process of assessing student learning.

However, if we reconceptualize our approach so that content becomes the raw material with which we develop skills and dispositions, we could directly apply our grading practices in the same way.  One would assign a proportion of the overall grade to the necessary content acquisition, and the rest of the overall grade (apportioned as the course might require) to the development of the various skills and dispositions intended for that course.  In addition to articulating which skills and dispositions each course would develop and the progress thresholds expected of students in each course, this means that we would have to be much more explicit about the degree to which a given course is intended to foster improvement in students (such as a freshman level writing course) as opposed to a course designed for students to demonstrate competence (such as a senior level capstone in accounting procedures).  At an even more granular level, instructors might define individual assignments within a given course to be graded for improvement earlier in the term with other assignments graded for competence later in the term.
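
To make that apportionment concrete, here is a minimal sketch of how a grade might be computed once content and each skill or disposition carry explicit weights.  The component names, weights, and scores are hypothetical illustrations, not a prescribed scheme:

```python
# Hypothetical example: a course grade built from an explicit weight for
# content acquisition plus weights for each skill/disposition the course
# is designed to develop. All names and numbers are illustrative.

course_weights = {
    "content_acquisition": 0.40,    # e.g., exams and quizzes on course content
    "written_communication": 0.35,  # graded for improvement early, competence late
    "complex_reasoning": 0.25,      # e.g., a final analytical project
}

def course_grade(scores, weights):
    """Combine component scores (0-100) into a weighted course grade."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[part] * weight for part, weight in weights.items())

# One student's component scores on a 0-100 scale.
student_scores = {
    "content_acquisition": 88,
    "written_communication": 75,
    "complex_reasoning": 82,
}
print(f"Course grade: {course_grade(student_scores, course_weights):.1f}")  # ~82
```

The appeal of this structure is less the arithmetic than the bookkeeping: the skill and disposition components become visible, reportable units, so the same numbers could feed both a course grade and an outcomes assessment without a second grading pass.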

I recognize that this proposal flies in the face of some deeply rooted beliefs about academic freedom that faculty, as experts in their field, should be allowed to teach and grade as they see fit. When courses were about attaining a specific slice of content, every course was an island.  17th century British literature?  Check.  The sociology of crime?  Check.  Cell biology?  Check.  In this environment, it’s entirely plausible that faculty grading practices would be as different as the topography of each island.  But if courses are expected to function collectively to develop a set of skills and/or dispositions (e.g., complex reasoning, oral and written communication, intercultural competence), then what happens in each course is irrevocably tied to what happens in previous and subsequent courses.  And it follows that the “what” and “how” of grading would be a critical element in creating a smooth transition for students between courses.

In the end it seems to me that we already have all of the mechanisms in place to embed robust learning outcomes assessment into our work without adding any new processes or responsibilities to our workload.  However, to make this happen we need to 1) embrace all of the implications of focusing on the development of skills and dispositions while shifting content acquisition from an end to a means to a greater end, and 2) accept that the educational endeavor in which we are all engaged is a fundamentally collaborative one and that our chances of success are best when we focus our individual expertise toward our collective mission of learning.

Make it a good day,

Mark

 

Finding the ideal balance between faculty and administrators

During the term break, the Chronicle of Higher Education reviewed a research paper about the impact of the faculty-administrator ratio on institutional costs.  The researchers were seeking evidence to test the long-standing hypothesis that rising costs in higher education can be attributed to an ever-growing administrator class.  The paper’s authors found that the ideal ratio of faculty to administrators at large research institutions was 3:1 and that institutions with a lower ratio (fewer faculty per administrator) tend to be more expensive.

Even though we are a small liberal arts college and not the type of institution on which this study focused, I wondered what our ratio might look like.  I am genuinely curious about the relationship between in-class educators (faculty) and out-of-class educators (student affairs staff) because we often emphasize our belief in the holistic educational value of a residential college experience.  In addition, since some have expressed concern about a perceived increase in administrative positions, I thought I’d run our numbers and see what turns up.

Last year, Augustana employed 184 full-time, tenured or tenure-track faculty and 65 administrators.  Thus, the ratio of faculty to administrators was 2.8 to 1.  If we include faculty FTE and administrator FTE (which means we count all part-time folks as one-third of a full-time employee and add them to the equation), the ratio becomes 3.35 to 1.  By comparison, in 2003 (the earliest year in which this data was reported to IPEDS), our ratio of full-time, tenured or tenure-track faculty (145) to administrators (38) was 3.82 to 1.  When using FTE numbers, that ratio slips to 4.29 to 1.
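
For anyone who wants to check the arithmetic, here is a minimal sketch of the FTE calculation.  The full-time counts are the ones reported above; the part-time counts are hypothetical placeholders (they aren’t reported here), back-solved only so that the sketch reproduces the 3.35-to-1 figure:

```python
# Each part-time employee counts as one-third of a full-time equivalent (FTE).
# Full-time counts below are from this post; part-time counts are guesses.

def fte(full_time, part_time):
    """Full-time equivalents, counting each part-timer as 1/3."""
    return full_time + part_time / 3

faculty_fte = fte(full_time=184, part_time=111)  # 111 is a hypothetical count
admin_fte = fte(full_time=65, part_time=3)       # 3 is a hypothetical count

print(f"Headcount ratio: {184 / 65:.2f} to 1")           # 2.83 to 1
print(f"FTE ratio: {faculty_fte / admin_fte:.2f} to 1")  # 3.35 to 1
```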

What should we make of this?  On its face, it appears that we’ve suffered from the same disease that has infected many larger institutions.  Over about ten years, the balance of faculty to administrators has shifted even though we have increased the size of the faculty considerably.  But if you consider these changes in the context of our students (something that seems to me to be a rather important consideration), the results paint a different picture.  For even though our ratio of faculty to administrators might have shifted, our ratios of students to faculty and students to administrators have moved in similar directions over the same period, with the student/faculty ratio going from about 14:1 to just over 11:1 and our student/administrator ratio going from about 51:1 to close to 39:1.  Proportionally, both ratios drop by about 20%.

For me, these numbers inspire two questions that I think are worth considering.  First, although the absolute number of administrators includes a wide variety of campus offices, a substantial proportion of “administrators” work in student affairs.  And there seems to be some disparity between the nature of the educational relationship we find acceptable between students and in-class educators (faculty) and the one we find acceptable between students and out-of-class educators (those administrators who work in student affairs).  There’s a lot to sort out here (and I certainly don’t have it all pegged), but this disparity doesn’t seem to match up with the extent to which we believe that important student learning and development happens outside of the classroom.  Now I am not arguing that the student/administrator ratio should approach 11:1.  Admittedly, I have no idea what the ideal student/faculty ratio or student/administrator ratio should be (although, like a lot of things, distilling that relationship down to one ratio is probably our first big mistake).  Nonetheless, I suspect we would all benefit from a deeper understanding of the way in which our student affairs professionals impact our students’ development.  As someone who spends most of my time in the world of academic affairs, I wonder whether my own efforts to support this aspect of the student learning experience have matched the degree to which we believe it is important.  Although I talk the talk, I’m not sure I’ve fully walked the walk.

Second, examining the optimal ratio between faculty and administrators doesn’t seem to have much to do with student learning.  I fear that posing this ratio without a sense of the way in which we collaboratively contribute to student learning just breathes life into an administrator vs. faculty meme that tends to pit one against the other.  If we start with a belief that there is an “other side,” and we presume the other side to be the opposition before we even begin a conversation, we are dead in the water.

Our students need us to conceptualize their education in the same way that they experience it – as one comprehensive endeavor.  We – faculty, administrators, admissions staff, departmental secretaries, food service staff, grounds crew, Board of Trustees – are all in this together.  And from my chair, I can’t believe how lucky I am to be one of your teammates.

Make it a good day,

Mark

 

Talking, albeit eloquently, out of both sides of our mouths

Many of my insecurities emerge from a very basic fear of being wrong.  Worse still, my brain takes it one step further, playing this fear out through the infamous squirm-inducing dream in which I am giving a public presentation somewhere only to discover in the middle of it that my pants lie in a heap around my ankles.  But in my dream, instead of acknowledging my “problem,” buckling up, and soldiering on, I inexplicably decide that if I just pretend not to notice anything unusual, then no one in the audience will notice either.  Let’s just say that this approach doesn’t work out so well.

It’s pretty hard to miss how ridiculous this level of cognitive contortionism sounds.  Yet this kind of foolishness isn’t the exclusive province of socially awkward bloggers like me.  In the world of higher education we sometimes hold obviously contradictory positions in plain view, trumpeting head-scratching non sequiturs with a straight face.  Although this exercise might convince many, including ourselves, that we are holding ourselves accountable to our many stakeholders, we actually make it harder to meaningfully improve because we don’t test the underlying assumptions that set the stage for these moments of cognitive dissonance.  So I’d like to wrestle with one of these “conundrums” this week: the ubiquitous practice of benchmarking in the context of a collective uncertainty about the quality of higher education – admitting full well that I may be the one who ends up pinned to the mat crying “uncle.”

It’s hard to find a self-respecting college these days that hasn’t already embedded the phrase “peer and aspirant groups” deep into its lexicon of administrator-speak.  This phrase refers to the practice of benchmarking – a process to support internal assessment and strategic planning that was transplanted from the world of business several decades ago.  Benchmarking uses two groups of other institutions to assess one’s own success and growth.  An institution starts by choosing a set of metrics to identify two groups of colleges: a set of schools that are largely similar at present (peers) and a set of schools that represent a higher tier to which it might aspire (aspirants).  The institution then uses these two groups as yardsticks to assess its efforts toward two ends (a rough sketch of the peer-selection step follows the list below):

  1. improved efficiency (i.e., outperforming similarly situated peers on a given metric), or
  2. increased effectiveness (i.e., equaling or surpassing a marker already attained by colleges at the higher tier to which the institution aspires).
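
For what it’s worth, here is a rough sketch of how the peer-selection step is often handled quantitatively – standardize each metric across institutions, then rank schools by their distance from the home institution’s profile.  This isn’t drawn from any particular college’s process, and every name and number below is invented for illustration:

```python
# Sketch of metric-based peer identification: z-score each metric, then
# rank institutions by Euclidean distance from the home institution.
import math
from statistics import mean, stdev

# Metric order: (enrollment, endowment in $M, six-year graduation rate).
schools = {
    "Home College": (2500, 120, 0.78),
    "College A":    (2400, 110, 0.80),
    "College B":    (2700, 300, 0.88),
    "College C":    (1900,  90, 0.72),
    "College D":    (2600, 140, 0.79),
}

# Standardize each metric so that no single scale dominates the distance.
columns = list(zip(*schools.values()))
means = [mean(col) for col in columns]
sds = [stdev(col) for col in columns]
z = {name: [(v - m) / s for v, m, s in zip(vals, means, sds)]
     for name, vals in schools.items()}

# The nearest schools are peer candidates; distant, higher-performing
# schools are candidates for the aspirant list.
home = z["Home College"]
for name in sorted(schools, key=lambda n: math.dist(home, z[n])):
    if name != "Home College":
        print(f"{name}: distance {math.dist(home, z[name]):.2f}")
```

Standardizing first matters: without it, a metric measured in the thousands (enrollment) would drown out one measured in fractions (graduation rate).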

Sometimes this practice is useful, especially in setting goals for statistics like retention rates, graduation rates, or a variety of operational measures.  However, sometimes this exercise can unintentionally devolve into a practice of gaming, in which comparisons with the identified peer group too easily shine a favorable light on the home institution, while comparisons with the aspirant group are too often interpreted as evidence of how much the institution has accomplished in spite of its limitations.  Nonetheless, this practice seems to be largely accepted as a legitimate way of quantifying quality.  So in the end, our “go-to” way of demonstrating value and a commitment to quality is inescapably tethered to how we compare ourselves to other colleges.

At first, this seems like an entirely reasonable way to assess quality.  But it depends on one  fundamental assumption: the idea that, on average, colleges are pretty good at what they do.  Unfortunately, the last decade of research on the relative effectiveness of higher education suggests that, at the very least, the educational quality of colleges and universities is uneven, or at worst, that the entire endeavor is a fantastically profitable house of cards.

No matter which position one takes, it seems extraordinarily difficult to simultaneously assert that the quality of any given institution is somewhere between unknown and dicey, while at the same time using a group of institutions – most of which we know very little about beyond some cursory, outer layer statistics – as a basis for determining one’s own value.  It’s sort of like the sixth grade boy who justifies his messy room by suggesting that it’s cleaner than all of his friends’ rooms.

My point is not to suggest that benchmarking is never useful or that higher education is not in need of improvement.  Rather, I think that we have to be careful about how we choose to measure our success.  I think we need to be much more willing to step forward and spell out what we think success should look like, regardless of what other institutions are doing or not doing.  In my mind, this means starting by selecting a set of intended outcomes, defining clearly what success will look like, and then building the rest of what we do in a purposeful way around achieving those outcomes.  Not only does this give us a clear direction that can be simply described to people within and without our own colleges, but it also gives us all the pieces necessary to build a vibrant feedback loop to assess and improve our efforts and our progress.

I fully understand the allure of “best practices” – the idea that we can do anything well simply by figuring out who has already done it well and then copying what they do.  But I’ve often seen the best of best practices quickly turn into worst practices when plucked out of one setting and dropped wholesale into a different institutional culture.  Maybe we’d be better off paying less attention to what everyone else does and concentrating instead on designing a learning environment that starts with the end in mind and uses all that we already know about college student development, effective teaching, and how people learn.  It might look a lot different than the way that we do it now.  Or it might not look all that different, despite being substantially more effective.  I don’t know for sure.  But it’s got to be more effective than talking, albeit eloquently, out of both sides of our mouths.

Make it a good day,

Mark

 

What’s in a name?

When I first floated the idea of a weekly column, everyone in the Dean’s office seemed to be on board.  But when I proposed calling it “Delicious Ambiguity,” I got more than a few funny looks.  Although these looks could have been a mere byproduct of the low-grade bewilderment that I normally inspire, let’s just say for the sake of argument that they were largely triggered by the apparent paradox of the measurement guy writing a column that seems to advocate winging it.  So let me tell you a little bit about the origins of the phrase “Delicious Ambiguity” and why I think it embodies the real purpose of Institutional Research and Assessment.

This particular phrase is part of a longer quote from Gilda Radner – a brilliant improvisational comedian and one of the early stars of Saturday Night Live.  The line goes like this:

“Life is about not knowing, having to change, taking the moment and making the best of it, without knowing what’s going to happen next.  Delicious Ambiguity.”

For those of you who chose a career in academia specifically to reduce ambiguity, this statement probably inspires a measure of discomfort.  And there is a part of me that admittedly finds some solace in the task of isolating statistically significant “truths.”  I suppose I could have named this column “Bland Certainty,”  but – in addition to single-handedly squelching reader interest – such a title would suggest that my only role at Augustana is to provide final answers – nuggets of fact that function like the period at the end of a sentence.

Radner’s view of life is even more intriguing because she wrote this sentence as her body succumbed to cancer.  For me, her words exemplify intentional – if not stubborn – optimism in the face of darkly discouraging odds.  I have seen this trait repeatedly demonstrated in many of you over the last several years as you have committed yourselves to helping a particular student even as that student seems entirely uninterested in learning.

Some have asserted that a college education is a black box; some good can happen, some good does happen – we just don’t know how it happens.  On the contrary, we actually know a lot about how student learning and development happens – it’s just that student learning doesn’t work like an assembly line.  Instead, student learning is like a budding organism that depends on the conduciveness of its environment – a condition that emerges through the interaction between the learner and the learning context.  And because both of these factors perpetually influence each other, we are most successful in our work to the degree that we know which educational ingredients to introduce, how to introduce them, and when to stir them into the mix.  The exact sequence of the student learning process is, by its very nature, ambiguous because it is unique to each individual learner.

In my mind, the act of educating is deeply satisfying precisely because of its unpredictability.  Knowing that we can make a profound difference in a young person’s life – a difference that will ripple forward and touch the lives of many more long after a student graduates – has driven many of us to extraordinary effort and sacrifice even as the ultimate outcome remains admittedly unknown.  What’s more, we look forward to that moment when our perseverance suddenly sparks a flicker of unexpected light that we know increases the likelihood – no matter how small – that this person will blossom into the life-long student we believe they can be.

The purpose of collecting educational data should be to propel us – the teacher and the student – through this unpredictability, to help us navigate the uncertainty that comes with a process that is so utterly dependent upon the perpetually reconstituted synergy between teacher and student.  The primary role of Institutional Research and Assessment is to help us figure out the very best ways to cultivate – and, in just the right ways, manipulate – this process.  The evidence of our success isn’t a result at the end of this process.  The evidence of our success is the process.  And if we pool our collective expertise and focus on cultivating the quality, depth, and inclusiveness of that process, it isn’t outlandish at all to believe that our efforts can put our students on a path that someday just might change the world.

To me, this is delicious ambiguity.

Make it a good day,

Mark

 

From “what we have” to “what we do with it”

We probably all have a good example of a time when we decided to make a change – maybe drastic, maybe minimal – only to realize later the full ramifications of that change (“Yikes! Now I remember why I grew a beard.”).  This is the problem with change – our own little lives aren’t as discretely organized as we’d like to think, and there are always unintended consequences and surprise effects.

 

When Augustana decided to move from measuring itself based on the quality of what we have (incoming student profile, endowment, number of faculty, etc.) to assessing our effectiveness based on what we do (student learning and development, educational improvement and efficiency, etc.), I don’t think we fully realized the ramifications of this shift.  Although there are numerous ways in which this shift is impacting our work, I’d like to talk specifically about the implications of this shift in terms of institutional data collection and reporting.

 

First, let’s get two terms clarified.  When I say “outcomes” I mean the learning that results from educating.  When I say “experiences” I mean the experiences that students have during the course of their college career.  They could be simply described by their participation in a particular activity (e.g., a philosophy major) or they could be more ambiguously described as the quality of a student’s interaction with faculty.  Either way, the idea is – and has always been – that student experiences should lead to gains on educational outcomes.

 

I remember an early meeting during my first few months at Augustana College where one senior administrator turned to me and said, “We need outcomes.  What have you got?”  At many institutions, the answer would be something like, “I’ll get back to you in four years,” because that is how long it takes to gather dependable data.  Just surveying students at any given point only tells you where they are at that point – it doesn’t tell you how much they’ve changed as a result of our efforts.  Although we have some outcome data from several studies that we happened to join, we still have to gather outcome data on everything that we need to measure – and that will take time.

 

But the other problem is one of design.  Ideally, you choose what you want to measure, and then you start measuring it.  In our case, although we have measured some outcomes, we don’t have measures on other outcomes that are equally important.  And there isn’t a very strong centering framework for what we have measured, what we have not, and why.  This is why we are having the conversation about identifying college-wide outcomes.  The results of that conversation will tell us exactly what to measure.

 

The second issue is in some ways almost more important for our own purposes.  We need to know what we should do to improve student learning – not just whether our students are learning (or not).  As we should know by now, learning doesn’t happen by magic.  There are specific experiences that accelerate learning, and certain experiences that grind it to a halt.  Once we’ve identified the outcomes that define Augustana, then we can track the experiences that precede them.  It is amazing how many times we have found that, despite the substantial amount of data we have on our students, the precise data on a specific experience is nowhere to be found because we never knew we were going to need it.  This is the primary reason for the changes I made in the senior survey this year.

 

This move from measuring what we have to assessing what we do is not a simple one and it doesn’t happen overnight.  And that is just the data collection side of the shop.  Just wait until I start talking about what we do with the data once we get it! (Cue evil laughter soundtrack!)

 

Make it a good day!

 

Mark

Student learning as I see it

At a recent faculty forum, discussion of the curricular realignment proposal turned to the question of student learning.  As different people weighed in, it struck me that, even though many of us have been using the term “student learning” for years, some of us may have different concepts in mind.  So I thought it would be a good idea, since I think I say the phrase “student learning” at least once every hour, to explain what I mean and what I think most assessment folks mean when we say “student learning.”

 

Traditionally, “student learning” was a phrase that defined itself – it referred to what students learned.  However, the intent of college teaching was primarily to transmit content and disciplinary knowledge – the stuff that we normally think of when we think of an expert in a field or a Jeopardy champion.  So the measure of student learning was the amount of content that a student could regurgitate – both in the short term and the long term.

 

Fortunately or unfortunately, the world in which we live has completely changed since the era in which American colleges and universities hit their stride.  Today, every time you use your smart phone to get directions, look up a word, or find some other byte of arcane data, it becomes painfully clear that memorizing all of that information yourself would be sort of pointless and maybe even a little silly.  Today, the set of tools necessary to succeed in life and contribute to society goes far beyond the content itself.  Now, it’s what you can do with the content.  Can you negotiate circumstances to solve difficult problems?  Can you manage an organization in the midst of uncertainty?  Can you put together previously unrelated concepts to create totally new ideas?  Can you identify the weakness in an argument and how that weakness might be turned to your advantage?

 

It has become increasingly apparent that colleges and universities need to develop in students the set of skills needed to answer “yes” to those questions.  So when people like me use the phrase “student learning,” we are referring to the development of the skill sets necessary to make magic out of content knowledge.  That has powerful implications for the way that we envision a general education or major curriculum.  It also holds powerful implications for how we think about integrating traditional classroom and out-of-class experiences in order to firmly develop those skills in students.

 

I would encourage all of us to reflect on what we think we mean when we say “student learning.”  First, let’s make sure we are all referring to the same thing when we talk about it.  Second, let’s move away from emphasizing content acquisition as the primary reflection of our educational effectiveness.  Yes, content is necessary, but it’s no longer sufficient.  Yes, content is foundational to substantive student learning, but very few people look at a completed functioning house and say, “Wow, what an amazing foundation.”  I’m just sayin’ . . .

 

Make it a good day!

 

Mark

Moving from satisfaction to experiences – a new senior survey

One of the “exciting” parts of my job is building surveys.  I’ve worked with many of you over the past two years to construct new surveys to answer all sorts of questions.  On the one hand, it’s a pretty interesting challenge to navigate all of the issues inherent in designing what amounts to a real life “research study.”  At the same time, it can be an exhausting project because there are so many things you just can’t be sure of until you field test the survey a few times and find all of the unanticipated flaws.  But in the end, if we get good data from the new survey and learn things we didn’t know before that help us do what we do just a little bit better, it’s a pretty satisfying feeling.

As many of you already know, Augustana College has been engaged in a major change over the last several years in terms of how we assess ourselves.  Instead of determining our quality as an institution based on what we have (student incoming profile, endowment amount, etc.), we are trying to shift to determining our quality based on what we do with what we have.  Amazingly, this places us in a very different place than many other higher education institutions.  Unfortunately, it also means that there aren’t many examples on which we might model our efforts.

One of the implications of this shift involves the nature of the set of institutional data points that we collect.  Although many of the numbers we have traditionally gathered continue to be important, the measure of ourselves that we are hoping to capture is what we do with those traditional numbers.  And while we have long maintained pretty robust ways of obtaining the numbers you would see in our traditional dashboard, our mechanisms for gathering data that would help us assess what we do with what we have are not yet robust enough.

So over the last few months, I have been working with the Assessment for Improvement Committee and my student assistants to build a new senior survey.  While the older version had served its purpose well over more than a decade, it was ready for an update, if not an overhaul.

The first thing we’ve done is move from a survey of satisfaction to a survey of experiences.  Satisfaction can sometimes give you a vague sense of customer happiness, but it often falls flat in trying to figure out how to make a change – not to mention the fact that good educating can produce customer dissatisfaction if that customer had unrealistic expectations or didn’t participate in their half of the educational relationship.

The second thing we’ve done is build the senior survey around the educational and developmental outcomes of the entire college.  If our goal is to develop students holistically, then our inquiry needs to be comprehensive.

Finally, the third thing we’ve done is “walk back” our thinking from the outcomes of various aspects of the college to the way that students would experience our efforts to produce those outcomes.  So, for example, if the outcome is intercultural competence, then the question we ask is how often students had serious conversations with people who differed from them in race/ethnicity, culture, social values, or political beliefs.  We know this is a good question to ask because we know from a host of previous research that the degree to which students engage in these experiences predicts their growth in intercultural competence.

If you want to see the new senior survey, please don’t hesitate to ask.  I am always interested in your feedback.  In the meantime . . .

 

Make it a good day!

 

Mark