The Fallacy of Matching Majors with Careers

It seems that most of the talk in recent months about the ROI (return on investment) of a college degree from a given institution has focused on the degree to which new graduates from that institution can get well-paying jobs related to their major.  For liberal arts colleges and those of us who believe in the importance of a well-rounded education, the whole idea of assuming an inherent connection between major choice and career seems problematic.  Not only are there plenty of majors that don’t have a natural correlate on the job market (philosophy comes to mind), but we are also regularly bombarded with claims that individuals in today’s world will hold multiple jobs in multiple professions over the course of their working careers. Thus it seems odd to suggest that a college’s effectiveness could be pinned to the proportion of graduates who have landed jobs in their field within six months of graduation.

One data point from our survey of recent graduates seems to highlight this conundrum. Nine months after a class of seniors graduates, we ask them to complete a survey that asks a variety of questions about their current status, the degree to which their Augustana experience helped prepare them for their present circumstance, and the degree to which they believe that they are on the right long-term path.

One of the questions we asked our 2012 graduates last spring (about nine months after they had received their BA degrees from Augustana) was:

“Have your long-term professional goals changed since you graduated from Augie?”

The distribution of responses was revealing.

[Chart: distribution of responses across the options “Not at all,” “A little,” “Somewhat,” “Substantially,” and “Completely”]
In other words, fewer than 50% of the 2012 graduating class considered themselves on the exact same long-term path that they were on when they walked across the stage to collect their diplomas.  In addition, over a quarter of the respondents said that their long-term goals had changed “somewhat,” “substantially,” or “completely.”

I believe the result of this single question holds critical implications for our efforts to best prepare our students to succeed after college.  First of all, this finding supports what we already know to be true – many of our students are going to change their long-term goals during their first several years after graduation. This is what happens to young people during their first foray into the world of working adulthood. We would be foolish to tie ourselves too tightly to a data point that doesn’t allow for these natural developments in the life of a young adult.

Second, rather than mere job or graduate school placement, we would be smart to begin thinking about our students’ post-graduate success in terms of direction and momentum. Our students need to develop a clear sense of direction in order to decide what the best “next step” is for them. In addition, our job is to help them know when to take that “next step,” whether it be getting into the right graduate school or finding the right job or taking advantage of a once-in-a-lifetime opportunity that will better position them to move in the direction they have chosen for themselves. If we can do that, then no matter what happens to our students in the years after they graduate, they will be better able to succeed in the face of life’s inevitable challenges.

In concert with a sense of direction, our students need momentum.  This momentum should be self-perpetuating, cultivated by the right mix of motivations to handle setbacks and success. More importantly, it needs to be strong enough to thrive in the midst of a change in direction. This means that we develop their ability to be autonomous while holding themselves to high standards.  It means that they know how to be strategic in staying true to themselves and their goals no matter the distractions that might appear.

This doesn’t mean that we shouldn’t care about our students’ success in applying to graduate school or entry-level jobs in a given profession. On the contrary, we absolutely should care about statistics like these – especially if they support a student’s chosen direction and momentum.  But we should remember that a successful life isn’t etched in stone upon graduation from college.  And we should have the courage to track our students’ life trajectories in a way that doesn’t limit us or them.

Make it a good day,




How does student learning happen?

Since it’s finals week, I’ll be quick.  However, I hope you’ll take some time to think about this little tidbit below as our strategic planning conversations examine how we are going to make sure that every student develops the ability to integrate ideas to solve complex problems.

I saw George Kuh give a talk on Saturday afternoon in which he showed the following cartoon.  Even though the whole audience found it funny, the point he was trying to make about the degree to which we often fail to ensure that students learn what we say we teach them was dead serious.

We claim that a liberal arts education teaches students how to integrate disparate ideas from a wide range of disciplines and contexts to solve complex 21st century problems.  At the same time, however, the experiences we require are specific to individual disciplines or topics while the truly integrative experiences remain optional add-ons . . . if they exist at all outside of the major.

So the question I’d ask you to think about is this:  How do we know that every student participates in a rigorously designed activity that explicitly develops the ability to integrate knowledge from multiple fields of study to solve substantive, complex problems? And how could we design a college experience where we could demonstrate that every student participated in such an activity?

Make it a good day.  And have a great fall break.


Sometimes assessing might be the wrong thing to do

Because of the breakneck pace of our work lives, we tend to look for pre-determined processes to address problems instead of considering whether or not there is another approach that might increase the chances of a successful long-term solution.  This makes sense, since pre-determined processes often feel like they help to solve complicated problems by giving us a vetted action plan.  But if we begin defaulting to this option too easily, we can sometimes create more work for ourselves just because we absentmindedly opted for “doing it the way we’re supposed to do it.”  So I thought it might be worthwhile to share an observation about our efforts to improve our educational effectiveness that could help us be more efficient in the process.

We have found tremendous value in gathering evidence to inform our decisions instead of relying on anecdotes, intuition, or speculation.  Moreover, the success of our own experiences seems to have fostered a truly positive sea-change, both in the frequency of requests for data that might inform an upcoming discussion or decision and in the desire to ask new questions that might help us understand more deeply the nature of our educational endeavors.  So why would I suggest that sometimes “assessing might be the wrong thing to do”?

First, let’s revisit two different conceptions of “assessment.”  One perceives “assessment” as primarily about measuring: an act that happens over a finite period of time and produces a finding, at which point the act of measuring ends.  Another conception considers assessment a process composed of various stages: asking a question, gathering data, designing an intervention, and evaluating the effectiveness of that intervention.  Imagine the difference between the two as the difference between a dot (a point in time) and a single loop within a coil (a perpetually evolving process).  So in my mind, “measurement” is a singular act that might involve numbers or theoretical frameworks.  “Assessment” is the miniature process that includes asking a question, engaging in measurement of some kind, and evaluating the effectiveness of a given intervention.  “Continuous improvement” is an organizational value that results in the perpetual application of assessment.  The focus of this post is to suggest that we might help ourselves by expanding the potential points at which we could apply a process of assessment.

Too often, after discovering the possibility that student learning resulting from a given experience might not be what we had hoped, we decide that we should measure the student learning in question.  I think we expect to generate a more robust set of data that confirms, or at least complicates, the information we think we already know. Usually, after several months of gathering data (and if all goes well with that process), our hunch turns out to be right.

I’d like to suggest a step prior to measuring student learning that might get us on track to improvement more quickly.  Instead of applying another means of measurement to evaluate the resultant learning, we should start by applying what we know about effective educational design to assess whether or not the experience in question is actually designed to produce the intended learning.  Because if the experience is not designed and delivered effectively, then the likelihood of it falling short of expectations is pretty high.  And if there is one truth about educating that we already know, it’s that if we don’t teach our students something, they won’t learn it.

Assessing the design of a program or experience takes a lot less time than gathering learning outcome data.  And it will get you to the fun part of redesigning the program or experience in question much sooner.

So if you are examining a learning experience because you don’t think it’s working as it should, start by tearing apart its design.  If the design is problematic, then skip the measuring part . . . fix it, implement the changes, and then test the outcomes.

Make it a good day,




Planning, Doing, Being

Unless you’ve been holding your breath at the bottom of the slough for the past six months, you know that we are smack in the middle of developing a new strategic plan for Augustana College.  This weekend our Board of Trustees holds its annual fall meetings, during which President Bahls and Dean Lawrence will provide an update to the board, answer questions, address criticisms and concerns, and work with board members to refine the strategic directions that will be prioritized in the final plan.  If you haven’t done so already, I’d highly recommend that you take some time to look at the current state of this process here.

After living in the inner sanctum of this process for the last six months, I’ve been struck by how difficult it is to effectively link the abstract aspirations of vision, mission, and strategic direction with the concrete actions, specific tactics, and measurable moments that we think will prove whether or not we have accomplished our plans.  If we lean too hard to one side, we could end up with little more than strategery – a word I use in all seriousness here because it manages to capture what happens when vision gets disconnected from any actual means of demonstrating its achievement on the ground (click here to see the origins of this word – we are in your debt, Will Ferrell.)  And if we lean too far to the other side, we can fall into the trap of simply adding a host of new programs, policies, activities, and experiences under the flawed belief that busy is always better.  If we’re honest with ourselves, I suspect we’d have to admit that we’ve driven over both of these potholes in recent years as we’ve genuinely tried to make Augustana better – in the present and for the future.

In the face of these difficulties, I understand the temptation to give up on the whole exercise and throw the strategic planning baby out with the tactical bathwater.  But that would be – in a word – stupid.  A primary reason why higher education is in such trouble these days is that so many institutions believed they didn’t really have to plan ahead (or consider that anything might change over time), because they thought there would always be lots of students who would pay whatever the institution charged to sit at the feet of masters and learn whatever was taught.

Frankly, I really like a lot of what is going to be proposed and discussed this weekend. However, we are always faced with the challenge of following through.  How are we going to walk this thing out to its fullest completion, and will we really have chosen the right metrics to demonstrate the degree to which we have achieved the goal we set out to accomplish?

All of these thoughts were bouncing around in my head as I watched two TED Talks by Derek Sivers over the weekend.  Although both of them are only about three minutes long, they made me think a lot about how we might go from the laudable abstractions of mission, vision, and strategic directions to the simple, sustainable, and concrete evidence that will demonstrate to everyone whether we have reached the goals we set for ourselves.

The first TED Talk focuses on a key element of success for individuals who set goals for themselves.  The crux of his point is that those who talk too much about what they intend to accomplish can sometimes fool themselves into thinking that they have already accomplished it.  I’ve often heard a nearby college’s strategic plan described as, “Fake it ’til you make it.”  Yet there are a myriad of colleges and universities that became more selective simply by declaring themselves to be more selective.  In the end, the quality of the education they provided didn’t change a bit.  In terms of making our strategic plan something worth the kilobytes it’s saved on, we might be careful to talk more about the things we need to do or be today in order to achieve our long-term goals, and talk less about publicizing the institution we will become and the prestige we will acquire as if we were already well on our way to getting there.

The second TED Talk teases out a critical and oft-overlooked moment in the origins of a social movement.  Sivers shows a video of an impromptu dance party on a hillside.  The point he makes seems to be particularly applicable to our work once the strategic plan is finalized.  Essentially, he emphasizes the leadership effect of the first follower – the individual who finds something great and has the guts to jump up and join in.

I’m sure there are several other potentially important take-aways from these clips.  I wanted to share them with you in the hopes that something from them might help us move from planning to doing to being.

Make it a good day,



Making student work work

This is it.  The end of another year and my last post for a while.  Yes, I know.  I’ll miss you, too.

I guess I’m feeling a little sentimental because my two temporary partners in crime are graduating next weekend and going on to graduate school.  Cameron and Emma have been wonderfully helpful over the last two years, and Kimberly and I will miss them both!

Since we’ve been talking about the value of experiential learning opportunities over the last several years, I decided to ask Cameron and Emma if they’d like to write something short about their work experience at Augustana and its impact on their learning and development.  They jumped at the chance, so I’m gonna check out of here a little early and let them have the last word of the 12/13 academic year.

Cameron’s thoughts . . .

The ability to work on campus has become a crucial piece of my educational puzzle. Not only has it helped me support myself financially, but it has become a cornerstone of who I am. Through my work I discovered my career and future plans. The experience gained over the past two years allowed me to explore my interests in a way I could not have otherwise. Like many other positions on campus, my job allowed me to more closely align myself with a career.  The hands-on experience also gave me an edge over others as well as valuable resources I can always come back to for support.

Understandably, certain positions may not lend themselves to the same level of career-planning clarity, but even on the smallest level, working on campus offers a new community of people to go to and a way to feel more connected to the college. Without my position I would not have met nearly as many wonderful people who have helped me through the challenges I’ve experienced while at Augustana. So even jobs as simple as cashiering in the bookstore or assisting in food services are beneficial.  Student work positions can be more focused on being a learning experience instead of just a job.

Emma’s thoughts . . .

Having a student job here at Augustana has been beneficial to my educational and academic progression. Personally, this progression has stemmed from my ability to integrate the theoretical knowledge I have gained in classes into real-world studies and research. Instead of simply learning how to build a survey, I’ve been able to actually construct one. I did not just learn the theory behind calculating a logistic regression; I actually performed one. Student jobs should aim to teach students the possibilities, frustrations, and benefits that come from real-world work or research in their field. Because I have been able to use the knowledge gained from my courses, I am much more confident in my ability to perform research studies in graduate school and in my career field. Student positions should not be a series of tasks to provide students with a paycheck. Instead, they should encourage and push students toward tackling projects that have implications for either the practical or academic world.

In accordance with integrating what is learned in the classroom into the workplace, student workers should be encouraged to bring their personal and unique knowledge and experiences into their work. This chance to share my perspective as an Augustana student was very valuable to my identity and confidence as an academic and a researcher. Student workers should be given the opportunity to share their opinions, experiences, and knowledge and be able to see these unique contributions bring value to the discussions and work we see happening around us every day. Learning to vocalize our opinions, findings, and observations is essential in preparing undergraduate students for the next stage of their career – whether that is graduate school or a job.

While my involvement as a student worker in Institutional Research has increased my skills and knowledge in many areas of statistics, research, writing, etc., it is these experiences of academic integration that stand out as the most beneficial to my growth as a student and a researcher. In the future, more student positions should implement this hands-on, practical application approach. Integrating knowledge from the classroom to the real world is an essential part of the learning process and student growth.

There’s no question that I got pretty lucky in hiring both of these students.  They’ve jumped into the deep and murky water of college impact research and survived to tell the tale.  Moreover, they’ve made contributions that genuinely made Augustana a better place for future students.

So congratulations to Cameron and Emma.  And congrats to all of our graduates.

One piece of advice – and this goes to everyone who is expected to walk up onto the stage next Sunday.  Don’t trip!  It will be caught on camera by someone and end up on YouTube!

Make it a good summer,



Compete with MOOCs?! Why not co-opt them instead?

Since I won’t write another blog post until the beginning of spring term, I thought I’d write something a little different.  Instead of a traditional data-filled post, I am going to weigh in with a suggestion – an opinion that is merely my own, not to be confused with some broader administrative position.  I’ve been mulling this one over since the explosion of Massive Open Online Courses (MOOCs) last year, but it really came to a boil last week when I read about Scott Young and his MIT Challenge.

At first glance, Scott Young’s MIT Challenge smells like the arrogant prank of an affluent Silicon Valley prodigy.  A recent university graduate who fancies himself a blogger, writer, and “holistic learner” decides to see if he can complete the entire MIT curriculum for a computer science major in a year without enrolling in any MIT classes.  Instead, he plans to download all course materials – including lectures, homework assignments, and final exams – from MIT’s open courseware site and MIT’s edX.  He’ll only spend money on textbooks and internet access, which he estimates will cost about $2000 over the course of the entire curriculum (a paltry sum compared to the cost of attending MIT for one year – $57,010 in 2012/13).

Well, he did it (that little @$#&!).  From September 2011 to September 2012, Mr. Young completed and passed all of the course work expected of MIT students to earn a major in computer science.  And just in case you think it a braggart’s hoax, he posted all of his course work, exams, and projects to verify that he actually pulled it off.  Essentially, if he had been a paying MIT student, he would now be considered one of their alums.  He might not have graduated cum laude, but you know what they call the person who graduates last in his class from Harvard Medical School (for those of you who haven’t heard the joke, the answer is “doctor”).

My point isn’t to celebrate the accomplishments of a brash, albeit intriguing, young man from Manitoba (wouldn’t you know it, this guy turns out to be Canadian!).  In the context of the academic tendencies we all too often see in students, his feat suggests more that he is an outlier among young adults than that a tsunami of self-directed learners is headed our way.

Rather, the simple fact that the full curriculum of a computer science degree from MIT is already freely available online should blow up any remaining notion that we, or any other small liberal arts college, can continue to act as if we are the lone gatekeepers of postsecondary content knowledge.  The ubiquitous availability of this kind of content knowledge delivered freely in educationally viable ways makes many a small college’s course catalogue seem like a quaint relic of a nostalgic past.  Moreover, if any major we offer is merely, or even mostly, an accumulation of content-heavy survey courses and in-depth seminars, we make ourselves virtually indistinguishable from an exponentially expanding range of educational options – except for our exorbitant cost.  And though we might stubbornly argue that our classes are smaller, our faculty more caring, or the expectations more demanding (all of which may indeed be so!), if the education we offer appears to prospective students as if it differs little from far less expensive educational content providers (e.g., general education is designed to provide content introductions across a range of disciplines; majors are organized around time periods, major theoretical movements, or subfields; students earn majors or minors in content-heavy areas), we increase the likelihood that future students will choose the less expensive option – even as they may whole-heartedly agree that we are marginally better.  And if those less expensive providers happen to be prestigious institutions like MIT, we are definitely in trouble.  For even if there is a sucker born every minute, I doubt there will be many who are willing to borrow gargantuan sums of money to pay for the same content knowledge that they can acquire for 1/100th of the cost – especially when they can supplement it on their own as needed.

Admittedly, I am trying to be provocative.  But please note that I haven’t equated “content knowledge” with “an education.”  Because in the end, the bulk of what Mr. Young acquired was content knowledge.  He’d already earned an undergraduate degree in a traditional setting, and by all indications, seems to have benefited extensively from that experience.  At Augustana, our educational mission has always been about much more than content knowledge.  This reality is clearly articulated in the composition of our new student learning outcomes.  We have recognized that content knowledge is a necessary but by no means sufficient condition of a meaningful education.  With this perspective, I’d like to suggest that we explicitly cast ourselves in this light: as guides that help students evaluate, process, and ultimately use that knowledge.  This doesn’t mean that we devalue content knowledge.  Rather, it means that we deliberately position content as a means to a greater end, more explicitly designing every aspect of our enterprise to achieve it.  Incidentally, this also gives us a way to talk about the educational value of our co-curricular experiences that directly ties them to our educational outcomes and makes them less susceptible to accusations of edu-tainment, extravagance, or fluff.

To date, the vast majority of successful MOOCs and online programs focus on traditional content knowledge delivery or skill development specific to a given profession.  The research on the educational effectiveness of online courses suggests that while online delivery can be at least as effective as face-to-face courses in helping students develop and retain content knowledge and lower-order thinking skills, face-to-face courses tend to be more effective in developing higher-order thinking skills.  So if our primary focus is on showing students how to use the knowledge they have acquired to achieve a deeper educational goal rather than merely delivering said content to them, then . . . .

What if, instead of fearing the “threat” of MOOCs and online learning, we chose to see them as a wonderful cost- and time-saving opportunity?  What if we were to co-opt the power and efficiency of MOOCs and other online content delivery mechanisms to allow us to focus more of our time and face-to-face resources on showing students how to use that knowledge?  I don’t begin to claim to have a fully fleshed-out model of what all of this would look like (in part because I don’t think there is a single model of how an institution might pull this off), but it seems to me that if we choose to see the explosion of online learning possibilities as a threat, we drastically shorten our list of plausible responses (i.e., ignore them and hope they go away or try to compete without a glimmer of the resources necessary to do so).  On the other hand, if we co-opt the possibilities of online learning and find ways to fit them into our current educational mission, our options are as broad as the possibilities are endless.  I guess I’d rather explore an expanding horizon.  Enjoy your break.

Make it a good day,






Big Data, Intuition, and the Potential of Improvisation

Welcome back to the second half of winter term!  As nice as it is to walk across campus in the quiet calm of a fresh new year (ignoring the giant pounding on top of the library for the moment), it’s a comfort to see faculty and students bustling between buildings again and feel the energy of the college reignited by everyone’s return.

Over the last several weeks, I’ve been trying to read the various higher ed opinionators’ perspectives on MOOCs (Massive Open Online Courses) and the implications they foresee for colleges like Augustana.  Based on what I’ve read so far, we are either going to 1) thrive without having to change a thing, 2) shrivel up and die a horrible death sometime before the end of the decade, or 3) see lots of changes that will balance each other out and leave us somewhere in the middle.  In other words – no one has a clue.  But this hasn’t stopped many a self-appointed Nostradamus (Nostradami?) from rattling off a slew of statistics to make their case: the increasing number of students taking online courses, the number of schools offering online courses, the hundreds of thousands of people who sign up for MOOCs, the shifting demographics of college students, blah blah blah.  After all, as these prognosticators imply, historical trends predict the future.

Except when they don’t.  A recent NYT article, Sure, Big Data Is Great, But So Is Intuition, highlights the fundamental weakness in thinking that a massive collection of data gathered from individual behaviors (web-browsing, GPS tracking, social network messaging, etc.) inevitably holds the key to a brighter future.  As the article puts it, “The problem is that a math model, like a metaphor, is a simplification. This type of modeling came out of the sciences, where the behavior of particles in a fluid, for example, is predictable according to the laws of physics.”  The article goes on to point out the implications of abiding by this false presumption, such as the catastrophic failure of financial modeling to predict the world-wide economic collapse of 2008.  I particularly like the way that the article summarizes this cautionary message.  “Listening to the data is important, they [experts interviewed for the article] say, but so is experience and intuition.  After all, what is intuition at its best but large amounts of data of all kinds filtered through a human brain rather than a math model?”

This is where experience and intuition intersect with my particular interest in improvisation.  When done well, improvisation is not merely random action.  Instead, good improvisation occurs when the timely distillation of experience and observation coalesces through intuition to emerge in an action that both resolves a dilemma and introduces opportunity.  Improvisation is the way that we discover a new twist in our teaching that magically “just seemed to work.”  Those moments aren’t about luck; they materialize when experience meets intuition meets trust meets action.  Only after reflecting on what happened are we able to figure out the “why” and the “how” in order to replicate the innovation upon which we have stumbled.  Meanwhile, back in the moment, it feels like we are just “in a zone.”

Of course, improvisation is no more a guarantee of perfection than predictive modeling.  That is because the belief that one can somehow achieve perfection in educating is just as flawed as the fallacy of predictive modeling.  Statisticians are taught to precede findings with the phrase “all else remaining constant . . . ”  But in education, that has always been the supremely ironic problem.  Nothing remains constant.  So situating evidence of a statistically significant finding within the real and gnarly world of teaching and learning requires sophisticated thinking borne of extensive experience and keen intuition.

Effective improvising emerges when we are open to its possibilities – individually and collectively.  It’s just a matter of letting our experience morph into intuition in a context of trust that spurs us to act.  Just because big data isn’t the solution that some claim it to be doesn’t mean that we batten down the hatches, pretend that MOOCs and every other innovation in educational technology don’t exist, and keep doing what we’ve always done (only better, faster, smarter, more, more, more . . . ).  Effective improvising is always preceded by intuition that is informed by some sort of data analysis.  When asked why they did what they did, successful improvisers can often explain in detail the thought processes that spurred them to take a particular action or utter a particular line.  In the same way, we know a lot about how our students learn and what seems to work well in extending their learning.  Given that information, I believe that we have all of the experience and knowledge to improvise successfully.  We just need to flip the switch (“Lights, Action, Improv!”).

Early in the spring term, I’ll host a Friday Conversation where I’ll teach some ways to apply the principles of improvisation to our work.  Some of you may remember that I did a similar session last year – although you may have repressed that memory if you were asked to volunteer for one of the improv sketches.

In the meantime, I hope you’ll open yourself up to the potential of improvisation.  Enjoy your return to the daily routine.  It’s good to have you back.

Make it a good day,




Grades and Assessing Student Learning (can’t we all just get along?)

During a recent conversation about the value of comprehensive student learning assessment, one faculty member asked, “Why should we invest time, money, and effort to do something that we are essentially already doing every time we assign grades to student work?”  Most educational assessment zealots would respond by launching into a long explanation of the differences between tracking content acquisition and assessing skill development, the challenges of comparing general skill development across disciplines, the importance of demonstrating gains on student learning outcomes across an entire institution, blah blah blah (since these are my peeps, I can call it that).  But from the perspective of an exhausted professor who has been furiously slogging through a pile of underwhelming final papers, I think the concern over a substantial increase in faculty workload is more than reasonable.  Why would an institution or anyone within it choose to be redundant?

If a college wants to know whether its students are learning a particular set of knowledge, skills, and dispositions, it makes good sense to track the degree to which that is happening.  But we make a grave mistake when we require additional processes and responsibilities from those “in the trenches” without thinking carefully about the potential for diminishing returns in the face of added workload (especially if that work appears to be frivolous or redundant).  So it would seem to me that any conversation about assessing student learning should emphasize the importance of efficiency so that faculty and staff can continue to fulfill all the other roles expected of them.

This brings me back to what I perceive to be an odd disconnect between grading and outcomes assessment on most campuses.  It seems to me that if grading and assessment are both intent on measuring learning, then there ought to be a way to bring them closer together.  Moreover, if we want assessment to be truly sustainable (i.e., not kill our faculty), then we need to find ways to link, if not unify, these two practices.

What might this look like?  For starters, it would require conceptualizing content learned in a course as the delivery mechanism for skill and disposition development.  Traditionally, I think we’ve envisioned this relationship in reverse order – that skills and dispositions are merely the means for demonstrating content acquisition – with content acquisition becoming the primary focus of grading.  In this context, skills and dispositions become a sort of vaguely mysterious red-headed stepchild (with apologies to stepchildren, redheads, and the vaguely mysterious).  More importantly, if we are now focusing on skills and dispositions, this traditional context necessitates an additional process of assessing student learning.

However, if we reconceptualize our approach so that content becomes the raw material with which we develop skills and dispositions, we could directly apply our grading practices in the same way.  One would assign a proportion of the overall grade to the necessary content acquisition, and the rest of the overall grade (apportioned as the course might require) to the development of the various skills and dispositions intended for that course.  In addition to articulating which skills and dispositions each course would develop and the progress thresholds expected of students in each course, this means that we would have to be much more explicit about the degree to which a given course is intended to foster improvement in students (such as a freshman level writing course) as opposed to a course designed for students to demonstrate competence (such as a senior level capstone in accounting procedures).  At an even more granular level, instructors might define individual assignments within a given course to be graded for improvement earlier in the term with other assignments graded for competence later in the term.
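To make the apportionment concrete, here is a minimal sketch in Python of how a single course grade might blend content acquisition with skill and disposition development.  The category names, weights, and scores are all hypothetical illustrations, not drawn from any actual course:

```python
# Hypothetical apportionment: 40% content acquisition, 60% skills/dispositions.
# Every category name, weight, and score below is illustrative only.

def course_grade(scores, weights):
    """Weighted average of category scores (each on a 0-100 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[cat] * w for cat, w in weights.items())

weights = {"content": 0.40, "written_communication": 0.35, "complex_reasoning": 0.25}
scores = {"content": 88, "written_communication": 74, "complex_reasoning": 81}

print(round(course_grade(scores, weights), 2))  # 81.35
```

The point is simply that the familiar gradebook arithmetic carries over unchanged once the skill and disposition categories are made explicit alongside content.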

I recognize that this proposal flies in the face of some deeply rooted beliefs about academic freedom – namely, that faculty, as experts in their field, should be allowed to teach and grade as they see fit.  When courses were about attaining a specific slice of content, every course was an island.  17th century British literature?  Check.  The sociology of crime?  Check.  Cell biology?  Check.  In this environment, it’s entirely plausible that faculty grading practices would be as different as the topography of each island.  But if courses are expected to function collectively to develop a set of skills and/or dispositions (e.g., complex reasoning, oral and written communication, intercultural competence), then what happens in each course is irrevocably tied to what happens in previous and subsequent courses.  And it follows that the “what” and “how” of grading would be a critical element in creating a smooth transition for students between courses.

In the end it seems to me that we already have all of the mechanisms in place to embed robust learning outcomes assessment into our work without adding any new processes or responsibilities to our workload.  However, to make this happen we need to 1) embrace all of the implications of focusing on the development of skills and dispositions while shifting content acquisition from an end to a means to a greater end, and 2) accept that the educational endeavor in which we are all engaged is a fundamentally collaborative one and that our chances of success are best when we focus our individual expertise toward our collective mission of learning.

Make it a good day,



Finding the ideal balance between faculty and administrators

During the term break, the Chronicle of Higher Education reviewed a research paper about the impact of the administrator-to-faculty ratio on institutional costs.  The researchers were seeking evidence to test the long-standing hypothesis that rising costs in higher education can be attributed to an ever-growing administrator class.  The paper’s authors found that the ideal ratio of faculty to administrators at large research institutions was 3:1 and that institutions with a lower ratio (fewer faculty per administrator) tend to be more expensive.

Even though we are a small liberal arts college and not the type of institution on which this study focused, I wondered what our ratio might look like.  I am genuinely curious about the relationship between in-class educators (faculty) and out-of-class educators (student affairs staff) because we often emphasize our belief in the holistic educational value of a residential college experience.  In addition, since some have expressed concern about a perceived increase in administrative positions, I thought I’d run our numbers and see what turns up.

Last year, Augustana employed 184 full-time, tenured or tenure-track faculty and 65 administrators.  Thus, the ratio of faculty to administrators was 2.8 to 1.  If we instead compare faculty FTE and administrator FTE (which means we count each part-time employee as one-third of a full-time employee and add them to the equation), the ratio becomes 3.35 to 1.  By comparison, in 2003 (the earliest year in which these data were reported to IPEDS), our full-time, tenured or tenure-track faculty (145) to administrator (38) ratio was 3.82 to 1.  When using FTE numbers, that ratio slips to 4.29 to 1.
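As a quick sanity check on the arithmetic, here is a short Python sketch.  Note that the part-time headcounts behind the 3.35 and 4.29 FTE figures aren’t reported above, so only the full-time ratios are reproduced; the part-time parameters simply encode the one-third FTE convention:

```python
def fte_ratio(ft_faculty, ft_admin, pt_faculty=0, pt_admin=0):
    """Faculty-to-administrator ratio, counting each part-timer as 1/3 FTE."""
    return (ft_faculty + pt_faculty / 3) / (ft_admin + pt_admin / 3)

print(round(fte_ratio(184, 65), 2))  # last year, full-time only: 2.83
print(round(fte_ratio(145, 38), 2))  # 2003, full-time only: 3.82
```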

What should we make of this?  On its face, it appears that we’ve suffered from the same disease that has infected many larger institutions.  Over about ten years, the balance of faculty to administrators has shifted even though we have increased the size of the faculty considerably.  But if you consider these changes in the context of our students (something that seems to me to be a rather important consideration), the results paint a different picture.  For even though our ratio of faculty to administrators might have shifted, our ratios of students to faculty and students to administrators have moved in similar directions over the same period, with the student/faculty ratio going from about 14:1 to just over 11:1 and our student/administrator ratio going from about 51:1 to close to 39:1.  Proportionally, both ratios drop by about 20%.

For me, these numbers inspire two questions that I think are worth considering.  First, although the absolute number of administrators includes a wide variety of campus offices, a substantial proportion of “administrators” work in student affairs.  And there seems to be some disparity between the nature of the educational relationship that we find acceptable between students and in-class educators (faculty) and between students and out-of-class educators (those administrators who work in student affairs).  There’s a lot to sort out here (and I certainly don’t have it all pegged), but this disparity doesn’t seem to match up with the extent to which we believe that important student learning and development happens outside of the classroom.  Now I am not arguing that the student/administrator ratio should approach 11:1.  Admittedly, I have no idea what the ideal student/faculty ratio or student/administrator ratio should be (although, like a lot of things, distilling that relationship down to one ratio is probably our first big mistake).  Nonetheless, I suspect we would all benefit from a deeper understanding of the way in which our student affairs professionals impact our students’ development.  As someone who spends most of my time in the world of academic affairs, I wonder whether my own efforts to support this aspect of the student learning experience have matched the degree to which we say it is important.  Although I talk the talk, I’m not sure I’ve fully walked the walk.

Second, examining the optimal ratio between faculty and administrators doesn’t seem to have much to do with student learning.  I fear that posing this ratio without a sense of the way in which we collaboratively contribute to student learning just breathes life into an administrator vs. faculty meme that tends to pit one against the other.  If we start with a belief that there is an “other side,” and we presume the other side to be the opposition before we even begin a conversation, we are dead in the water.

Our students need us to conceptualize their education in the same way that they experience it – as one comprehensive endeavor.  We – faculty, administrators, admissions staff, departmental secretaries, food service staff, grounds crew, Board of Trustees – are all in this together.  And from my chair, I can’t believe how lucky I am to be one of your teammates.

Make it a good day,




Talking, albeit eloquently, out of both sides of our mouths

Many of my insecurities emerge from a very basic fear of being wrong.  Worse still, my brain takes it one step further, playing this fear out through the infamous squirm-inducing dream in which I am giving a public presentation somewhere only to discover in the middle of it that my pants lie in a heap around my ankles.  But in my dream, instead of acknowledging my “problem,” buckling up, and soldiering on, I inexplicably decide that if I just pretend not to notice anything unusual, then no one in the audience will notice either.  Let’s just say that this approach doesn’t work out so well.

It’s pretty hard to miss how ridiculous this level of cognitive contortionism sounds.  Yet this kind of foolishness isn’t the exclusive province of socially awkward bloggers like me.  In the world of higher education we sometimes hold obviously contradictory positions in plain view, trumpeting head-scratching non sequiturs with a straight face.  Although this exercise might convince many, including ourselves, that we are holding ourselves accountable to our many stakeholders, we actually make it harder to meaningfully improve because we don’t test the underlying assumptions that set the stage for these moments of cognitive dissonance.  So I’d like to wrestle with one of these “conundrums” this week: the ubiquitous practice of benchmarking in the context of a collective uncertainty about the quality of higher education – knowing full well that I may be the one who ends up pinned to the mat crying “uncle.”

It’s hard to find a self-respecting college these days that hasn’t already embedded the phrase “peer and aspirant groups” deep into its lexicon of administrator-speak.  This phrase refers to the practice of benchmarking – a process to support internal assessment and strategic planning that was transplanted from the world of business several decades ago.  Benchmarking uses two groups of other institutions to gauge one’s own success and growth.  Institutions start by choosing a set of metrics to identify two groups of colleges: a set of schools that are largely similar at present (peers) and a set of schools that represent a higher tier toward which they might strive (aspirants).  The institution then uses these two groups as yardsticks to assess its efforts toward:

  1. improved efficiency (i.e., outperforming similarly situated peers on a given metric), or
  2. increased effectiveness (i.e., equaling or surpassing a marker already attained by colleges at the higher tier to which the institution aspires).
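The two comparisons above amount to a simple computation, which a brief Python sketch can make concrete.  Every number here, and the retention-rate framing itself, is invented for illustration – the post names no specific institutions or metrics:

```python
import statistics

def benchmark(own, peers, aspirants):
    """Compare one metric (e.g., a hypothetical retention rate) against
    the medians of a peer group and an aspirant group."""
    peer_med = statistics.median(peers)
    asp_med = statistics.median(aspirants)
    return {
        "beats_peers": own > peer_med,        # the "improved efficiency" test
        "reaches_aspirants": own >= asp_med,  # the "increased effectiveness" test
    }

# Made-up retention rates for one college, three peers, and three aspirants.
print(benchmark(own=0.86, peers=[0.82, 0.84, 0.85], aspirants=[0.90, 0.91, 0.93]))
# {'beats_peers': True, 'reaches_aspirants': False}
```

Reducing the exercise to two booleans like this is exactly what makes it so easy to game: the answer depends entirely on which schools land in each list.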

Sometimes this practice is useful, especially in setting goals for statistics like retention rates, graduation rates, or a variety of operational measures.  However, sometimes this exercise can unintentionally devolve into a practice of gaming, in which comparisons with the identified peer group too easily shine a favorable light on the home institution, while comparisons with the aspirant group are too often interpreted as evidence of how much the institution has accomplished in spite of its limitations.  Nonetheless, this practice seems to be largely accepted as a legitimate way of quantifying quality.  So in the end, our “go-to” way of demonstrating value and a commitment to quality is inescapably tethered to how we compare ourselves to other colleges.

At first, this seems like an entirely reasonable way to assess quality.  But it depends on one fundamental assumption: the idea that, on average, colleges are pretty good at what they do.  Unfortunately, the last decade of research on the relative effectiveness of higher education suggests that, at the very least, the educational quality of colleges and universities is uneven, or, at worst, that the entire endeavor is a fantastically profitable house of cards.

No matter which position one takes, it seems extraordinarily difficult to simultaneously assert that the quality of any given institution is somewhere between unknown and dicey, while at the same time using a group of institutions – most of which we know very little about beyond some cursory, outer layer statistics – as a basis for determining one’s own value.  It’s sort of like the sixth grade boy who justifies his messy room by suggesting that it’s cleaner than all of his friends’ rooms.

My point is not to suggest that benchmarking is never useful or that higher education is not in need of improvement.  Rather, I think that we have to be careful about how we choose to measure our success.  I think we need to be much more willing to step forward and spell out what we think success should look like, regardless of what other institutions are doing or not doing.  In my mind, this means starting by selecting a set of intended outcomes, defining clearly what success will look like, and then building the rest of what we do in a purposeful way around achieving those outcomes.  Not only does this give us a clear direction that can be described simply to people within and beyond our own colleges, but it also gives us all the pieces necessary to build a vibrant feedback loop to assess and improve our efforts and our progress.

I fully understand the allure of “best practices” – the idea that we can do anything well simply by figuring out who has already done it well and then copying what they do.  But I’ve often seen the best of best practices quickly turn into worst practices when plucked out of one setting and dropped wholesale into a different institutional culture.  Maybe we’d be better off paying less attention to what everyone else does and concentrating instead on designing a learning environment that starts with the end in mind and uses all that we already know about college student development, effective teaching, and how people learn.  It might look a lot different than the way that we do it now.  Or it might not look all that different, despite being substantially more effective.  I don’t know for sure.  But it’s got to be more effective than talking, albeit eloquently, out of both sides of our mouths.

Make it a good day,