Grades and Assessing Student Learning (can’t we all just get along?)

During a recent conversation about the value of comprehensive student learning assessment, one faculty member asked, “Why should we invest time, money, and effort to do something that we are essentially already doing every time we assign grades to student work?”  Most educational assessment zealots would respond by launching into a long explanation of the differences between tracking content acquisition and assessing skill development, the challenges of comparing general skill development across disciplines,  the importance of demonstrating gains on student learning outcomes across an entire institution, blah blah blah (since these are my peeps, I can call it that).  But from the perspective of an exhausted professor who has been furiously slogging through a pile of underwhelming final papers, I think the concern over a substantial increase in faculty workload is more than reasonable.  Why would an institution or anyone within it choose to be redundant?

If a college wants to know whether its students are learning a particular set of knowledge, skills, and dispositions, it makes good sense to track the degree to which that is happening.  But we make a grave mistake when we require additional processes and responsibilities from those “in the trenches” without thinking carefully about the potential for diminishing returns in the face of added workload (especially if that work appears to be frivolous or redundant).  So it would seem to me that any conversation about assessing student learning should emphasize the importance of efficiency so that faculty and staff can continue to fulfill all the other roles expected of them.

This brings me back to what I perceive to be an odd disconnect between grading and outcomes assessment on most campuses.  It seems to me that if grading and assessment are both intent on measuring learning, then there ought to be a way to bring them closer together.  Moreover, if we want assessment to be truly sustainable (i.e. not kill our faculty), then we need to find ways to link, if not unify, these two practices.

What might this look like?  For starters, it would require conceptualizing content learned in a course as the delivery mechanism for skill and disposition development.  Traditionally, I think we’ve envisioned this relationship in reverse order – that skills and dispositions are merely the means for demonstrating content acquisition – with content acquisition becoming the primary focus of grading.  In this context, skills and dispositions become a sort of vaguely mysterious red-headed stepchild (with apologies to stepchildren, redheads, and the vaguely mysterious).  More importantly, if we are now focusing on skills and dispositions, this traditional context necessitates an additional process of assessing student learning.

However, if we reconceptualize our approach so that content becomes the raw material with which we develop skills and dispositions, we could directly apply our grading practices in the same way.  One would assign a proportion of the overall grade to the necessary content acquisition, and the rest of the overall grade (apportioned as the course might require) to the development of the various skills and dispositions intended for that course.  In addition to articulating which skills and dispositions each course would develop and the progress thresholds expected of students in each course, this means that we would have to be much more explicit about the degree to which a given course is intended to foster improvement in students (such as a freshman level writing course) as opposed to a course designed for students to demonstrate competence (such as a senior level capstone in accounting procedures).  At an even more granular level, instructors might define individual assignments within a given course to be graded for improvement earlier in the term with other assignments graded for competence later in the term.
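
To make the apportionment arithmetic concrete, here is a minimal sketch of how such a grade might be computed.  The component names, weights, and scores are all hypothetical – every course would set its own – but the structure is the point: content acquisition earns an explicit share of the grade, and each intended skill or disposition earns its own share.

```python
# Hypothetical illustration of apportioning a course grade between content
# acquisition and the skills/dispositions the course is intended to develop.
# The weights and scores are invented; a real course would set its own.

COURSE_WEIGHTS = {
    "content_acquisition": 0.40,
    "written_communication": 0.35,  # might be graded for improvement early on
    "complex_reasoning": 0.25,      # and for competence later in the term
}

def course_grade(scores: dict[str, float]) -> float:
    """Combine 0-100 component scores into a single weighted course grade."""
    assert abs(sum(COURSE_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(weight * scores[part] for part, weight in COURSE_WEIGHTS.items())

print(round(course_grade({
    "content_acquisition": 88,
    "written_communication": 74,
    "complex_reasoning": 81,
}), 2))  # 81.35
```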

I recognize that this proposal flies in the face of some deeply rooted beliefs about academic freedom that faculty, as experts in their field, should be allowed to teach and grade as they see fit. When courses were about attaining a specific slice of content, every course was an island.  17th century British literature?  Check.  The sociology of crime?  Check.  Cell biology?  Check.  In this environment, it’s entirely plausible that faculty grading practices would be as different as the topography of each island.  But if courses are expected to function collectively to develop a set of skills and/or dispositions (e.g., complex reasoning, oral and written communication, intercultural competence), then what happens in each course is irrevocably tied to what happens in previous and subsequent courses.  And it follows that the “what” and “how” of grading would be a critical element in creating a smooth transition for students between courses.

In the end it seems to me that we already have all of the mechanisms in place to embed robust learning outcomes assessment into our work without adding any new processes or responsibilities to our workload.  However, to make this happen we need to 1) embrace all of the implications of focusing on the development of skills and dispositions while shifting content acquisition from an end to a means to a greater end, and 2) accept that the educational endeavor in which we are all engaged is a fundamentally collaborative one and that our chances of success are best when we focus our individual expertise toward our collective mission of learning.

Make it a good day,

Mark

Finding the ideal balance between faculty and administrators

During the term break, the Chronicle of Higher Education reviewed a research paper about the impact of the faculty-administrator ratio on institutional costs.  The researchers were seeking evidence to test the long-standing hypothesis that the rising costs in higher education can be attributed to an ever-growing administrator class.  The paper’s authors found that the ideal ratio of faculty to administrators at large research institutions was 3:1 and that institutions with a lower ratio (fewer faculty per administrator) tend to be more expensive.

Even though we are a small liberal arts college and not the type of institution on which this study focused, I wondered what our ratio might look like.  I am genuinely curious about the relationship between in-class educators (faculty) and out-of-class educators (student affairs staff) because we often emphasize our belief in the holistic educational value of a residential college experience.  In addition, since some have expressed concern about a perceived increase in administrative positions, I thought I’d run our numbers and see what turns up.

Last year, Augustana employed 184 full-time, tenured or tenure-track faculty and 65 administrators.  Thus, the ratio of faculty to administrators was 2.8 to 1.  If we instead compare faculty FTE to administrator FTE (which means counting each part-time person as one-third of a full-time employee and adding them to the equation), the ratio becomes 3.35 to 1.  By comparison, in 2003 (the earliest year in which this data was reported to IPEDS), our full-time, tenured or tenure-track faculty (145) to administrator (38) ratio was 3.82 to 1.  When using FTE numbers, that ratio rises to 4.29 to 1.

What should we make of this?  On its face, it appears that we’ve suffered from the same disease that has infected many larger institutions.  Over about ten years, the balance between faculty and administrators has shifted even though we have increased the size of the faculty considerably.  But if you consider these changes in the context of our students (something that seems to me to be a rather important consideration), the results seem to paint a different picture.  For even though our ratio of faculty to administrators might have shifted, our ratios of students to faculty and students to administrators have moved in similar directions over the same period, with the student/faculty ratio going from about 14:1 to just over 11:1 and our student/administrator ratio going from about 51:1 to close to 39:1.  Proportionally, both ratios drop by about 20%.
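
For anyone who wants to check my math, here is the arithmetic in a small script.  The headcounts and student ratios come straight from the paragraphs above; since this post doesn’t list the underlying part-time headcounts, the FTE helper is only there to illustrate the one-third convention, not to reproduce the 3.35:1 and 4.29:1 figures.

```python
# Recomputing the ratios reported above.

def fte(full_time: int, part_time: int) -> float:
    """Count each part-time employee as one-third of a full-time employee."""
    return full_time + part_time / 3

print(round(184 / 65, 2))   # 2.83 -> last year's "2.8 to 1" faculty/administrator ratio
print(round(145 / 38, 2))   # 3.82 -> the 2003 "3.82 to 1" ratio

# Proportional change in the two student ratios over the same period:
print(round((14 - 11) / 14 * 100))  # ~21% fewer students per faculty member
print(round((51 - 39) / 51 * 100))  # ~24% fewer students per administrator
```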

For me, these numbers inspire two questions that I think are worth considering.  First, although the absolute number of administrators includes a wide variety of campus offices, a substantial proportion of “administrators” work in student affairs.  And there seems to be some disparity between the nature of the educational relationship that we find acceptable between students and in-class educators (faculty) and between students and out-of-class educators (those administrators who work in student affairs).  There’s a lot to sort out here (and I certainly don’t have it all pegged), but this disparity doesn’t seem to match up with the extent to which we believe that important student learning and development happens outside of the classroom.  Now I am not arguing that the student/administrator ratio should approach 11:1.  Admittedly, I have no idea what the ideal student/faculty ratio or student/administrator ratio should be (although, like a lot of things, distilling that relationship down to one ratio is probably our first big mistake).  Nonetheless, I suspect we would all benefit from a deeper understanding of the way in which our student affairs professionals impact our students’ development.  As someone who spends most of my time in the world of academic affairs, I wonder whether my own efforts to support this aspect of the student learning experience have not matched the degree to which we believe it is important.  Although I talk the talk, I’m not sure I’ve fully walked the walk.

Second, examining the optimal ratio between faculty and administrators doesn’t seem to have much to do with student learning.  I fear that posing this ratio without a sense of the way in which we collaboratively contribute to student learning just breathes life into an administrator vs. faculty meme that tends to pit one against the other.  If we start with a belief that there is an “other side,” and we presume the other side to be the opposition before we even begin a conversation, we are dead in the water.

Our students need us to conceptualize their education in the same way that they experience it – as one comprehensive endeavor.  We – faculty, administrators, admissions staff, departmental secretaries, food service staff, grounds crew, Board of Trustees – are all in this together.  And from my chair, I can’t believe how lucky I am to be one of your teammates.

Make it a good day,

Mark

Talking, albeit eloquently, out of both sides of our mouths

Many of my insecurities emerge from a very basic fear of being wrong.  Worse still, my brain takes it one step further, playing this fear out through the infamous squirm-inducing dream in which I am giving a public presentation somewhere only to discover in the middle of it that my pants lie in a heap around my ankles.  But in my dream, instead of acknowledging my “problem,” buckling up, and soldiering on, I inexplicably decide that if I just pretend not to notice anything unusual, then no one in the audience will notice either.  Let’s just say that this approach doesn’t work out so well.

It’s pretty hard to miss how ridiculous this level of cognitive contortionism sounds.  Yet this kind of foolishness isn’t the exclusive province of socially awkward bloggers like me.  In the world of higher education we sometimes hold obviously contradictory positions in plain view, trumpeting head-scratching non sequiturs with a straight face.  Although this exercise might convince many, including ourselves, that we are holding ourselves accountable to our many stakeholders, we actually make it harder to meaningfully improve because we don’t test the underlying assumptions that set the stage for these moments of cognitive dissonance.  So I’d like to wrestle with one of these “conundrums” this week: the ubiquitous practice of benchmarking in the context of a collective uncertainty about the quality of higher education – admitting full well that I may be the one who ends up pinned to the mat crying “uncle.”

It’s hard to find a self-respecting college these days that hasn’t already embedded the phrase “peer and aspirant groups” deep into its lexicon of administrator-speak.  This phrase refers to the practice of benchmarking – a process to support internal assessment and strategic planning that was transplanted from the world of business several decades ago.  Benchmarking is a process of using two groups of other institutions to assess one’s own success and growth.  Institutions start by choosing a set of metrics to identify two groups of colleges: a set of schools that are largely similar at present (peers) and a set of schools that represent a higher tier of colleges for which they might strive (aspirants).  The institution then uses these two groups as yardsticks to assess its efforts toward:

  1. improved efficiency (i.e., outperforming similarly situated peers on a given metric), or
  2. increased effectiveness (i.e., equaling or surpassing a marker already attained by colleges at the higher tier to which the institution aspires).
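
In code, these two yardsticks reduce to a pair of simple comparisons.  Everything in this sketch – the institutions, the metric, the numbers – is invented purely to show the logic:

```python
# Invented example of the two benchmarking comparisons described above,
# using first-year retention rate as the metric.

from statistics import median

peers = {"College A": 0.84, "College B": 0.86, "College C": 0.83}
aspirants = {"College X": 0.91, "College Y": 0.93}
our_rate = 0.87

print(our_rate > median(peers.values()))    # True: outperforming similar peers
print(our_rate >= min(aspirants.values()))  # False: aspirant tier not yet reached
```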

Sometimes this practice is useful, especially in setting goals for statistics like retention rates, graduation rates, or a variety of operational measures.  However, sometimes this exercise can unintentionally devolve into a practice of gaming, in which comparisons with the identified peer group too easily shine a favorable light on the home institution, while comparisons with the aspirant group are too often interpreted as evidence of how much the institution has accomplished in spite of its limitations.  Nonetheless, this practice seems to be largely accepted as a legitimate way of quantifying quality.  So in the end, our “go-to” way of demonstrating value and a commitment to quality is inescapably tethered to how we compare ourselves to other colleges.

At first, this seems like an entirely reasonable way to assess quality.  But it depends on one fundamental assumption: the idea that, on average, colleges are pretty good at what they do.  Unfortunately, the last decade of research on the relative effectiveness of higher education suggests that, at the very least, the educational quality of colleges and universities is uneven, or at worst, that the entire endeavor is a fantastically profitable house of cards.

No matter which position one takes, it seems extraordinarily difficult to simultaneously assert that the quality of any given institution is somewhere between unknown and dicey, while at the same time using a group of institutions – most of which we know very little about beyond some cursory, outer layer statistics – as a basis for determining one’s own value.  It’s sort of like the sixth grade boy who justifies his messy room by suggesting that it’s cleaner than all of his friends’ rooms.

My point is not to suggest that benchmarking is never useful or that higher education is not in need of improvement.  Rather, I think that we have to be careful about how we choose to measure our success.  I think we need to be much more willing to step forward and spell out what we think success should look like, regardless of what other institutions are doing or not doing.  In my mind, this means starting by selecting a set of intended outcomes, defining clearly what success will look like, and then building the rest of what we do in a purposeful way around achieving those outcomes.  Not only does this give us a clear direction that can be simply described to people within and beyond our own colleges, it also gives us all the pieces necessary to build a vibrant feedback loop to assess and improve our efforts and our progress.

I fully understand the allure of “best practices” – the idea that we can do anything well simply by figuring out who has already done it well and then copying what they do.  But I’ve often seen the best of best practices quickly turn into worst practices when plucked out of one setting and dropped wholesale into a different institutional culture.  Maybe we’d be better off paying less attention to what everyone else does and concentrating instead on designing a learning environment that starts with the end in mind and uses all that we already know about college student development, effective teaching, and how people learn.  It might look a lot different than the way that we do it now.  Or it might not look all that different, despite being substantially more effective.  I don’t know for sure.  But it’s got to be more effective than talking, albeit eloquently, out of both sides of our mouths.

Make it a good day,

Mark

What’s in a name?

When I first floated the idea of a weekly column, everyone in the Dean’s office seemed to be on board.  But when I proposed calling it “Delicious Ambiguity,” I got more than a few funny looks.  Although these looks could have been a mere byproduct of the low-grade bewilderment that I normally inspire, let’s just say for the sake of argument that they were largely triggered by the apparent paradox of a column written by the measurement guy that seems to advocate winging it.  So let me tell you a little bit about the origins of the phrase “Delicious Ambiguity” and why I think it embodies the real purpose of Institutional Research and Assessment.

This particular phrase is part of a longer quote from Gilda Radner – a brilliant improvisational comedian and one of the early stars of Saturday Night Live.  The line goes like this:

“Life is about not knowing, having to change, taking the moment and making the best of it, without knowing what’s going to happen next.  Delicious Ambiguity.”

For those of you who chose a career in academia specifically to reduce ambiguity, this statement probably inspires a measure of discomfort.  And there is a part of me that admittedly finds some solace in the task of isolating statistically significant “truths.”  I suppose I could have named this column “Bland Certainty,”  but – in addition to single-handedly squelching reader interest – such a title would suggest that my only role at Augustana is to provide final answers – nuggets of fact that function like the period at the end of a sentence.

Radner’s view of life is even more intriguing because she wrote this sentence as her body succumbed to cancer.  For me, her words exemplify intentional – if not stubborn – optimism in the face of darkly discouraging odds.  I have seen this trait repeatedly demonstrated in many of you over the last several years as you have committed yourselves to helping a particular student even when that student seems entirely uninterested in learning.

Some have asserted that a college education is a black box; some good can happen, some good does happen – we just don’t know how it happens.  On the contrary, we actually know a lot about how student learning and development happens – it’s just that student learning doesn’t work like an assembly line.  Instead, student learning is like a budding organism that depends on the conduciveness of its environment – a condition that emerges through the interaction between the learner and the learning context.  And because both of these factors perpetually influence each other, we are most successful in our work to the degree that we know which educational ingredients to introduce, how to introduce them, and when to stir them into the mix.  The exact sequence of the student learning process is, by its very nature, ambiguous because it is unique to each individual learner.

In my mind, the act of educating is deeply satisfying precisely because of its unpredictability.  Knowing that we can make a profound difference in a young person’s life – a difference that will ripple forward and touch the lives of many more long after a student graduates – has driven many of us to extraordinary effort and sacrifice even as the ultimate outcome remains admittedly unknown.  What’s more, we look forward to that moment when our perseverance suddenly sparks a flicker of unexpected light that we know increases the likelihood – no matter how small – that this person will blossom into the life-long student we believe they can be.

The purpose of collecting educational data should be to propel us – the teacher and the student – through this unpredictability, to help us navigate the uncertainty that comes with a process that is so utterly dependent upon the perpetually reconstituted synergy between teacher and student.  The primary role of Institutional Research and Assessment is to help us figure out the very best ways to cultivate – and, in just the right ways, manipulate – this process.  The evidence of our success isn’t a result at the end of this process.  The evidence of our success is the process.  And if we pool our collective expertise and focus on cultivating the quality, depth, and inclusiveness of that process, it isn’t outlandish at all to believe that our efforts can put our students on a path that someday just might change the world.

To me, this is delicious ambiguity.

Make it a good day,

Mark

From “what we have” to “what we do with it”

We probably all have a good example of a time when we decided to make a change – maybe drastic, maybe minimal – only to realize later the full ramifications of that change (“Yikes! Now I remember why I grew a beard.”).  This is the problem with change – our own little lives aren’t as discretely organized as we’d like to think, and there are always unintended consequences and surprise effects.

When Augustana decided to move from measuring itself based on the quality of what we have (incoming student profile, endowment, number of faculty, etc.) to assessing our effectiveness based on what we do (student learning and development, educational improvement and efficiency, etc.), I don’t think we fully realized the ramifications of this shift.  Although there are numerous ways in which this shift is impacting our work, I’d like to talk specifically about the implications of this shift in terms of institutional data collection and reporting.

First, let’s clarify two terms.  When I say “outcomes” I mean the learning that results from educating.  When I say “experiences” I mean what students do and encounter during the course of their college career.  Experiences could be described simply by participation in a particular program or activity (e.g., majoring in philosophy), or more ambiguously, as in the quality of a student’s interaction with faculty.  Either way, the idea is – and has always been – that student experiences should lead to gains on educational outcomes.

I remember an early meeting during my first few months at Augustana College where one senior administrator turned to me and said, “We need outcomes.  What have you got?”  At many institutions, the answer would be something like, “I’ll get back to you in four years,” because that is how long it takes to gather dependable data.  Surveying students at any given point only tells you where they are at that point – it doesn’t tell you how much they’ve changed as a result of our efforts.  Although we have some outcome data from several studies that we happened to join, we still have to gather outcome data on everything that we need to measure – and that will take time.

But the other problem is one of design.  Ideally, you choose what you want to measure, and then you start measuring it.  In our case, although we have measured some outcomes, we don’t have measures of other outcomes that are equally important.  And there isn’t a very strong organizing framework explaining what we have measured, what we have not, and why.  This is why we are having the conversation about identifying college-wide outcomes.  The results of that conversation will tell us exactly what to measure.

The second issue is in some ways almost more important for our own purposes.  We need to know what we should do to improve student learning – not just whether our students are learning (or not).  As we should know by now, learning doesn’t happen by magic.  There are specific experiences that accelerate learning, and certain experiences that grind it to a halt.  Once we’ve identified the outcomes that define Augustana, then we can track the experiences that precede them.  It is amazing how many times we have found that, despite the substantial amount of data we have on our students, the precise data on a specific experience is nowhere to be found because we never knew we were going to need it.  This is the primary reason for the changes I made in the senior survey this year.

This move from measuring what we have to assessing what we do is not a simple one and it doesn’t happen overnight.  And that is just the data collection side of the shop.  Just wait until I start talking about what we do with the data once we get it! (Cue evil laughter soundtrack!)

Make it a good day!

Mark

Student learning as I see it

At a recent faculty forum, discussion of the curricular realignment proposal turned to the question of student learning.  As different people weighed in, it struck me that, even though many of us have been using the term “student learning” for years, some of us may have different concepts in mind.  So I thought it would be a good idea, since I think I say the phrase “student learning” at least once every hour, to explain what I mean and what I think most assessment folks mean when we say “student learning.”

Traditionally, “student learning” was a phrase that defined itself – it referred to what students learned.  However, the intent of college teaching was primarily to transmit content and disciplinary knowledge – the stuff that we normally think of when we think of an expert in a field or a Jeopardy champion.  So the measure of student learning was the amount of content that a student could regurgitate – both in the short term and the long term.

Fortunately or unfortunately, the world in which we live has completely changed since the era in which American colleges and universities hit their stride.  Today, every time you use your smart phone to get directions, look up a word, or find some other byte of arcane data, it becomes painfully clear that memorizing all of that information yourself would be sort of pointless and maybe even a little silly.  Today, the set of tools necessary to succeed in life and contribute to society goes far beyond the content itself.  Now, it’s what you can do with the content.  Can you negotiate circumstances to solve difficult problems?  Can you manage an organization in the midst of uncertainty?  Can you put together previously unrelated concepts to create totally new ideas?  Can you identify the weakness in an argument and how that weakness might be turned to your advantage?

It has become increasingly apparent that colleges and universities need to develop in their students the skills needed to answer “yes” to those questions.  So when people like me use the phrase “student learning” we are referring to the development of the skill sets necessary to make magic out of content knowledge.  That has powerful implications for the way that we envision a general education or major curriculum.  It also holds powerful implications for how we think about integrating traditional classroom and out-of-class experiences in order to firmly develop those skills in students.

I would encourage all of us to reflect on what we think we mean when we say “student learning.”  First, let’s make sure we are all referring to the same thing when we talk about it.  Second, let’s move away from emphasizing content acquisition as the primary reflection of our educational effectiveness.  Yes, content is necessary, but it’s no longer sufficient.  Yes, content is foundational to substantive student learning, but very few people look at a completed functioning house and say, “Wow, what an amazing foundation.”  I’m just sayin’ . . .

Make it a good day!

Mark

Moving from satisfaction to experiences – a new senior survey

One of the “exciting” parts of my job is building surveys.  I’ve worked with many of you over the past two years to construct new surveys to answer all sorts of questions.  On the one hand, it’s a pretty interesting challenge to navigate all of the issues inherent in designing what amounts to a real life “research study.”  At the same time, it can be an exhausting project because there are so many things you just can’t be sure of until you field test the survey a few times and find all of the unanticipated flaws.  But in the end, if we get good data from the new survey and learn things we didn’t know before that help us do what we do just a little bit better, it’s a pretty satisfying feeling.

As many of you already know, Augustana College has been engaged in a major change over the last several years in terms of how we assess ourselves.  Instead of determining our quality as an institution based on what we have (student incoming profile, endowment amount, etc.), we are trying to shift to determining our quality based on what we do with what we have.  Amazingly, this places us in a very different position than many higher education institutions.  Unfortunately, it also means that there aren’t many examples on which we might model our efforts.

One of the implications of this shift involves the nature of the set of institutional data points that we collect.  Although many of the numbers we have traditionally gathered continue to be important, the measure of ourselves that we are hoping to capture is what we do with those traditional numbers.  And while we have long maintained pretty robust ways of obtaining the numbers you would see in our traditional dashboard, our mechanisms for gathering data that would help us assess what we do with what we have are not yet robust enough.

So over the last few months, I have been working with the Assessment for Improvement Committee and my student assistants to build a new senior survey.  While the older version had served its purpose well over more than a decade, it was ready for an update, if not an overhaul.

The first thing we’ve done is move from a survey of satisfaction to a survey of experiences.  Satisfaction can sometimes give you a vague sense of customer happiness, but it often falls flat in trying to figure out how to make a change – not to mention the fact that good educating can produce customer dissatisfaction if that customer had unrealistic expectations or didn’t participate in their half of the educational relationship.

The second thing we’ve done is build the senior survey around the educational and developmental outcomes of the entire college.  If our goal is to develop students holistically, then our inquiry needs to be comprehensive.

Finally, the third thing we’ve done is “walk back” our thinking from the outcomes of various aspects of the college to the way that students would experience our efforts to produce those outcomes.  So, for example, if the outcome is intercultural competence, then the question we ask is how often students had serious conversations with people who differed from them in race/ethnicity, culture, social values, or political beliefs.  We know this is a good question to ask because a host of previous research shows that the degree to which students engage in these experiences predicts their growth in intercultural competence.
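
Conceptually, this “walking back” step amounts to a mapping from each college-wide outcome to the experience questions that prior research links to growth on that outcome.  The outcomes below appear elsewhere in these posts, but the item wordings are my paraphrases for illustration, not the survey’s actual items:

```python
# Illustrative mapping from intended outcomes to the experience items that
# predict growth on them. Wordings are paraphrases, not actual survey items.

outcome_to_items = {
    "intercultural competence": [
        "How often did you have serious conversations with people who differ "
        "from you in race/ethnicity, culture, social values, or political beliefs?",
    ],
    "collaborative leadership": [
        "How often did you work with other students on projects or activities "
        "outside of class?",
    ],
}

for outcome, items in outcome_to_items.items():
    print(outcome)
    for question in items:
        print("  -", question)
```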

If you want to see the new senior survey, please don’t hesitate to ask.  I am always interested in your feedback.  In the meantime . . .

Make it a good day!

Mark

Why should our seniors participate in the Wabash National Study?

When things get really hectic, I have a hard time remembering what month it is.  Judging by the snow falling outside as I write this first column of the spring term, it’s not just me.  Fortunately, we all have our anchoring mechanisms – our teddy bear or our safe space that keeps us grounded.  For me, it’s the Wabash National Study senior data collection that will occur in March and April.  At long last, it’s time to find out from our seniors how their Augustana experience impacted their development on many of the primary intended outcomes of a liberal arts education.  (I know.  Own it!)

I believe that the data we gather from the Wabash National Study could be the most important data that Augustana has collected in its 150+ year history.  I’d like to give you three reasons why I make this claim, and three ways that I need your help.

First, the Wabash National Study measures individual gains across a range of specific outcomes.  Instead of taking a snapshot of a group of freshmen and a snapshot of a different group of seniors and assuming that those two sets of findings represent change over time, in this study we will have actually followed the same group of students from the first year to the fourth year.  Furthermore, instead of tracking only one outcome, this study tracks 15 different outcomes, allowing us to examine how gains on one outcome might relate to gains on another outcome.

Second, the Wabash National Study is the first and only study that allows us to figure out which student experiences significantly impact our students’ change on each outcome measure.  In other words, from this data we can determine which experiences improve gains, which experiences inhibit gains, and which experiences seem to have little educational impact. Furthermore, this data allows us to determine whether the gains we identify on each outcome are a function of pre-college characteristics (like intellectual aptitude) or a function of an experience that happened during college (like meaningful student-faculty interaction).  This gives us the kind of information on which we can more confidently base decisions about program design, college policies, and the way we link student experiences to optimize learning.
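
The statistical idea behind that claim can be sketched in a few lines: regress the senior-year score on the first-year score plus an experience measure, and see whether the experience still matters once you account for where students started.  The toy model below uses simulated data and stands in for the study’s far more careful analysis:

```python
# Toy version of the Wabash-style question: does an experience predict the
# senior-year outcome after controlling for the first-year score? All data
# here are simulated; this is not the study's actual model.

import numpy as np

rng = np.random.default_rng(0)
n = 300
first_year = rng.normal(50, 10, n)  # outcome measure at college entry
experience = rng.normal(3, 1, n)    # e.g., meaningful student-faculty interaction
senior = 5 + 0.9 * first_year + 2.0 * experience + rng.normal(0, 5, n)

# Ordinary least squares: senior ~ intercept + first_year + experience
X = np.column_stack([np.ones(n), first_year, experience])
coefs, *_ = np.linalg.lstsq(X, senior, rcond=None)
print(coefs)  # roughly [5, 0.9, 2.0]; the experience effect survives the control
```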

Third, as we continue to try to more fully embody a college that assesses itself based on what we do rather than what we have, this data can provide a foundation as we think about clearly articulating the kind of institution we want to be in the future and how we are best able to get there.  In the past decade, we have collected bits and pieces of this kind of data from NSSE, CLA, and various Teagle-sponsored studies – all important evidence on which we have made critical decisions that have improved the quality of the education we provide.  This time around, we will have all of that data in one study, allowing us to answer many of the questions that we need to answer now – questions that have previously been exceedingly difficult to answer because the applicable data was scattered across different, often incompatible, studies.

But just because we are going to try to collect this data from our seniors over the next two months doesn’t mean we automatically get to have our cake and eat it, too.  Our seniors have to volunteer to provide this data.  Although we have some pretty decent incentives ($25 gift cards to the book store and group incentives for some student groups), this thing could be a monumental belly flop if no one shows up to fill out our surveys.  This brings me to how you can help.

  1. Make it your mission to tell every senior with whom you interact to participate in the survey.  We are going to invite them by email, announce this study at various student venues, and hopefully have some articles in the Observer.  But the students need to be encouraged to participate at every turn.

  2. Tell them why they should participate!  It’s not enough to ask them to do it.  They need to know that this will fundamentally shape the way that we construct Augustana College for the next generation of students.  They can play a massive role in that effort just by showing up and filling out some surveys.  Oh, if only the rest of life were so easy!

  3. Remind them to participate.  We will have four different opportunities for seniors to provide data.  We will give $25 gift cards to the first 100 students at each session – so if they all wait to participate, most of them won’t get the incentives we would like to give them.  The dates, times, and locations of these sessions are:

  1. Monday, March 12, 6-8:30 PM in Science 102
  2. Monday, March 26, 6-8:30 PM in Olin Auditorium
  3. Thursday, March 29, 6-8:30 PM in Science 102
  4. Thursday, April 26, 10:30 AM – 12:30 PM in John Deere Lecture Hall

Thank you so much for your help.  Just to let you know ahead of time, I’m not going to shut up about this data collection effort until we give away all of the gift cards or we run out of data collection dates.  Yes, it’s that important.

Make it a good day,

Mark

A positivity distraction

As you slog your way through the snow and the grading and the (hypothetical) curriculum reconstruction this week, I hope you will take a moment to wire your brain for positive thoughts.  I don’t have much to say today – I’m feeling a little beat down myself – but I watched this TED talk last night and it was just the tidbit I needed to get my head straight.

Make it a good day (sometimes I really am talking to myself),

Mark

Understanding the “new” learning outcomes of a college education

At the Augustana Board Retreat a couple of weeks ago, Allen Bertsche (Director of International Programs) and I hosted a discussion with members of the Board, administrators, and faculty about a fundamental shift that has occurred in higher education over the past several decades.  While a college education used to be primarily about acquiring content knowledge, today the most important outcomes of a college education are a broad range of complex cognitive, psychosocial, and interpersonal skills and dispositions. These outcomes transcend a student’s major choice and are applicable in every facet of life.  In short, although content is still necessary, it is no longer sufficient.  In recent years Augustana has identified outcomes like critical thinking, collaborative leadership, and information literacy as fundamental skills that every student should develop before graduation.

During our conversation at the Board Retreat, Kent Barnds (Vice President of Enrollment, Communications, and Planning) pointed out that, while some of us might grasp the ramifications of this shift, prospective students and their families are still firmly entrenched in the belief that content acquisition is the primary goal of a college education.  In their minds, a college’s value is directly related to the amount of content knowledge it can deliver to its students.  As many of you know, when prospective students and families visit, they often ask about opportunities to obtain multiple majors while participating in a host of experiences.  By comparison, they rarely ask about the exact process by which we develop critical thinking or cross-cultural skills in students.

I think it would do us some good to consider what the current calendar discussion looks like to those who believe that the cost of tuition primarily buys access to content knowledge.  The students quoted in the most recent Observer about the 4-1-4 calendar discussion exemplify this perspective.  Their rationale for keeping the trimester system is clearly about maximizing content acquisition – more total courses required for graduation equals more total content acquired, and shorter trimesters allow students to minimize the time spent acquiring content that they don’t need, don’t like, or don’t want.  With tuition and fees set well over $40,000 next year, it’s not hard to see their concerns.

Now please don’t misunderstand me – I am much more interested in what we do within the calendar we choose than whether we continue on trimesters or move to semesters.  Nor am I suggesting that student opinions should or should not influence this discussion.  But if we’re trying to have a conversation about student learning – with or without students – and we don’t share a common definition of the term, then we are likely doomed to talk right past each other and miss a real opportunity to meaningfully improve what we do regardless of whether or not the faculty votes to alter the calendar.  On the other hand, if we can more clearly spell out for students, parents, (and ourselves) what we mean when we talk about “student learning” and why our focus on complex skills and outcomes is better suited to prepare students for life after graduation, not only might it temper the tensions that seem to be bubbling up among our students, it might also allow us to help them more intentionally calibrate the relationship between their current activities and obligations and their post-graduate aspirations.

So no matter where you sit on the semester/trimester debate, and no matter what you think about the shift in emphasis from content acquisition to the development of skills and outcomes, I would respectfully suggest that we need to better understand the presumptions that undergird each assertion in the context of the calendar discussion. In my humble opinion, as Desi used to say to Lucy, we still “got some ‘splainin’ to do.”

Make it a good day,

Mark