Triangulating our assessment of quantitative literacy

Whether we like it or not, the ability to convey, interpret, and evaluate data affects every part of our personal and professional lives. So it’s not a surprise to find quantitative literacy among Augustana’s nine student learning outcomes. Yet, of all those outcomes, quantitative literacy may be the most difficult to pin down. First of all, this concept is relatively new when compared to other learning outcomes like intercultural competence or critical thinking. Second, there isn’t nearly the range of measurement mechanisms – surveys or otherwise – that capture this concept effectively. And third, quantitative literacy is the kind of skill that is particularly susceptible to social desirability bias (i.e., the tendency to believe that you are better at a desirable intellectual skill than you actually are).

Despite the obstacles I noted above, the Assessment for Improvement Committee (AIC) felt like this was an outcome ripe for the assessing. First, we’ve never really measured quantitative literacy among Augustana students before (it wasn’t addressed in the Wabash National Study when we participated between 2008 and 2012). Second, it isn’t clear that we know how each student develops this skill, as we have defined it in our own college documents, beyond what a student might learn in a “Q” course required by the core curriculum. As a result, it’s entirely possible that we have established a learning outcome for all students that our required curriculum isn’t designed to achieve. Uh oh.

In all fairness, we do have one bit of data – imperfect as it is. A few years ago, we borrowed an idea from the National Survey of Student Engagement (NSSE) and inserted a question into our senior survey that asked students to respond to the statement, “I am confident in my ability to interpret numerical and statistical quantities,” giving them five response options that ranged from “strongly disagree” to “strongly agree.”

Since we began asking this question, about 75% of seniors have indicated that they “agree” or “strongly agree” with that statement. Unfortunately, our confidence in that number began to wane as we looked more closely at those responses. For that number to be credible, we would expect to see that students from majors with no quantitative focus were less confident in their quantitative abilities than students from majors that employ extensive quantitative methods. However, we often found the opposite to be the case. It turned out that students who had learned something about how complicated quantitative methods can be were less confident in their quantitative literacy skills than students who had no exposure to such complexities, almost as if knowing more about the nuances and trade-offs that can make statistics such a maddeningly imperfect exercise had a humbling effect. In the end it appeared that, in the case of quantitative literacy, ignorance might indeed be bliss (a pattern that echoes another bias, the Dunning-Kruger Effect, whose naming is a funny story in its own right).

So last year the AIC decided to conduct a more rigorous study of our students’ quantitative literacy skills. To make this happen, we first had to build an assessment instrument that matched our definition of quantitative literacy. Kimberly Dyer, our measurement ninja, spent weeks poring over the research on quantitative literacy and the survey instruments that had already been created to find something that fit our definition of this learning outcome. Finally, she ended up combining the best of several surveys to build something that matched our conception of quantitative literacy and included questions that addressed interpreting data, understanding visual presentations of data, calculating simple equations (remember story problems from grade school?), applying findings from data, and evaluating the assumptions underlying a quantitative claim. We then solicited faculty volunteers who would be willing to take time out of their upper-level classes to give their students this survey. In the end, we were able to get surveys from about 100 students.

As you might suspect, the results of this assessment project painted a rather more sobering picture of our students’ quantitative literacy skills. Below are the proportions of questions within each of the aforementioned quantitative literacy categories that students who had completed at least one Q course answered correctly.

  • Interpreting data  –  41%
  • Understanding visual presentations of data  –  41%
  • Calculating simple equations  –  45%
  • Applying findings from data  –  52%
  • Evaluating assumptions underlying a quantitative claim  –  51%

Interestingly, students who had completed two Q classes didn’t fare any better. It wasn’t until students had taken three or more Q classes that the proportion of correct answers improved significantly.

  • Interpreting data  –  58%
  • Understanding visual presentations of data  –  59%
  • Calculating simple equations  –  57%
  • Applying findings from data  –  65%
  • Evaluating assumptions underlying a quantitative claim  –  59%

There are all kinds of reasons that we should interpret these results with some caution – a relatively small sample of student participants, the difficulty of the questions in the survey, or the uneven distribution of the student participants across majors (the proportion of STEM and social science majors that took this survey was higher than the proportion of STEM and social science majors overall). But interpreting with caution doesn’t mean that we discount these results entirely. In fact, since prior research on students’ self-reporting of learning outcomes attainment indicates that students often overestimate their abilities on complex skills and dispositions, the 75% of students who agree or strongly agree is probably substantially higher than the proportion of graduates who are actually quantitatively literate. Furthermore, since the proportion of students who took this survey was skewed toward majors where quantitative literacy is a more prominent part of that major, these findings are more likely to overestimate the average student’s quantitative literacy than underestimate it. Triangulating these data with prior research suggests that our second set of findings might paint a more accurate picture of our graduates.

So how should we respond to these findings? To start, we probably ought to address the fact that there isn’t a clear pathway between what students are generally expected to learn in a “Q” course and what the college outcome spells out as our definition of quantitative literacy. That gap alone creates a condition in which we leave students’ likelihood of meeting our definition of quantitative literacy up to chance. So our first step might be to explore how we can ensure that all students get the chance to achieve this outcome, especially those students who major in disciplines that don’t normally include quantitative literacy skills.

The range of quantitative literacy, or illiteracy as the case might be, is a gnarly problem. It’s not something that we can dump onto an individual experience and expect that box to be checked. It’s hard work, but if we are serious about the learning outcomes that we’ve set for our students and ourselves, then we can’t be satisfied with leaving this outcome to chance.

Make it a good day,

Mark

A Motherlode of Data!

It’s probably a bit of a reach to claim that the Institutional Effectiveness and Mission Fulfillment report (begrudgingly called the IEMF) is the cutting edge of data reporting, but it is true that this annual report is something that a lot of people work pretty hard on for several months at the end of each academic year. Unlike the college’s dashboard – a single page of data points that is supposed to cut to the quantitative quick – the IEMF is a motherlode of data and a treasure trove of information about Augustana College.

In past years we have posted the IEMF on the Institutional Research web page and hoped that people would look at it because, you know . . . nerd click-bait! Not since the first year that we produced this report have we hosted a public gathering to invite comment from anyone who might have an observation about the data and how it is conveyed. One thing I will not soon forget from that meeting was the degree to which data becomes political as soon as it becomes public, and therefore how important it is to convey precisely and anticipate how data presentations might be interpreted from different points of view.

With that in mind, I want to share with you the 2016 version of the IEMF. It is organized into nine sections that each cover different aspects of what and how we do what we do. For example, in the section titled Persistence, Graduation, and Attrition (p. 1) you might be interested in the distribution of reasons that students give for withdrawing and how those reasons might have changed over the last three years. Or, in the section titled Our Practices (p. 20) you might be interested in the rising cost to recruit a single student over the last three years. There are a lot of tidbits throughout the document that provide a glimpse into Augustana College – areas of strength, opportunities for growth, and how we compare to similar liberal arts colleges around the country.

Click on the link below and swim in a river of data to your heart’s content.

2016_IEMF_Report

Certainly, the IEMF isn’t a perfect snapshot. Even though it has improved considerably from its first iteration several years ago, there are plenty of places where we wish our data were a little better or a little more precisely able to show who we are and what we do. Most importantly, this document isn’t intended to be a braggart’s bible. On the contrary, the IEMF is designed to be an honest presentation of Augustana College and of us. We aren’t perfect. And we know that. But we are trying to be as good as we can be with the resources we have. And in more than a few instances, we are doing pretty well.

Before I forget, a special and sincere “thank you” goes out to everyone who played a role in hunting down this data and putting the document together: Kimberly Dyer, Keri Rursch, Cindy Schroeder, Quan Vi, Erin Digney, Angie Williams, Katey Bignall, Kelly Hall, Randy Roy, Lisa Sears, Matt Walsh, Sheri Curran, Robert Scott, Jeff Thompson, Dom Sullivan, Katrina Friedrich, Bonnie Hewitt, Scott Dean, Shawn Beattie, and Kent Barnds.

So have a look. If you have any questions or critiques or suggestions, please send them to me. I’m genuinely looking for ways to improve this document.

For starters . . . anyone got any catchy ideas for a better name?

Make it a good day,

Mark

 

Even more details regarding term-to-term retention

The more we dig into our retention data, the more interesting it gets. Earlier this term, I shared with you some of our findings regarding term-to-term retention rates. These data seem to suggest that we are slowly improving our within-year retention rates.

As always, the overall numbers only tell us so much. To make the most of the data we collect, we need to dig deeper and look at within-year retention rates for subpopulations of students that have historically left at a higher rate than their peers. Interestingly, this data might also tell us something about when these students are most vulnerable to departing and, as a result, when we might increase our focus on supporting their success.

The table below presents 2014-15 within-year retention rates for the five subpopulations of students that deviated significantly from the overall term-to-term retention rates. Percentages more than one point below the overall number are marked with an asterisk.

Student Demographic Group            Fall to Winter   Winter to Spring   Fall to Spring
Overall                                  96.6%             97.6%            94.3%
Males                                    94.4%*            95.9%*           90.5%*
Multicultural Students                   98.7%             93.9%*           92.7%*
Gov’t Subsidized Loan Qualifiers         94.8%*            97.6%            92.5%*
Non IL/IA Residents                      96.0%             90.0%*           90.0%*
First-Generation Students                95.3%*            96.7%            92.3%*

The first thing I’d like to highlight is a pair of subpopulations that aren’t on this list. Analyses of older data would no doubt highlight the lagging retention rates of students who came to Augustana with lower ACT scores or who applied test-optional (i.e., without submitting a standardized test score). However, in the 2014-15 cohort these subpopulations retained from fall to winter (96.9% and 97.9%, respectively) and from winter to spring (96.8% and 97.9%, respectively) at rates similar to the overall population. The winter-to-spring numbers are particularly encouraging because that is when first-year students can be suspended for academic performance. Although it would be premature to declare that this improvement results directly from our increased student support efforts, these numbers suggest that we may indeed be on the right track.

In looking at the table above, the highlighted demographic groups are probably not a surprise to those who are familiar with retention research. However, this table gives us a glimpse into when certain groups are more vulnerable to departure. For example, male students’ retention rates are consistently lower than the campus average. By contrast, multicultural students were retained at a higher rate from fall to winter, but from winter to spring that early success evaporated completely. Winter term might also play a role for non IL/IA residents, who retain at rates similar to their peers from fall to winter but depart at a higher rate than the rest of the cohort from winter to spring.

Since this is only one year of data, I wouldn’t suggest making any emphatic claims based on it. But I do think that these findings should challenge us to think more deeply about the kinds of support different types of students might need and when they might benefit most from it.

Make it a good day,

Mark

 

Applying a Story Spine to Guide Assessment

As much as I love my assessment compadres, sometimes I worry that the language we use to describe the process of continual improvement sounds pretty stiff. “Closing the loop” sounds too much like teaching a four-year-old to tie his shoe. Over the years I’ve learned enough about my own social science academic nerdiness to envy those who see the world through an entirely foreign lens. So when I stumbled upon a simple framework for telling a story called a “Story Spine,” it struck me that this framework might spell out the fundamental pieces of assessment in a way that just makes much more sense.

The Story Spine idea can be found in a lot of places on the internet (e.g., Pixar and storytelling), but I found out about it through the world of improv. At its core, the idea is to help improvisers go into a scene with a shared understanding of how a story works so that, no matter what sort of craziness they discover in the course of their improvising, they know that they are all playing out the same meta-narrative.

Simply put, the Story Spine divides a story into a series of sections that each start with the following phrases. As you can tell, almost every story you might think of would fit into this framework.

Once upon a time . . .

And every day . . .

Until one day . . .

Because of that . . .

Because of that . . .

Until finally . . .

And ever since then . . .

These section prompts can also fit into four parts of a cycle that represent the transition from an existing state of balance (“once upon a time” and “every day”), encountering a disruption of the existing balance (“until one day”), through a quest for resolution (“because of that,” “because of that,” and “until finally”), and into a new state of balance (“and ever since then”).

To me, this framework sounds a lot like the assessment loop that is so often trotted out to convey how an individual or an organization engages assessment practices to improve quality. In the assessment loop, we are directed to “ask questions,” “gather evidence,” “analyze evidence,” and “use results.” But to be honest, I like the Story Spine a lot better. Aside from being pretty geeky, the assessment loop starts with a vague implication that trouble exists below the surface and without our knowledge. This might be true, but it isn’t particularly comforting. Furthermore, the assessment loop doesn’t seem to leave enough room for all of the forces that can swoop in and affect our work despite our best intentions. There is a subtle implication that educating is like some sort of assembly line that should work with scientific precision. Finally, the assessment loop usually ends with “using the results” or, at its most complex, some version of “testing the impact of something we’ve added to the mix as a result of our analysis of the evidence.” But in the real world, we are often faced with finding a way to adjust to a new normal – another way of saying that entering a new state of balance is as much a function of our own adjustment as it is the impact of our interventions.

So if you’ve ever wondered if there was a better way to convey the way that we live an ideal of continual improvement, maybe the Story Spine works better. And maybe if we were to orient ourselves toward the future by thinking of the Story Spine as a map for what we will encounter and how we ought to be ready to respond, maybe – just maybe – we will be better able to manage our way through our own stories.

Make it a good day,

Mark

Some comfort thoughts about mapping

I hope you are enjoying the bright sunshine today. Seeing that we might crack the 70-degree mark by the end of the week makes the sun that much more invigorating!

As you almost certainly know by now, we have been focusing on responding to the suggestions raised in the Higher Learning Commission accreditation report regarding programmatic assessment. The first step in that response has been to gather curricular and learning outcome maps for every major.

So far, we have 32 out of 45 major-to-college outcomes maps and 14 out of 45 courses-to-major outcomes maps.  Look at it as good or look at it as bad – at least we are making progress, and we’ve still got a couple weeks to go before I need to have collected them all. More importantly, I’ve been encouraged by the genuine effort that everyone has made to tackle this task. So thank you to everyone.

Yet as I’ve spoken with many of you, two themes have arisen repeatedly that might be worth sharing across the college and reframing just a bit.

First, many of you have expressed concern that these maps are going to be turned into sticks that are used to poke you or your department later. Second, almost everyone has worried about the inevitable gap between the ideal student’s progress through a major and the often less-ideal realities of the way that different students enter and progress through the major.

To both of those concerns, I’d like to suggest that you think of these maps as a perpetually working document instead of some sort of contract that cannot be changed. The purpose of drawing out these maps is to make explicit the implicit only as a starting point from which your program will constantly evolve. You’ll change things as your students change, as your instructional expertise changes, and as the future for which your program prepares students changes. In fact, probably the worst thing that could happen is a major that never changes anything no matter what changes around it.

The goal at this point isn’t to produce an unimprovable map. Instead, the goal is to put together a map that is your best estimate of what you and your colleagues are trying to do right now. From there, you’ll have a shared starting point that will make it a lot easier to identify and implement adjustments that will in turn produce tangible improvement.

So don’t spend too much time on your first draft. Just get something on paper (or pixels) that honestly represents what you are trying to do and send it to me using the templates I’ve already shared with everyone. Then expect that down the road you’ll decide to make a change and produce a second draft. And so on, and so on. It really is that simple.

Make it a good day,

Mark

I so wish I had written this!

Hi Folks,

Yes, I’m late with my blog this week. And I’m sorry about that. But I’ve been busy thinking about ways to organize my desk. And that’s something.

Brian Leech shared this with me yesterday, so he deserves whatever credit someone is supposed to get when they share something with someone who then “borrows” it to present to his blog audience in place of something that actually required original work. So all thanks goes to Brian for enabling my slacker gene this week.

We all need to laugh at ourselves and the absurd parts of our work sometimes. So enjoy having a “go” at the assessment culture run amok and the weird world of Institutional Research.

RUBRIC FOR THE RUBRIC CONCERNING STUDENTS’ CORE EDUCATIONAL COMPETENCY IN READING THINGS IN BOOKS AND WRITING ABOUT THEM.

From Timothy McSweeney’s Internet Tendency blog.

Make it a great day!

Mark

So how do our retention numbers look now?

Early in the winter term, I wrote about the usefulness of tracking term-to-term retention. This approach is particularly valuable in evaluating and improving our efforts with first-year students, since they are the ones most susceptible to the challenges of transitioning to college and for whom many of our retention programs are designed. Now that we have final enrollment numbers for the spring term, let’s have a look at our term-to-term retention rates over the last five years and see if our increased student success efforts might be showing up in the numbers.

Here are the last five years of fall-to-winter retention rates for the first-year cohort.

  • 2011 – 94.1%
  • 2012 – 95.6%
  • 2013 – 97.0%
  • 2014 – 95.9%
  • 2015 – 96.6%

As you can see, we’ve improved by 2.5 percentage points over the last five years. This turns out to be real money, since a 2.5-point increase in the number of first-year students returning for the winter term means that we retained an additional 17 students and added roughly $84,000 in revenue (assuming we use 3-year averages for the incoming class and the first-year net tuition revenue per term: 675 students and $4,940, respectively).

But one of the difficult issues with retention is that success is sometimes fleeting. In other words, retaining a student for one additional term might just delay the inevitable. Furthermore, in the case of first-year term-to-term retention the fall-to-winter retention rates can be deceiving because we don’t impose academic suspensions on first-year students after the fall term. Thus students who are in serious academic trouble might just hang on for one more term even though there is little reason to think that they might turn things around. Likewise, students who are struggling to find a niche at Augustana may begrudgingly come back for one more term even though they are virtually sure that this place isn’t the right fit. With that in mind, looking at our fall-to-spring retention rates would give us a more meaningful first glimpse at the degree to which our retention efforts are translating into a sustained impact. If the fall-to-winter retention rates are nothing more than a mirage, then the fall-to-spring retention rates would remain unchanged over the same five-year period. Conversely, if our efforts are bearing real fruit then the fall-to-spring retention rates ought to reflect a similar trend of improvement.

Here are the last five years of fall-to-spring retention rates for the first-year cohort.

  • 2011 – 92.1%
  • 2012 – 93.1%
  • 2013 – 93.3%
  • 2014 – 93.5%
  • 2015 – 94.1%

As you can see, it appears that the improving fall-to-winter retention rate largely carries through to the spring term. That translates into more real money: approximately $69,100 in additional spring term revenue. Overall, that’s about $153,000 that we wouldn’t have seen in this year’s revenue column had we not improved our term-to-term retention rates among first-year students.
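For those who like to check the arithmetic, the back-of-the-envelope calculation behind these figures can be sketched in a few lines of Python. The cohort size (675) and per-term net tuition ($4,940) are the 3-year averages cited above; the dollar figures in this post were presumably computed from actual headcounts, so estimating from the rounded rates lands within a rounding error of the reported numbers.

```python
# Back-of-the-envelope estimate of revenue gained from improved
# term-to-term retention, using the 3-year averages cited above.
COHORT = 675        # average incoming first-year class size
NET_TUITION = 4940  # average first-year net tuition revenue per student per term ($)

def added_revenue(points_gained: float) -> tuple[int, int]:
    """Extra students retained and extra revenue for one term, given
    the improvement in the retention rate in percentage points."""
    extra_students = round(COHORT * points_gained / 100)
    return extra_students, extra_students * NET_TUITION

# Fall-to-winter: 94.1% (2011) -> 96.6% (2015), a 2.5-point gain
print(added_revenue(2.5))  # (17, 83980) -- "roughly $84,000"

# Fall-to-spring: 92.1% (2011) -> 94.1% (2015), a 2.0-point gain
print(added_revenue(2.0))  # (14, 69160) -- "approximately $69,100"
```

Summing the two terms gives about $153,000, matching the total above.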

Certainly this doesn’t mean that we should rest on our laurels. Even though retaining a student to the second year gets them over the biggest hump in terms of the likelihood of departure, it still seems to me like small consolation if that student doesn’t ultimately graduate from Augustana. However, especially facing the financial challenges that the state of Illinois has dumped in our lap, we ought to pat each other on the back for a moment and take some credit for our work to help first-year students succeed at Augustana. The data suggests that our hard work is paying off.

Make it a good day,

Mark

What’s the Problem We’re Trying to Address?

If you’ve had to sit through more than one meeting with me, you’ve almost certainly heard me ask this question. Even though I can see how the question might sound rhetorical and maybe even a little snarky, I’m really just trying to help. Because I know from my own experience how easy it is to get lost in the weeds when trying to tackle a complex issue that is full of dicey trade-offs and unknown unknowns. So sometimes I’ve found that it can be useful to pause, take a couple of deep breaths and refocus on the problem at the core of the conversation.

By now you’ve almost certainly heard about the discussion about transitioning from an academic calendar based on trimesters to one based on semesters. Last week, Faculty Council provided a draft proposal to the faculty to be discussed, vetted, and even adjusted as legitimate concerns are identified by the community. Since I’ve already seen a calendar discussion sap us of most of our energy twice (or once if you count the two-year discussion a few years back as a single event), I hope that this time we can find a way to get through this without quite so much emotional fallout.

With that in mind, after listening to the calendar conversation for the last few months I thought it might be helpful to revisit the question at the top of this post:

What’s the problem we’re trying to address?

It is true, in one very real sense, that there is not a single answer. In fact the “problem” looks different depending upon where you sit. But since the topic of semesters was formally put back onto the front burner by the senior administration and the Board of Trustees, it’s probably useful to understand the problem as they see it. From their perspective, the problem we are facing is actually a pretty straightforward one. In a nutshell we, like a lot of colleges and universities these days, have a balance sheet problem. In other words, we are having an increasingly difficult time ensuring that our revenues keep pace with our expenses (or put differently, that our expenses don’t outpace our revenues).

The reasons for this problem have been presented countless times, so I’ll try not to dive down that rabbit-hole too far again. But suffice it to say that since American family incomes have been stagnant for a long time, each year that our costs go up we lose a few more prospective families that might otherwise be willing to pay what we charge. Combine that with a shrinking population of high school graduates in the Midwest overall, and you can imagine how it gets harder and harder to come up with the increased revenue necessary to pay for inescapable increases in expenses like electricity, gas, and water, not to mention reasonable salary raises, building and sidewalk repairs, and replacements of worn out equipment.

The possible solutions to a straightforward balance sheet problem like ours are also relatively straightforward. If we decide to think of it primarily as insufficient revenue, then we would likely choose a way to increase revenue (e.g., enroll more students, add graduate programs, start online programs . . . each of the examples in this category is perceived by many as a potential threat to our philosophical core). If we decide to think of this problem primarily as excessive expenses, then we would likely choose a way to reduce expenses (e.g., make the college demonstrably smaller, eliminate Augie Choice . . . the only examples in this category that I can think of are pretty depressing). If we don’t see plausible options to increase revenues or reduce expenses, then the only other possibility is to find ways to become more efficient (i.e., achieve similar results from smaller expenditures). Of course, we could concoct some combination of all three approaches.

From the administration’s perspective, the possibility of moving to a semester-based academic calendar addresses the balance sheet problem by giving the college access to an expanded set of opportunities for increased efficiency (i.e., achieving similar results from smaller expenditures). Some of those efficiencies are more self-evident, such as reducing the number of times we power up and power down specific buildings. Some of them are more abstract, such as reducing the number of times we conduct a large-scale process like registration. But the central problem that the semester idea attempts to address is an issue of imbalance between revenues and expenses.

Although some have suggested otherwise, the semester idea is not primarily intended to improve retention rates or increase the number of mid-year transfer students. It is possible that a semester calendar might be more conducive to retaining students who struggle initially or attracting transfer students just after the Christmas break. But there are plenty of similar institutions on semester calendars with lower retention rates and fewer transfer students. Of course, that doesn’t disprove anything either; it just demonstrates that a move to semesters doesn’t guarantee anything. Increases in retention and mid-year transfers will happen (if they happen at all) as a result of what we do within a new calendar, not because we move to a new calendar.

I truly don’t have a strong opinion on the question of calendar. Both trimesters and semesters can be done well and can be done badly. This is why Faculty Council and others have thought long and hard about how to construct a semester system that maintains our commitment to an integrated liberal arts education and delivers it in a way that allows faculty to do it well. Nonetheless, I think it is useful to remind ourselves why we are having this conversation and the nature of the problem we are trying to address. If you think that we should address our balance sheet issues by expanding revenue sources or by reducing expenses, then by all means say so. If you don’t think a balance sheet problem exists, then by all means say so. But let’s make sure we understand the nature of the problem we are trying to address. At the least, this will help us have a more transparent conversation that leaves us in a healthier place at the end, no matter what we decide to do.

And one more thing. Let’s not equate “increasing efficiency” with “doing more with less.” Increasing efficiency is doing things differently with the same resources in a way that is more effective. If we find ourselves continually doing more with less, in the long term we’re doing it wrong.

Make it a good day,

Mark

 

Improving Advising in the Major: Biology Drives our Overall Increase

Last week I shared a comparison of the overall major advising data from seniors in 2014 and 2015. Although not all of the differences between the two years of data met the threshold for statistical significance, taken together it seemed pretty likely that these improved numbers weren’t just a function of chance. As you might expect by now, another aspect of this finding piqued my curiosity. Is this change a result of a relatively small campus-wide improvement or are the increases in the overall numbers a result of a particular department’s efforts to improve?

Since the distribution of our seniors’ major choices leans heavily toward a few departments (about half of our students major in Biology, Business, Psychology, or Education), it didn’t take too long to isolate the source of our jump in major advising scores. Advising scores in Business, Psychology, and Education didn’t change much between 2014 and 2015. But in Biology? Something pretty impressive happened.

Below is a comparison of the increases on each advising question overall and the increases on each advising question for Biology and Pre-Med majors.  In particular, notice the column marked “Diff.”

Senior Survey Questions                                                   Overall               Biology/Pre-Med
                                                                       2014  2015  Diff      2014  2015  Diff
Cared about my development                                             4.11  4.22  +.11      3.70  4.02  +.32
Helped me select courses                                               3.93  4.05  +.12      3.49  3.90  +.41
Asked about career goals                                               3.62  3.73  +.11      3.39  3.81  +.42
Connected with campus resources                                        3.35  3.47  +.12      3.11  3.36  +.25
Asked me to think about links btwn curr., co-curr., post-grad plans    3.41  3.57  +.16      3.04  3.48  +.44
Helped make the most of college                                        3.85  3.97  +.12      3.36  3.80  +.44
How often you talked to your adviser                                   3.62  3.51  -.11      3.09  3.27  +.18

It’s pretty hard to miss the size of the increases for Biology and Pre-Med majors between 2014 and 2015. In almost every case, these gains are two to four times larger than the overall increases – and on the final item, the Biology/Pre-Med score rose even as the overall score dipped. In a word: impressive!

So what happened?

Advising is a longstanding challenge for Biology and Pre-Med faculty. For decades this department has struggled to adequately advise a seemingly endless flow of majors. Last spring, Biology and Pre-Med graduated almost 150 students and at the beginning of the 2014-15 academic year there were 373 declared majors in either program. Moreover, that number probably underestimates the actual number of majors they have to work with since many students declare their major after the 10th day of the term (when this data snapshot was archived).

Yet the faculty in the Biology and Pre-Med department decided to tackle this challenge anyway. Despite the overwhelming numbers, maybe there was a way to get a little bit better by making even more of the limited time each adviser spent with each student. Each faculty adviser examined senior survey data from their own advisees and picked their own point of emphasis for the next year. Several of the Biology and Pre-Med faculty shared with me the kinds of things that they identified for themselves. Without fail, each faculty member decided to make sure that they talked about CORE in every meeting, be it the resources available in CORE for post-graduate preparation or just the value of making a visit to the CORE office and establishing a relationship. Several others talked about making sure that they pressed their advisees to describe the connections between the classes they were taking and the co-curricular activities in which they were involved, pushing their students to be intentional with everything they chose to do in college. Finally, more than one person noted that even though advising had always been important to them, they realized how easy it was to let one or more of the usual faculty stresses color their mood during advising meetings (e.g., succumbing to the stress of an upcoming meeting or a prior conversation). They found ways to get themselves into a frame of mind that improved the quality of their interaction with students.

None of these changes seems all that significant by itself. Yet together, it appears that the collective effort of the Biology and Pre-Med faculty – even in the face of a continued heavy stream of students – made a powerful difference in the way that students rated their advising experience in the major.

Improvement isn’t as daunting as it might sometimes seem. In many cases, it just takes an emphasis on identifying small changes and implementing them relentlessly. So three cheers for Biology and Pre-Med. You’ve demonstrated that even under pretty tough circumstances, we can improve something by focusing on it and making it happen.

Make it a good day,

Mark

We’ve gotten better at advising, and we can (almost) prove it!

With all of the focus on reaccreditation, budget concerns, employee engagement, and the consideration of a different academic calendar, it seems like we’ve spent a lot of time dwelling on things that aren’t going well or aren’t quite good enough. In the midst of these conversations, though, I think it would do us some good to remember that we are also doing some things very well. So before we plunge ourselves into another brooding conversation about calendar, workload disparity, or budget issues, I thought we could all use a step back from the precipice and a solid pat on the back.

You’d have to have been trapped under something thick and heavy to have missed all of the talk in recent years about the need to improve advising. We’ve added positions, increased the depth and breadth of training, and aspired to adopt an almost idyllic conception of deeply holistic advising. This has stretched many of us outside of our comfort zones and required that we apply a much more intentional framework to something that “in the old days” was supposed to be a relaxing and more open-ended conversation between scholar and student.

With this in mind, I thought it might be fun to start the spring term by sharing a comparison of 2014 and 2015 senior survey advising data.

Our senior survey asks seven questions about major advising. These questions are embedded in a section focused on our seniors’ experience in their major so that we can be sure that students’ responses refer to their advising experience in each of their majors (especially since so many students have more than one major and therefore more than one major adviser). The first six questions focus on aspects of an intentional and developmental advising experience. The last question provides us with a way to put those efforts into the nitty-gritty context of efficiency. In an ideal world, our students’ responses would show a trend toward higher scores on the first six questions, while average scores for the seventh question would remain relatively flat or even decline somewhat.

Here is a list of the senior survey advising questions and the corresponding response options.

  • My major adviser genuinely seemed to care about my development as a whole person. (strongly disagree, disagree, neutral, agree, strongly agree)
  • My major adviser helped me select courses that best met my educational and personal goals. (strongly disagree, disagree, neutral, agree, strongly agree)
  • How often did your major adviser ask you about your career goals? (never, rarely, sometimes, often, very often)
  • My major adviser connected me with other campus resources and opportunities (OSL, CORE, the Counseling Center, etc.) that helped me succeed in college. (strongly disagree, disagree, neutral, agree, strongly agree)
  • How often did your major adviser ask you to think about the connections between your academic plans, co-curricular activities, and your career or post-graduate plans? (never, rarely, sometimes, often, very often)
  • My major adviser helped me plan to make the most of my college career. (strongly disagree, disagree, neutral, agree, strongly agree)
  • About how often did you talk with your primary major adviser? (never, less than once per term, 1-2 times per term, 2-3 times per term, we communicated regularly through each term)

A comparison of overall numbers from seniors graduating in 2014 and 2015 seems to suggest a reason for optimism.

Senior Survey Question                                                                       2014  2015
Genuinely cared about my development                                                         4.11  4.22
Helped me select the right courses                                                           3.93  4.05
How often asked about career goals                                                           3.62  3.73
Connected me with campus resources                                                           3.35  3.47
How often asked to think about links between curricular, co-curricular, and post-grad plans  3.41  3.57
Helped me make the most of my college career                                                 3.85  3.97
How often did you and your adviser talk                                                      3.62  3.51

As you can tell, the change between 2014 and 2015 on each of these items aligns with what we would hope to see. We appear to be improving the quality of the student advising experience without taking more time to do so. Certainly this doesn’t mean that every single case reflects this overall picture, but taken together this data seems to suggest that our efforts to improve are working.

I suspect that more than a few of you are wondering whether or not these changes are statistically significant. Without throwing another table of data at you, here is what I found. The change in “how often advisers asked students to think about the links between curricular, co-curricular, and post-grad plans” (.16) solidly crossed the threshold of statistical significance. The change in “genuinely cared about my development” (.11) was not statistically significant. The change in each of the other five items (.11 to .12 in magnitude) turned out to be “marginally significant,” meaning, in essence, that the difference between the two average scores is worth noting even if it doesn’t meet the gold standard of statistical significance.
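For those curious about the mechanics behind a phrase like “crossed the threshold of statistical significance,” here is a rough sketch of how a difference between two cohort means is typically tested, using Welch’s t statistic. The standard deviations and cohort sizes below are invented for illustration only (the survey’s actual spread and sample sizes aren’t reported here), so treat the numbers as a worked example, not our data.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic for two independent samples,
    given each sample's mean (m), standard deviation (s), and size (n)."""
    standard_error = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return (m2 - m1) / standard_error

# Hypothetical illustration: a .16 rise on a 5-point scale,
# assuming SD of about 1.0 and roughly 450 seniors per cohort
# (both assumptions are mine, not survey figures).
t = welch_t(3.41, 1.0, 450, 3.57, 1.0, 450)
print(round(t, 2))  # about 2.4, beyond the ~1.96 cutoff for p < .05
```

Under these made-up assumptions, a .16 shift clears the conventional cutoff comfortably, while a .11 or .12 shift lands closer to the line – which matches the “marginally significant” pattern described above.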

The reason these changes are worth noting comes from looking at all of them together. The probability that all seven of these items would move in our intended direction by chance alone is less than 1% (.0078, to be exact). In other words, it’s likely that something is going on that would push all of these items in the directions we had hoped. Given the scope of our advising emphasis recently, these findings seem to me to suggest that we are indeed on the right track.

I know that there are plenty of reasons to pull the “correlation doesn’t equal causation” handbrake. But I’m not arguing that this data is inescapable proof. Rather, I’m arguing that these findings make a pretty strong case for the possibility that our efforts are producing results.

So before we get ourselves tied into knots about hard questions and tough choices over the next 10 weeks, maybe take a moment to remember that we can tackle issues that might at first seem overwhelming. It might not be easy, but where is the fun in “easy”?

Make it a good day,

Mark