A Motherload of Data!

It’s probably a bit of a reach to claim that the Institutional Effectiveness and Mission Fulfillment report (begrudgingly called the IEMF) is the cutting edge of data reporting, but it is true that this annual report is something that a lot of people work pretty hard on for several months at the end of each academic year. Unlike the college’s dashboard – a single page of data points that is supposed to cut to the quantitative quick – the IEMF is a motherload of data and a treasure trove of information about Augustana College.

In past years we have posted the IEMF on the Institutional Research web page and hoped that people would look at it because, you know . . . nerd click-bait! Not since the first year that we produced this report have we hosted a public gathering to invite comment from anyone who might have an observation about the data and how it is conveyed. One thing I will not soon forget from that meeting was the degree to which data becomes political as soon as it becomes public, and therefore how important it is to convey data precisely and to anticipate how data presentations might be interpreted from different points of view.

With that in mind, I want to share with you the 2016 version of the IEMF. It is organized into nine sections that each cover a different aspect of what we do and how we do it. For example, in the section titled Persistence, Graduation, and Attrition (p. 1) you might be interested in the distribution of reasons that students give for withdrawing and how those reasons might have changed over the last three years. Or, in the section titled Our Practices (p. 20) you might be interested in the rising cost of recruiting a single student over the last three years. There are a lot of tidbits throughout the document that provide a glimpse into Augustana College – areas of strength, opportunities for growth, and how we compare to similar liberal arts colleges around the country.

Click on the link below and swim in a river of data to your heart’s content.

2016_IEMF_Report

Certainly, the IEMF isn’t a perfect snapshot. Even though it has improved considerably from its first iteration several years ago, there are plenty of places where we wish our data were a little better or a little more precise in showing who we are and what we do. Most importantly, this document isn’t intended to be a braggart’s bible. On the contrary, the IEMF is designed to be an honest presentation of Augustana College and of us. We aren’t perfect. And we know that. But we are trying to be as good as we can be with the resources we have. And in more than a few instances, we are doing pretty well.

Before I forget, a special and sincere “thank you” goes out to everyone who played a role in hunting down this data and putting the document together: Kimberly Dyer, Keri Rursch, Cindy Schroeder, Quan Vi, Erin Digney, Angie Williams, Katey Bignall, Kelly Hall, Randy Roy, Lisa Sears, Matt Walsh, Sheri Curran, Robert Scott, Jeff Thompson, Dom Sullivan, Katrina Friedrich, Bonnie Hewitt, Scott Dean, Shawn Beattie, and Kent Barnds.

So have a look. If you have any questions or critiques or suggestions, please send them to me. I’m genuinely looking for ways to improve this document.

For starters . . . anyone got any catchy ideas for a better name?

Make it a good day,

Mark


Even more details regarding term-to-term retention

The more we dig into our retention data, the more interesting it gets. Earlier this term, I shared with you some of our findings regarding term-to-term retention rates. These data seem to suggest that we are slowly improving our within-year retention rates.

As always, the overall numbers only tell us so much. To make the most of the data we collect, we need to dig deeper and look at within-year retention rates for subpopulations of students that have historically left at a higher rate than their peers. Interestingly, this data might also tell us something about when these students are most vulnerable to departing and, as a result, when we might increase our focus on supporting their success.

The table below presents 2014-15 within-year retention rates for the five subpopulations of students that deviated significantly from the overall term-to-term retention rates. Percentages that are more than one point below the overall number are marked with an asterisk.

Student Demographic Group | Fall to Winter | Winter to Spring | Fall to Spring
Overall | 96.6% | 97.6% | 94.3%
Males | 94.4%* | 95.9%* | 90.5%*
Multicultural Students | 98.7% | 93.9%* | 92.7%*
Gov’t Subsidized Loan Qualifiers | 94.8%* | 97.6% | 92.5%*
Non IL/IA Residents | 96.0% | 90.0%* | 90.0%*
First-Generation Students | 95.3%* | 96.7% | 92.3%*
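
If you want to reproduce that flagging rule yourself, here is a minimal sketch in Python with the rates hard-coded from the table above. The variable names are my own shorthand, not anything pulled from our reporting systems.

    # Within-year retention rates (percent) for the 2014-15 cohort, copied
    # from the table above.
    overall = {"fall_winter": 96.6, "winter_spring": 97.6, "fall_spring": 94.3}

    groups = {
        "Males": {"fall_winter": 94.4, "winter_spring": 95.9, "fall_spring": 90.5},
        "Multicultural Students": {"fall_winter": 98.7, "winter_spring": 93.9, "fall_spring": 92.7},
        "Gov't Subsidized Loan Qualifiers": {"fall_winter": 94.8, "winter_spring": 97.6, "fall_spring": 92.5},
        "Non IL/IA Residents": {"fall_winter": 96.0, "winter_spring": 90.0, "fall_spring": 90.0},
        "First-Generation Students": {"fall_winter": 95.3, "winter_spring": 96.7, "fall_spring": 92.3},
    }

    # Flag any rate that sits more than one percentage point below the
    # overall rate for that term-to-term window.
    for name, rates in groups.items():
        flagged = [term for term, rate in rates.items() if overall[term] - rate > 1.0]
        print(f"{name}: {', '.join(flagged) if flagged else 'no flags'}")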

The first thing I’d like to highlight is a pair of subpopulations that aren’t on this list. Analyses of older data would no doubt highlight the lagging retention rates of students who came to Augustana with lower ACT scores or who applied test-optional (i.e., without submitting a standardized test score). However, in the 2014-15 cohort these subpopulations retained from fall to winter (96.9% and 97.9%, respectively) and from winter to spring (96.8% and 97.9%, respectively) at rates similar to the overall population. The winter-to-spring numbers are particularly encouraging because that is when first-year students can be suspended for academic performance. Although it would be premature to declare that this improvement results directly from our increased student support efforts, these numbers suggest that we may indeed be on the right track.

In looking at the table above, the flagged demographic groups are probably not a surprise to those who are familiar with retention research. However, this table gives us a glimpse into when certain groups are more vulnerable to departure. For example, male students’ retention rates are consistently lower than the campus average. By contrast, multicultural students were retained at a higher rate than average from fall to winter, but from winter to spring that early success evaporated completely. Winter term might also play a role for non-IL/IA residents, who retain at rates similar to their peers from fall to winter but depart at a higher rate than the rest of the cohort from winter to spring.

Since this is only one year of data, I wouldn’t suggest making any emphatic claims based on it. But I do think that these findings should challenge us to think more deeply about the kinds of support different types of students might need and when they might benefit most from it.

Make it a good day,

Mark


Applying a Story Spine to Guide Assessment

As much as I love my assessment compadres, sometimes I worry that the language we use to describe the process of continual improvement sounds pretty stiff. “Closing the loop” sounds too much like teaching a 4-year-old to tie his shoes. Over the years I’ve learned enough about my own social science academic nerdiness to envy those who see the world through an entirely foreign lens. So when I stumbled upon a simple framework for telling a story called a “Story Spine,” it struck me that this framework might spell out the fundamental pieces of assessment in a way that just makes much more sense.

The Story Spine idea can be found in a lot of places on the internet (e.g., Pixar and storytelling), but I found out about it through the world of improv. At its core, the idea is to help improvisers go into a scene with a shared understanding of how a story works so that, no matter what sort of craziness they discover in the course of their improvising, they know that they are all playing out the same meta-narrative.

Simply put, the Story Spine divides a story into a series of sections, each of which starts with one of the following phrases. As you can tell, almost any story you can think of would fit into this framework.

Once upon a time . . .

And every day . . .

Until one day . . .

Because of that . . .

Because of that . . .

Until finally . . .

And ever since then . . .

These section prompts can also fit into four parts of a cycle that represent the transition from an existing state of balance (“once upon a time” and “every day”), encountering a disruption of the existing balance (“until one day”), through a quest for resolution (“because of that,” “because of that,” and “until finally”), and into a new state of balance (“and ever since then”).

To me, this framework sounds a lot like the assessment loop that is so often trotted out to convey how an individual or an organization engages assessment practices to improve quality. In the assessment loop, we are directed to “ask questions,” “gather evidence,” “analyze evidence,” and “use results.” But to be honest, I like the Story Spine a lot better. Aside from being pretty geeky, the assessment loop starts with a vague implication that trouble exists below the surface and without our knowledge. This might be true, but it isn’t particularly comforting. Furthermore, the assessment loop doesn’t seem to leave enough room for all of the forces that can swoop in and affect our work despite our best intentions. There is a subtle implication that educating is like some sort of assembly line that should work with scientific precision. Finally, the assessment loop usually ends with “using the results” or, at its most complex, some version of “testing the impact of something we’ve added to the mix as a result of our analysis of the evidence.” But in the real world, we are often faced with finding a way to adjust to a new normal – another way of saying that entering a new state of balance is as much a function of our own adjustment as it is the impact of our interventions.

So if you’ve ever wondered if there was a better way to convey the way that we live an ideal of continual improvement, maybe the Story Spine works better. And maybe if we were to orient ourselves toward the future by thinking of the Story Spine as a map for what we will encounter and how we ought to be ready to respond, maybe – just maybe – we will be better able to manage our way through our own stories.

Make it a good day,

Mark

Some comforting thoughts about mapping

I hope you are enjoying the bright sunshine today.  Seeing that we might crack the 70 degree mark by the end of the week makes the sun that much more invigorating!

As you almost certainly know by now, we have been focusing on responding to the suggestions raised in the Higher Learning Commission accreditation report regarding programmatic assessment. The first step in that response has been to gather curricular and learning outcome maps for every major.

So far, we have 32 out of 45 major-to-college outcomes maps and 14 out of 45 courses-to-major outcomes maps.  Look at it as good or look at it as bad – at least we are making progress, and we’ve still got a couple weeks to go before I need to have collected them all. More importantly, I’ve been encouraged by the genuine effort that everyone has made to tackle this task. So thank you to everyone.

Yet as I’ve spoken with many of you, two themes have arisen repeatedly that might be worth sharing across the college and reframing just a bit.

First, many of you have expressed concern that these maps are going to be turned into sticks that are used to poke you or your department later. Second, almost everyone has worried about the inevitable gap between the ideal student’s progress through a major and the often less-ideal realities of the way that different students enter and progress through the major.

To both of those concerns, I’d like to suggest that you think of these maps as perpetually working documents rather than contracts that cannot be changed. The purpose of drawing out these maps is to make the implicit explicit, and only as a starting point from which your program will constantly evolve. You’ll change things as your students change, as your instructional expertise changes, and as the future for which your program prepares students changes. In fact, probably the worst thing that could happen is a major that never changes anything no matter what changes around it.

The goal at this point isn’t to produce an unimprovable map. Instead, the goal is to put together a map that is your best estimate of what you and your colleagues are trying to do right now. From there, you’ll have a shared starting point that will make it a lot easier to identify and implement adjustments that will in turn produce tangible improvement.

So don’t spend too much time on your first draft. Just get something on paper (or pixels) that honestly represents what you are trying to do and send it to me using the templates I’ve already shared with everyone. Then expect that down the road you’ll decide to make a change and produce a second draft. And so on, and so on. It really is that simple.

Make it a good day,

Mark

I so wish I had written this!

Hi Folks,

Yes, I’m late with my blog this week. And I’m sorry about that. But I’ve been busy thinking about ways to organize my desk. And that’s something.

Brian Leech shared this with me yesterday, so he deserves whatever credit someone is supposed to get when they share something with someone who then “borrows” it to present to his blog audience in place of something that actually required original work. So all thanks go to Brian for enabling my slacker gene this week.

We all need to laugh at ourselves and the absurd parts of our work sometimes. So enjoy having a “go” at the assessment culture run amok and the weird world of Institutional Research.

RUBRIC FOR THE RUBRIC CONCERNING STUDENTS’ CORE EDUCATIONAL COMPETENCY IN READING THINGS IN BOOKS AND WRITING ABOUT THEM.

From Timothy McSweeney’s Internet Tendency blog.

Make it a great day!

Mark

So how do our retention numbers look now?

Early in the winter term, I wrote about the usefulness of tracking term-to-term retention. This approach is particularly valuable in evaluating and improving our efforts with first-year students, since they are the ones most susceptible to the challenges of transitioning to college and for whom many of our retention programs are designed. Now that we have final enrollment numbers for the spring term, let’s have a look at our term-to-term retention rates over the last five years and see if our increased student success efforts might be showing up in the numbers.

Here are the last five years of fall-to-winter retention rates for the first-year cohort.

  • 2011 – 94.1%
  • 2012 – 95.6%
  • 2013 – 97.0%
  • 2014 – 95.9%
  • 2015 – 96.6%

As you can see, we’ve improved by 2.5 percentage points over the last five years. This turns out to be real money: a 2.5 percentage-point increase in the proportion of first-year students returning for the winter term means that we retained roughly 17 additional students and added roughly $84,000 in revenue (assuming we use 3-year averages for the incoming class size and the first-year net tuition revenue per term: 675 students and $4,940, respectively).

But one of the difficult issues with retention is that success is sometimes fleeting. In other words, retaining a student for one additional term might just delay the inevitable. Furthermore, in the case of first-year term-to-term retention, the fall-to-winter retention rates can be deceiving because we don’t impose academic suspensions on first-year students after the fall term. Thus students who are in serious academic trouble might just hang on for one more term even though there is little reason to think that they might turn things around. Likewise, students who are struggling to find a niche at Augustana may begrudgingly come back for one more term even though they are virtually sure that this place isn’t the right fit. With that in mind, looking at our fall-to-spring retention rates would give us a more meaningful first glimpse at the degree to which our retention efforts are translating into a sustained impact. If the fall-to-winter retention rates are nothing more than a mirage, then the fall-to-spring retention rates would remain unchanged over the same five-year period. Conversely, if our efforts are bearing real fruit, then the fall-to-spring retention rates ought to reflect a similar trend of improvement.

Here are the last five years of fall-to-spring retention rates for the first-year cohort.

  • 2011 – 92.1%
  • 2012 – 93.1%
  • 2013 – 93.3%
  • 2014 – 93.5%
  • 2015 – 94.1%

As you can see, it appears that the improving fall-to-winter retention rate largely carries through to the spring term. That translates into more real money: approximately $69,100 in additional spring-term revenue. Overall, that’s about $153,000 that we wouldn’t have seen in this year’s revenue column had we not improved our term-to-term retention rates among first-year students.
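
For anyone who wants to check my arithmetic, here is a minimal sketch of the revenue estimate. It assumes the same three-year averages cited above (a 675-student incoming class and roughly $4,940 in net tuition revenue per student per term); the rounding choices are mine.

    # Rough revenue impact of improved first-year term-to-term retention,
    # using three-year averages: 675 incoming students and about $4,940 in
    # net tuition revenue per student per term.
    cohort_size = 675
    net_revenue_per_term = 4940

    # Fall-to-winter retention improved from 94.1% (2011) to 96.6% (2015).
    extra_winter_students = round(cohort_size * (0.966 - 0.941))   # about 17 students
    winter_revenue = extra_winter_students * net_revenue_per_term  # about $84,000

    # Fall-to-spring retention improved from 92.1% (2011) to 94.1% (2015).
    extra_spring_students = round(cohort_size * (0.941 - 0.921))   # about 14 students
    spring_revenue = extra_spring_students * net_revenue_per_term  # about $69,000

    # Combined, that is roughly $153,000 we would not otherwise have seen.
    print(winter_revenue, spring_revenue, winter_revenue + spring_revenue)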

Certainly this doesn’t mean that we should rest on our laurels. Even though retaining a student to the second year gets them over the biggest hump in terms of the likelihood of departure, it still seems to me like small consolation if that student doesn’t ultimately graduate from Augustana. However, especially facing the financial challenges that the state of Illinois has dumped in our lap, we ought to pat each other on the back for a moment and take some credit for our work to help first-year students succeed at Augustana. The data suggests that our hard work is paying off.

Make it a good day,

Mark

What’s the Problem We’re Trying to Address?

If you’ve had to sit through more than one meeting with me, you’ve almost certainly heard me ask this question. Even though I can see how the question might sound rhetorical and maybe even a little snarky, I’m really just trying to help. Because I know from my own experience how easy it is to get lost in the weeds when trying to tackle a complex issue that is full of dicey trade-offs and unknown unknowns. So sometimes I’ve found that it can be useful to pause, take a couple of deep breaths and refocus on the problem at the core of the conversation.

By now you’ve almost certainly heard about the discussion of transitioning from an academic calendar based on trimesters to one based on semesters. Last week, Faculty Council provided a draft proposal to the faculty to be discussed, vetted, and even adjusted as legitimate concerns are identified by the community. Since I’ve already seen a calendar discussion sap us of most of our energy twice (or once, if you count the two-year discussion a few years back as a single event), I hope that this time we can find a way to get through this without quite so much emotional fallout.

With that in mind, after listening to the calendar conversation for the last few months I thought it might be helpful to revisit the question at the top of this post:

What’s the problem we’re trying to address?

It is true, in one very real sense, that there is not a single answer. In fact, the “problem” looks different depending upon where you sit. But since the topic of semesters was formally put back onto the front burner by the senior administration and the Board of Trustees, it’s probably useful to understand the problem as they see it. From their perspective, the problem we are facing is actually a pretty straightforward one. In a nutshell, we, like a lot of colleges and universities these days, have a balance sheet problem. In other words, we are having an increasingly difficult time ensuring that our revenues keep pace with our expenses (or, put differently, that our expenses don’t outpace our revenues).

The reasons for this problem have been presented countless times, so I’ll try not to dive down that rabbit-hole too far again. But suffice it to say that since American family incomes have been stagnant for a long time, each year that our costs go up we lose a few more prospective families that might otherwise be willing to pay what we charge. Combine that with a shrinking population of high school graduates in the Midwest overall, and you can imagine how it gets harder and harder to come up with the increased revenue necessary to pay for inescapable increases in expenses like electricity, gas, and water, not to mention reasonable salary raises, building and sidewalk repairs, and replacements of worn out equipment.

The possible solutions to a straightforward balance sheet problem like ours are also relatively straightforward. If we decide to think of it primarily as insufficient revenue, then we would likely choose a way to increase revenue (e.g., enroll more students, add graduate programs, start online programs . . . each of the examples in this category is perceived by many as a potential threat to our philosophical core). If we decide to think of the problem primarily as excessive expenses, then we would likely choose a way to reduce expenses (e.g., make the college demonstrably smaller, eliminate Augie Choice . . . the only examples in this category that I can think of are pretty depressing). If we don’t see plausible options to increase revenues or reduce expenses, then the only other possibility is to find ways to become more efficient (i.e., achieve similar results from smaller expenditures). Of course, we could concoct some combination of all three approaches.

From the administration’s perspective, the possibility of moving to a semester-based academic calendar addresses the balance sheet problem by giving the college access to an expanded set of opportunities for increased efficiency (i.e., achieving similar results from smaller expenditures). Some of those efficiencies are more self-evident, such as reducing the number of times we power up and power down specific buildings. Some of them are more abstract, such as reducing the number of times we conduct a large-scale process like registration. But the central problem that the semester idea attempts to address is an issue of imbalance between revenues and expenses.

Although some have suggested otherwise, the semester idea is not primarily intended to improve retention rates or increase the number of mid-year transfer students. It is possible that a semester calendar might be more conducive to retaining students who struggle initially or to attracting transfer students just after the Christmas break. But there are plenty of similar institutions on semester calendars with lower retention rates and fewer transfer students. Of course, that doesn’t disprove anything either; it just demonstrates that a move to semesters doesn’t guarantee anything. Increases in retention and mid-year transfers will happen (if they happen at all) as a result of what we do within a new calendar, not because we move to a new calendar.

I truly don’t have a strong opinion on the question of calendar. Both trimesters and semesters can be done well and can be done badly. This is why Faculty Council and others have thought long and hard about how to construct a semester system that maintains our commitment to an integrated liberal arts education and delivers it in a way that allows faculty to do it well. Nonetheless, I think it is useful to remind ourselves why we are having this conversation and the nature of the problem we are trying to address. If you think that we should address our balance sheet issues by expanding revenue sources or by reducing expenses, then by all means say so. If you don’t think a balance sheet problem exists, then by all means say so. But let’s make sure we understand the nature of the problem we are trying to address. At the least, this will help us have a more transparent conversation that leaves us in a healthier place at the end, no matter what we decide to do.

And one more thing. Let’s not equate “increasing efficiency” with “doing more with less.” Increasing efficiency means doing things differently with the same resources in a way that is more effective. If we are in fact continually doing more with less, in the long term we’re doing it wrong.

Make it a good day,

Mark


Improving Advising in the Major: Biology Drives our Overall Increase

Last week I shared a comparison of the overall major advising data from seniors in 2014 and 2015. Although not all of the differences between the two years of data met the threshold for statistical significance, taken together it seemed pretty likely that these improved numbers weren’t just a function of chance. As you might expect by now, another aspect of this finding piqued my curiosity. Is this change a result of a relatively small campus-wide improvement or are the increases in the overall numbers a result of a particular department’s efforts to improve?

Since the distribution of our seniors’ major choices leans heavily toward a few departments (about half of our students major in Biology, Business, Psychology, or Education), it didn’t take too long to isolate the source of our jump in major advising scores. Advising scores in Business, Psychology, and Education didn’t change much between 2014 and 2015. But in Biology? Something pretty impressive happened.

Below is a comparison of the increases on each advising question overall and the increases for Biology and Pre-Med majors. In particular, notice the columns marked “Diff.”

Senior Survey Questions | Overall (2014, 2015, Diff) | Biology/Pre-Med (2014, 2015, Diff)
Cared about my development | 4.11, 4.22, +.11 | 3.70, 4.02, +.32
Helped me select courses | 3.93, 4.05, +.12 | 3.49, 3.90, +.41
Asked about career goals | 3.62, 3.73, +.11 | 3.39, 3.81, +.42
Connected with campus resources | 3.35, 3.47, +.12 | 3.11, 3.36, +.25
Asked me to think about links btwn curr., co-curr., and post-grad plans | 3.41, 3.57, +.16 | 3.04, 3.48, +.44
Helped make the most of college | 3.85, 3.97, +.12 | 3.36, 3.80, +.44
How often you talked to your adviser | 3.62, 3.51, -.11 | 3.09, 3.27, +.18

It’s pretty hard to miss the size of the increases for Biology and Pre-Med majors between 2014 and 2015. In nearly every case, these increases are two to four times larger than the corresponding overall increases. In a word: Impressive!

So what happened?

Advising is a longstanding challenge for Biology and Pre-Med faculty. For decades this department has struggled to adequately advise a seemingly endless flow of majors. Last spring, Biology and Pre-Med graduated almost 150 students and at the beginning of the 2014-15 academic year there were 373 declared majors in either program. Moreover, that number probably underestimates the actual number of majors they have to work with since many students declare their major after the 10th day of the term (when this data snapshot was archived).

Yet the faculty in the Biology and Pre-Med department decided to tackle this challenge anyway. Despite the overwhelming numbers, maybe there was a way to get a little bit better by making even more of the limited time each adviser spent with each student. Each faculty adviser examined senior survey data from their own advisees and picked their own point of emphasis for the next year. Several of the Biology and Pre-Med faculty shared with me the kinds of things that they identified for themselves. Without fail, each faculty member decided to make sure that they talked about CORE in every meeting, be it the resources available in CORE for post-graduate preparation or just the value of making a visit to the CORE office and establishing a relationship. Several others talked about making sure that they pressed their advisees to describe the connections between the classes they were taking and the co-curricular activities in which they were involved, pushing their students to be intentional with everything they chose to do in college. Finally, more than one person noted that even though advising had always been important to them, they realized how easy it was to let one or more of the usual faculty stresses color their mood during advising meetings (e.g., succumbing to the stress of an upcoming meeting or a prior conversation). They found ways to get themselves into a frame of mind that improved the quality of their interaction with students.

None of these changes seems all that significant by itself. Yet together, it appears that the collective effort of the Biology and Pre-Med faculty – even in the face of a continued heavy stream of students – made a powerful difference in the way that students rated their advising experience in the major.

Improvement isn’t as daunting as it might sometimes seem. In many cases, it just takes an emphasis on identifying small changes and implementing them relentlessly. So three cheers for Biology and Pre-Med. You’ve demonstrated that even under pretty tough circumstances, we can improve something by focusing on it and making it happen.

Make it a good day,

Mark

We’ve gotten better at advising, and we can (almost) prove it!

With all of the focus on reaccreditation, budget concerns, employee engagement, and the consideration of a different academic calendar, it seems like we’ve spent a lot of time dwelling on things that aren’t going well or aren’t quite good enough. However, in the midst of these conversations I think we might do ourselves some good to remember that we are in the midst of doing some things very well. So before we plunge ourselves into another brooding conversation about calendar, workload disparity, or budget issues, I thought we could all use a step back from the precipice and a solid pat on the back.

You’d have to have been trapped under something thick and heavy to have missed all of the talk in recent years about the need to improve advising. We’ve added positions, increased the depth and breadth of training, and aspired to adopt an almost idyllic conception of deeply holistic advising. This has stretched many of us outside of our comfort zones and required that we apply a much more intentional framework to something that “in the old days” was supposed to be a relaxing and more open-ended conversation between scholar and student.

With this in mind, I thought it might be fun to start the spring term by sharing a comparison of 2014 and 2015 senior survey advising data.

Our senior survey asks seven questions about major advising. These questions are embedded in a section focused on our seniors’ experience in their major so that we can be sure that students’ responses refer to their advising experience in each of their majors (especially since so many students have more than one major and therefore more than one major adviser). The first six questions focus on aspects of an intentional and developmental advising experience. The last question provides us with a way to put those efforts into the nitty-gritty context of efficiency. In an ideal world, our students’ responses would show a trend toward higher scores on the first six questions, while average scores for the seventh question would remain relatively flat or even decline somewhat.

Here is a list of the senior survey advising questions and the corresponding response options.

  • My major adviser genuinely seemed to care about my development as a whole person. (strongly disagree, disagree, neutral, agree, strongly agree)
  • My major adviser helped me select courses that best met my educational and personal goals. (strongly disagree, disagree, neutral, agree, strongly agree)
  • How often did your major adviser ask you about your career goals? (never, rarely, sometimes, often, very often)
  • My major adviser connected me with other campus resources and opportunities (OSL, CORE, the Counseling Center, etc.) that helped me succeed in college. (strongly disagree, disagree, neutral, agree, strongly agree)
  • How often did your major adviser ask you to think about the connections between your academic plans, co-curricular activities, and your career or post-graduate plans? (never, rarely, sometimes, often, very often)
  • My major adviser helped me plan to make the most of my college career. (strongly disagree, disagree, neutral, agree, strongly agree)
  • About how often did you talk with your primary major adviser? (never, less than once per term, 1-2 times per term, 2-3 times per term, we communicated regularly through each term)

A comparison of overall numbers from seniors graduating in 2014 and 2015 seems to suggest a reason for optimism.

Senior Survey Question | 2014 | 2015
Genuinely cared about my development | 4.11 | 4.22
Helped me select the right courses | 3.93 | 4.05
How often asked about career goals | 3.62 | 3.73
Connected me with campus resources | 3.35 | 3.47
How often asked to think about links between curricular, co-curricular, and post-grad plans | 3.41 | 3.57
Helped me make the most of my college career | 3.85 | 3.97
How often did you and your adviser talk | 3.62 | 3.51

As you can tell, the change between 2014 and 2015 on each of these items aligns with what we would hope to see. We appear to be improving the quality of the student advising experience without taking more time to do so. Certainly this doesn’t mean that every single case reflects this overall picture, but taken together this data seems to suggest that our efforts to improve are working.

I suspect that more than a few of you are wondering whether or not these changes are statistically significant. Without throwing another table of data at you, here is what I found. The change in “how often advisers asked students to think about the links between curricular, co-curricular, and post-grad plans” (.16) solidly crossed the threshold of statistical significance. The change in “genuinely cared about my development” (.11) was not statistically significant. The change in each of the other five items (from .12 to .15) turned out to be “marginally significant,” meaning, in essence, that the difference between the two average scores is worth noting even if it doesn’t meet the gold standard for statistical significance.
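
If you are curious what a test like that looks like in practice, here is a minimal sketch comparing one advising item across the two cohorts. Welch’s t-test, the file names, and the column name are stand-ins I chose for illustration; they are not necessarily the exact test or data layout we used.

    import pandas as pd
    from scipy import stats

    # Hypothetical senior-survey extracts: one row per respondent, with a
    # 1-5 score on the advising item being compared.
    seniors_2014 = pd.read_csv("senior_survey_2014.csv")
    seniors_2015 = pd.read_csv("senior_survey_2015.csv")

    item = "asked_about_links"  # curricular / co-curricular / post-grad links

    # Welch's t-test compares the two cohort means without assuming equal
    # variances across years.
    t_stat, p_value = stats.ttest_ind(
        seniors_2015[item].dropna(),
        seniors_2014[item].dropna(),
        equal_var=False,
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # p < .05 is the usual bar for "statistically significant"; p values just
    # above that bar are what I am loosely calling "marginally significant."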

The reason I would argue that these changes are worth noting comes from looking at all of them together. The probability that all seven of these items would move in our intended direction by chance alone is less than 1% (.0078, to be exact). In other words, it’s likely that something is going on that would push all of these items in the directions we had hoped. Given the scope of our advising emphasis recently, these findings seem to me to suggest that we are indeed on the right track.
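
For the curious, that “less than 1%” figure is just the chance of seven coin flips all landing the way we hoped. Here is the calculation; the sign-test framing is my shorthand, not a formal part of the survey analysis.

    # If each of the seven advising items were equally likely to move in
    # either direction by chance, the probability that all seven move in the
    # hoped-for direction is (1/2) to the 7th power.
    p_all_seven = 0.5 ** 7
    print(p_all_seven)  # 0.0078125, i.e., a bit less than 1%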

I know that there are plenty of reasons to pull the “correlation doesn’t equal causation” handbrake. But I’m not arguing that this data is inescapable proof. Rather, I’m arguing that these findings make a pretty strong case for the possibility that our efforts are producing results.

So before we get ourselves tied into knots about hard questions and tough choices over the next 10 weeks, maybe take a moment to remember that we can tackle issues that might at first seem overwhelming. It might not be easy, but where is the fun in “easy”?

Make it a good day,

Mark

Instructor behaviors can impact students’ ability to balance

One well-known feature of our trimester calendar is the crazy crush of busyness during the last few weeks of the term. You can see it in the eyes of almost every student as they trudge up and down the quad (and if you look closely you can see it in many instructors’ eyes too). Although it’s not fun, I’m not sure that experiencing this kind of crunch of deadlines is such a bad thing. After all, our students will more than likely be required to successfully juggle multiple time-sensitive pressures often after they graduate. So the issue may not be whether or how we might dial back the seemingly inevitable onslaught of curricular deadlines. Instead, our greatest contribution to our students might be to:

  1. help them learn how to successfully navigate these experiences, and
  2. ensure that our instructional interactions allow them to focus their energies on increasingly efficient navigating strategies instead of adding extraneous noise, unnecessarily vague direction, or downright confusion into the mix.

Last week, I shared findings that explored the things students can do to help them develop better time-management skills. This week I thought it might be useful to focus on the potential impact of instructional behaviors on this equation. Just like last week, these findings are the result of teamwork in the IR office. Katrina, one of my three student workers, and I worked together to develop the analysis and run the statistical tests; I’m responsible for writing up what we found.

In our mid-year freshman survey (data collected during the last third of fall term), we ask first-year students to tell us how often they struggle to balance their academics with their extra-curricular activities. They can choose from five response options that range from “most or all of the time” to “never.” Most of the students select “sometimes,” the middle response (i.e., a three), while the second most popular response is “rarely” (AKA a four).

Like last week’s analysis, we wanted to hold constant some of the typical potentially confounding variables – demographic traits and pre-college academic preparation – so that we could home in on instructional behaviors that appear influential for all types of students. (This turned out to be particularly important for reasons that I’ll discuss later.) After a series of preliminary tests, we added three items to our analysis to see if they produced statistically significant results.

  • I had access to my grades or other feedback early enough in the term to adjust my study habits or seek additional academic help as necessary.
  • My LSFY/Honors instructor helped me develop at least one specific way to be a more successful college student.
  • My first year adviser helped me understand my Student Readiness Survey (SRS) results.

The first two items offer five response options that range from “strongly disagree” to “strongly agree.” The question about the SRS includes a different set of four response options that seem more appropriate to the question: “We never talked about them (What is the SRS?),” “Only briefly,” “Yes, but they weren’t all that useful,” and “Yes, and they influenced how I approached the beginning of my freshman year.” We hypothesized that all three of these items might have a positive effect on students’ responses to the balance question (i.e., that students would report struggling less often).

Our analysis confirmed our hypotheses in two of the three cases (last week we were right once, wrong once, and inconclusive once, so maybe we’ve improved our prognosticating skills a little bit since last week!). Even though one of the elements of the Student Readiness Survey focuses on academic habits, the first-year adviser’s use of the Student Readiness Survey didn’t have any effect one way or the other. Maybe it was a bit aspirational to think that one conversation could affect something as ongoing, or more likely intermittent, as struggling to balance academic and co-curricular activities.

However, both early access to grades or other feedback and an LSFY/Honors instructor who helped the respondent develop at least one specific way to be a better student produced statistically significant positive effects. In other words, as students agreed more strongly that they had received access to information that helped them calibrate the effectiveness of their study habits, they struggled less often to balance academics and co-curricular activities. Similarly, as students agreed more strongly that their LSFY/Honors instructor helped them become a better college student, they indicated that they struggled less often to balance academics and co-curricular activities. In both cases, these findings held even after accounting for differences in race/ethnicity, gender, socio-economic status, and pre-college academic preparation (i.e., ACT score).
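
For readers who like to see the mechanics, here is a minimal sketch of the kind of model behind that paragraph, using ordinary least squares from Python’s statsmodels. The file name and column names are hypothetical stand-ins for our survey and records data, and OLS is a simplification here, since the actual outcome is a five-point ordinal scale.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical extract of mid-year freshman survey responses joined to
    # student-records fields (all column names are illustrative only).
    df = pd.read_csv("midyear_freshman_survey.csv")

    # Outcome: how often the student struggled to balance academics and
    # extra-curriculars (1 = most or all of the time ... 5 = never), regressed
    # on the three instructional items plus the demographic and preparation
    # controls described above.
    model = smf.ols(
        "balance_struggle ~ early_feedback + lsfy_helped + srs_discussed"
        " + C(race_ethnicity) + C(gender) + ses_index + act_score",
        data=df,
    ).fit()

    print(model.summary())
    # Positive, significant coefficients on early_feedback and lsfy_helped
    # would mirror the findings described above.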

Thus, it appears that instructional behaviors can play an important role in helping students develop the ability to effectively balance their curricular and co-curricular obligations. Instructors can certainly design their courses to provide substantive feedback early in the term instead of waiting until a mid-term exam or a term paper late in the term. Likewise, LSFY/Honors instructors can insert experiences or assignments into their courses that, instead of merely focusing on the production of intellectual work, also focus on the process of more efficiently and effectively producing intellectual work.

Now, remember my foreshadowing about the importance of demographic traits in this analysis? Even in the final equation, being female or being white still produced statistically significant positive effects. In other words, even after accounting for all of the other variables in our equation, white students and female students struggled to balance less often than their non-white and male peers.

It makes intuitive sense that gender would appear significant in this equation. We know from all sorts of research on college men that their time-management skills and general maturity usually lag behind those of women at this age. This finding highlights the additional effort that we need to make to help male students grow. Finally, the finding about race struck me as particularly important. In the context of the other research we’ve published about lower sense-of-belonging scores for non-white students, maybe the findings in the current study reflect another way in which the collateral damage that comes from feeling less like you belong on our campus hinders the success of minority students. We have all experienced the negative effects of disturbing distractions that hinder productivity. Maybe the persistent negative effect of being non-white is a function of just such a destructive distraction and is another confirmation of the corrosive effect of stereotype threat.

We all want our students to be active and involved on campus. We also want them to develop and maintain a healthy balance between their academic and co-curricular obligations. It seems to me that historically we have put the onus for maintaining this balance on our students, providing all sorts of guidance on time management, priority setting, and life organization. Maybe, just maybe, we could also contribute to their success by focusing more on what we do to set them up for success. And I don’t mean dialing back our expectations of them. Instead, we might add to their potential for growth if we help them obtain and use the right information at the right time in order to continually (re)calibrate their balancing efforts. In the midst of the seemingly accelerated trimester calendar, this might be even more important than usual.

Good luck with week 10, everyone. For all of you who by now have noted that I’ve pointed out the increased busyness that occurs at the end of the term and followed that with an unusually long blog post that has just soaked up that much more of your time . . . I’m truly sorry.  Ok, well kind of sorry.

Make it a good day,

Mark