The Lost Potential of Institutional Research: Insights from ‘Outsourcing Student Success’
Higher Education Strategy Associates
We develop customized, evidence-driven solutions for challenges facing higher education professionals.
Inside every higher education institution lies a secret cabal of people — gurus, really — who know everything about the institution and how it works. They’re called institutional researchers. And yet, despite all this specialized knowledge, the field (it’s not really a profession) is not usually at the heart of university decision making. Why is that, exactly? Today, my guest is Joseph Wycoff. He’s the author of ‘Outsourcing Student Success: The History of Institutional Research and the Future of Higher Education’. It’s a fascinating century-long view of the development of a field of endeavor which is specific to higher education, and which you might think has been central to its political development as well. But the story Wycoff tells about this group is an odd and surprising one. Though we live in a society where virtually any occupation can be, and usually is, professionalized, the field of institutional research, or its leadership in the Association for Institutional Research at any rate, took a voluntary vow of deprofessionalization about 60 years ago, renouncing the idea that what they did was science and claiming instead that it was an art, one that needed to be bespoke to each institution.
This seemed to me a unique story, not just in higher education, but anywhere in the modern economy. Who avoids professionalization? The strange story of how this deprofessionalization came about, and what its consequences were, is at the heart of the interview you’re about to watch. And from here, I’ll let Joe take it away.
The World of Higher Education Podcast Episode 3.10 | The Lost Potential of Institutional Research: Insights from ‘Outsourcing Student Success’
Transcript
Alex Usher (AU): Joe, let’s start from the beginning. How did offices of institutional research begin? What’s the origin story of this rather peculiar area in institutional life?
Joseph Wycoff (JW): I’m glad you asked about the offices specifically rather than institutional research in general, because there’s a narrative—what I call the consensus paradigm—that traces institutional research back through the whole history of universities, especially back to Yale and some early self-studies they conducted. But I think the modern offices of institutional research, as we understand them today, actually began at the University of Illinois in 1918. So it’s about a hundred years old. There may have been others, but the Illinois office was actually called the Bureau of Institutional Research. It was led by Coleman Griffith, who’s known as the father of sports psychology because, aside from institutional research, he was also a psychology scholar who even worked with the Chicago Cubs for a while. But his role in institutional research was largely forgotten for about 50 years after the Association for Institutional Research was formed and this other narrative, this idea that institutional research has been around since Yale, became the more accepted history. So that’s where I’d draw the line for its true beginning.
AU: And these offices expanded, especially during the big boom of the 1950s and 60s, the era of massification in American higher education. You describe how they played a big role in developing state systems of higher education, and it seemed like, at that point, institutional research offices might become pretty central to managing universities. Is that accurate? When was the high point for institutional research as a discipline?
JW: Well, I’d say it never really became a “discipline,” but in terms of impact, early records from the 1930s mention there were maybe a dozen or so offices of institutional research. Coleman Griffith’s work at Illinois started the Bureau around 1918, and he first published about it in the mid-1930s. But with the Depression and then World War II, there wasn’t much momentum for new offices until after the war and the expansion period you mentioned. Then there was a publication out of the University of Minnesota by Ruth Eckert, who should probably be considered the “mother” of institutional research. Her work reignited interest in what these offices could do at the institutional level. This led to real growth in institutional research, especially in California, where research and planning studies led to the famous 1960 master plan for higher education. Much of that planning was shaped by people from Minnesota and elsewhere who had developed institutional research at a state level. So, in my estimation, the field was most impactful during this period up until about 1962.
AU: After that, as you describe, institutional research started to meet some resistance as the 60s wore on, especially from faculty, who seemed to view it with suspicion. It represented a kind of high-modernist approach to managing universities through data, and I get the impression that faculty saw this as an alternative source of authority about what universities should be. How did IR offices respond to this pushback?
JW: That’s a bit hard to answer because my focus is on the literature around institutional research, not on individual offices. But from what I’ve gathered, there was a mix of responses. Some researchers may have accepted the resistance or simply weren’t aware of it. These offices were relatively new, and people working in them didn’t always know the broader history. But the backlash also led to what I call the “consensus paradigm,” which argued that institutional research wasn’t a profession and didn’t need its own discipline. The idea was that institutional researchers learned on the job, and that narrative downplayed any ambition to make it a science. And you can see this today too. There are people who are very knowledgeable about what makes a science, but they use that knowledge to undermine or discourage people from thinking about institutional research as one. So, institutional research was effectively kept in a pre-paradigm state because of this consensus that it shouldn’t be a field with its own authority or scientific status, not just from the faculty but from some administrators as well.
AU: Well, that’s my next question, because it’s interesting—labor and management in higher education rarely agree, but it seems both groups agreed that they didn’t want IR offices to become a source of authority about how universities function. Why wasn’t management more supportive of developing institutional research as a tool for managing universities?
JW: That’s probably the hardest question because, initially, presidents were the ones pushing for these offices and supporting their creation. But as IR capabilities grew, there seemed to be a pullback. Institutional research was initially about self-study within universities, but when researchers started publishing results, it pushed the findings outside the “walled city” of the university and into public discourse. So at that point, faculty—especially social scientists who studied higher education—and some administrators argued that institutional research could threaten the institution’s autonomy. This led to a narrative that institutional research was an instrument of the state and external control, essentially a “foreign agent” within the university.
AU: So a threat to autonomy more than anything else?
JW: Autonomy—yeah, that’s something you’ve addressed before on your podcast, and it’s something I look at in depth. Institutional autonomy as we think of it today didn’t even exist until after the 1950s. It became almost this post hoc reasoning for why universities needed to be protected from the state and other external agencies. So, institutional autonomy, in a sense, became an unspoken litmus test for being considered a scholar of higher education. And this standard rested on seeing each institution as unique—no one defined “unique” or tried to measure it, but it was this idea that discouraged any scientific study or systemic approach. Faculty argued that each institution, each department, even each discipline, was unique. And so, if you start from that assumption, there’s no reason to study higher education as a system. If anything, many authors of these reports were anti-intellectual, discouraging the study of higher education. Institutional research was essentially kept in this pre-paradigm state by this approach, stopping it from ever becoming a real field of study with a scientific basis.
AU: In your telling, it sounds like the IR profession, or maybe practitioners is the better term since they rejected “professionalism,” bargained with faculty and administrators to downplay their own expertise. They basically said, “we’ll pretend we’re not a science if you supply us with the data to do reporting.” Was that bargain effective?
JW: It’s tricky to generalize because institutional researchers are largely shaped by their local institution’s culture. The narrative became that an institutional researcher needs 20 years at an institution to be truly effective at what they do, so they end up becoming ingrained in that one institution. Personally, I’m not a member of the Association for Institutional Research because I don’t believe it represents my professional interests, and I don’t work full-time as an institutional researcher. I take interim roles at various institutions across the country, and I see the same dynamics over and over. People spend decades in these roles, often with little respect or recognition when they retire. There’s no party for the IR person. But when IR offices first began sharing and publishing, they were acting as scholars, building toward a scientific community. That trajectory was cut off around 1965, with the formation of the Association for Institutional Research.
AU: That’s interesting. I hadn’t thought about the divide between AIR and organizations like the Association for the Study of Higher Education (ASHE), or, in Canada, CIRPA and CSSHE, and the same in the UK. There’s this divide between data practitioners and scholars of higher education. Is that divide necessary, and who suffers more because of it?
JW: It’s hard to speak to what happens outside the U.S., but in my view, U.S. higher education set the template for institutional research. What happened here was that scholars of higher education wanted to establish themselves as a profession and, in the process, sidelined institutional research. That has consequences, primarily for students, because institutional leaders and policymakers lack accurate insights into how universities work. Scholars of higher education benefited from the divide, while institutional researchers—and the institutions themselves—suffered.
AU: Right. And one area where collaboration could have really helped is inter-institutional comparisons, where we’ve seen examples like the work of George Kuh with NSSE. Your book suggests that the field’s inward focus limits good comparative work, which seems like a missed opportunity, not just in the U.S., but globally. Was de-emphasizing comparison a necessary part of IR’s path?
JW: Comparative work is obviously one of the threats to autonomy because it means having some external agency looking into what institutions do. But that doesn’t mean it doesn’t happen. There are lots of voluntary, inter-institutional comparisons, and there’s compliance-driven reporting for things like U.S. News & World Report or IPEDS, where institutions have to submit data to qualify as institutions of higher education. But these are burdens placed on IR offices, not initiatives driven by institutional researchers. For real comparative work, you need a scientific community, and while some institutional researchers want to be part of that, it’s not something you can build on your own. Institutional researchers deserve a scientific community that respects them and treats them as part of the community of scholars. But that’s missing, and it’s largely because of the Association for the Study of Higher Education and others that don’t see institutional research as a serious area of inquiry.
AU: Last question. As I was reading your book, I thought about librarians. They’re not traditional academics, but they’re treated as part of the scholarly community, often tenured and often in the same bargaining units as faculty. Institutional research could have been that way—part of the academic enterprise. If the field had gone down a different path, how do you think it would have changed universities?
JW: Well, it would have been transformative, and I don’t want to suggest there was never another option for institutional research to be taken seriously. In fact, as I discuss in the early chapters of my book, there were people who laid out what institutional research as a science could look like, and there were advocates who pushed for it to be recognized as a legitimate scientific endeavor. But any time these ideas were advanced, they were suppressed. To be clear, I’m self-published—if readers see any grammatical errors, I take responsibility for that. I tried three times to present at the Association for Institutional Research on my critiques of their aspirational statements, but I was rejected each time, often with no feedback at all. I also attempted to publish through Johns Hopkins University Press, only to receive a dismissive and, frankly, sophomoric review.
I’ve had considerable difficulty getting my ideas into public discourse. I’ve met with leaders at the European Association for Institutional Research, and yet, not once have I been invited to talk about my work, contribute to their professional journals, or participate in shaping the direction of institutional research. Since the publication of AIR’s aspirational goals for the profession, their individual membership has dropped by 50 percent, but there’s little concern about it. They moved on to institutional memberships, following the same path of defining everyone as an “institutional researcher.” This approach is accelerating the erosion of the profession.
AU: I’ve been speaking with Joe Wycoff, author of Outsourcing Student Success: The History of Institutional Research and the Future of Higher Education. Joe, thank you for joining us.
JW: Thank you, Alex. I really appreciate the opportunity to talk about my work.
AU: And just a quick thanks to our producers, Tiffany MacLennan and Sam Pufek, and to our viewers and listeners for tuning in. If you have any questions about today’s episode or suggestions for future ones, please don’t hesitate to reach out at [email protected]. Join us next week when our guest will be Sharowat Shamin. She’s an assistant professor of law at the University of Dhaka, and she’ll be joining us to talk about Bangladesh, its student movement, its graduate labor market, and the summer riots that ousted a prime minister. Bye for now.
*This podcast transcript was generated using an AI transcription service with limited editing. Please forgive any errors made through this service.
President, Historia|Research
I have made the OSS Kindle version free for download through Sunday: https://www.amazon.com/dp/B07D7NDSJ7. I think it is a worldwide free download (there is no longer a need to select a market, as I understand it). I would like to thank Alex Usher, Tiffany, and Sam for the opportunity to discuss this historical work.
Director, Institutional Analysis and Academic Planning - Red River College Polytech
Thanks Alex. That’s a very interesting overview of IR. I was largely unaware of the historical origins of the field. As a long-time IR “practitioner” I would agree that we seldom see applications of the type of academic research that you’d expect of a profession. Much of the time we focus on providing and interpreting data (numbers, survey research, qualitative) in response to immediate business needs. For example, figuring out which programs international students will be eligible to study in and the related financial impact. In Canada, we are typically more like consultants than researchers. Furthermore, we are seeing technological needs for dashboards and more advanced data management and analytics drive us to hire those with more of a computer science and data analytics background and away from the social science disciplines where the quantitative side is balanced against other analytic skills. I’m not sure if “professionalism” or advancing the science side is the core need, but being a bit less institutionally myopic and having broader discussions about the profession through groups like CIRPA are definitely needed.
1 周Thanks Alex. That's a very interesting overview of IR. I was largely unaware of the historical origins of the field. As a long-time IR "practitioner" I would agree that we seldom see applications of the type of academic research that you'd expect of a profession. Much of the time we focus on providing and interpreting data (numbers, survey research, qualitative) in response to immediate business needs. For example - figuring out which programs international students will be eligible to study for and the related financial impact. In Canada, we are typically more like consultants than researchers. Furthermore, we are seeing technological needs for dashboards and more advance data management and analytics drive us to hire those with more of a computer science and data analytics background and away from the social science disciplines where the quantitative side is balanced against other analytic skills. I'm not sure if "professionalism" or advancing the science side is the core need, but being a bit less institutional myopic and having broader discussions about the profession through groups like CIRPA are definitely needed.