Volume VIII, #10
Before there was artificial intelligence, there was natural intelligence. And in the world of comedy, no natural intelligence shone as bright as Doug Kenney’s. In the 1970s, Kenney was the comic genius behind National Lampoon, Animal House, and Caddyshack and helped spawn Saturday Night Live, Spinal Tap, and a golden era of American comedy.
In a new Netflix biopic of Kenney, A Futile and Stupid Gesture, Kenney is on Tom Snyder’s talk show and charged by the host with publishing offensive content. “If I could just say something in defense of National Lampoon,” Kenney responds. “We come from a tradition of truth-tellers. A long time ago, there was someone else society found offensive. They thought that what he did was radical, dangerous. They persecuted him and eventually killed him.” The audience is rapt, expecting a solemn point about a hero of American liberty. Then Kenney concludes: “Of course, I’m referring to Dracula.” [Pan to stone-faced Tom Snyder.]
There’s often a fine line between hero and villain, and by most accounts, artificial intelligence (AI) is on the Dracula side, sucking jobs out of the economy. These days you can’t throw a rock without hitting some pundit prognosticating on the millions of jobs that will be lost to AI. One oft-cited Oxford University study predicted that 47% of jobs are in jeopardy.
But while AI conjures up robots and dystopian science fiction movies, it isn’t magic. Today’s AI consists of algorithms developed with “training data” that improve over time, otherwise known as machine learning. The result is better pattern recognition, as when Google seems to predict what you’re searching for after typing a few letters. What it ultimately means is automation of the information economy in the same way that the industrial revolution changed manufacturing. Any job that involves processing or manipulating information in a repeatable or even predictable way is a job that probably will be automated by AI and its kissing cousin Robotic Process Automation (“RPA”).
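To make the "pattern recognition from training data" idea concrete, here's a toy sketch in Python: a prefix completer that tallies every query it sees, so its predictions improve as more data accumulates. This is an illustration of the learning-from-data principle only, not how Google's actual autocomplete works; all names here are made up.

```python
from collections import Counter

class PrefixPredictor:
    """Toy illustration of 'training data' at work: each observed
    query updates the counts, so predictions get better over time."""

    def __init__(self):
        self.counts = Counter()

    def train(self, query):
        # Every query a user types is a new piece of training data.
        self.counts[query] += 1

    def predict(self, prefix):
        # Suggest the most frequent past query starting with the prefix.
        matches = [(q, n) for q, n in self.counts.items() if q.startswith(prefix)]
        if not matches:
            return None
        return max(matches, key=lambda qn: qn[1])[0]

predictor = PrefixPredictor()
for q in ["financial aid deadline", "financial aid forms", "financial aid deadline"]:
    predictor.train(q)

print(predictor.predict("fin"))  # "financial aid deadline" (seen most often)
```

The point of the sketch is the feedback loop: the model doesn't contain any magic, just statistics that sharpen with volume, which is why these systems get better the more they're used.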
Colleges and universities have lots of jobs like this. Not in the classroom, mind you; teaching and learning have a relatively low level of repeatability. But keep in mind that only $0.21 out of every tuition dollar is actually spent on instruction. That leaves a lot of repeatable processes that AI will automate. Nevertheless, what we’re seeing so far suggests predictions of massive job losses in higher education are overblown.
An article last month in the Chronicle of Higher Education focused on how UT Austin is utilizing AI to monitor and adjust landscape sprinkler systems. The entirely uncontroversial result is not job cuts, but rather a huge improvement in water conservation and concomitant cost savings. Then there’s admissions. Reviewing college applications is a highly repeatable process, particularly at the top of the admissions funnel. University of Arizona’s Dean of Undergraduate Admissions has commented that AI won’t be used to “count anybody out automatically,” but will rather “help to enhance” the ability of admissions staff to make good matches. No mention of any plans to reduce admissions staff.
When it comes to interacting with students, we’re seeing a similar pattern: AI isn’t displacing workers, but rather enhancing student experience by filling current gaps in the service offering. In the enrollment and financial aid processes, the Chronicle profiled Georgia State’s use of AI chatbot AdmitHub to respond to enrollment and financial aid questions. Tim Renick, GSU’s dynamic VP for Enrollment Management and Student Success, says that in the run up to the start of each semester, his team can receive as many as 2,000 calls a day. That’s volume GSU cannot handle: “We’re not American Express. We don’t have a call center with 200 people.” AdmitHub’s chatbot allows students to ask any question at all hours. The technology takes a statistical approach to responding to questions. If it’s less than 95% certain of the answer, it connects the student to a human staff member. As you might expect, AdmitHub gets smarter with each question. In its first summer at GSU, AdmitHub answered 200,000 questions and successfully reduced summer melt by 20%.
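The confidence-threshold handoff described above is simple enough to sketch: answer automatically only when the model is at least 95% sure, otherwise route the student to a human. This is an illustrative Python sketch of the general pattern, not AdmitHub's actual code or API; the `toy_model` and its answers are invented for the example.

```python
def route_question(question, model):
    """Answer via the bot only above a 95% confidence threshold;
    otherwise hand the student off to a human staff member."""
    answer, confidence = model(question)
    if confidence >= 0.95:
        return ("bot", answer)
    return ("human", None)  # escalate to a staff member

# Hypothetical stand-in for a trained model: returns (answer, confidence).
def toy_model(question):
    known = {"when is tuition due?": ("August 15", 0.98)}
    return known.get(question.lower(), ("", 0.40))

print(route_question("When is tuition due?", toy_model))
print(route_question("Can I defer enrollment?", toy_model))
```

The design choice worth noting is that the threshold makes the bot's failure mode graceful: an uncertain question costs a student nothing but a handoff, while every answered question adds to the training data that makes the next one more likely to clear the bar.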
Another area of great promise is the online discussion board. Anyone who’s been in an online discussion for a large lecture course will tell you they’re unproductive and tedious at the best of times. Neither faculty nor teaching assistants are incentivized or able to provide the requisite level of structure, guidance, and feedback to keep every thread moving towards a productive educational outcome. That’s where Packback Questions comes in. Packback Questions is an AI-powered tool that picks up where the humanoids leave off. Its algorithms coach students to improve responses and to ask more thought-provoking questions, sparking better discussion and critical thinking. Packback also provides recommendations to faculty on how to further improve student engagement.
It seems likely that students will find themselves engaging with more bots. Georgia State has now enrolled students who are wondering “where did the chatbot go? I still want to ask it questions.” Across town at Georgia Tech, one faculty member has already utilized a bot (“Jill Watson”) as a teaching assistant without telling students. Said the faculty member, “I don’t intend to put myself out of business. I think of this as improving teaching quality… [not] decreasing teaching quantity.”
While higher education may not see major job losses, there’s little question that entire job categories in other sectors of the economy (e.g., data entry, tax preparation) will go the way of bowling alley pin setters. However, for most of us in the labor market, AI won’t result in job loss, but rather significant changes in what we do every day. McKinsey has estimated that AI will automate 30% of tasks in about 60% of jobs. This means most of us will need to use AI technology in the same way we currently use non-AI software and SaaS platforms to do our jobs. And because configuring and managing AI software – and interpreting its data output – will be more complex than using Word or Salesforce, most jobs will require upskilling, primarily toward higher-level cognitive and technical skills.
And so while AI should allow colleges and universities to become more efficient and effective in supplying higher education, it will also shift the demand curve for postsecondary education to the right. With greater demand for cognitive and technical skills, colleges and universities will have a golden opportunity to reassert their preeminence in human capital development.
But just as the impact on the broader economy will be uneven, AI will result in some dislocation in higher education. Specifically, it will be a harder road for disciplines that cannot seriously and credibly claim to further skills in managing software. That’s what employers will expect. And as the drumbeat of AI gets louder and louder over the next decade, these requirements will become gospel to guidance counselors, parents, and applicants alike. Carnegie Mellon’s new bachelor’s degree in AI (the first of its kind) is an obvious early winner. But any program or pathway that builds skills at or close to the human-machine interface central to the coming AI revolution will thrive. Adding software + data to work yields a multidisciplinarity that will resonate across every profession and every area of study. Those who opt to remain on the periphery – regardless of how much they claim to foster core cognitive or creative skills – are likely to wither.
A Northeastern-Gallup survey released in January showed that only 22% of current workers with bachelor’s degrees or higher think their own education has prepared them to work with AI. Meanwhile, only 43% are confident they can obtain the education they’ll need. Some of that is undoubtedly concern about affordability, but it probably also reflects worries about availability. The response from America’s colleges and universities can’t be “more of the same.”
Doug Kenney’s National Lampoon was published by the same company as Weight Watchers magazine. On one occasion, a Weight Watchers subscriber was sent an issue of National Lampoon in error and immediately responded with an angry letter about the offensive content. How did National Lampoon respond? The “subscription manager” sent the subscriber a note that read “Sorry for the mistake,” and enclosed what was then considered the most offensive issue of National Lampoon. When another furious letter arrived, he sent the same note and another National Lampoon. After going back and forth for several weeks, the subscriber getting more and more agitated, the subscription manager brought the epistolary altercation to a screeching halt by sending the subscriber a bill for all the issues he’d mailed her.
Doug Kenney was a comedic genius and hero who left disruption in his wake, which fairly describes the impact AI will have on higher education. The reason there’s often a fine line between hero and villain is that some heroes are disruptive. And for the sector charged with cultivating natural intelligence, artificial intelligence will be a hero, albeit a very disruptive one.