The world record for crossing the English Channel on foot is 14 hours and 51 minutes. President Obama does not have a prime number of friends because he is not a prime number. These are a few things learned by ChatGPT users since its release nine months ago. As generative AI chatbots leverage large language models (LLMs) to predict the next word, sometimes they predict inaccurately, leading to “hallucinations.” AI hallucinations occur due to problems with training data (ChatGPT used Reddit as an input – not always the most accurate source of information) or because large language models can be biased towards generic or specific words. But although we’ve now entered the age of AI, it’s still far too early to attribute AI hallucinations to paranoid or depressed bots.
In Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, Sirius Cybernetics Corporation has progressed AI from chatbots to robots and spaceships with genuine people personalities. Spaceship AI Eddie is annoyingly cheerful whenever someone walks through a door: “Thank you for making a simple door very happy.” On the other end of the spectrum, Marvin the Paranoid Android gets everyone down: “Do you want me to sit in a corner and rust, or just fall apart where I’m standing?... Pardon me for breathing, which I never do anyway. So I don’t know why I bother to say it. Oh God, I’m so depressed.” Although Marvin has a brain the size of a planet, he’s assigned menial tasks and lacks the self-awareness to stop complaining for hours on end about the pain in the diodes down his left side.
There are two prescient conceits at the heart of Adams’ 1978 radio series (later adapted into novels, television, and film). The first is the Hitchhiker’s Guide itself, an exhaustive electronic compendium of all knowledge – including Babel fish (insert in ear for automatic translation) and the Sirius Cybernetics Corporation (“a bunch of mindless jerks who’ll be the first against the wall when the revolution comes”) – pretty much predicting the course of the next 45 years of digital transformation. The second is Deep Thought, a supercomputer built to answer the ultimate question of “life, the universe, and everything.” After running for eons, Deep Thought famously produces this answer: 42. The much harder part, it turns out, is determining the question.
This fairly sums up the state of generative AI: answers are readily available, but questions are everything. As poor question framing won’t produce useful results, failure to ask the right questions is a much bigger barrier to our AI-enabled future than hallucinations. Insufficient human questioning-capital could end up bursting AI’s bubble.
Figuring out what to ask ChatGPT may not be as complex as the ultimate question of life, the universe, and everything, but it’s far from trivial. What skills do we need to ask the right questions of our new chatbot friends? For tens of millions of Americans, it could be the difference between gainful employment and being as unhappily underemployed as Marvin the Paranoid Android.
Knowing what to ask about requires subject matter expertise. If your job’s in claims management, you need to have some understanding of how the insurance industry works and its lexicon. If you’re a digital marketer, you need to know industry-standard platforms, tools, and metrics. Underscoring all this is an ability to understand the subject matter. But as specialized LLMs evolve for every industry and job function (and likely for each industry-function pairing), experience and pattern recognition will become even more important.
The emerging field of prompt engineering helps you figure out how to ask the right question. It can involve inputting specific instructions, context, and the desired output format. You may need to employ techniques like chain-of-thought prompting or directional stimulus prompting. And arriving at the right question may require tuning model parameters like temperature and top_p. Mastering prompt engineering is like learning a foreign language to communicate with AI. (So don’t expect to learn it at West Virginia’s flagship university!) According to McKinsey, 7% of organizations that have adopted AI are already including prompt engineering skills in job descriptions.
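To make those terms concrete, here is a minimal sketch of what a prompt engineer actually assembles. The role, task, and output format below are illustrative inventions, and the request dictionary follows the common chat-completion schema (no request is actually sent); the point is simply where instructions, chain-of-thought cues, and sampling parameters like temperature and top_p live.

```python
def build_prompt(role: str, task: str, output_format: str, examples=None) -> str:
    """Assemble a prompt from role, task, context, and desired output format."""
    parts = [f"You are {role}.", task, f"Respond as {output_format}."]
    if examples:
        # Few-shot examples steer the model toward a desired pattern
        parts += [f"Example: {ex}" for ex in examples]
    # Chain-of-thought cue: ask the model to reason before answering
    parts.append("Think through the problem step by step, then give your answer.")
    return "\n".join(parts)

# Hypothetical request payload in the common chat-completion shape
request = {
    "model": "gpt-4o",  # illustrative model name
    "messages": [{
        "role": "user",
        "content": build_prompt(
            "an insurance claims analyst",
            "Classify this claim note as routine, complex, or potentially fraudulent.",
            "a one-word label followed by a short rationale",
        ),
    }],
    "temperature": 0.2,  # lower temperature: more deterministic output
    "top_p": 0.9,        # nucleus sampling: draw from the top 90% probability mass
}
```

The judgment calls here – which role to assign, whether few-shot examples help, how low to set temperature for a classification task – are exactly the skills employers are starting to list in job descriptions.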
Knowing why you’re asking means understanding the problem you’re trying to solve. It can require thinking like the owner of the problem rather than a functionary – a principal instead of an agent. Much more than what or how, the “why” faculty springs from a bedrock of problem solving and critical thinking. As former IBM CEO Ginni Rometty told a Chronicle of Higher Education audience last week, “as an employer, I can give people hard skills, and in the digital world they change every three to five years. It doesn’t matter that much. I really need them to understand how to work, how to think, how to problem solve.” (Except it does matter. Because they wouldn’t have been hired in the first place without the digital skills needed to be productive in entry-level jobs. Especially now that AI has rolled along.) But Rometty’s hit the nail on the head for “why,” which involves constantly reframing questions, i.e., the capability to determine the next question. And that’s the kind of thinking/solving enterprise beloved by our academic-industrial complex.
At what point in the problem journey should you uncork the AI genie? When is too soon – where AI could send you on a wild goose chase – and when is too late? There’s judgment involved here. But as with “what,” when is mostly about experience and pattern recognition.
Likewise, figuring out what to do with AI’s output – whether to copy and paste into an email or incorporate into a broader deliverable – requires critical thinking, but also reps in the role.
If these are the skills needed to harness generative AI, how well is our current education system (K-12 + college) preparing graduates? Let’s go with grades the system will understand.
| Skill | Grade | Comment |
|---|---|---|
| What? | D | Lack of integrated work experience |
| How? | F | No training yet on prompt engineering |
| Why? | A | [Insert school name] has worked very hard this semester/year, and I am proud of all of his/her accomplishments |
| When? | C | Lack of integrated work experience |
| What Next? | B | Lack of integrated work experience |
| Overall | C | [Insert school name]’s grades are suffering from missing assignments |
Within a few years, schools will get the “how” grade up, perhaps all the way to an A. But it’s hard to see our educational institutions as currently constituted doing better than a B- overall. Will the problem self-correct when we get next-generation always-on AI that watches your work and presents suggestions? While “what,” “how,” and “why” might be off the table, “when” and “what next” would loom even larger. Either way, it’s clear that equipping young Americans for AI-enhanced employment will require thinking outside the classroom box.
IBM has predicted that AI won’t replace people, but people who use AI will replace people who don’t. As a result, keeping students penned in classrooms will impede career launch. While digital transformation has already put a premium on learning-by-doing, AI will make work experience mandatory for every learning journey. Watch as America’s most innovative schools try to stabilize wobbly academic programs on a tripod consisting of these three legs:
#1. Work-integrated learning
Because scaling a co-op program takes decades and isn’t replicable beyond a handful of selective colleges (and certainly not for high schools), schools will need a turnkey work-integrated learning network like Riipen to connect with employers that have real work problems and allow teachers to easily incorporate employer projects into coursework.
While about 40% of college students have internships, nearly half are unpaid. Meanwhile, formal internship programs are hard to find in higher education and as rare as hen’s teeth in high school. Beyond the most selective employers leveraging summer internships as the first phase of an elaborate recruitment process, internships tend to be catch-as-catch-can: informal and often relationship-based. Even when they’re paid, interns are viewed as lesser employees – informality breeding a culture of not counting (as demonstrated by these Amazon and Capital One job descriptions expressly disowning internships as relevant experience). Accordingly, colleges and school districts need to formalize internship programs, probably by helping catalyze the creation of a new category of internship service providers: non-profits and for-profits that set up and operate internship programs for the benefit of employers, schools, and interns.
I have a few things to say about apprenticeships. There’s no better way to build skills for an age of AI because apprenticeships are paid pathways to professions that equip new workers with the AI skills and experience they’ll need. So figuring out how to integrate apprenticeship into curriculum (in the form of youth apprenticeships) and connect graduates to apprentice jobs should be a top priority for every school.
American education has been struggling to stay above water through the first stage of digital transformation. With the onset of AI, employer expectations will shift even more dramatically. As menial tasks will no longer be fit for employee consumption, even brand new workers will be expected to be AI-productive from day one, effectively transforming entry-level jobs into mid-level jobs. The consequence for schools is stark: the work experience tripod can’t be optional, adjacent, co-curricular, or extracurricular. The only way to elevate students is to make it part and parcel of core educational programs.
The good news is that generative AI has the potential to help classroom education become more efficient in developing “why” skills, opening up time for integrated work experience. Nord Anglia Education just released a report on how its students are honing critical thinking skills by debating against AI. AI-powered tools like Packback are supporting writing development through continuous moderation, feedback, revising, and rewriting while streamlining faculty grading; constant sparring on the level of an Oxbridge tutorial will do a lot to improve critical thinking and problem solving. Teachers are also beginning to use AI to develop lesson plans, presentations, and assessments. For the first time it seems possible that Baumol’s cost disease – i.e., delivery of education uniquely failing to become more efficient as a result of technological development – could go the way of McGuffey Readers.
But even with extra time, colleges and school districts won’t solve this problem on their own. Educational institutions aren’t built to connect deeply with employers. The only way to prepare the next generation for AI-enabled careers is to deliberately foster a robust ecosystem of intermediaries: work-integrated learning platforms, internship service providers, and apprenticeship intermediaries. And schools shouldn’t have to go searching for them. Policymakers must recognize their importance and begin providing incentives to do this difficult work. The goal should be to have so many that all schools need to do is answer the phone (or email, or bot) and let intermediaries do the heavy lifting of incorporating work experience into secondary and postsecondary programs.
Help from work experience intermediaries has the potential to open up even more classroom time: time to be human. Because until we reach Eddie- or Marvin-level AI, bot output will be devoid of self-awareness, ethics, values, and emotions. It’s no accident that in Hitchhiker’s Guide to the Galaxy, the unfathomably complex next-generation computer designed by Deep Thought to determine the ultimate question turns out to be planet Earth.