
A massive government investment in academia can unlock the “unknown unknowns” of AI

By Raffi Krikorian

Though coverage of AI is dominated by private sector firms like OpenAI and Google, the truth is that the field has historically been driven by research universities. From the 1950s onward, the big breakthroughs came from professors working in university research labs, not in corporate office parks. Even the term “artificial intelligence” traces its origins to a 1956 Dartmouth summer conference, which featured some of engineering and early computing’s leading academics.

Which is why it’s surprising to see so many of the latest developments in AI come out of industry, not the academy. So what happened? The latest phases of AI’s development—deep learning and generative AI—require access to massive datasets and expensive computing power. Or to put it differently: Universities have been priced out of the AI field they helped to foster and nourish.

We should ask ourselves: Are we comfortable with that shift? Commercial actors have a specific set of incentives: They want to ship products that engage users and make money. Academics are driven by entirely different motives: They want to investigate topics deeply and add to the body of knowledge, building the foundation atop which others can innovate. Of course, we need both—but today, academia is hamstrung in making the contributions it has historically made to AI research.

It is true that academia and industry move on two different time horizons. Commercial entities move fast and scale aggressively. They need to capture users, and they win if their products are seductive and sticky—not if they improve the world or contribute to meaningful research. Academia moves more methodically and slowly, but it produces the deep and lasting research that asks the question: “What’s next?” Because academics asked those questions over the past 50 years, the commercial sector benefited, too: There would be no ChatGPT or Stable Diffusion if not for the academic research that built AI’s foundations.

Which is why the country stands to benefit if academia's work on AI is given a boost—whether through increased government grants, public-private partnerships, or commercial support of academic research. With massive, sustained, government investment in universities, we’d not only advance the technology—we could also invest in solving problems like AI literacy, data transparency, and AI ethics. This is the rare investment where the country could both do good and do well.

Unlike their private sector counterparts, university researchers are also subject to institutional review boards and other checks and balances designed to prevent harm in their work. This commitment to ethical considerations is crucial in a rapidly evolving field like AI, where potential risks and unintended consequences must be considered. By supporting academia, we foster a more diverse AI research ecosystem that upholds ethical standards while driving innovation.

This isn’t to write off industry or its ethical obligations—in fact, industry would benefit from embracing what academics have done and how they do it. As a part of this program, private sector actors could commit to using technology and systems created by academics, which would encourage more open-source processes. Industry could create rotation programs that allow its leaders to spend time at universities, and vice versa. Lest we forget: Among the great consumer product innovators of the last century, Apple put a high premium on ties to the nation’s universities. Products like the iPod and the Macintosh found ready audiences on college campuses, and they allowed generations of students and faculty to accelerate their work.

Government support could scale these kinds of partnerships—ones that we know work in other sectors, like medicine, where academic research on new therapies regularly evolves into businesses that deploy those solutions. Carnegie Mellon’s Parallel Data Lab, for example, has achieved this for systems research; the MIT Media Lab conducts interdisciplinary work at the nexus of technology, media, science, art, and design. A concerted, large-scale, well-funded, government-sponsored effort is necessary for AI—because it would help academia keep pace with developments in the field and bring about new breakthroughs.


Another inspiring proof point: the MIT-IBM Watson AI Lab, where researchers, faculty, and students from MIT and IBM work jointly. Established in 2017, the Lab unites industry with academia and enables both to bridge the gap between theoretical research and practical applications. With a focus on data-driven, deep learning methodologies, the Lab has done cutting-edge work on language comprehension, visual analysis, and the optimization of large-scale AI systems. Alongside its core research portfolio, the Lab also partners with various companies, including firms developing AI solutions for healthcare as well as Wells Fargo, Boston Scientific, and Evonik, among others. Perhaps most importantly, the Lab prioritizes the creation of trustworthy and socially responsible AI systems—a crucial commitment for a still-unfolding technology.

The US government can support similar collaborative platforms, which could yield powerful AI-driven contributions in fields like healthcare, transportation, engineering, and others in which academia has historically played a pivotal role. Already, AI algorithms and data science are being used by the private sector to develop personalized healthcare solutions and improve the efficiency of transportation networks. Collaboration between academia and industry could accelerate these projects and ensure that the resulting solutions are scientifically sound, ethically grounded, and commercially viable.

Government focus and a massive investment in resources can also allow researchers to explore the “unknown unknowns.” Sure, large language models are at the heart of today’s technology—but what comes tomorrow for AI? As companies chase every possible use case for LLMs, it’s academics who can produce the next breakthroughs and innovations. Freed from the constraints of shareholder returns, academics can ask unorthodox questions that yield fresh ideas in the field, as well as the hard questions that industry won’t ask at all. A focus on AI literacy or AI ethics doesn’t have a clear ROI for a start-up—but it is precisely what universities were designed to research.

In this way, we can widen the aperture of critical thinking about AI by specifically supporting research that commercial entities may overlook. For instance, the proposal for a National AI Research Resource (NAIRR) by the National Artificial Intelligence Research Resource Task Force is commendable. The NAIRR would provide cost-effective access to compute and data, supporting researchers in areas such as AI ethics, transparency, and explainability. This research can then be integrated into the work of commercial entities, benefiting both sectors and the public at large.

Through such a program, the government can nudge the entire AI ecosystem toward long-lasting innovation, diverse perspectives, and responsible deployment. We could achieve both the rapid innovation of the private sector and the stewardship and long-tail academic research that can unlock new horizons. That’s why the government must scale its investment in uniting the AI efforts of academia with those of industry, enabling the best of each to strengthen the other.

What does it take to build technology for good? Listen to Technically Optimistic, a new podcast from Emerson Collective, and journey into the rapidly evolving world of artificial intelligence with host Raffi Krikorian, Chief Technology Officer. The future is unknown but we are technically optimistic. Listen on Apple Podcasts or wherever you listen to podcasts.
