Khan Academy CEO Says Using AI in Education is ‘An Imperative’ at Harvard-MIT Event


Khan Academy founder and CEO Sal Khan said the use of artificial intelligence in education is “an imperative” at a Harvard-MIT event Tuesday afternoon.

The panel kicked off a conference hosted by the Axim Collaborative, a joint venture of Harvard and MIT, along with several other organizations affiliated with the two schools, including the Harvard Graduate School of Education and the Berkman Klein Center for Internet and Society.

Khan said he was “curious but skeptical” when OpenAI, which released ChatGPT, reached out to him last year.

He added that, while he was “bummed” by the initial public response to the release of ChatGPT, which included school districts banning the model, the education system had begun to view the tool differently by the time Khan Academy released its own AI chatbot in March. The system “realized that they can’t ignore it,” Khan said.


“But they wanted ways to make sure it’s not cheating, make sure that especially under 18 there are ways to monitor and maybe proactively make sure there are some guardrails in there,” he added.

Khan Academy’s chatbot, known as “Khanmigo,” converses and debates with students, guiding them through homework exercises. The chatbot then reports achievements and areas for improvement to teachers. Rather than telling students answers, Khanmigo asks students to explain their thinking to reach the next step on their own.

“They tend to be very transactional,” Khan said of generative AI models. “We’ve definitely been trying to get more personality and to make it drive conversations.”

In the coming weeks, Khan Academy will introduce a feature of the chatbot allowing it to collaborate with students on written essays. The chatbot will also provide teachers with a full report of the writing process to prevent plagiarism.

“If a student goes to ChatGPT or some other tool and copy-and-pastes the essay in, Khanmigo is going to tell the teacher, ‘I don’t know where this essay came from, it’s kind of shady,’” Khan said.

“We really think this can save teachers hours and hours of time,” he added.

HGSE Dean Bridget Terry Long, who moderated the event’s Q&A session, said that when the Ed School first encountered the rise of generative AI, its faculty were immediately concerned with whether AI had a place in academic work and how to adapt their pedagogical practices.

“But longer term,” Long added, “there are much more important questions that we’ve started to think about as faculty. What are the essential elements of the learning process? How might AI complement or bolster them?”

Khan also discussed the potential disruptions AI may cause to jobs in computer science. He said programmers who can do “higher level work” and manage a group of “AI subordinates” will be “empowered,” though less complex work may be replaced.

“Now I think if you’re in some of these fields and you are doing basic data integration or doing basic porting from one language to another, and that’s all you can do, you’re in trouble,” Khan said.

Khan also said AI will affect Hollywood, though differently than popularly predicted.

“In the next five years, you, the screenwriter, are going to be in a position where you can produce a whole movie for maybe a budget of $10,000 as opposed to $100 million,” Khan said, adding that screenwriters should be embracing the rise of AI.

Khan said he hopes to harness AI’s influence over users’ attention to promote educational growth, pointing to the ways social media corporations and search engines already use AI to compete for consumer attention.

“We already have all these AIs that are working for social media companies, that are working for search engines — that are working for their objective functions, trying to keep us watching, trying to keep us clicking on things,” Khan said. “What if we had AIs on our side that are trying to protect us?”

—Staff writer Azusa M. Lippit can be reached at . Follow her on X @azusalippit or on Threads @azusalippit.