“The old normal is gone forever,” says Ashok Goel, a computer science professor at Georgia Institute of Technology. “Even when students return to campus, they’ll be going back to more online and blended courses, and we’ll be looking for ways that AI can enhance those classes.”
Goel is the director of Georgia Tech’s Design & Intelligence Laboratory and teaches a class on AI. Six years ago, he partnered with a new teaching assistant named Jill Watson. It took months for students to realize that their TA Jill, who had been responding to their questions online, was actually an AI and not a human. Jill is now used in many other classes at Georgia Tech.
COVID-19 has further accelerated the push toward digital learning, according to a recent article published in EdTech. Teachers who had resisted online environments have been forced to dive headfirst into them, as many students are not permitted on campus. There is also great opportunity in using AI to personalize lessons, an approach known as ‘adaptive learning.’ Kathe Pelletier works with EDUCAUSE, a non-profit that seeks to integrate education with technology. “AI can process a tremendous amount of data and give students what they need when they need it,” Pelletier told EdTech. “It extends the human capacity to access information, to optimize research, and make it more efficient. It can also be the basis for more personalized learning pathways. But humans have to develop algorithms for those pathways, and development time is significant.”
A report released today notes that the global artificial intelligence in education market is expected to grow 45% in the next five years. It states that the ‘driving force of global AI in Education is integrating the intelligent tutoring system (ITS) into the learning process.’ The report details ‘Geekie Lab,’ a Brazilian startup; New Oriental, China’s largest educational AI provider, which created ‘Realskill’; and Cognii, which offers AI-based virtual tutors and a grading tool.
Back in the U.S., the National Science Foundation (NSF) is also supporting the proliferation of AI in education. The University of Colorado Boulder is collaborating with the NSF on a new $20-million research center called the ‘AI Institute for Student-AI Teaming.’ The Institute will delve into the role that artificial intelligence may play in the future of education and workforce development. The University of Colorado Boulder says it is particularly interested in providing new learning opportunities for students from historically underrepresented populations in Colorado and beyond.
“We aim to advance a new science of teaming,” Sidney D’Mello, a professor in the Institute of Cognitive Science, said this week. “We have a lot of knowledge of what makes effective human-human teams. The next phase is understanding what underlies effective human-agent teams. In our case, that means students, AI, and teachers working together.” D’Mello will lead the new Institute, which brings together researchers from nine universities across the country in close collaboration with two public school districts, private companies, and community leaders.
The NSF is investing the funds to elevate the quality of education. “With growing classroom sizes and online learning, it becomes increasingly difficult for teachers to offer individualized instruction,” the NSF said when it announced the center. “This new AI institute will focus research on developing ‘AI partners’ that will facilitate collaborative learning in classrooms by interacting naturally through speech, gesture, gaze, and facial expression.”
Partnership is key because it brings together diverse stakeholders and supports the development of new methods for broadening participation in the design of AI systems that promote equity, along with new ethical frameworks. The NSF notes that it is interested in guiding AI system design and deployment.
Equity and ethics are key to the development and implementation of AI in many industries. “Can AI build better, stronger human interaction? We hope so,” Georgia Tech professor Goel says. “But the application also raises questions of data privacy and security, bias, and trust, which we’ll have to answer as we continue with AI.”