In a narrow office four floors above the freshly renovated Seamans Center, one engineer is optimistic amid an uncertain industry.
Sitting at a desk with four screens and a colorful keyboard, Tyler Bell assists in a research project that aims to permanently change how teachers are trained. Using virtual reality powered by ChatGPT, education students can practice responding to challenging behavior from virtual students.
Bell is a University of Iowa assistant professor in the Department of Electrical and Computer Engineering and isn’t new to tinkering with machine learning. His past research projects led him to become a finalist in the Innovation of the Year award at the 2018 TechPoint Mira Awards in Indiana.
This project, which he is collaborating on with College of Education Associate Professor Seth King, has been long in the making.
The project began back in 2020, with researchers working remotely to develop a virtual classroom that future educators could enter through VR headsets. Originally, a researcher would observe an education student in the program and provide in-person feedback on their responses to the virtual student.
“You’re sitting across from a virtual student who might exhibit some challenging behaviors that might either harm themselves or the teacher,” Bell said. “We give students exposure to these situations in a safe, controlled, reproducible manner rather than just waiting for the students to encounter such situations in their student teaching.”
As the program evolved, so did AI software. Once OpenAI, the company behind ChatGPT, opened its models to the public through an API, Bell seized the opportunity. Instead of transcribing the trainee's speech and analyzing it afterward, the new software could adapt on the spot.
Just as a user would type into ChatGPT's text box, headset wearers can speak their answers to the virtual student's questions, and the student reacts within moments. Teachers in training thus receive immediate, immersive feedback and can adjust their teaching style in the moment.
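The loop Bell describes, speech in and an in-character reply back, can be sketched in a few lines. This is a hypothetical illustration assuming the OpenAI Python SDK; the prompt text, model names, and function names are illustrative, not the research team's actual code.

```python
# Hypothetical sketch of a real-time virtual-student loop.
# The system prompt below is illustrative, not the team's actual prompt.
VIRTUAL_STUDENT_PROMPT = (
    "You are role-playing a student who sometimes exhibits challenging "
    "classroom behaviors. Respond in character to the trainee teacher."
)

def build_messages(history, teacher_utterance):
    """Assemble the chat payload: system prompt, prior turns, new utterance."""
    return (
        [{"role": "system", "content": VIRTUAL_STUDENT_PROMPT}]
        + list(history)
        + [{"role": "user", "content": teacher_utterance}]
    )

# The live loop (not run here) would transcribe headset audio, then stream
# the model's reply back as the virtual student's speech, roughly:
#
#   from openai import OpenAI
#   client = OpenAI()
#   text = client.audio.transcriptions.create(model="whisper-1", file=audio).text
#   reply = client.chat.completions.create(
#       model="gpt-4o-mini", messages=build_messages(history, text),
#   ).choices[0].message.content

msgs = build_messages([], "Can you put your phone away, please?")
print(msgs[0]["role"], "->", msgs[-1]["content"])
```

The point of the sketch is the shape of the exchange: because a single API call both interprets the trainee's words and generates the student's reply, the old transcribe-then-analyze-later workflow collapses into one near-instant round trip.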
The program the team has developed is general enough that it could be adapted to nearly any field. It's a prospect Bell is excited about, but not because he thinks he's stumbled on the next Silicon Valley goldmine.
“We would love to have it deployed at a nationwide scale like that, but for now it’s been deployed as part of a course,” Bell said. “It would be awesome to translate any research to wide-scale use, whether that’s through commercialization or just open source. I think for the open source, it’s better for the greater good, especially when so much of the AI community is built on the back of open source anyway.”
This freedom of use is vital to the development of AI software: the more people learn how to use it, the better it becomes. Bell, however, said many people are hesitant to invest too much in AI.
“It’s an unknown entity and unknowns are scary. I encourage people to play with it. Play is fun. Play is safe. It allows for failure,” Bell said.
Even though the AI in Bell's research analyzes the teacher's responses, a subject-matter expert is still necessary to ensure the program operates soundly. This is where Seth King comes in, acting as the guardrails to make sure the research results are valid.
King, like Bell, believes AI has a place in education, both as an instructional aid and as a tool for professional development. However, he acknowledges that harnessing it can be a real challenge.
“I think that it sort of presents a challenge in how we [professors] teach,” King said. “But at the same time, I think it can be used to supplement some of the things we do and create opportunities for students to engage in practices that they wouldn’t be able to do otherwise.”
King does not believe people should be scared of AI, stressing that both uncritical optimism and uncritical pessimism should be avoided when discussing the topic.
“There are a lot of real, practical challenges that need to be addressed,” King said. “And if you want to use it as a tool, you’ve got to think about how just because it exists, that doesn’t automatically make it a source for good, and a lot of development in this area is very irresponsible.”
At the same time, when used correctly, AI can make some teaching instruction more accessible through chatbots or instructional aids that can prompt ideas and concepts.
While innovations continue, professors are contending with a changing classroom landscape. Even as ChatGPT has grown over the years, UI English Professor David Gooblar hasn't seen a notable increase in the software's usage in his classes. The software's very existence, however, has raised questions.
“It’s really tricky and I’m not sure, but I wouldn’t be surprised if some people used ChatGPT and it got through. I don’t trust the detectors online,” Gooblar said.
Gooblar attempts to reduce the usage of ChatGPT and other generative AI sites by working closely with his students on their writing assignments throughout the year.
“I usually see drafts and talk to them about their ideas in class,” Gooblar said. “So very often, it’s clear to me when a student’s work isn’t in their own voice. So, that’s one thing that will set my alarm bells off.”
According to Gooblar, the writing that ChatGPT produces isn’t quality writing, so he doesn’t concern himself with catching cheaters too much, believing that poorly written essays will cause students’ grades to suffer anyway.
That being said, as a professor concerned with the state of education, Gooblar considers AI tools to be entirely inappropriate in the college classroom.
“The point of writing in college is not actually to produce great writing. It’s to learn how to produce great writing,” Gooblar said. “That’s a huge distinction from my point of view. What I tell my students and what I really believe is that these AI tools save you from doing precisely what you’re here to do.”
A large part of the teaching job, then, is the difficult work of convincing students that the actual writing of papers is where learning really happens. Many students are drawn in by the flashiness and promise of generative AI tools, but in practice, the tools often become just another way to cheat.
When considering AI’s place in a learning environment, Gooblar draws on the work of professor and author James Lang, whose book “Cheating Lessons” argues that the desire to cheat stems from a classroom environment that is not working for students.
“My job as a professor is to create a teaching environment that works for students,” Gooblar said. “I want to give them the conditions and the tools so that they can learn skills and content that they wouldn’t otherwise. But I can’t do the work of learning for them.”
Looking toward the future, Gooblar is not personally worried about cheating in his classes. He does worry, however, that culturally, all the money and time being poured into developing and marketing AI tools is helping convince people that art and writing are less human than they actually are.
The further development of these products could lead to the belief that art and writing are creative endeavors that can be easily produced by an algorithm, leading to a devaluing of human work and creativity, things that bring meaning to many people’s lives.
“I worry that, in the interest of selling more products, we’re going to be convinced that the things we produce are cheap and are not worth our time and effort,” Gooblar said. “We can talk about craft all day long and the right way to write sentences, but ultimately what you’re looking for in writing is the human being who wrote it. That’s what good writing is made from.”
Gooblar encourages everybody, especially professors, to adopt a healthy skepticism toward anything produced by big tech companies.
Bell, too, encourages people to come to their own conclusions but also strives to remind students and faculty alike that products like ChatGPT are tools. Alongside his VR research project, he also teaches a course on how to use AI.
The class isn’t a simple step-by-step guide to operating the program, but rather a broader exploration of how the tool can and should be used responsibly.
“AI as it relates to the students is going to be as fundamental as teaching people how to use their computer,” Bell said. “Learning how to use AI or work with AI or just the general understanding of how these systems work, not the technical details of how they work, is incredibly important.”
While Bell shares Gooblar’s concerns about cheating, he points out there have always been outlets for students to take shortcuts.
“We don’t tell the accountant to stop using Excel and to go back to their pocket calculator. We expect them to do their job, but with the modern tools of the day,” Bell said. “The Internet existed. Google existed. YouTube existed. The back of the book existed. There’s always been the potential to cheat.
“The people who always considered cheating are still going to consider cheating,” he continued. “But if that means thousands more people can be connected and engaged with the content that they desire, be it educational, professional, personal, it’s worth it.”
Initiatives across the university aim to help students better understand where AI fits into their personal and professional lives. Whether through Bell’s course or workshops in the College of Education, opportunities are popping up everywhere.
“At the end of the day, I understand where the fears are coming from,” Bell said. “I hope by addressing those fears we can dissuade them and add more nuance to the conversation.”