Whether channeling Shakespeare’s voice, building rubrics for teachers or helping students know when they’re being duped by a deepfake, the new wave of artificial intelligence tools is being embraced by educators at universities this fall — and not just in the computer science labs where they were developed.
As the technology becomes more ingrained in both the economy and everyday life, instructors say that post-secondary education has a critical role to play in helping students understand the opportunities and perils that AI tools present.
Patrick Pennefather, assistant professor in the department of theatre and film at the University of British Columbia, is among those working generative AI into his courses this fall, and says it can be a powerful brainstorming aid.
“It can support different aspects of that entire creative pipeline,” he said.
One exercise involves using ChatGPT — one of the more popular generative AIs, capable of creating text and images from scratch — to write new plays in the style of William Shakespeare.
“I’m interested in understanding, ‘Hey, what does Macbeth, the historical Macbeth, think about how they were represented in the play Macbeth?'”
Shakespeare’s play is derived from the story of an 11th century Scottish king. Students ask ChatGPT to write his response in Shakespeare’s style. But what it returns will be far from a finished product, says Pennefather.
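The classroom exercise Pennefather describes could be set up programmatically. The sketch below builds such a prompt in Python; the prompt wording, model name and API usage are illustrative assumptions, not his actual materials (the request itself would need an OpenAI API key, so it is shown commented out):

```python
# A minimal sketch of the exercise: compose a prompt asking a generative AI
# to write the historical Macbeth's reaction to his portrayal, in
# Shakespeare's style. Wording and model name are illustrative assumptions.

def build_prompt(character: str, play: str) -> str:
    """Compose the prompt a student would hand to the model."""
    return (
        f"Write, in the style of William Shakespeare, a short monologue in "
        f"which the historical {character} reacts to how they were "
        f"portrayed in the play {play}."
    )

prompt = build_prompt("Macbeth", "Macbeth")
print(prompt)

# Sending it to a model would look roughly like this (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(reply.choices[0].message.content)
```

As Pennefather notes, the first draft the model returns is only raw material; the refinement loop is human.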
“You’ll never get precisely what you want out of these generative AI, which demands human involvement, interaction, tweaking, refinement,” he said.
From there, a writer will flesh out and improve the script, which is passed off to a director and actors.
They’ll tweak it further, and put more material into the AI to produce visuals — for example a background to fit the setting — or might use AI and motion-capture technology to create material for an animated version.
AI’s use in creative fields has been controversial, and is a prominent issue in the ongoing strikes of Hollywood writers and actors. Pennefather says he thinks keeping creatives in control of how they use generative AI, and retaining ownership over what they create, would help mitigate the issues at play in the strikes.
“How can we adopt this technology and be a part of it, rather than feeling like we’re being used, or our content is being used, by a third party? And we have no control over that and no compensation for that use?” he said.
“That’s my optimistic point of view — that AI will not replace or supplant the creative, but help to augment their own process.”
Pennefather says it’s important that students are exposed to generative AI and some of the ethical issues it raises, such as copyright, bias and stereotypes.
AIs can, for example, propagate stereotypes when asked to generate images, he says. Asking Midjourney, another popular generative AI, to come up with a “female warrior” will lead to some eye-rolling results.
“More than likely you’re going to get feathers, you’re going to get some Indigenous reference, right? How come?” he said. “How did this machine learning model come to be trained to have bias?”
Such cases are an “amazing” chance for students to analyze and “reflect on their own bias,” he said.
Potentially problematic
The development of those generative AIs has also been potentially problematic, says Ebrahim Bagheri, who leads an inter-university research program on the responsible development of AI.
Bagheri says the people who label the data used to train AIs, many of them working for low wages in the Global South, have been subjected to sensitive content.
Last month, The Guardian reported that a group of former content moderators in Nairobi working on behalf of OpenAI, the company behind ChatGPT, called for an investigation into what they termed exploitative conditions, claiming they had to review texts and images with graphic violence, self-harm, bestiality and incest.
Separately, Time reported that OpenAI content moderators in Kenya were paid less than $2 US an hour.
Bagheri says it’s important to educate students about ethical issues related to AI development.
“Are you comfortable … using technology that’s not being created the right way?”
Paula MacDowell teaches other teachers in her role as an assistant professor in curriculum studies at the University of Saskatchewan. She works with her classes to create a sliding scale for when it might be appropriate to use AI.
For example, she sees its value as a “peer editor” to provide feedback about students’ work in much the same way as they might have had their classmates do in the past.
“Students need to reflect and think ‘Well, this feedback, does it align with my thinking and what I want to do with the project?’ And they do the same with peer feedback,” she said.
Teachers could also use AI to quickly create rubrics, or to differentiate lesson plans to try to meet students’ individual needs, she says.
“Hopefully the [AI] tools can help teachers to be more productive so that they actually have more time to spend with students and build some of those relationships.”
When ChatGPT first made headlines, there were widespread concerns in academia about its potential for plagiarism. The generative AI can easily be used to cheat — writing code for computing assignments, or even entire essays, with no real learning taking place.
‘Perfect’ assistant
But at the University of Moncton, Prof. Moulay Akhloufi, head of its Perception, Robotics and Intelligent Machines Laboratory, says it can be a useful teaching aid to computer science students.
ChatGPT is “the perfect teaching assistant,” he said, because it can answer questions 24 hours a day, and can help students test their code.
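One hedged illustration of the kind of exchange Akhloufi describes: a student pastes a short function and the failing check that exposes its bug into the chat, then works through the fix with the assistant. The example below is entirely hypothetical, not drawn from his course:

```python
# Hypothetical student exercise: a function with an off-by-one bug that a
# student might paste into a chat assistant along with the failing check.

def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    # Buggy first attempt a student might write:
    #   return sum(values) / (len(values) - 1)   # off-by-one in the divisor
    # Corrected version after talking the failure through with the assistant:
    return sum(values) / len(values)

# The check the student would run before and after the fix:
assert average([2, 4, 6]) == 4
```

The point, as Akhloufi suggests, is not that the assistant writes the code, but that it is available around the clock to help a student reason about why a test fails.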
He acknowledges the need to think critically about the advice it gives, and about the risk of plagiarism, but says it’s impossible to police what students do outside the classroom.
“I think we have to evolve and think of using these tools the best way possible,” he said. “I see them as a help in education if you use them the right way and if we teach students how to use them.”
Likewise, MacDowell says it’s important that AI tools aren’t used in a way that detracts from teacher-student relationships. For example, she feels AI shouldn’t be used to write report card comments or provide feedback to students.
“Feedback should be more personal. It should be more relational. It should be more meaningful,” she said.
As AI technology continues to improve, MacDowell says there’s an ever-increasing potential for it to be used to generate misinformation, for example through deepfakes — when AI is used to imitate the voice and image of real people.
“It will soon be very difficult to tell if something is real or fake,” she said, so it’s important to educate students on what AI can do, and part of that is having them work with the tools themselves.
There is a “real need” for critical thinking and knowledge about the power of AI tools, she says. “And to be able to very rigorously identify the sources of information, and how we might be fooled into believing things that actually are not true.”