
Alexander Ross
He/Him/His
Assistant Professor
School of Information, Faculty of Arts
We recently had the privilege of sitting down with Dr. Alexander Ross from the School of Information to hear his perspectives on AI and its impacts on teaching and learning at UBC.
Alex is a critical communications scholar, with a focus on media theory and the political economy of communication. His research is interdisciplinary and focuses on how communication systems and infrastructures impact the development of new media industries and cultural production. Alex has a PhD in Information Studies from the University of Toronto’s Faculty of Information and an MA in Communication and Culture from Toronto Metropolitan University. He is Mi’kmaw and a proud member of the Millbrook First Nation in Mi’kma’ki (Nova Scotia).
This interview has been edited for length and clarity.
How do you approach AI in the classroom?
I’ve taken a very restrictive approach to the use of AI. In one of the first classes I taught last semester, there was some limited use around brainstorming, but in both of the classes I am teaching now, especially the one focused on Indigenous media and information, I’ve expressly forbidden the use of AI. I’m concerned about how it’s being applied as a technology in the classroom: not just in terms of its effects on the writing process and how it can intervene in that, but also because there hasn’t been an opportunity to really think through where it’s coming from, how it should be applied… whether or not it’s actually a useful educational technology.
I’m trying to help my students think more critically about technology. So far, there hasn’t been any pushback. In fact, being more critical of AI has been received positively.
What trends are you seeing with students and their use of AI?
There’s definitely a concern with how people are using it in their work.
The process of learning how to write is very important. I worry that these technologies, which are based on generating text automatically, prevent that process from happening. They give you a very superficial grasp of an argument, or of how to structure or categorize your writing, but they don’t actually give you the whole process. The process is the point; that’s what makes you a better writer.
I’m worried that students, when they use these tools, actually lose control over their writing. It’s now being processed and remixed and redone for them by a system that they don’t really have access to. I want my students to feel that they have control over what they’re doing, rather than having a system of control that’s imposed on them.
What do you hope that future UBC graduates will understand about AI?
I would hope that graduates of UBC really learn about the history of these technologies, and understand that “AI” didn’t just start back in 2022; there’s a whole history of studying artificial intelligence. For thousands of years, people have wondered about automated or artificial constructs that do work and labour, and that history influences the way we see AI now. I think about the Mechanical Turk, and the realization that a lot of things considered automated often involve content moderation and people working in the background. A lot of hidden labour.
So I hope they look at technology as something that always embodies social and power relations. Technology is not neutral; it creates particular things in the world and can bias certain things.
What would the classroom look like if everyone at UBC was using AI tools which respect Indigenous data sovereignty?
I think that if there was a world where everyone was using these very respectful tools that abided by Indigenous protocols, it would be totally different.
I think there’s a way in which knowledge is treated as something very individualized, something that you extract, break down, and apply yourself, rather than something that is communally shared, that puts you in relation with other people, and that encodes a kind of responsibility. That relational understanding is something that has been thought about in Indigenous communities for thousands of years.
You have people like Elders and Knowledge Keepers who play very specific roles in the community, who have real responsibility for sharing and conveying knowledge, and who have very specific ways of ensuring that it’s given in a good way. One of the things that is underrated is the need for a kind of resiliency for that knowledge. It wouldn’t just be putting something into a prompt to generate an image or a text; it would actually be based on creating very specific relationships. It would mean understanding that this medium or technology carries a very significant relationship, or a series of relationships, that have to be respected, whether it’s connected to the land or to the community, connected in all of these ways, and that it embodies something very specific that needs to be respected.
“How do we imagine a future with AI that contributes to the flourishing of all humans and non-humans?” –Indigenous Protocol and Artificial Intelligence Position Paper
I think maybe that’s the perspective that the authors of Making Kin with the Machines want to bring in, not this commodified large language model perspective. What I like about Making Kin with the Machines, and also the later AI Protocols, is that you get a lot of different voices. We use the term Indigenous, but there’s a lot of diversity, a lot of different perspectives. So it’s about bringing those philosophies to the fore and asking: what if another being emerged from this technology? What are our responsibilities to that being?
I’m thinking more, especially in the course I’m teaching now on Indigenous media and information, about this Mi’kmaq concept of msit no’kmaq, which means “all my relations.” That’s also a shared philosophy with other nations. It’s not just about you; it’s about all the things that you are interconnected to, all the relations that you have, all the different ways that you are positioned. What are you in relation with?
How has AI influenced your own area of focus/research?
It’s weird because I’m not directly studying generative AI. Where I have thought about AI the most is its impact on Indigenous data sovereignty and data governance. I think data governance is something that has been practiced for a really long time in every Indigenous community, and also something that has been formalized through policies like OCAP®. These frameworks talk about how Indigenous data should be governed and used, and where its use is ethical or unethical.
I worry about someone out there who might want to learn a bit more about an Indigenous nation, or a bit more about particular Indigenous ideas, beliefs, practices, and history, and then an AI response gives them all of this incorrect information, right? Because it’s been trained on historically suspect, inaccurate, or biased information.
I think there can be an opportunity for us to really make sure that if there is any Indigenous information out there, that it’s responsibly shared and responsibly developed. That’s something that I would like to help with. To make sure that there aren’t these distortions of language, culture, and knowledge. To ensure these things are appropriately shared and respected.
Is there anything else you’d like to share?
I guess with AI, I’ve also had a bit of a concern about its promotion as an educational technology, because educational technology has always been very fraught at the university.
We really do need to think about its influence on classrooms, especially because sometimes I feel like there’s a bit of a mixed message. On one hand, we’re saying to students that it’s up to you to set these policies, and we’re talking about academic integrity and honesty and all of these things. But at the same time, there’s this broader promotion of AI technology and its use, and that creates some headaches for people. I think having some clearer guidance could be good…
Even with something like Turnitin or these plagiarism detectors, there’s the question of how they might actually violate students’ intellectual property. We have to remember our students have rights. It’s very important that they understand they have those rights to their work and to what they’re producing. I don’t want to see that violated.
I joke with my students that they should see me as their ChatGPT. I am happy to guide them. Part of teaching is also making sure they’re not afraid to ask you questions or approach you. What we can do is really try to provide more support for that, so that faculty have more time for their students and students aren’t feeling like, “Now I have to use this system because I don’t have enough time, I’m stressed, I’m overworked.”
You know, maybe we can create something with more of a balance, something that helps students out and really brings them in, so that they see you as somebody who can help them.
So maybe that’s the final thought I’d like to leave with.