Can we do quality education at scale? Will AI fix everything?
I want to talk about two things in this post. One is the classic argument between scale and quality in education, and the other is artificial intelligence, which often comes up when you have this conversation. So let me start by asking a rhetorical question that I'm going to immediately answer. Can we do quality education at scale? I'm not sure. I think there are some things that can work at scale and other things that honestly fall apart, and those things that fall apart I'm not convinced AI will help with. I do think there are some things where AI has the potential to play a role, but there are so many ethical and sustainability issues with generative AI at the very least (which is what many people mean when they say AI) that I'm not convinced it is worth the trade-off except in some very niche contexts. There also seems to be a lot of interest in taking artificial intelligence that has been trained for one purpose and applying it to completely different purposes for which it's not suitable.

There is one fundamental aspect of education that I do not believe AI will ever be able to reproduce, and that is connection: human-to-human connection. When we are learning, the feedback we get has meaning not just because it is about our work and about how our work relates to a broader performance standard, showing where it is stronger or weaker and how we can improve. It has value because it comes from our teacher: our teacher who cares about us, who knows us, who knows what we want to achieve and what nudging and support we need to get there. Fundamentally, artificial intelligence cannot care about a person. It can pretend to care, and it can pretend in ways that may be indistinguishable from the outside, but that is still not a human-to-human connection.

So let's talk about the things that can be done to deliver education at scale. The important thing here is that these efforts become considerable logistical challenges, but we can certainly do some things. Relevant and meaningful (though depersonalised) feedback can happen at scale if you have the right processes; you can refer back to my post about feedback banks for ways that can be done. You can certainly have classes with more students in them, although that comes at a cost, even if you maintain a reasonable staff-to-student ratio. We can quibble about exactly what that ratio is, but I'm going to say it is certainly not 60 students to one teacher, and I don't think it's 40 to one; I think it's considerably less than that. Let's say you do have 60 students, though, and three teachers in the same class. On paper, that may be the same as having one class with 20 students and one teacher. But how do you ensure the consistency of the teacher-student connection when the student could go to any of those three, and none of the three has the opportunity to really get to know that student? Those three connections are weaker; I'd argue that even added together they amount to less than a single connection with a good teacher.

Another thing I hear is the sense that AI will give good feedback and compensate for students not getting good feedback from their teachers. Fundamentally, I would say that is aimed at the wrong problem; it's misdirected.
The real issue is that we haven't addressed the problem at the source, which is that we need to make sure the people who are those teachers actually have the time, resources and training to do that feedback well. I don't see that come up when these arguments arise. As institutions cut costs, for the various reasons that arise when you go big and prioritise profits, people are going to take shortcuts, and you don't solve that by injecting AI into it. You solve it by addressing the labour shortage and reviewing your priorities.

Pastoral care also can't be done at scale. That is a human-to-human connection. If you're trying to optimise the number of students in crisis who are seen in a given period of time, I think what you're really optimising for is an OHS violation. You're not actually helping students in need, because it takes time to have a conversation like that. It requires that your teacher has the space around their other responsibilities to check in with their students: to set aside an hour and, noticing that a student has been really different to how they've been in the past, that there are some serious changes in their behaviour, take the student aside and say "hey, I've noticed these things, I am concerned because... what's going on for you?" and then be that listening ear for them. Now, one might say an AI can take on that role. It cannot. It absolutely cannot. In fact, it is dangerous for an AI to take that role. Again, AI does not care. It cannot care. It does not have empathy. It does not have compassion. This is not what these systems are designed to do. They can sound validating, but they can validate people into making dangerous decisions, and there have been many news articles where this has happened. People have died. That is unsafe, and we can't pretend otherwise.

Now, this is the point where I would say I am pragmatic, but first of all, despite the public conversation, we actually do have an opportunity to not go down the AI path, as much as it's treated as a foregone conclusion. If we collectively said (and we don't even need that many of us to say it) that we don't like this direction as a society, that we do not accept this as the future for humanity, we could collectively refuse. We will not accept the morally and ethically fraught ways these systems have been constructed. We do not accept their deeply biased nature, or that control of them has been placed in the hands of a small number of wealthy elites. We do not accept the consequences for our environment, for our planet, where the data centres for these AI systems churn through energy and clean water that could support a nation. We actually have the opportunity to say no.

With that critical point made, I do think there are high-risk and low-risk uses of artificial intelligence, and I think this is hard to pin down to a really tightly defined set of rules. But when we are thinking of using an AI system in a given context, we need to start by asking: what are the possible outcomes of this usage? What could happen? What decisions are being made? Let's, for instance, imagine a context where an AI scans through students' work and suggests whether they are cheating. That is a high-stakes decision. A very high-stakes decision, and one that, from news stories I've heard, has in fact already happened.
Students are vulnerable, they are at a very vulnerable age, and these decisions can disrupt their entire degree, can prevent them from graduating, and can put them in unsafe situations. I have had many conversations with students where we were investigating academic misconduct and I was concerned for their safety. That is high stakes, and we must reject that usage at all costs. Alternatively, you might use an AI as a grammatical helper to review your writing style: say, to help you identify where you haven't used the citation standard correctly, or where it's hard to understand the point you're making. I still think you should run these things past your teachers, but I also know that help-seeking can be quite challenging, and the fear of showing that you don't understand something, of showing that you're uncertain, can sometimes feel overwhelming, so that may be lower stakes. Again, I think it's hard to judge for certain. I can definitely name things that are high stakes, but for things that might be low stakes, we really need to make sure we have examined all the possible outcomes before we say "this is a low-stakes situation, we don't think the consequences are very high, so we will accept the use of it in this case."

But yes, ultimately I really think people need to be thinking about AI very carefully: thinking carefully about how, and if, we integrate artificial intelligence into teaching and learning. And I'm not even just talking about generative AI, although that is the thing people mostly mean when they talk about artificial intelligence. Remember that it is not a knowledge retrieval system; it is a "what does the human want to hear?" prediction system (which people forget). So think critically about the use of AI, but also understand that when we're talking about education, there are some things that can work at scale and some things that can't.

Yes, you can design for meaningful feedback at scale. No, it's not a freebie: there's a considerable amount of overhead you need to plan for from a process perspective to make sure it works, and you need to train people in how to do it. You can have more teachers and more students in classrooms, but it will diminish the sense of connection for those students, and ultimately you can't keep building classrooms forever, and we don't want students in classes at 4:00 in the morning, sleep deprived and not really in a position to engage in effective learning.

There are a number of other factors. You could, of course, generally design a meaningful assessment for any number of students in your course, but there are some forms of assessment that become infeasible at large class sizes, and honestly, from my perspective, they're some of the more interesting ones. I would love to see more student-directed, project-based assessments, but they require a lot of interaction with your teachers, and having a teacher that you trust; that's hard to do at scale. The assessments that are easy to do at scale are things like tests and exams, and I am deeply concerned about a long-term trend towards that practice. Assessments need to be representative of what students can do and of the variety of ways students demonstrate those skills. When we lean too heavily on tests and exams, we lose that validity of assessment. We also introduce confounding variables, in that there are a number of students who do not do well under test conditions. At all.
They may actually be very bright students, but under test conditions they cannot perform well, and they simply never will. That becomes a consequence of going to scale: some forms of assessment become more likely, and those forms may actually provide a poorer picture of what students can do, and may even actively discourage students from pursuing educational pathways that would otherwise have been right for them in the absence of that particular assessment decision. I also think the ability to get nuanced feedback directly from your students is harder; understanding what they want and need becomes radically harder at large scale. And, as a colleague of mine whom I respect enormously put it, when you have a big class, things that almost never come up are actually guaranteed to come up multiple times, so you're forced to firefight all these niche challenges, which takes away time that could be spent on other matters. Yes, you can add more staff, but adding more staff sometimes doesn't fix things and increases overhead at the same time.

Overall, my fundamental thesis here is that we can't guarantee that education at scale will be quality education. Some things will be lost, and AI might be able to assist with some of them, but we should not assume it will fix all of them. It is not some silver bullet, and there are deep problems and concerns we should have with the use of AI in educational settings. I see it used far too uncritically, and I think we need to go into these matters with open eyes and not always assume that bigger is better.