What to consider before adopting AI
I was recently asked to put together some guidelines around generative AI for teachers, and after discussing it, a few friends suggested my commentary would be helpful for the broader community, especially for anyone wanting to temper a sometimes too gung-ho approach. With that as context, I’m going to share some thinking points that are important to consider before moving ahead with an AI-based approach to a problem.

How to approach these issues

The point of this post is not to give a black-and-white view of AI as intrinsically good or bad; rather, it is to bring attention to some important yet often underappreciated issues in this space. The hope is that readers will keep these in mind and do the uncomfortable work of wrestling with a complex and sometimes fraught space to decide whether any particular use of AI can be justified to ourselves. This is how we go from being passive users to critical contemplators, engaging with informed consent. Particularly where we advise others (e.g. students) on matters of AI usage, we must ourselves model good practice before we can expect it of them.

Note: some of this applies to AI in general, while other points are specific to generative AI (a.k.a. randomised text and media prediction techniques).

Issues to consider and understand

Who controls the AI?

As an inclusive educator, the first point that comes to mind is that of power dynamics, which exist across our society and are central to the barriers that many face in the world. The companies that control at least the large language models (LLMs) of generative AI tend to be run by the so-called “tech bros”: a very privileged group of millionaires and billionaires, highly invested in maintaining that wealth in ways that are, at best, indifferent to the effects on broader society.
This is not a neutral group to control such a widely used technology; in fact, one need not look too far to see one of them using these technologies to spread disinformation, misinformation, and child-exploitation material. This should not be seen as the exception but rather the intention, with Musk merely being the most transparent about his goals.

So why should this influence your decision-making? Firstly, knowing that responses will not be neutral and may carry a specific agenda matters when considering how much the results can be trusted. This is in addition to the baseline level of misinformation that text-prediction techniques always produce (people sometimes forget that generative AI is a predictive technique, not a knowledge-retrieval technique). Secondly, given that these figures do not have the best interests of humanity at heart, how much do you wish to contribute to their business model? Understand that while these LLMs are woefully unprofitable, every usage adds to their user count, which may help retain investors; provides free (sometimes private) data for training, some of which may resurface for other users or generate additional income; and comes on top of the direct funding they receive through fees from paid personal or workplace accounts. Don’t want to support these companies but still want to use generative AI? Some models can be run on your personal machine (though they will be slower) if you have the technical knowledge.

Resource usage

Data centres are big. Very big. They use up huge amounts of (clean) water and energy. Many analyses I’ve seen show that the contribution of any individual prompt (even for generating video, which is more intensive) is quite tiny.
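As a brief sketch of the local-model option mentioned earlier: the commands below use Ollama (ollama.com) as one example of a local runner. The choice of tool and the “llama3.2” model tag are my assumptions for illustration, not a recommendation from the text, and a personal machine will run such models far more slowly than a data centre.

```shell
#!/bin/sh
# Sketch only: running an open-weight model locally with Ollama.
# Assumes Ollama is installed; "llama3.2" is an example model tag and may change.
if command -v ollama >/dev/null 2>&1; then
    ollama pull llama3.2                                   # one-time model download
    ollama run llama3.2 "Explain photosynthesis briefly."  # inference happens entirely on this machine
    status="ran locally"
else
    status="ollama not installed"
fi
echo "$status"
```

Because nothing leaves your machine, no prompt data reaches a third-party company, which addresses both the business-model and privacy concerns above.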
That said, much like the point above, this is about supporting a business model, and this business model takes significant fresh water and power that could otherwise be contributing to hundreds of thousands of people’s quality of life. It is much like the many stories of big multinationals who run factories in parts of the developing world where they won’t be held to account and end up destroying the lands and waters that local people and animals rely on, for instance this case from Mexico.

This raises the potential for collective action. If a significant portion of regular people make it clear that we will not support such practices, the companies engaging in them will be forced to adapt. While I have yet to see this in the AI space, many companies in other fields have demonstrated corporate accountability and sustainable practices, and supporting companies like that allows us to shift corporate expectations. Imagine an AI company with B-Corp certification!

Theft and disregard for authorship

For those of us engaged in teaching and knowledge-related areas, it should be concerning that every LLM so far is predicated on the very practice that would have any of us shamed and unable to continue practising, the very thing we tell our students they must never do: claiming the work of others as your own without attribution. This is exactly how the technology works. It requires huge amounts of “content” in order to produce predictions users would be happy with. These organisations have trawled the internet and broken copyright to steal thousands of published works (literature, music, really any creative human work) and repurpose them for their models. None of these creators were attributed or paid for their works; they were not asked for permission, nor even told that it occurred. Furthermore, around the world these companies are pushing to retroactively change copyright law to make their illegal practices legal.
Why should this influence your decision-making? If you work in a knowledge field, these services may well have stolen your work already. Indeed, every prompt to an LLM freely gives away even more of your intellectual creations and bypasses your own copyright. If you work for a company, this may even be a technical breach of your work contract.

Effects on the already marginalised

This is a point which applies to all kinds of AI. AI tools are only as good as their training data, and with so many in the hands of a highly homogeneous group of people, the data fed into these models tend to be highly biased. Women, people of colour, queer people, really any marginalised group, are likely to be insufficiently represented, leading to poor outcomes for those communities. This can mean replicating harmful stereotypes, but it also shapes decision-making such as access to home loans and policing practices. Interestingly, people tend to believe that AI systems are likely to be less biased than humans, without recognising the role of bias in training data and the frequent lack of opportunities for marginalised peoples to be involved in designing systems that will affect them.

Some may also feel tempted to use generative AI to simulate characters from marginalised groups rather than inviting the participation of actual people from those groups. This has a disastrous effect on trust, especially for members of those groups, amplifying feelings of being an outsider, unwelcome and unvalued. Particularly for those in helping roles (e.g. teachers, counsellors) which rely on supportive relationships built on trust, this practice would destroy any possibility of working together meaningfully in the future.
One other thing I like to point out is that AI (including generative AI) also has the potential to equalise experiences for those with disabilities. However, one must understand that this is in the context of a world which largely does not cater to disabilities at all, despite this being the law in many nations. Thus disabled folks are forced to rely on such tools when the experiences they go through should always have been designed better, drawing on the many resources that highly profitable organisations have on hand but choose not to invest in that way.

Job losses and de-skilling

The last point here is an effect on us personally. When we offload to an AI a task which could have been done by a human, we are doing two things: (1) taking away an opportunity for ourselves to develop or maintain a skill which may be important to us, and (2) taking away an opportunity for a human expert to be paid for work they are trained to provide. It certainly is more convenient to rely on AI (and generative AI is the most obvious case of this). Our brains have evolved to be highly efficient but still use up considerable bodily resources, so it shouldn’t surprise us that cognitively difficult tasks are unpleasant and cognitive shortcuts highly appealing. This is the same thinking process that leads many to cheat on their homework (a decision often made in the moment under a high stress load).

Why should this influence your decision-making? We need to consider what skill the use of AI is replacing and how important that skill is to us, and we need to be serious about this. For instance, a programmer who delegates coding and debugging to a text-prediction system will degrade their own skills, making it harder to know whether the generated code is suitable and correct, and whether it introduces problems down the track. We also need to consider how we feel about actual creators losing out on work they would otherwise have had access to.
Do we wish to live in a world without artists? To lose access to that essential social commentary which forces us to grapple with the problems in our world?

Where to from here?

You might like to bookmark this page and bring it up next time an AI solution to a problem is proposed. That way you can review the claimed benefits and weigh them against these issues to decide whether the usage is warranted in that specific case. You may also like to share these points with colleagues and others you interact with who could benefit from a broad view of the issues at play. Remember: individually we are weak, but together we are strong. We can achieve change if we work together.