
Is AI bad for our planet?

16/1/2024

AI technology boomed in 2023, as more and more of us embraced generative AI in both our business and personal lives. 100 million users were logging onto ChatGPT every week by November, while one-third of organisations in a recent McKinsey survey reported using some form of this tech. 

As generative AI embeds itself ever deeper into society, there are some weighty ethical considerations for its users and the wider world. For those of us in the social impact space, concerns about the mammoth environmental impact of large language models, and their inherent bias, are especially pertinent.

So, can we make the most of these technologies in a way that minimises the harmful effects? And does AI’s potential as a force for good outweigh the negative impacts? Here’s our take… 

The environmental impact of AI

Generative AI is highly energy intensive. This starts with development: training a large language model requires heavy-duty computing power. The more complex the model, the more power is needed to feed it data and refine its processes. It's like a workout stage, burning energy as the AI trains and strengthens.

These language models continue ploughing through energy when they’re put to work in the real world, in what’s called the “inference phase”. Behind the scenes as AI serves up responses to our prompts, huge data centres of hardware are whirring away. On top of operating processors and chips, a huge amount of energy is used just on keeping this hardware cool. 

By 2040, it’s estimated that emissions from the Information and Communications Technology industry will make up 14% of global emissions, largely thanks to data centres. Analysts are also predicting AI’s carbon footprint could even outgrow that of bitcoin mining. 

[Image: Global emissions from AI – Canva AI-generated text-to-image]


Is the future greener for AI?

There’s no doubt the environmental impact of AI’s growth spurt needs to be controlled. Tech developers like Google and Microsoft have already pledged to run AI systems on carbon-free or renewable energy in the near future. 

We can’t rely on tech firms to act responsibly of their own accord, though. The realm of generative AI needs to be regulated, with clear guidelines and standards that enforce green practices and transparency from developers. 

In the meantime, what can those of us using generative AI do about its environmental impact? We might feel at the mercy of tech innovators and policymakers, but getting clued up and raising awareness are great first steps towards being part of the solution – followed by adopting greener AI technology where possible.


The problem of bias in generative AI

Generative AI is the product of whatever information we feed it. Its diet of human-made data reflects the biases that already exist in society related to stereotypes and inequalities. So inevitably, generative AI algorithms and output are also inherently biased. 

Even more worryingly, AI can make these existing societal inequalities even worse. For example, Bloomberg tested AI text-to-image tool Stable Diffusion for bias related to job titles. The reporters found that the images it generated overrepresented people with darker skin tones in lower-paid jobs. 

Examples like this of AI bias in action (of which there are many) highlight the urgent need to “debias” the technology. That starts with diverse and inclusive datasets in AI training, reviewing them for over- and under-represented groups. 

Like with the environmental impact, better regulation is also essential: we need policies and practices for responsible AI development. Change is coming in this respect, with a historic EU deal recently establishing landmark laws touching on areas like fundamental rights, democracy and environmental sustainability. Similar legislation from countries like the US, UK and China is also on the way.   

[Image: EU regulation of AI – Canva AI-generated text-to-image]


Using biased AI responsibly

Making the technology fairer isn't going to happen overnight, and perfectly unbiased AI is perhaps a pipe dream anyway. So, what can businesses do to mitigate potential biases while using these tools in the meantime?

Top of the list should be training. Give your team a deep understanding of how AI works, the biases it carries, and how to write diverse prompts that use representative language for the most inclusive, intersectional results possible.

For example, say you’re generating content on a topic that’s traditionally been labelled under “men’s interests”, like motor racing or gaming. To avoid producing content that alienates or discriminates against women, you’ll need to make clear in your prompting that you want the AI to use inclusive, gender-neutral language. Before you even write the prompt, that demands having an awareness of where bias might impact results. 

Alongside educating your team, create your own AI policy to promote ethical usage and make sure anyone using AI knows it inside-out. And don’t stop at the training phase – have processes ready to continually assess generated content for biases, adjusting your input where needed and considering if your AI model is fit for purpose. 


The positive potential of AI

Despite the complex problems, there’s a lot to be said for how AI can help create positive change – when it’s harnessed thoughtfully. 

Perhaps ironically given its own environmental downsides, generative AI is enhancing environmental sustainability. Architects are using it to generate greener designs, for instance, while it’s identifying energy-saving patterns in things like supply and demand forecasting. 

The time and money-saving benefits of generative AI for impact-led organisations could also be transformative. As TrendWatching has highlighted, less time on labour-intensive processes means freeing up budgets and reducing stress. A common example is using an AI-powered chatbot to answer basic enquiries, leaving staff free to focus on more pressing tasks.

Other non-profits are tapping into the technology’s creative potential to help fundraising efforts, in ways such as creating more tailored content for donor segments. Meanwhile, for impact-led businesses, there’s generative AI software out there to make ESG reports quicker and easier too. It collects and summarises information from different systems and neatly pulls it together into your reporting framework. 

[Image: JustGiving story enhancer]
JustGiving's new AI-powered tool allows fundraisers to tell richer, more compelling stories on their pages – a feature that can significantly impact fundraising.


From these few examples alone, you can see how there’s huge scope to further your mission and do more good with a helping hand from generative AI. However you choose to engage with these tools, the important thing is to make informed decisions. Understanding the technology is the first step to using it in a responsible way that aligns with, and doesn’t undermine, your values.  

Those of us leading socially responsible businesses and embracing AI can also be powerful advocates for its ethical use and development. Let’s set an example for others and engage in debate, to help make it a force for good. 

Do you feel positive about the potential of AI for impact-led businesses? What are your biggest concerns about its use? Continue the conversation on LinkedIn – tag us at Social Supermarket