AI & Business

Ethical Issues in AI – A Closer Look

It seems everyone is talking about AI these days, particularly since ChatGPT was unveiled to the world at large in November 2022. Artificial intelligence has been the focus of many business leaders for the past decade, but it was a tough concept for most people to firmly grasp until they could simply type any question and get a human-like, articulate, and detailed response from ChatGPT.

The promise of AI capabilities for businesses, and society at large, is most often the subject being discussed. In this article we’ll delve into a topic that gets less attention but is nonetheless a serious matter: the ethics of AI. While AI leaders push forward with rapid progress, it’s important to assess not only what they achieve, but how those achievements come about. Some of civilisation’s greatest achievements, such as the Temple of Olympian Zeus and Angkor Wat in Cambodia, were built on the backs of slave labourers. As society advances with achievements more technical than physical, can we break the pattern of exploitation?

Exploring AI ethics requires more than looking at labour practices. We must also consider equality among the creators of AI models, unconscious bias in AI, the representation of diverse groups in the datasets used to train AI models, and the environmental footprint of these systems. If we refuse to pay attention to these issues now, we may find ourselves in a world where AI is not our ally, but rather our biggest villain, causing the most pollution and further discriminating against already marginalised groups.

The aim of this article is to start a discussion about AI ethics, to inspire individuals to offer new AI capabilities in an ethical manner, and to look for answers to these challenges while there is still time.

The Range of Ethical Issues Facing AI Innovators

The most significant artificial intelligence questions may not be technological, but rather about potential ethical impacts on society. AI can be used for many benevolent purposes such as making businesses more efficient, improving the environment, supporting human health, and improving public safety. But it can also be used for malign purposes such as disinformation, political suppression, or human abuse. The decisions faced by AI teams are not black and white; there are numerous shades of grey, and initiatives that start with good intent can veer into potentially harmful territory with just a slight shift in business objective or approach. This is why, on May 11, 2023, the European Parliament endorsed new rules on artificial intelligence, advancing a human-centric approach in Europe. The proposed regulations aim to ensure transparency, risk management, and ethical development of AI systems.

Luckily, it seems that most business leaders recognise the risk is real. In Deloitte’s report State of AI in the Enterprise, nearly all business leaders surveyed expressed concern about the ethical risks of their AI initiatives. The risk is not theoretical; it is already materialising. Nine out of ten respondents to a 2020 Capgemini Research Institute survey indicated they were aware of at least one instance where the use of AI had caused ethical issues for their business.


I see some concerns arising around censorship - if we try to prevent certain unwanted biases in our model, do we accidentally make room for even more surprising ones? How much information which we would label as “inappropriate” is necessary for linguistic competence anyway? In the end, the best thing about LLMs is not what they can or can’t do, but how they provoke new debates around old problems.

Michał Stańczyk, Research Engineer, SentiOne Automate

Is AI Built in Digital Sweatshops?

To fully explore the ethics of AI, it is essential to assess the impact of artificial intelligence on the workers hired to support it. The example of OpenAI’s development of ChatGPT illustrates the interplay of ethical issues facing AI development teams. In an effort to remove bias and hateful messages from its chat platform, the company contracted an outsourced team of moderators to review thousands of ChatGPT responses, weeding out messages that were considered racist, sexist, inappropriate, or hateful.

However, when a TIME magazine feature detailed the specifics of the work arrangement, it raised the question of whether OpenAI had done the right thing the wrong way. At the centre of the issue was the offshore hiring of Kenyan workers who received wages between $1.32 and $2 per hour (for comparison, the minimum wage for a receptionist in Nairobi is around $1.52 per hour). The work involved categorising and labelling sexual, violent, and hateful messages and images, and the employees interviewed by TIME reported being mentally scarred by it. The outsourcing firm that contracted the Kenyan workers suggested the work boosted the Kenyan economy and provided critical funds to the workers, but others might label the operation a digital sweatshop. OpenAI eventually terminated the contract, likely to minimise the reputational risk of paying some of the lowest hourly rates in the world while building one of the world’s most valuable AI organisations.

Many corporations are secretive about offshoring operations to low-wage countries, but there is certainly evidence that some of the biggest players are involved. Around the time OpenAI was ending its involvement with Kenyan workers, another TIME magazine feature reported on 200 workers in Nairobi performing similar content moderation tasks for Facebook for as little as $1.50 per hour.

How Training Large Language Models Contributes to Pollution

Training Large Language Models (LLMs) comes at a significant cost in terms of energy consumption and potential CO2 emissions. The sheer scale and complexity of these models require enormous computational power, which translates into massive energy requirements. 

A paper from the University of Massachusetts Amherst stated that “training a single AI model can emit as much carbon as five cars in their lifetimes.” And that’s considering only one training run. One of the most popular models, GPT-3, was estimated to have consumed 936 MWh over just 34 days of training. To put the scale in context, data centres, including those that train and run AI language models, account for around 1% of energy-related carbon emissions worldwide, according to the International Energy Agency.
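To make those figures concrete, here is a rough back-of-the-envelope calculation. The grid carbon intensity used below (about 0.4 kg of CO2 per kWh, a rough global-average assumption of ours) is not part of the GPT-3 estimate, so treat the result as an order-of-magnitude illustration rather than a precise figure.

```python
# Back-of-the-envelope footprint of a single GPT-3-scale training run.
# The emission factor is an assumed global-average grid intensity, not a
# figure reported for GPT-3 itself.
energy_mwh = 936            # estimated energy for one training run (from the text above)
training_days = 34
kg_co2_per_kwh = 0.4        # assumption: average grid carbon intensity

avg_power_mw = energy_mwh / (training_days * 24)          # continuous power draw in MW
co2_tonnes = energy_mwh * 1_000 * kg_co2_per_kwh / 1_000  # MWh -> kWh -> kg -> tonnes

print(f"Average power draw: {avg_power_mw:.2f} MW")         # ~1.15 MW, day and night, for 34 days
print(f"Estimated emissions: {co2_tonnes:.0f} tonnes CO2")  # ~374 tonnes under this assumption
```

Under that assumption, a single training run works out to roughly 370 tonnes of CO2, and every retraining or large-scale experiment during development multiplies the footprint.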

With the AI industry experiencing rapid growth, there is a growing chorus of voices urging the recognition and reduction of its environmental footprint. While numerous industries have made commendable strides in recent years to mitigate their negative impact, including the adoption of renewable energy, implementation of energy-efficient measures, and adherence to regulatory frameworks, the AI industry has so far managed to evade similar accountability. Hopefully not for much longer.

Gender Equality Among Creators of AI

One of the fastest-emerging AI questions concerns gender equality among the teams creating artificial intelligence systems. A 2020 World Economic Forum report found that women hold only 26% of data and AI roles in the workforce, and the OECD estimates that women are 13 times less likely than men to file for a technology patent.

Despite representing such a small percentage, women are leading some tremendous work in the field of AI. Women like Adriana Bora, who is using machine learning to help combat modern slavery; Anu Meena, who is using AI to help farmers reduce food waste; and Alice Zhang, who is using AI to develop new drugs for diseases like Parkinson’s and Alzheimer’s. Our industry needs more women bringing innovative visions like these to life. Without women in AI, the world will miss out on brilliant new ideas and risk reverting to a time when women’s issues were overlooked in healthcare, education, politics, and other fields. After all, the purpose of AI should be to bring people closer together and improve the lives of everyone, not just a chosen few.

The massive gender gap will continue unless organisations prioritise gender equality and the diversity of their teams. It illustrates that AI ethics are not solely in the hands of product development teams, but also involve Human Resources and many other professional functions, as well as company-wide mandates and directives.


The purpose of AI should be to bring people closer together and improve the lives of everyone, not just a chosen few. If we refuse to pay attention to these issues now, we may find ourselves in a world where AI is not our ally, but rather our biggest villain, further discriminating against already marginalised groups.

Bartosz Baziński, CEO, SentiOne

Unconscious Bias in AI

There have been some famous examples of AI bias, including one that led to Amazon discontinuing an AI-based hiring tool. Meant to filter and select top candidates from a pool of CVs, the bot was designed to learn from previous patterns of hiring. The problem was that those previous hiring patterns reflected a male bias, and as a result Amazon’s hiring tool was biased against women.

Microsoft’s Tay chatbot was quickly abandoned after it began spewing a disturbing level of hateful and racist responses on Twitter. The bot had been trained using data from the internet, which undoubtedly conferred some bias, but things got much uglier when groups targeted Tay with hateful messages in a deliberate and coordinated effort to corrupt the chatbot’s responses. While these examples generated headlines and media attention, many others don’t. Nearly two-thirds of respondents to the Capgemini study AI and the Ethical Conundrum indicated they had experienced issues with discriminatory bias in AI systems.

AI bots are not inherently biased, so why are we seeing bias in AI? AI systems are trained on existing data. ChatGPT, for example, was trained using billions of words from web pages across the Internet. So the AI system essentially inherits a form of the collective consciousness of humans from the messages we share, and unfortunately but undeniably, that includes messages of hate, racism, sexism, and violence. Furthermore, it is nearly impossible to train an AI system on a dataset that gives equal representation to every minority and underrepresented group.
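To illustrate the mechanism, here is a minimal, hypothetical sketch (our own toy example, not code from any of the systems mentioned above): a classifier trained on historical hiring data in which one group was systematically passed over will reproduce that skew, even for equally skilled candidates.

```python
# A toy illustration of bias inheritance (synthetic data, not a real system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # 0 = majority group, 1 = minority group
skill = rng.normal(size=n)         # the signal a fair process would rely on

# Historical labels: skilled candidates were hired, except that minority-group
# candidates were passed over half of the time. That is the bias in the data.
hired = ((skill > 0) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Score two equally skilled applicants who differ only in group membership.
for g in (0, 1):
    p = model.predict_proba([[g, 1.0]])[0, 1]
    print(f"group {g}: predicted probability of being hired = {p:.2f}")
# The model has learned the historical discrimination, not just the skill signal.
```

Nothing in the code is malicious; the skew comes entirely from the labels the model was given, which is essentially how the Amazon hiring tool described above went wrong.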

Artificial intelligence bias is an ongoing challenge for development teams. Manual quality checking is expensive and difficult to scale, but there is a growing suite of tools to help with AI bias. Organisations like Aequitas offer open source tools to measure bias in datasets. The open source machine learning library Themis-ml uses bias-mitigation algorithms to reduce bias. Similarly, IBM’s AI Fairness 360 is an open source toolkit that includes bias-mitigating algorithms to help improve machine learning models. Another useful resource is Google’s Responsible AI Practices, which provide recommendations to help detect unfair biases in machine learning systems.
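As a concrete example of what such a toolkit looks like in practice, here is a minimal sketch using IBM’s open-source AI Fairness 360 library (the aif360 Python package). The CSV file and column names are hypothetical placeholders of ours; the metrics and the Reweighing algorithm are part of the toolkit.

```python
# A minimal sketch of measuring and mitigating dataset bias with AI Fairness 360
# (pip install aif360). The file name and columns are hypothetical placeholders.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Assumed columns (all numeric): 'hired' (0/1), 'sex' (0/1), plus feature columns.
df = pd.read_csv("hiring_history.csv")

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure bias in the raw data: a disparate impact of 1.0 means parity.
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())

# One of the toolkit's mitigation algorithms: reweigh training examples so the
# favourable outcome becomes independent of the protected attribute.
reweighed = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)
```

The reweighed dataset exposes per-example instance weights, which most training libraries can consume (for instance via a sample_weight argument), so this kind of check can sit inside an existing model-development pipeline rather than being a separate audit.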


The main source of problems is misrepresentation in data. AI chatbots can only learn based on the input provided by humans. If our training data ignores a certain target group, type of accent or minorities, it will skew the chatbot’s ability to understand their questions, intents and motivations.

Agnes Uba, Head of Marketing, SentiOne Automate

 

Machine learning ethics may not be at the top of the list of AI problems for most organisations, but many are taking notice of the potential legal risk. Imagine AI analysing CVs and discriminating against certain groups in the recruitment process, or a self-driving car making a mistake and causing a serious accident. According to Capgemini, six out of ten organisations surveyed had attracted legal scrutiny related to AI applications. AI legal issues are a challenge for regulators and lawmakers to address. Some of the ethical concerns with AI systems are targeted in legislation such as the “Right to Explanation” clause of the EU General Data Protection Regulation (GDPR), as well as some sections of the California Consumer Privacy Act. We’ll undoubtedly see AI feature in lawsuits more often, and entrepreneurs will increasingly need to consider the legal dimensions of AI.

One step many companies are taking to mitigate risk is adopting a clear charter that sets out the company’s intent. A 2021 survey from the Capgemini Research Institute found that nearly half of organisations surveyed had defined an ethical charter to provide guidelines on AI development. Once again, we see the ethical issues of AI reaching far beyond development teams; organisations will increasingly need a savvy legal team or counsel versed in the rapidly evolving domain of AI law.

Want a Closer Look at AI in Practice?

As a company developing AI conversational chatbots and social listening tools, we at SentiOne believe it’s important for us to be a part of the conversation on the ethical issues of AI. Visit our website to learn more about our AI research team, and our SentiOne Automate product features. You can also book a demo for a first-hand look at our AI capabilities, or explore the opportunity for an AI-based chatbot for your business with our Bot ROI calculator.


 

Article Summary

The article explores the ethical issues surrounding AI and highlights the need for responsible development and usage. It emphasises the importance of considering labour practices, equality among creators, unconscious bias, the representation of diverse groups in AI models and datasets, and the environmental footprint of LLMs. We look at a recent example of OpenAI outsourcing content moderation to low-wage workers in Kenya, at the gender gap in AI roles, and at how training AI models contributes to carbon emissions. We also mention the potential legal risks associated with AI’s ethical issues, emphasising the need for ethical charters and legal counsel. Overall, the article calls for a discussion on AI ethics and the need to address these challenges before it’s too late.