By Amelia Field
Artificial intelligence has existed for as long as computing itself, but its rise in recent years has been driven by the introduction of devices such as Amazon’s Echo and other smart products. From Netflix suggesting films to us to unlocking our phones with our faces alone, artificial intelligence has worked its way into our everyday lives, often without us noticing. Yet with the growing intelligence and wider use of these machines comes a set of ethical issues. These issues are monumental for the development of artificial intelligence because tech giants are building discriminatory software into their products, often without realising it.
Recent examples include facial recognition software not being able to recognise black people’s faces and recruitment software discriminating against people with disabilities.
Often the discrimination that occurs is not intentional; the training data itself may be flawed. For instance, a dataset containing more men than women (which is then used to train an AI system) may produce results that do not properly reflect the population, because the under-represented group forms too small a sample. A small sample magnifies anomalies, minimises similarities and obscures genuine statistical patterns in the data. Although one flawed dataset may not initially seem problematic, combining multiple flawed sources of training data can produce an entirely inaccurate system.
Another example of faulty AI occurs within facial recognition systems: a disproportionate number of white faces are used to train them, so when these systems are used on black people’s faces they are much less likely to identify them correctly.
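The mechanism behind this can be sketched with a toy example. The data, the feature values and the simple threshold classifier below are all hypothetical, purely for illustration: a model fitted to a training set dominated by one group learns a decision boundary that suits that group and misses members of the under-represented one.

```python
# Illustrative sketch (assumed, synthetic data): a classifier trained mostly on
# one group learns a threshold that fails on the under-represented group.

def fit_threshold(samples):
    """Learn the midpoint between the mean positive and mean negative feature value."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, samples):
    """Fraction of samples the learned threshold classifies correctly."""
    return sum((x > threshold) == bool(y) for x, y in samples) / len(samples)

# Hypothetical single feature: positives for group A cluster near 1.0,
# positives for group B near 0.4, negatives near 0.0 for both groups.
group_a = [(1.0, 1)] * 45 + [(0.0, 0)] * 45   # 90 majority-group examples
group_b = [(0.4, 1)] * 5 + [(0.0, 0)] * 5     # 10 minority-group examples

threshold = fit_threshold(group_a + group_b)  # training set dominated by group A

print(accuracy(threshold, group_a))  # 1.0 — the majority group is classified perfectly
print(accuracy(threshold, group_b))  # 0.5 — every group-B positive falls below the threshold
```

The model is "accurate" on average across the whole dataset, which is exactly why such skew is easy to miss: the failure is concentrated in the smaller group.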
Training on articles that carry human biases may also contribute to discrimination. For example, if an AI system learns from online articles expressing a range of controversial ideologies, it may come to operate on those controversial views itself, with potentially damaging consequences.
Although many sci-fi films and TV programmes tell tales of robots overtaking humans and developing consciousness, a machine is, in fact, only as good as its creator. The biases present in engineers’ minds can be implanted into AI systems easily, and often without thought, and they then surface in a number of ways. This can be overwhelmingly positive, in that an engineer can create a machine that ‘thinks’ in a way similar to a human; on the other hand, it can also result in machines that discriminate against people based on race, gender and disability. Although machines are said to remove bias, it is clear that this is near impossible. Such bias can be particularly damaging in recruitment: screening software used by a company may produce an overwhelmingly white workforce, for which the business may be held accountable because it believed the software to be free from bias.
The use of positive discrimination has come under scrutiny in recent years. By introducing AI, the human element of discrimination is supposedly removed and candidates are picked on the basis of their skills, but this may in turn prevent the AI from producing so-called ‘forward-thinking’ results. Companies may then be accused of programming biased machines and using their supposed impartiality as a front for discrimination. Equally, the prospect of uncovering uncomfortable truths that make the company look bad may lead some companies to avoid artificial intelligence altogether, thereby stunting progress.
Overall, we can see the negative impact artificial intelligence can have on progress towards a fairer society when, through human error, machines learn from biased sources. To combat this, it is important to account for different types of users within this software in order to create a fair and positive experience of AI for all. Companies should also consider whether to use AI at all, since positive discrimination can sometimes help promote a positive image of the company.