
AI is running ahead of its ethical issues

In the Wisconsin city of La Crosse in 2013, Eric Loomis, then in his early 30s, pleaded guilty to eluding police in a stolen car. The judge sentenced Loomis, who had a criminal record, to six years in prison, at the longer end of the possible terms.

So? Well, the judge based Loomis’s prison term partly on the recommendations of an artificial-intelligence, or AI, programme that uses secret algorithms to assess the risk a person poses. In Loomis’s case, the Compas report showed “a high risk of violence, high risk of recidivism”. Loomis appealed against the length of the sentence, arguing that he had no opportunity to evaluate the algorithms and that an assessment based partly on his gender violated his “due process” rights.

The court’s use of AI to sentence Loomis attracted much criticism because it raised questions about the role that ‘big data’ and AI are playing in everyday decisions. Expect more such controversies for society to solve, because AI’s rapid deployment is creating many ethical issues – a gentler way of saying AI is capable of ill as well as good.

The role of AI and algorithms in day-to-day decisions

AI is certainly causing concern. Among potential dangers, AI might be used by despots who want to enforce censorship, micro-target propaganda and impose society-wide controls on citizens. Many think the disinformation, conspiracy theories and echo chambers that AI-driven recommendation engines can promote on social media deepen social tensions. AI can be used in warfare and has the potential to make swathes of workers redundant. AI can act in discriminatory or invasive ways. Many worry about the privacy violations surrounding the data used to train and improve AI algorithms.

Many of the concerns about AI are tied to the nature of the algorithms. People worry that society is handing over decision-making to secret software codes – essentially instructions – that have no understanding of the context, meaning or consequences of what they do. They fret that algorithms are being entrusted with tasks they are incapable of fulfilling and that they magnify the biases and flaws of their human coders and the data fed into them. People are concerned about how algorithms can manipulate behaviour and promote digital addiction. They see that algorithms can be gamed by attention seekers.

People are tackling some of the ethical issues involved. Researchers have withheld AI because of possible misuse. Governments, notably the EU, have acted to protect privacy. The EU is developing an AI code of ethics. Companies are creating principles around AI use and setting up ethical boards to monitor its deployment. Platforms are using AI to inhibit the ability of other algorithms to spread viral extremist content. Data gatherers are better protecting user information. US tech employees are rebelling against AI’s use in warfare.

Lack of concern about online data trails

But not enough might be happening to limit AI’s possible harm. People seem blasé about how their online data trails are used to sway their spending and thinking. Businesses appear far more focused on generating positive returns from AI than on overseeing and mitigating its negative side effects. Autocratic states are using AI to tighten their control over media and communication. When ethical issues are raised, valid rebuttals can result in inaction. Authorities with genuine concerns appear hobbled by the public’s fondness for the cyberworld.

Be aware that AI is being deployed at a faster rate than ethical issues can be properly identified and resolved. The moral concerns encircling AI are likely to become big enough political issues in time to warrant much public scrutiny and government intervention.

To be sure, many of the ethical issues raised are broader than AI. Some of tech’s biggest ethical issues, such as gene-edited babies, lie outside AI. Many of the ethical issues swamping AI are everyday ones as old as humanity – AI is just a new setting for them.

But that fresh setting looms so large that AI is bound to spark controversies, especially since AI’s political weakness is that it is easy to demonise. Expect a rigorous human overlay on AI in due course. The challenge for authorities will be to limit AI’s possible harm and Loomis-style controversies without suppressing its advantages.

Flawed codes

The algorithms that power AI are reams of code that can process data efficiently to assist in making parole, medical, military, work-dismissal, university-admission and many other decisions. These instructions can perform vast analysis within these narrow functions at speeds beyond human ability.

But algorithms lack many human qualities and smarts. These algorithms do not understand cause and effect. They lack common sense, emotion, imagination and any sense of humour or irony. They have no free will. They can have inbuilt biases, generally delivered by the data that drives them. They can be gamed and outsmarted. The ethical issue is: how can society justify handing over vital decision-making to AI when it falls well short of human ability in so many ways?

The ethical cloud over algorithms is highlighted when they are set tasks beyond their design limits. ‘Content moderation’ algorithms that scour for inappropriate content keep much out. But they have often failed to remove all copies of an offensive video because people can alter the footage enough to outwit algorithms that can only look for earlier versions.
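A toy sketch of why filters that only look for known copies are easy to outwit (the bytes here are hypothetical stand-ins, and real platforms use perceptual rather than exact cryptographic hashing): altering a single byte of a file changes its fingerprint entirely, so a blocklist of known fingerprints misses the altered copy.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact-match fingerprint: any change to the bytes changes the hash.
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-in for the bytes of a known offensive video.
original = b"...known offensive footage..."
blocklist = {fingerprint(original)}

# An uploader trims or re-encodes the footage; one changed byte is enough.
altered = original + b"\x00"

print(fingerprint(original) in blocklist)  # True: the exact copy is caught
print(fingerprint(altered) in blocklist)   # False: the altered copy slips through
```

Production systems use perceptual hashes that tolerate small edits, but determined uploaders can still perturb footage enough to escape them – the cat-and-mouse dynamic described above.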

A wider ethical issue is whether AI-dependent platforms should be responsible for the content shared and viewed on them; for now they bear little legal responsibility. Another is whether private companies should be monitoring the ‘cyber public square’ at all – that private companies are acting as censors and judges of what’s appropriate. And what is the responsibility of users in all this?

Another ethical issue to resolve with AI is whether to let algorithms operate in situations with infinite possibilities (such as powering driverless cars on open roads) when, for now, AI works best in defined conditions (such as translation or, in the case of driving, keeping a car within white lines). The death of a pedestrian struck by a self-driving car in Arizona in 2018 highlighted how AI programs can prove fatal in uncontrolled situations. A central ethical issue here is whether the hope that autonomous vehicles might one day reduce road fatalities is worth the loss of life in the experimental stage.

Another prominent flaw is that algorithms can promote the biases of their code writers and of the data itself. The common problem here is that data, as a record of the past, feeds algorithms the prejudices of the past. While no one defends discrimination per se and code writers can attempt to overcome this flaw, the ethical issues require subjective solutions.
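A minimal sketch, with invented numbers, of how past data smuggles past prejudice into a model: if historical decisions favoured one group, a naive model fitted to those records simply reproduces the old approval gap – no malicious code required.

```python
# Hypothetical historical loan decisions (group, approved): past officers
# approved group A far more often than group B.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

def approval_rate(records, group):
    decisions = [outcome for g, outcome in records if g == group]
    return sum(decisions) / len(decisions)

# A naive model that predicts the historical majority outcome per group
# perpetuates the old prejudice, regardless of any individual's merit.
model = {g: round(approval_rate(history, g)) for g in ("A", "B")}
print(model)  # {'A': 1, 'B': 0}: approve A, reject B
```

The bias lives in the training records, not in the code – which is why correcting it requires a subjective judgment about what the data should have said, not just a bug fix.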

Data with gender, race and other biases and the limits on the abilities of algorithms are prompting calls for algorithms to be regulated. Companies could come under pressure to reveal their algorithms, as France is doing with those used by the government. The tech industry, however, resists such transparency, saying their formulae are intellectual property.

Such ethical issues around AI are prompting reassessments of the technology, as shown by talk of a second ‘AI winter’ (when research and deployment stalls), a surge in warnings of its potential harm, and by a spate of books highlighting its flaws, such as Meredith Broussard’s Artificial Unintelligence.

While the Loomis appeal was rejected by the Wisconsin Supreme Court in 2016 and the US Supreme Court refused to hear the case in 2017, the ethical issues it raised will be among many surrounding AI even as its deployment brings great benefits to society.

 

Michael Collins is an Investment Specialist at Magellan Asset Management, a sponsor of Cuffelinks. This article is for general information purposes only, not investment advice. For the full version of this article and to view sources, go to: https://www.magellangroup.com.au/insights/.


2 Comments
Tony Reardon
May 16, 2019

This sounds like the radio commentators were channeling Isaac Asimov's famous (fictional) three laws of robotics:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov wrote numerous stories exploring the possible conflicts and situations that these laws might give rise to, showing that they were not a cast-iron way of getting the desired robotic behaviour in all cases.
The problem with the current generation of quite non-intelligent AI systems is that they are “trained” on some selected set of data. In the court case cited in the article, the training would have been on previous offenders and, no doubt, the mix of people in this group showed that certain characteristics led to higher likelihoods of violence and recidivism. One might guess that age, gender, education and race all played a part in identifying riskier sub-groups but, as a society, we have been trying to overcome prejudice based on these sorts of factors and want people judged as individuals. The “prejudice” is not in the algorithm but in the data, and this is really an ethical dilemma for the judge – essentially he has been given statistics by the system; does he use them?

DougC
May 16, 2019

An interesting article.

While in New York recently I listened to a debate on radio which projected the ethical concerns related to AI into the context of AI-controlled robots (many versions of which are apparently under development now) and the issue of self-learning AI systems which potentially decrease the predictability (and comprehensibility) of robots’ behaviour.

The debate concluded with the confident statement that the primary rule of any AI-programmed robotics is that the robot shall do no harm to humans; i.e. they are not malevolent towards humans – which, of course, ignores the military robots whose primary task is to be malevolent towards humans.

Add together high capability, self-learning AI systems, and autonomous military robots, and the need for ethical controls of AI becomes readily apparent and urgent.

 
