AI is running ahead of its ethical issues

In the Wisconsin city of La Crosse in 2013, Eric Loomis, then in his early 30s, pleaded guilty to eluding police in a stolen car. The judge sentenced Loomis, who had a criminal record, to six years in jail, a term at the longer end of the possible range.

So? Well, the judge based Loomis’s prison term partly on the recommendations of an artificial-intelligence, or AI, programme that uses secret algorithms to assess the risk a person poses. In Loomis’s case, the Compas report showed “a high risk of violence, high risk of recidivism”. Loomis appealed the length of the sentence, arguing that he had no opportunity to evaluate the algorithms and that their assessment, which took account of his gender, violated his “due process rights”.

The court’s use of AI to sentence Loomis attracted much criticism because it raised questions about the role that ‘big data’ and AI are playing in everyday decisions. Expect more such controversies, because AI’s rapid deployment is creating ethical issues faster than society can resolve them – a gentler way of saying AI is capable of ill as well as good.

The role of AI and algorithms in day-to-day decisions

AI is certainly causing concern. Among potential dangers, AI might be used by despots who want to enforce censorship, micro-target propaganda and impose society-wide controls on citizens. Many think the disinformation, conspiracy theories and echo chambers that AI-driven recommendation engines can promote on social media deepen social tensions. AI can be used in warfare and has the potential to make swathes of workers redundant. AI can behave in discriminatory or invasive ways. Many worry about the privacy violations surrounding the data used to train and improve AI algorithms.

Many of the concerns about AI are tied to the nature of the algorithms. People worry that society is handing over decision-making to secret software codes – essentially instructions – that have no understanding of the context, meaning or consequences of what they do. They fret that algorithms are being entrusted with tasks they are incapable of fulfilling, and that algorithms magnify the biases and flaws of their human coders and of the data fed into them. People are concerned about how algorithms can manipulate behaviour and promote digital addiction. They see that algorithms can be gamed by attention seekers.

People are tackling some of the ethical issues involved. Researchers have withheld AI models because of possible misuse. Governments, notably the EU, have acted to protect privacy. The EU is developing an AI code of ethics. Companies are creating principles around AI use and setting up ethics boards to monitor its deployment. Platforms are using AI to inhibit the ability of other algorithms to spread viral extremist content. Data gatherers are better protecting user information. US tech employees are rebelling against AI’s use in warfare.

Lack of concern about online data trails

But not enough might be happening to limit AI’s possible harm. People seem blasé about how their online data trails are used to sway their spending and thinking. Businesses appear far more focused on generating positive returns from AI than on overseeing and mitigating its negative side effects. Autocratic states are using AI to tighten their control over media and communication. When ethical issues are raised, rebuttals with some validity can become excuses for inaction. Authorities with genuine concerns appear hobbled by the public’s fondness for the cyberworld.

Be aware that AI is being deployed at a faster rate than its ethical issues can be properly identified and resolved. The moral concerns encircling AI are likely, in time, to become political issues big enough to warrant much public scrutiny and government intervention.

To be sure, many of the ethical issues raised are broader than AI. Some of technology’s biggest ethical issues, such as gene-edited babies, lie outside AI. Many of the ethical issues swamping AI are everyday ones that are as old as humanity – AI is just a new setting for them.

But that fresh setting looms so large that AI is bound to spark controversies, especially since AI’s political weakness is that it is easy to demonise. Expect a rigorous human overlay on AI in due course. The challenge for authorities will be to limit AI’s possible harm and Loomis-style controversies without suppressing its advantages.

Flawed codes

The algorithms that power AI are reams of code that can process data efficiently to assist in making parole, medical, military, work-dismissal, university-admission and many other decisions. These instructions can perform vast amounts of analysis within their narrow functions at speeds beyond human ability.

But algorithms lack many human qualities and smarts. These algorithms do not understand cause and effect. They lack common sense, emotion, imagination and a sense of humour or irony. They have no free will. They can have inbuilt biases, generally delivered by the data that drives them. They can be gamed and outsmarted. The ethical issue is: how can society justify handing over vital decision-making to AI when it falls well short of human ability in so many ways?

The ethical cloud over algorithms is highlighted when they are set tasks beyond their design limits. ‘Content moderation’ algorithms that scour for inappropriate content keep much of it out. But they have often failed to remove all copies of an offensive video, because people can alter the footage enough to outwit algorithms that can only look for earlier versions.
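
To see why such filters are outwitted, consider a minimal sketch in Python (a hypothetical, deliberately simplified illustration: real platforms use perceptual fingerprints, such as PhotoDNA-style hashes, which tolerate some alteration but can still be defeated). A filter that blocklists known files by an exact cryptographic hash misses any copy whose bytes differ at all from the original.

```python
import hashlib

# Toy blocklist of known offensive files, keyed by exact SHA-256 hash.
# Hypothetical illustration only: production systems use perceptual
# hashes, but they suffer a subtler version of the same weakness.
blocklist = set()

def fingerprint(data):
    """Exact fingerprint: changing a single byte changes the whole hash."""
    return hashlib.sha256(data).hexdigest()

def block(data):
    blocklist.add(fingerprint(data))

def is_blocked(data):
    return fingerprint(data) in blocklist

original = b"...bytes of a known offensive video..."
block(original)

print(is_blocked(original))   # True: an exact copy is caught
tweaked = original + b"\x00"  # stands in for a re-encoded or cropped copy
print(is_blocked(tweaked))    # False: one changed byte defeats the filter
```

Uploaders exploit exactly this gap: every re-encode, crop or mirror of the footage yields a ‘new’ file the filter has never seen, which is why offending videos keep resurfacing.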

A wider ethical issue is whether AI-dependent platforms should be responsible for the content shared and viewed on them, given that they now bear little legal responsibility. Another is whether private companies should be monitoring the ‘cyber public square’ at all – whether it is appropriate for private companies to act as censors and judges of what is acceptable. And what is the responsibility of users in all this?

Another ethical issue to resolve with AI is whether to let algorithms operate in situations with infinite possibilities (such as powering driverless cars on open roads) when, for now, AI works best in defined conditions (such as translation or, in the case of driving, keeping a car within white lines). The death of a pedestrian struck by a self-driving car in Arizona in 2018 highlighted how AI programs can prove fatal in uncontrolled situations. A central ethical issue here is whether the hope that autonomous vehicles might one day reduce road fatalities is worth the loss of life during the experimental stage.

Another prominent flaw of algorithms is that they reproduce the biases of their code writers and of their data. The common problem here is that data, as a record of the past, feeds algorithms the prejudices of the past. While no one defends discrimination per se, and code writers can attempt to overcome this flaw, the ethical issues require subjective solutions.
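
A minimal sketch, using made-up numbers, shows the mechanism (this is a hypothetical illustration, not how any real sentencing tool works): a ‘model’ that simply replays historical outcome rates contains no prejudiced line of code, yet if the records it is fed were skewed – say, because one group was policed more heavily – its predictions inherit that skew.

```python
# Hypothetical records: imagine group B was policed more heavily in the
# past, so its *recorded* reoffending rate is inflated.
historical_records = [
    ("A", False), ("A", False), ("A", True),  ("A", False),
    ("B", True),  ("B", True),  ("B", False), ("B", True),
]

def risk_score(group):
    """Predicted risk = the recorded reoffending rate for that group.

    Nothing in this function is prejudiced; any bias arrives entirely
    through the data it is given.
    """
    outcomes = [reoffended for g, reoffended in historical_records if g == group]
    return sum(outcomes) / len(outcomes)

print(f"Group A predicted risk: {risk_score('A'):.2f}")  # 0.25
print(f"Group B predicted risk: {risk_score('B'):.2f}")  # 0.75: the past, replayed
```

Deciding which historical patterns are genuine signal and which are recorded prejudice is a judgement call, which is why correcting the flaw requires the subjective solutions mentioned above.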

Data with gender, race and other biases, and the limits on the abilities of algorithms, are prompting calls for algorithms to be regulated. Companies could come under pressure to reveal their algorithms, as France is doing with those used by its government. The tech industry, however, resists such transparency, saying its formulae are intellectual property.

Such ethical issues around AI are prompting reassessments of the technology, as shown by talk of a second ‘AI winter’ (when research and deployment stall), by a surge in warnings of its potential harm, and by a spate of books highlighting its flaws, such as Meredith Broussard’s Artificial Unintelligence.

The Wisconsin Supreme Court rejected the Loomis appeal in 2016, and the US Supreme Court refused to hear the case in 2017. But the ethical issues it raised will be among the many that surround AI even as its deployment brings great benefits to society.

Michael Collins is an Investment Specialist at Magellan Asset Management, a sponsor of Cuffelinks. This article is for general information purposes only, not investment advice. For the full version of this article and to view sources, go to: https://www.magellangroup.com.au/insights/.

2 Comments
Tony Reardon
May 16, 2019

This sounds like the radio commentators were channeling Isaac Asimov's famous (fictional) three laws of robotics:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov wrote numerous stories exploring the possible conflicts and situations that these laws might give rise to, showing that they were not a cast-iron way of getting the desired robotic behaviour in all cases.
The problem with the current generation of quite non-intelligent AI systems is that they are “trained” on some selected set of data. In the court case cited in the article, the training would have been on previous offenders and, no doubt, the mix of people in this group showed that certain characteristics led to higher likelihoods of violence and recidivism. One might guess that age, gender, education and race all played a part in identifying riskier sub-groups but, as a society, we have been trying to overcome prejudice based on these sorts of factors and want people judged as individuals. The “prejudice” is not in the algorithm but in the data, and this is really an ethical dilemma for the judge – essentially he has been given statistics by the system; does he use them?

DougC
May 16, 2019

An interesting article.

While in New York recently I listened to a debate on radio which projected the ethical concerns related to AI into the context of AI-controlled robots (many versions of which are apparently under development now) and the issue of self-learning AI systems which potentially decrease the predictability (and comprehensibility) of robots’ behaviour.

The debate concluded with the confident statement that the primary rule of any AI-programmed robotics is that the robot shall do no harm to humans; i.e. they are not malevolent towards humans – which, of course, ignores the military robots whose primary task is to be malevolent towards humans.

Add together high-capability, self-learning AI systems and autonomous military robots, and the need for ethical controls on AI becomes readily apparent and urgent.
