
In Using AI, Let’s Be Wary of Artificial Stupidity

Author: Julienne Chen, Research Fellow at the Lee Kuan Yew Centre for Innovative Cities, Singapore University of Technology and Design
Originally published at www.todayonline.com/commentary/yes-ai-can-transform-lives-beware-artificial-stupidity

Do you remember SimCity? It is a computer game introduced in the late 1980s in which the player is the mayor of a new city. The objective is to create a functioning urban system by building a city from scratch, with inhabitants, amenities, parks, transportation and utilities.

To add an extra layer of complexity, the mayor also needs to respond to the challenges that all cities face — economic recession, infrastructure costs, pollution and natural disasters, amongst others.

The mayor’s decisions steer the city towards a vast array of potential scenarios. There is technically no way to win or lose the game, but the mayor needs to stay continually engaged to maintain a favourable approval rating: Good health and education outcomes; high land values; low crime, traffic and unemployment.

Of course, one should never underestimate the power of the human mind and will to circumvent the rules of the game.

In 2009, an architecture student in the Philippines named Vincent Ocasla spent copious amounts of time and energy calculating how to “beat” SimCity, creating an urban system of six million residents so well optimised that the city would be able to run indefinitely on its own. The mayor had rendered his own job obsolete through data-driven smart planning.

A video of this perfectly optimised city, called “Magnasanti” and set to pounding, ominous music, can be found on YouTube. Spoiler alert: It’s a terrifying vision, firmly in the category of sci-fi about dystopian futures where we are governed by robot overlords who don’t care about human happiness or any other emotional baggage that could derail the best-laid plans.

I narrate this anecdote not to instil fear in anyone’s heart about a future in which computers plan our cities.

In fact, I believe in the transformative potential of artificial intelligence (AI) for cities and society. There are some brilliant examples of how AI has been used to improve urban management, meet societal needs, and even save lives.

For instance, AI can parse patient feedback on medical care, analyse tissue samples to produce more accurate diagnoses, accelerate drug discovery, and scour satellite imagery to identify natural disaster zones and expedite relief efforts to the areas in greatest need.

In this light, I am excited about Singapore’s recent announcement of a new National AI Strategy, which will be applied to projects in five areas, including smart cities and estates as well as transport and logistics.

Over the next 50 years, we will continue to make great leaps in the capabilities of AI, even potentially achieving “strong AI” — machines that are as functionally capable as humans.

However, as much as we should be aware of the potential of AI, we must also be aware of its innocent-looking but quietly sinister counterpart, artificial stupidity.

What is artificial stupidity? The term is often used to describe either 1) AI that is dumbed down to be more relatable and human-like; or 2) the inability of AI to perform even simple tasks well or accurately.

I would also add a third item to this list: A tendency to be so enamoured of AI that we forget to ask whether it is the best solution for what we ultimately want to achieve, and to consider how it fits into the broader socio-environmental context in which it operates.

Take, for instance, the personal car, which was once considered a stunning innovation. The desire for cars dramatically changed the way we planned and built cities, which suddenly needed to accommodate vast expanses of roads and parking, and the sprawl that came with them.

Decades later, the problems of a car-oriented city have become well-known: Traffic, lower levels of physical activity, air pollution and other ills. Ironically, all of these have become key challenges that AI is now tasked to help us solve.

There are also more concrete examples. AI algorithms used by human resource departments to predict future job performance have been found to screen out well-qualified applicants. Sophisticated machine learning used to estimate the likelihood that a criminal defendant will re-offend has been found to show more racial bias than human judges.

I state this only as a reminder that amid all of the excitement, we must also be vigilant. How can we help to ensure that we maintain a balanced approach towards AI? Below are some starting points:

Develop strong regulatory frameworks that decisively prioritise people over companies.

Reject a technology-first approach. Insist, relentlessly, on giving fair consideration to all potential solutions to a problem, knowing that sometimes simple, tried-and-tested solutions are the most effective. Autonomous vehicles are the new frontier, but the two-wheeled bicycle has been moving people around for the last 200 years.

Acknowledge that technological solutions almost never work the way we expect them to in the real world. To mitigate this, we must instil a stronger appreciation of the social systems that underpin technological systems, built in part upon centuries of inquiry and thought in the social sciences and humanities.

Invest more, and not less, in the people who are working in sectors that are ripe for AI. For instance, computer-based learning will only work if it is created together with, and used by, great teachers.

Change the culture of data collection and sharing within government, to ensure that AI systems are built upon robust data and that each department or agency is not collecting its own partial data sets that sit in a silo for eternity. It goes without saying that this requires a rigorous and methodical approach to maintaining privacy and confidentiality. The Los Angeles Department of Transportation’s mobility data specification is a pioneering example of standardised data formats and data-sharing requirements that private sector partners must adhere to, and it is now being used across multiple cities.

Foster a civil society that is active and engaged in these issues. More than ever, we need to have our wits about us, to feel safe to question data-driven assumptions, to push back against what we are not comfortable with, and to insist upon both short- and long-term accountability for those who are monetising and interpreting data about us and our lives.

While AI continues to improve over the coming years, let’s use that time to strengthen our own capabilities, not just in computer programming, but also in understanding history, society, politics, collaboration and collective action.

Of course, we need to invest and take some risks to allow AI to reach its true potential. But let’s not forget our own true potential along the way.
