A balanced view of artificial intelligence

COURTNEY NGO

This March, people got a glimpse of the "bad" side of artificial intelligence (AI): a woman was killed by an Uber self-driving car in the US; Facebook's massive corpus of data was allegedly used to influence the last US elections; and China is set to implement a "citizen score," which will inevitably remind Black Mirror fans of a dismal possible future.

But with every massive technological disruption, hiccups like these are common at the outset. In the automobile's early days, for instance, pedestrians were killed in vehicular accidents even when cars moved at only 4 miles per hour.

Similarly, the introduction of electricity sparked the so-called war of the currents between direct and alternating current.

Today, people from those eras would be surprised to know that we now cruise along paved roads at many times their original speed, or that electricity is considered a staple utility in every home.


We now see the beginning of the same evolution with AI. It was not a sudden, dramatic change, as some science fiction would have us believe. For the past few years, AI has been creeping into our daily lives through small, incremental improvements that make it easy to forget it is there at all. Weaving through traffic with Waze, searching large volumes of online data with Google, getting product recommendations tailored to our preferences and past purchases, and identifying the title of a song as it plays are all small things that improve the way we live.

While there is some truth to the fear that AI will replace jobs, the general trend is toward helping us carry out our tasks more effectively and efficiently. AI ranges from the "artificial intelligence" portrayed by science fiction, robots that can act, think and reason like humans, to the "augmented intelligence" that complements human abilities, as manifested in most of the computing tools we have come to rely on.

Today, the state of the art in AI lies in machine learning and deep learning. Computers learn to make recommendations by analyzing our previous actions and how we make decisions. For this to work, a large collection of data and substantial computing power are required. The reason this field has been progressing by leaps and bounds is that we have gathered enough data over the years from everything we do: from our purchases at the local grocery store or transactions on a commercial website, to apps that track our movements, to the posts we share with family and friends on social media.
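To make this concrete, here is a minimal sketch of how a system might learn recommendations from past behavior. The purchase histories and the co-occurrence counting approach are illustrative assumptions, not any particular company's method; real systems use far larger datasets and more sophisticated models.

```python
from collections import Counter

# Hypothetical purchase histories; a real system would learn from a
# much larger dataset gathered from transactions, apps and posts.
histories = [
    ["bread", "butter", "jam"],
    ["bread", "butter"],
    ["milk", "bread", "butter"],
    ["milk", "cereal"],
]

def recommend(history, basket):
    """Suggest the item most often bought alongside the current basket."""
    scores = Counter()
    for past in history:
        # Only look at past baskets that share something with this one.
        if any(item in past for item in basket):
            for item in past:
                if item not in basket:
                    scores[item] += 1
    return scores.most_common(1)[0][0] if scores else None

print(recommend(histories, ["bread"]))  # prints "butter"
```

The point of the sketch is that the "recommendation" is nothing mystical: it is a pattern extracted from accumulated data about what we did before.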

But how does AI make recommendations from all these data? How it decides depends largely on how we choose to make it learn. The decisions of traditional AI, where people lay out the rules, are explainable, but the rules are only as ethical as the person writing them. Decisions from traditional machine learning, where people "point" at the data features the computer should pay attention to, are somewhat explainable; again, the model is only as ethical as the person who chooses the features. Deep learning, where people simply provide the data and the computer is expected to find the features on its own, produces decisions that are extremely hard to explain, to the point where we end up bending our own frame of mind to match the computer's newfound rules. Here, too, the model is only as ethical as the person feeding it the data.
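The contrast between an explainable hand-written rule and a harder-to-explain learned model can be sketched as follows. The loan-approval scenario, thresholds and weights are all invented for illustration; the weights simply stand in for values a training procedure might produce.

```python
def rule_based_approve(income, debt):
    # Traditional AI: a person wrote this rule, so anyone can read
    # exactly why an application was approved or denied.
    return income > 30000 and debt < 0.4 * income

def learned_approve(features, weights, bias):
    # Learned model: the weights came from data, so the "rule" is an
    # opaque weighted sum rather than a human-readable condition.
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return score > 0

# Illustrative stand-ins for weights a training run might produce.
weights, bias = [0.00005, -0.0001], -1.0

print(rule_based_approve(50000, 10000))              # prints True
print(learned_approve([50000, 10000], weights, bias))  # prints True
```

Both functions reach the same answer here, but only the first can justify itself in plain language; the second's reasoning is buried in numbers whose origin lies in the training data.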

What is the implication of all this? People should know that they are still the boss, even in the age of AI. They must exercise judgment and critical thinking before following a computer's directive, and analyze the results a computer gives before making any decisions. This calls for new skill sets that will enable the public to use available data to support their decisions while discerning whether an AI recommendation is valid.

Courtney Ngo wrote this column, with assistance from Ethel Ong. They are assistant professors in the Software Technology Department of the College of Computer Studies at De La Salle University. They may be emailed at courtney.ngo@dlsu.edu.ph and ethel.ong@dlsu.edu.ph.
