Can Machine Learning Teach Devices to Be Morally Conscious?

There’s no stopping progress. But as we head toward an age of Artificial Intelligence (AI) – a logical progression from the Digital Age – and tech becomes more complex, so do the issues surrounding it. And we’re faced with ethical questions we’re probably not ready to tackle.

The developing Internet of Things (IoT) is giving us a first glimpse of what AI could mean – with devices like the Amazon Echo representing early inklings of a real-life Rosey the Robot. But AI can only be as good as we program it to be, which isn’t always as easy as it sounds.

The first step toward creating more human, intuitive devices is disconnecting them from the cloud and giving them their own “brains.” That’s something Qualcomm is working on in developing a “new kind of chip specifically for connected cameras, one that will actually be able to recognize what it’s looking at — without running to the Internet.”

Being able to identify faces and objects on its own means the camera can more quickly analyze what needs to be tracked and recorded, resulting in “fewer false positives” while preserving battery life and bandwidth (no more tracking the neighbor’s cat every time it jumps into frame).
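To make that concrete, here is a rough sketch, in Python, of what “recognizing what it’s looking at without running to the Internet” might look like: a small off-the-shelf image classifier (MobileNet, via torchvision) running entirely on the device and deciding whether a frame is worth recording. The model choice and the ignore-the-cat rule are my own illustration, not a description of Qualcomm’s chip.

    import torch
    from torchvision import models
    from PIL import Image

    # Hypothetical on-device filter: classify each frame locally and skip
    # recording when the camera is just looking at a cat. Nothing is sent
    # to the cloud; the model runs on the camera's own hardware.
    weights = models.MobileNet_V2_Weights.IMAGENET1K_V1
    model = models.mobilenet_v2(weights=weights).eval()  # small enough for edge hardware
    labels = weights.meta["categories"]                  # human-readable class names
    preprocess = weights.transforms()                    # preprocessing the model expects

    def should_record(frame: Image.Image) -> bool:
        """Return True if this frame looks worth recording, judged entirely on-device."""
        with torch.no_grad():
            scores = model(preprocess(frame).unsqueeze(0)).softmax(dim=1)
        top_label = labels[int(scores.argmax())]
        return "cat" not in top_label  # ignore the neighbor's cat; record everything else

No frames leave the device, and the decision about what matters is made locally, which is where the bandwidth and battery savings come from.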

This level of autonomy is wonderful. But what about the bigger fish that need frying? Like self-driving cars?

Crises of consciousness

How do you instill a state of moral consciousness into something like a car? AI characters/devices in sci-fi tales (think Star Trek: The Next Generation’s Mr. Data) make for great storytelling. But the devices in our reality can’t be indulged in a quest for human understanding (even if they were self-aware enough to want that) – they must simply possess it.

The trouble is, it’s our job to provide that information. And our own code of ethics is far from black and white.

Consider this scenario: Your self-driving car understands how to slow, stop, or even swerve to avoid vehicles or people in its path. But how does it know what to do when the choices become less straightforward? When swerving to avoid one pedestrian puts you on a path to hit another one? What’s the solution, when the options become:

  • Hitting the pedestrian in the road – potentially killing them
  • Veering onto the sidewalk where there are additional pedestrians – potentially killing them
  • Veering in another direction, like into a brick wall – potentially killing the driver/passengers

According to the MIT Technology Review, we have to teach our self-driving cars to kill.

And actually, that’s the easy part. Teaching them HOW to kill is where the biggest question comes up.
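Strip away the philosophy and “teaching a car how to kill” reduces to something uncomfortably mundane: scoring the possible outcomes of each maneuver and picking the least bad one. The Python sketch below is deliberately naive, and every outcome label, probability, and cost weight in it is invented, because choosing those numbers is exactly the question nobody has answered.

    # A deliberately naive sketch of encoding the crash decision as cost minimization.
    # The outcomes, probabilities, and weights are all invented for illustration;
    # deciding what the weights SHOULD be is the unsolved ethical problem.
    COSTS = {
        "pedestrian_fatality": 1.0,
        "occupant_fatality": 1.0,    # should this equal the line above? who decides?
        "property_damage": 0.01,
    }

    def expected_cost(outcomes: dict[str, float]) -> float:
        """Sum of probability * cost over every predicted outcome of a maneuver."""
        return sum(p * COSTS[outcome] for outcome, p in outcomes.items())

    def choose_maneuver(predictions: dict[str, dict[str, float]]) -> str:
        """Pick the maneuver whose predicted outcomes carry the lowest expected cost."""
        return min(predictions, key=lambda m: expected_cost(predictions[m]))

    # The three options from the scenario above, with made-up probabilities:
    predictions = {
        "brake_straight":     {"pedestrian_fatality": 0.6},
        "swerve_to_sidewalk": {"pedestrian_fatality": 0.5},
        "swerve_into_wall":   {"occupant_fatality": 0.4, "property_damage": 1.0},
    }
    print(choose_maneuver(predictions))  # the answer flips as soon as the weights change

Every one of those weights is a human judgment call, which leads straight to the next question.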

Who decides who dies?

The future of self-driving cars (and other vehicles) relies on giving them the ability to make conscious, autonomous, split-second decisions, which means this moral and ethical riddle must be solved. Some choices will be simple: Squirrel vs. school bus full of children? RIP, Mr. Squirrel. Others are much harder.

Here are some people working to make those choices clear:

Lawyers

The Web has kept lawyers busy trying to keep pace with everything from copyright law (thanks, Napster) to social slander, but here the stakes are life and death. Whose fault is it when a driverless car is involved in a fatality?

Bryant Walker Smith, a fellow at the Center for Internet and Society at Stanford Law School, discusses this subject regularly on his blog. He has also taught a seminar called Technology Law: Law of the Newly Possible (LAWS 680) at the University of South Carolina School of Law, which “examines how law responds to, incorporates, and affects innovation,” to bring law students into the discussion.

Car companies

Tesla’s recently released “Autopilot” upgrade is perhaps poorly named – it’s not meant to serve as an alternative to driver control, but merely as an assist. Some close calls shared via YouTube suggest drivers may not be ready for this tech advance.

That may be why Google took drivers out of the equation with its self-driving Lexus, “after tests showed that human drivers weren’t trustworthy enough to be co-pilots to Google’s software.” The company is building redundancies into its design instead.

Toyota, meanwhile, plans to spend $50 million over the next five years “to work on artificial intelligence and autonomous driving technology” in collaboration with Stanford and MIT.

This is a noteworthy development given that, according to Wired, “A year ago, its deputy chief safety technology officer publicly rejected the idea, saying ‘Toyota’s main objective is safety, so it will not be developing a driverless car.'”

Perhaps, with Nissan, Mercedes, Audi, and Volvo all in the race to build self-driving cars, Toyota feels compelled to join in. Still, the company has stated that its focus is “‘advanced architectures’ that will let cars perceive, understand, and interpret their surroundings” – led by its MIT contingent – while Stanford will handle “computer vision and machine learning.” So it’s playing it cautious.

Machine learning experts

Machine learning is indeed the most important piece of the puzzle – but is it sufficient? In a video preview for online university Udacity’s Intro to Machine Learning class, Course Developer Katie Malone explains, “Machine learning is all about learning from examples.”
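In code, “learning from examples” can be as small as the Python sketch below: a handful of labeled situations and the action a human took in each, handed to scikit-learn (the library the Udacity course itself teaches with). The features, numbers, and labels are toy values, invented purely for illustration.

    from sklearn.tree import DecisionTreeClassifier

    # Each example: [speed_mph, distance_to_obstacle_m] and the action a human took.
    X = [[10, 50], [30, 40], [60, 30], [25, 5], [70, 10], [15, 8]]
    y = ["maintain", "maintain", "slow", "brake", "brake", "slow"]

    clf = DecisionTreeClassifier().fit(X, y)  # "learning from examples"

    # The classifier now answers for situations it has never seen -- but whatever
    # it answers was implied entirely by the six examples above.
    print(clf.predict([[55, 12]]))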

But how many examples does it take to teach a car how to react to the many scenarios that can come its way? Sebastian Thrun, an instructor at Udacity, speaks to the challenges of desert terrain, which is full of ruts and bumps. Drive too fast, and you might flip your car right over.

“So one thing we trained the car to do is to really slow down at the appropriate time,” Thrun notes. “We did this by us demonstrating in the car how we drive, and it would just emulate us…. We spent thousands of miles every day in the desert, and it took quite a while to make the car really smart.”

Thousands of miles – and that’s just to accomplish one task!

One task that was very clearly defined, with no ambiguous questions in the mix.
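In machine-learning terms, what Thrun describes is learning from demonstration, often called behavioral cloning: log what the sensors saw and what the human driver did, then fit a model that maps one to the other. Here is a minimal Python sketch; the feature names and values are invented for illustration.

    from sklearn.ensemble import RandomForestRegressor

    # Logged demonstrations: [roughness_estimate, slope_degrees, visibility_m]
    # from desert drives, paired with the speed the human driver actually chose.
    terrain_features = [
        [0.1, 2.0, 120.0],
        [0.8, 5.0, 60.0],
        [0.9, 1.0, 40.0],
        [0.2, 0.5, 150.0],
    ]
    human_speed_mph = [35.0, 12.0, 8.0, 40.0]

    # Fit a policy that emulates the demonstrating driver.
    policy = RandomForestRegressor(n_estimators=100).fit(terrain_features, human_speed_mph)

    # In the field, the car queries the learned policy with live sensor readings:
    print(policy.predict([[0.7, 3.0, 50.0]]))  # a speed in the spirit of the demonstrations

The policy never decides for itself what “the appropriate time” to slow down is; it only reproduces whatever the demonstrating drivers did.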

Machine learning is dependent on human teaching

Of course, machine learning for such high-stakes scenarios will require some sort of interactive 3D simulation in the mix – since practicing on actual pedestrians would open an entirely separate ethical argument. But we’re back to the same question: What IS the response we want learned?

According to Jean-Francois Bonnefon of France’s Toulouse School of Economics, beyond the hypothetical, consumers aren’t ready to play God if they themselves are in the driver’s seat. But the need for a consensus will intensify as AI and machine learning advances pave the way to any number of autonomous devices in the next 3-5 years.

Bonnefon’s team says, “[E]ven though there is no right or wrong answer to these questions, public opinion will play a strong role in how, or even whether, self-driving cars become widely accepted.” And one thing we’ve come to understand in the Digital Age: Public opinion is all over the map.

It may be, in the end, that the only autonomy we really want is our own. Can machine learning teach devices to be morally conscious? Sure. But only after WE’VE decided what that means.

About mchiaviello

Currently, Associate Creative Director, Brand Experience at Hook & Loop, Infor’s creative think-tank. A creative leader and team player with over 12 years of professional experience in art direction and design in agency and corporate settings. Successfully launched 360˚ campaigns across print, digital, direct mail and TV.
