
This is a podcast for the curious. Strap yourself in for genuine dialogues with people who think deeply and are ready to tackle the big questions: guests such as broadcaster Terry O'Reilly, fantasy author Guy Gavriel Kay, and journalist Sally Armstrong.

Join Ben Charland to peel back the headlines and ask, what are the forces, people and ideas that shape the human story today? From the Mafia to the Beaverton, women in politics to women in leadership, history to artificial intelligence, and entrepreneurship in the digital age to the art of wheelchair fencing, just what on Earth is going on?

Subscribe to the podcast now.


Humanism and AI

Apr 12, 2019

Listen to Episode 50: The Promise of Artificial Intelligence

As you know, I find the subject of AI fascinating. Often my mind wanders along the tangents of science fiction as I consider the breathtaking potential of artificial intelligence – specifically, what is often called artificial general intelligence (AGI), or “Strong AI”.

In our conversation, Jeff Lui mentions a “singularity”, which he sees coming in about 30 years. Put glibly, this technological singularity is the moment when an artificial superintelligence, capable of learning and improving itself at a faster and faster pace, becomes in effect unstoppable. The runaway impacts of this singular moment will change our world in ways we cannot possibly fathom. The singularity is the point of no return.

Jeff is pretty clear: it’s coming, but we’ve got some time to get our ducks in a row before it does. We can prepare so that the artificial intelligences we’re building now are built right, built ethically, and built with humans in mind.

Some AI experts believe that the singularity is a myth, a scare tactic drummed up by those who don’t understand the technology. They believe that AI can never be sentient in the way humans are, and thus will always be on our leash. But others say the singularity is coming much sooner than that, and that we won’t even notice until it’s too late. Stephen Hawking used much of his public platform in his final years to warn us about the existential threats posed by our technological progress, and Elon Musk is actively trying to harness machine learning for good with his company Neuralink. You know, before we all go extinct.

I get a buzz from thinking about all this stuff – even the dark Hollywood possibilities of Terminator, Ex Machina or Blade Runner. But what’s apparent in my conversation with Jeff, just as in my previous discussion about AI in Business with Stephen Thomas, is that the promise and the perils of AI are in front of us today. Car engineers are dealing with very real ethical issues (Jeff and I talk about the “Trolley Problem”, for example) as they roll out consumer products with autonomous features. Machine learning enables massive corporations such as Apple, Netflix, Amazon, Facebook and Alphabet (Google) to harvest your data for profit, while governments are eager to put the same techniques to their own ends.

It’s not all bad: a machine learning program can tell a police force where crime is likely to occur and thus where to commit resources. In concert with the “internet of things”, an intelligent computer can not only notify a city of a broken underground pipe but also predict with high accuracy where and when the next one will burst. It can help us learn languages, build companies, manage human resources, plan festivals, improve food transportation, you name it. But the way we build AI now will shape what it becomes in the future, at a time when we may no longer be in the driver’s seat.

Jeff said something at the top of our conversation that shouldn’t have surprised me, but it did, and it’s a good reminder of what AI can and should mean. He said that he is a student of people, and that he finds the human miracle more interesting than anything else. What can artificial intelligence teach natural intelligence? We already know that it can make us better Go or chess or Jeopardy players. But can it teach us about ethics? Can it teach us to be better, wiser, more resilient human beings? Can the ongoing AI revolution be humanist in nature?

Back to my tangents into science fiction and fantasy: I’ve wondered what it would look like to simply hand over the keys of government to an artificially intelligent computer. If programmed right, such a government would make rational decisions in the best interests of all citizens – the greatest good for the greatest number. An AI Government would never act selfishly or for corrupt purposes. It would never hire its nephew or put a higher premium on loyalty than on competence. It would always weigh the pros and cons based on hard evidence and cold prediction, and never be swayed by emotion or forced to pander to the loudest constituent.

But what’s to stop our Robot Overlord from deciding that because of, say, overpopulation and climate change, half of us need to go? Jeff and I talk about the possibility of a Thanos storyline (the Avengers villain) playing out with AI. It is plausible, but it is also preventable. If our government were an unfeeling machine, we could program it with purpose as well as principle – a constitution, if you will. In this way, AI could force us to be proactive and to grapple now with some big ethical questions. Maybe that’s a good thing. But slow, old-school, organic evolution has its strengths, too.

See you next week.

- Ben