Savvy algorithms are learning to drive our cars and detect cancer faster, and can already beat the world’s best chess players.
But there are concerns about whether the programmers building these algorithms are still in control of their creations.
In 2016, a flash crash caused the British pound to lose nearly six per cent of its value in minutes. Analysts blamed trading algorithms for that fall and a series of other crashes in the market.
Earlier this year, Andrew Smith wrote about “Franken-algorithms” for The Guardian, suggesting that the software in our lives has become complex and unpredictable, and that the consequences could be deadly.
“Once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are,” wrote Smith. “At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us.”
Evolving algorithms are less predictable
Tech expert Don Burks describes an algorithm as “a series of instructions that you would do to solve a problem and it produces a successful, repeatable result.”
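Burks’s definition — a fixed series of instructions producing a repeatable result — can be illustrated with a classic example, Euclid’s algorithm for the greatest common divisor (a sketch for illustration, not something from the article):

```python
def gcd(a, b):
    """Euclid's algorithm: a series of instructions that
    always produces the same, repeatable result."""
    while b != 0:
        a, b = b, a % b  # replace the pair with (b, remainder)
    return a

# The same inputs always yield the same answer.
print(gcd(48, 18))  # → 6
print(gcd(48, 18))  # → 6, every time
```

No matter how often or on what machine it runs, the steps and the result never change — that repeatability is what makes it an algorithm in the traditional sense.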
Burks is the head instructor at Lighthouse Labs, based in Vancouver, which offers coding camps for people interested in a career in software development.
“What has changed over time is the complexity of what we’re able to do and the power of the hardware on which we’re able to do it,” said Burks. “We’re also trying to solve bigger problems like climate change and self-driving cars.”
Jean-Luc Dery is the chief technology officer at Thinking Capital, a Montreal-based financial technology company that enables small businesses to access capital quickly.
“We’ve evolved where those algorithms are learning and the amount of predictability that is attached to those algorithms is less and less,” said Dery. “They’re learning they can actually rewrite themselves and, depending on the environment that they’re operating in, they may have different behaviours.”
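Dery’s point — that a learning system’s behaviour depends on the environment it operates in — can be sketched with a toy example in plain Python (hypothetical data and a deliberately trivial “learning” rule, not Thinking Capital’s software):

```python
def train_threshold(samples):
    """A toy learning rule: set a decision threshold
    to the mean of whatever data the system has seen."""
    return sum(samples) / len(samples)

def decide(value, threshold):
    """Approve anything at or above the learned threshold."""
    return "approve" if value >= threshold else "decline"

# Identical code, different environments, different behaviour.
t_a = train_threshold([10, 20, 30])     # learned threshold: 20.0
t_b = train_threshold([100, 200, 300])  # learned threshold: 200.0
print(decide(50, t_a))  # approve
print(decide(50, t_b))  # decline
```

The program text never changes, yet the two deployments make opposite decisions about the same input — a small-scale version of the unpredictability Dery describes.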
Out of control chatbots
A chatbot is a computer program designed to simulate conversation with human users.
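At its simplest, a chatbot can be nothing more than keyword-matching rules — a hypothetical sketch, far simpler than Facebook’s or Microsoft’s systems:

```python
# Canned responses keyed by the keyword that triggers them.
RULES = {
    "hello": "Hi there! How can I help?",
    "price": "Our basic plan starts at $10/month.",
    "bye":   "Goodbye! Thanks for chatting.",
}

def reply(message):
    """Return the first canned response whose keyword
    appears anywhere in the user's message."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand that."

print(reply("Hello, bot"))         # Hi there! How can I help?
print(reply("What's the price?"))  # Our basic plan starts at $10/month.
```

A rule-based bot like this is fully predictable; the unpredictability in the stories that follow comes from bots that learn their responses instead of having them written in advance.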
In a Facebook experiment last year, two chatbots were tasked with negotiating with each other. But instead of sticking to English, they developed a private language that their human operators couldn’t understand.
It was a case of the robots creating a more efficient way to do what they had to do — communicate — and the humans eventually turned them off.
According to Burks, the problem is with the data we put in, not with the algorithms crunching the data. He referenced Microsoft’s racist chatbot as an example of this.
In March 2016, Microsoft launched Tay, a Twitter chatbot meant to test and improve Microsoft’s understanding of conversational language. But Twitter trolls flooded the bot with racist and misogynistic comments.
From these interactions, Tay learned to tweet what Microsoft admitted was “wildly inappropriate and reprehensible words and images,” and had to be deactivated within the first 24 hours.
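The Tay incident illustrates Burks’s “data, not algorithms” point: a learner that imitates its inputs will faithfully reproduce whatever it is fed. A toy word-frequency “parrot” (hypothetical, and vastly simpler than Tay) makes the mechanism visible:

```python
from collections import Counter

def train_parrot(messages):
    """Learn which words appear most often in the training data."""
    words = " ".join(messages).lower().split()
    return Counter(words)

def speak(model, n=2):
    """'Generate' speech by repeating the most common words seen."""
    return " ".join(word for word, _ in model.most_common(n))

clean = train_parrot(["have a nice day", "nice weather today"])
polluted = train_parrot(["spam spam spam", "buy spam now"])
print(speak(clean))     # friendly words, from friendly input
print(speak(polluted))  # "spam" leads — the model mirrors its data
```

The learning rule is identical in both cases and contains nothing objectionable; only the training data differs. Flood such a learner with abusive input, as Twitter trolls did with Tay, and abusive output is exactly what the rule produces.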
“Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values,” said Peter Lee, corporate vice-president of Microsoft Healthcare, in a statement posted to the official Microsoft blog.
‘The human is still in charge’
Still, Burks doesn’t believe computer algorithms have outsmarted humans, even though he says Hollywood has scared us into thinking that’s the case.
“Even though computers are now moving to the point where they’re able to, in some cases, write their own rules for how they behave, those rules are still within the guidelines that we have programmed into the computer,” said Burks. “The human is still in charge.”
Dery agrees. He is confident programmers and software developers have their hands firmly on the steering wheel.
“You know there are good monsters [and] bad monsters,” said Dery. “We want to make sure that we put the regulations in place, the best practices in place, that will allow us to keep the good monsters and keep all the benefits of these amazing technologies.”