(as originally published on Entrepreneur)
For the past several decades, we have created computer programming languages and pushed entire generations across the globe to become engineers and learn how to code.
And the result?
We have succeeded beyond our wildest dreams (we’ve created machines that can now learn on their own), and we have failed beyond our worst nightmares (we’ve created “black box” artificial intelligence (AI) that we don’t, and can’t, understand).
It’s time for us to rethink the future we’re so effectively creating. I’m concerned by this trend of pouring our mental energy into understanding machines, rather than having machines understand us.
And I’m not alone.
Industry and thought leaders across a wide array of disciplines have advocated for having machines understand our languages, instead of having people understand machines’ languages. This argument has recently moved beyond code. The uncharted new territory of machine learning has made it virtually impossible to understand why machines make the decisions they do.
How machines talk to each other is worth a deeper discussion, especially since Google’s Neural Machine Translation system went live last year, and more recently, Microsoft researchers discovered that their AI had invented its own language to write code, one it could use to talk to machines and to other AIs, without us even understanding it. Eventually, this means that technology works out certain decisions on its own.
Skynet is already here.
Another worry of various technology leaders is that we are rapidly letting these intelligent machines assume responsibility for our daily lives. Driving our cars, screening our health, monitoring our children, pampering our pets: these machines are not just evolving our relationships with technology, but changing culture and society, and our place in them. In the case of self-driving cars, for instance, insurers may decide that human drivers pose an unacceptably high risk compared to autonomous cars, trucks, and trains.
Right now, two of the most popular digital apps use Machine Learning to make suggestions. Facebook uses Machine Learning to decide for us what news we see –and don’t see– in our feeds. And Google Photos uses Machine Learning to identify people in photos.
And then there are MADCOMs, which stands for Machine-Driven Communications tools; you may be more familiar with the term chatbots. These AI-driven bots are quickly outnumbering human communicators online. They browse the internet to understand people by autonomously gathering data. You have likely seen them in the form of AI-created advertising. With hundreds of thousands of chatbots, we expect to soon see a world where machines no longer learn from human content, but from other machine-generated content.
These trends will not only affect how we think, communicate, and see ourselves and each other, but also limit our ability to learn and grow, as the selection, variety, and exposure of content increasingly come from sources rooted in machine culture rather than human culture.
At the same time, machines have moved beyond working through a set standard of operations and tasks, into the more subjective field of “art.”
The art of music is complicated enough that humans have debated for centuries about tastes, styles, and what makes good music tick. While it’s never going to be a Mozart, today we already have AI that creates music on its own, and many people actually seem to like it. All this will affect the very fabric of our society, its arts and culture, even more than the internet and social media already have.
Smart devices and AI are taking an increasingly critical role in our lives. It’s up to us to decide whether we want our future to be more human or more machine. What makes us human are our languages, and the cultures and thinking patterns that come with them.