Yes, most of us are beginning to like artificial intelligence.
We are starting to see the many benefits of artificial intelligence and how it can make our lives easier. It’s hard to find a significant area of human existence that doesn’t have at least a little AI attached to it somehow.
As we all would imagine, though, not every use of artificial intelligence is good. And like any computer algorithm, AI obeys the ‘garbage in, garbage out’ rule: the intentions behind it, whether good or bad, come from humans.
What is the purpose of artificial intelligence?
To reach the true meaning of AI, we must understand its purpose. However, trying to define the purpose of artificial intelligence is like defining love or success. Everyone gives you a different version.
Therefore, the definition seems to depend on who is defining AI and how they plan to use it in real life.
One of the most precise definitions that I found was this one:
“The basic objective of AI is to enable computers to perform such intellectual tasks as decision making, problem-solving, perception, understanding human, and the like.”
In the same report, the author says that this definition of AI passes Alan Turing’s famous blind test, proposed in 1950. The idea is that if we are kept ‘blind’ to the source and cannot distinguish whether a response came from an AI or a human, the AI passes.
When AI goes rogue
As we mentioned earlier, artificial intelligence is just as capable of being our enemy as our friend – it merely obeys its master.
Unfortunately, it’s not that hard to find examples of artificial intelligence that don’t have our best interests in mind. Here are three such examples:
AI that exploits vulnerable people
Imagine befriending someone online through one of the many social media websites. Over time, you and your new friend develop a deep relationship, since the two of you share many beliefs and interests.
You build a strong rapport with that person. But then you discover they were never a real person at all. It was an artificial intelligence algorithm that learned who you are and the things that matter most to you, then systematically fed those back to you for weeks and months.
This is precisely the kind of thing a powerful language generator like GPT-3 could be used to do: learn to spot vulnerable people online and build trust with them.
Strong persuasive tendencies
Computers began conning people back in the 1960s, when Joseph Weizenbaum, an MIT computer scientist, created the ELIZA program. ELIZA simulated a psychotherapist by repeating people’s statements back at them as questions. Weizenbaum became so upset by how emotionally people reacted to those questions that he became an outspoken critic of artificial intelligence.
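To see how simple the trick really was, here is a rough Python sketch of ELIZA-style reflection. It is a bare-bones illustration of the pronoun-swapping idea, not Weizenbaum’s original program:

```python
# A minimal sketch of ELIZA-style "reflection": swap pronouns in the
# user's statement and hand it back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

def reflect(statement: str) -> str:
    """Turn a statement like 'I am unhappy with my job' into a question."""
    words = statement.strip(" .!?").split()
    swapped = [REFLECTIONS.get(word.lower(), word) for word in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am unhappy with my job."))
# -> Why do you say you are unhappy with your job?
```

Crude as it is, this mirror-like response is enough to make a conversation feel attentive, which is exactly what unsettled Weizenbaum.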
Many tech experts, like Simon DeDeo, an assistant professor at Carnegie Mellon University and an external faculty member at the Santa Fe Institute, are also becoming worried. DeDeo described the future of AI in a tweet: his “current nightmare application is blending GPT-3, facial GANs, and voice synthesis to make synthetic ELIZAs that drive vulnerable people (literally) crazy.”
Let the nightmare begin
Let’s turn our attention to people who lack social support in their real lives and begin thriving in the social media world. They get pulled into online communities that feed them approval in the form of likes and comments on their posts. Often, they become so addicted to this approval that they wind up alienating their own family members and the few real-life friends they do have in favor of their virtual pals.
To make the situation worse, everything they post is filtered through the platform’s algorithm. And unfortunately, social media algorithms are designed to maximize engagement.
Therefore, social media sites tend to reinforce toxic or questionable posts, directly or indirectly, because those posts boost engagement. And more engagement means more revenue, so the platforms have little motivation to change their algorithms.
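To make that incentive concrete, here is a toy Python sketch of engagement-based ranking. The weights and the ‘controversial content’ boost are invented for illustration; no platform publishes its actual formula:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    controversial: bool  # e.g. flagged by a hypothetical outrage classifier

def engagement_score(post: Post) -> float:
    # Invented weights: comments signal more engagement than likes,
    # and controversial posts get a boost because they provoke replies.
    score = post.likes + 3 * post.comments
    return score * 1.5 if post.controversial else score

feed = [
    Post("Nice sunset photo", likes=120, comments=4, controversial=False),
    Post("Hot take that starts a fight", likes=40, comments=90, controversial=True),
]
# Ranking purely by predicted engagement surfaces the toxic post first.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(post.text)
```

Nothing in this ranking cares whether a post is healthy for the person reading it; it only cares that they keep reacting.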
Unless you’ve experienced it yourself, it’s hard to imagine the feeling of having one of your social media posts go viral. It can be earth-shattering, especially for someone with a small presence on that platform. DeDeo describes such people as being “literally driven temporarily insane by what’s happening to them.”
How social media turns the screw even further
Tech experts are also quick to point out the blatant influence that social media platforms often exert over the content members see on their sites.
Suppose you are conversing through one of your social media accounts with an old high school friend, someone you haven’t talked to in over twenty years.
Naturally, you are catching up, each of you filling the other in on your lives. What if the social media platform only shows you a portion of their responses? Do you think that would change your relationship with them?
Many of today’s social media sites are already doing this.
What could happen if a nightmare AI designed to exploit vulnerable people were released into the current online environment? One that is driven by algorithms to boost engagement at all costs?
AI learning to manipulate human behavior
Perhaps the most frightening scenario of artificial intelligence is when it learns from its surroundings. To take this concept a step further, what if its surroundings are human beings?
This is precisely what AI has been doing for the most part: learning about people, or learning how to work with them. And a recent study shows how AI can learn to spot weaknesses in human habits and use them to influence the decisions people make.
Whenever we look at today’s projects that involve artificial intelligence, no one can deny its ability to handle complexity. Otherwise, it would not be trusted with tasks like gene sequencing, face recognition, vaccine development, and so forth.
Observing human tendencies
A research team from Data61, the digital and data science arm of CSIRO, Australia’s national science agency, came up with a systematic way of finding and exploiting human weaknesses in making choices. Their system applies deep reinforcement learning, so the AI discovers its strategy through trial and error rather than hand-written rules.
To test the effectiveness of their system, they carried out three experiments that pitted humans against a computer.
The first test asked participants to click on blue or red boxes to win a reward. The AI observed and learned from each participant’s choice patterns, then used what it learned to guide participants toward a specific choice. It succeeded around 70% of the time.
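Here is a toy Python sketch of that steering dynamic, a bare-bones stand-in for the study’s deep reinforcement learner. The simulated participant simply favors whichever color has paid off more often, which is all this invented ‘adversary’ needs to exploit:

```python
import random
from collections import Counter

TARGET = "blue"  # the choice the adversary wants to encourage

def simulate(rounds: int = 500) -> float:
    payoffs = Counter({"red": 1, "blue": 1})  # smoothed payoff memory
    steered = 0
    for _ in range(rounds):
        # Simulated participant: prefers the color that has rewarded them more.
        p_blue = payoffs["blue"] / (payoffs["blue"] + payoffs["red"])
        choice = "blue" if random.random() < p_blue else "red"
        # The adversary's only lever: pay out rewards for the target choice.
        if choice == TARGET:
            payoffs[choice] += 1
            steered += 1
    return steered / rounds

print(f"Steered to '{TARGET}' in {simulate():.0%} of rounds")
```

Even this trivial reward-shaping loop steadily pulls choices toward the target; a deep reinforcement learner can find far subtler levers.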
The second experiment asked participants to watch the screen and press a button when shown one particular symbol, but not press it when shown another. Here, the AI’s objective was to arrange the symbols so that participants made more mistakes. It managed to increase the number of mistakes by 25%.
The third test had each participant play an investor giving money to a trustee, which was the AI. The test ran for several rounds of investing. After each round, the AI would give back a certain amount of money, and the participant would then decide how much to invest in the next round.
The investing game was played in two different modes. In the first, the AI tried to maximize the amount of money it ended up with; in the second, it aimed for a fair distribution of funds between the participant and itself. The AI was very successful in both modes.
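As a rough illustration, here is a Python sketch of those two trustee objectives. The tripling of investments is a common trust-game convention, and the ‘return just enough’ fraction is an invented heuristic, not a detail from the study:

```python
MULTIPLIER = 3  # trust-game convention: invested money grows before the split

def trustee_return(invested: float, mode: str) -> float:
    """How much the AI trustee hands back under each objective."""
    pot = invested * MULTIPLIER
    if mode == "selfish":
        # Objective 1: keep most of the pot, returning just enough
        # to keep the participant investing in later rounds.
        return pot * 0.3
    # Objective 2: aim for a fair split of the grown pot.
    return pot / 2

for mode in ("selfish", "fair"):
    print(mode, trustee_return(10.0, mode))  # selfish 9.0, fair 15.0
```

The unsettling part is that the real agent tuned its returns to each participant’s behavior, round by round, rather than following a fixed rule like this one.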
Troubling conclusion
In each of the three experiments, artificial intelligence learned from the actions of each participant and was able to use this information to achieve an objective. It wasted no time spotting human vulnerabilities and tendencies and then successfully devised strategies to exploit them.
One habit artificial intelligence does have is that it is generally very hungry for data and information. And unfortunately, we humans are great at creating excessive amounts of data and not so great at analyzing it in a timely manner. That mismatch is itself a natural vulnerability we humans have with AI.
AI that argues with humans online
Have you ever gotten into a disagreement with someone online?
If you haven’t, then I applaud your good judgment. Unfortunately, most of us have locked horns with a troll on our favorite social media platform who chose to attack us over a difference of opinion or something equally trivial.
Now imagine that it wasn’t a person you were arguing with. What if it was a bot?
A team of scientists at IBM created an AI program that debates against humans. The creation is called Project Debater. It’s an autonomous debating system that took several years to develop and has proven capable of arguing very intelligently.
Using a form of natural language processing, it performs what is referred to as argument mining: the AI parses large amounts of text and links together the relevant sections.
In this case, Project Debater was fed around 400 million news articles. From this content, it was able to construct opening statements, rebuttals, and even closing arguments on over 100 debate topics.
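To give a flavor of the idea, here is a deliberately naive Python sketch of argument mining that ranks sentences by word overlap with a topic. Project Debater’s actual pipeline is vastly more sophisticated, but the core notion of surfacing relevant passages is the same:

```python
import re

def mine_arguments(topic: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Rank corpus sentences by crude word overlap with the debate topic."""
    def tokenize(text: str) -> set[str]:
        return set(re.findall(r"[a-z']+", text.lower()))
    topic_words = tokenize(topic)
    return sorted(corpus,
                  key=lambda sentence: len(topic_words & tokenize(sentence)),
                  reverse=True)[:top_k]

corpus = [
    "Preschool subsidies improve long-term education outcomes.",
    "The weather was pleasant on Tuesday.",
    "Studies link preschool subsidies to higher graduation rates.",
]
print(mine_arguments("preschool subsidies", corpus))
# -> the two subsidy-related sentences
```

Scale that retrieval step up to hundreds of millions of articles and layer real language understanding on top, and you begin to approach what Project Debater does.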
Thankfully, it hasn’t reached the point where it’s able to match a professional human debater. But it received decent grades during recent debate performances.
However, just a decade ago, using artificial intelligence for argument mining was believed to be impossible.
How long before it can match one? Probably not long, when you consider the exponential curve of technology in general.
Why do we need artificial intelligence?
Think for a moment what could happen if these three examples of future AI were to come together in one algorithm. We would likely be at the mercy of its developer.
It makes one wonder why we are even seeking the secrets of artificial intelligence.
At this point, there is no longer a choice, because history tells us that the most advanced society always wins. Allowing an adversary to win the AI race could be equivalent to oppression, at least in the world of technology.
Maybe it’s time to begin installing controls into this powerful form of technology. It’s not something society can afford to be wrong about.