The Futurist: Anticipations of Weaponized A.I.
Better to be in-the-know than blindsided
Of all the topics I've written about, this one scares me the most.
Yes, artificial intelligence, one of humanity’s greatest achievements, can also sow the seeds of our own destruction. Weaponized A.I. will range from relatively minor weapons designed to coerce a specific action to full-blown, nation-versus-nation warfare.
Artificial intelligence, while still in its infancy, is growing up fast. A recent Cylance survey showed 62 percent of security experts think we’ll see the first incidents of weaponized A.I. in less than a year.
Several aspects of A.I. make its use as an offensive weapon different than anything we’ve encountered in the past.
Attacks can be highly individualized, carefully directed toward the greatest vulnerabilities of key individuals and formed around specific threats, extortion, blackmail or intimidation.
The British TV show "Black Mirror" does a particularly good job of demonstrating how a simple threat can spiral out of control with its “Shut Up and Dance” episode.
In the hands of a terrorist, weaponized A.I. can also form around an unpredictable chaos engine, whose sole purpose is to disrupt as many people, places and things as possible.
Using next-gen A.I. masking tools, wrongdoers will maintain a distant relationship from the path of destruction they’ve created, hiding any direct ties to the puppet masters in the background.
Once a well-crafted A.I. weapon is launched, it can operate on its own, creating mayhem for months, years, perhaps even decades into the future.
Ironically, the greatest tool for fighting an A.I. weapon is more A.I. This will likely become our next major arms race with the smart good guys trying to stay one step ahead of the smart bad guys.
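To make the "A.I. fighting A.I." idea concrete, here is a deliberately minimal sketch of the defensive side: flagging hours whose activity spikes far above the historical baseline. The function name, the data and the threshold are all illustrative assumptions; real defensive systems are vastly more sophisticated than a z-score check.

```python
# Toy illustration of defensive A.I.: flag time slots whose event
# rate deviates sharply from the historical baseline.
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=2.0):
    """Return indices of slots whose count sits more than
    `threshold` standard deviations above the mean."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []  # perfectly flat history; nothing stands out
    return [i for i, c in enumerate(event_counts)
            if (c - mu) / sigma > threshold]

# Hypothetical hourly login-attempt counts; the spike at index 5
# might indicate an automated credential-stuffing campaign.
counts = [12, 14, 11, 13, 12, 250, 13, 12]
print(flag_anomalies(counts))  # → [5]
```

The arms-race point is that attackers will learn to stay just under whatever threshold the defenders set, forcing ever-smarter detectors.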
The purposes behind A.I. weapons will range from students out to change their grades, to terrorists threatening to destroy civilization.
Some of the ideas that follow have the potential to unleash unspeakable evil, and I’ve wrestled with whether or not to make these public. But after considerable reflection, I’ve concluded that anything I can think of, evildoers are also capable of coming up with.
For this reason, a well-informed public can be far better prepared for any of the treacheries or menacing plots that may lie ahead.
Starting with an Innocent Façade
Ayzenberg is an A.I. marketing company that leverages consumer social media activity, segmenting the data to create specific marketing strategies. Using a series of machine-learning algorithms, it can analyze social speech, along with what you see, post and share across all platforms.
Over time, Ayzenberg will know you better than you know yourself.
From a positive perspective, it will create more efficient systems for leveraging advertising dollars and will let you, as the consumer, see only the products and deals that interest you.
However, an A.I. system like this can uncover your vulnerabilities, weaknesses and liabilities.
In much the same way Google’s personalized marketing system delivers targeted ads, a weaponized intimidation engine will be capable of delivering highly targeted threats.
As A.I. cyber crimes escalate, we run the risk of our social structures deteriorating into invisible mafia-style communities, with blackmailers ruling the rest and few, if any, capable of understanding the behind-the-scenes war zones.
Understanding the Targets
People who live in obscurity, eking out a living just to keep their families afloat, generally have less to worry about. But they can still find themselves unwitting pawns in a much larger scheme.
The primary targets will mostly be fame-seekers. All the trappings of power and success make them the most vulnerable.
Virtually any person, put under a microscope, can be threatened with his or her own character flaws.
Perhaps the greatest danger comes from knowing personal weaknesses, and in most cases that means the person or thing a target cares about most. For an A.I. seeking leverage, the quickest results will come from the greatest point of leverage, and whether it’s a child, a parent, a valuable possession or someone’s reputation, one well-crafted threat can turn a mild concern into instant blackmail.
Virtually every situation presents an opportunity for weaponized A.I., but each will require different strategies, targets and techniques. Once a clear objective is put into place, A.I. will use a series of trial and error processes to find the optimal strategy.
A.I. tools will include incentives, pressures, threats, intimidation, accusations, theft and blackmail. All can be applied in some fashion to targeted individuals as well as those close to them.
If a $100,000 reward is offered to kidnap an eight-year-old girl, many who pride themselves on being law-abiding citizens will jump at the chance, knowing that if they don’t do it, someone else will.
Each of these “games” will be played until a final outcome has been achieved. In reality, there is little difference between this type of game and an A.I. playing Go, Jeopardy or chess.
1.) STOCK MARKET MANIPULATION – There are only a few highly influential market analysts who do the math to determine the true value of a stock. These people can be influenced without them ever knowing they’re being manipulated, or they can be outright threatened. This kind of manipulation can be accomplished by making a few key stocks look better and others worse. Most likely it will involve strategic people placing critical “buy” or “sell” orders at a specific time.
2.) BLACKMAILING A JUDGE – Judges will soon find themselves in a particularly vulnerable position. Even with juries present, judges remain the most critical influencer in any case’s outcome. Even with the FBI watching, veiled threats and paranoia can become insidious influencers.
3.) THREATENING POLITICOS – Living in the U.S., where we have many layers of government (city, state, county, special taxing districts, etc.), finding a politician to manipulate is relatively easy. In American democracy, an elected official who lives in the public eye under constant scrutiny can either be forced to “play ball” or be replaced by someone who will.
4.) HIJACKING A CITY – Every city is made up of interdependent systems that function symbiotically with their constituency. Stoplights, water, electric, sewage, traffic control, etc., are just a few of the obvious trigger points. Once A.I. disables a single city, the attack can easily be replicated to affect many more.
5.) FUNDING A STARTUP – With the right set of circumstances, every round of funding can be turned into a bidding war.
6.) DESTROYING A RELIGION – The shortest route to turn faith on its head is with scandal and controversy. While every religious organization has its share, leveraging an incessant string of threats, confessions and lies can drive a serious wedge between leaders and followers. Other mitigating factors that can speed the demise will be things like significant financial loss, claims of false doctrine, overt favoritism or theft.
7.) DESTROYING A NATION – At the core of every country are its financial systems. Turning a nation into a game board, with its currency as the defining metric, weaponized A.I. could be directed to attack essential communication and power systems. Once those are disabled, the next wave of attacks could focus on airports, banks, hospitals, grocery stores and emergency services. Every system has its weakest link, and this kind of weaponry will be relentless until each point of failure is exploited and the currency goes into freefall.
Key Points of Intimidation
Throughout society there are people of influence who are critical for maintaining the systems, business operations and processes that govern our lives. These individuals are most at risk of becoming targets of weaponized A.I.
- Stock Analysts
- Newspaper Editors
- Corporate CEOs
- Medical Doctors
- Military Generals
- Insurance Company Executives
- Venture Capitalists – Can a VC be coerced into producing a well-funded term sheet with favorable conditions?
- Angel Investors – For every VC there are potentially hundreds of angel investors.
- Bankers – Can bankers be forced to issue a loan?
- Corporate Investors – Since corporations aren’t as personally accountable for investment decisions, their support may be easier to coerce.
- Accelerators – Winners and losers in an accelerator competition are often only a single vote apart.
- Grant-Makers – Every philanthropic process boils down to a few decision-makers.
- Foundations – Virtually every foundation grant has exceptions to the normal funding criteria. In these kinds of scenarios, it comes down to the judgment of the gifting few.
- Sponsors – Many of these relationships are worth millions.
Landmark Decisions in the Future
Will people or machines be shaping our lives?
- Should cryptocurrencies replace national currencies?
- Should we have a single world leader?
- Should dying languages be resuscitated?
- How should life and death decisions be made in the future?
Every major system has the potential to be hijacked by evil A.I. in the future. Whether through the tech itself, the people who control it, or some combination of the two, virtually all future systems will be vulnerable.
- Stock Exchanges
- Power Plants
- City Water Supply
- Security Systems
- Cloud Storage Systems
- Election Systems
As our equipment becomes more universally connected to the web, commandeered devices will become an ongoing concern. For example, the same drone that delivers packages can also deliver bombs or poison, or spy on your kids.
- Flying Drones
- Driverless Cars
- IoT Devices
- Delivery Trucks
- Data Centers
- Smart Houses
Those who thought privacy wasn’t all that important in the past will quickly come to an entirely different conclusion once weaponized A.I. touches them directly.
Privacy has a way of masking our personal foibles and overall weaknesses. Look for an entirely new wave of privacy concerns and demands to take center stage in the coming years.
Until recently I had largely dismissed the warnings of Elon Musk, Bill Gates and Stephen Hawking about the dangers of A.I. Yes, the super-advanced A.I. they’re talking about will be problematic on many levels, but we’re still years away from that.
The part I was missing was not artificial intelligence itself, but rather the sinister people capable of controlling it from the background.
Weaponized A.I. is coming. The first iteration will be crude and poorly implemented, but the second and third generation of this technology could be far more menacing.
Once again, the greatest tool for fighting weaponized A.I. is more A.I.
The only way to minimize the threat is by upping the ante and creating more powerful machines to combat the bad guys.
We cannot turn back the hands of time or suddenly ban all further research. Progress will happen with or without our blessing.
Instead, we must navigate the coming dicey years in the same fashion we’ve worked through other dangerous technologies and threats like nuclear weapons, chemical warfare and suicide bombings.
It’s never easy, but in the end the benefits will far outweigh the penalties we must endure.
But please don’t think that I have all the answers. Let us know what you think. Will we survive the murky times ahead, or have we gotten ahead of our capabilities and now face a no-win situation?