Q2 ’16:   Artificial Intelligence – Blessing or Curse?

2016 has already been an outlier for unexpected events. Political turbulence and violent outbursts globally are on the rise. At home, the ambush murders of police officers in Dallas and Baton Rouge have shocked the nation. Politically, the ascendance of Donald Trump as the Republican nominee for President is enough to let us know that we're not in Kansas anymore.

But behind the turbulence and media-driven distractions, the big trends continue to unfold. Chief among them is the transformation of our society by technology, especially the emergence of artificial intelligence (AI).


John McCarthy, who coined the term “artificial intelligence” in 1956, defined it as “the science and engineering of making intelligent machines.”

“Basically, we’re talking about computer systems and algorithms that can form conclusions and determine their actions without direct human intervention. That doesn’t mean that they have human-like minds, but they may be capable of equaling—and often exceeding—human cognitive capacities with regard to specific tasks. In the broadest sense, Google Maps is employing A.I. when it helps you find a route to your destination. And the self-driving cars that might soon carry us along those routes are using A.I. to evaluate road conditions and otherwise keep us safe.” —Jacob Brogan


AI is opening virtually every field to new possibilities. Medical technology is transforming the practice of medicine—giving hearing to the deaf, sight to the blind, replacement body parts, cures for previously untreatable diseases. Engineering, robotics, education, financial management, manufacturing, publishing, communications, entertainment are all being impacted.

AI expert systems are providing medical diagnosis, strategic planning, voice and face recognition, market analysis, real-time process control for space missions, planning for clean-energy and environmental clean-up projects, and educational support such as test assessment and AI tutoring.

As the technology continues to mature, there are no fields of human activity that will not be transformed. Ultimately, the very nature of what it means to be human will change as we enter the predicted Singularity, when humans merge with their technology, creating a new life form with extraordinary powers and an indefinite lifespan. For more on the Singularity, see "The Singularity Is Near" by Ray Kurzweil, the oracle of technology development.

The Problem

One after another, some of our brightest and most successful citizens have been raising the alarm about the potential dangers of AI.

“When it (evolved AI) eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”
Stephen Hawking

"I think we should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it's probably that."
Elon Musk

“I am in the camp that is concerned about super intelligence… I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
Bill Gates

“This is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself – as well as to vast numbers of others.”
Bill Joy

Those quoted above are the “high priests” of our machine culture: premier scientists and tech entrepreneurs—the people who are most intimately familiar with technology. If they are concerned that there is a problem with AI, we should probably listen to their concerns.

But what are the specific concerns of our techno-elites? Given the vague, yet existential nature of the threat, we can conjure up some pretty terrifying images…Terminator, Matrix, Ex Machina. Hollywood has taken the concept to the bank.

Are they actually worried about sentient killing machines rising up to wipe out humanity? Not really. Not in the near term anyway.

The Future of Life Institute, an organization backed by Elon Musk, published a letter in 2015 co-signed by thousands of researchers, scientists, and other tech-savvy citizens raising their concern about "autonomous weapons." The letter is really a plea to world governments to forgo the development of autonomous weapons systems, and the AI arms race that would inevitably result.

In other words, the issue raised is not what the machines will do to humans; it's what humans will do to each other with the machines, and the misdirection of resources away from the development of beneficial applications.

Longer term, if there is something that gives the high priests of machine culture bad dreams, it IS the prospect of their creations becoming self-aware and rising up against humanity. Why would they think this is even possible?

For the most part, the creators of AI are steeped in a materialist world view: that life emerged spontaneously from the primordial void, that all life forms and the human brain evolved over eons from that first spark of life, and, most importantly, that intelligence and consciousness are emergent features of the computational complexity of the brain. (It is notable that this world view also holds that a chimp given a keyboard and enough iterations would eventually pound out "War and Peace.")

Given their world view, it is not so surprising that our elite technologists have bad dreams about their creations becoming sentient beings that might consider biological humans a threat.

But is this a scientific world view? Or is it a belief system? Is it any more or less scientific than the belief that all creation—from the void of interstellar space to star systems to human life—is one great whole that is life, and is inherently conscious?…or even the Creationist notion that there is a deity somewhere in the heavens who created the world by his (His!) will, fully formed, in 6 Earth days?

We all adopt beliefs regarding the origins of life and our place in the universe to give us some sense of comfort about the great mystery of our very existence in this incomprehensibly vast universe. The technologists are no different in this regard.

Even so, opinion on this matter in the tech sector is not monolithic. See “The Myth of Sentient Machines” by Bobby Azarian in Psychology Today, which points out that even “a perfectly accurate computer simulation of a brain would not have consciousness like a real brain, just as a simulation of a black hole won’t cause your computer and room to implode.” Also, see “The Fascinating Truth About Why Artificial Intelligence Won’t Take Over the World” by Sean Miller. Miller’s takedown of the “cult of scientism…practicing algorythmancy” is a classic.

If, however, the machines were to become superintelligent, autonomous, sentient beings, when might this happen? Projections of the exponential growth of computing power suggest that by 2030 a single $1,000 computer will exceed the computing power of the human brain. By 2050 that single computer will exceed the computing capacity of ALL human brains combined.
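The arithmetic behind projections like these is simple compound doubling. The sketch below is a toy model, not a forecast: every constant in it (the brain's operations per second, the 2016 price-performance of hardware, the fixed doubling period) is an assumption chosen for illustration, roughly following Kurzweil's often-cited estimates.

```python
# Toy Moore's-law projection: when might $1,000 of compute match a human brain?
# All constants below are illustrative assumptions, not measurements.

BRAIN_OPS = 1e16        # assumed ops/sec of one human brain (Kurzweil-style estimate)
ALL_BRAINS_OPS = 1e26   # ~10 billion brains at the same rough order of magnitude
START_YEAR = 2016
START_OPS = 1e13        # assumed ops/sec available per $1,000 of hardware in 2016
DOUBLING_YEARS = 1.5    # classic Moore's-law doubling period, assumed to hold forever

def crossover_year(target_ops):
    """Return the year when $1,000 of compute first reaches target_ops
    under a fixed doubling period."""
    ops = START_OPS
    years = 0.0
    while ops < target_ops:
        ops *= 2            # one doubling of price-performance
        years += DOUBLING_YEARS
    return START_YEAR + years

print(crossover_year(BRAIN_OPS))       # crossover for one human brain
print(crossover_year(ALL_BRAINS_OPS))  # crossover for all human brains combined
```

Note that a fixed 18-month doubling period lands the "all brains" crossover well past 2050; Kurzweil's earlier dates rest on his further assumption that the doubling period itself keeps shrinking over time.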

This is the time frame in which Kurzweil predicts the Singularity. Alternatively, this is when machines might become self-aware and decide to strike out on their own.

Economic and Political Implications

AI is giving us the means to create the world we choose…IF we step up to the responsibility.

If humans achieve the Singularity, war will be unthinkable. It’s not likely humanity would survive the destructive power available to combatants. But can we survive the transition?

Presently, our society is so enamored of the machines that we have lost the broader sense of life. Our culture has come to be all about machine values: repeatability, reliability, certainty, efficiency…efficiency above all.

We normal humans exhaust ourselves trying to compete with the machines, trying to become machine-like, while our civilization careens into chaos as a result of the economic disparity created by our primitive economic system running on the steroids of advanced technology.

The lack of balance in life is reflected in our leadership and public policy across the board, highlighted by the following post at the Bulletin of the Atomic Scientists, which maintains the Doomsday Clock, presently set at 3 minutes to midnight:

“Unchecked climate change, global nuclear weapons modernizations, and outsized nuclear weapons arsenals pose extraordinary and undeniable threats to the continued existence of humanity, and world leaders have failed to act with the speed or on the scale required to protect citizens from potential catastrophe. These failures of political leadership endanger every person on Earth.”

The real challenge of AI is economic and political. It is not autonomous killing machines, as frightening as they are. We are quite capable of wiping ourselves out without such tools. The atom bomb will do just fine. If we continue in our primitive ways, the machines will just make the killing more efficient.

We have been raised on the false premise of binary choices…management OR labor, Capitalism OR Socialism, conservative OR liberal, this religion OR that religion. These are false choices—primitive models that divide us and lead to endless conflict. The truth of the matter is that we need both management AND labor, Capitalism AND Socialism, conservatives AND liberals. And we need the essence of ALL religions.

Like our primitive politics, our economic system is a vestige of a world that is no more. When emerging from a more primitive world, it made sense that the fruit of economy should accrue to those who had the means and knowledge to organize society and put capital to productive use. Today the means and knowledge are ubiquitous, but capital is still aggregating to the few, who increasingly don’t know what to do with it.

Excessive concentrations of capital are creating asset bubbles ($100 million NY penthouse) and turning to non-productive rent seeking while our infrastructure is crumbling and our schools are graduating students unprepared for the world they are entering. (The tools and processes required to build a house are not the same as those required to operate and maintain a house.)

Big corporations are deploying big data and advanced algorithms to know our every interest and tendency more intimately than we know ourselves, and they are using that information to front-run our every move. At the same time, they shift assets and loyalties around the world to minimize taxes and responsibilities, vacuuming up the fruit of the economy for the benefit of a relatively small number of families, to the detriment of the remaining billions who compete for the leftover crumbs.

Our technology-driven, winner-take-all, "creative destruction" casino economy has enabled early adopters to suck the general wealth out of the economy and destroy the great American middle class in a single generation.

However, just applying stale Socialist thinking to this problem is not going to solve our dilemma. Take a good look at Russia, or Venezuela, or Cuba, to see where that kind of thinking gets you. We need to transcend the old left/right, liberal/conservative paradigm. Our technology is enabling us to transcend the limitations of the physical world; it can also enable us to transcend the limitations of this stifling old binary political theology. The truth of the matter is that left/right, conservative/liberal are the left and right legs of the body politic. They are both needed to move forward.

We have a choice: we can use our rapidly expanding technology to create a better world for everyone, or we can descend into a techno-dystopia (think Elysium), where elites live in luxurious walled off compounds and the vast majority live in soul crushing poverty. That world will eventually erupt into revolution, or worse, and, given the destructive capabilities bestowed by our advancing technology, quite possibly hasten the end of human civilization on Earth.

Seems like a pretty obvious choice. And it is. But presently we are headed in precisely the wrong direction. We need a global awakening and a transformation of our politics to meet this challenge and fashion a positive outcome. See my upcoming book, The Politics of Unity.

Predicting the Future

Those who were around in the late '50s and '60s might remember the projections, often featured in the cartoon section of the Sunday paper, of the impact that developing technology would have on our lives. Chief among the benefits was…wait for it…leisure time…Hahahahaha! Hahahahaha!!!

As Yogi Berra put it: “It’s tough to make predictions, especially about the future.”

The future never works out the way we think it will. But problems are mitigated first by our awareness of them, then by action to solve them. That we are increasingly aware of the potential danger from AI is itself a positive sign, and it gives us hope that the dangers will be met and mitigated, even though we don't yet know in what form the greatest threats will present themselves, or what tools AI itself will give us to counter them.

In the real world, the greater danger lies not in sentient, autonomous AI coming into conflict with biological humans, but in malefic actors, human actors, applying the vast problem solving ability and potentially planet destroying power bestowed by AI within the context of our primitive, tribal, violent politics. In other words, WE are the real existential threat to ourselves, not the technology.

As Albert Einstein noted, “The splitting of the atom changed everything, except our thinking.” I say it’s long past time to complete the change.

I’ll give Ray Kurzweil the last word:

“Ultimately, the most important approach we can take to keep AI safe is to work on our human governance and social institutions. We are already a human machine civilization…The best way to avoid destructive conflict in the future is to continue the advance of our social ideals, which has already greatly reduced violence.”