Daniel Faggella is the founder and CEO of Emerj. He is an expert on the competitive strategy implications of artificial intelligence (AI).
He writes: “Over the last two years, there has been a general ‘uptick’ in media attention around the risks of artificial general intelligence, and it seems safe to say that, though Bill Gates, Stephen Hawking and many others have publicly articulated their fears, no one has moved the media needle more than Elon Musk.”
He quotes Musk as follows: “With AI, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like… wink… yeah, he’s sure he can control the demon… doesn’t work out” and “I’m inclined to think that there should be some regulatory oversight… at the national and international level… just to make sure that we don’t do something very foolish.”
I am not a fan of AI. Take the average day: temperatures can vary by 20 ºC, the wind can blow from at least four directions or not at all, and it can be rainy, misty, cloudy, clear or hazy. Thus, there are numerous conditions that may influence the way an AI system makes a decision, and all of them have to be programmed into it. Alternatively, the AI can be a system that teaches itself. For example, a helicopter instructor switches on the AI and then flies a perfect liftoff and departure and a perfect auto-rotation (helicopter descent with the engine off). Thereafter, the AI system will be able to do these things at the touch of a button. But the helicopter must be at the right place at the right height – it is a poor idea to take off and plough into a hangar, or to try to auto-rotate when 30 m off the ground.
So, inputs will be required from the altimeter, GPS and collision-avoidance radar. Oh, and wind – can’t fly above 30 knots. And compass. And aircraft fuel quantity. And passenger and cargo weight. And rotor rpm. And engine rpm. And engine manifold pressure. And engine alarm system. And… if something gets in the way on the ground, the collision avoidance will probably make the aircraft crash.
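The sprawl of inputs above can be sketched as a simple pre-flight check. To be clear, every sensor name and limit below is an invented illustration (only the 30-knot wind limit comes from the text); this is not any real avionics interface:

```python
# Illustrative sketch only: the sensor names and thresholds are invented
# for this example and do not correspond to any real avionics system.

def safe_to_lift_off(sensors: dict) -> bool:
    """Return True only if every (illustrative) precondition holds."""
    checks = [
        sensors["wind_knots"] <= 30,         # can't fly above 30 knots of wind
        sensors["altitude_m"] == 0,          # must start on the ground
        sensors["fuel_fraction"] > 0.10,     # keep some reserve fuel
        not sensors["obstacle_ahead"],       # collision-avoidance radar clear
    ]
    return all(checks)

print(safe_to_lift_off({
    "wind_knots": 12,
    "altitude_m": 0,
    "fuel_fraction": 0.8,
    "obstacle_ahead": False,
}))  # True
```

The point of the sketch is how quickly the list of preconditions grows: each new sensor in the paragraph above adds another line to `checks`, and the system is only as good as the conditions someone remembered to write down.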
Further, we have just seen two crashes of Boeing 737 MAX aircraft. In both accidents, pilots could not recover from automated commands issued by the 737 MAX’s new Manoeuvring Characteristics Augmentation System (MCAS), which repeatedly pushed the aeroplane’s nose down in response to erroneous flight data. This is AI gone wrong – badly programmed or, more likely, a device that was not needed at all.
AI shows itself in game playing and is supreme in chess and the game of Go. But not in the game of contract bridge. As Warren Buffett, a bridge player, reputedly said, “Playing bridge is like running a business. It’s about hunting, chasing, nuance, deception, reward, danger, cooperation and, on a good day, victory.” I remember a French bridge championship. Declarer led an ace. The player to his left played a jack. This seemed to indicate that he had no more cards of that suit, and so declarer did not lead his king. But it was a bluff, and it succeeded.
There is no way that an AI system would work that out. The essence of the problem is that, while bridge apps work well, the AI in them knows the contents of every hand. In human bridge, each player sees only one hand; the contents of the other three are unknown. This makes AI for bridge nearly impossible.
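The hidden-information point can be made concrete with a toy Monte Carlo sketch: when three hands are unseen, the best a program can do is sample deals consistent with what it knows and average over them, producing a probability rather than certainty. Everything here (the card labels, the question asked) is an invented simplification, not a real bridge engine:

```python
import random

# Toy illustration of imperfect information: with three hands hidden,
# a program can only *sample* consistent deals and estimate probabilities.
# The deck labels and the question asked are simplifications, not bridge play.

def estimate_left_opponent_has_king(my_hand, deck, trials=10_000, seed=0):
    """Estimate P(left-hand opponent holds the K♠) by random dealing."""
    rng = random.Random(seed)
    unseen = [card for card in deck if card not in my_hand]  # 39 hidden cards
    hits = 0
    for _ in range(trials):
        rng.shuffle(unseen)
        left_opponent = unseen[:13]  # deal 13 of the hidden cards to the left
        if "K♠" in left_opponent:
            hits += 1
    return hits / trials  # an estimate, never knowledge

deck = [f"{rank}{suit}" for rank in "23456789TJQKA" for suit in "♠♥♦♣"]
my_hand = deck[:13]  # pretend these 13 cards are ours (the K♠ is not among them)
print(estimate_left_opponent_has_king(my_hand, deck))  # ≈ 13/39, i.e. about 0.33
```

Each hidden hand gets 13 of the 39 unseen cards, so the true probability is 13/39 = 1/3, and the simulation hovers around that. This is exactly why the jack-as-bluff play in the anecdote works: the machine can compute odds over hidden hands, but a deliberate falsecard is designed to make those odds mislead.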
So, what is Musk afraid of? Displacement of workers by machines? That has been a fear for hundreds of years, and nothing catastrophic has come of it yet. AI making mistakes that kill people? Um, well, ask Boeing and Tesla about that. Machines taking over the world? It is hardly a worry. Right now, the machines cannot even play contract bridge. When the best bridge player in the world is a machine, you could start worrying. But really not until then.