BY: Reuven Cohen
Google is without question one of the most innovative companies on the planet. It’s a company known mostly for its amazingly successful search and advertising businesses, and it will probably be known for those for the foreseeable future. But lately it’s also quickly becoming known for a rather unorthodox array of secondary business efforts. These include driverless cars, wearable technology (Google Glass), human-like robotics, high-altitude Internet-broadcasting balloons, contact lenses that monitor glucose in tears, and even an effort to potentially solve death.
Within all these various and sometimes bizarre efforts is a common guiding principle. Google doesn’t just attempt to take incremental steps when it comes to technology. It takes what a recent Time Magazine profile described as “Moon Shots.” Yet within these Moon Shots lies a method to the apparent madness. I decided to do a little digging to see if I could find out what that is.
My question is simple: why is a company built on finding information and serving up ads spending vast amounts on a variety of outlandish projects?
Adding to Google’s mixture of eccentric acquisitions is word that this week it acquired artificial intelligence (AI) startup DeepMind, a London-based company the tech giant bought for an estimated minimum of $500 million. According to Re/code, which broke the story, the purchase “is in large part an artificial intelligence talent acquisition.” Re/code notes that DeepMind has a team of at least 50 people and has secured more than $50 million in funding, calling it “the last large independent company with a strong focus on artificial intelligence.” DeepMind was founded by 37-year-old former child chess prodigy Demis Hassabis, who was once called “probably the best games player in history” by the Mind Sports Olympiad. Interestingly, Facebook was reportedly also attempting to buy the company.
DeepMind joins a growing list of robotics and AI companies recently purchased by Google, including Boston Dynamics, its eighth robotics acquisition in the past few months. Boston Dynamics’ robots trade conventional wheels for legs, looking and moving more like humans or even certain kinds of animals; the company is also a leading provider of human-simulation software. Two of its bipedal robots, Atlas and Petman, have degrees of freedom that can only be matched by human beings. Its primary customers are the US Army, Navy and Marine Corps. Other recent Google acquisitions include Flutter, which specializes in gesture recognition, and, most recently, Nest, which Google bought for $3.2 billion and which provides smart household items like thermostats and smoke detectors for the Internet of Things.
Google’s DeepMind acquisition led it to establish, at the smaller company’s insistence, a DeepMind-Google ethics board that will set standards for the use of the AI technology within Google, ensuring it does “no evil.” This ethics board sounds a lot like the famous “Three Laws of Robotics” from the 1942 short story “Runaround” by science fiction author Isaac Asimov:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
But I digress. Besides adding deep talent to an already deep talent pool, the broader question is why Google would spend an estimated half a billion dollars on a “talent acquisition,” and what is behind this obsession with artificial intelligence and robots? All of the companies it has acquired in the AI and robotics space currently sit within its Google X division, a semi-secret facility dedicated to making major technological advancements. Work at the lab is overseen by Sergey Brin, one of Google’s co-founders, and by scientist and entrepreneur Astro Teller. Teller says that they aim to improve technologies by a factor of 10, and to think of “science fiction-sounding solutions.”
A recent post on The Guardian sheds light on the potential rationale: “What drives the Google founders is an acute understanding of the possibilities that long-term developments in information technology have deposited in mankind’s lap. Computing power has been doubling every 18 months since 1956. Bandwidth has been tripling and electronic storage capacity has been quadrupling every year. Put those trends together and the only reasonable inference is that our assumptions about what networked machines can and cannot do need urgently to be updated.”
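It is easy to underestimate how quickly those trends compound. A quick back-of-the-envelope calculation, taking the Guardian’s quoted rates at face value (real-world figures are messier), shows what a single decade of compounding does:

```python
# Compound the growth rates quoted above over one decade.
# Assumes the quoted rates hold exactly -- a simplification.

def growth_factor(factor_per_period, period_years, total_years):
    """Total multiplier after compounding for total_years."""
    return factor_per_period ** (total_years / period_years)

# Computing power: doubling every 18 months (1.5 years).
compute = growth_factor(2, 1.5, 10)
# Bandwidth: tripling every year.
bandwidth = growth_factor(3, 1, 10)
# Storage: quadrupling every year.
storage = growth_factor(4, 1, 10)

print(f"Over 10 years: compute x{compute:,.0f}, "
      f"bandwidth x{bandwidth:,.0f}, storage x{storage:,.0f}")
# Compute grows ~100-fold, bandwidth ~59,000-fold, storage ~1,000,000-fold.
```

At those rates, the constraints a planner assumed a decade ago are off by factors of thousands, which is exactly why the Guardian argues our assumptions need urgent updating.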
According to the company’s research portal, the answer is simple. Much of the fundamental infrastructure within Google is based on language, speech, translation, and visual processing, all of which depends on so-called machine learning and AI. A common thread among these tasks and many others at Google is that they gather unimaginably large volumes of direct or indirect data. This data provides what the company calls “evidence of relationships of interest,” which it then feeds to adaptive learning algorithms. In turn, these smart algorithms create new potential opportunities in areas the rest of us have yet to grasp. In short, Google may very well be attempting to predict the future based on the search and web-surfing habits of the millions who visit the company’s products and services every day. They know what we want before we do.
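To make the idea of “evidence of relationships of interest” concrete, here is a deliberately toy sketch of an adaptive learning algorithm: given observed data points, it learns the underlying relationship by gradient descent. This is an illustration of the general technique only, not anything resembling Google’s production systems:

```python
# Toy "adaptive learning": fit a linear relationship to observed
# data by gradient descent on squared error. Purely illustrative.

def fit_line(points, lr=0.01, steps=2000):
    """Learn slope w and intercept b minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Evidence of a relationship": observations following y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]
w, b = fit_line(data)
print(f"learned y = {w:.2f}x + {b:.2f}")  # close to y = 2.00x + 1.00
```

Scale the same loop up by many orders of magnitude in data, parameters, and model complexity, and you have the rough shape of what makes search ranking, translation, and speech recognition work.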
Along with the billions of dollars Google is spending on various cutting-edge companies, in May of 2013 it launched a Quantum Artificial Intelligence Lab to study how quantum computing might advance machine learning and artificial intelligence. For Google, this obsession with artificial intelligence and robotics may very well be about building better models of the world so they can make more accurate predictions of future outcomes. If Google wants to cure diseases, they need better models of how they develop. If they want cars to drive by themselves, they need better models of how transportation networks operate. If they want to create effective environmental policies, they need better models of what’s happening to our climate. And if Google wants to build a more useful search engine, they need to better understand you and how you interact with what’s on the web, so you get the best answer tailored specifically for you. Or maybe they just want to create an autonomous robot army, but that sounds crazy. Or does it?