Back in 2011, Danny Crichton was an associate product manager intern who got to work on a secret internal project, codenamed “Emerald Sea”.
Emerald Sea eventually became Google+, the social media site that was supposed to be the search giant’s answer to Facebook and Twitter, but ended up floundering.
So where did Google+ go wrong? In an article for TechCrunch, Crichton spills the beans, noting the early warning signs were ominous:
After about 10 minutes of using it, I got distracted and left to find some food. I never reopened the product that day, an ominous sign in retrospect. Later, I hung out with some of the other new graduates who had just joined the team, and there was an awkward silence that can only come when the emperor walks by without his clothes on. No one could figure out why anyone would use this. As we would learn soon, we weren’t the only ones.
A key problem, Crichton writes, was the project's clear lack of focus:
Google+ was also weighed down by its expected integration with every product at Google. It was literally everything to everyone. It was supposed to facilitate the future of chat (“Google Talk”) through Hangouts, be the glue for games across Android and the web, handle the future of news on Google News and Google Reader, and myriad other priorities, in addition to being the personal data layer across the company and a delightful social product to boot.
One of the key lessons I learned from the experience that I have drilled into every founder I have worked with is that focus is absolutely everything. As soon as you have two goals, even one that is minor, you start heading toward the center of the convex set of solutions, and your product deeply suffers.
Another important lesson, for anyone building a product with a social component, is to grow a community slowly and organically:
Building a social product is a deeply organic process. It starts with a spark of imagination, but is refined through careful feedback in the early months of the product’s launch to adjust parameters and features. This process cannot be short-circuited, nor can it be emulated through a beta test… Google+ never had such an opportunity. From the very first day of its public launch, millions of people signed up for the service, wanting entry into Google’s newest product. This was the benefit and curse of building a social product at Google: Everyone wants to use it, but you have no way of knowing whether they would stay after the initial excitement wore off.
Crichton’s report is an important cautionary tale for any business developing a product or service, especially one with a social or community focus.
The great untold story of Larry Page’s comeback
The rise, fall and return of Apple’s cofounder and former chief executive, Steve Jobs, is one of the best-known stories in the tech industry.
What fewer people realise is that a similar fate befell Larry Page, cofounder and chief executive of Apple’s rival, Google.
Over at Business Insider, Nicholas Carlson reveals the events that led to Page standing down as chief executive:
One day in July 2001, Larry Page decided to fire Google’s project managers. All of them…
Google was doing really well. It had millions of users, an impressive list of investors, and 400 employees, including about a half-dozen project managers.
As at most startups, in Google’s first year there were no management layers between the CEO, Page, and the engineers. But as the company grew, it added a layer of managers, people who could meet with Page and the rest of Google’s senior executives and give the engineers prioritised orders and deadlines.
Page, now 28, hated it. Since Google hired only the most talented engineers, he thought that extra layer of supervision was not just unnecessary but also an impediment. He also suspected that Google’s project managers were steering engineers away from working on projects that were personally important to him. For example, Page had outlined a plan to scan all the world’s books and make them searchable online, but somehow no one was working on it. Page blamed the project managers.
Soon after, investors convinced Page to stand aside in favour of former Novell chief executive Eric Schmidt. But even after stepping down, Page remained an important force at the company:
Everyone inside Google still regarded Larry Page as their ultimate boss. He approved every hire, and it was his signature on the day of Google’s initial public offering, Aug. 19, 2004, that turned hundreds of people into millionaires — and Page himself into a billionaire.
While Schmidt was content to see Google turn into a corporatised search and advertising business, the increasingly distant Page quietly set his sights on a startup called Android, run by a man named Andy Rubin:
The Android acquisition was a Larry Page production. Page didn’t tell Schmidt about the deal — which set the company back about $US50 million — until after it was done. Brin knew all about it, but he didn’t take much interest.
Page set up Android as a separate entity, one that was only nominally a part of Google, and allowed Rubin wide latitude to run it without interference from the parent company. Android even had its own building, one that regular Googlers couldn’t access with their employee badges. Schmidt essentially acted as if it didn’t exist, mostly because $US50 million wasn’t enough of Google’s massive money pile for him to worry if it had been well spent.
As Carlson reveals, the Android project set off a chain of events that would see Page return to the helm of the company he cofounded.
The dangers of tech monocultures
Dr Dan Geer is a highly regarded cybersecurity expert, currently serving as chief information security officer at In-Q-Tel, the CIA’s strategic investment arm. In an article for Lawfare, Geer has a warning about monocultures:
Nature teaches that where there is enough prey, there will be predators. Nature teaches that when a predator gains an advantage, it will consume prey as fast as it can metabolize them, until the prey are too few to sustain the predators’ numbers. Nature teaches that monocultures are so unnatural as to require constant intervention to maintain.
According to Geer, a similar phenomenon occurs when a single piece of software is widely deployed, as was the case recently with the notorious Heartbleed bug in OpenSSL:
Recent headlines have been all about a coding error in a core Internet protocol that got named “Heartbleed.” It is serious. It was hiding in plain view. If it wasn’t exploited before its announcement, it most certainly has been after. It is hard to fix.
That Heartbleed is hard to fix is not because there is some technical difficulty in fixing a single line of code; it is hard to fix operationally — the error is so widely deployed that removing its effects will take time. Whether such a simple error could have been detected before it was so widely deployed is being debated with all the vigor of stiff-necked 20/20 hindsight.
When deployment is wide enough, it takes on the misfeatures of monoculture.
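To make the shape of the flaw concrete, here is a rough sketch in C. It is not OpenSSL’s actual code, and the struct and function names are invented for illustration, but it shows the class of bug involved: a handler that trusts a length field supplied by the other side of the connection, and the one-line bounds check that fixes it.

```c
/*
 * Illustrative sketch only -- NOT OpenSSL's actual code.
 * The struct and function names are invented for this example.
 */
#include <stdio.h>
#include <string.h>

/* A toy "heartbeat" request: the peer sends some payload bytes and
 * separately claims how long that payload is. */
struct heartbeat_request {
    const unsigned char *payload;  /* bytes actually received */
    size_t received_len;           /* how many bytes really arrived */
    size_t claimed_len;            /* length the peer *says* it sent */
};

/* The vulnerable shape echoed back claimed_len bytes without checking
 * it against received_len, letting a lying peer read adjacent memory:
 *
 *     memcpy(reply, req->payload, req->claimed_len);   // over-read
 *
 * The version below adds the single bounds check that was missing. */
int handle_heartbeat(const struct heartbeat_request *req,
                     unsigned char *reply, size_t reply_size)
{
    if (req->claimed_len > req->received_len || req->claimed_len > reply_size)
        return -1;  /* reject: peer claimed more data than it sent */

    memcpy(reply, req->payload, req->claimed_len);
    return (int)req->claimed_len;
}

int main(void)
{
    const unsigned char payload[] = "bird";
    unsigned char reply[64];

    /* Honest request: claimed length matches what was sent. */
    struct heartbeat_request ok = { payload, 4, 4 };
    printf("honest request:    %d bytes echoed\n",
           handle_heartbeat(&ok, reply, sizeof reply));

    /* Malicious request: 4 bytes sent, tens of kilobytes claimed. */
    struct heartbeat_request evil = { payload, 4, 65535 };
    printf("malicious request: %d (rejected)\n",
           handle_heartbeat(&evil, reply, sizeof reply));

    return 0;
}
```

The code-level fix really is that small; the operational problem Geer describes is that the unchecked version had already been deployed across an enormous share of the internet’s servers.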
Geer argues that monocultures are dangerous because they create single points of failure and exploitation, and that complexity can make those risks worse:
The only aspect of monoculture that matters is that monocultures are the sine qua non of mass exploitation. In the language of statistics, this is “common mode failure,” … redundant copies of the same software process [failing] under identical conditions is exactly what monoculture invites and exactly what can be masked by complexity. Why? Because complexity ensures hidden levels of mutual dependence. In an Internet crowded with important parts of daily life, the chance of common mode failure is no idle worry — it is the sum of all worries.
With ever more embedded devices being connected to the internet of things, Geer warns that the risks of software monocultures and complexity are set to proliferate:
Heartbleed is getting its fifteen minutes of fame, but what may matter most is that so much of what is being deployed now is in the embedded systems space — network-capable microcontrollers inside everything that has a power cord or a fuel tank. No one watches these and they are treated as if immortal. They have no remote management capability. There is not even a guarantee that their maker knows with precision what went into any one of them after the model year is over.
In the full version of the piece, Geer also discusses some possible solutions to the problem. It’s well worth reading.
E.T. dumped in New Mexico landfill
The Atari game E.T. is widely regarded as one of the worst ever made. According to Kotaku’s Jason Schreier, the fate of its unsold cartridges has long been the stuff of urban legend:
For decades, legend had it that Atari put millions of E.T. cartridges in the ground, though some skeptics have wondered whether such an extraordinary event actually happened.
Well, that legend has now been confirmed:
Digging in Alamogordo, New Mexico today, excavators discovered cartridges for the critically-panned Atari game E.T., buried in a landfill way back in 1983 after Atari couldn’t figure out what else to do with their unsold copies… Today, it’s official. They’ve found E.T.’s home—though it’s unclear whether there are really millions or even thousands of copies down there.
Click the link above for photos of E.T. in New Mexico.