It doesn’t feel like it was very long ago that I was sitting, legs crossed, on the floor of a nondescript office building on University Avenue in Palo Alto, watching Mark Zuckerberg raise his fist with a slight smile and say, “Domination!” as a way of closing out our weekly Friday all-hands meeting. By the time I left the company in 2011 to become a full-time writer, “domination” was a real possibility.
It was 2005 and the whole world lay ahead of Facebook. I was a former English graduate student turned Facebook employee. I started out handling user support issues, then moved into product marketing and management before finally, in 2009, becoming Zuckerberg’s speech and blog writer. While in 2005 the site was growing rapidly among college users and had some of the best engagement numbers any venture capitalist had ever seen, it had yet to be opened to the general public.
That year, Zuckerberg frequently wore a shirt that said “Sloths” on it, signaling an ironic laziness — to him, and to many of us, the idea of “domination” seemed as much a half-serious, half-joking fantasy as a hard-set mission.
Some days I wondered if he was utterly serious. What would “domination” by Facebook, which at the time was a fun social network for university students, look like? Were there any downsides to connecting anyone and everything in the world — and for one company to oversee those connections?
But whenever I had any doubts I would think, “Facebook has to be much bigger than it is now before ‘domination’ is anything to be afraid of.” And with that, I would throw myself back into the work of growing Facebook. That was our shared passion as a company: Scale first, ask questions later.
As Facebook continued to grow, so did my concerns
As Facebook grew exponentially, smashing user milestones month after month, and the idea of domination started to become plausible, I began looking for signs that Zuckerberg and the Facebook culture at large were aware of the potential downsides of our unfettered growth.
My job as Zuckerberg’s speechwriter meant I helped him formulate his internal and external communications, from blog posts to company-wide emails. This meant that not only did I need to understand the full scope of the mission, I needed to be able to argue it eloquently to others and foresee criticism. At the time, I felt that the biggest long-term threat to Facebook’s popularity was, ironically, becoming dominant to the point of overbearing monopoly.
My worry began to deepen when, around this time, Zuckerberg began using the language of states to talk about Facebook’s burgeoning power. “Companies over countries,” he told me once, as we discussed a blog post about Facebook’s goals. “If you want to change the world, the best thing to do is to build a company,” he added.
In an office where “fortune favors the bold” posters were hung prominently, I could see the appeal of such ambition. Unlike a country, a company has unlimited potential to build and grow, I felt, and a social media company that transcended national boundaries could become a meta-society of its own. A line from Thomas Pynchon’s novel The Crying of Lot 49 often came to mind as I contemplated the heady possibility of changing society digitally: “Shall I project a world?”
But the question I was afraid to ask him was this: If we were to achieve our goal, why should the world trust Facebook or Zuckerberg to shape and manage this new global meta-society? Could Zuckerberg, whose control of Facebook’s share structure gives him considerable power, develop the self-awareness and responsibility to manage it?
If my co-workers were asking themselves these same questions, I didn’t see it being discussed on our internal forum pages or in conversations around the office.
Facebook employees are often brilliant and technically gifted. But the atmosphere we inhabited did not encourage asking questions about power, at least publicly. Instead, internal conversations stayed focused on technical and growth questions that could be answered with metrics (how fast are we growing, and what technical roadblocks can we remove?) rather than on introspection.
“We are building a social operating system,” colleagues would say, and it sounded so neutral, so technically unbiased, as if we were building a faster laptop rather than a machine capable of mediating the world’s personal relationships, not to mention political elections, with increasing levels of sophistication.
The culture of Facebook then and, as recent reporting suggests, now is one that — oddly, for a company built on the idea of wide-open communication — was somewhat immune to self-criticism or self-reflection. One might think that a company breaking social media growth records would welcome dialogue about the product’s social impact, since we were essentially building an entirely new social and communications infrastructure.
But even the earliest Facebook privacy scandal, the News Feed launch — where users became incensed about having their Facebook activities suddenly serialized into “stories” overnight — was met not with discussion about how Facebook was rapidly overturning people’s understanding of privacy but with relief that, after a few days of spirited protest, users came back to the fold. This new scandal suggests that not much has changed.
The Cambridge Analytica scandal brought all these questions to a head
If the takeaway from this to outside observers is that Facebook only cares about growth and profit above ethics, that’s not quite accurate. Because to Facebook, growth is an ethical goal unto itself. When Facebook’s technical mission to connect people is imagined as itself a moral good, all efforts in that direction become righteous by definition.
Asking questions about how that mission might go awry, then, may be seen as disloyal. “This is so disappointing, wonder if there is a way to hire for integrity. We are probably focusing on the intelligence part and getting smart people here who lack a moral compass and loyalty,” as one Facebook employee put it recently in an internal discussion. When asked about people who thought Apple was better at privacy, Zuckerberg claimed those customers had “Stockholm syndrome.”
Then the Cambridge Analytica scandal happened, bringing all these questions home to roost. Revelations that Cambridge Analytica used shadily harvested Facebook data to target political ads for the Trump campaign and for the Brexit campaign in the United Kingdom have shaken the company’s public image.
But Cambridge Analytica is a scandal that isn’t a scandal: Everyone who has ever worked on or with the Facebook platform knew that for several years, the platform made data available to third-party developers by design. The scandal is that the world finally understands the ramifications of that state of affairs.
Still, Facebook doesn’t seem to have woken up to this moral quandary, at least not in any way visible to the watching world. Perhaps the reported internal rumblings at Facebook are evidence that some moral self-reflection is developing; on the other hand, reported employee comments likening anyone who questions the company or leaks information to “wife beaters and suicide bombers” suggest that the cultural atmosphere that abhors self-criticism is alive and well.
There are signs of hope. Facebook has reportedly taken steps to make anonymized data sets available to researchers to understand how election interference happens on Facebook. It may be that Zuckerberg is finally reflecting on his company’s mission and moral responsibility. Facebook users, investors, and, I would argue, employees should demand that he do so.
But he can’t do it alone. Every person who works at Facebook should also be looking in the mirror, thinking about what the company’s dominance means, and asking the hard questions. In the early days of Facebook, it was easy to dream big and hope for the best even while “moving fast and breaking things”; in many ways, we were more successful than anyone who worked there back then could have imagined. But now that market dominance has been achieved and its risks are readily apparent, it’s time to set real limits on Facebook’s power.
Kate Losse is a writer based in California. She is the author of The Boy Kings: A Journey Into the Heart of the Social Network, which details the early culture of Facebook during her time there. Facebook declined to comment about the anecdotes in her book.
First Person is Vox’s home for compelling, provocative narrative essays. Do you have a story to share? Read our submission guidelines, and pitch us at firstperson@vox.com.