In Sheera Frenkel and Cecilia Kang's "An Ugly Truth," the authors recount the company's long history of insider threats in which employees (mostly men) used the company's tools to stalk people (mostly women).
The stalking targets included both strangers and intimate partners - for example, an engineer used FB's tools to locate his partner after she fled their shared vacation hotel room in order to "confront her."
Another FB engineer stalked a woman who didn't return his messages after a date, accessing years of private messages and photos, including photos that his target believed she had permanently deleted, but which Facebook had secretly retained.
All told, Facebook fired 52 employees for data abuses between Jan 2014 and Aug 2015, after a policy change eliminated many access safeguards in the name of eliminating "the red tape that slowed down engineers."
In other words, Facebook was in a situation in which its users' interests were at odds with its shareholders'. By eliminating protections for its users, it allowed its engineers to work more efficiently, and increased its profits.
These kinds of conflicts - between shareholder and stakeholder interests - are the norm in business. Think of a busy retailer that cuts its cashiers: reducing payroll costs increases profits, at the expense of worker stress and longer waits for customers.
The question of how much value can be shifted from employees and customers to shareholders isn't really an economic one - it's really a *policy* question.
If we have strong labor laws - protecting cashiers from undue stress, extending unemployment benefits to workers who quit bad jobs, protecting workers from non-compete clauses, separating health-care from employment - then a business that screws its cashiers will lose them.
Or if the business has a regulated monopoly - a patent, a trademark or some other exclusive right that makes it the only game in town (say, the sole right to sell snacks in an airport) - it can shift more value from customers to shareholders before the customers walk away.
Facebook - and other tech monopolists - have engineered a world where they get to side with shareholders over users, again and again, to the users' great detriment, without losing those users.
Economic analysis of tech monopolies focuses on "network effects" - the way more users make Facebook more valuable (you join FB because your friends are there, more friends sign up because *you're* there).
Taken on their own, network effects are cause for despair, predicting that tech will produce "natural monopolies" - an inevitable winner-take-all market. But that's obviously not true - I'm not typing this on a Cray or using Altavista to look up facts while I do.
Far more important than network effects for antimonopoly analysis are *switching costs* - the things you give up when you quit a service. In FB's case, quitting means leaving behind your friends, communities and customers.
Now, this needn't be the case. You can switch phone companies or email providers without shattering your social connections. FB has engineered a high switching cost, blocking other services from connecting to it.
After all, the more you stand to lose by leaving FB, the worse FB can treat you before you're willing to leave. Zuck didn't abolish the safeguards that protected us from rogue FB employees because he's nosy - he did it because it's profitable.
He was betting (probably correctly) that no matter how unhappy the ensuing scandals made his users, they wouldn't make enough users unhappy enough to quit for the losses to outweigh the gains from exposing us to predatory Facebookers.
Which is why proposals like the ACCESS Act, currently working its way through Congress, are such a big deal. It's a bill that would force FB to let third parties plug into it, so you could leave FB but stay in touch with the people who stay behind.
In response to this (and the EU's Digital Markets Act and Digital Services Act), FB (and some lawmakers) warned that allowing third parties to connect to monopoly platforms would expose users to privacy risks, by reducing tech companies' control over their services.
There's an element of truth to this, but left unsaid is that reducing the switching costs for leaving Facebook will protect users *from Facebook*.
When FB says that it needs total control over its servers or Cambridge Analytica will steal our data, we have to remember that FB *already* let Cambridge Analytica steal our data.
When FB gutted its internal controls to increase profitability by decreasing user protections, senior employees went to Mark Zuckerberg to warn him against it. Alex Stamos, FB's then-CSO, reportedly objected strenuously, but was personally overruled by Zuck.
Contacted by Insider's Sarah Jackson, an anonymous FB spokesperson (literally the only kind of person FB allows anonymity to!) said "We've always had zero tolerance for abuse and have fired every single employee ever found to be improperly accessing data."
But that's a dodge. "Zero tolerance" isn't the same as "top priority."
I am skeptical that FB will *ever* be a trustworthy guardian of its users' safety and privacy. I don't think the problem is that Mark Zuckerberg is the wrong self-appointed czar of 3B people's lives - I think the problem is that no one should have that job.
But if you disagree - if you want to fix, rather than abolish, Facebook - then you need to figure out which policies will tip the balance in favor of the public interest.
Policies like interop impose immediate, meaningful, concrete costs on FB every time it sides against the public and with its shareholders. If you're worried that interop will expose FB users to rogue companies or state actors, then regulate who can connect to FB.
But don't leave that up to FB. FB will side with shareholders over users whenever it's profitable to do so. Putting FB in charge of interoperability shifts that balance dramatically in favor of shareholders and against the public.