Facebook Doesn’t Have to Be Terrible
And you see some of that in these documents as well, where there might be a really damning internal doc from 2019 about how the algorithm, because of how it amplifies re-shares, for example, inadvertently ended up boosting the bad stuff. And one reason for that is that it was giving too much weight to re-shares and to certain kinds of reactions, but over time you also see the researchers saying, OK, so here's how we're changing the weighting to address those problems.
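To make that concrete, here is a minimal hypothetical sketch of engagement-weighted ranking. The signal names and weights are invented for illustration, not Facebook's actual values, but they show how dialing down re-shares and certain reactions changes what gets boosted.

```python
# Hypothetical sketch of engagement-weighted ranking. The feature names and
# weights below are invented for illustration; they are not Facebook's values.

def score_post(post, weights):
    """Rank a post by a weighted sum of its engagement signals."""
    return sum(weights.get(signal, 0.0) * count
               for signal, count in post.items())

# Original weighting: re-shares and "angry" reactions count for a lot,
# so posts that provoke them get amplified.
old_weights = {"like": 1, "comment": 15, "angry": 5, "reshare": 30}

# Revised weighting: dial down re-shares and anger-style reactions
# to stop inadvertently boosting the inflammatory stuff.
new_weights = {"like": 1, "comment": 15, "angry": 1, "reshare": 5}

outrage_post = {"like": 200, "comment": 50, "angry": 400, "reshare": 300}
print(score_post(outrage_post, old_weights))  # 11950 -> heavily boosted
print(score_post(outrage_post, new_weights))  # 2850  -> much less boosted
```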
So that’s a micro version of this point, which is that Facebook does respond to these controversies, and it does change. Think back to way long ago, 2019: Facebook didn’t do any fact-checking, or did very little intervention for material that was false. It took a pandemic and an election for the attitude to become, holy crap, we have to start doing things that we previously were uncomfortable with, because we didn’t want to be the arbiters of truth, which was a comment that Mark Zuckerberg made in fall 2019. So they change a lot in response to events, in response to shit storms like the one they’re currently standing under. The problem is that this seems to be the only way they change, which is kicking and screaming, to mix metaphors, when they already have egg on their face, and when bad stuff has already happened. So I guess a different way to put the question would be: will the way that they change, change?
LG: Mm-hmm.
MC: And that’s the thing that I’ve been thinking about, because the clear takeaway from Haugen’s testimony is that there’s a lot of bad stuff happening on the platform, and Facebook does not have the resources to spot it and keep it out of public view. So what are the answers? Better automated tools, better AI tools to recognize misinformation, to recognize abuse, things like that, or more humans to spot those things? So more human intervention, or more machine intervention? That seems like it’s a problem of scale, and a problem of building the technology. So how do they respond? What do they do?
GE: So you’re right about that, and I wrote a piece last year saying, stop saying Facebook is too big to moderate, because that’s their excuse for a lot of stuff. It’s like, look, we’ve got billions of users, we can’t moderate all this stuff. It’s like, OK, well, maybe you also don’t get to have 40 billion in profits, or 20 billion in profits, or whatever it is. Companies in many regulated industries have higher revenues than Facebook but lower profits. Why would that be? Oh, because they actually have to spend money to make sure that your car doesn’t blow up.
MC: Yeah.
GE: So, that’s one issue. But I want to emphasize something that Frances has said, in her testimony and in her 60 Minutes interview, and she’s been really clear on this point: she argues pretty compellingly that thinking about how to react to bad stuff and get it off the platform is the wrong framework, and that the much more promising direction is what she calls content-agnostic changes to the algorithm design. This is a familiar idea. The root problem with not just Facebook but other recommender-based social platforms is that they’re designed around the theory that whatever people engage with, or spend time on, is something they value. And it’s good for us too, because it helps us sell more ads when they’re spending more time on our platforms. So let’s just design around that goal.
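One way to picture the distinction is the hypothetical sketch below, which is not Facebook's actual design. The first function scores a post purely on engagement, the theory described above; the second applies a content-agnostic rule of the kind Haugen has pointed to, such as removing one-click re-sharing beyond a certain chain depth. The names and thresholds are invented for illustration.

```python
# Hypothetical contrast between engagement-based ranking and a
# content-agnostic design rule. All names and numbers are illustrative.

def engagement_score(post):
    """Engagement theory: time spent and interactions are treated as value."""
    return post["time_spent_sec"] + 10 * post["interactions"]

def can_one_click_reshare(share_chain_depth, max_depth=2):
    """Content-agnostic rule: past a certain re-share depth, remove the
    one-click share button (the user would have to copy and post manually).
    The rule never inspects what the post says, only how it is spreading."""
    return share_chain_depth <= max_depth

# The first function optimizes for whatever keeps people engaged;
# the second adds friction based purely on propagation structure.
print(can_one_click_reshare(share_chain_depth=1))  # True
print(can_one_click_reshare(share_chain_depth=5))  # False
```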