Talking Brains, Hardware, and Privacy With Facebook’s AR Guru

LG: Describe that, then. What does that actually look like when you say the consumer's in control and understands? Because my thought is, and we're seeing this from other products as well, for example, the new Google Nest Hub, Google Home Nest Hub, I can't keep track of their naming, but they're doing some of these functions on device, for example, their sleep tracking, which uses this miniaturized radar solution that they've developed in their labs. But then ultimately it's still suggesting that you share that health data or sleep data with Google Fit, which is this cloud-based application, so there is an encouragement for the user to share things.

And then after that, it's this giant shrug emoji, I'd say, as to what happens to it. So ultimately, if someone is using a Facebook Portal, or, let's say, the Facebook AR glasses that may eventually come out, and there are some functions happening on device but there's still data being sent to the server, how do you actually give a consumer control over what happens after that?

AB: Well, it's actually quite straightforward. The execution of it is not, but the idea of it is straightforward. On Facebook, you have access to all the data that Facebook has on you, and you can individually or collectively remove it. And we take that incredibly seriously. That's obviously a matter of policy, global policy at this point. So from a data-control standpoint, it's straightforward. Likewise, from a sharing standpoint, that's always been straightforward: who am I sharing it with? What's the audience? Those two, I think, are pretty well-established types of controls that people have on the internet and on devices.

More generally, though, look, it's a question of, does the consumer want to make that value exchange? This goes back to, man, Microsoft in the early 2000s. You install the software and it says, "Hey, can I share crash and analytics reports with the server to improve the software?" And a lot of people check that box because they're not worried about it. They feel, "Yep, I understand what this is for, how it's going to be used, what the data consists of, and I'm comfortable with it." And it hasn't really been a problem for the 20-some-odd years we've been doing it. That's really commonplace.

If you want, go get a Portal out of the box. If you don't have one, you should, so go ahead, I'll wait. I'm just kidding. When you take a Portal out of the box, part of the flow says, "Hey, when you use our assistant, when you use the wake word," and it's a really explicit thing, "that has to go to the server, because that's where we keep the answers and that's how we understand the question. And not only that, it might be reviewed either by an AI system or even by an employee of the company." It's written right there, and you say, "Yeah, I accept it," or, "Nope, I don't accept that." It's in what we'd call the forced-choice framework, which is the user-design language we use to describe that.

So everyone who gets a Portal and sets one up goes through that decision. So I think it's very clear what it is, why it is, how it's used, and they have a choice to use it or not. Listen, if I could provide the assistant without going to the server, I'd love to, but we don't have the technology. We don't have the capability to do that locally on device. We can't keep the entirety of the world's knowledge on the device. So alas, it's got to go to the server. And by the way, we want to keep improving this thing, so it could end up being heard by contractors. So if you're not comfortable with that, then don't use this feature. Same with Google Fit: if you don't think the cloud thing is useful to you, definitely don't do it.
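To make the forced-choice framework AB describes concrete, here is a minimal sketch of that kind of consent gate. Everything in it is hypothetical; the names and the disclosure text are illustrative, not Portal's actual setup code. The point is only that setup records an explicit yes-or-no with no silent default, and a "no" simply leaves the server-backed assistant switched off.

```python
from dataclasses import dataclass

# Illustrative disclosure text, paraphrasing the flow described above.
DISCLOSURE = (
    "When you use the wake word, your request is sent to our servers, "
    "where it may be reviewed by an AI system or by an employee."
)

@dataclass
class ConsentDecision:
    """The user's explicit choice from the setup flow (hypothetical model)."""
    accepted_server_processing: bool

def forced_choice_setup(ask_user) -> ConsentDecision:
    """Run the forced-choice screen.

    `ask_user` shows the disclosure and returns True ("I accept") or
    False ("I don't accept"). Setup cannot continue until one of the
    two answers is given, so there is no silent default.
    """
    answer = None
    while answer not in (True, False):  # force an explicit choice
        answer = ask_user(DISCLOSURE)
    return ConsentDecision(accepted_server_processing=answer)

def wake_word_enabled(decision: ConsentDecision) -> bool:
    # Declining doesn't disable the device; it only keeps the
    # server-backed assistant feature turned off.
    return decision.accepted_server_processing

if __name__ == "__main__":
    decision = forced_choice_setup(
        lambda text: input(text + " Accept? [y/n] ").strip().lower() == "y"
    )
    print("Assistant enabled:", wake_word_enabled(decision))
```

The design choice AB is pointing at is the last function: the stored consent decision gates the feature, rather than the feature defaulting to on.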