So when people voice fears of artificial intelligence, very often, they invoke images of humanoid robots run amok. You know? Terminator? You know, that might be something to consider, but that's a distant threat. Or, we fret about digital surveillance with metaphors from the past. "1984," George Orwell's "1984," it's hitting the bestseller lists again. It's a great book, but it's not the correct dystopia for the 21st century. What we need to fear most is not what artificial intelligence will do to us on its own, but how the people in power will use artificial intelligence to control us and to manipulate us in novel, sometimes hidden, subtle and unexpected ways. Much of the technology that threatens our freedom and our dignity in the near-term future is being developed by companies in the business of capturing and selling our data and our attention to advertisers and others: Facebook, Google, Amazon, Alibaba, Tencent.
Now, artificial intelligence has started bolstering their business as well. And it may seem like artificial intelligence is just the next thing after online ads. It's not. It's a jump in category. It's a whole different world, and it has great potential. It could accelerate our understanding of many areas of study and research. But to paraphrase a famous Hollywood philosopher, "With prodigious potential comes prodigious risk."
Now let's look at a basic fact of our digital lives, online ads. Right? We kind of dismiss them. They seem crude, ineffective. We've all had the experience of being followed on the web by an ad based on something we searched or read. You know, you look up a pair of boots and for a week, those boots are following you around everywhere you go. Even after you succumb and buy them, they're still following you around. We're kind of inured to that kind of basic, cheap manipulation. We roll our eyes and we think, "You know what? These things don't work." Except, online, the digital technologies are not just ads. Now, to understand that, let's think of a physical world example. You know how, at the checkout counters at supermarkets, near the cashier, there's candy and gum at the eye level of kids? That's designed to make them whine at their parents just as the parents are about to sort of check out. Now, that's a persuasion architecture. It's not nice, but it kind of works. That's why you see it in every supermarket. Now, in the physical world, such persuasion architectures are kind of limited, because you can only put so many things by the cashier. Right? And the candy and gum, it's the same for everyone, even though it mostly works only for people who have whiny little humans beside them. In the physical world, we live with those limitations.
In the digital world, though, persuasion architectures can be built at the scale of billions and they can target, infer, understand and be deployed at individuals one by one by figuring out your weaknesses, and they can be sent to everyone's phone private screen, so it's not visible to us. And that's different. And that's just one of the basic things that artificial intelligence can do.
Now, let's take an example. Let's say you want to sell plane tickets to Vegas. Right? So in the old world, you could think of some demographics to target based on experience and what you can guess. You might try to advertise to, oh, men between the ages of 25 and 35, or people who have a high limit on their credit card, or retired couples. Right? That's what you would do in the past.
With big data and machine learning, that's not how it works anymore. So to imagine that, think of all the data that Facebook has on you: every status update you ever typed, every Messenger conversation, every place you logged in from, all your photographs that you uploaded there. If you start typing something and change your mind and delete it, Facebook keeps those and analyzes them, too. Increasingly, it tries to match you with your offline data. It also purchases a lot of data from data brokers. It could be everything from your financial records to a good chunk of your browsing history. Right? In the US, such data is routinely collected, collated and sold. In Europe, they have tougher rules.
So what happens then is, by churning through all that data, these machine-learning algorithms -- that's why they're called learning algorithms -- they learn to understand the characteristics of people who purchased tickets to Vegas before. When they learn this from existing data, they also learn how to apply this to new people. So if they're presented with a new person, they can classify whether that person is likely to buy a ticket to Vegas or not. Fine. You're thinking, an offer to buy tickets to Vegas. I can ignore that. But the problem isn't that. The problem is, we no longer really understand how these complex algorithms work. We don't understand how they're doing this categorization. It's giant matrices, thousands of rows and columns, maybe millions of rows and columns, and not the programmers and not anybody who looks at it, even if you have all the data, understands how exactly it's operating, any more than you'd know what I was thinking right now if you were shown a cross section of my brain. It's like we're not programming anymore, we're growing intelligence that we don't truly understand.
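The learn-from-past-buyers, apply-to-new-people step described above can be sketched in a few lines. This is a deliberately toy illustration, not anything a real platform runs: the trait names and example data are invented, and real systems fit millions of opaque parameters rather than a handful of per-trait scores.

```python
# Toy sketch of the classification step the talk describes: learn the
# characteristics of past ticket buyers, then score new people.
# All trait names and data below are invented for illustration.

def train(examples):
    """For each trait, record how much more often it appears among
    buyers than among non-buyers (a crude stand-in for learned weights)."""
    buyers = [traits for traits, bought in examples if bought]
    others = [traits for traits, bought in examples if not bought]
    all_traits = {t for traits, _ in examples for t in traits}
    weights = {}
    for t in all_traits:
        p_buy = sum(t in traits for traits in buyers) / max(len(buyers), 1)
        p_not = sum(t in traits for traits in others) / max(len(others), 1)
        weights[t] = p_buy - p_not
    return weights

def likely_buyer(weights, traits, threshold=0.0):
    """Classify a new person by summing the weights of their traits."""
    return sum(weights.get(t, 0.0) for t in traits) > threshold

past = [
    ({"age_25_35", "high_credit_limit"}, True),
    ({"age_25_35", "frequent_flyer"}, True),
    ({"retired", "low_activity"}, False),
    ({"retired", "high_credit_limit"}, False),
]
w = train(past)
print(likely_buyer(w, {"age_25_35", "frequent_flyer"}))  # shares traits with past buyers
```

Even in this tiny version, the weights are just numbers with no explanation attached, which hints at the opacity problem: at realistic scale, nobody can read the matrix and say why a given person was classified as a likely buyer.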
And these things only work if there's an enormous amount of data, so they also encourage deep surveillance on all of us so that the machine learning algorithms can work. That's why Facebook wants to collect all the data it can about you. The algorithms work better.
So let's push that Vegas example a bit. What if the system that we do not understand was picking up that it's easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase? Such people tend to become overspenders, compulsive gamblers. They could do this, and you'd have no clue that's what they were picking up on. I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled and he said, "That's why I couldn't publish it." I was like, "Couldn't publish what?" He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on.
Now, the problem isn't solved if he doesn't publish it, because there are already companies that are developing this kind of technology, and a lot of the stuff is just off the shelf. This is not very difficult anymore.
Do you ever go on YouTube meaning to watch one video and an hour later you've watched 27? You know how YouTube has this column on the right that says, "Up next" and it autoplays something? It's an algorithm picking what it thinks that you might be interested in and maybe not find on your own. It's not a human editor. It's what algorithms do. It picks up on what you have watched and what people like you have watched, and infers that that must be what you're interested in, what you want more of, and just shows you more. It sounds like a benign and useful feature, except when it isn't.
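The "what people like you have watched" inference described above is the core of collaborative filtering, and a minimal sketch fits in a dozen lines. The watch histories and video names here are invented; a real recommender optimizes watch time over far more signals than simple overlap.

```python
# Minimal sketch of "people like you watched X": recommend the unseen
# video most common among users whose histories overlap mine.
# Histories and video names are invented for illustration.

def up_next(my_history, all_histories):
    """Score each video I haven't seen by how much its watchers
    resemble me, then return the top-scoring one."""
    scores = {}
    for history in all_histories:
        overlap = len(my_history & history)
        if overlap == 0:
            continue  # this user is nothing like me; ignore them
        for video in history - my_history:
            scores[video] = scores.get(video, 0) + overlap
    return max(scores, key=scores.get) if scores else None

me = {"rally_clip"}
others = [
    {"rally_clip", "extreme_clip"},
    {"rally_clip", "extreme_clip", "more_extreme_clip"},
    {"cat_video"},
]
print(up_next(me, others))  # prints "extreme_clip"
```

Note what the sketch makes visible: the recommendation is driven entirely by what similar users did next, so if the users who overlap with you drifted toward more extreme material, the algorithm dutifully pulls you the same way.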
So in 2016, I attended rallies of then-candidate Donald Trump to study as a scholar the movement supporting him. I study social movements, so I was studying it, too. And then I wanted to write something about one of his rallies, so I watched it a few times on YouTube. YouTube started recommending to me and autoplaying to me white supremacist videos in increasing order of extremism. If I watched one, it served up one even more extreme and autoplayed that one, too. If you watch Hillary Clinton or Bernie Sanders content, YouTube recommends and autoplays conspiracy left, and it goes downhill from there.
Well, you might be thinking, this is politics, but it's not. This isn't about politics. This is just the algorithm figuring out human behavior. I once watched a video about vegetarianism on YouTube and YouTube recommended and autoplayed a video about being vegan. It's like you're never hardcore enough for YouTube.
(Laughter)
So what's going on? Now, YouTube's algorithm is proprietary, but here's what I think is going on. The algorithm has figured out that if you can entice people into thinking that you can show them something more hardcore, they're more likely to stay on the site watching video after video going down that rabbit hole while Google serves them ads. Now, with nobody minding the ethics of the store, these sites can profile people who are Jew haters, who think that Jews are parasites and who have such explicit anti-Semitic content, and let you target them with ads. They can also mobilize algorithms to find for you look-alike audiences, people who do not have such explicit anti-Semitic content on their profile but who the algorithm detects may be susceptible to such messages, and lets you target them with ads, too. Now, this may sound like an implausible example, but this is real. ProPublica investigated this and found that you can indeed do this on Facebook, and Facebook helpfully offered up suggestions on how to broaden that audience. BuzzFeed tried it for Google, and very quickly they found, yep, you can do it on Google, too. And it wasn't even expensive. The ProPublica reporter spent about 30 dollars to target this category.
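The "look-alike audience" expansion described above can also be sketched: rank users by how much their interest profile overlaps a seed audience's, even when they never expressed the seed interest themselves. The similarity measure (Jaccard overlap) and all profiles are invented stand-ins; ad platforms use far richer models.

```python
# Hedged sketch of look-alike audience expansion: given a seed audience,
# find other users whose profiles overlap it most, even if they lack the
# seed trait. Trait names are neutral placeholders, invented here.

def lookalikes(seed_profiles, candidates, k=1):
    """Return the k candidates whose traits best overlap the seed
    audience's pooled traits (Jaccard similarity)."""
    seed_traits = set().union(*seed_profiles)
    def similarity(profile):
        return len(profile & seed_traits) / len(profile | seed_traits)
    return sorted(candidates, key=similarity, reverse=True)[:k]

seed = [{"topic_a", "forum_x"}, {"topic_a", "channel_y"}]
candidates = [
    {"forum_x", "channel_y"},    # never expressed topic_a, but adjacent
    {"gardening", "cooking"},
]
print(lookalikes(seed, candidates))
```

The first candidate wins despite never having expressed the seed interest, which is exactly the point of the talk's example: the algorithm surfaces people merely *susceptible* to a message, not just those who already voiced it.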
So last year, Donald Trump's social media manager disclosed that they were using Facebook dark posts to demobilize people, not to persuade them, but to convince them not to vote at all. And to do that, they targeted specifically, for example, African-American men in key cities like Philadelphia, and I'm going to read exactly what he said. I'm quoting.
They were using "nonpublic posts whose viewership the campaign controls so that only the people we want to see it see it. We modeled this. It will dramatically affect her ability to turn these people out."
What's in those dark posts? We have no idea. Facebook won't tell us.
So Facebook also algorithmically arranges the posts that your friends put on Facebook, or the pages you follow. It doesn't show you everything chronologically. It puts the order in the way that the algorithm thinks will entice you to stay on the site longer.
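The reordering described above reduces to one idea: chronology is discarded in favor of a predicted-engagement score. In this sketch the scores are made-up inputs; a real feed model predicts them from thousands of signals per post.

```python
# Sketch of feed reordering: posts ranked by a predicted-engagement
# score rather than by time. Scores here are invented inputs; real
# systems compute them from many behavioral signals.

def rank_feed(posts):
    """posts: list of (timestamp, engagement_score) pairs.
    Chronological order is discarded; the post the model thinks will
    keep you on the site longest comes first."""
    return sorted(posts, key=lambda post: post[1], reverse=True)

feed = [(1, 0.2), (2, 0.9), (3, 0.4)]  # (time, predicted engagement)
print(rank_feed(feed))  # the newest post is no longer first
```

The consequence the talk draws follows directly from this one line of sorting: a low-scoring post may simply never surface, regardless of when it was written.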
Now, so this has a lot of consequences. You may be thinking somebody is snubbing you on Facebook. The algorithm may never be showing your post to them. The algorithm is prioritizing some of them and burying the others.
Experiments show that what the algorithm picks to show you can affect your emotions. But that's not all. It also affects political behavior. So in 2010, in the midterm elections, Facebook did an experiment on 61 million people in the US that was disclosed after the fact. So some people were shown, "Today is election day," the simpler one, and some people were shown the one with that tiny tweak with those little thumbnails of your friends who clicked on "I voted." This simple tweak. OK? So the pictures were the only change, and that post shown just once turned out an additional 340,000 voters in that election, according to this research as confirmed by the voter rolls. A fluke? No. Because in 2012, they repeated the same experiment. And that time, that civic message shown just once turned out an additional 270,000 voters. For reference, the 2016 US presidential election was decided by about 100,000 votes. Now, Facebook can also very easily infer what your politics are, even if you've never disclosed them on the site. Right? These algorithms can do that quite easily. What if a platform with that kind of power decides to turn out supporters of one candidate over the other? How would we even know about it?
Now, we started from someplace seemingly innocuous -- online ads following us around -- and we've landed someplace else. As a public and as citizens, we no longer know if we're seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we're just at the beginning stages of this. These algorithms can quite easily infer things like people's ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender, just from Facebook likes. These algorithms can identify protesters even if their faces are partially concealed. These algorithms may be able to detect people's sexual orientation just from their dating profile pictures.
Now, these are probabilistic guesses, so they're not going to be 100 percent right, but I don't see the powerful resisting the temptation to use these technologies just because there are some false positives, which will of course create a whole other layer of problems. Imagine what a state can do with the immense amount of data it has on its citizens. China is already using face detection technology to identify and arrest people. And here's the tragedy: we're building this infrastructure of surveillance authoritarianism merely to get people to click on ads. And this won't be Orwell's authoritarianism. This isn't "1984." Now, if authoritarianism is using overt fear to terrorize us, we'll all be scared, but we'll know it, we'll hate it and we'll resist it. But if the people in power are using these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels, to deploy persuasion architectures at scale and to manipulate individuals one by one using their personal, individual weaknesses and vulnerabilities, and if they're doing it at scale through our private screens so that we don't even know what our fellow citizens and neighbors are seeing, that authoritarianism will envelop us like a spider's web and we may not even know we're in it.
So Facebook's market capitalization is approaching half a trillion dollars. It's because it works great as a persuasion architecture. But the structure of that architecture is the same whether you're selling shoes or whether you're selling politics. The algorithms do not know the difference. The same algorithms set loose upon us to make us more pliable for ads are also organizing our political, personal and social information flows, and that's what's got to change.
Now, don't get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I've written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it's not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it's not the intent or the statements people in technology make that matter, it's the structures and business models they're building. And that's the core of the problem. Either Facebook is a giant con of half a trillion dollars and ads don't work on the site, it doesn't work as a persuasion architecture, or its power of influence is of great concern. It's either one or the other. It's similar for Google, too.
So what can we do? This needs to change. Now, I can't offer a simple recipe, because we need to restructure the whole way our digital technology operates. Everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to face and try to deal with the lack of transparency created by the proprietary algorithms, the structural challenge of machine learning's opacity, all this indiscriminate data that's being collected about us. We have a big task in front of us. We have to mobilize our technology, our creativity and yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won't be easy. We might not even easily agree on what those terms mean. But if we take seriously how these systems that we depend on for so much operate, I don't see how we can postpone this conversation anymore. These structures are organizing how we function and they're controlling what we can and we cannot do. And many of these ad-financed platforms, they boast that they're free. In this context, it means that we are the product that's being sold. We need a digital economy where our data and our attention are not for sale to the highest-bidding authoritarian or demagogue.
(Applause)
So to go back to that Hollywood paraphrase, we do want the prodigious potential of artificial intelligence and digital technology to blossom, but for that, we must face this prodigious menace, open-eyed and now.
Thank you.
(Applause)