
TED Academy | The Era of Blind Faith in Big Data Must End

小芳老师 2020-09-18


About the talk: 2017 | Algorithms decide who gets a loan, who gets a job interview, who gets insurance and much more -- but they don't automatically make things fair. Mathematician and data scientist Cathy O'Neil coined a term for algorithms that are secret, important and harmful: "weapons of math destruction." Learn more about the hidden workings behind these formulas in this talk.


https://v.qq.com/txp/iframe/player.html?width=500&height=375&auto=0&vid=u0547kuqvco

Transcript

Algorithms are everywhere. They sort and separate the winners from the losers. The winners get the job or a good credit card offer. The losers don't even get an interview, or they pay more for insurance. We're being scored with secret formulas that we don't understand, that often don't have systems of appeal. That begs the question: What if the algorithms are wrong?


To build an algorithm you need two things: you need data, what happened in the past, and a definition of success, the thing you're looking for and often hoping for. You train an algorithm by looking, figuring out. The algorithm figures out what is associated with success. What situation leads to success?
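
To make those two ingredients concrete, here is a minimal, hypothetical sketch (scikit-learn is assumed; the data and the success labels are invented for illustration, not taken from the talk):

```python
# A toy version of the two ingredients: historical data, plus a
# human-chosen definition of success encoded as labels.
from sklearn.linear_model import LogisticRegression

# Historical data: each row is a past case (two made-up features).
X_past = [[5, 1], [3, 0], [8, 1], [2, 0], [7, 1], [4, 0]]

# The definition of success is an opinion: label 1 marks whatever
# the algorithm's owner decided to count as "success".
y_success = [1, 0, 1, 0, 1, 0]

# "Training" means finding the patterns associated with that definition.
model = LogisticRegression().fit(X_past, y_success)

# New cases are then scored by their resemblance to past "successes".
print(model.predict([[6, 1]]))  # -> [1]
```

Everything here hinges on who chose X_past and who chose y_success -- which is exactly the point the talk makes next.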


Actually, everyone uses algorithms. They just don't formalize them in written code. Let me give you an example. I use an algorithm every day to make a meal for my family. The data I use is the ingredients in my kitchen, the time I have, the ambition I have, and I curate that data. I don't count those little packages of ramen noodles as food.


My definition of success is: a meal is successful if my kids eat vegetables. It's very different from if my youngest son were in charge. He'd say success is if he gets to eat lots of Nutella. But I get to choose success. I am in charge. My opinion matters. That's the first rule of algorithms.


Algorithms are opinions embedded in code. It's really different from what you think most people think of algorithms. They think algorithms are objective and true and scientific. That's a marketing trick. It's also a marketing trick to intimidate you with algorithms, to make you trust and fear algorithms because you trust and fear mathematics. A lot can go wrong when we put blind faith in big data.


This is Kiri Soares. She's a high school principal in Brooklyn. In 2011, she told me her teachers were being scored with a complex, secret algorithm called the "value-added model." I told her, "Well, figure out what the formula is, show it to me. I'm going to explain it to you." She said, "Well, I tried to get the formula, but my Department of Education contact told me it was math and I wouldn't understand it."


It gets worse. The New York Post filed a Freedom of Information Act request, got all the teachers' names and all their scores, and they published them as an act of teacher-shaming. When I tried to get the formulas, the source code, through the same means, I was told I couldn't. I was denied. I later found out that nobody in New York City had access to that formula. No one understood it. Then someone really smart got involved, Gary Rubinstein. He found 665 teachers from that New York Post data that actually had two scores. That could happen if they were teaching seventh grade math and eighth grade math. He decided to plot them. Each dot represents a teacher.


What is that?



That should never have been used for individual assessment. It's almost a random number generator.


But it was. This is Sarah Wysocki. She got fired, along with 205 other teachers, from the Washington, DC school district, even though she had great recommendations from her principal and the parents of her kids.


I know what a lot of you guys are thinking, especially the data scientists, the AI experts here. You're thinking, "Well, I would never make an algorithm that inconsistent." But algorithms can go wrong, even have deeply destructive effects with good intentions. And whereas an airplane that's designed badly crashes to the earth and everyone sees it, an algorithm designed badly can go on for a long time, silently wreaking havoc.


This is Roger Ailes.



He founded Fox News in 1996. More than 20 women complained about sexual harassment. They said they weren't allowed to succeed at Fox News. He was ousted last year, but we've seen recently that the problems have persisted. That begs the question: What should Fox News do to turn over another leaf?


Well, what if they replaced their hiring process with a machine-learning algorithm? That sounds good, right? Think about it. The data, what would the data be? A reasonable choice would be the last 21 years of applications to Fox News. Reasonable. What about the definition of success?


A reasonable choice would be, well, who is successful at Fox News? I guess someone who, say, stayed there for four years and was promoted at least once. Sounds reasonable. And then the algorithm would be trained. It would be trained to look for people to learn what led to success, what kind of applications historically led to success by that definition. Now think about what would happen if we applied that to a current pool of applicants. It would filter out women, because they do not look like people who were successful in the past.
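
As a rough illustration (the data, the feature layout, and the gender flag are all invented, not Fox News data), a model trained on such a history learns to reproduce it:

```python
# Hypothetical hiring data: [years_experience, is_woman]. Historically,
# only men met the "successful" definition (stayed four years, promoted).
from sklearn.linear_model import LogisticRegression

X_past = [
    [6, 0], [4, 0], [7, 0], [5, 0],   # men: mostly labeled successful
    [6, 1], [4, 1], [7, 1], [5, 1],   # equally qualified women: labeled not
]
y_success = [1, 1, 1, 0, 0, 0, 0, 0]

model = LogisticRegression().fit(X_past, y_success)

# Two identical applications that differ only in the gender flag:
print(model.predict_proba([[6, 0], [6, 1]])[:, 1])
# The woman's "success" score is lower: the model automated the status quo.
```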


Algorithms don't make things fair if you just blithely, blindly apply algorithms. They don't make things fair. They repeat our past practices, our patterns. They automate the status quo. That would be great if we had a perfect world, but we don't. And I'll add that most companies don't have embarrassing lawsuits, but the data scientists in those companies are told to follow the data, to focus on accuracy. Think about what that means. Because we all have bias, it means they could be codifying sexism or any other kind of bigotry.


Thought experiment, because I like them: an entirely segregated society -- racially segregated, all towns, all neighborhoods -- and where we send the police only to the minority neighborhoods to look for crime. The arrest data would be very biased. What if, on top of that, we found the data scientists and paid the data scientists to predict where the next crime would occur? Minority neighborhood. Or to predict who the next criminal would be? A minority. The data scientists would brag about how great and how accurate their model would be, and they'd be right.
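
A tiny simulation of that thought experiment (all numbers invented) shows why the data scientists would be "right" and the model would still be unjust:

```python
# True behavior is identical everywhere, but arrests -- the only
# recorded data -- happen only where police are sent.
TRUE_OFFENSE_RATE = 0.05  # assumed identical in every neighborhood

patrolled = {"minority_a": True, "minority_b": True,
             "majority_a": False, "majority_b": False}

# Arrest counts reflect patrol placement, not behavior.
arrests = {n: (1000 * TRUE_OFFENSE_RATE if sent else 0)
           for n, sent in patrolled.items()}

# A model "predicting" crime from arrests is perfectly accurate about
# arrests -- and simply echoes the patrol pattern back as a forecast.
predicted_hotspots = [n for n, count in arrests.items() if count > 0]
print(predicted_hotspots)  # ['minority_a', 'minority_b']
```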


Now, reality isn't that drastic, but we do have severe segregations in many cities and towns, and we have plenty of evidence of biased policing and justice system data. And we actually do predict hotspots, places where crimes will occur. And we do predict, in fact, the individual criminality, the criminality of individuals.


The news organization ProPublica recently looked into one of those "recidivism risk" algorithms, as they're called, being used in Florida during sentencing by judges. Bernard, on the left, the black man, was scored a 10 out of 10. Dylan, on the right, 3 out of 10. 10 out of 10, high risk. 3 out of 10, low risk. They were both brought in for drug possession. They both had records, but Dylan had a felony, but Bernard didn't. This matters, because the higher score you are, the more likely you're being given a longer sentence.


What's going on? Data laundering. It's a process by which technologists hide ugly truths inside black box algorithms and call them objective; call them meritocratic. When they're secret, important and destructive, I've coined a term for these algorithms: "weapons of math destruction."


They're everywhere, and it's not a mistake. These are private companies building private algorithms for private ends. Even the ones I talked about for teachers and the public police, those were built by private companies and sold to the government institutions. They call it their "secret sauce" -- that's why they can't tell us about it. It's also private power. They are profiting for wielding the authority of the inscrutable. Now you might think, since all this stuff is private and there's competition, maybe the free market will solve this problem. It won't. There's a lot of money to be made in unfairness.


Also, we're not economic rational agents. We all are biased. We're all racist and bigoted in ways that we wish we weren't, in ways that we don't even know. We know this, though, in aggregate, because sociologists have consistently demonstrated this with these experiments they build, where they send a bunch of applications to jobs out, equally qualified but some have white-sounding names and some have black-sounding names, and it's always disappointing, the results -- always.


So we are the ones that are biased, and we are injecting those biases into the algorithms by choosing what data to collect, like I chose not to think about ramen noodles -- I decided it was irrelevant. But by trusting the data that's actually picking up on past practices and by choosing the definition of success, how can we expect the algorithms to emerge unscathed? We can't. We have to check them. We have to check them for fairness.


The good news is, we can check them for fairness. Algorithms can be interrogated, and they will tell us the truth every time. And we can fix them. We can make them better. I call this an algorithmic audit, and I'll walk you through it.


First, data integrity check. For the recidivism risk algorithm I talked about, a data integrity check would mean we'd have to come to terms with the fact that in the US, whites and blacks smoke pot at the same rate but blacks are far more likely to be arrested -- four or five times more likely, depending on the area. What is that bias looking like in other crime categories, and how do we account for it?
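
As a back-of-the-envelope version of that check (using the talk's figures, and assuming an equal usage rate purely for illustration):

```python
# Equal underlying behavior, unequal enforcement: if arrests become the
# training label, the label is measuring policing, not behavior.
white_use_rate = 0.10      # assumed equal usage rates, per the talk
black_use_rate = 0.10
arrest_multiplier = 4      # blacks arrested ~4-5x as often, per the talk

white_arrest_rate = white_use_rate
black_arrest_rate = black_use_rate * arrest_multiplier

# The bias a data integrity check has to surface and account for:
print(black_arrest_rate / white_arrest_rate)  # -> 4.0
```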


Second, we should think about the definition of success, audit that. Remember the hiring algorithm we talked about? Someone who stays for four years and is promoted once? Well, that is a successful employee, but it's also an employee that is supported by their culture. That said, it can also be quite biased. We need to separate those two things.


We should look to the blind orchestra audition as an example. That's where the people auditioning are behind a sheet. What I want to think about there is the people who are listening have decided what's important and they've decided what's not important, and they're not getting distracted by that. When the blind orchestra auditions started, the number of women in orchestras went up by a factor of five.


Next, we have to consider accuracy. This is where the value-added model for teachers would fail immediately. No algorithm is perfect, of course, so we have to consider the errors of every algorithm. How often are there errors, and for whom does this model fail? What is the cost of that failure?
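
One concrete way to ask "for whom does this model fail?" is to compare error rates group by group; here is a minimal sketch with invented outcomes:

```python
# Each record: (group, predicted_high_risk, actually_reoffended).
from collections import defaultdict

cases = [
    ("A", 1, 0), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 0, 0), ("B", 1, 1), ("B", 0, 1), ("B", 0, 0),
]

false_pos = defaultdict(int)  # flagged high risk, did not reoffend
negatives = defaultdict(int)  # everyone who did not reoffend

for group, predicted, actual in cases:
    if actual == 0:
        negatives[group] += 1
        false_pos[group] += predicted

# The audit question: is the cost of failure borne equally?
for group in sorted(negatives):
    print(group, round(false_pos[group] / negatives[group], 2))
# -> A 0.67, B 0.0: unequal false-positive rates, unequal harm.
```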


And finally, we have to consider the long-term effects of algorithms, the feedback loops they are engendering. That sounds abstract, but imagine if Facebook engineers had considered that before they decided to show us only things that our friends had posted.


I have two more messages, one for the data scientists out there. Data scientists: we should not be the arbiters of truth. We should be translators of ethical discussions that happen in larger society.


And the rest of you, the non-data scientists: this is not a math test. This is a political fight. We need to demand accountability for our algorithmic overlords.


The era of blind faith in big data must end.


Thank you very much.

(Applause)
