and an exaggerated headline that you don't agree with, it's easy to blame fake news.
And while it might seem like nothing more than a meme, fake news is a real thing.
Misinformation spreads across the Internet like wildfire, and might even spread more quickly than real news.
But why? Well, it all comes down to human psychology and how our brains deal with new information.
Scientists have been studying the cognitive basis of believing false information for a long time.
Going back to the 1970s, research has looked at how people view new information
that goes against things that they've already been told.
For example, in one 1975 study,
high school and college students were asked to compare two suicide notes and identify the real one.
Most students did okay; they could sometimes identify the real note, but not all the time.
But then some students were told that they were either really good or really bad at the task.
Later, the researchers revealed that they had lied and clarified that everyone just did okay.
But students still thought that they were better or worse at the task than they actually were;
the lies stuck with them.
Along with other studies on everything from economic decision-making to medical information,
all this research shows that humans aren't very logical.
Even when we're given information that should adjust our beliefs,
like learning something is a straight-up lie,
it's hard for us to let go of how we initially feel about a person or a situation.

A possible factor in this is confirmation bias:
we tend to be more convinced by ideas that support our beliefs,
while opposing information doesn't seem so trustworthy.
This is partially because of what psychologists call motivated reasoning.
Basically, we're motivated to reach conclusions that we want to reach.
Like, if you're diagnosed with a nasty health condition,
you're more motivated to find reasons why the test might be wrong than reasons to agree with it.
It's better for you if you're not actually sick.
On top of that, we tend to believe that our views are correct and other people are wrong,
especially if their views disagree with ours.
This is called naive realism and makes it hard for us to separate facts and opinions.
These psychological patterns show that our brains can be pretty easily led astray by misinformation.
But this doesn't completely explain how and why fake news goes viral.
Part of the problem might be the fact that, according to a pretty comprehensive survey,
over half of U.S. adults get at least some news from social media.
There's so much information, and it can be hard to tell which sources are credible.
Like, a blog post about how coconut water makes you live longer
probably isn't fact-checked like a scientific press release, but it might make for a viral tweet.
And social media companies are motivated to promote whatever gets the most traffic and attention,
which isn't necessarily what's true.
This could also play into the illusory truth effect:
the idea that we tend to believe information we're exposed to repeatedly, whether or not it's true.
A 2016 study at Yale, shared on the open-access platform SSRN, tested for this effect.
They exposed participants to both real and fake news headlines
and then distracted them with demographic questions about themselves.
Later in the same session or after a week, participants were presented with more headlines.
And they rated stories they'd seen before as more accurate, even pretty implausible ones.
For example, one of the fake headlines was about a nationwide ban on all TV shows with gay relationships.
This effect kind of makes sense: when a whole bunch of people keep talking about the same story,
it seems to have more credibility than if one random dude was shouting it on a street corner.
But these headlines have to be shared for a reason.
And fake news actually seems to be shared more than real news.
A study published in Science in March 2018 looked at how fake news is spread using social media,
with a massive longitudinal data set following Twitter stories from 2006 through 2017.
It included over 125,000 stories shared by around 3 million users,
and found that fake news spreads farther and faster than the truth.
This was especially true for political news, compared to other categories like scientific or economic news.
These results go along with data from Buzzfeed in 2016,
showing that false stories were shared on Facebook more than true stories
during the few months leading up to the presidential election that year.
And, importantly, this Science study found that fake stories are being shared by real people, not by bot software.
So it's not as simple as blaming Twitter bots for spreading misinformation.
The authors think that the novelty of false headlines could partially explain this trend.
Maybe so many people are retweeting fake news because it's more surprising and interesting than real news.
In other psychology studies on viral content,
the stories that people were more likely to share were ones that made them feel more emotionally charged,
either positively or negatively. So this idea fits with that pattern.
Now, all of this is pretty...intense.
So how can we resist the influence of false news if it's everywhere and spreads so easily?
Well, one idea is to tag headlines with warnings if third-party fact checkers have found them to be dubious.
In the 2016 Yale study, this significantly reduced the chances that a headline was perceived as accurate,
even if participants saw it a couple times.
Another tactic is to look more carefully at news sources.
It's easy to lean into confirmation bias when you're arguing with your Aunt Sue on Facebook
and looking up sources to back up what you already think.
But you can dig deeper into news outlets and authors to understand things
like what biases they might have or how they did their research, and think about sources more critically.
And finally, when you see a surprising headline,
take a second to reflect on it before you click on the "share," "retweet," "show all your friends" buttons.
Shocking stories might seem important to amplify, but they're not necessarily true.
The Internet is a tricky place to navigate these days, I know,
but there are ways to handle misinformation.
And if enough people and companies keep working on this kind of transparency and critical thinking,
it might help to turn the tide.
Thanks for watching this episode of SciShow, which is produced by Complexly,
a group of people who believe the more we learn about the world, the better we are at being humans.
If you want to learn even more about how media affects how we think and act,
we would like you to check out our show, Crash Course Media Literacy, at youtube.com/crashcourse.