
TechCrunch article

Google's probe into Russian disinformation finds ad buys, report claims

https://techcrunch.com/2017/10/09/googles-probe-into-russian-disinformation-finds-ad-buys-report-claims/?ncid=rss

Google has found evidence that Russian agents exploited its platforms in an attempt to interfere with the 2016 US election, The Washington Post reports.

Tens of thousands of dollars were reportedly spent by Russian agents on ads intended to spread disinformation across Google products, including its video content platform YouTube, as well as Google Search, Gmail, and the company's DoubleClick ad network.

The newspaper says its report is based on information provided by people familiar with Google's investigation into whether Kremlin-affiliated actors used its platforms to spread disinformation online.

Asked to confirm the report, a Google spokesperson said: "We have strict ads policies in place, including limits on political ad targeting and prohibitions on targeting based on race and religion. We study attempts to abuse our systems, work with researchers and other companies, and will provide assistance to ongoing inquiries."

In other words, Google is not denying the investigation or looking the other way. The statement implies its internal probe has in fact turned up something, though the company is apparently not yet ready to disclose whatever has been unearthed.

Google, Facebook, and Twitter have all been called to testify to a Senate Intelligence Committee on November 1 which is examining how social media platforms may have been used by foreign actors to influence the 2016 US election.

Last month Facebook confirmed Russian agents had utilized its platform in an apparent attempt to sow social division across the U.S. by purchasing $100,000 of targeted advertising (some 3,000+ ads — though the more pertinent question is how far Facebook's platform organically spread the malicious content; Facebook has claimed only around 10M users saw the Russian ads, though others believe the actual figure is likely to be far higher).

CEO Mark Zuckerberg has tried to get out ahead of the incoming political and regulatory tide by announcing, at the start of this month, that the company will make ad buys more transparent — even as the U.S. election agency is running a public consultation on whether to extend political ad disclosure rules to digital platforms.

(And, lest we forget, late last year he entirely dismissed the notion of Facebook influencing the election as “a pretty crazy idea” — words he’s since said he regrets.)

Safe to say, tech’s platform giants are now facing the political grilling of their lives, and on home soil, as well as the prospect of the kind of regulation they’ve always argued against finally being looped around them.

But perhaps their greatest potential danger is the risk of huge reputational damage if users learn to mistrust the information being algorithmically pushed at them — seeing instead something dubious that may even have actively malicious intent.

While much of the commentary around the US election social media probe has, thus far, focused on Facebook, all major tech platforms could well be implicated as paid aides for foreign entities trying to influence U.S. public opinion — or at least any/all whose business entails applying algorithms to order and distribute third party content at scale.

Just a few days ago, for instance, Facebook said it had found Russian ads on its photo sharing platform Instagram, too.

In Google’s case the company controls vastly powerful search ranking algorithms, as well as ordering user generated content on its massively popular video platform YouTube.

And late last year The Guardian suggested Google’s algorithmic search suggestions had been weaponized by an organized far right campaign — highlighting how its algorithms appeared to be promoting racist, nazi ideologies and misogyny in search results.

(Though criticism of tech platform algorithms being weaponized by fringe groups to drive skewed narratives into the mainstream dates back further still — such as to the #Gamergate fallout, in 2014, when we warned that popular online channels were being gamed to drive misogyny into the mainstream media and all over social media.)

Responding to The Guardian’s criticism of its algorithms last year, Google claimed: “Our search results are a reflection of the content across the web. This means that sometimes unpleasant portrayals of sensitive subject matter online can affect what search results appear for a given query. These results don’t reflect Google’s own opinions or beliefs — as a company, we strongly value a diversity of perspectives, ideas and cultures.”

But it looks like the ability of tech giants to shrug off questions and concerns about their algorithmic operations — and how they may be being subverted by hostile entities — has drastically shrunk.

According to the Washington Post, the Russian buyers of Google ads do not appear to be from the same Kremlin-affiliated troll farm which bought ads on Facebook — which it suggests is a sign that the disinformation campaign could be “a much broader problem than Silicon Valley companies have unearthed so far”.

Late last month Twitter also said it had found hundreds of accounts linked to Russian operatives. And the newspaper’s sources claim that Google used developer access to Twitter’s firehose of historical tweet data to triangulate its own internal investigation into Kremlin ad buys — linking Russian Twitter accounts to accounts buying ads on its platform in order to identify malicious spend trickling into its own coffers.

A spokesman for Twitter declined to comment on this specific claim but pointed to a lengthy blog post it penned late last month — on “Russian Interference in 2016 US Election, Bots, & Misinformation”. In that Twitter disclosed that the RT (formerly Russia Today) news network spent almost $275,000 on U.S. ads on Twitter in 2016.

It also said that of the 450 accounts Facebook had shared as part of its review into Russian election interference Twitter had “concluded” that 22 had “corresponding accounts on Twitter” — which it also said had either been suspended (mostly) for spam or were suspended after being identified.

“Over the coming weeks and months, we’ll be rolling out several changes to the actions we take when we detect spammy or suspicious activity, including introducing new and escalating enforcements for suspicious logins, Tweets, and engagements, and shortening the amount of time suspicious accounts remain visible on Twitter while pending confirmation. These are not meant to be definitive solutions. We’ve been fighting against these issues for years, and as long as there are people trying to manipulate Twitter, we will be working hard to stop them,” Twitter added.

As in the case with the political (and sometimes commercial) pressure also being applied on tech platforms to speed up takedowns of online extremism, it seems logical that the platforms could improve internal efforts to thwart malicious use of their tools by sharing more information with each other.

In June Facebook, Microsoft, Google and Twitter collectively announced a new partnership aimed at reducing the accessibility of internet services to terrorists, for instance — dubbing it the Global Internet Forum to Counter Terrorism — and aiming to build on an earlier announcement of an industry database for sharing unique digital fingerprints to identify terrorist content.

But whether some similar kind of collaboration could emerge in future to try to collectively police political spending remains to be seen. Joining forces to tackle the spread of terrorist propaganda online may end up looking trivially easy compared with accurately identifying and publicly disclosing what is clearly a much broader spectrum of politicized content that has, nonetheless, also been created with malicious intent.

According to the New York Times, Russia-bought ads that Facebook has so far handed over to Congress apparently included a diverse spectrum of politicized content, from pages for gun-rights supporters, to those supporting gay rights, to anti-immigrant pages, to pages that aimed to appeal to the African-American community — and even pages for animal lovers.

One thing is clear: Tech giants will not be able to get away with playing down the power of their platforms in public.

Not at the Congress hearing next month. And likely not for the foreseeable future.

Featured Image: Mikhail Metzel/Getty Images