Maryland one of three states most targeted by Russian ads in 2016

Sen. Richard Burr, R-N.C., chairs a Senate Intelligence Committee hearing Wednesday. From committee video

By Changez Ali and J.F. Meils

Capital News Service

WASHINGTON – Heavily Democratic Maryland was one of the three states most targeted by Russian ads during the 2016 presidential election, the chairman of the Senate Intelligence Committee said Wednesday.

Sen. Richard Burr, R-North Carolina, used the example in an apparent effort to diminish the perception that Russian ads may have had an impact on the election of fellow Republican Donald Trump to the presidency.

“The (current) narrative here is that ads linked to Russia were targeted at pivotal states and directly influenced the election outcome,” Burr said, opening a hearing on the influence of social media on last year’s election. “What you haven’t heard is that almost five times more ads were targeted at the state of Maryland than of Wisconsin.”

Maryland was targeted by 262 ads, compared with 55 in Wisconsin, in the lead-up to the 2016 election, according to Burr. Maryland was won easily by Democrat Hillary Clinton, while Wisconsin went narrowly for Trump. The latter state was one of the keys to Trump’s Electoral College victory.

“With the 2018 midterm elections just around the corner, Congress must come together and advance bipartisan reforms to prevent foreign agents from undermining our electoral process,” said Rep. John Sarbanes, D-Towson, who chairs the Democracy Reform Task Force.

Fake Black Lives Matter ad

One of the Russian-linked messages to appear in Maryland was a fake Black Lives Matter ad allegedly aimed at disenfranchising African-American voters in Baltimore, according to reporting by CNN and the Baltimore Sun.

“Some ads targeted users in Ferguson, Baltimore and Cleveland,” Sen. Chuck Grassley, R-Iowa, told a Senate Judiciary crime and terrorism subcommittee hearing on Tuesday. “These ads spread stories about abuse of black Americans by law enforcement. These ads are clearly intended to worsen racial tensions and possible violence in those cities.”

Although the exact content of the fake Black Lives Matter ad used in Baltimore has yet to be made public, it has been described as expressing support for the Black Lives Matter movement while also implying that the group represented a threat to others, presumably white people.

The ad was one of more than 3,000 linked to Russian entities that appeared on Facebook between June 2015 and May 2017 and have been turned over to Congress.

Senators attack social media giants

At Wednesday’s intelligence panel hearing, senators from both parties lashed out at lawyers from social media companies Facebook and Twitter and Internet giant Google on the second day that tech company executives faced questions about what the industry intends to do to block fake Russian ads and accounts in future elections.

Facebook’s general counsel, Colin Stretch, revealed on Tuesday that Russian-linked accounts delivered content to more than 126 million Americans in the lead-up to and aftermath of the 2016 election.

“Many of these (Russia-linked) ads and posts are inflammatory, some are downright offensive,” Stretch told lawmakers. “And much of it will be particularly painful to communities that engaged with this content believing it to be authentic. They have every right to expect more of us.”

Some senators put the challenge facing these companies and the American public in starker terms.

“What we’re talking about is a cataclysmic change. What we’re talking about is the beginning of cyber warfare,” said Sen. Dianne Feinstein, D-California. “What we’re talking about is a major foreign power with the sophistication and ability to involve themselves in an election and sow conflict and discontent all over this country.”

The crux of the issue going forward is not just how to secure vulnerable social media platforms but who will do it – the government or the companies themselves.

“We believe as a user-generated platform, the rules around section 230 (of the Communications Decency Act) provide a platform to our users around free speech and expression and don’t require us to take a bias on removing content that we fear will violate certain rights,” said Sean Edgett, Twitter’s acting general counsel.

Another issue with which lawmakers and tech companies must wrestle is how to manage unverified or anonymous content without also regulating speech.

“We don’t want to put ourselves in the position of being the arbiter of truth. We don’t think that’s a tenable position for any company,” Stretch told the House Intelligence Committee on Tuesday.

But some in government feel there is a way for social media companies to do just that.

“It’s not actually news. These are stories that are placed by people with … malicious intent,” Ellen Weintraub, a member of the Federal Election Commission, told Capital News Service. “That’s not news. That’s people trying to cause trouble and what I want to know is are they U.S. people or are they foreign people.”

“When you get information on the internet there’s really no way of knowing where it’s coming from if it doesn’t carry some kind of disclaimer,” Weintraub added.

Striving for transparency

A bipartisan bill introduced in the Senate would require political ads on social media to be subject to the same transparency laws as advertisements on television and radio.

The Honest Ads Act is sponsored by Democratic Sens. Amy Klobuchar of Minnesota and Mark Warner of Virginia, and is cosponsored by Sen. John McCain, R-Arizona.

“We’re simply asking the companies to make a reasonable attempt so that if that ad is being paid for by a foreign agent, that they will try to reveal that foreign agent,” Warner said in an interview with NPR.

Weintraub supports the legislation. “I think we need to get better disclosure,” she said. “I think there’s some of that we can do by regulation.”

How technology companies or the government will actually regulate content or determine what can be regulated is an open question.

All three companies outlined efforts already underway or planned to combat not just political content but malicious social content as well. These included various forms of transparency reporting for ads, additional verification for advertisers, adjusted algorithms to spot fake news and more staff to manually review sensitive content.

Some lawmakers were cautious about committing to a specific remedy.

“For every complex problem, there is a very clear, simple and wrong answer and so we need to be very careful, I think, in how we deal with this,” said Sen. John Cornyn, R-Texas.

