Twitter has sharply escalated its battle against fake and suspicious accounts, suspending more than one million accounts a day in recent months – a major shift intended to lessen the flow of disinformation on the platform, according to data obtained by The Washington Post.
The rate of account suspensions, which Twitter confirmed to the Post, has more than doubled since October, when the company, under congressional pressure, revealed how Russia used fake accounts to manipulate the US presidential election. Twitter suspended more than 70 million accounts in May and June, and the pace has continued in July, according to the data.
The aggressive removal of unwanted accounts may result in a rare decline in the number of monthly users in the second quarter, which ended last week, according to a person familiar with the situation who was not authorised to speak. Twitter declined to comment on a possible decline in its user base.
Twitter’s growing campaign against bots and trolls – coming despite the risk to the company’s user growth – is part of the ongoing fallout from Russia’s disinformation offensive during the 2016 presidential campaign, when a St. Petersburg-based troll factory was able to use some of America’s most prominent technology platforms to deceive voters on a mass scale and exacerbate social and political tensions.
The extent of account suspensions, which has not previously been reported, is one of several recent moves by Twitter to limit the influence of people it says are abusing its platform. The changes, which were the subject of internal debate, reflect a philosophical shift for Twitter. Its executives long resisted policing misbehaviour more aggressively, for a time even referring to themselves as “the free speech wing of the free speech party.”
Del Harvey, Twitter’s vice president for trust and safety, said in an interview this week that the company is changing the calculus between promoting public discourse and preserving safety. She added that Twitter only recently was able to dedicate the resources and develop the technical capabilities needed to target malicious behaviour in this way.
“One of the biggest shifts is in how we think about balancing free expression versus the potential for free expression to chill someone else’s speech,” Harvey said. “Free expression doesn’t really mean much if people don’t feel safe.”
But Twitter’s increased suspensions also call into question its estimates that fewer than 5 percent of its active users are fake or involved in spam, and that fewer than 8.5 percent use automation tools that characterise the accounts as bots. (A fake account can also be one that engages in malicious behaviour and is operated by a real person. Many legitimate accounts are bots, such as those that report weather or seismic activity.)
Harvey said the crackdown has not had “a ton of impact” on the number of active users – which stood at 336 million at the end of the first quarter – because many of the problematic accounts were not tweeting regularly. But moving more aggressively against suspicious accounts has helped the platform better protect users from manipulation and abuse, she said.
Legitimate human users – the only ones capable of responding to the advertising that is the main source of revenue for the company – are central to Twitter’s stock price and broader perceptions of a company that has struggled to generate profits.
Independent researchers and some investors long have criticised the company for not acting more aggressively to address what many considered a rampant problem with bots, trolls and other accounts used to amplify disinformation. Though some go dormant for years at a time, the most active of these accounts tweet hundreds of times a day with the help of automation software, a tactic that can drown out authentic voices and warp online political discourse, critics say.
“I wish Twitter had been more proactive, sooner,” said Sen. Mark Warner, D-Va., the top-ranking Democrat on the Senate Intelligence Committee. “I’m glad that – after months of focus on this issue – Twitter appears to be cracking down on the use of bots and other fake accounts, though there is still much work to do.”
The decision to forcefully target suspicious accounts followed a pitched battle within Twitter last year over whether to implement new detection tools. One previously undisclosed effort called “Operation Megaphone” involved quietly buying fake accounts and seeking to detect connections among them, said two people familiar with internal deliberations. They spoke on the condition of anonymity to share details of private conversations.
The name of the operation referred to the virtual megaphones – such as fake accounts and automation – that abusers of Twitter’s platform use to drown out other voices. The program, also known as a white hat operation, was part of a broader plan to get the company to treat disinformation campaigns by governments differently than it treated more traditional problems such as spam, which aims to trick individual users rather than shape the political climate of an entire country, according to these people. Harvey said she had not heard of the operation.
Some executives initially were reluctant to act aggressively against suspected fake accounts and raised questions about the legality of doing so, said the people familiar with internal company debates. In November, one frustrated engineer sought to illustrate the severity of the problem by buying thousands of fake followers for a Twitter manager, said two people familiar with the episode. Bots can be readily purchased on a grey market of websites.
A person with access to one of Twitter’s “Firehose” products, which organisations buy to track tweets and social media metrics, provided the data to the Post. The Firehose reports which accounts have been suspended and unsuspended, along with data on individual tweets.
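As a rough illustration of how such a feed could be tallied, here is a minimal Python sketch. The event and field names (“user_suspend”, “user_unsuspend”, “timestamp”) are hypothetical assumptions, since the article describes the Firehose only as reporting suspensions and unsuspensions alongside tweet data.

```python
import json

def count_suspensions(lines):
    """Tally suspend/unsuspend events per day from a newline-delimited
    JSON feed. All field names here are hypothetical placeholders."""
    daily = {}
    for line in lines:
        event = json.loads(line)
        if event.get("event") not in ("user_suspend", "user_unsuspend"):
            continue
        day = event["timestamp"][:10]  # e.g. "2018-05-14"
        counts = daily.setdefault(day, {"user_suspend": 0, "user_unsuspend": 0})
        counts[event["event"]] += 1
    return daily

# Usage with a file containing one JSON event per line:
# with open("compliance_stream.jsonl") as f:
#     print(count_suspensions(f))
```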
Bots, trolls and fake accounts are nearly as old as Twitter, which started operations in 2006. In 2015, Twitter’s then-chief executive Dick Costolo acknowledged the problem in a company memo: “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years.”
Twitter was not alone among tech companies in failing to adequately anticipate and combat Russian disinformation, which intelligence agencies concluded was part of the Kremlin’s attempt to help elect Republican Donald Trump, damage Democrat Hillary Clinton and undermine the faith of Americans in their political system.
The aftermath of the election – and the dawning realisation of the key role unwittingly played by US tech companies – threw some of the industry’s biggest players into crises from which they have not entirely emerged, while subjecting them to unprecedented scrutiny. Political leaders have demanded that Silicon Valley do better in the 2018 mid-term elections despite a lack of new laws or clear federal guidance on how to crack down on disinformation without impinging on constitutional guarantees of free speech.
Twitter had said in several public statements this year that it was targeting suspicious accounts, including a recent blog post disclosing that nearly 10 million accounts a week were being “challenged” – a step that attempts to ascertain the authenticity of an account’s ownership by requiring the user to respond to a prompt, such as verifying a phone number or email address.
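In the abstract, a challenge of this kind works like a gate on posting: a flagged account stays locked until its owner proves control of a contact channel. The Python sketch below is a hypothetical illustration of that flow only; none of these names or mechanics come from Twitter.

```python
from dataclasses import dataclass

@dataclass
class Account:
    user_id: int
    challenged: bool = False        # flagged as possibly inauthentic
    verified_contact: bool = False  # completed the phone/email prompt

def can_tweet(account: Account) -> bool:
    # A challenged account stays locked until it passes verification.
    return not account.challenged or account.verified_contact

def complete_challenge(account: Account, code_entered: str, code_sent: str) -> None:
    # Unlock once the user echoes back the code sent to their phone/email.
    if code_entered == code_sent:
        account.verified_contact = True
```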
In March, Twitter chief executive Jack Dorsey announced a companywide initiative to promote “healthy conversations” on the platform. In May, Twitter announced major changes to the algorithms it uses to police bad behaviour. Twitter is expected to make another announcement related to this initiative next week.
But researchers have for years complained that the problem is far more serious and that Twitter’s definition of a fake account is too narrow, allowing the company to keep its counts low. Several independent projects also have followed particular bots and fake accounts over many years, and even after the recent crackdown, researchers point to accounts with obviously suspicious behaviours, such as gaining thousands of followers in just a few days or tweeting around the clock.
“When you have an account tweeting over a thousand times a day, there’s no question that it’s a bot,” said Samuel Woolley, research director of the Digital Intelligence Lab at the Institute for the Future, a Palo Alto, California-based think tank. “Twitter has to be doing more to prevent the amplification and suppression of political ideas.”
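The behaviours researchers cite translate naturally into threshold rules. The sketch below shows that idea in Python; the thresholds echo the figures quoted above, but the rule itself is an illustrative assumption, not Twitter’s or the researchers’ actual detector.

```python
def looks_like_bot(tweets_last_day: int,
                   followers_gained: int,
                   days_observed: float) -> bool:
    """Flag accounts that tweet implausibly often or gain followers
    implausibly fast. Thresholds mirror figures quoted in the article."""
    TWEETS_PER_DAY_LIMIT = 1000     # "over a thousand times a day"
    FOLLOWERS_PER_DAY_LIMIT = 1000  # "thousands of followers in just a few days"
    if tweets_last_day > TWEETS_PER_DAY_LIMIT:
        return True
    if days_observed > 0 and followers_gained / days_observed > FOLLOWERS_PER_DAY_LIMIT:
        return True
    return False

print(looks_like_bot(tweets_last_day=1400, followers_gained=0, days_observed=1))  # True
```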
Several people familiar with internal deliberations at Twitter say the recent changes were driven by political pressure from Congress in the wake of revelations about manipulation by a Russian troll factory, which Twitter said controlled more than 3,000 Twitter accounts around the time of the 2016 presidential election. Another 50,258 automated accounts were connected to the Russian government, the company found.
News reports about the severity of the bot problem and a rethinking of Twitter’s role in promoting online conversation also factored into Twitter’s more aggressive stance, these people said.
During congressional hearings last fall, lawmakers’ questions forced Twitter to look harder at its bot and troll problem, according to several people at the company. The scrutiny also revealed gaps in what the company had done so far – and limits on the tools at its disposal for responding to official inquiries.
Twitter launched an internal task force to look into accounts run by the Russian troll factory, called the Internet Research Agency, and received data from Facebook and other sources, including a threat database known as QIntel, according to two people familiar with the company’s processes.
One major discovery was the relationship between the Russian accounts and Twitter’s longstanding spam problems, the people said. Many of the accounts used by Russian operatives, the company researchers found, were not actually created by the IRA. Instead, the IRA had purchased bots that already existed and were being sold on a black market. Older accounts are more expensive than newly created ones because they are more likely to get through Twitter’s spam filters, said Jonathon Morgan, chief executive of New Knowledge, a startup focused on helping internet companies fight disinformation.
The discovery of the connection between the Russian bots and the spam problem led company officials to argue for a bigger crackdown, according to the people familiar with the situation. An internal battle ensued over whether the company’s traditional approach to spam would work in combating disinformation campaigns organised and run by nation-states such as Russia.
Rather than merely assessing the content of individual tweets, the company began studying thousands of behavioural signals, such as whether users tweet at large numbers of accounts they don’t follow, how often they are blocked by people they interact with, whether they have created many accounts from a single IP address, or whether they follow other accounts that are tagged as spam or bots.
Sometimes the company suspends the accounts outright. But Twitter also limits the reach of certain tweets by placing them lower in the stream of messages – a practice sometimes referred to as “shadow banning,” because users may not know their tweets are being demoted.
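One way to picture how behavioural signals could drive a suspend-or-demote decision is a weighted score. The Python sketch below uses the four signals named above, but the weights, cutoffs, and function names are invented for illustration; the article does not describe how Twitter actually combines its thousands of signals.

```python
# Signal names paraphrase the article; weights and cutoffs are invented.
SIGNAL_WEIGHTS = {
    "mentions_of_strangers_rate": 2.0,  # tweets at accounts it doesn't follow
    "block_rate": 3.0,                  # how often interaction partners block it
    "accounts_per_ip": 1.5,             # many accounts created from one IP
    "spam_neighbour_ratio": 2.5,        # follows accounts tagged as spam or bots
}

def spam_score(signals: dict) -> float:
    return sum(SIGNAL_WEIGHTS[name] * value
               for name, value in signals.items()
               if name in SIGNAL_WEIGHTS)

def enforcement_action(score: float) -> str:
    if score >= 8.0:
        return "suspend"
    if score >= 4.0:
        return "demote"  # rank the account's tweets lower in timelines
    return "none"

example = {"mentions_of_strangers_rate": 0.9, "block_rate": 1.2,
           "accounts_per_ip": 0.5, "spam_neighbour_ratio": 1.0}
print(enforcement_action(spam_score(example)))  # "suspend"
```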
Harvey said that the effort built on the technical expertise of an artificial intelligence startup called Magic Pony that the company acquired in 2016. The acquisition “laid the groundwork that allowed us to get more aggressive,” Harvey said. “Before that, we had this blunt hammer of your account is suspended, or it wasn’t.”
The data obtained by the Post shows a steady flow of suspensions, punctuated by spikes on particular days, such as Dec. 7, when 1.2 million accounts were suspended – nearly 50 percent higher than the average for that month. There was also a pronounced increase in mid-May, when Twitter suspended more than 13 million accounts in a single week – 60 percent more than the pace in the rest of that month.
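For readers checking the arithmetic: a 1.2 million-suspension day sitting nearly 50 percent above the monthly average implies a baseline of roughly 800,000 suspensions a day. The short calculation below makes the comparison explicit; the baseline figure is inferred from the reported percentages, not stated in the data.

```python
def pct_above_average(day_count: float, month_average: float) -> float:
    """Percentage by which one day's count exceeds the monthly average."""
    return (day_count / month_average - 1.0) * 100

# Dec. 7: 1.2 million suspensions against an implied ~0.8 million/day baseline.
print(round(pct_above_average(1.2e6, 0.8e6)))  # 50
```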
Harvey said that the company was planning to go further in the year ahead. “We have to keep observing what the newest vectors are, and changing our ways to counter those,” she said. “This doesn’t mean we’re going to sit on our laurels.”