Britain is a much less racist country than the one I grew up in. Yet I have received more racist abuse in the last four weeks than in the previous 40 years. This paradox can be explained in one word: technology. More specifically, Twitter. It helps to explain why, last week, social media became once again the key battleground in the fight against racism.
Growing up, my experiences of racism were fairly trivial. If you called me a "paki" in the playground, you could expect a sarcastic lecture on the need to get an atlas, since my father is from India. Nobody ever hit me, but as a football fan in the 1980s I witnessed shocking racism, which had largely been driven out by the mid-1990s. Monkey chanting on the terraces gave way to hatred on social media: witness the racist abuse of Manchester United's Paul Pogba and other prominent black players that made headlines last week.
It is more than 20 years since anyone was racist to me in person, and I would rarely encounter racism at all if I were not on Twitter. But I am an addict. The platform can even tell me that I have sent more than 180,000 tweets over the past 11 years.
I have found it an overwhelmingly positive experience, though worry about the growing incivility of public discourse led me to invent #positivetwitterday in 2012, promoting it in an unlikely alliance with the blogger Guido Fawkes. This annual event, on the last Friday of August, challenges tweeters to behave in a civilised way for at least one day: a symbolic way to deepen the conversation about what we can all do to shape the social media culture we want. I am pleased that Twitter is supporting it again this year.
So users can change the tone of online discourse, but the racist storm I experienced this month shows why the social media companies must play their part too.
Ironically, it started when I shared some good news about progress on race: an Observer report on research finding that nine out of 10 people do not think you have to be white to be English. That kind of social change is a welcome message to roughly nine out of 10 people. Several hundred people retweeted and liked the tweet, but I also heard just how angry it made the most toxic members of that shrinking racist minority.
I decided to report the racial abuse I received through Twitter's own systems, something I had never done before. The results were illuminating. I reported about 50 racist users that weekend. Around a third were ruled out of bounds; two-thirds were judged to be fine.
So what kind of racism will Twitter let users get away with? You probably cannot call someone "negro", but I was told by Twitter that "You're not English, parjeet – your people are shitting on the street" was acceptable. I emailed to ask what more the user would have had to say to break the "hateful conduct" rules. A reply simply said that the decision had been reviewed and confirmed. I wondered how often my reports were being read by a human being and how often by an algorithm.
Getting these boundaries right is difficult. I want Twitter to do more about racists, but I do not think it should ban Donald Trump, for example, even if some of his tweets might have to go. With the aim of at least educating users, I started a new hashtag, #doesnotviolate, to promote transparency about what is allowed.
Twitter claims to hate racism on its platform, but its current rules permit racism and racist discourse, banning only users who promote violence, or who make threats or harass others on racial grounds. This month it tightened its policies against hatred: dehumanising tweets aimed at faith groups now break the rules. Twitter offered examples: "We need to exterminate the rats; Jews are disgusting" would now be out of bounds. I was astonished that it was not already. Yet say exactly the same thing about black people and it is still OK. That is, tweets that dehumanise racial groups are still considered acceptable by Twitter. Changing its policy on this is an urgent necessity.
The new rules will not help if Twitter cannot enforce them. Solidarity from fellow tweeters included their sending me striking evidence of how repeatedly those orchestrating the harassment of me had been banned before. One virulent antisemite using the handle "Noxious Jew" openly boasts of being a previously banned user. Twitter has allowed him to re-register dozens of vile variations, such as "Fetid Jew", "Pungent Jew" and "Malodorous Jew", with messages tagged #myfirsttweet asking his racist network to rebuild his audience. I sent this to Twitter's moderation team and received an email saying it could find no violation of its rules.
Racists banned from football grounds cannot simply turn up again the following weekend, yet hundreds of virulently racist accounts banned by Twitter openly celebrate how easily they can do just that. Twitter must combine technology and human judgment to solve this problem.
Is reporting online racism pointless? "Just block and move on," I was told. But the answer to racism on the football terraces was not earplugs for black and Asian fans. We changed the culture in the grounds. It is a mistake to think of "online" and "offline" as hermetically sealed worlds. Unless we emulate on social media the progress we have made in playgrounds and stadiums, the racism that has been pushed out into social media will seep back into society.
Phil Neville, manager of the England women's team, has proposed a six-month boycott by players. But I do not want to leave the platform. Instead, it is time for Twitter to show racism the red card – and mean it.
• Sunder Katwala is director of British Future