Since Hamas attacked Israel on 7 October, the UK's Counter Terrorism Internet Referral Unit has seen a twelvefold increase in reports of hateful social media content reaching its specialist officers.
The unit, whose work once centred on Islamic State propaganda and online reaction to attacks in the UK, now spends much of its time assessing whether extreme and hateful posts break terrorism laws.
The team has received more than 2,700 referrals through an online form since the Hamas attack and Israel's retaliatory air strikes on the Gaza Strip.
Officers say the rising tide of hate is leaving young Britons more vulnerable to radicalization driven by social media algorithms.
The BBC was given exclusive access to the team's work. Officers told me they are seeing antisemitic content shared at scale by young people in Britain who were previously unknown to police.
They described a wave of hate coming from young people acting recklessly online.
One officer said the period since 7 October stands out for the sheer volume of content being shared. The BBC is not naming the officers because of the nature of their work.
Their boss, Matt Jukes, head of Counter Terrorism Policing, is concerned that while his officers confront the worst of this content, social media companies are not doing enough to stop hateful material spreading.
Algorithms are the systems that recommend new content to users based on what they engage with. The concern is that they do not simply reflect back what people already believe: they can steer users towards posts that push them, step by step, into more extreme views.
Jukes says material that people once had to seek out is now being delivered to them unbidden. "Before, people had to go to specific places or websites to find hateful and extreme material. Now, this kind of material is being sent to them, even when they are not looking for it."
'Impulsive, emotionally driven posts'
Members of the public who refer posts to the team hear little about what happens next.
I am told the team assesses posts and other online content against UK terrorism and other laws.
They look for the most extreme and most widely shared material, rather than one-off posts. The concern is content that promotes terrorism and could encourage people towards violence or extremist beliefs.
Currently, that includes posts expressing support for Hamas and those glorifying or endorsing acts of terror, one of the officers explains.
They showed me a series of screenshots from X, TikTok, and a messaging app, with the usernames blurred out. Some posts appeal for help to join Hamas, which the UK and other governments have designated a terrorist organization, or solicit money for travel. Others contain antisemitic abuse.
"The platforms being used are X, Instagram, and TikTok," an officer says, adding that many of the posts are text only. "Young people feel at ease on these sites, and a lot of what they post is impulsive and emotionally driven."
Many of the profiles, they say, have never shared this kind of content before. Their fear is that people are unwittingly spreading virulent antisemitic ideas.
Those posting come from a range of places and backgrounds, but are generally younger.
"We have seen far more antisemitic material than anti-Muslim material. It's very noticeable," another officer says. "We have received material from far-right groups that vocally support Israel."
Since 7 October, I have myself found and investigated anti-Muslim and racist social media posts, some by accounts hostile to the pro-Palestinian movement, as well as antisemitic comments by accounts hostile to Israel. What I have seen online matches what several human rights groups and campaigners have reported: a rise in both anti-Muslim and antisemitic hate on social media.
The unit saw a surge of posts praising and celebrating the 2017 terrorist attacks in the UK, at London Bridge, Manchester Arena, Westminster, and a mosque in Finsbury Park, London. But officers say reports since the latest Israel-Gaza war began have been far more sustained, and the discussion far more intense.
They have identified 630 potential breaches of terrorism or hate crime laws.
Of those, I am told, 150 cases have been passed to police for further investigation or action. Around 10 have gone to counter-terrorism teams within the Met, with others referred to local forces or regional counter-terrorism units.
Police say TikTok, X, and Meta, which owns Instagram and Facebook, have been cooperative and quick to remove the most serious content when alerted to it. Posts that do not clearly break the platforms' own rules, however, have proved harder to deal with.
Some of what people are posting is plainly vile. "But many of the things we're dealing with are right on the edge," a police officer says.
"You have an area where there is content, opinion, and material that is not very pleasant." When does it become a crime? That is the judgement the team has to make.
TikTok says that since 7 October it has stepped up efforts to keep its users safe, that it stands firmly against hate and hate speech, and that it is developing ways to diversify recommendations so users are not repeatedly shown the same videos.
Meta, which owns Instagram, sets out in its "community standards" how it uses a combination of technology and human reviewers to find and remove content that breaks its rules, including content attacking people on the basis of religion, ethnicity, or national origin. The company says it removes imagery produced by a designated dangerous organization or individual, unless a user is sharing it to report on it or condemn it.
"Algorithms – a route to radicalization"
And what of the mass of hate in the middle? It is not extreme enough to be illegal, but it degrades public discourse and may push some people towards more extreme beliefs.
Matt Jukes says he has noticed a stark polarization online in recent weeks.
People, he says, are seeing content online that makes them feel affirmed, while many others, in bubbles of their own, feel just the same about very different material.
Younger, heavier users of social media may be most at risk. There are advantages to that immersion: arguably they are more connected than ever, exposed to a wider range of perspectives and content, and many are more engaged as a result. But for some, encountering extremist material there can be dangerous.
For now, responsibility for dealing with hateful posts rests largely with the social media companies themselves, along with regulators and users.
The new Online Safety Act makes social media companies responsible for illegal content.
The bigger question is what to do about algorithms accused of amplifying hate and normalizing harmful language.
Jukes describes the coming year's combination of terrorism threats, hate crime online and offline, and the interest of state actors as extraordinary.
Intense and harmful discourse that breaks no laws could still profoundly shape public conversation, not only around this war but also in the elections taking place around the world this year.