from the insurrectionists department
Summary: After Amazon refused to continue hosting Parler, the American right wing's favorite Twitter alternative, former Parler users who wanted to keep communicating with each other – without being tightly moderated – adopted Telegram as their replacement service. Following the attack on the Capitol building in Washington, DC, the Telegram chat app added 25 million users in just over 72 hours.
Telegram has long been home to fringe groups, who often find their communication options limited by moderation policies that will, unsurprisingly, remove violent or hateful content. Telegram's moderation is comparatively lax next to many of its social media competitors, making it the go-to option for these displaced users.
But Telegram appears to be responding to the influx of users – along with an influx of worrying content. Telegram has removed some channels broadcasting extremist content, with the increasingly popular chat service flexing its (so far rarely used) moderation muscle. According to reports, at least fifteen channels were removed by Telegram moderators, some of them filled with white supremacist content.
Unfortunately, policing content remains difficult. Although Telegram says it has blocked “dozens” of channels containing “calls to violence,” journalists have had little trouble finding similar violent content on the service – content that Telegram has either missed or ignored. While Telegram appears to respond to some reports of potentially illegal content, it appears to be inconsistent in enforcing its own rule against inciting violence.
Decisions to be made by Telegram:
Should content contained in private chats (rather than public channels) be subject to the same rules regarding violent content?
Given that many of its users moved to Telegram after being banned elsewhere for posting objectionable content, should the platform increase its moderation efforts targeting calls for violence?
Should a process be put in place to help prevent banned users/channels from reappearing on Telegram under new names?
Policy issues and implications for consideration:
Does Telegram’s commitment to user security and privacy prevent it from engaging in more active content moderation?
Is context considered when engaging in moderation, to avoid punishing people who share content in order to call attention to something they find troubling, rather than to promote the content or endorse its message?
Are reports of major content violations (and lax moderation) attracting more users to Telegram? Does this increase the chance that the moderation problem “snowballs” into something that can no longer be effectively managed?
Resolution: Telegram continues to take a largely hands-off approach, but appears to be more responsive to complaints about violent content than it has been in the past. As it continues to attract more users – many of whom have been kicked off other platforms – its existing moderation issues are only increasing.
Originally posted to the Trust & Safety Foundation website.
Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put out quality content for our community.
Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, at a time when advertisers are increasingly uninterested in sponsoring small, independent sites – especially a site like ours that is unwilling to pull punches in its reporting and analysis.
While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise – and every little bit helps. Thank you.
– The Techdirt Team
Filed Under: chat, content moderation, violence