- Mic asked Leslie Miley, a former engineer at Twitter who started the product safety and security team that handled abuse, if Twitter is capable of handling the complex mechanisms of creative and dedicated abusers.
- The answer: It’s complicated, but possible. Miley said trolls can evade filters with code words and deliberate misspellings, but the tooling Twitter had in 2015 could handle those tactics.
- “If you start using ‘bob’ as a code word for some racist term, then it gets really difficult,” Miley said. “You can try to do a signal-to-noise ratio” so if accounts are flagged that are part of “affinity groups known for abusive behavior or racist views, the tooling is really easily modified to handle that.”
- If these algorithms don’t prove effective, however, there’s still plenty of room for improvement.
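The "signal-to-noise" approach Miley describes can be sketched roughly as follows: a flagged term in a single tweet is weak evidence on its own, but the same term posted by an account in a group known for abusive behavior carries more weight. This is a minimal illustrative sketch only; the term lists, group names, weights, and threshold are all hypothetical placeholders, not Twitter's actual system.

```python
# Hypothetical sketch of a signal-to-noise abuse heuristic:
# a per-message term signal is boosted by an account-level prior
# (membership in groups flagged for abusive behavior).
# All names, weights, and thresholds here are illustrative.

FLAGGED_TERMS = {"slur1", "slur2"}       # placeholder code words
FLAGGED_GROUPS = {"abusive_ring_42"}     # placeholder affinity groups

def abuse_score(text: str, account_groups: set) -> float:
    """Fraction of words matching flagged terms, boosted for flagged accounts."""
    words = text.lower().split()
    term_hits = sum(w in FLAGGED_TERMS for w in words)
    signal = term_hits / max(len(words), 1)   # per-message signal
    # Boost the signal when the account belongs to a flagged group.
    if account_groups & FLAGGED_GROUPS:
        signal *= 3.0
    return signal

def should_flag(text: str, account_groups: set, threshold: float = 0.6) -> bool:
    """Flag a message when its boosted score crosses the threshold."""
    return abuse_score(text, account_groups) >= threshold
```

Under this toy scoring, the same message can fall below the threshold for an ordinary account but cross it for an account in a flagged group, which is the gist of modifying the tooling to weight "affinity groups known for abusive behavior." A real system would need far richer signals (report rates, account age, network structure) to avoid penalizing innocent uses of repurposed words like "bob."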