Rethinking Cybersecurity: Protecting Privacy and Combating Online Harassment with a Human-Centered Approach
Cybersecurity often brings to mind images of supercomputers churning through binary code or anonymous figures grinning behind masks. While these pop-culture depictions are exaggerated, the dangers of cyber-attacks are very real: globally, the cost of cybercrime is projected to reach $10.5 trillion annually by 2025, showing just how severe these threats are becoming. One important aspect often missed in cybersecurity discussions is personal security and privacy. In Kurdistan, for instance, the right to privacy has only recently been recognized as a fundamental right. The Kurdistan Regional Government (KRG) has taken steps to address these challenges by cooperating with international organizations and local initiatives, but comprehensive policies specifically addressing online harassment and the misuse of social media are still in progress.
Online communities have traditionally offered safe spaces for women, LGBTQ+ people, and others from marginalized groups. However, in recent years, these groups have faced increasing targeted harassment. A 2017 Pew Research Center study found that 41% of American adults have experienced some form of online harassment, with young women and LGBTQ+ individuals being disproportionately affected. Social media platforms claim to have anti-hate policies to protect users and prevent harassment, but these policies often cover only limited scenarios and fail to keep up with the evolving online landscape. This issue is even more significant in multilingual communities, where rules are often only enforced for English content.
In Kurdistan, for example, cyber attackers and trolls have developed ways to bypass suspensions by using "coded abusive language in regional dialects." This is a common problem on platforms like Twitter, where women and marginalized communities are especially targeted. The abuse extends beyond misogynistic insults to include Islamophobia and homophobia. A study by the Anti-Defamation League found that 44% of LGBTQ+ adults and 31% of African Americans in the U.S. have been subjected to severe online harassment, underscoring the scale of the problem across different platforms.
Research by Amnesty International found that 7.1% of tweets directed at women in their study were "problematic" or "abusive." Despite strict rules on social media platforms against targeted harassment based on race, gender identity, sexual orientation, and religion, abuse still occurs. One reason is that enforcement often assumes users write in standard English, the language automated detection tools handle best; abuse written in other languages or regional dialects frequently slips past them. Social media platforms struggle to enforce their policies in multilingual environments, often leaving non-English-speaking users more vulnerable.
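The enforcement gap described above can be illustrated with a minimal sketch. The word lists below, including the stand-in "dialect" terms, are hypothetical placeholders rather than real moderation data; the structural point is that a filter keyed only to English vocabulary passes dialect or transliterated abuse untouched, while a per-language lookup at least has a chance of catching it.

```python
# Minimal sketch of why English-only keyword moderation misses
# abuse written in other languages or regional dialects.
# All word lists are hypothetical placeholders, not real data.

# An English-only blocklist, as many early automated filters used.
ENGLISH_BLOCKLIST = {"insult1", "insult2"}

# The same idea keyed by language code, so regional terms
# (here invented stand-ins) can be covered as well.
BLOCKLISTS_BY_LANG = {
    "en": {"insult1", "insult2"},
    "ku": {"slurword_a", "slurword_b"},  # hypothetical dialect terms
}

def flag_english_only(text: str) -> bool:
    """Return True if the text contains a blocklisted English term."""
    words = text.lower().split()
    return any(w in ENGLISH_BLOCKLIST for w in words)

def flag_multilingual(text: str, lang: str) -> bool:
    """Check the text against the blocklist for its language,
    falling back to English when the language is unknown."""
    blocklist = BLOCKLISTS_BY_LANG.get(lang, BLOCKLISTS_BY_LANG["en"])
    words = text.lower().split()
    return any(w in blocklist for w in words)

# The English-only filter misses the dialect term entirely...
print(flag_english_only("you are a slurword_a"))        # False
# ...while the language-aware lookup catches it.
print(flag_multilingual("you are a slurword_a", "ku"))  # True
```

Real platforms rely on machine-learned classifiers rather than keyword lists, but the underlying asymmetry is the same: detection coverage exists only for the languages a platform has explicitly invested in.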
This is where a human-centered approach becomes crucial. To effectively address online abuse, we need to first understand the problem in its specific social and political context and then create empathetic policies to tackle it. Engaging users in the policy-making process helps because they experience online abuse firsthand. Content moderation needs to be combined with regional and language-specific safeguards to better protect users, and quick, effective responses from enforcement authorities build trust between users and companies. For example, Twitter's "Safety Mode," introduced in 2021, aimed to reduce harassment by automatically blocking abusive accounts, but users found it less effective in languages other than English. This highlights the need for localized solutions.
Transparent reporting about abusive content can guide future security measures and help both individuals and companies understand trends in cyber harassment. In this context, research also plays a critical role. Studies have shown that the psychological impacts of online harassment can include anxiety, depression, and even post-traumatic stress disorder (PTSD). Understanding these effects is crucial in creating policies that protect mental health as well as personal security.
For universities, this means taking a leading role in providing comprehensive cybersecurity programs and conducting research to address these issues. Cybersecurity graduates should not only focus on technical solutions but also study the social and psychological aspects of online abuse. Students in these programs should aim to develop a deep understanding of how online harassment works across different cultures and languages. They should work on creating policies and technologies that are sensitive to the nuances of regional and linguistic diversity. By doing so, they can contribute to building a safer and more inclusive online environment.
Moreover, universities can collaborate with social media companies to develop advanced tools and algorithms that detect abusive behavior across multiple languages and cultural contexts. Educational institutions can also foster awareness by hosting seminars, workshops, and discussions on topics like digital rights, online safety, and mental health impacts. This would not only equip students with the necessary skills but also make them advocates for change in the cybersecurity space. In the Kurdistan region itself, British International University has started offering cybersecurity programs to equip students with the knowledge and skills to combat cyber threats. These programs aim to address both technical and ethical challenges, encouraging students to consider the social impact of cybercrime on privacy and security.
The first step in combining cybersecurity with human-centered thinking is to recognize that there is a person behind every screen. Cyber harassment is not just a security issue—it can also greatly affect a victim’s mental and physical well-being. By adopting a holistic approach, involving both technological and human considerations, we can create safer online spaces for everyone.