Dealing with "Unhealthy" Exposure with Emerging Technologies | Solving the User Security Problem with Web3 and Related Tech: Part 5



In this series of articles, we are looking into how various new-age technologies can increase user security online.


Now, we discussed various types of security challenges in the first part and how companies can minimize data collection in the second part.


In the third part, we discussed how the emerging technologies mentioned in the second part can help protect user identity in the digital ecosystem.


In the fourth part, we discussed user profiling and how emerging technologies can restrict its adverse impact on users from both a privacy and a security perspective.


In this part, we discuss how various new-age technologies can limit unhealthy exposure for users, especially children.


Unhealthy exposure here means exposure to content that negatively affects the user's mental health by stimulating negative emotions. Such content can be hateful or provocative, harassment based on religion or race, or sexually explicit material.







One of the foundational texts of the modern concept of privacy is “The Right to Privacy” by Samuel Warren and Louis Brandeis, published in the Harvard Law Review in 1890.


Here is a brief excerpt from the paper:


"""Thus, with the recognition of the legal value of sensations, the protection against actual bodily injury was extended to prohibit mere attempts to do such injury; that is, the putting another in fear of such injury … there came a qualified protection of the individual against offensive noises and odors, against dust and smoke, and excessive vibration. The law of nuisance was developed. So regard for human emotions soon extended the scope of personal immunity beyond the body of the individual. His reputation, the standing among his fellow-men, was considered, and the law of slander and libel arose…Similar to the expansion of the right to life was the growth of the legal conception of property. From corporeal property arose the incorporeal rights issuing out of it; and then there opened the wide realm of intangible property, in the products and processes of the mind, as works of literature and art, goodwill, trade secrets, and trademarks.
This development of the law was inevitable. The intense intellectual and emotional life, and the heightening of sensations which came with the advance of civilization, made it clear to men that only a part of the pain, pleasure, and profit of life lay in physical things. Thoughts, emotions, and sensations demanded legal recognition, and the beautiful capacity for growth which characterizes the common law enabled the judges to afford the requisite protection, without the interposition of the legislature.
Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls the right "to be let alone". Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that "what is whispered in the closet shall be proclaimed from the house-tops."
The press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip is no longer the resource of the idle and of the vicious, but has become a trade, which is pursued with industry as well as effrontery. …but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury."




Thoughts, emotions, and sensations demanded legal recognition

Let us use this as the starting point of our discussion.


Unhealthy exposure, as we defined earlier, causes detriment to our “thoughts, emotions, and sensations” and should be legally recognized as harm to an individual.


In the first article in this series, I mentioned some statistics that show how serious the unhealthy exposure problem really is. I think it is pertinent to mention them here.

Based on Meta documents leaked and reported by The Guardian, around 100,000 children on Facebook and Instagram encounter online sexual harassment each day, including exposure to explicit images.

John Shehan, the head of the exploited children division at the National Center for Missing and Exploited Children, notes a concerning rise in reports of child sexual abuse material on online platforms - from 32 million in 2022 to a record of over 36 million in 2023.



Let us look into a broader issue - the pornification of social media.

Earlier this year, a Belgian regulator warned that Elon Musk's X is turning into a porn website.

And this is not a Belgium-specific problem. Last year, Indonesia blocked ‘X’ over its gambling and explicit content problem.

Also, I do not want to single out ‘X’ here.

Instagram, Facebook, and other sites are also becoming distribution channels for adult content - Instagram is a particular cause for concern.

The minimum age to be on Instagram is 13, and 8% of its users are under the age of 18. Well, just 8%? Instagram has more than a billion users - 8% means upwards of 80 million minors.



Okay, those are the statistics. Now, let us try to understand the problem by diving into the factors driving it.



Driving Forces


User-Related Forces

Let us start with factors linked to the users.


Lack of meaningful age and identity verification:

Even when the minimum age for joining social media is 13, younger children can easily join these platforms.


Seriously, all they have to do is click the button that says “yes” when asked “Are you xx or older?” or enter a fake date of birth. It is that simple.


Also, what is the process of identity verification?

What stops a man posing as a woman or a 50-year-old man posing as a teenage girl?


Age and identity verification on social media are a joke - minimal by design. Why? We will get to that later.



“Public as Default”:

Earlier in this series, I discussed how social media platforms at some point chose “public as default” for user information visibility. This was a departure from the close-knit communities of social media's early days.


The primary reason behind this choice is that “public as default” is much more conducive to network effects. We will discuss this later.


Now, “public as default” causes "unhealthy" exposure issues in various ways.

First, it makes it very easy to spread provocative or explicit content. If the algorithms showed us only content from our connections, or at least prioritized content from our connections, it would be difficult for "unhealthy" content to spread. Such content is often shared and then distributed using fake profiles.

Unfortunately, social media algorithms often prioritize content from people we are not connected to over content from people we are connected to - especially on platforms like ‘X’ and Instagram. This makes these platforms attractive channels for spreading provocative and explicit content.


The second reason is that the “public as default” design makes it easier for fraudsters and predators to search for targets.



Algorithmic threat:

The issue is that the algorithms on these platforms are often designed to maximize user engagement rather than ensure user safety.


Last year, a lawsuit from the New Mexico attorney general alleged that Facebook and Instagram help predators find underage children and that Mark Zuckerberg allowed these platforms to become a “marketplace for predators in search of children”.


The Verge reports:

“As outlined in the complaint, the New Mexico attorney general’s office conducted an investigation that involved creating test profiles on Facebook and Instagram that appeared to be teenagers or preteens. Not only did the office find inappropriate recommendations for each of the decoys, such as an account that openly posted adult pornography, but it also found that they attracted predators as well.”





Content-Related Forces


Now, the other side of the issue is linked to content.


Lack of content tracking and moderation:

You know how easy it is to post any content on social media. It takes minutes.

Now, unless the content is some kind of copyrighted material, it may not be deleted for days - no matter what kind of content it is.


Unfortunately, “unhealthy” content often spreads faster. So, by the time the content is deleted, many users have already been exposed to it. Even after the content gets deleted, perpetrators can simply create a new account and upload the same content again.



Lack of economic cost in content distribution:

One deeper issue is that there is no economic cost associated with “unhealthy” content distribution.

So, the most the perpetrator can lose is the profile - and using a fake, temporary profile means there is no risk at all.



Big picture - Drivers


Now, the big picture.

What is behind all these issues? All user-related and content-related issues point to one overarching problem - a business model that encourages “unhealthy” content sharing.


Most of the revenue social media earns comes from advertising. For example, in 2023, ~98% of Meta's revenue came from advertising. And this is not just social media; most platform businesses depend on advertising revenue.

Obviously, their KPIs are aligned with generating more and more advertising revenue - daily active users (DAU), ad impressions, price per ad, etc.

Now, all these KPIs depend on the number of users, how long they spend on the platforms, and how often they engage with the platforms.

Apart from that, user engagement directly feeds the network effect, which in turn drives the growth of the platforms.


The point is that platforms have an economic incentive to get more people onto the platform and keep them engaged.


Unfortunately, provocative and explicit content not only gets more engagement but may also bring more people to the platform.

Adult websites are consistently among the most visited websites in the world. So, the platforms have a perverse incentive to allow explicit content on their platforms. Provocative content, on the other hand, gets the most engagement.


This type of content appeals to our animalistic nature through “amygdala hijack” and compels us to act on emotion rather than rational thinking. This leads to compulsive overactivity.





Anyway, let us now see how the emerging technologies we talked about in the second article can solve, or at least alleviate, these issues.


Solving “Unhealthy Exposure” with Emerging Tech


In the second post of this series, we discussed various emerging technologies, namely zero-knowledge proofs (ZKPs), decentralized identity solutions, federated learning, homomorphic encryption, differential privacy, and Secure Multi-Party Computation (SMPC).


Please refer to that article to get a feel for these technologies.



Let us deal with the user-related issues first.



Dealing with User-Related Issues

Earlier in this article, we discussed the user-related issues, namely the lack of identity and age verification, “public as default”, and the vulnerabilities created by algorithms.


Now, decentralized identity (decentralized identifiers and verifiable credentials) can help verify age and identity without a central authority.


Decentralized identity solutions leverage cryptographic techniques to ensure the integrity and security of identity information. By using decentralized networks and blockchain technology, these solutions provide a tamper-resistant infrastructure for storing and managing digital identities. This reduces the risk of identity fraud and impersonation on social media platforms, enhancing overall security for users and platforms alike.
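To make this concrete, here is a minimal sketch of a verifiable-credential age check. The issuer key and DID below are hypothetical placeholders, and the shared-secret signing is used only for brevity; real systems would use asymmetric signatures and, ideally, zero-knowledge predicate proofs. The point is that the platform verifies an age predicate without ever seeing the birth date.

```python
# Minimal sketch (not a real DID/VC implementation): an issuer signs an
# "over 13" predicate tied to a decentralized identifier, and the platform
# verifies the claim without learning the user's actual birth date.
# ISSUER_KEY and the DID below are hypothetical placeholders.
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # real systems: asymmetric keys, not a shared secret

def issue_credential(holder_did: str, over_13: bool) -> dict:
    """Issuer attests to the age predicate only - no birth date is included."""
    claim = {"holder": holder_did, "over_13": over_13}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def platform_accepts(credential: dict) -> bool:
    """Platform checks the claim is untampered and the predicate holds."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["signature"])
            and credential["claim"]["over_13"])

cred = issue_credential("did:example:alice", over_13=True)
print(platform_accepts(cred))  # True - sign-up allowed, birth date never shared
```

The design choice worth noting is data minimization: the platform stores a yes/no attestation from a trusted issuer, not the identity document itself.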


Now, coming to “public as default”. This is a design issue, not a technical one. As mentioned earlier, the root issue is a business model that depends on the network effect. The ideal situation from a user safety perspective would be for users to have full control over who can see their profiles and posts, with the default setting allowing only their own connections to see them - a minimal sketch of such a check follows.
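Here, the class and field names are illustrative assumptions; the point is simply that visibility defaults to connections only, and strangers see nothing unless the owner opts in.

```python
# Hedged sketch of "private by default" profile visibility (names assumed).
from dataclasses import dataclass, field

@dataclass
class Profile:
    handle: str
    connections: set = field(default_factory=set)
    visibility: str = "connections"  # the default is deliberately NOT "public"

def can_view(viewer: str, profile: Profile) -> bool:
    """Allow access only when the owner explicitly opted into 'public'."""
    if profile.visibility == "public":
        return True
    return viewer in profile.connections

alice = Profile("alice", connections={"bob"})
print(can_view("bob", alice))      # True: bob is a connection
print(can_view("mallory", alice))  # False: the default keeps strangers out
```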


New users would join through referrals and become part of a close-knit “community”. This process would also restrict the fake profiles that are at the root of the propagation of “unhealthy content”.


We can also think about a “reputation system”. A person's “reputation” would depend not only on their own behavior on the network but also on the behavior of those they added. This creates an automatic deterrent against adding problematic profiles to the network - see the sketch below.
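As a sketch of that idea (the weighting and names are assumptions, not a worked-out design), a member's score can blend their own conduct with the conduct of the members they invited:

```python
# Illustrative invitation-linked reputation: vouching for bad actors
# drags down the inviter's own score, so members screen who they add.
INVITE_WEIGHT = 0.3  # assumed: how strongly invitees' conduct reflects back

class Member:
    def __init__(self, name: str):
        self.name = name
        self.own_score = 1.0  # updated by the member's own behavior
        self.invitees = []    # members this person brought into the network

    def reputation(self) -> float:
        if not self.invitees:
            return self.own_score
        invitee_avg = sum(m.reputation() for m in self.invitees) / len(self.invitees)
        return (1 - INVITE_WEIGHT) * self.own_score + INVITE_WEIGHT * invitee_avg

alice = Member("alice")
bob = Member("bob")
alice.invitees.append(bob)
bob.own_score = 0.2                  # bob shares "unhealthy" content...
print(round(alice.reputation(), 2))  # 0.76 - ...and alice's score drops too
```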


Yes, balancing this with the platform economics can be challenging and this is why platforms need to rethink their economics.



Dealing with Content-Related Issues

Look, it is quite understandable that existing social media platforms will not depart from their advertising-based models. Many of them are public companies, and changes in their economics would severely impact their market valuations. So, we may instead see new social media platforms emerge that prioritize user safety.


And there is also an issue on the user side. Many societies, especially in developing countries, do not really care about privacy. Another issue is that a departure from the advertisement-based model can mean users bearing some cost or paying to join the network - this, too, introduces friction.


Anyway, an alternative content model may involve NFTs, economic incentives (and related penalties) based on content quality, and a related reputation system.


NFTs would enable tracking of the content - who shared it, who accessed it, etc. Cryptographic techniques can be used to detect identical or stolen content. This adds to content monitoring, as it deters problematic content from re-entering the network - a simple fingerprinting sketch follows.
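As a toy illustration of the monitoring half (exact-match hashing only; a real deployment would use perceptual hashing so near-duplicates also match, and would anchor the registry on-chain):

```python
# Sketch of a fingerprint registry that blocks removed content from
# re-entering the network, even from a freshly created account.
import hashlib

removed_fingerprints: set = set()

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def flag_removed(content: bytes) -> None:
    """Record the fingerprint of content taken down by moderation."""
    removed_fingerprints.add(fingerprint(content))

def allow_upload(content: bytes) -> bool:
    """Reject uploads whose fingerprint matches previously removed material."""
    return fingerprint(content) not in removed_fingerprints

flag_removed(b"some removed clip")
print(allow_upload(b"some removed clip"))  # False: blocked on re-upload
print(allow_upload(b"a harmless photo"))   # True
```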

The economic incentive model can be linked with the reputation system and can penalize sharing of “unhealthy content”.


For this model to work, reputation must have economic value.

Reputation can be linked with reach and other incentives (such as access to premium features). The link between reach and reputation ensures that even if problematic profiles are not deleted, their reach is limited (see the sketch below). In the current system, these decisions (on profile and content moderation) are often in the hands of human moderation teams, who can be biased or politically influenced.
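A sketch of that reach-reputation link, where the audience cap and the curve are purely illustrative assumptions:

```python
# Hedged sketch: distribution reach as a function of reputation, so
# low-reputation profiles keep their accounts but lose amplification.
MAX_AUDIENCE = 10_000  # assumed cap on algorithmic distribution per post

def reach(reputation: float) -> int:
    """Map a 0..1 reputation score to the audience a post can get."""
    reputation = max(0.0, min(1.0, reputation))
    return int(MAX_AUDIENCE * reputation ** 2)  # quadratic: penalties bite early

print(reach(0.9))  # 8100 -> well-behaved profiles keep near-full reach
print(reach(0.3))  # 900  -> problematic profiles are throttled, not deleted
```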


So, the reputation-based system should be automated and decentrally governed. The only technology we have for this today is blockchain smart contracts. In fact, reputation-based systems are already being built in Web3.


I am obviously not going to delve into every detail of such a system here. This is just a general overview.




To summarize, the “unhealthy exposure” problem on digital platforms is extremely complex. While emerging technologies can help, solving it requires a collective effort from industry, users, and regulators.

