Reflections on "The Social Dilemma"

For quite some time, I have recognized the impact that social networks and AI-based recommendation systems have on society. The 2020 Netflix documentary The Social Dilemma shed light on this issue, explaining in lay terms some of the negative impacts of these technologies. My goal in this article is to convey my personal perspectives, both to deepen my own understanding of the subject and to receive feedback from my professional network. The ideas presented here neither represent my employer’s views nor were they formed or influenced by my career at Microsoft.

The “problem”

Although The Social Dilemma brought attention to the area, in my opinion it failed to clearly articulate the problem. There are two distinct technologies in question:

(1) Social networks: the ability to connect society in a deeper way, by allowing people to interact with each other and spread information in a frictionless manner through their connections in the network.

(2) AI-based recommendation systems: the ability to understand people’s interests and leverage this understanding to present relevant information to them, with the goal of maximizing “engagement,” which is essentially time spent on the platforms that use these systems.
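To make the engagement objective concrete, here is a minimal sketch of a feed ranker. The item names and engagement scores are invented for illustration; real recommenders use learned models over many behavioral signals rather than a fixed score table.

```python
# Hypothetical sketch of an engagement-maximizing feed ranker. The
# items and scores below are invented; real systems predict engagement
# with machine-learned models, not a hard-coded dictionary.

def rank_feed(items, predicted_engagement):
    """Order items by predicted time-on-platform, highest first."""
    return sorted(items, key=lambda item: predicted_engagement[item], reverse=True)

items = ["news_a", "post_b", "meme_c"]
predicted_engagement = {"news_a": 0.2, "post_b": 0.5, "meme_c": 0.9}

feed = rank_feed(items, predicted_engagement)
print(feed)  # the most "engaging" item comes first, regardless of quality
```

Note that nothing in the objective rewards accuracy or quality; the ranker surfaces whatever it predicts will hold attention, which is exactly the tension discussed below.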

Both technologies provide incredible benefits and have helped advance humankind in many ways. Social networks are invaluable for connecting people in ways that were not previously possible. For example, it is possible to easily assemble a team of experts to collaborate on a project by utilizing the tools available in professional social networks. AI-based recommendation engines make our lives easier and increase our productivity. They present content that is relevant to us from a vast content library, saving us time and providing a frictionless experience. However, these benefits do not come without consequences. Social networks and AI-based recommendation systems independently carry their own negative effects, which are amplified when these technologies are combined. This precarious combination is present in all popular platforms used today.

I believe it is our responsibility, as technologists, to analyze these issues in a structured manner to reduce the negative effects of these platforms. While I will attempt to present solutions, I know that I will fall short. Ideas presented here will require refinement and restructuring, more research, and deeper understanding of these issues. I will start with two key observations:

(1) Social networks do not realistically reflect our society. In real life, we do not have as many friends. Furthermore, we do not constantly share our current thoughts and actions, nor do we comment on others’ thoughts and actions. Although we can spread “information,” it is typically at a slow rate and in a manner where personal interactions can help determine credibility and understanding. On social networks, it is possible to spread information not only at an increased rate, but also in an impersonal manner.

(2) Due to the use of AI-based systems to maximize platform engagement, popular content will always be more widely disseminated through the network, independent of its quality or validity.

In my opinion, the three most prominent problems that need to be addressed are:

(1) Spread of misinformation: a large, connected network that facilitates the dissemination of popular content to engaged users may promote the fast spreading of misinformation [1]. This article will primarily focus on this issue, as it has a crucial impact on our society and democracy.

(2) Polarization: divergent and conflicting information may be independently distributed to different communities, as AI-based recommendation systems will optimize for engagement by promoting potentially conflicting information to different users. This may reinforce and amplify bias and create a polarized social network, which in turn creates the risk of polarizing society.

It is hard to say if polarization in social networks has contributed to polarization in our society, and to what degree. We can, however, try to address the problem by first acknowledging and minimizing bias in the AI systems used by these platforms. We need more research to better understand bias in real life and in social networks. This data will help us improve our systems to simultaneously reduce bias and increase engagement by promoting constructive dialogue across different communities.

(3) Mental health: currently, there are no mechanisms to validate content that is posted. With “friends” who can post and redistribute “anything,” we are also more exposed to cyberbullying, which can exacerbate mental health issues. These are serious and alarming issues, especially considering the increased number of hospitalizations and the increased suicide rate among teenagers [2, 3].

Social media platform use and its impact on mental health deserve attention from several fields. Computer scientists, for example, can work with educators and health care professionals to build platforms that are safer, especially for young users. More robust sentiment analysis tools could reduce cyberbullying by preventing offensive comments from being posted. Investments in better identity systems and stricter age-based regulation of content could also help.
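As a rough illustration of the sentiment-analysis idea, here is a toy pre-posting gate. The keyword set is only a placeholder standing in for a trained sentiment/toxicity model, and the function and variable names are my own invention.

```python
# Toy pre-posting moderation gate. The keyword set is a stand-in for a
# trained sentiment/toxicity classifier; a real system would score the
# whole comment in context rather than match individual words.

OFFENSIVE_TERMS = {"idiot", "loser"}  # illustrative placeholder only

def allow_post(comment: str) -> bool:
    """Return True if the comment passes the (toy) toxicity check."""
    words = {word.strip(".,!?").lower() for word in comment.split()}
    return not (words & OFFENSIVE_TERMS)

print(allow_post("Great point, thanks for sharing!"))  # passes the gate
print(allow_post("You are such an idiot!"))            # blocked before posting
```

The design choice worth noting is that the check runs before publication, so an offensive comment is never distributed, rather than being taken down after the harm is done.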

Another important problem, which I will not address in this article, is the excessive time spent on social media platforms. I believe this issue is bigger than social networks and needs to be addressed separately. Excessive screen time may even be amplified in a post COVID-19 world, with more pervasive remote learning and work-from-home. Technology may help, but parents, educators, and health care professionals should also try to create mechanisms to enforce more balanced on/off-screen regimes. 

The “dilemma” is that there is no incentive for the platforms to change and no clear guidance on how they need to change to reduce these issues to an acceptable level. In the next section I will focus on the spread of misinformation, and I will conclude with a summary of the major societal problems that need to be addressed and proposed areas for investigation. I will not explore detailed solutions, but rather outline the high-level areas that need further investigation and how they fit in the overall solution framework.

The spread of misinformation

The underlying fundamental issues that need to be addressed to contain the spread of misinformation are: (1) the structure of the social networks and (2) the lack of checks and balances.

The structure of social networks does not reflect the structure of social bonds and organizations in real life. We do not have as many friends offline as we do online, which makes the graph of a social network much more densely connected. If we represent society as a graph, in which each node is an individual or a community, the graph representing actual society will be much sparser than the graph representing social networks, as illustrated in the figure below.

[Figure: a sparse graph representing real-world social ties alongside the denser graph of an online social network]
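The density gap can be made concrete with a back-of-the-envelope calculation. The numbers below (roughly 5 close ties per person offline versus 300 platform "friends") are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope graph density comparison. Density is the
# fraction of all possible undirected edges that actually exist.
# The per-person tie counts are assumptions for illustration.

def density(num_nodes: int, num_edges: int) -> float:
    """Edges present divided by all possible undirected edges."""
    return num_edges / (num_nodes * (num_nodes - 1) / 2)

people = 1000
real_life = density(people, people * 5 // 2)     # ~5 ties each -> 2,500 edges
platform = density(people, people * 300 // 2)    # ~300 ties each -> 150,000 edges

print(f"real-life density: {real_life:.4f}, platform density: {platform:.4f}")
```

Even with these rough numbers the platform graph comes out about sixty times denser, which is the structural gap the figure above is meant to convey.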

I am writing this in the middle of the COVID-19 pandemic, and we need to think about what society will look like in a post-pandemic world. Bill Gates predicts that we will live in an even more decentralized society. With work-from-home possible for a large part of the population, people would move to smaller cities away from “downtown” areas. This is a plausible vision, with benefits ranging from lower real-estate costs to less traffic and pollution and better preparedness for future pandemics.

In this scenario, the structure of current social networks would be even more disconnected from the structure of our real society, which would be represented by an even sparser graph. In a society where more people live in the suburbs and spend more time in their local communities, we would have fewer friends but more meaningful relationships. If social networks were to match that structure, the ability to disseminate information widely through the network would be reduced and would more closely resemble what we have in real life.

There are several ways to make the graph structure of social networks more aligned with the one that represents our society. In real life we interact with many people but we do not have many close friends. Making the type of relationship more explicit and controlling how information is propagated in a more granular way would more closely resemble the way we interact in the real world. This is an area that requires more research, as we need to make changes that allow users to feel more connected and engaged, while enhancing the value of social platforms and limiting the flow of information through the network.
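To see why the graph structure matters for dissemination, here is a toy reshare-cascade simulation on two randomly generated graphs, one dense and one sparse. All sizes, degrees, and probabilities are invented for illustration, not drawn from any real platform.

```python
import random

# Toy cascade simulation: how far a post travels when each exposed
# contact reshares it with some fixed probability. Graph sizes,
# contact counts, and the reshare probability are all assumptions.

def simulate_spread(adjacency, seed, reshare_prob, rng):
    """Breadth-first 'reshare' cascade starting from a single poster."""
    reached = {seed}
    frontier = [seed]
    while frontier:
        next_frontier = []
        for node in frontier:
            for contact in adjacency[node]:
                if contact not in reached and rng.random() < reshare_prob:
                    reached.add(contact)
                    next_frontier.append(contact)
        frontier = next_frontier
    return len(reached)

n = 200
rng = random.Random(42)
others = lambda i: [j for j in range(n) if j != i]
dense = {i: rng.sample(others(i), 30) for i in range(n)}   # platform-style network
sparse = {i: rng.sample(others(i), 4) for i in range(n)}   # close-ties-only network

dense_reach = simulate_spread(dense, 0, 0.3, rng)
sparse_reach = simulate_spread(sparse, 0, 0.3, rng)
print("dense reach:", dense_reach, "sparse reach:", sparse_reach)
```

With the same reshare probability, the dense graph lets a single post reach most of the network while the sparse one keeps it local, which is the intuition behind making relationship types explicit and throttling propagation.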

The lack of checks and balances allows anyone to post anything on social networks. While this is positive in that it allows expression, creativity, and transparency, it is detrimental when posts contain misinformation, especially in the areas where false information can deeply impact society, such as politics and medicine. The spread of misinformation may be intentional, when a user infiltrates the network with the intent of promoting the content. It can also spread unintentionally, when a user posts or redistributes content without properly checking its validity, and usually with the belief that they are sharing valid and true information.

More robust identity systems and spam-detection techniques may help reduce the intentional spread of misinformation. Although there are investments in automatically or semi-automatically tagging content and identifying “fake news,” this is a complex problem and there is still a lot to be done in this area. I believe this is an area where computer technology alone will not solve the problem. Platforms should strive to identify and limit the spread of misinformation; however, this also requires regulation and cooperation between the government and tech companies, as former President Barack Obama has noted.

A good reference model is the peer review system for academic publications. Experts volunteer their time to review papers in detail, with the goal of preventing subpar research from being published in premier venues and improving the quality of the papers that are ultimately published. Along the same lines, we need cooperation among the government, political scientists, journalists, and technologists to develop the checks and balances required to limit the publication and spread of misinformation. We could, for instance, make posts from government officials go through a probation period until they are fact-checked by an independent, non-partisan organization. Of course, arbitrating which content gets published and propagated is extremely difficult, and it should be limited to cases where the information can be validated by well-established scientific methods and publicly available data.
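The probation-period idea above can be sketched as a small state machine. The states, class names, and the external fact-check step are my own assumptions for illustration, not a description of any real platform's workflow.

```python
from enum import Enum, auto

# Hypothetical sketch of a probation period for official posts: a post
# starts held back and only propagates after an independent fact-check.
# All names and states here are invented for illustration.

class Status(Enum):
    PENDING = auto()    # held back (probation) until fact-checked
    PUBLISHED = auto()  # check passed; free to propagate
    FLAGGED = auto()    # check failed; distribution blocked

class Post:
    def __init__(self, author: str, text: str):
        self.author = author
        self.text = text
        self.status = Status.PENDING  # every official post starts in probation

def resolve(post: Post, fact_check_passed: bool) -> Status:
    """An independent reviewer's verdict moves the post out of probation."""
    post.status = Status.PUBLISHED if fact_check_passed else Status.FLAGGED
    return post.status

post = Post("official_account", "Claim supported by public data")
print(resolve(post, fact_check_passed=True))
```

The key property is that there is no path from PENDING to wide distribution that bypasses the review step, mirroring how peer review gates publication in academic venues.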

Conclusion

I believe it is crucial for all of us to think about what we want society to look like in a post-COVID world, and how to make social platforms reflect that. We need to use the mechanisms and structures of these platforms to improve our lives while minimizing the negative impacts of the spread of misinformation, polarization, and mental health harms. However, these issues cannot be fixed by the tech industry alone; promoting collaboration across fields will be essential to mitigating them. The table below summarizes the main problems and provides some guidance on potential solutions. Finally, an important problem that was not addressed in this article but requires our full attention is excessive and increasing screen time.

[Table: summary of the main problems, with guidance on potential solution areas]
References

1. D. Easley and J. Kleinberg. Networks, Crowds, and Markets: Reasoning About a Highly Connected World. Cambridge University Press, 2010.
2. “FastStats.” www.cdc.gov, 2020-08-04. Retrieved 2020-10-28.
3. “Addiction Medicine – American Board of Preventive Medicine.” Retrieved 2020-10-28.
Jeroen van Bemmel

Unlocking Potential Through Technology, Innovation, and Creative Collaboration

3y

With LinkedIn, Microsoft seems to have found a social network business model that is not only based on engagement/attention. I believe it is possible to create better platforms that exhibit less of the negatives, and more of the positives. Personally, I would be willing to pay for such a service (in exchange for no ads/commercial bias by the platform itself). It is not possible nor desirable to make people use a platform in some "intended way" - people will use what they like and what helps them or makes them feel good, and they will find creative ways to achieve those goals. Social platforms solve some limitations in the physical world - they are complementary, not a subset.

Ashim Gupta

Principal Member of Technical Staff at AMD, Device Software Models

3y

I commend you on the attempt to initiate an analysis and agree with several points. However, it is important to understand the purpose and design of any system. The current digital social network was designed to maximize advertisement revenue and not to mimic the actual social experience of the physical world, hence the unrealistic overconnected structure. Its design lacks the social & behavioral science aspects; it is not a technological flaw. The consequence is magnification of emotions rather than wisdom of the society it represents, since only that could be processed by a ranking/relevance algorithm (I don't call it AI). An ironic illustration would be this very article: if the author has an established social/professional hierarchy, then the connections are pre-conditioned for the ad-hominem fallacy. They are inclined to approve the message even without reading, because it will raise their own social capital by being conformist to a trusted & respected source. Thus the bias is built in, not for what is said but for who said it. On the contrary, it is very risky to be confrontationist, even when the argument could be strong and insightful, given the larger structure of the network. And this is why the system discourages any checks/balances.

Tom Ball

Cloud Architect Lead at Starbucks

4y

I like that you and others are thinking about this. I also believe this is an important space, and more thinking around "just because we can, do we?" is good. There are also deep correlations to free speech and the definitions of information, as well as the fact that these conduits are in many cases private and public companies vs. a public information network.

Gregory Parker

Physicist; computation, simulation and cloud HPC/HTC consultant

4y

Why would we want the "graph structure of social networks more aligned with the one that represents our society"? Because a dense structure allows for more and faster information? Is not one of the benefits of a dense social network an easier spread of information? For example, the Arab Spring, Hong Kong protests, Iranian Green movement, .... Myanmar, Rwanda, ... would be counter-examples. Regardless, it is not clear why a dense network is not desirable. The AI/ML goals need to be altered from 'engagement' to something else. Of course, this is in conflict with the business model and the stock market.

Rui Mano

Executive Director, Business Development, Sales & Operations | Startups GTM Strategy & Execution | Venture Partner | B2B Startups Mentor | I help IT and automation Companies to develop business internationally

4y

Thanks for addressing this important subject. Recognizing the fact that we should work on a solution for avoiding "fake news", I would add some issues I believe should be points of consideration. One is that social networks are - and should continue to be - a free space for ideas to spread. Then, how would it be possible to differentiate what is "misinformation" and what is simply "a different opinion", using automatic, computerized tools? Just imagine a government board imposing its view and censoring whatever it thinks could collide with the "official" view. Considering this aspect, I would not agree with the idea of a "peer review system".
