Institutional Evolution and Technological Game in the Age of Artificial Intelligence - From Inclusiveness to Extraction: Path Choices
I. Introduction: Waves of Technology and Institutional Fractures
In 2024, the Nobel Prize in Economics was awarded to Daron Acemoglu, Simon Johnson, and James Robinson. Their research unveiled the profound connections between technology, institutions, and societal prosperity. Acemoglu and his colleagues emphasized that the design and implementation of institutions are critical factors determining whether technology can bring about widespread social welfare. Their theory highlights that if institutions are inclusive, they can harness technological innovations to drive collective development; conversely, if institutions are extractive, technological advancements will only exacerbate inequality and the concentration of power.
From these perspectives, I believe that the Nobel laureates’ research findings are highly relevant to the challenges faced in the current era of Artificial Intelligence (AI). The application and development of AI are fundamentally institutional issues rather than mere technical challenges. Technological advancements should ideally liberate labor and enhance the quality of life. However, in reality, AI often becomes a tool for a select elite to consolidate power and wealth, mirroring the extractive institutions described by Acemoglu et al.
AI is not merely a technological transformation but a structural revolution sweeping the globe. It is altering production methods, lifestyles, and even power structures themselves. Previous technological revolutions, such as the steam engine and the internet, gradually reshaped the interaction between the economic base and the superstructure. AI goes a step further by embedding algorithms, data, and automation directly into every corner of society, analyzing, manipulating, and reshaping human behavior.
From the perspective of institutional economics, AI is not an external force independent of institutions but is deeply integrated and interacts with existing economic, political, and cultural institutions. AI technology has the potential to either promote the construction of inclusive institutions or exacerbate the expansion of extractive institutions. Therefore, the importance of institutional design and social choice is particularly pronounced in this era.
This paper aims to explore how institutional design can ensure that advancements in AI technology lead to collective societal prosperity. Drawing on my own analytical framework, I will dissect the complex relationship between AI and institutions, propose how to build an institutional environment that adapts to technological change and ensures equitable development, and offer specific action recommendations. This is not only a response to economic theory but also a call to action for technologists, policymakers, and all sectors of society: only through effective institutional innovation can we truly unleash the potential of AI, making it a force that drives social progress.
II. The Impact of AI Technology on Institutional Inclusiveness and Extraction
2.1 How AI Integrates into Economic Institutions and Catalyzes Structural Transformation
Artificial Intelligence (AI) integration is not merely a technological innovation but a catalyst that profoundly impacts economic institutions and social structures. AI, through its powerful data processing capabilities and multi-parameter control systems, is redefining production methods, business models, and labor markets, thereby triggering deep-seated transformations within economic institutions.
2.1.1 Analysis of AI’s Impact within Institutional Frameworks
The diagram below illustrates the differing performances of AI technology under two primary institutional frameworks: inclusive institutions and extractive institutions. In an inclusive institution, AI promotes the expansion of innovation and the universalization of social welfare through open data platforms and technological sharing. For instance, governments and open-source communities collaborate to build platforms for open data and transparent algorithms, providing opportunities for small and medium-sized enterprises and individual developers to leverage AI technology for innovation, thereby stimulating economic vitality. This framework emphasizes equitable participation and the broad distribution of opportunities.
Conversely, under an extractive institution, AI is utilized to concentrate power and resources. Through algorithmic manipulation and information blockade, a few tech giants and monopolistic enterprises control data flows and market regulations. These companies exploit technological advantages to exclude competitors from the market and maintain their monopolistic positions through data accumulation and patent protection. In such a system, the technological dividends flow only to a minority of capital and power concentrators, excluding the vast majority from technological benefits and even facing risks of unemployment and income decline.
2.1.2 Visualization of AI’s Pathways under Two Institutional Regimes
The diagram provides a detailed depiction of AI’s divergent pathways under the two institutional regimes. In an inclusive institution, the technological pathway unfolds through the following nodes:
1. Open Data and Resource Sharing: Governments collaborate with technological communities to create open platforms, making innovation resources and data accessible to all developers and enterprises.
2. Transparent Regulation and Algorithm Governance: Regulatory bodies conduct regular audits and involve public participation to ensure the openness and fairness of algorithm and data usage, preventing technological abuse and privacy infringements.
3. Expansion of Social Welfare: AI technologies are applied to optimize public services, such as smart healthcare, intelligent infrastructure management, and educational support tools, thereby enhancing the overall quality of life.
In an extractive institution, the pathway is more closed and centralized:
1. Data Control and Market Monopolization: A handful of companies dominate data flow, monopolize technological resources, and use patents and legal barriers to prevent other competitors from entering the market.
2. Algorithm Manipulation and Information Opacity: These companies employ closed algorithm systems to precisely manipulate consumer behavior and market information, thereby expanding their control while reducing external oversight and intervention.
3. Aggravation of Social Inequality: Due to AI-driven automation in the labor market, low-skilled workers face replacement risks, while technological benefits are primarily concentrated among capital holders and technical elites.
2.1.3 Polarization of the Labor Market: Systemic Unemployment and the Emergence of New Labor Forms
AI’s penetration into the labor market is not a simple “job replacement” phenomenon but a profound “systemic restructuring.” Low-skilled workers face not only job losses due to technological unemployment but also a collapse in their bargaining power between capital and labor. In the traditional industrial era, workers could form collective actions or unions to establish some bargaining power. However, in the AI era, low-skilled positions are swiftly replaced by automated systems and robots, stripping workers of their collective bargaining chips.
Simultaneously, we witness the birth of new labor forms—unstable labor and the gig economy. These labor forms ostensibly offer flexibility and autonomy but, in essence, represent a further deepening of capital’s exploitation of labor. The platform economy relies on AI technologies for efficient scheduling and work distribution, optimizing labor efficiency while depriving workers of stability and security, forcing them to survive in low-wage and unprotected environments. This shift is not a manifestation of technological neutrality but a result of capital redefining labor rules through technological tools.
2.1.4 Institutional Amplification of Capital Concentration and Wealth Polarization
The proliferation of AI further intensifies the concentration of capital and wealth. The traditional processes of capital accumulation are significantly accelerated with the aid of AI. AI giant companies monopolize data and control algorithms, quickly dominating the market. These enterprises not only hold numerous technological patents but also monopolize user data and computational resources. This monopolization is not merely a reflection of technological superiority but signifies the establishment of a new power structure.
In the AI era, the combination of capital and technology forms a new ruling class—the “tech-capital complex”—which controls production, consumption, and even political choices through data and algorithms. This complex continuously absorbs and concentrates resources, expanding its influence by controlling global data flows. The concentration of power results in severe societal inequalities, exacerbating the phenomenon of wealth polarization. Institutional economics must focus not only on how technology affects wealth distribution but also on deeply understanding how technology alters the foundational structures of capital accumulation.
III. Institutional Impacts of AI Technology—Inclusiveness and Extraction
AI technology impacts institutions in two primary ways: on one hand, it can enhance production efficiency and expand social welfare; on the other, if institutional design is flawed, AI can become a tool for extractive institutions, further exacerbating social inequality.
3.1 Institutional Inclusiveness: Opportunities and Challenges
Inclusive institutions refer to those arrangements that ensure benefits are accessible to all societal strata and reduce inequality. The core of inclusive institutions lies in guaranteeing that social members have equal opportunities to participate in the production process, share technological dividends, and have a voice in policy-making.
AI technology promotes the development of inclusive institutions in the following ways:
1. Data Openness and Resource Sharing: By establishing open data platforms and shared resource pools, AI can foster innovation and entrepreneurial opportunities. For example, many governments promote innovation among small and medium-sized enterprises and startups by opening up government data, thereby creating a more vibrant economic ecosystem.
2. Intelligent Public Services: AI technologies can enhance the efficiency and fairness of public services, such as smart city systems, medical diagnostic tools, and educational support platforms. These technological applications help bridge the gaps between regions and populations, allowing more people to benefit from modern technology conveniences.
However, achieving inclusive institutions does not occur automatically. It requires systematic policy support and the construction of institutional frameworks. Governments need to formulate relevant regulatory policies to ensure transparency in technology applications and data security, preventing technological abuse. Additionally, the implementation of antitrust laws and data privacy protection laws is crucial. These measures ensure that big data and algorithms are used in a reasonable and transparent environment, creating conditions for maximizing social welfare.
3.2 Extractive Institutions: Technological Abuse and Power Concentration
In contrast to inclusive institutions, extractive institutions are controlled by a few elites who use them to consolidate their power and wealth. In the AI era, extractive institutions manifest in increasingly subtle and complex ways. AI technology can enhance the expansion of such institutions through the following means:
1. Algorithmic Discrimination and Information Control: Tech giants and governments can manipulate information flows using algorithms to create “filter bubbles,” limiting the public’s access to reliable information. For example, social media platforms use recommendation algorithms to display content that aligns with users’ past browsing history, thereby reinforcing users’ existing biases. This phenomenon not only weakens public cognitive abilities but also has widespread negative impacts on political life and market competition.
2. Data Monopolization and Market Concentration: By monopolizing data and algorithms, tech giants can establish de facto monopolistic positions, excluding small and medium-sized enterprises and new entrants, thereby consolidating their market control. This concentration not only reduces market competition but also centralizes technological dividends among a few capital owners, leading to further concentration of wealth and power.
User Privacy Leakage: The Complex Process of Data Power and Social Structure Reconstruction in the AI Era
In the context of AI’s continuous infiltration and the prevalence of big data, user privacy leakage is not merely an isolated technical or occasional management issue but a concentrated manifestation of power relations and social structure changes within the entire digital economy ecosystem. To deeply understand this process, we need to dissect the technical logic, economic motivations behind privacy leakage, and its profound impact on social institutions from multiple levels.
1. Privacy as a Core Resource of Data Capital
In the AI era, privacy is being redefined, gradually detaching from the realm of individual rights and becoming a core resource in the data economy. Big data platforms and tech companies treat privacy data as a core element of productivity, generating massive datasets through continuous tracking and recording of user behavior. These datasets are not only used to train AI models but are also exchanged as “capital” in various economic transactions. Privacy is no longer merely private information that individuals control; it is systematically absorbed into the tech giants’ framework of “digital capitalism.”
In this process, data platforms accumulate significant informational power by collecting and controlling user privacy data. This informational power allows them not only to dominate information flows in the market but also to predict and guide user behavior patterns. This control over information, to some extent, changes the operational mode of the modern economy, strengthens platform monopolies, and exacerbates data inequality and economic power concentration.
2. Technical and Institutional Collusion Behind Privacy Leakage
Privacy leakage is not merely a result of technical failures but a collusion between technology and institutions. The design logic of recommendation algorithms is not neutral but is deeply influenced by platform interests. Personalized recommendation algorithms analyze user preferences and behaviors to provide highly tailored content, extending users’ time on platforms. However, this design fundamentally relies on comprehensive data collection and deep analysis of user data, achieving “precise matching” by gradually eroding users’ privacy rights.
Furthermore, the process of privacy leakage is inseparably linked to the institutional frameworks of the global digital economy. Existing data protection regulations and institutions often become mere formalities of “notice and consent” in practice. When users authorize data usage, they face lengthy privacy terms and complex legal language, resulting in very limited actual choice and control. This institutional design not only fails to effectively protect user privacy but also serves as a tool for platforms to legitimize data collection. Here, privacy leakage is no longer a technical bias but a product of the combined action of institutions and technology.
3. Systemic Erosion of Privacy Data and Social Trust
As privacy data gradually becomes commercialized and a core element of power, the trust relationship between users and platforms begins to deteriorate. Privacy is no longer just an individual right but a power game between digital platforms and users. In this game, platforms possess more information and resources, while users remain in a position of informational asymmetry and vulnerability.
This erosion of trust extends beyond the loss of personal privacy to a broader societal identity crisis. When user privacy data is widely collected, analyzed, and used for algorithmic manipulation, individuals gradually lose control over their own behaviors and preferences. While users may appear to have diverse choices, their selections are confined within the range the algorithms define. Over time, this “invisible manipulation” blurs the boundary between the virtual and real worlds, causing users to lose self-agency and transforming privacy leakage into a comprehensive reconstruction of social trust and power structures.
4. Reproduction of Privacy Data at Cultural and Economic Levels
Privacy leakage is not only a loss of individual information; leaked data is increasingly becoming an important resource for cultural production and economic reproduction. Platforms analyze user behavior patterns, generate user profiles, and feed them into advertising and marketing systems. This reuse of data not only serves platforms’ economic interests but also, to some extent, shapes the content consumption patterns of social culture. Personalized recommendation mechanisms, through precise prediction and guidance of user behavior, create information echo chambers and cultural bubbles.
In this process, the leakage and reuse of privacy data gradually shape a “cultural market” dominated by a few tech giants, making the production, dissemination, and consumption of information and cultural products increasingly concentrated within a handful of platforms. Users gradually lose autonomy in cultural production processes, becoming passive participants in data capital. This collusion between culture and economy exacerbates the homogenization and extremization of social culture, further consolidating the institutional foundation of privacy leakage and reuse.
5. Loss of Privacy Control and Reproduction of Social Institutions
Privacy leakage not only reflects the collusion between technology and institutions but also reveals the complexity of power operations in modern society. The privacy issues in the AI era are not just about the loss of individual privacy rights but also symbolize the concentration of power and the reproduction of social institutions. Data has become a new form of capital and a core element of power. Through the manipulation of privacy, platforms and capital gain unprecedented control, while users gradually lose their voice in the process of privacy loss.
Against this backdrop, privacy leakage has become a crucial issue in institutional economics: how to understand and address privacy data as a means of reproducing power and capital determines the future fairness and inclusiveness of society. In the process of privacy loss, data is not merely a technical tool but a political and economic force. Only by deeply understanding this complex process can we truly recognize the essence of privacy leakage and its profound impact on social institutions.
The diagram “AI Monopoly Comprehensive Framework” illustrates the key links of AI in technological monopolization and market control. The lack or failure of institutional design allows these giants to continue expanding their power within legal and regulatory loopholes. To address this issue, the institutional framework proposed by Acemoglu and Johnson emphasizes the necessity of regulating big data monopolies and algorithm transparency.
IV. Institutional Collapse and the Underground AI Environment
Institutional collapse refers to the phenomenon where the existing institutional system fails or becomes ineffective in responding to emerging challenges under the impact of technological transformation. In the AI era, such collapse is particularly evident, manifesting in labor markets, social equity, and political structures.
4.1 Manifestations of Institutional Collapse
1. Polarization of the Labor Market and Technological Unemployment
AI’s automation technologies and algorithmic decision-making tools are massively replacing low-skilled labor positions. This technological unemployment not only brings economic hardship to workers but also weakens their bargaining power, placing them in a more disadvantaged position in the capital-labor negotiations. In the long run, this imbalance may lead to social instability and political tensions.
2. Expansion of the Underground AI Environment
In the context of institutional collapse, some enterprises and individuals develop and apply AI technologies outside regulatory oversight, forming underground AI markets. These underground environments include data black markets, unauthorized algorithm laboratories, and illegal deep learning models. The expansion of underground markets is not accidental but stems from the ineffectiveness or failure of formal institutions. In these underground environments, technological applications lack any form of ethical and legal constraints, further increasing the risks of AI abuse.
4.2 Structure and Composition of the Underground AI Market
The formation of the underground AI environment is not merely a market behavior but fundamentally a manifestation of institutional deficiency. In the context of globalization and digitalization, underground AI markets typically operate through the dark web and encrypted communications, making them difficult for law enforcement agencies to monitor. Meanwhile, the data and technological transactions within these markets are essentially redistributions of power and resources. Small and medium-sized enterprises and startups excluded from the legitimate market may obtain resources and technological support through underground markets, but this also means their actions reside in a legal and moral gray area.
The diagram “AI Development Induced Institutional Imbalances and Reconstruction” provides a detailed illustration of the formation logic of these underground markets. They are not only the result of the interplay between technological and market forces but also reflect the convergence of power asymmetry, institutional deficiencies, and the complexities of globalization.
V. Distinction of Task Types—Easy-to-Learn Tasks and Hard-to-Learn Tasks
In their research, Acemoglu and colleagues introduced a framework distinguishing between “easy-to-learn tasks” and “hard-to-learn tasks,” providing a new perspective on understanding AI’s role in productivity enhancement.
5.1 Productivity Enhancement and Limitations of Easy-to-Learn Tasks
Easy-to-learn tasks refer to those with high predictability and clear rules, such as logistics optimization, automated sorting, and data processing. AI has achieved remarkable success in these areas, but such success does not mean AI can be extended with equal ease to all domains. For example, automated medical imaging diagnosis, while superior to human doctors in certain scenarios, still relies on human experts for complex cases and diagnoses involving high uncertainty. This limitation indicates that AI’s productivity gains are domain-specific.
5.2 Challenges and Future Development of Hard-to-Learn Tasks
In contrast, hard-to-learn tasks involve complex cognitive and creative processes, such as strategic decision-making, social interactions, and innovative design. AI’s performance in these areas falls significantly short of human levels. This gap is difficult to bridge in the short term, and over the next decade, AI’s impact on Total Factor Productivity (TFP) will be limited by its inability to extend to these domains.
Combining the “AI Economics Integrated Framework” diagram, we can see that the distinction of tasks in different economic fields will directly impact future productivity growth. Acemoglu’s theory points out that productivity enhancements in easy-to-learn tasks cannot be directly extrapolated to hard-to-learn tasks. This means that while current technological progress has achieved rapid results in certain areas, its overall long-term impact on economic growth requires cautious estimation.
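This caution can be made concrete with a stylized, Hulten-style decomposition in the spirit of Acemoglu’s argument. The notation here is illustrative, not drawn from the original text:

```latex
% Aggregate TFP gains from AI as a task-share-weighted sum of cost savings.
% \mathcal{A}: the set of tasks AI can profitably perform (largely the easy-to-learn ones)
% s_i: task i's share of GDP;  \gamma_i: the proportional cost saving AI delivers on task i
\Delta \ln \mathrm{TFP} \;\approx\; \sum_{i \in \mathcal{A}} s_i \, \gamma_i
```

Because hard-to-learn tasks lie outside $\mathcal{A}$, the sum is bounded by the GDP share of easy-to-learn tasks: even large per-task savings translate into modest aggregate gains when that share is small.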
VI. Emergence of Negative New Tasks and Their Impact on the Economy and Society
AI technology’s rapid development not only brings about productivity enhancements but also generates a series of “negative new tasks.” These tasks may contribute to GDP growth but do not have a positive impact on overall social welfare and may even reduce societal well-being in certain cases.
6.1 Characteristics and Hazards of Negative New Tasks
Negative new tasks typically manifest as AI-driven economic activities that may bring initial benefits to certain enterprises or market entities but ultimately have detrimental effects on society as a whole. For example:
1. Data Manipulation and Algorithmic Control
Some companies use AI algorithms to optimize advertising placements and market strategies, thereby precisely influencing consumer behavior. While this technology increases company revenues, it also raises concerns about privacy and data usage. Additionally, the algorithms behind precise targeting often strip consumers of their choice, manipulating them unknowingly. This phenomenon is particularly prevalent on social media and e-commerce platforms.
2. Generation and Dissemination of Fake Information
AI-driven text generation and image synthesis technologies, such as deepfakes, are widely used to spread false information. Although these applications may provide promotional effects or economic benefits for specific interest groups in the short term, they ultimately undermine the foundation of social trust and severely damage the public opinion environment.
6.2 Weakening of Social Welfare by Negative New Tasks
These negative new tasks not only create inequalities through data and algorithm manipulation but also generate other forms of social harm. For instance, AI-generated fake information and market manipulation can interfere with public decision-making and election processes, directly endangering the functioning of democratic institutions. Furthermore, when these technologies are used in unregulated underground markets, they threaten fair market competition and can lead to the proliferation of criminal activities, such as information theft, extortion, and other malicious behaviors.
Acemoglu, in his theory, mentions that institutional design and social consensus should guide technology towards maximizing social welfare. The diagram “AI Development Induced Institutional Imbalances and Reconstruction” reveals how negative new tasks manifest under institutional failure, emphasizing the connection between institutional collapse and technological abuse.
VII. The Impact of AI on Income Inequality—Institutional Regulation
AI technology’s influence on income distribution and inequality is a major focus within the current economic discourse. In this chapter, we will explore AI’s role in this domain by integrating Acemoglu and Johnson’s theories and propose relevant policy recommendations.
7.1 Income Distribution Effects of AI Technology
AI technology development does not bring equal benefits to all social groups. Workers in labor-intensive industries and low-skilled positions face greater impacts than those in capital-intensive and technology-intensive sectors. Without appropriate policy adjustments, AI technology will further widen the income gap between capital and labor. Specifically:
1. Widening Gap Between Capital and Labor Income
AI’s automation effects reduce the importance of laborers in the production process, while capital owners reap higher returns through investments in AI and automation technologies. This effect directly leads to an increase in capital income relative to labor income, thereby widening the wealth gap between capital holders and ordinary workers.
2. Differentiation Between High-Skilled and Low-Skilled Workers
The development of AI technology also gradually widens the income gap between high-skilled and low-skilled workers. High-skilled workers, such as data scientists, engineers, and AI experts, receive higher incomes and status due to their scarce professional skills, whereas low-skilled workers face replacement risks from automation, threatening their bargaining power and job stability.
Combining the “AI Economics Integrated Framework” diagram, we can observe that while AI drives the growth of high-skilled positions, it also causes significant structural shocks to traditional labor markets. Under the concentration effects of capital and skills, AI technology further exacerbates income inequality.
7.2 Institutional Design to Mitigate Inequality
To regulate the unequal effects brought about by AI technology, institutional design needs to intervene in the following areas:
1. Implementation of Redistribution Policies
Governments can use taxation and welfare policies to promote income redistribution. For example, they can impose higher tax rates on high incomes and high capital returns and channel the revenue into education, retraining, and social welfare subsidies. Such policies not only help balance income distribution but also enhance overall social stability and inclusiveness.
2. Lifelong Learning and Skills Enhancement Programs
To enable low-skilled workers to adapt to changes brought about by new technologies, governments should establish lifelong learning accounts and career development funds to help workers improve their skills through training and education. Concurrently, in the process of AI technology development, efforts should be made to promote labor market diversification and flexibility, encouraging investment in human capital development.
These policy measures will help mitigate the impact of AI on the labor market and promote the fair distribution of technological dividends. Additionally, establishing international cooperation and global governance frameworks is also an important approach to addressing inequality issues. For instance, setting up global technology taxes and distribution funds can encourage multinational companies to fairly share the responsibilities brought about by technological development and promote more balanced resource allocation globally.
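The redistributive logic above can be illustrated with a minimal sketch: hypothetical incomes, a single invented tax bracket, and a Gini coefficient to measure inequality before and after. All figures and rates here are made up for the example, not policy proposals:

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return mad / (2 * mean)

def redistribute(incomes, rate=0.4, threshold=100_000):
    """Tax income above `threshold` at `rate`, then split the revenue
    as a uniform per-person transfer."""
    taxes = [max(0.0, y - threshold) * rate for y in incomes]
    transfer = sum(taxes) / len(incomes)
    return [y - t + transfer for y, t in zip(incomes, taxes)]

incomes = [20_000, 35_000, 60_000, 150_000, 500_000]  # hypothetical households
after = redistribute(incomes)
```

In this toy case the Gini coefficient falls from roughly 0.56 to roughly 0.38 while total income is unchanged: redistribution alters the distribution, not the aggregate.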
VIII. Public Right to Information and Technological Transparency—The Core of Institutional Regulation
The rapid development of AI technology brings about issues related to privacy, data security, and transparency. In a society highly reliant on data and algorithms, technological transparency and the public’s right to information become essential safeguards for maintaining social fairness.
8.1 Institutional Guarantees for the Public’s Right to Information
The public has the right to understand the operational mechanisms of technology and its potential societal impacts. Governments should ensure the realization of the public’s right to information through the following mechanisms:
1. Establishment of Algorithm Transparency Regulatory Agencies
Governments can set up independent algorithm regulatory bodies to review and test key algorithms within AI systems, ensuring that algorithms do not discriminate against or unfairly impact specific groups. Additionally, all algorithms involved in public services and significant social interests must disclose their core designs and data sources to enable public oversight.
2. Public Participation and Social Supervision
Through public hearings, social consultations, and technology forums, the public’s participation in the formulation of technology policies can be increased. This transparency mechanism not only enhances societal trust in technological development but also allows for public supervision to correct technological misuse and control issues.
8.2 Institutional Transparency and Civil Society
In a healthy civil society, institutional transparency is crucial. Citizens need the right to understand the process of technology policy formulation and its potential impacts. Combining theories from “Power and Progress,” it is evident that information asymmetry during technological changes is a primary factor leading to technological abuse and social inequality. Therefore, promoting the transparency of technology policies is not only an issue of technological regulation but also essential for building and maintaining civil society and institutional integrity.
To achieve this goal, legislative and policy measures can be implemented to ensure that all policies involving AI and data usage are publicly transparent. For example, governments can establish a “Technology White Paper” publication mechanism to regularly inform society about technology usage, policy updates, and public feedback, and have independent third-party supervisory agencies evaluate them.
IX. The Invisible Dominance of Intelligent Technology: Social and Cultural Effects of AI in Information Bubbles and Virtual Identities
9.1 Social and Cultural Institutional Changes and the Invisible Control of Technology
AI technology’s influence extends beyond the economic domain, permeating deeply into social and cultural layers. Recommendation algorithms are not merely tools for driving information flows but have become powerful means of invisibly controlling social culture and cognitive behaviors. In this process, AI technology gradually reshapes the boundaries between individuals and society, and between the virtual and the real. The following sections systematically explore how AI manipulates social culture through algorithms and virtual roles, and the underlying issues of institutional lag.
9.1.1 Information Bubbles and Cognitive Manipulation
The design objective of personalized recommendation algorithms is to maximize user engagement, but at their core they perform selective information filtering. This mechanism confines users to information that aligns with their existing preferences and viewpoints while crowding out more diverse voices. Such “information bubbles” not only alter how users perceive the world but also carry profound societal consequences.
1. Intensification of Echo Chamber Effects
On social media, short-video platforms, and search engines, recommendation algorithms push content that matches users’ behavioral data, gradually forming a closed information environment. Within it, users encounter only information consistent with their existing viewpoints and lose opportunities to engage with other stances and diverse perspectives. This drives extremization and cognitive fragmentation within social groups, hinders understanding and communication between them, and fosters societal cognitive disintegration and polarization.
2. Cognitive Manipulation and Brain Structure Changes
Diagrams and neuroscience research further reveal the deep impact of personalized information environments on users’ brains. Users immersed in algorithmically curated environments show frequent activation of emotional-resonance and reward circuits, particularly in regions such as the prefrontal cortex and nucleus accumbens. When users encounter information that does not align with their preferences, however, activation in these regions drops significantly. This informational bias produces “cognitive rigidity”: users become less willing to accept viewpoints different from their own and gradually lose cognitive flexibility and openness.
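The feedback loop described above, in which an engagement-driven recommender narrows exposure while the user's views drift toward what is shown, can be sketched as a toy simulation. All parameters here (the stance scale, `learn_rate`, `bias_strength`) are illustrative assumptions, not empirical values.

```python
# Toy model of an information bubble: items carry a "stance" in [-1, 1],
# the recommender weights items by similarity to the user's current leaning,
# and the user's leaning drifts toward whatever is shown.
import random

def simulate_feed(steps=500, learn_rate=0.05, bias_strength=5.0, seed=0):
    """Return the mean distance between the user's leaning and the items
    shown. bias_strength=0 means a neutral (uniform) feed; larger values
    mean stronger engagement-driven filtering."""
    rng = random.Random(seed)
    leaning = 0.1  # small initial preference
    distances = []
    for _ in range(steps):
        # Candidate items span the full spectrum of stances.
        items = [rng.uniform(-1, 1) for _ in range(10)]
        # Similarity-weighted selection, sharpened by bias_strength.
        weights = [(1 - abs(s - leaning) / 2) ** bias_strength for s in items]
        shown = rng.choices(items, weights=weights, k=1)[0]
        distances.append(abs(shown - leaning))
        # The user's view drifts toward consumed content.
        leaning += learn_rate * (shown - leaning)
    return sum(distances) / len(distances)

print("neutral feed:", simulate_feed(bias_strength=0.0))
print("filtered feed:", simulate_feed(bias_strength=5.0))
```

In this toy model the filtered feed exposes the user to a markedly narrower band of stances than the neutral feed, which is precisely the “closed information environment” the text describes; the model makes no claim about the neural effects discussed above.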
9.1.2 Brainworm Effect of Personalized Recommendation Algorithms
Personalized recommendation algorithms not only influence how users acquire information but also stimulate their immediate reward systems, resulting in the “Brainworm Effect.” This effect deeply reshapes users’ cognitive mechanisms and brain structures, leading to a series of cognitive and behavioral issues.
1. Strengthening of Immediate Reward Systems
Short video platforms and personalized recommendation content activate users’ immediate reward systems through high-frequency, short-duration stimuli, especially in the nucleus accumbens and ventral tegmental area. This activation reinforces users’ dependency on short-term rewards, making it increasingly difficult for them to focus on long-duration and deeply cognitive tasks in real life. Over time, these brain structural changes lead users to lose patience for complex tasks, relying solely on transient pleasurable stimuli to maintain attention.
2. Long-term Impacts and Cognitive Addiction
Studies show that users exposed to personalized recommendation algorithms over the long term experience a gradual decrease in the activation of brain regions associated with long-term memory and decision-making, such as the hippocampus, while regions related to immediate responses, like the basal ganglia, become reinforced. These neural structural changes not only cause users to exhibit symptoms akin to “cognitive addiction” in real life but also make it difficult for them to adapt to tasks requiring deep thought and sustained focus, thereby affecting their learning and work performance.
9.2 Social and Cultural Disconnects Due to Virtual Identities and Emotional Dependence
The proliferation and development of AI technology have fueled the rise of virtual idols and intelligent NPCs (Non-Player Characters), offering young users new emotional outlets and objects of identification. In this process, real-life identities are gradually marginalized, and virtual identities become primary sources of emotional dependence. Diagrams illustrate how virtual characters, through emotional simulation and interactive mechanisms, gradually come to occupy young people’s emotional worlds.
1. Emotional Manipulation by Virtual Characters
AI-generated virtual characters are not merely companions; through highly personalized emotional responses and interaction patterns, they exert emotional control over users. Young people increasingly rely on virtual emotional experiences in their interactions with these characters, treating them as primary sources of emotional support. This dependence gradually weakens real-life social relationships and emotional bonds, leading young people to view virtual identities as their principal affiliations.
2. Cultural Disconnect
As virtual identities gradually displace real-life identities, social and cultural structures face a crisis of fragmentation. Young people’s dependence on the virtual world may disconnect them from real society, making it difficult for them to navigate the complex emotions and identity roles of the real world. In the long run, this split in identification may trigger broader social and cultural conflicts and identity crises, undermining social stability and cultural transmission.
X. Reconstruction of Technology and Institutions—Paths of Open Source and Evolution
10.1 The Struggle between Open Source Communities and Monopolistic Enterprises
In the era of AI multi-parameter control systems and big data-driven environments, monopolistic enterprises utilize their vast capital and data advantages to swiftly capture the market and establish de facto technological monopolies. This monopolization not only hinders innovation but also leads to the concentration of social resources and welfare. To break this situation, open-source communities are gradually becoming important forces in driving technological and institutional reconstruction.
10.1.1 Advantages of Open Source Communities
Open-source communities, as an open and collaborative technological development model, possess several notable advantages:
1. Decentralization and Democratization
Open-source projects are typically developed by global developers collectively. Project decisions and technological directions are determined by community consensus rather than a few capital owners. This decentralized model ensures that technological development is not controlled by monopolistic forces.
2. Open Innovation and Resource Sharing
Open-source projects make their code repositories and technical documentation accessible to all developers and enterprises. This resource-sharing model not only accelerates technological iteration and innovation but also provides opportunities for small and medium-sized enterprises and startups to enter the market. The existence of open-source communities effectively lowers the barriers to innovation, allowing more developers and organizations to participate in technological competition.
3. Driving Force of Open Source Culture
The collaborative culture within open-source communities promotes technological transparency and openness. Developers not only share code within projects but also collaboratively establish open standards, fostering sustainable technological development. This cultural atmosphere encourages global cooperation, weakening monopolistic enterprises’ control over technological resources.
10.1.2 The Conflict and Integration between Monopolistic Enterprises and Open Source Communities
Although open-source communities to some extent resist the expansion of monopolistic enterprises, monopolistic companies are also gradually incorporating open source into their strategies. Many large tech companies, such as Google, Microsoft, Amazon, and OpenAI, have begun actively participating in open-source projects and even open-sourcing their core technologies. This behavior appears on the surface to symbolize open cooperation but is, in reality, a strategy for monopolistic enterprises to bring open-source communities within their sphere of influence.
However, this conflict and integration also drive the reconstruction of technology and institutions. By participating in open-source projects, monopolistic enterprises are compelled to adhere to community transparency and collaboration rules, thereby reducing the possibility of their complete control. Simultaneously, open-source communities leverage the resources and technological capabilities of these large companies, achieving faster progress in technological innovation and application.
10.2 Iterative Evolution of Open Source Institutions and Technological Development
To promote sustainable technological and institutional development, effective evolution of open-source communities and institutions is key. Open-source institutions are not only about technological openness but also about reshaping technological regulation and social participation.
10.2.1 The Integration of Technological Open Source and Institutional Openness
1. Technological Open Source
Through open-source technologies and standardized open platforms, enterprises and organizations of all sizes can more easily use and build on AI technologies, lowering the entry barriers created by technological monopolies. Technological open source is, in essence, a process of de-concentrating power: it ensures that control over technology is no longer held by a handful of enterprises.
2. Institutional Openness
Through legislative and policy guidance, governments and international organizations should support and encourage open-source projects, ensuring their fairness and transparency. For example, special funds and technological support programs can be established to encourage small and medium-sized enterprises and individual developers to participate in the development of open-source technologies, and policy incentives can be provided for the use of open-source technologies. This institutional openness ensures the independence and innovativeness of open-source communities, creating conditions for the integration of technology and social welfare.
10.2.2 Iterative Evolution of Open Source Institutions
Technological and institutional evolution is a dynamic process that must adapt to changes in technology and market environments. Effective institutions should possess the following characteristics:
1. Flexibility
Institutions must be able to quickly respond to changes in technology and markets. For example, for emerging open-source AI projects, governments can promptly introduce corresponding legal support and financial subsidies to ensure these projects can rapidly grow and compete against large companies’ monopolistic behaviors.
2. Participation
Effective institutions should ensure that the public and developers have opportunities to participate in the formulation of technology policies. For example, establishing technology policy hearings allows open-source communities and public representatives to express their opinions during policy formulation, ensuring that policies better serve public interests.
3. Iterative Nature
Technological development and institutional construction are ongoing iterative processes. Governments and international organizations should regularly assess the progress of open-source projects and adjust institutional frameworks based on societal feedback. For instance, setting up an “Open Source Technology Oversight Committee” can periodically review projects and adjust technology policies according to community suggestions.
XI. The Future of Institutions under Technological Evolution
11.1 Open Source Communities Promoting Technological Fairness
The integration of open-source communities and institutions goes beyond technological openness; it involves establishing new market mechanisms and social ecosystems. Open-source technologies break capital monopolies, enhance market diversity and competitiveness, and promote the democratization of technological standards.
For example, global open-source AI communities such as TensorFlow and PyTorch have become industry standards. These open-source platforms not only increase development efficiency but also allow numerous small and medium-sized enterprises and research institutions to develop innovative products based on these platforms, thereby creating a healthy competitive environment. This model weakens monopolistic enterprises’ control over technological standards, allowing more societal members to benefit from technological evolution.
11.2 Global Collaboration of Open Source Institutions
In the context of globalization, countries should collaborate to promote the evolution of open-source institutions. For example, in data and technology sharing, countries can establish an “International Open Source Technology Alliance” to share open-source technology standards and data resources, reducing monopolistic enterprises’ control over global markets.
This global collaboration not only ensures the fair application of technology but also forms a more inclusive and open technological governance framework within the international community. For instance, the United Nations and the European Union have already been promoting a “Global Open Source Technology Agreement,” aiming to regulate the global application of AI and data technologies and enhance technological cooperation among countries through open-source mechanisms. These international collaborative efforts help weaken technological monopolies and ensure that all countries can benefit from technological advancements.
XII. Co-evolution of Institutions and Technology—Maximizing Social Welfare in the AI Era
In the AI era, technology does not automatically bring inclusiveness and fairness; institutions are the key to unlocking its potential. Only when institutions and technology co-evolve and support each other can AI truly become a force that maximizes social welfare rather than a tool for the few to extract benefits. Building inclusive institutions has become a fundamental challenge of our time.
12.1 Interactive Paths between Technology and Institutions
1. Technology Driving Institutional Change
AI and big data technologies provide governments and societies with more precise governance tools. These technologies can be used to monitor the implementation of policies, collect social feedback, and swiftly adjust institutions. This reverse-driving effect of technology allows governments to respond more flexibly to economic and social changes.
2. Institutions Guiding Sustainable Technological Development
Through legislation and social supervision, institutions can ensure that technology develops within the framework of social ethics and laws. For example, by mandating the transparency of algorithms and data, institutions can effectively prevent technological abuse and prioritize the application of technology in public services and social welfare.
12.2 Long-term Evolution Prospects of Institutions and Open Source
As open-source technologies mature, institutional evolution must support and adapt to this trend over the long term. Future institutions should continue to promote the application of open-source technologies and establish more comprehensive cooperation frameworks globally. This not only helps distribute technological dividends broadly but also serves as an important institutional tool for combating technological monopolies and protecting social welfare.
Through the iterative evolution of open-source communities and institutions, we find a new balance in the reconstruction of technology and institutions. Institutional economics in the AI era is no longer a one-way control of power and technology but a multi-party game and co-evolutionary process. It is within this process that we can truly achieve the maximization of technological progress and social welfare.
In this process, AI technology is not neutral. Algorithms are inherently mechanisms of power, controlling the public’s vision and understanding through the selection and sorting of information. The existence of AI amplifies the power structures that have control desires over society, allowing them to precisely manipulate public attention. This systemic information manipulation not only destroys social inclusiveness but also exacerbates political polarization and social fragmentation.
Conclusion: The Prometheus Paradigm—Institutional Economics in the AI Era
As we stand on the precipice of an AI-driven revolution, we cannot help but recall Prometheus, who gifted humanity with fire. Like the ancient Titan, we unleash a force capable of reshaping the world. However, as Aeschylus taught, such gifts come with profound responsibilities and unforeseen consequences.
The analysis in this paper reveals that Artificial Intelligence is not merely a technological leap but also a mirror reflecting our institutional foundations and societal values. It challenges us to reimagine the essence of economic and social structures, much like the Industrial Revolution did. However, the speed and scope of AI-driven changes far exceed those of previous technological paradigm shifts.
Our exploration of easy-to-learn and hard-to-learn tasks, along with our vigilance against negative new tasks, emphasizes a crucial insight: AI’s impact will neither be uniform nor universally beneficial. It will create rapid transformations in some areas while inducing gradual evolutions in others, necessitating meticulous and adaptable institutional responses.
Acemoglu and Robinson’s research on inclusive institutions gains new urgency in the AI era. We must ask ourselves: How can we design institutions that ensure AI benefits are widely shared while mitigating its potential to exacerbate inequality? This question pertains not only to economics but also involves profound ethical and political dimensions.
However, in addressing these challenges, we must resist the allure of technological determinism. Despite its power, AI remains a tool shaped by human choices and values. Our institutional designs will ultimately determine whether AI becomes a driver of inclusive prosperity or a catalyst for deepening social stratification.
In this context, the role of institutional economics transcends academic discourse. It becomes an essential framework for exploring the complex interactions between technological innovation and social welfare. We are not only called to analyze but also to actively shape the institutions that will govern our AI-infused future.
Unfortunately, we must acknowledge the limitations of current paradigms. Faced with the transformative power of AI, traditional economic models and policy recommendations may prove inadequate. As the saying goes, sometimes magic is needed to defeat magic. In our case, we may need to leverage AI itself to design and test new institutional frameworks capable of governing its own development and deployment.
This approach—using AI to shape the governance of AI institutions—offers immense opportunities but also harbors profound risks. It requires us to infuse cherished values and ethical considerations into AI systems while remaining vigilant against unintended consequences.
As we embark on this journey, let us draw inspiration from another Nobel laureate, Elinor Ostrom, whose research on governing common resources provides valuable insights for managing our collective AI resources and risks. We must cultivate multi-center governance structures capable of adapting to the complex and multi-faceted challenges brought by AI.
Final Words
The emergence of AI presents us with a Prometheus-like task: harnessing its immense power for the benefit of all humanity. This requires not only technological innovation but also the highest levels of institutional creativity. We must forge new economic paradigms, reimagine our social contracts, and nurture a global spirit of responsible innovation.
The road ahead is fraught with uncertainty but also brimming with limitless possibilities. By embracing the challenges of institutional restructuring in the AI era, we have the opportunity to create an unprecedentedly prosperous and fair world. However, we must remember that, like Prometheus, our gift may come with unforeseen shackles.
Let us proceed on this path guided by wisdom, resilient humility, and an unwavering commitment to human prosperity. For in the end, the true measure of our success is not the complexity of AI systems but the depth of our compassion and the breadth of our vision for a shared human future.
In this grand endeavor, institutional economics must evolve from a descriptive science into a normative art, blending rigorous analysis with bold imagination. It is through this alchemy of thought and action that we may transmute the base materials of our current institutions into a truly inclusive, AI-enhanced civilization.
The fire of AI has been ignited. How we guard this flame will define not only our economy but also the essence of our humanity. Let us meet this challenge with courage, creativity, and a steadfast commitment to the public good.
References
Acemoglu, D., & Johnson, S. (2023). Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. PublicAffairs.
Ostrom, E. (1990). Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press.