
Infrastructure First—The Foundation for Governing AI Instruments and Ideologies.

Written by Murat Vurucu, October 3rd, 2024

Abstract:

This essay explores the conceptualization of AI as infrastructure, contrasting it with AI's roles as an instrument and as an ideology in shaping technological and societal landscapes. It argues that when viewed primarily as infrastructure, AI offers a more straightforward path for regulating its development and mitigating systemic risks such as monopolization, resource scarcity, and algorithmic bias. The essay proposes using reverse salients and lagging listeners to track bottlenecks and promote transparency, ensuring informed decision-making and equitable regulation. However, it also acknowledges the limitations of focusing solely on infrastructure, warning that, without recognizing the broader political and social ideologies AI reinforces, we risk entrenching existing power imbalances.


Keywords:

AI governance, AI infrastructure, algorithmic bias, political ideology, regime complex in AI, technological determinism in AI, socio-technical systems


Hypothesis:

Western states of the Global North largely operate on a set of foundational principles that, for the purposes of this argument, I take as axiomatic: a free market economy, liberal democracy, and the drive of corporations and nations to compete and win.

This condition makes it nearly impossible to regulate AI usage as an instrument or the unfolding of AI as an ideology without risking regulation that stifles innovation or reinforces existing power imbalances, potentially harming us more than helping.


My argument:

Treating AI as an infrastructure first provides a macro-level view of its impact on economic, social, and regulatory systems. It ensures we have the foundational understanding necessary to monitor, regulate, and balance AI's trajectory across industries and power structures. Instrumentation and ideology, while important, deal with applications and narratives that are more fluid and influenced by the underlying infrastructure.

By treating AI as infrastructure, policymakers can develop frameworks that address systemic risks—such as monopolization, resource scarcity, and algorithmic biases—through a holistic lens rather than in isolated applications.

The ideology of AI emerges from how the infrastructure is built and controlled. Without addressing foundational issues, like resource allocation and control, we risk allowing AI to evolve in ways that reinforce existing power dynamics and foster ideologies of automation and technocracy.


How I believe we should approach the conceptualization of AI:

First and foremost, we should treat AI as infrastructure before engaging with the noise of instrumentation and ideologies. This approach lets us clearly understand the system, its players, and power structures.

1. By establishing "lagging listeners" to track reverse salients, we can monitor the current state and future trajectory without infringing on privacy;

2. we can also detect uncompetitive behavior and power imbalances and maintain a balanced market without overregulation;

3. and we can create centralized security protocols to manage critical risks such as resource shortages, shutdowns, monopolization, and technocracies.

We should address AI as an instrument or ideology only after grasping the foundational system. Understanding the infrastructure first allows us to steer the future of AI development transparently, equitably, and in support of individual freedoms while preventing power consolidation in the hands of a few.


Reverse salients and lagging listeners

Reverse salients are technical bottlenecks that hold back a system's progress, while lagging listeners are metrics designed to detect these slowdowns early without infringing on privacy. This is a crucial step in understanding AI as infrastructure, as it provides a clear view of the system's current state and future trajectory.
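To make this concrete, here is a minimal sketch of the general pattern, assuming a lagging listener is simply a comparison between the recent window of an aggregate metric and its longer-run baseline. The window size, tolerance, and example values are illustrative assumptions, not part of the cited framework; the point is that the listener consumes only system-level aggregates, never individual-level data.

```python
from statistics import mean

def lagging_listener(history: list[float], window: int = 10,
                     tolerance: float = 0.25) -> bool:
    """Flag a potential reverse salient when the recent average of an
    aggregate metric drifts more than `tolerance` above its baseline.
    Only anonymized, system-level values are consumed, consistent with
    the privacy constraint above. Window and tolerance are illustrative.
    """
    if len(history) < 2 * window:
        return False  # too little data to compare baseline and recent window
    baseline = mean(history[:-window])  # long-run behavior of the metric
    recent = mean(history[-window:])    # the most recent window
    return recent > baseline * (1 + tolerance)

# A slow upward creep in some aggregate delay metric trips the listener.
print(lagging_listener([1.0] * 20 + [1.5] * 10))  # True -> investigate
```

The specific listeners below are variations on this pattern, each watching a different aggregate signal.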


Scarcity of Resources:

● Reverse Salient: Resource shortages (e.g., data, AI talent, compute) can slow AI progress. Compute, in particular, is a bottleneck, as it is often not perfectly aligned with algorithmic needs. AI is data-hungry, but the lack of appropriate compute infrastructure is a limiting factor. (Vannuccini & Prytkova, 2023)

● Suggested Resolution: Align computation resources with algorithmic demands, focusing on overcoming compute-related issues (such as those between cloud and edge compute infrastructures) to prevent this bottleneck from constraining AI evolution. (Vannuccini & Prytkova, 2023)

● Lagging Listener: Compute Queue Delays – Track average job completion times in cloud compute infrastructures. If times increase significantly, compute resources may be insufficient to meet demand, signaling a potential reverse salient (sketched below).
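As a sketch of this listener, assume completed jobs can be grouped into weekly batches of completion times; the 15% growth threshold and the two-consecutive-weeks rule below are illustrative choices, not prescribed values.

```python
from statistics import median

def queue_delay_signal(weekly_times: list[list[float]],
                       growth_threshold: float = 0.15) -> bool:
    """Flag a compute bottleneck when the median job completion time grows
    by more than 15% week over week for two consecutive weeks, filtering
    out one-off spikes. Grouping and thresholds are illustrative.
    """
    medians = [median(week) for week in weekly_times]
    streak = 0
    for prev, cur in zip(medians, medians[1:]):
        if cur > prev * (1 + growth_threshold):
            streak += 1
            if streak >= 2:
                return True  # sustained growth: demand is outpacing compute
        else:
            streak = 0
    return False

# Hypothetical completion times (hours) for one cloud region, per week.
weeks = [[2.0, 2.2, 1.9], [2.4, 2.5, 2.6], [3.0, 3.1, 2.9]]
print(queue_delay_signal(weeks))  # True -> a potential reverse salient
```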


Compatibility and Interoperability Issues:

● Reverse Salient: Lack of well-regulated data architecture and poor interoperability across different domains (e.g., algorithms, hardware, and datasets) slow progress. Data can be siloed (in proprietary or market-driven environments), which creates barriers to broader AI deployment. (Vannuccini & Prytkova, 2023)

● Suggested Resolution: Set clear data access and use rules, promote open-source initiatives to improve interoperability, and encourage standardized libraries and frameworks. These strategies foster compatibility, enabling more seamless collaboration across AI domains. (Vannuccini & Prytkova, 2023)

● Lagging Listener: API Failure/Error Rates – Monitor the rate of failed or errored API calls across systems that rely on different datasets or hardware infrastructures. A rise in failure rates could indicate compatibility or interoperability challenges, hinting at a growing reverse salient (sketched below).
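A minimal sketch of this listener, assuming cross-system calls report HTTP-style status codes; the 5% alert rate and the log format are hypothetical.

```python
def api_error_signal(statuses: list[int], alert_rate: float = 0.05) -> bool:
    """Flag interoperability trouble when the share of failed cross-system
    API calls exceeds the alert rate. Only aggregate success/failure counts
    are inspected -- no payloads, hence no personal data. Illustrative only.
    """
    if not statuses:
        return False
    failures = sum(1 for s in statuses if s >= 400)
    return failures / len(statuses) > alert_rate

# Hypothetical status codes from bridges between systems that rely on
# different datasets or hardware back ends.
log = [200] * 90 + [502] * 10
print(api_error_signal(log))  # True -> 10% failures, above the 5% alert rate
```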


Focus on Scaling Algorithms Over Data:

● Reverse Salient: The obsession with scaling algorithms (making them bigger) rather than focusing on the data needed to train them has created inefficiencies. For example, a smaller model trained on more data (Chinchilla) outperformed much larger models such as GPT-3. (Vannuccini & Prytkova, 2023)

● Suggested Resolution: Balance algorithm scaling with proper data training. Companies and researchers should focus not only on increasing model size but also on optimizing the amount and quality of data used in training. (Vannuccini & Prytkova, 2023)

● Lagging Listener: Model Performance Decay – Track model performance improvements (accuracy or efficiency) over successive generations. A plateau or decline despite increasing model size could signal that the scaling approach is yielding diminishing returns, requiring a refocus on data quality (sketched below).
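One way this listener could be formalized, assuming each model generation can be summarized as a (parameter count, accuracy) pair; the minimum-gain-per-doubling threshold is an illustrative assumption.

```python
from math import log2

def scaling_plateau_signal(generations: list[tuple[float, float]],
                           min_gain_per_double: float = 0.01) -> bool:
    """Flag diminishing returns when doubling the parameter count no longer
    buys at least `min_gain_per_double` accuracy between generations.
    Each generation is a (parameter_count, accuracy) pair; the values are
    illustrative, not benchmarks from the cited paper.
    """
    for (p0, a0), (p1, a1) in zip(generations, generations[1:]):
        doublings = log2(p1 / p0)  # how many times the model size doubled
        if doublings > 0 and (a1 - a0) / doublings < min_gain_per_double:
            return True  # scaling without matching data is plateauing
    return False

# Hypothetical generations: parameters grow 8x, accuracy barely moves.
gens = [(1e9, 0.70), (8e9, 0.71)]
print(scaling_plateau_signal(gens))  # True -> refocus on data, not size
```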


Cross-Domain Reverse Salients (Hardware and Software):

● Reverse Salient: AI algorithms often become obsolete quickly, and the hardware designed for them has high sunk costs, creating a disconnect between hardware and software domains. (Vannuccini & Prytkova, 2023)

● Suggested Resolution: Cross-domain collaboration is critical, particularly between algorithm and hardware development, which should evolve together rather than as separate, siloed efforts. (Vannuccini & Prytkova, 2023)

● Lagging Listener: Hardware Utilization Efficiency – Track the percentage of new hardware features (e.g., specific AI acceleration features) utilized by algorithms. A drop in utilization rate could indicate that algorithms are not adapting to the latest hardware, pointing to a reverse salient between hardware and software development (sketched below).
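A minimal sketch, assuming a chip's AI-acceleration features can be enumerated and matched against what deployed algorithms actually exercise; the feature names are hypothetical.

```python
def hardware_utilization(available: set[str], used: set[str]) -> float:
    """Share of the accelerator's features actually exercised by deployed
    algorithms. A falling share across hardware generations suggests the
    software is not keeping up with the hardware. Names are hypothetical.
    """
    return len(available & used) / len(available) if available else 0.0

# Two hypothetical hardware generations running the same deployed software.
gen1 = hardware_utilization({"fp16", "int8", "sparsity"}, {"fp16", "int8"})
gen2 = hardware_utilization({"fp16", "int8", "sparsity", "fp8", "fused_attn"},
                            {"fp16", "int8"})
print(round(gen1, 2), round(gen2, 2))  # 0.67 -> 0.4: a widening gap
```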


Monopolization and Resource Control:

● Reverse Salient: Some actors in the AI ecosystem monopolize resources like data, talent, and compute power. This concentration leads to oligopolistic behaviors and may slow broader AI progress. (Vannuccini & Prytkova, 2023)

● Suggested Resolution: Ensure fair competition and equitable access to resources. This can be achieved through policy interventions that curb monopolization, promote open AI ecosystems, and lower the barriers to entry in resource-intensive domains like compute and data. (Vannuccini & Prytkova, 2023)

● Lagging Listener: Market Concentration Indices – Track the concentration of key resources (e.g., cloud compute market share and AI talent hiring by large companies). A rising concentration index suggests growing monopolization, indicating a reverse salient as resources become inaccessible to smaller players (sketched below).
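This listener has a standard formalization in antitrust practice: the Herfindahl-Hirschman Index (HHI), the sum of squared market shares, where values above roughly 2,500 are commonly read as highly concentrated. The shares below are hypothetical.

```python
def herfindahl_hirschman_index(shares_pct: list[float]) -> float:
    """HHI: the sum of squared market shares, expressed in percent.
    Rising values over time signal creeping monopolization of a resource
    such as cloud compute or AI talent. Input shares are hypothetical.
    """
    return sum(s ** 2 for s in shares_pct)

# Hypothetical cloud-compute market shares, summing to 100%.
print(herfindahl_hirschman_index([34.0, 22.0, 20.0, 12.0, 12.0]))  # 2328.0
```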


Addressing these reverse salients allows the AI system to grow in a more balanced way, promoting long-term sustainability and avoiding bottlenecks that could hamper innovation. It also gives us insight into the power structures that control these systems, which is crucial for understanding AI's societal implications, such as monopolization and bias.


The AI system players and their power dynamics


1. Algorithms (Software)

2. Compute (Hardware)

3. Data (Datasets, Collection, Processing)

4. AI System Builders (Tech Giants, Startups, Research Institutions)

5. Regulatory Bodies (Governments, International Organizations)

6. Users and AI Adopters (Businesses, End Users, Platforms)

(Vannuccini & Prytkova, 2023)


Influence and Dependencies:


● Algorithms ↔ Compute (Hardware):

○ Influence: Algorithms require specific hardware configurations (GPUs, TPUs) for optimization. Hardware capabilities directly shape which algorithms can be run, and algorithmic innovations push the development of hardware that supports them.

○ Power Dependency: Reverse salient – If hardware cannot keep up with algorithmic demands (e.g., compute or energy efficiency), this creates a lag, slowing the progress of AI.

● Algorithms ↔ Data:

○ Influence: Algorithms need vast data for training, and better algorithms can process increasingly complex data types (e.g., multimodal datasets). At the same time, the quality of data defines the effectiveness of the algorithms.

○ Power Dependency: Reverse salient – Poor data quality or limitations in data access can bottleneck algorithmic performance.

● Compute (Hardware) ↔ Data:

○ Influence: Efficient hardware processes larger datasets faster and can store more data. Conversely, data-intensive applications require better hardware solutions, such as faster processors and more efficient memory.

○ Power Dependency: Salient – Hardware improvement is often driven by the increasing need for faster data processing, while compute constraints such as energy consumption can become a reverse salient.

● AI System Builders ↔ Algorithms/Compute/Data:

○ Influence: System builders such as tech giants, startups, and academic institutions steer AI development by creating new algorithms, commissioning hardware, and managing data collection.

○ Power Dependency: System builders hold critical power when deciding where to invest resources. This can lead to lock-ins, where the largest players shape the industry in ways that favor their approaches and resources, and can create reverse salients in domains they neglect.

● AI System Builders ↔ Regulatory Bodies:

○ Influence: Regulators influence system builders by setting rules around data privacy, AI ethics, and monopolistic behaviors. System builders, in turn, lobby or make strategic concessions to avoid strict regulation.

○ Power Dependency: Salient – Power imbalances exist here: system builders often wield more power due to their resources, though regulators can enact laws that force reconfiguration.

● Regulatory Bodies ↔ Users/Adopters:

○ Influence: Regulatory bodies shape how users and adopters interact with AI systems (e.g., GDPR in Europe affects how data is collected and processed). At the same time, user feedback pushes regulators to adjust policies.

○ Power Dependency: Salient – Regulators might delay action, creating gaps that allow unchecked AI development, or enforce standards that reshape how AI systems are built and used.

● Users/Adopters ↔ Data:

○ Influence: Users generate the data (e.g., usage patterns, feedback) that is crucial for training AI models. Their behavior also determines the effectiveness and focus of data collection.

○ Power Dependency: Reverse salient – Poor user engagement or lack of access to relevant user data can hinder progress in AI development. One way to encode this map is sketched after the list.
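One way to make this map operational is to encode it as a small data structure to which lagging listeners can be attached edge by edge. The sketch below compresses each bidirectional relationship into the dependency direction flagged above; the labels are a simplification for illustration, not a claim from the cited paper.

```python
# Directed edges (from_domain, to_domain, kind): "reverse_salient" marks a
# lagging link that bottlenecks its dependent, "salient" a dominant one.
AI_SYSTEM_MAP: list[tuple[str, str, str]] = [
    ("Compute", "Algorithms", "reverse_salient"),   # hardware lags demand
    ("Data", "Algorithms", "reverse_salient"),      # data quality/access limits
    ("Data", "Compute", "salient"),                 # data growth drives hardware
    ("System Builders", "Algorithms", "salient"),   # investment steers research
    ("System Builders", "Regulators", "salient"),   # resource/lobbying imbalance
    ("Regulators", "Users/Adopters", "salient"),    # rules shape adoption
    ("Users/Adopters", "Data", "reverse_salient"),  # engagement gates data supply
]

def bottleneck_domains(edges: list[tuple[str, str, str]]) -> set[str]:
    """Domains currently acting as reverse salients for some dependent."""
    return {src for src, _dst, kind in edges if kind == "reverse_salient"}

print(bottleneck_domains(AI_SYSTEM_MAP))
# {'Compute', 'Data', 'Users/Adopters'} -> where listeners matter most
```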


Power Dependencies:

● Salient: Regulatory bodies, system builders, and hardware manufacturers often create salient power imbalances, where one group or domain holds more influence over the system's development, driving the AI ecosystem in a specific direction.

● Reverse Salient: Parts of the system that lag in technology or resources and hinder overall progress. For example, bottlenecks in compute power or regulatory frameworks can slow AI advancements.


Addressing technical barriers like reverse salients first provides a stable foundation for managing AI's broader societal impact. Without fixing resource bottlenecks or monopolistic control, attempts to address human concerns risk being ineffective or distorted by existing power dynamics. By first understanding the infrastructure, we gain better tools to regulate AI's ethical and social dimensions, ensuring policies are grounded in both technical realities and societal needs.


Conclusion: Infrastructure First—The Foundation for Governing AI Instruments and Ideologies

The instrumentation and ideologies behind AI will create unavoidable tension in society and the world, making it critical to get the infrastructural analysis right to develop a factual foundation for future regulatory activities. Understanding AI as infrastructure gives us a powerful tool to manage its development. Still, it must be complemented by a broader perspective that accounts for human agency, knowledge production, and the system’s inherent biases.

While treating AI as infrastructure offers a way to manage technical and societal challenges, we must also recognize the risks of centralization and surveillance, as seen with the internet and telecommunications. Edward Snowden's revelations about the NSA show how, under weak governance, centralized control, and profit-driven models, even well-intentioned infrastructure can be exploited in the absence of adequate transparency and regulation. Treating AI as infrastructure could help by building in decentralized systems, strong oversight, and ethical safeguards from the start. However, as argued in "AI is an Ideology, Not a Technology," focusing solely on infrastructure may overlook how AI functions as a broader political and social ideology, reinforcing existing power structures (Lanier & Weyl, 2020). This is further supported by research in Histories of Artificial Intelligence: A Genealogy of Power, which emphasizes AI's deep entanglement with colonial and capitalist legacies, showing how AI's technical and epistemic systems reproduce and perpetuate these historical power structures (Ali et al., 2023). By ensuring transparency in AI infrastructure, we enable informed decision-making, allowing society to participate actively in shaping AI development, mitigating the risks of surveillance and monopolization, and steering AI in a direction that serves democratic values and societal interests (Lanier & Weyl, 2020).

Viewing AI solely as infrastructure is an incomplete approach. It treats AI as a passive system and overlooks the human decisions, labor, and knowledge fundamental to building, operating, and guiding these systems. As the Nooscope Manifesto highlights, AI is not autonomous; it depends on human workers, corporate leaders, and communities who shape and interact with it. Reducing AI to mere infrastructure risks invisibilizing these contributions and perpetuating biases that favor the powerful few. (Pasquinelli & Joler, 2020).

Moreover, AI acts as a tool of knowledge production, not merely as a technical framework. It mediates and transforms the knowledge it generates, often introducing distortions and biases. This extractivist process reinforces existing power structures and influences what is perceived, known, and normalized in society. Ignoring this dimension prevents us from fully understanding AI’s broader social and political implications. (Pasquinelli & Joler, 2020).

Lastly, by focusing exclusively on infrastructure, we risk downplaying AI systems' critical limitations and failures, such as data distortions and algorithmic bias. These biases disproportionately affect marginalized communities, perpetuating inequality. Unlike the technical focus on salients and reverse salients, an analysis that includes human agency and social dynamics reveals how AI can reinforce these disparities (Pasquinelli & Joler, 2020).

In summary, while treating AI as infrastructure is essential for managing systemic risks and ensuring its equitable development, it must be complemented by recognizing human agency, knowledge production, and inherent biases. By integrating these dimensions into the infrastructural framework and ensuring transparency, we can create a regulatory environment that supports innovation and fairness and steers AI development in ways that benefit society.


References

Ali, S. M., Dick, S., Dillon, S., Jones, M. L., Penn, J., & Staley, R. (2023). Histories of artificial intelligence: A genealogy of power. BJHS Themes, 8, 1-18. https://doi.org/10.1017/bjt.2023.15

Lanier, J., & Weyl, E. G. (2020, March 15). AI is an ideology, not a technology. Wired.

Pasquinelli, M., & Joler, V. (2020). The Nooscope manifested: AI as instrument of knowledge extractivism. AI & Society, 36(4), 1263-1280. https://doi.org/10.1007/s00146-020-01097-6

Vannuccini, S., & Prytkova, E. (2023). Artificial intelligence's new clothes? A system technology perspective. Journal of Information Technology, 39(2), 317-338. https://doi.org/10.1177/02683962231197824

Murat Vurucu

Crafting AI Products; Studying AI Ethics and Society at University of Cambridge


Addition for simplification - instruments = apps, ideology = politics.
