How AI Literacy Can Democratize Generative AI
Figure: AI Literacy Framework, Digital Promise [96].

The rapid rise of generative AI since the launch of ChatGPT in late 2022 has transformed how people interact with artificial intelligence. Generative AI (GAI) has dramatically expanded access to AI, but this democratization is both uneven and hedged about with risks. AI literacy—the knowledge, competencies, and interactive experiences required to navigate, use, and critically engage with AI—is critical to making access fairer, mitigating risks, and democratically steering the adoption and evolution of GAI.

The article begins by examining the democratizing potential of generative AI, emphasizing its ability to empower novice users, broaden access to education, and serve as a collaborator in creative and professional domains. However, this is tempered by concerns over misinformation, bias, economic inequality, and the environmental costs of GAI.

The second section considers the meaning and purpose of AI literacy in the context of literacy's historical development. AI literacy is not just about using AI tools but also about critically evaluating their impact on society. The section outlines key competencies, based on frameworks from organizations like UNESCO, that can empower individuals to engage with AI responsibly and effectively.

The article then explores prompting in the context of AI literacy. While often referred to as "prompt engineering," prompting is more than a technical skill. It represents an interactive process where users not only craft queries but also engage in dialogues with AI, shaping its outputs and learning from interactions. This section highlights how prompting plays a central part in developing human-AI collaboration, a critical component of AI literacy.

The article concludes by framing AI literacy as both a tool for individual empowerment and a key capability for steering social evolution. As AI continues to evolve, our ability to co-create with these systems and critically assess their role in our lives will determine how well we navigate the opportunities and risks they present.

1. ChatGPT and the Democratization of AI: Promise and Peril

The release of ChatGPT in November 2022 initiated a revolutionary change in how humans interact with computers. Users could simply talk with ChatGPT. Its conversational interface, combined with an ability to respond intelligently to an enormous variety of requests, sparked a dramatic uptake of the application. In just two months, it had over 100 million users [17].

ChatGPT, along with other large language models (LLMs) and types of generative AI, marks a new phase in the democratization of computer usage. In North and South America, Europe, Oceania, and parts of Asia, anyone with internet access and a browser can communicate with ChatGPT in over 50 languages, including nearly all of the most commonly spoken ones [18, 19]. ChatGPT is broadening access to coding, data analysis, content creation, and other complex cognitive tasks, including through free tiers [23, 24]. There is evidence that in workplace settings it disproportionately benefits "novice and low-skilled workers" [25] while engendering an overall increase in "job satisfaction and self-efficacy" [24]. It promises to extend high-quality educational opportunities, including personalized tutoring and support, to vastly more students [26, 27]. It acts as a collaborator in creative activities, including brainstorming ideas, generating drafts, and producing, reviewing, and revising works [28, 29]. It is expanding access to medical, psychological, legal, and other specialized professional services as well as enhancing customer service [30, 31, 32, 33]. It has been aptly described as a co-intelligence, a "thinking companion" that we can interact with "to improve our own decision-making ...[and] reflect on our own choices" [29].

GAI expands access to a variety of AI-infused services and domains, disproportionately enhancing the capabilities of novice and low-skilled workers in some occupations. It could enable a dramatic scaling-up of personalized education and educational opportunity [28, 75]. It is opening new avenues for creative expression previously inaccessible to many due to cost, geographic location, or lack of resources.

Adoption of GAI is at an early stage, and its development and institutional and social integration are coevolving rapidly. Shadowing these promised benefits are fears of harms and unintended consequences, including:

  • Misinformation. It can be used to produce and scale the dissemination of false and misleading information to sway public opinion, harm individuals, run organized scams, and pursue other malicious ends [41, 42].
  • Misalignment. As the intelligence, communicative abilities, and autonomy of AI systems increase, ensuring that they continue to act in accordance with human values and intentions becomes ever more important. This concept is known as alignment [35]. This is a dynamic, evolving, and bidirectional process that must be responsive to diverse social, political, and cultural contexts [36]. The dangers of misalignment range from manipulation and deception to existential threats to human well-being.
  • Repressive control. It may be used by authoritarian rulers to further their control. Some countries, including China and Russia, block access to ChatGPT and other Western LLMs. And Chinese LLMs, including Wu Dao and ERNIE, are subject to the government's strict censorship regime [20, 21, 22, 34].
  • Inequality. It may worsen economic inequality by disproportionately benefiting favorably situated individuals and countries [45, 46, 48].
  • Bias and discrimination. It could exhibit inadvertent biases acquired from training data or be trained to discriminate [47].
  • Economic hardship. It may lead to job losses, labor market disruptions, and deskilling [45, 46].
  • Social isolation. It may result in less social interaction as people spend more time communicating with GAI [49, 50].
  • Accelerated global warming. It may delay the achievement of climate change goals due to high power demands. GAI queries are estimated to require 10x the electricity of traditional Google queries [37]. This, combined with rapid adoption of ChatGPT and other GAI applications, is driving up data center energy consumption [38]. Technologies and methods to curtail this are on the horizon, but how quickly they can be implemented and deployed is unknown [39, 40, 44].
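The scale of the energy gap noted in the last bullet can be made concrete with a back-of-envelope calculation. The per-query figure below (~0.3 Wh for a traditional search) is a commonly cited estimate, not a number from this article; the 10x multiplier is the one quoted above [37].

```python
# Back-of-envelope sketch of the "10x" energy claim.
# Assumed figure (a commonly cited estimate, not from this article):
# a traditional search query consumes roughly 0.3 Wh.
SEARCH_WH = 0.3
GAI_WH = 10 * SEARCH_WH  # the 10x multiplier cited in the text

def annual_kwh(queries_per_day, wh_per_query):
    """Convert a daily query volume into annual energy use in kWh."""
    return queries_per_day * wh_per_query * 365 / 1000

# One billion daily queries, served conventionally vs. by GAI:
conventional = annual_kwh(1e9, SEARCH_WH)  # on the order of 110 GWh/year
generative = annual_kwh(1e9, GAI_WH)       # on the order of 1,100 GWh/year
```

Under these assumptions, shifting a search-scale workload to GAI adds roughly a gigawatt-hour of data center demand per year for every billion daily queries, which is why reference [38] ties GAI adoption directly to rising data center energy consumption.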

The future extent of these dangers is unknown and will be decided by how we respond to the continued development and diffusion of GAI. Emerging regulatory regimes recognize that the challenge facing governments and societies is to minimize these harms while promoting beneficial uses of AI and cultivating AI literacy [51, 52]. As explored in the next section, AI literacy involves not only practical knowledge of AI and the ability to utilize and collaborate with it, but the capacity to critically evaluate the roles AI plays in society.

2. AI Literacy: Empowerment and Integration

The growth of literacy in Europe during the Reformation and Counter-Reformation was marked by new forms of personal and collective empowerment and by new modes of social control [53]. Literacy fostered greater individual religious and political autonomy, contributed to the expansion and consolidation of a largely urban middle class of artisans and merchants, and indirectly aided the rise of capitalism. But it also underpinned the development of state bureaucracies that relied on literate officials for their administration. And in conjunction with the spread of state-mandated education, it enabled the inculcation of national systems of values and belief [54].

This duality reflects literacy's fundamental role in processes of social transformation. Those who are not literate (allowing that literacy has multiple levels and forms) are likely to be disempowered and subjected to new rules, forms of power, and modes of existence they may have little understanding of. Literacy grants access, enables participation, and gives people voice, while also embedding them in systems of knowledge and power.

AI literacy is commonly defined to include competencies or is paired with them. It is a matter of both knowing and doing. Long and Magerko's influential early paper characterized AI literacy as a set of 16 competencies, though it described each of these solely with respect to knowledge [55]. This probably reflected the fact that it was published before the advent of generative AI. In revising and extending their work, Chiu et al. stressed the importance of competency as both personal confidence and ability to act:

AI literacy is defined as “an individual’s ability to clearly explain how AI technologies work and impact society, as well as to use them in an ethical and responsible manner and to effectively communicate and collaborate with them in any setting. It focuses on knowing (i.e. knowledge and skills).”
AI competency is defined as “an individual’s confidence and ability to clearly explain how AI technologies work and impact society, as well as to use them in an ethical and responsible manner and to effectively communicate and collaborate with them in any setting. They should have the confidence and ability to self-reflect on their AI understanding for further learning. It focuses on how well individuals use AI in beneficial ways” [70].

UNESCO's articulation of an AI competency framework for students redefined AI literacy itself as a set of competencies, noting that its curricular goals have "sometimes [been] referred to as 'AI literacy'" [57]. The integration of practice and skills development with knowledge acquisition is spelled out in the curricular framework, which progresses from cultivating an understanding of what AI is and constructing "age-appropriate interpretations of the values, ethical issues, concepts, processes and technical methods underlying AI tools and their uses" to becoming "responsible, active and effective users of AI, both for the sake of their own individual interests, as well as to address shared sustainability challenges" and finally, for those who are interested, developing "new AI tools based on open-source and/or customizable datasets, programming tools or AI models" [57].

Widespread AI literacy, as a form of educated social and political as well as technical engagement, makes systems more pliable and adaptable. Authoritarian regimes that deploy literacy chiefly as an instrument of social control stunt this potential. This understanding of literacy is evident in definitions of AI literacy, which often emphasize the evaluative and creative roles literate individuals can play:

  • "We define AI literacy as a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace" [55].
  • "AI literacy involves understanding a technology that possesses the capability to learn, adapt, and make autonomous decisions. It encompasses: [1] Understanding AI Mechanics: Basic knowledge of how AI algorithms learn (machine learning), make decisions, and evolve. [2] Ethical and Social Implications: Recognizing AI’s impact on privacy, employment, and societal norms. For example, understanding the ethical use of AI in surveillance technologies in UK cities. [3] Interacting with AI: Knowing how to work alongside AI in various sectors, like using AI tools in workplaces or navigating AI-driven services [56].
  • An early literature review identified four aspects of AI literacy: "know and understand, use and apply, evaluate and create, and ethical issues" [58].
  • UNESCO's AI competency framework for students, which builds on and extends the notion of AI literacy, is based on five principles: (1) foster a critical approach to AI, (2) prioritize human-centered interaction with AI, (3) encourage environmentally sustainable AI, (4) promote inclusivity in AI competency development, and (5) build core AI competencies for lifelong learning [57].

Most analyses of AI literacy recognize the critical role experience plays in cultivating skills and understanding. Yet without investment in education and social welfare, many students will not be able to gain experience. As UNESCO notes, "engagement with AI without access to the internet and AI tools will limit the scope and mastery level of AI competencies. Governments should commit to promoting inclusive access to basic internet connectivity, updated digital devices, open-source or affordable AI programmes and software, and essential AI devices" [57].

AI literacy is both a mode of empowerment and of social, technological, and system integration. AI illiteracy not only disempowers; it also threatens the stability and well-being of socio-technical systems. Extending and deepening AI literacy is a matter of fairness and equity that will contribute to crafting more sustainable planetary systems [59]. For both reasons, the rapid development of the widest possible AI literacy is critical if we are to manage and develop AI in ways beneficial to life on earth.

3. Prompting as Collaboration and Co-Creation

Prompting, often called "prompt engineering", is the iterative design of queries, instructions, and other communications with LLMs to elicit high-quality, accurate, and relevant responses [60, 61, 62, 63, 65]. Prompting is recognized as a "key skill" [63] or a competency "crucial for generative AI literacy" [64], one that in turn benefits from a user's existing AI literacy. Understanding the behavior and cognitive characteristics of LLMs, along with familiarity with the domain in question, enhances prompting.

The significance of prompting for AI literacy extends beyond its technical aspect. Prompting is not only about crafting precise queries to elicit accurate responses but is also a mode of collaboration and co-creation. Through prompting, users engage in interactive dialogues with generative AI, shaping its outputs and learning from its responses. Prompting is, at one and the same time, a form of skilled communication, a way to learn about GAI, and a process of collaboration and co-creation with GAI.
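The dialogic character of prompting can be sketched in code. The fragment below is a minimal illustration, not a real client: `send` is a hypothetical stand-in for any LLM API call, and the loop shows how each turn is interpreted against the accumulated conversation rather than in isolation.

```python
# Minimal sketch of prompting as iterative dialogue.
# `send` is a hypothetical stand-in for an LLM API call; here it
# returns a canned reply so the example is self-contained.

def send(messages):
    """Pretend LLM: responds to the latest user turn."""
    return f"[model reply to: {messages[-1]['content']}]"

def dialogue(turns):
    """Run a multi-turn exchange, accumulating context each turn."""
    messages = [{"role": "system",
                 "content": "You are a helpful writing collaborator."}]
    replies = []
    for turn in turns:
        messages.append({"role": "user", "content": turn})
        reply = send(messages)  # the model sees the full history
        messages.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return messages, replies

# Each follow-up prompt refines, critiques, or redirects the prior output:
history, replies = dialogue([
    "Draft a one-paragraph summary of AI literacy.",
    "Good, but emphasize critical evaluation, not just tool use.",
    "Now compress it to a single sentence.",
])
```

The growing `messages` list is what makes prompting collaborative rather than transactional: the second and third prompts only make sense as responses to earlier outputs, so the user is steering a shared artifact across turns, not issuing independent queries.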

As the term "engineering" implies, social and mainstream media coverage is overwhelmingly focused on prompting as a skill, something that is also true of the research literature, as recent surveys and reviews show [61, 62]. In the field of education, Teaching with AI features a whole how-to chapter on prompt engineering and much additional guidance on prompting. But while it describes ways to collaborate with GAI -- research assistant, co-writer -- it frames collaboration strictly as tool use, beginning with an introduction to LLMs titled "Tools and Trade-offs" [28].

This emphasis probably reflects widespread concerns that GAI may have corrosive effects on creativity and critical thinking as well as ethical standards. Zhai et al. summarize these in a review paper:

While these technologies can enhance writing proficiency, boost self-confidence, and streamline research tasks, they also introduce risks such as diminished creativity, over-reliance, and ethical concerns like plagiarism and data bias. Studies highlight that although AI tools can aid decision-making and improve efficiency, they often lead to reduced critical and analytical thinking skills, especially when students become overly dependent on AI-generated content [71].

Terms such as "collaboration" and "co-creation" may be avoided from concern that these framings of GAI usage heighten the perceived risks of deskilling. However, efforts to find a "middle ground" leads to frameworks that erect distinctions that are likely to be unworkable. Walter, for example, explains the student guidelines adopted by Kalaidos University in Switzerland as "to handle AI not as their tutor, teacher or ghostwriter, but as their sparring partner. Just like with any other human sparring partner, it can provide interesting ideas and suggestions. It may provide some directions and answers that the student might have not thought of. However, at the same time, the sparring partner is not always right and should not be unconditionally trusted." But this metaphor undoes the message, since a sparring partner is just another boxer, a collaborator in skill development and adaptation to other boxers.

Others, while more open to a collaborative role for AI, imagine that increased knowledge of how GAI functions will mitigate perceived risks.

as LLM-based AI systems are seen as a teammate and companions for collaboration and task-solving ... it could be helpful in higher education to impart knowledge about the functioning of generative AI to leverage the opportunities of AI-based tools, and, at the same time, preventing increasing anthropomorphizing and potentially coming with that, diffusion of responsibility [60].

Here, anthropomorphism is considered naive or irrational, in contrast to research suggesting the importance of "tempered anthropomorphism" as a means to explore and understand our evolving relations with AI [73, 74]. And dampening anthropomorphism is supposed to enable collaboration without "diffusion of responsibility," despite the fact that collaboration inherently involves a diffusion of responsibility because tasks and decisions are shared among multiple participants. Asked to assess this passage, ChatGPT suggested that "while collaboration does involve shared responsibility, the distinction lies in ensuring that humans maintain clear accountability and don't mistakenly attribute too much agency to AI" [72]. This shows how anxiety about rising AI agency may constrain consideration of human-AI collaboration.

While research on collaboration and co-creation with generative AI is nascent, there are examples of more expansive findings. Yan et al., recognizing that "learning is a path towards fostering critical thinking, creativity, collaboration, and social cohesion," find that GAI can:

  • facilitate more effective and intuitive tutoring experiences, making the learning process more engaging and tailored to individual student needs.
  • function as a master teacher at scale, providing personalized and adaptive support to a wide range of learners across various subjects and languages.
  • provide timely, specific, and constructive feedback, a key element of high-quality instruction and essential for effective learning.
  • expand feedback delivery beyond text and graphics to include narrated audio and video.
  • co-create content with students to meet diverse learning needs, providing students with relevant and accessible materials to support their individual paths efficiently and creatively [75].

They acknowledge significant challenges, including:

  • Overreliance on AI could negatively impact learners' agency, critical thinking, and creativity.
  • Hallucinations, or incorrect or nonsensical outputs.
  • Ethical dilemmas such as a lack of transparency, data privacy violations, the perpetuation of biases, and unequal access.
  • Disruption of traditional assessments due to the inability to distinguish between student work and AI-generated content.

Nevertheless, their closing vision of how education may be transformed speaks powerfully of how collaboration and co-creation could flourish:

GenAI technologies could permeate every aspect of human learning. Imagine students collaborating with AI agents designed to mimic certain personality traits to help students learn about leadership and teamwork, engaging in debates with digital twins of Socrates, Plato, and Aristotle to explore ancient Greek philosophy, learning impressionist painting techniques from a humanoid robotic mentor modelled after Claude Monet, and visualising Einstein’s special theory of relativity in virtual realities. All this could occur while receiving personalised support from a GenAI tutor hosted on a wearable device [75].

Understanding that prompting is both a communications skill and a mode of collaboration is essential for cultivating well-rounded AI literacy.

4. Conclusion: AI Literacy as Self and Social Development

AI literacy grounds a collaborative, critical, and communicative human-AI relationship. The rapid cultivation of AI literacy is essential not only to support civic engagement about the development and regulation of AI, but also to minimize the risks that illiteracy exposes us to, including the demagogic exploitation of fear and the concentration of power that ignorance facilitates.

Generative AI is shaping not only how we work, create, and learn but also how we develop as individuals and societies. AI literacy equips individuals with the knowledge, tools, and skills to navigate this new landscape, empowering them in an increasingly AI-infused world. And to a greater extent than any preceding technological revolution, the spread and deepening of AI literacy will be key to how well humanity is able to beneficially co-evolve with AI.

The growth of AI literacy will expand the capacity for human-AI collaboration, fostering new forms of creativity, problem-solving, and social organization. This collaboration is more than a technical interaction—it represents a form of co-creation, where human and AI abilities complement one another to achieve outcomes neither could accomplish alone.


This magnification of human abilities via collaboration with AI is already visible in many areas, as the following examples show.

Education: As already noted, GAI promises to enhance learning experiences by scaling personalized support, diversifying learning materials, enabling timely feedback, and innovating assessment methods [75]. It can potentially unlock new types of learning and support for English learners and students with disabilities by facilitating teachers’ Universal Design for Learning (UDL) efforts [88] and by providing richer feedback on writing [89]. It can assist both qualitative and quantitative research methodologies [90].

Decision-Making: AI has become critical to business and organizational agility, used to identify patterns in vast datasets and facilitate rapid and informed decisions [76]. Machine learning applications contribute to hybrid human-AI decision-making in which the "more objective, structured, and quantitatively driven" AI methods are integrated with human approaches that are "more nuanced, holistic, and adaptive to qualitative information" [77]. With GAI, the scope of hybrid decision-making is growing and evolving, as recent studies in finance and healthcare show. Che et al. created a specialized GAI model to simulate and predict dynamic changes in financial markets and found that "integrating generative artificial intelligence into financial market forecasts can not only improve the accuracy of forecasts, but also provide powerful data support for financial decisions, helping investors make more informed decisions in a complex and ever-changing market environment" [82]. Bragazzi et al., recognizing that GAI models have "the potential to revolutionize the field of clinical decision-making with their advanced capabilities in data analysis and pattern recognition" but that adoption requires that they do not generate inaccurate or misleading information, developed eleven "verification paradigms" to ensure their "comprehensibility, collaborative nature, and ethical grounding" to augment the work of clinicians [83].

Innovation: GAI can augment the early phases of innovation, including exploration, ideation, viewpoint diversity, and digital prototyping [84]. Recent research reports "widespread recognition of AI's capacity to elevate the quality and quantity of creative endeavors" [85]. In an empirical study focused on developing new business ideas for the circular economy, solutions produced by human-AI collaboration using GAI were rated "higher in value regarding their strategic viability for successful implementation, as well as environmental and financial value" [79]. In materials science, the integration of automation and AI-guided decision-making has emerged as a transformative route in shaping new research methodologies. Self-driving laboratories, "built through the integration of automated chemistry and material science laboratories with autonomy (automated decision-making) offer a promising avenue for different aspects of sustainability, ranging from the accelerated discovery of high-efficiency and clean energy materials to the optimization of material- and energy-efficient manufacturing processes" [78].

Democratic Governance: Numerous efforts are underway to improve deliberative democracy and democratic decision-making using GAI and machine learning. As the authors of a recent study focused on urban government summarize these, GAI "can enable new mechanisms for community engagement, allowing residents to transform the information provided by the city in a way that is more relevant or actionable through improved language translation, more accessible summaries of lengthy technical documents, visioning the design of their communities, and search, to name a few improvements" [86]. More ambitious possibilities are being explored in platforms purpose-built to encourage and support democracy, including:

  • AI-Supported Deliberation: Platforms like Polis are integrating AI to summarize opinions, visualize areas of consensus and disagreement, and provide feedback to participants.
  • Comment Writing and Reflection Assistance: Generative AI can assist participants in crafting comments that are non-toxic, reasoned, and aligned with the public good. AI nudges participants to consider broader impacts, such as budget or community effects, and encourages more thoughtful engagement.
  • Summarization and Decision-Making Support: GAI helps summarize ongoing discussions, highlight key themes, and point out areas of controversy. It also guides participants in drafting policy proposals that have the potential for wider acceptance.
  • Visualization Tools: GAI creates real-time visualizations of participant ideas and perspectives, geographic data, and the performance implications of policies. This can help participants understand how choices affect shared environments.
  • Translation and Inclusiveness: AI translation capabilities are incorporated to support inclusivity, allowing participants from different linguistic backgrounds to engage in deliberation, improving access and participation across demographics.
  • AI for Civility and Respect: AI-based sentiment analysis and natural language processing (NLP) tools help flag offensive or toxic language and provide feedback to authors on how to rephrase statements, promoting mutual respect and civil discourse.
  • Identifying Consensus and Areas of Conflict: AI can identify comments or proposals with the most support and suggest groupings or iterations to promote consensus or address controversial points [87].

GAI could also help reduce inequalities in political participation by "helping those with less advanced communication, reasoning, information retrieval, and argumentation skills to better inform and express themselves politically" [80].
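The civility mechanism described above (flagging hostile language and suggesting a rephrasing rather than blocking the comment) can be illustrated with a toy sketch. Real deliberation platforms use trained toxicity classifiers; the keyword lexicon below is a deliberately simplistic stand-in, and every name in it is hypothetical.

```python
# Toy sketch of the "AI for Civility and Respect" feedback loop.
# A real system would use a trained toxicity model; this illustrative
# version substitutes a tiny hand-made lexicon.

TOXIC_TERMS = {"idiot", "stupid", "liar"}

def flag_comment(text):
    """Return the offending terms found in a comment, if any."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return sorted(words & TOXIC_TERMS)

def feedback(text):
    """Suggest revision instead of blocking the comment outright."""
    hits = flag_comment(text)
    if not hits:
        return None  # nothing to flag; the comment passes through
    return ("Your comment may read as hostile (" + ", ".join(hits) +
            "). Consider rephrasing to focus on the proposal itself.")

print(feedback("Only an idiot supports this budget."))
```

The design choice worth noting is that the flagged comment is returned to its author with advice rather than suppressed: the aim, as in the platforms surveyed above, is to promote mutual respect while keeping the participant in the conversation.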


These and other efforts to adopt and integrate generative AI face many risks. Beltran et al. provide a thorough delineation of these in public sector initiatives, based on a survey of guidelines issued by Australia, Canada, New Zealand, the United Kingdom, and South Korea [91]. Most of the same risks are cited across diverse contexts of adoption.

Table: Identified public sector risks from using generative AI [91].

These risks highlight that the successful integration of GAI requires broadening AI literacy so that people are able to exercise informed judgment to validate outputs, mitigate potential errors, contextualize results, and build appropriate trust [68]. Democratizing AI—making it accessible to a broader range of users—can address challenges like trust, transparency, and skill gaps. Ritu Jyoti, Vice President of AI Research at IDC, explains her AI guidance to corporations in this light:

Democratization makes AI available and accessible to the breadth of talent in an enterprise. Business users know the business in and out. Enabling them to build AI-powered applications using visual application development platforms, including those with drag-and-drop functionalities, can close the gap in data science talent. It’s important to note, though, that AI democratization doesn’t replace data scientists. It helps business users collaborate better with data scientists and see them as a partner. Essentially democratization brings the business closer to the technology and the technology closer to the business. It can also free data scientists to do higher value work [93].

In addition, AI literate citizens will be better equipped to participate in discussions about AI regulation and ethical considerations. The democratization of AI literacy is vital to limiting the concentration of AI benefits to specific corporations, regions, and social groups.

GAI initiatives also demonstrate the importance of a tiered conception of AI literacy, such as the "Understand", "Apply", and "Create" levels in UNESCO's model of student competencies [57]. Many uses of AI require domain expertise that, in the case of GAI, informs effective prompt engineering. At the same time, the wide diffusion of AI literacy would ensure that people know how to use GAI services, judge outputs, and critically evaluate claims of expertise and the actions and proposals of experts.

AI, particularly since the advent of generative AI, is widely regarded as a general purpose technology and an "invention of a method of invention" (IMI) [94, 95]. Its economic and social impacts are likely to be both swift and profound. The AI literate will have an outsize influence on the evolution of society, making it all the more important that AI literacy is democratized. Fostering AI literacy is not just a personal endeavor but a societal imperative that will influence the trajectory of social development in the age of AI.

References

  1. Sahoo, Pranab, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and Aman Chadha. 2024. “A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications.” ArXiv, February. https://arxiv.org/abs/2402.07927v1 .
  2. OpenAI. 2024. “Memory and New Controls for ChatGPT.” OpenAI. February 13, 2024. https://openai.com/index/memory-and-new-controls-for-chatgpt/ .
  3. OpenAI introduced a memory feature earlier this year by which ChatGPT remembers information from sessions in order to personalize future exchanges [4].
  4. OpenAI. 2024. “Memory and New Controls for ChatGPT.” OpenAI. February 13, 2024. https://openai.com/index/memory-and-new-controls-for-chatgpt/ .
  5. Sandry, Eleanor. 2024. “Anthropomorphism and Its Discontents.” In Handbook on the Ethics of Artificial Intelligence, edited by David J. Gunkel. Cheltenham, UK: Edward Elgar Publishing.
  6. Shneiderman, Ben, Michael Muller, and Chenhao Tan. 2023. “On AI Anthropomorphism.” Medium. April 10, 2023. https://medium.com/human-centered-ai/on-ai-anthropomorphism-abff4cecc5ae .
  7. Ji, Junyi (Joey). 2024. “Demystify ChatGPT: Anthropomorphism around Generative AI.” GRACE: Global Review of AI Community Ethics 2 (1). https://ojs.stanford.edu/ojs/index.php/grace/article/view/3222 .
  8. Carchidi, Vincent J. 2024. “Creative Minds Like Ours? Large Language Models and the ‘How’ of Human Linguistic Creativity.” Lingbuzz. https://lingbuzz.net/lingbuzz/007719/v1.pdf .
  9. Hunter, N. (2024). The Art of Prompt Engineering with ChatGPT: GPT-4, Plugins & DALL.E 3 (Kindle). ChatGPT Trainings.
  10. ChatGPT-4o. 2024. "System Prompt Explanation." August 31, 2024. https://chatgpt.com/share/cd81b8a5-8874-4483-8027-8bc114062140 .
  11. Metaculus. 2024. AI questions. Metaculus. Retrieved August 31, 2024, from https://www.metaculus.com/questions/?topic=ai .
  12. Sweeney, P. (2021). A fictional dualism model of social robots. Ethics and Information Technology, 23(3), 465–472.
  13. Bryson, Joanna J. "Robots should be slaves". In Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues, edited by Yorick Wilks. John Benjamins Publishing Company, 2010. ProQuest Ebook Central, https://ebookcentral.proquest.com/lib/uoregon/detail.action?docID=623327 .
  14. Suleyman, Mustafa. 2024. “Mustafa Suleyman: AI Is Turning into Something Totally New.” TED Talks. April 16, 2024. https://www.ted.com/talks/mustafa_suleyman_ai_is_turning_into_something_totally_new .
  15. Oxford English Dictionary. 2023. Anthropomorphism, n., sense 1.a. Oxford University Press. Retrieved August 31, 2024, from https://doi.org/10.1093/OED/9437934606 .
  16. Anthropic. 2024. “Prompt Library - Anthropic.” Anthropic. 2024. https://docs.anthropic.com/en/prompt-library/library .
  17. Wikipedia contributors. 2024. ChatGPT. In Wikipedia, The Free Encyclopedia. Retrieved September 7, 2024, from https://en.wikipedia.org/w/index.php?title=ChatGPT&oldid=1244011024 .
  18. OpenAI. (n.d.). How to change your language setting in ChatGPT. OpenAI Help Center. https://help.openai.com/en/articles/8357869-how-to-change-your-language-setting-in-chatgpt .
  19. Emery, Chad. 2023. The 33 most spoken languages in the world. September 16, 2023. https://www.langoly.com/most-spoken-languages/ .
  20. Fong, Wong Siew. 2023. China’s Self-Censoring Chatbots Face Many Challenges. ThinkChina. April 17, 2023. https://www.thinkchina.sg/technology/chinas-self-censoring-chatbots-face-many-challenges .
  21. Wikipedia contributors. 2024. Ernie Bot. In Wikipedia, The Free Encyclopedia. Retrieved September 8, 2024, from https://en.wikipedia.org/w/index.php?title=Ernie_Bot&oldid=1244449174 .
  22. Toh, Michelle, and Nectar Gan. 2023. GPT-4 and Ernie Bot 4.0: We Asked Both Bots the Same Questions. Here’s How They Answered. CNN Business. December 15, 2023. https://www.cnn.com/2023/12/15/tech/gpt4-china-baidu-ernie-ai-comparison-intl-hnk/index.html .
  23. Kwame, Agbanyo George, Muhammad Nawaz, Humaid Al Naqbi, Zied Bahroun, and Vian Ahmed. 2024. “Enhancing Work Productivity through Generative Artificial Intelligence: A Comprehensive Literature Review.” Sustainability 16 (3): 1166. https://doi.org/10.3390/SU16031166 .
  24. Noy, Shakked, and Whitney Zhang. 2023. “Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence.” Science 381 (6654): 187–92. https://doi.org/10.1126/science.adh2586 .
  25. Brynjolfsson, Erik, Danielle Li, Lindsey R Raymond, Daron Acemoglu, David Autor, Amittai Axelrod, Eleanor Dillon, et al. 2023. “Generative AI at Work.” National Bureau of Economic Research, April. https://doi.org/10.3386/W31161 .
  26. Khan, Sal. 2023. “Sal Khan: How AI Could Save (Not Destroy) Education.” TED Talk. April 2023. https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education .
  27. Su, Jiahong, and Weipeng Yang. 2023. “Unlocking the Power of ChatGPT: A Framework for Applying Generative AI in Education.” ECNU Review of Education 6 (3): 355–66. https://doi.org/10.1177/20965311231168423/ASSET/IMAGES/LARGE/10.1177_20965311231168423-FIG1.JPEG .
  28. Bowen, José Antonio, and C. Edward Watson. 2024. Teaching with AI: A Practical Guide to a New Era of Human Learning. Baltimore, Maryland: Johns Hopkins University Press.
  29. Mollick, Ethan. 2024. Co-Intelligence: Living and Working with AI. New York, NY: Portfolio/Penguin.
  30. Sai, S., Gaur, A., Sai, R., Chamola, V., Guizani, M., & Rodrigues, J. J. P. C. (2024). "Generative AI for Transformative Healthcare: A Comprehensive Study of Emerging Models, Applications, Case Studies, and Limitations." IEEE Access, 12, 31078–31106. https://doi.org/10.1109/ACCESS.2024.3367715 .
  31. Chien, Colleen V., Miriam Kim, Akhil Raj, and Rohit Rathish. 2024. “How Generative AI Can Help Address the Access to Justice Gap Through the Courts.” SSRN. January 4, 2024. https://papers.ssrn.com/abstract=4683309 .
  32. Dwivedi, Yogesh K, Neeraj Pandey, Wendy Currie, and Adrian Micu. 2024. “Leveraging ChatGPT and Other Generative Artificial Intelligence (AI)-Based Applications in the Hospitality and Tourism Industry: Practices, Challenges and Research Agenda.” International Journal of Contemporary Hospitality Management 36 (1): 1–12.
  33. Huang, Ming Hui, and Roland T. Rust. 2024. “The Caring Machine: Feeling AI for Customer Care.” Journal of Marketing, September. https://doi.org/10.1177/00222429231224748 .
  34. Scharre, Paul. 2023. “How AI Became ‘the Autocrat’s New Toolkit.’” In Four Battlegrounds: Power in the Age of Artificial Intelligence. W. W. Norton & Company. https://breakingdefense.com/2023/02/how-ai-became-the-autocrats-new-toolkit-book-excerpt/ .
  35. Ji, Jiaming, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan, et al. 2023. “AI Alignment: A Comprehensive Survey.” ArXiv, October. https://arxiv.org/abs/2310.19852v5 .
  36. Shen, Hua, Tiffany Knearem, Reshmi Ghosh, Kenan Alkiek, Kundan Krishna, Yachuan Liu, Ziqiao Ma, et al. 2024. “Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions.” ArXiv, June. https://arxiv.org/abs/2406.09264v2 .
  37. Electric Power Research Institute. 2024. “Powering Intelligence: Analyzing Artificial Intelligence and Data Center Energy Consumption.” EPRI. June 18, 2024. https://www.epri.com/research/products/000000003002028905 .
  38. International Energy Agency. 2024. “Electricity 2024 – Analysis - IEA.” International Energy Agency. May 2024. https://www.iea.org/reports/electricity-2024 .
  39. Secretary of Energy Advisory Board. 2024. “Recommendations on Powering Artificial Intelligence and Data Center Infrastructure.” July 30, 2024. https://www.energy.gov/sites/default/files/2024-08/Powering AI and Data Center Infrastructure Recommendations July 2024.pdf.
  40. University of Minnesota. 2024. “Advanced Hardware Device Slashes AI Energy Consumption by 1000x.” SciTechDaily. July 27, 2024. https://scitechdaily.com/advanced-hardware-device-slashes-ai-energy-consumption-by-1000x/ .
  41. Barman, Dipto, Ziyi Guo, and Owen Conlan. 2024. “The Dark Side of Language Models: Exploring the Potential of LLMs in Multimedia Disinformation Generation and Dissemination.” Machine Learning with Applications 16 (June): 100545. https://doi.org/10.1016/J.MLWA.2024.100545 .
  42. Funk, Allie, Adrian Shahbaz, and Kian Vesteinsson. 2023. “Freedom on the Net 2023: The Repressive Power of Artificial Intelligence.” Freedom House. October 2023. https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence .
  43. Cazzaniga, Mauro, Florence Jaumotte, Longji Li, Giovanni Melina, Augustus J. Panton, Carlo Pizzinelli, Emma Rockall, and Marina M. Tavares. 2024. “Gen-AI: Artificial Intelligence and the Future of Work.” International Monetary Fund. January 14, 2024. https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379?cid=bl-com-SDNEA2024001 .
  44. Plumer, Brad, and Nadja Popovich. 2024. “A New Surge in Power Use Is Threatening U.S. Climate Goals.” The New York Times. March 14, 2024. https://www.nytimes.com/interactive/2024/03/13/climate/electric-power-climate-change.html .
  45. Gmyrek, Paweł, Janine Berg, and David Bescond. 2023. “Generative AI and Jobs: A Global Analysis of Potential Effects on Job Quantity and Quality.” International Labour Organization. Geneva. https://doi.org/10.54394/FHEM8239 .
  46. OECD. 2023. “OECD Employment Outlook 2023.” OECD Publishing. OECD Employment Outlook. OECD. https://doi.org/10.1787/08785BBA-EN .
  47. Ferrara, Emilio. 2023. “Should ChatGPT Be Biased? Challenges and Risks of Bias in Large Language Models.” First Monday 28 (11). https://doi.org/10.5210/fm.v28i11.13346 .
  48. Daepp, Madeleine I. G., and Scott Counts. 2024. “The Emerging AI Divide in the United States,” April. https://arxiv.org/abs/2404.11988v2 .
  49. Kannan, Viji Diane, and Peter J. Veazie. 2023. “US Trends in Social Isolation, Social Engagement, and Companionship – Nationally and by Age, Sex, Race/Ethnicity, Family Income, and Work Hours, 2003–2020.” SSM - Population Health 21 (March): 101331. https://doi.org/10.1016/J.SSMPH.2022.101331 .
  50. Janicke-Bowles, Sophie H., Tess M. Buckley, Rikishi Rey, Tayah Wozniak, Adrian Meier, and Anna Lomanowska. 2023. “Digital Flourishing: Conceptualizing and Assessing Positive Perceptions of Mediated Social Interactions.” Journal of Happiness Studies 24 (3): 1013–35. https://doi.org/10.1007/S10902-023-00619-5/TABLES/3 .
  51. Boland, Joseph. "AI Regulation: Control or Collaboration." 2024. LinkedIn. July 16, 2024. https://www.dhirubhai.net/pulse/ai-regulation-control-collaboration-joseph-boland-iyuxf/ .
  52. Boland, Joseph. "AI Regulation as Guided Promotion." 2024. LinkedIn. June 8, 2024. https://www.dhirubhai.net/pulse/ai-regulation-guided-promotion-joseph-boland-dp5lc/ .
  53. Willis, Jonathan, 'Reformation and Counter-Reformation in Europe', in Grace Davie, and Lucian N. Leustean (eds), The Oxford Handbook of Religion and Europe, Oxford Handbooks (2021; online edn, Oxford Academic, 8 Dec. 2021), https://doi.org/10.1093/oxfordhb/9780198834267.013.5 .
  54. Houston, R. A. 2018. The Growth of Literacy in Western Europe from 1500 to 1800. Brewminate. February 18, 2018. https://brewminate.com/the-growth-of-literacy-in-western-europe-from-1500-to-1800/ .
  55. Long, Duri, and Brian Magerko. 2020. “What Is AI Literacy? Competencies and Design Considerations.” Conference on Human Factors in Computing Systems - Proceedings, April. https://doi.org/10.1145/3313831.3376727/SUPPL_FILE/A598-LONG-PRESENTATION.MP4 .
  56. Kumar, Akshi, and Saurabh Raj Sangwan. 2024. “Conceptualizing AI Literacy: Educational and Policy Initiatives for a Future-Ready Society - Goldsmiths Research Online.” International Journal of All Research Education & Scientific Methods 12 (4): 1543–51. https://doi.org/10.56025/IJARESM.2023.1201241543 .
  57. Miao, Fengchun, and Kelly Shiohira. 2024. “AI Competency Framework for Students.” UNESCO. https://doi.org/10.54675/JKJB9835 .
  58. Ng, Davy Tsz Kit, Jac Ka Lok Leung, Samuel Kai Wah Chu, and Maggie Shen Qiao. 2021. “Conceptualizing AI Literacy: An Exploratory Review.” Computers and Education: Artificial Intelligence 2 (January): 100041. https://doi.org/10.1016/J.CAEAI.2021.100041 .
  59. United Nations. 2024. Sustainable Development Goals. United Nations. Retrieved September 18, 2024, from https://sdgs.un.org/goals .
  60. Knoth, Nils, Antonia Tolzin, Andreas Janson, and Jan Marco Leimeister. 2024. “AI Literacy and Its Implications for Prompt Engineering Strategies.” Computers and Education: Artificial Intelligence 6 (June): 100225. https://doi.org/10.1016/J.CAEAI.2024.100225 .
  61. Chen, Banghao, Zhaofeng Zhang, Nicolas Langrené, and Shengxin Zhu. 2024. “Unleashing the Potential of Prompt Engineering in Large Language Models: A Comprehensive Review,” ArXiv. June. https://arxiv.org/abs/2310.14735v4 .
  62. Sahoo, Pranab, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and Aman Chadha. 2024. “A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications.” ArXiv, February. https://arxiv.org/abs/2402.07927v1 .
  63. Walter, Yoshija. 2024. “Embracing the Future of Artificial Intelligence in the Classroom: The Relevance of AI Literacy, Prompt Engineering, and Critical Thinking in Modern Education.” International Journal of Educational Technology in Higher Education 21 (1): 15–29.
  64. Annapureddy, Ravinithesh, Alessandro Fornaroli, and Daniel Gatica-Perez. 2024. “Generative AI Literacy: Twelve Defining Competencies.” Digital Government (New York, N.Y. Online).
  65. Schmidt, Douglas C., Jesse Spencer-Smith, Quchen Fu, and Jules White. 2024. “Towards a Catalog of Prompt Patterns to Enhance the Discipline of Prompt Engineering.” ACM SIGAda Ada Letters 43 (2): 43–51. https://doi.org/10.1145/3672359.3672364 .
  66. Bhatia, Aatish. 2024. “When A.I.’s Output Is a Threat to A.I. Itself.” The New York Times. August 25, 2024. https://www.nytimes.com/interactive/2024/08/26/upshot/ai-synthetic-data.html .
  67. Pamula, V. 2023. "An Introduction to LLMOps: Operationalizing and Managing Large Language Models using Azure ML." Microsoft TechCommunity. (2023, August 27). https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/an-introduction-to-llmops-operationalizing-and-managing-large/ba-p/3910996 .
  68. Mehrotra, Siddharth, Chadha Degachi, Oleksandra Vereschak, Catholijn M. Jonker, and Myrthe L. Tielman. 2023. “A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction.” ArXiv. (November 8, 2023). https://doi.org/10.48550/arXiv.2311.06305 .
  69. OpenAI. 2024. Reasoning guide. OpenAI. (2024, September). https://platform.openai.com/docs/guides/reasoning .
  70. Chiu, Thomas K.F., Zubair Ahmad, Murod Ismailov, and Ismaila Temitayo Sanusi. 2024. “What Are Artificial Intelligence Literacy and Competency? A Comprehensive Framework to Support Them.” Computers and Education Open 6 (June): 100171. https://doi.org/10.1016/J.CAEO.2024.100171 .
  71. Zhai, Chunpeng, Santoso Wibowo, and Lily D. Li. 2024. “The Effects of Over-Reliance on AI Dialogue Systems on Students’ Cognitive Abilities: A Systematic Review.” Smart Learning Environments 11 (1). https://doi.org/10.1186/S40561-024-00316-7 .
  72. Asked to assess this quotation, ChatGPT-4o agreed that collaboration is a diffusion of responsibility and sought to explain what might have motivated the concern. Prompt: "Please consider the following quotation: 'as LLM-based AI systems are seen as a teammate and companions for collaboration and task-solving ... it could be helpful in higher education to impart knowledge about the functioning of generative AI to leverage the opportunities of AI-based tools, and, at the same time, preventing increasing anthropomorphizing and potentially coming with that, diffusion of responsibility'. Isn't collaboration inherently based on a 'diffusion of responsibility'? Can you first give a yes or no answer to that question, then explain your reasoning?" ChatGPT: "Yes. Collaboration inherently involves a diffusion of responsibility because tasks and decisions are shared among multiple participants. In a collaborative setting, different team members, or in this case, AI and humans, contribute to solving problems and achieving goals. The responsibility for the outcome is distributed across the team, with each party playing a role in the process. However, the concern raised in the quotation is likely about the nature of responsibility when AI is anthropomorphized. If people view AI as fully autonomous agents, they might offload too much responsibility to the AI, blurring accountability, especially if the AI makes mistakes or ethical judgments. Thus, while collaboration does involve shared responsibility, the distinction lies in ensuring that humans maintain clear accountability and don't mistakenly attribute too much agency to AI."
  73. Sandry, Eleanor. 2024. “Anthropomorphism and Its Discontents.” In Handbook on the Ethics of Artificial Intelligence, edited by David J. Gunkel. Cheltenham, UK: Edward Elgar Publishing.
  74. Boland, Joseph. 2024. "Anthropomorphism and AI." LinkedIn. September 2, 2024. https://www.dhirubhai.net/pulse/anthropomorphism-ai-joseph-boland-2qkqc/ .
  75. Yan, Lixiang, Samuel Greiff, Ziwen Teuber, and Dragan Gasevic. 2024. “Promises and Challenges of Generative Artificial Intelligence for Human Learning.” ArXiv, August. https://arxiv.org/abs/2408.12143v3 .
  76. Kaggwa, Simon, Oluwatoyin Farayola, Tobechukwu Francisa Eleogu, Franciscamary Okonkwo, Oluwatoyin Ajoke Farayola, Prisca Ugomma Uwaoma, and Abiodun Akinoso. 2023. “AI in Decision Making: Transforming Business Strategies.” International Journal of Research and Scientific Innovation 10 (12). https://doi.org/10.51244/IJRSI.2023.1012032 .
  77. Bankins, Sarah, Anna Carmella Ocampo, Mauricio Marrone, Simon Lloyd D. Restubog, and Sang Eun Woo. 2024. “A Multilevel Review of Artificial Intelligence in Organizations: Implications for Organizational Behavior Research and Practice.” Journal of Organizational Behavior 45 (2): 159–82. https://doi.org/10.1002/JOB.2735 .
  78. Sadeghi, Sina, Richard B. Canty, Nikolai Mukhin, Jinge Xu, Fernando Delgado-Licona, and Milad Abolhasani. 2024. “Engineering a Sustainable Future: Harnessing Automation, Robotics, and Artificial Intelligence with Self-Driving Laboratories.” ACS Sustainable Chemistry and Engineering 12 (August): 12695–707. https://doi.org/10.1021/ACSSUSCHEMENG.4C02177/ASSET/IMAGES/LARGE/SC4C02177_0006.JPEG .
  79. Boussioux, Léonard, Jacqueline N. Lane, Miaomiao Zhang, Vladimir Jacimovic, and Karim R. Lakhani. 2024. “The Crowdless Future? Generative AI and Creative Problem-Solving.” Organization Science, August. https://doi.org/10.1287/ORSC.2023.18430 .
  80. Formosa, Paul, Bhanuraj Kashyap, and Siavosh Sahebi. 2024. “Generative AI and the Future of Democratic Citizenship.” Digital Government (New York, N.Y. Online).
  81. Sedkaoui, Soraya, and Rafika Benaichouba. 2024. “Generative AI as a Transformative Force for Innovation: A Review of Opportunities, Applications and Challenges.” European Journal of Innovation Management. https://doi.org/10.1108/EJIM-02-2024-0129/FULL/XML .
  82. Che, Chang, Zengyi Huang, Chen Li, Haotian Zheng, and Xinyu Tian. 2024. “Integrating Generative AI into Financial Market Prediction for Improved Decision Making.” Applied and Computational Engineering 64 (1): 155–61. https://doi.org/10.54254/2755-2721/64/20241376 .
  83. Bragazzi, Nicola Luigi, and Sergio Garbarino. 2024. “Toward Clinical Generative AI: Conceptual Framework.” JMIR AI 3 (1): e55957. https://doi.org/10.2196/55957 .
  84. Bilgram, Volker, and Felix Laarmann. 2023. “Accelerating Innovation With Generative AI: AI-Augmented Digital Prototyping and Innovation Methods.” IEEE Engineering Management Review 51 (2): 18–25. https://doi.org/10.1109/EMR.2023.3272799 .
  85. Rosignoli, Andrea, and Raphael Geyer. 2024. “The Influence of Generative AI on Creativity in the Front End of Innovation.” Jönköping University.
  86. Williams, Sarah, Sara Beery, Christopher Conley, Michael Lawrence Evans, Santiago Garces, Eric Gordon, Nigel Jacob, et al. 2024. “People-Powered Gen AI: Collaborating with Generative AI for Civic Engagement.” MIT. 2024. https://doi.org/10.21428/E4BAEDD9.F78710E6 .
  87. Tsai, Lily L., Alex Pentland, Alia Braley, Nuole Chen, José Ramón Enríquez, and Anka Reuel. 2024. “Generative AI for Pro-Democracy Platforms.” An MIT Exploration of Generative AI, March. https://doi.org/10.21428/E4BAEDD9.5AAF489A .
  88. Evmenova, Anya S, Jered Borup, and Joan Kang Shin. 2024. “Harnessing the Power of Generative AI to Support ALL Learners.” TechTrends 68 (4): 820–31.
  89. Tseng, Yu-Ching, and Yi-Hsuan Lin. 2024. “Enhancing English as a Foreign Language (EFL) Learners’ Writing with ChatGPT: A University-Level Course Design.” The Electronic Journal of E-Learning 22 (2): 78–97. https://doi.org/10.34190/ejel.21.5.3329 .
  90. Noroozi, Omid, Saba Soleimani, Mohammadreza Farrokhnia, and Seyyed Kazem Banihashem. 2024. “Generative AI in Education: Pedagogical, Theoretical, and Methodological Perspectives.” International Journal of Technology in Education (IJTE). https://doi.org/10.46328/ijte.845 .
  91. Beltran, Marco Antonio, Ivette Ruiz Mondragon, and Seung Hum Han. 2024. “Comparative Analysis of Generative AI Risks in the Public Sector.” ACM International Conference Proceeding Series, June, 610–17. https://doi.org/10.1145/3657054.3657125 .
  92. Miao, Fengchun, and Mutlu Cukurova. 2024. “AI Competency Framework for Teachers.” UNESCO. https://doi.org/10.54675/ZJTE2084 .
  93. Chen, Pearl. 2021. “Expert Q&A: Democratize AI for All Lines of Business and Speed Time to Value.” March 16, 2021. https://www.ibm.com/blog/expert-qa-democratize-ai/ .
  94. Crafts, Nicholas. 2021. “Artificial Intelligence as a General-Purpose Technology: An Historical Perspective.” Oxford Review of Economic Policy 37 (3): 521–36. https://doi.org/10.1093/OXREP/GRAB012 .
  95. Horwitch, Mel. 2024. “The AI Challenge for National Technology Strategy,” September, 1–11. https://doi.org/10.23919/PICMET64035.2024.10653430 .
  96. Lee, Keun-woo, Kelly Mills, Pati Ruiz, Merijke Coenraad, Judi Fusco, Jeremy Roschelle, and Josh Weisgrau. 2024. “AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology.” Digital Promise. June 18, 2024. https://digitalpromise.org/2024/06/18/ai-literacy-a-framework-to-understand-evaluate-and-use-emerging-technology/ .
