The Rise of AI in Medical Diagnosis: Redefining Expertise

Two recent papers demonstrate that new large language models (LLMs) from OpenAI, GPT-4 and o1-preview, are capable of superior clinical reasoning, resulting in diagnostic performance exceeding that of human physicians. These findings, reviewed below, indicate that medical diagnosis is becoming a collaborative effort of medical personnel and AI. This is important because diagnostic errors—failures to establish an accurate and timely explanation of a patient’s health problem or to communicate that explanation to the patient—are a serious problem nationally and globally. Estimates put the rate of error at 4.3 to 15 percent, with roughly half being potentially harmful [13, 14, 15, 16].

Because AI systems could contribute to major improvements in diagnosis and treatment, including a reduction in errors, more timely diagnoses, and the extension of diagnostic resources to far more people, it is essential to proactively recognize these capabilities and plan for their evolution in order to maximize their social benefits.

Goh et al., in "Large Language Model Influence on Diagnostic Reasoning" [1], and Brodeur et al., in "Superhuman performance of a large language model on the reasoning tasks of a physician" [2], evaluate the reasoning abilities of LLMs, not simply the diagnostic results of using them. This reflects the fact that "clinical practice requires real-time complex multi-step reasoning processes, constant adjustments based on new data from multiple sources, iteratively refining differential diagnoses and management plans, and making consequential treatment decisions under uncertainty" [2]. And it anticipates a future in which physicians and other diagnosticians collaborate with LLMs throughout clinical procedures, making it essential that LLMs follow and actively contribute to the development of diagnoses and revise provisional assessments in light of new information.

Evaluating Physician-AI Collaboration: ChatGPT’s Diagnostic Skills Surpass Human Experts

Goh et al.'s study used GPT-4 [6] to assess two key aspects of an LLM's potential role in diagnosis:

  1. The effect on physician diagnostic performance of using an LLM.
  2. The comparative diagnostic reasoning ability and final accuracy of an LLM and physicians.

Physician participants were randomized to either access the LLM in addition to conventional diagnostic resources or to use conventional resources only. They were allocated 60 minutes to review up to 6 clinical vignettes, and were required to:

  • List their top three differential diagnoses.
  • Identify findings in the case that supported or opposed each diagnosis.
  • Rank the diagnoses by likelihood.
  • Propose up to three next diagnostic steps to evaluate the patient further.

This framework was designed to promote the practice of deliberate reflection, which has been found to improve physicians' diagnostic performance, particularly in complex diagnostic tasks [3].

Consistent with this, the study's primary outcome was performance on differential diagnosis accuracy, appropriateness of supporting and opposing factors, and next diagnostic evaluation steps. Secondary outcomes included time spent per case and final diagnosis accuracy.

ChatGPT (GPT-4) was tested using the same structured framework as the physician participants. Clinical vignettes were provided to the model with structured prompts designed to elicit diagnostic reasoning aligned with the study's framework. The authors were careful to use cases that were not included in ChatGPT's pretraining: "The cases have never been publicly released to protect the validity of the test materials for future use and therefore are excluded from training data of the LLM" [1].

ChatGPT was asked to list differential diagnoses, identify supporting and opposing findings, rank diagnoses by likelihood, and propose next diagnostic steps. Its responses were evaluated using the same scoring rubric applied to the physicians, allowing for direct comparison of performance on diagnostic reasoning and deliberate reflection.
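
Although the exact prompts were not published with the paper, the study's framework translates naturally into an API call. Below is a minimal sketch, assuming the openai Python client; the system-free prompt structure, vignette placeholder, and model snapshot are illustrative assumptions, not the study's actual materials.

```python
# Minimal sketch of a structured diagnostic-reasoning prompt modeled on the
# study's framework. The exact prompts used by Goh et al. were not published;
# the vignette text and model snapshot here are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

vignette = "..."  # the clinical vignette text would go here

prompt = f"""Review the following case.

{vignette}

Please provide:
1. Your top three differential diagnoses.
2. For each diagnosis, the findings in the case that support or oppose it.
3. The diagnoses ranked by likelihood, with brief justification.
4. Up to three next diagnostic steps to further evaluate the patient.
"""

response = client.chat.completions.create(
    model="gpt-4",  # the study used GPT-4; the exact snapshot is not specified
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```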

Results

The results were unexpected and remarkable. The median diagnostic reasoning score per case was:

  • 76% — Physicians with LLM
  • 74% — Physicians with conventional resources
  • 92% — LLM alone

ChatGPT scored 18 percentage points higher than did physicians using conventional resources (the control group in this study) and 16 percentage points better than the physician + LLM group. And while the small score difference between the two physician groups was not statistically significant, the difference between the control group and ChatGPT was. As Jonathan Chen, one of the authors, put it in a subsequent interview,

The chatbot by itself did surprisingly better than all of the doctors, including the doctors that accessed the chatbot. That flew in the face of the fundamental theorem of informatics: human plus computer will deliver better results than either would alone [4].

These findings highlight the practical potential of LLMs to support physicians in refining diagnostic reasoning, particularly in complex cases, while underscoring the importance of fostering effective human-AI collaboration to maximize these benefits.

Brodeur et al. extend these findings by evaluating the more advanced o1-preview model. They show not only significant improvements in diagnostic accuracy and reasoning but also the potential for LLMs to independently perform at levels surpassing human physicians in key areas of clinical decision-making.

Continuing Advances in LLM Clinical Reasoning: o1-Preview

The study by Brodeur et al. evaluated the performance of the newer o1-preview model across five dimensions of diagnosis: differential diagnosis, diagnostic reasoning, triage differential diagnosis, probabilistic reasoning, and management reasoning [2]. Remarkably, the study was completed and published only three months after o1-preview's release, underscoring both the urgency of evaluating cutting-edge AI tools and the efficiency of the research team.

Results

Differential diagnosis. The study found that o1-preview included the correct diagnosis in its differential in 78.3% of cases, significantly exceeding the previous LLM result of 72.9% by GPT-4 [10] and far above the human clinician result of 33.6% reported by Google researchers [9]. It should be noted, however, that the Google result was obtained when clinicians had to provide a differential diagnosis "based solely on review of the case presentation without using any reference materials" [9]. When they were permitted to use reference tools the percentage of differentials with the correct diagnosis rose to 44.5%.

Figure 1. Bar plot showing the accuracy of including the correct diagnosis in the differential for differential diagnosis (DDx) generators and LLMs on the NEJMCPCs [2].

The ability of o1-preview to consistently include the correct diagnosis in its differential demonstrates the potential for LLMs to reduce cognitive load for physicians and improve the accuracy of assessments.

Diagnostic reasoning. Perhaps the most startling result was o1-preview's score on diagnostic reasoning. In 78 of 80 cases, o1-preview achieved a perfect R-IDEA score. This compared favorably to GPT-4 (47/80), attending physicians (28/80), and resident physicians (16/80), as shown in Figure 2. R-IDEA is a recently developed instrument for evaluating diagnostic reasoning on a ten-point scale across four components: interpretive summary, differential diagnosis, explanation of the lead diagnosis, and explanation of alternative diagnoses. This comprehensive and standardized framework ensures a holistic assessment of reasoning quality, making it an essential tool for comparing the diagnostic capabilities of clinicians and AI systems.

Figure 2. Comparison of o1-preview, GPT-4 and physicians for clinical diagnostic reasoning (distribution of R-IDEA scores on NEJM Healer Cases) [2].

In addition, Brodeur et al. replicated Goh's study of diagnostic reasoning, showing that o1-preview's performance was even better than the 92% reported by Goh. However, perhaps because Brodeur only asked o1-preview for one response per case—in contrast to the three obtained by Goh—Brodeur was unable to demonstrate statistical significance.

Figure 3. Box plot of normalized diagnostic reasoning points by model and physicians. Six diagnostic challenges were included. We generated one o1-preview response for each case. The prior study collected three GPT-4 responses to all cases, 25 responses from physicians with access to GPT-4, and 25 responses from physicians with access to conventional resources. Ns: not statistically significant [2].

The exceptional performance of o1-preview in diagnostic reasoning underscores the potential of LLMs to transform clinical decision-making by providing accurate, structured, and comprehensive analyses of complex cases. By augmenting physicians’ cognitive processes, LLMs can reduce diagnostic errors and improve the efficiency of care delivery.

Management reasoning. The study used clinical vignettes based on real cases, drawn from a previous study that evaluated GPT-4's performance. The cases were presented to the physicians and to o1-preview, followed by a series of questions about next steps in management. The median score for o1-preview was 86%, compared to 42% for GPT-4, 41% for physicians with access to GPT-4, and 34% for physicians with conventional resources. This is an extraordinary improvement by o1-preview over its predecessor and far above physician performance.

In another management-related study, o1-preview was asked to select the next test to perform. In 87.5% of cases o1-preview selected the correct test, and in another 11% of cases it selected a helpful test, based on an assessment by two physicians. Only in 1.5% of cases was the selected test considered unhelpful.

These outstanding results in management reasoning are in contrast to more modest, though still impressive, results from another study by Goh et al., "Large Language Model Influence on Management Reasoning" [12]. In this study, the authors attempted to emulate the inherently fuzzy nature of management reasoning, "which encompasses decision making around treatment, testing, patient preferences, social determinants of health, and cost-conscious care, all while managing risk". Their results showed that physicians using an LLM (GPT-4) performed moderately better than physicians relying on conventional resources:

  • Physicians using the LLM vs. control group: 43.0% vs. 35.7% (difference: 6.5%, 95% CI: 2.7% to 10.2%, p < 0.001).
  • LLM by itself vs. physicians using the LLM: 43.7% vs. 43.0% (difference: 0.9%, 95% CI: -7.2% to 9.0%, p = 0.80).
  • LLM by itself vs. physicians using conventional resources: 43.7% vs. 35.7% (difference: 7.3%, 95% CI: -0.7% to 15.4%, p = 0.074).

Only the first of these results was statistically significant. And there is no way to disentangle the extent to which differences in scores between Brodeur et al. and Goh et al. reflect differences in scoring rubrics, statistical methods, or the underlying abilities of the LLMs.
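
The papers report these contrasts as differences with 95% confidence intervals derived from mixed-effects models fit to rubric scores. For intuition only, here is a minimal sketch of how such a difference-with-CI comparison can be approximated with a bootstrap; the score arrays are synthetic placeholders, not study data.

```python
# Illustrative difference-in-means comparison with a bootstrap 95% CI.
# Goh et al. used mixed-effects models; this simpler sketch is for intuition
# only, and the scores below are synthetic placeholders, not study data.
import numpy as np

rng = np.random.default_rng(0)
llm_scores = rng.normal(43.7, 12.0, size=50)      # placeholder per-case scores (%)
control_scores = rng.normal(35.7, 12.0, size=50)  # placeholder per-case scores (%)

observed = llm_scores.mean() - control_scores.mean()

boot = []
for _ in range(10_000):
    a = rng.choice(llm_scores, size=llm_scores.size, replace=True)
    b = rng.choice(control_scores, size=control_scores.size, replace=True)
    boot.append(a.mean() - b.mean())

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"difference: {observed:.1f} points (95% CI {lo:.1f} to {hi:.1f})")
```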

The substantial improvement in management reasoning by o1-preview highlights its potential to guide clinicians in making more informed decisions about next steps in patient care. By suggesting appropriate diagnostic tests and treatments with high accuracy, LLMs can complement physicians’ expertise, especially in complex or uncertain scenarios, fostering a more integrated approach to decision-making in clinical practice.

Triage differential diagnosis. The model was tested on clinical scenarios where prioritizing "cannot-miss" conditions (e.g., life-threatening illnesses) was essential. It demonstrated strong performance in recognizing and including critical diagnoses in its differentials, reinforcing its potential utility in triage scenarios where identifying urgent conditions is paramount. By helping to ensure that life-threatening conditions are not overlooked, LLMs can not only enhance the speed and precision of clinical workflows but also potentially save lives through timely and accurate prioritization.

Probabilistic reasoning. One sub-study compared o1-preview with GPT-4 and human subjects (the latter two from a previous study) on the estimation of pre- and post-test probabilities, with the "true range" defined based on expert guidelines. The model showed room for improvement in probabilistic reasoning compared to its other capabilities. While o1-preview performed well in identifying patterns, its quantitative estimates of risk and probability were less reliable, suggesting that it may suffer from some of the same issues that affect medical personnel, including overestimation of the accuracy of diagnostic tests, anchoring bias, and a flawed understanding of conditional (Bayesian) probability.
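
The arithmetic at issue is Bayes' theorem applied to test interpretation. A short worked example follows; the sensitivity, specificity, and pre-test probability are illustrative assumptions for this sketch, not figures from the study.

```python
# Worked example of the conditional (Bayesian) updating that the
# probabilistic-reasoning sub-study targets. Sensitivity, specificity, and
# pre-test probability are illustrative assumptions, not study values.

def post_test_probability(pre_test: float, sensitivity: float,
                          specificity: float, positive: bool) -> float:
    """Update a pre-test probability for a disease given a test result."""
    if positive:
        # P(D|+) = P(+|D)P(D) / [P(+|D)P(D) + P(+|not D)P(not D)]
        num = sensitivity * pre_test
        den = num + (1 - specificity) * (1 - pre_test)
    else:
        # P(D|-) = P(-|D)P(D) / [P(-|D)P(D) + P(-|not D)P(not D)]
        num = (1 - sensitivity) * pre_test
        den = num + specificity * (1 - pre_test)
    return num / den

# A test with 90% sensitivity and 80% specificity, applied at a 10% pre-test
# probability, yields only about a 33% post-test probability after a positive
# result -- far lower than unaided intuition typically suggests.
print(round(post_test_probability(0.10, 0.90, 0.80, positive=True), 3))
```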

If LLMs achieve significant advances in probabilistic reasoning, they could revolutionize clinical decision-making by improving risk stratification, enhancing the interpretation of diagnostic tests, and supporting more nuanced, evidence-based patient care.

Limitations

Study design choices by Brodeur et al. raise concerns about some of their findings, as explained below. Although there are reasons to believe that the findings are an accurate reflection of enhanced reasoning by o1-preview, it would be desirable for a follow-up study to correct these shortcomings.

Data Contamination

In some instances, Brodeur et al. failed to adequately control for the possibility that o1-preview's pretraining included data about cases included in the study. Contamination can distort findings in several respects:

  • Artificial Inflation of Performance: If the model has been trained on the exact cases or their associated assessments, it might "memorize" the answers instead of applying general reasoning. This would overstate its real-world clinical reasoning abilities.
  • Relevance to Clinical Utility: In real-world settings, the model needs to generalize to unseen cases. If contamination inflates performance on benchmark tasks, it could mislead developers and practitioners about its actual reliability.
  • Trustworthiness of Results: Transparency about contamination risk is critical for ensuring confidence in findings, especially for a tool that could influence medical decision-making.

Brodeur et al. did perform a sensitivity analysis on the cases used for the differential diagnosis portion of the study. They compared the model's performance on cases published before and after o1-preview's pretraining cutoff date and found no statistically significant difference in diagnostic reasoning. In addition, the cases used by Goh et al. and re-used by Brodeur et al. to replicate Goh's diagnostic reasoning study were shielded from exposure and thus excluded from LLM pretraining.
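
A contamination check of this kind reduces, in essence, to comparing score distributions for cases published before and after the pretraining cutoff. Here is a minimal sketch of such a comparison using a Mann-Whitney U test on placeholder scores; the authors' actual test statistic and data may differ.

```python
# Sketch of a contamination sensitivity analysis: compare performance on
# cases published before vs. after the model's pretraining cutoff. Scores
# are synthetic placeholders; Brodeur et al.'s actual analysis may differ.
from scipy.stats import mannwhitneyu

pre_cutoff = [0.90, 0.80, 1.00, 0.70, 0.90, 0.85]  # cases the model may have seen
post_cutoff = [0.85, 0.90, 0.80, 0.95, 0.75]       # cases it cannot have seen

stat, p = mannwhitneyu(pre_cutoff, post_cutoff, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
# A large p-value is consistent with no detectable contamination effect,
# though it cannot prove the absence of one.
```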

Small Sample Size

The study used only five cases to evaluate management reasoning. While this may have been motivated by the fact that the previous study of GPT-4 also used five cases, it means that

  • The results may not fully capture variability in the LLM's performance, and
  • It doesn't align with the much larger sample of physician responses (176), which can be expected to capture a wider range of real-world variability in reasoning and decision-making.

This problem also affected the "Cannot Miss" sub-study, where a relatively small sample size combined with high performance across groups made it impossible to find statistically significant differences among groups.
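
A rough power calculation shows why such small samples are problematic: even a large effect is unlikely to reach statistical significance with five cases per group. The effect size below is an assumption for illustration, not a figure from either study.

```python
# Approximate power of a two-sample t-test with five cases per group for a
# large standardized effect. The effect size is an illustrative assumption.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=1.0, nobs1=5, ratio=1.0, alpha=0.05)
print(f"power with 5 cases per group: {power:.2f}")  # roughly 0.3

n_needed = analysis.solve_power(effect_size=1.0, power=0.8, alpha=0.05)
print(f"cases per group for 80% power: {n_needed:.0f}")  # about 17
```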

Implications & Recommendations

The results of Goh et al. and Brodeur et al. demonstrate the transformative potential of LLMs in clinical diagnostics and raise urgent questions about how AI can best be integrated into clinical workflows to ensure that human-AI collaboration is optimized for reducing diagnostic errors and improving patient outcomes.

There are a number of issues that must be resolved in order for wider adoption of AI to be both effective and accepted. Ullah et al., authors of a recent review of the potential of LLMs for diagnostic medicine in digital pathology, identified "several challenges and barriers associated with the use of LLMs... These included limitations in contextual understanding and interpretability, biases in training data, ethical considerations, impact on healthcare professionals, and regulatory concerns" [18]. While the Goh et al. and Brodeur et al. studies provide evidence of lucid contextual understanding, they were not designed to address the other challenges cited by Ullah et al.

Nevertheless, rapid advances in the capabilities of LLMs and other types of AI are likely to continue or accelerate, along with improvements in the technical, organizational, and sociological aspects of its integration into healthcare systems. It is critical that medical researchers and healthcare institutions adapt. To their credit, the authors of the Goh and Brodeur papers understand the need to both apply and develop new measures of the diagnostic and management capabilities of AI. By doing so, largely successfully, they have shown that LLMs surpass human performance in differential diagnosis, diagnostic reasoning, triage assessment, and some aspects of management reasoning. And they have shown that expectations of human-AI collaboration based on treating AI as a conventional resource are likely to founder.

Unfortunately, skepticism about the evolution and integration of AI may impede adaptation. Ranji, for example, imagines that LLMs will not be able to cope with the "iterative—and complicated" process of diagnosis in a clinical setting:

There are reasons to be skeptical that the performance of LLMs on simulated cases can generalize to the clinical practice setting environment. The [Goh] study’s cases were representative of common general practice diagnoses but are presented in an orderly fashion with the relevant history, physical examination, laboratory, and imaging results necessary to construct a prioritized differential diagnosis. Diagnosis in the clinical setting is an iterative—and complicated—process that takes place amid many competing demands and requires input from the patient, caregivers, and multiple clinicians in addition to objective data. Far from a linear process, diagnosis in the clinical practice setting involves progressively refining diagnoses based on new information, and the distinction between diagnosis and treatment is often blurred as clinicians incorporate treatment response into diagnostic reasoning [17].

Ranji and other skeptics seem not to grasp the implications of AI development trends. The research by Goh et al. and Brodeur et al. shows that LLMs have already become powerful, flexible, and accurate diagnostic reasoners. Ironically, Ranji's skepticism is best understood as describing development goals, goals likely to be achieved in the near future.

In this regard, the composition of the Goh et al. and Brodeur et al. author lists likely reflects a strategic effort to influence the field. For example:

  • Robert Gallo is well-known for groundbreaking work in medical diagnostics and imaging, lending significant credibility to any research in this domain.
  • Eric Horvitz, a leader in computational medicine, brings expertise in integrating AI into clinical workflows, signaling the high-level applicability and importance of the findings.
  • Benjamin Morgan Scirica, Associate Professor at Harvard Medical School and director of quality initiatives in the Cardiovascular Division at Brigham and Women's Hospital (BWH), bridges clinical expertise with research on AI-driven diagnostics.
  • Arjun K. Manrai, Assistant Professor of Biomedical Informatics at Harvard Medical School, leads a research lab that works on applying machine learning and statistical modeling to improve medical decision-making.
  • Arnold Milstein, Professor of Medicine at Stanford University and Director of the Clinical Excellence Research Center (CERC), chaired a National Academy of Sciences planning committee on the best methods to lower per capita health care spending and improve clinical outcomes.

The combination of prominent names, distinguished institutional affiliations, and cross-disciplinary expertise among the authors suggests that the aim is not only to present research findings but also to shift perspectives on the role of AI in clinical practice and accelerate its integration into healthcare systems.

Recommendations

Realizing the potential of LLMs will require proactive strategies to integrate AI into healthcare effectively while addressing limitations and ensuring human oversight. The following recommendations aim to guide clinicians, healthcare organizations, and policymakers in navigating this new landscape.

  1. AI Literacy and Prompt Training. Invest in AI literacy programs for clinicians at all career stages to ensure that healthcare professionals can critically assess LLM outputs and integrate them effectively into patient care [19]. Clinicians should be trained in best practices for prompt formulation, enabling them to interact effectively with LLMs and maximize their diagnostic potential. Healthcare organizations could integrate standardized prompts into clinical workflows to streamline AI-assisted decision-making and ensure consistent application.
  2. Enhance LLM Interactions. Refining LLM user interfaces to highlight inconsistencies in differential diagnoses or conflicting evidence can enhance diagnostic accuracy and encourage reflective reasoning.
  3. Educational and Practice Redesign. Medical education and clinical practice frameworks must evolve to integrate AI technologies like LLMs, emphasizing collaborative workflows that harness both human expertise and AI capabilities for optimal patient care.
  4. Human Oversight. LLMs should complement, not replace, physicians in the diagnostic process. Human oversight is essential to address broader clinical competencies such as patient communication, contextual understanding, and ethical considerations, ensuring high-quality, patient-centered care.
  5. Output monitoring and auditing. While explainability is often cited as essential for AI systems, in the case of LLMs like o1-preview, their overt reasoning processes reduce the need for extensive introspection into internal logic. Instead, robust monitoring and periodic auditing of diagnostic outputs—for accuracy, equity, and adherence to standards—would ensure that these systems deliver safe, effective, and ethical patient care. (A minimal auditing sketch follows this list.)
  6. Support Multidisciplinary Collaboration. Promote collaboration among clinicians, AI developers, ethicists, and policymakers to co-design LLM systems that align with clinical workflows and address ethical, practical, and cultural challenges in healthcare environments.
  7. Further Research. Large-scale clinical trials are needed to evaluate the real-world impact of LLMs on patient outcomes and healthcare workflows. These studies should explore not only diagnostic reasoning but also tasks like patient data collection, summarization, and dynamic adaptation in clinical settings.
  8. Ethical and Regulatory Frameworks. Governments, citizens, and stakeholders should establish ethical and regulatory frameworks tailored to the use of LLMs in medical practice. These frameworks must address data privacy, accountability for errors, and the equitable distribution of AI resources to avoid exacerbating disparities in healthcare access.
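
To make recommendation 5 concrete, here is a minimal sketch of output auditing: tallying the accuracy of logged diagnostic outputs overall and by patient subgroup to surface potential equity gaps. The record format, subgroup labels, and review threshold are assumptions for illustration, not an established auditing standard.

```python
# Minimal output-auditing sketch: accuracy overall and by patient subgroup,
# flagging subgroups that deviate from the overall rate. Record format,
# subgroup labels, and the 10-point threshold are illustrative assumptions.
from collections import defaultdict

# Each logged record: (patient_subgroup, model_diagnosis_was_correct)
audit_log = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

tallies = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
for subgroup, correct in audit_log:
    tallies[subgroup][0] += int(correct)
    tallies[subgroup][1] += 1

overall = sum(c for c, _ in tallies.values()) / sum(t for _, t in tallies.values())
print(f"overall accuracy: {overall:.2f}")
for subgroup, (correct, total) in sorted(tallies.items()):
    rate = correct / total
    flag = "  <-- review" if abs(rate - overall) > 0.10 else ""
    print(f"{subgroup}: {rate:.2f}{flag}")
```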

References

  1. Goh, E., Gallo, R., Hom, J., Strong, E., Weng, Y., Kerman, H., Cool, J. A., Kanjee, Z., Parsons, A. S., Ahuja, N., Horvitz, E., Yang, D., Milstein, A., Olson, A. P. J., Rodman, A., & Chen, J. H. (2024). Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial. JAMA Network Open, 7(10), e2440969–e2440969. https://doi.org/10.1001/JAMANETWORKOPEN.2024.40969.
  2. Brodeur, P. G., Buckley, T. A., Kanjee, Z., Goh, E., Ling, E. bin, Jain, P., Cabral, S., Abdulnour, R.-E., Haimovich, A., Freed, J. A., Olson, A., Morgan, D. J., Hom, J., Gallo, R., Horvitz, E., Chen, J., Manrai, A. K., & Rodman, A. (2024). Superhuman performance of a large language model on the reasoning tasks of a physician. ArXiv. https://arxiv.org/abs/2412.10849v1.
  3. Mamede, S., & Schmidt, H. G. (2023). Deliberate reflection and clinical reasoning: Founding ideas and empirical findings. Medical Education, 57(1), 76–85. https://doi.org/10.1111/MEDU.14863.
  4. Hswen, Y., Rubin, R., & Chen, J. (2024). An AI Chatbot Outperformed Physicians and Physicians Plus AI in a Trial—What Does That Mean? JAMA. https://doi.org/10.1001/JAMA.2024.23860.
  5. OpenAI. (2023). Function calling and other API updates. OpenAI. Retrieved December 28, 2024, from https://openai.com/index/function-calling-and-other-api-updates/.
  6. OpenAI released GPT-4 in March 2023 and updated it in June 2023 [5]. It's likely the Goh study used the updated version.
  7. OpenAI. (2024). Introducing OpenAI o1-preview. Retrieved December 28, 2024, from https://openai.com/index/introducing-openai-o1-preview/.
  8. OpenAI. (2024). OpenAI o1 System Card. Retrieved December 28, 2024, from https://openai.com/index/openai-o1-system-card/.
  9. McDuff, D., Schaekermann, M., Tu, T., Palepu, A., Wang, A., Garrison, J., Singhal, K., Sharma, Y., Azizi, S., Kulkarni, K., Hou, L., Cheng, Y., Liu, Y., Mahdavi, S. S., Prakash, S., Pathak, A., Semturs, C., Patel, S., Webster, D. R., … Natarajan, V. (2023). Towards Accurate Differential Diagnosis with Large Language Models. ArXiv. https://arxiv.org/abs/2312.00164v1.
  10. Kanjee, Z., Crowe, B., & Rodman, A. (2023). Accuracy of a Generative Artificial Intelligence Model in a Complex Diagnostic Challenge. JAMA, 330(1), 78–80. https://doi.org/10.1001/JAMA.2023.8288.
  11. Schaye, V., Miller, L., Kudlowitz, D., Chun, J., Burk-Rafel, J., Cocks, P., Guzman, B., Aphinyanaphongs, Y., & Marin, M. (2022). Development of a Clinical Reasoning Documentation Assessment Tool for Resident and Fellow Admission Notes: a Shared Mental Model for Feedback. Journal of General Internal Medicine, 37(3), 507–512. https://doi.org/10.1007/S11606-021-06805-6/FIGURES/2.
  12. Goh, E., Gallo, R., Strong, E., Weng, Y., Kerman, H., Freed, J., Cool, J. A., Kanjee, Z., Lane, K. P., Parsons, A. S., Ahuja, N., Horvitz, E., Yang, D., Milstein, A., Olson, A. P. J., Hom, J., Chen, J. H., & Rodman, A. (2024). Large Language Model Influence on Management Reasoning: A Randomized Controlled Trial. MedRxiv, 2024.08.05.24311485. https://doi.org/10.1101/2024.08.05.24311485.
  13. For example, a 2015 Institute of Medicine study cites Singh's meta-analysis of outpatient clinic errors: "The combined estimate of diagnostic error based on these three datasets was about 5 percent. Extrapolating to the entire U.S. population, Singh et al. (2014) estimated that approximately 12 million adults (or 1 in 20 adults) experience a diagnostic error each year; the researchers suggested that about half of these errors could be potentially harmful" [14].
  14. Institute of Medicine (U.S.). Committee on Diagnostic Error in Health Care, issuing body., Balogh, E., Miller, B. T., & Ball, J. (2015). Improving diagnosis in health care. The National Academies Press.
  15. Singh, H., Connor, D. M., & Dhaliwal, G. (2022). Five strategies for clinicians to advance diagnostic excellence. BMJ, 376. https://doi.org/10.1136/BMJ-2021-068044.
  16. Graber, M. L. (2013). The incidence of diagnostic error in medicine. BMJ Quality & Safety, 22(Suppl 2), ii21–ii27. https://doi.org/10.1136/BMJQS-2012-001615.
  17. Ranji, S. R. (2024). Large Language Models—Misdiagnosing Diagnostic Excellence? JAMA Network Open, 7(10), e2440901–e2440901. https://doi.org/10.1001/JAMANETWORKOPEN.2024.40901.
  18. Ullah, E., Parwani, A., Baig, M. M., & Singh, R. (2024). Challenges and barriers of using large language models (LLM) such as ChatGPT for diagnostic medicine with a focus on digital pathology – a recent scoping review. Diagnostic Pathology, 19(1), 1–9. https://doi.org/10.1186/S13000-024-01464-7/TABLES/2.
  19. Boland, Joseph. (2024). How AI Literacy Can Democratize Generative AI. LinkedIn. https://www.dhirubhai.net/pulse/how-ai-literacy-can-democratize-generative-joseph-boland-cwyvc/.

