Artificial Intelligence and Navigating Firewalls: Academic Perspective
Most of us in education use Generative Artificial Intelligence (GAI) every day in both our work and personal lives. I advise my students that, to stay competitive in 2024 and beyond in a rapidly evolving market, they need to learn how to use GAI effectively and understand its strengths and weaknesses. Each GAI product varies depending on its training and algorithms; however, all serve as an extension of ourselves and can assist in formulating ideas, conducting research, and providing writing support. With that said, a critical limitation is that GAI cannot access information behind firewalls, which restricts academic learning and research. Therefore, we cannot rely solely on GAI tools in academia, or we risk losing access to critical empirical evidence in our research and writing.
_____________________________
The following is a list of current GAI challenges regarding information access. The content was drafted using ChatGPT-4, cross-checked with Google Gemini and Microsoft Copilot (2024, March 27), and verified and edited by me, the author of this article.
Authentication Challenges
GAI systems primarily use public Application Programming Interfaces (APIs), which are sets of rules and protocols for building and interacting with software applications. An API defines the methods and data structures that programmers can use to interact with an operating system, application, or other service running on a computer or server. This allows different software programs to communicate with each other and is crucial for developing complex systems and applications. However, firewalls, designed to block unauthorized access, prevent AI from utilizing the login processes or credentials a human might use (Cronin, 2023).
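To make this authentication barrier concrete, here is a minimal sketch in Python. It uses an in-process stand-in for a firewalled service that returns HTTP-style status codes; the service name, token, and data are entirely hypothetical, and the point is only that a client without human-issued credentials is turned away.

```python
# Minimal illustration of an authentication barrier (all names hypothetical).
# A firewalled service rejects requests without valid credentials, which is
# the position an automated GAI client is typically in.

VALID_TOKENS = {"human-issued-token"}  # credentials a human user would hold

def firewalled_service(token=None):
    """Stand-in for a protected endpoint: 200 with data, or 401 without."""
    if token in VALID_TOKENS:
        return 200, {"articles": ["restricted empirical study"]}
    return 401, {"error": "authentication required"}

# An AI client with no login credentials is turned away...
status, body = firewalled_service()
print(status)  # 401

# ...while an authenticated human session gets through.
status, body = firewalled_service(token="human-issued-token")
print(status)  # 200
```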
Dynamic Content Challenges
Websites protected by firewalls often serve dynamic content that adapts based on user interactions, presenting a significant challenge for GAI systems trained predominantly on static datasets. These systems, accustomed to fixed data patterns, often struggle to interpret or navigate web pages whose content changes dynamically. Furthermore, such firewalls can also restrict AI access if they detect patterns typical of automated bots rather than human users, complicating the ability of AI to effectively retrieve and utilize data behind these protective measures (Stanford University, n.d.).
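A small sketch of why static analysis falls short on dynamic pages: the HTML snippet below stands in for what a simple, one-shot fetch of a hypothetical dynamic page might return. The actual content is injected later by client-side JavaScript, so a parser that only sees the static snapshot finds an empty placeholder.

```python
# Illustrative sketch (hypothetical page): a static fetch of a dynamic page
# often returns only a placeholder; the real content is injected client-side
# by JavaScript, so a tool that only parses static HTML finds nothing useful.
from html.parser import HTMLParser

# What a one-shot static fetch might return for a dynamic page:
STATIC_HTML = """
<html><body>
  <div id="results"></div>   <!-- populated later by JavaScript -->
  <script src="/app.js"></script>
</body></html>
"""

class TextCollector(HTMLParser):
    """Collects visible text, ignoring <script> contents."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.text.append(data.strip())

parser = TextCollector()
parser.feed(STATIC_HTML)
print(parser.text)  # [] -- the static snapshot contains no article text
```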
CAPTCHA Verification Challenges
To prevent automated access by bots, many firewalls implement CAPTCHA tests (Completely Automated Public Turing test to tell Computers and Humans Apart). Despite advancements in AI, these systems still struggle to solve complex CAPTCHA puzzles, hindering access to protected information.
Legal and Ethical Considerations
Bypassing firewalls, even unintentionally, can raise significant legal and ethical issues. AI developers must navigate these challenges carefully, respecting legal restrictions and copyright laws, to avoid potential legal entanglements.
Strategies for Overcoming Access Challenges
Despite these challenges, researchers and developers are exploring various strategies to enhance AI's ability to access firewall-protected information.
API Access with Permissions
One approach involves granting AI systems access through authorized APIs, which requires collaboration between AI developers and data providers. This method aims to provide AI with the necessary credentials to access restricted data legally (Cronin, 2023).
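As a rough illustration of what authorized access looks like in practice, the sketch below builds an API request carrying a provider-issued bearer token in its Authorization header. The endpoint URL and token are placeholders, not a real service, and the request is constructed rather than sent.

```python
# Sketch of an authorized API request (endpoint and token are placeholders).
# With provider-issued credentials, an AI system's requests carry an
# Authorization header that the data provider's systems can validate.
import urllib.request

API_ENDPOINT = "https://api.example.org/v1/articles"  # hypothetical endpoint
ACCESS_TOKEN = "provider-issued-token"                # hypothetical credential

request = urllib.request.Request(
    API_ENDPOINT,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",  # credential from the provider
        "Accept": "application/json",
    },
)

# The request is only constructed here; a real call would pass it to
# urllib.request.urlopen() against a live, authorized endpoint.
print(request.get_header("Authorization"))  # Bearer provider-issued-token
```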
Emphasizing Public Data
A considerable amount of valuable information is available publicly on the web. Focusing on harnessing and analyzing this publicly accessible data can offer an alternative to accessing firewall-protected information. However, public data alone is not acceptable in academia when synthesizing empirical evidence, or in healthcare, where evidence-based practices are critical to patient outcomes. This limitation significantly impacts information quality and reliability and potentially reduces the novelty of research findings. Academic research thrives on innovation and unique contributions to knowledge, which is difficult to achieve with datasets readily available to all researchers, and public data carries its own ethical and privacy risks (Yelne et al., 2023).
Human-in-the-Loop Systems
Incorporating human oversight into AI systems presents another solution. By allowing humans to handle authentication and navigate through firewalls, AI can then be utilized to analyze the data obtained. This collaborative approach leverages the strengths of both humans and AI (Stanford University, n.d.).
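The division of labor described above can be sketched in a few lines. In this hypothetical pipeline, the human step stands in for a researcher who authenticates and downloads firewall-protected sources, and the AI step only analyzes what the human lawfully obtained; the document contents and function names are invented for illustration.

```python
# Hypothetical sketch of a human-in-the-loop division of labor: the human
# authenticates and retrieves firewall-protected documents; the automated
# step only analyzes what the human lawfully obtained.

def human_retrieves_documents():
    """Stand-in for a researcher logging in and downloading sources."""
    return [
        "Study A: AI tutoring improved outcomes in 3 of 4 trials.",
        "Study B: No significant effect of AI tutoring was found.",
    ]

def ai_analyzes(documents):
    """Stand-in for the AI step: summarize what the human supplied."""
    return {
        "count": len(documents),
        "mentions_ai": sum("AI" in doc for doc in documents),
    }

summary = ai_analyzes(human_retrieves_documents())
print(summary)  # {'count': 2, 'mentions_ai': 2}
```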
_______________________
Conclusion
AI programs can only provide what they are trained on or what they have access to. For academic researchers and students, running into firewalls while searching for empirical evidence and primary source data often feels like hitting a wall. However, there is light at the end of the tunnel: every day brings new technical advancements. Additionally, it is good academic stewardship to share our lessons learned regarding these strengths and limitations with our peers and students as we experience them, helping to optimize the collaboration between AI and human intelligence. This knowledge is key to finding the best evidence and data available.
____________
References
Cronin, I., & Scoble, R. (2023). The metaverse: A professional guide (H. Swart, Contrib.). Packt Publishing. https://www.google.com/books/edition/The_Immersive_Metaverse_Playbook_for_Bus/aHTkEAAAQBAJ?hl=en&gbpv=0
Department of Health and Human Services. (2024, April 29). HHS shares its plan for promoting responsible use of artificial intelligence in automated and algorithmic systems by state, local, tribal, and territorial governments in the administration of public benefits [Press release]. https://www.hhs.gov/about/news/2024/04/29/hhs-shares-plan-promoting-responsible-use-artificial-intelligence-automated-algorithmic-systems-state-local-tribal-territorial-governments-administration-public-benefits.html
Du Boulay, B. (2016). Recent meta-reviews and meta-analyses of AIED systems. International Journal of Artificial Intelligence in Education, 26(1), 536-537.
Google. (2024). reCAPTCHA protects your website from fraud and abuse without creating friction. https://www.google.com/recaptcha/about/
Google. (2024). Gemini [Large language model]. https://gemini.google.com/app
Microsoft. (2024). Copilot [Large language model]. https://copilot.microsoft.com/
OpenAI. (2024). ChatGPT (4) [Large language model]. https://chat.openai.com
Sai, S. S., Yashvardhan, U., Chamola, V., & Sikdar, B. (2024, April 19). Generative AI for Cyber Security: Analyzing the Potential of ChatGPT, DALL-E, and Other Models for Enhancing the Security Space. IEEE Access. https://doi.org/10.1109/ACCESS.2024.3385107
Stanford University. (n.d.). Humans in the loop: The design of interactive AI systems. Human-Centered Artificial Intelligence. https://hai.stanford.edu/news/humans-loop-design-interactive-ai-systems
Yelne, S., Chaudhary, M., Dod, K., Sayyad, A., & Sharma, R. (2023). Harnessing the Power of AI: A Comprehensive Review of Its Impact and Challenges in Nursing Science and Healthcare. Cureus, 15(11), e49252. https://doi.org/10.7759/cureus.49252