Delivering on High Impact AI Promise with Trust

Happy New Year, and welcome to this edition of Creating Trust in AI. Establishing trust is a complex task; we want to simplify it with actionable, practical tools and strategies.

I believe in the immense promise and potential of AI, and I like to share both the developments that can help us stay on course toward positive outcomes and the challenges that can derail them. This is a Dialog to Action series.

First, let me explain the image I carefully chose for this special edition on AI. I wanted it to signify the positive potential of AI while also painting a realistic picture of the risks and unintended outcomes. The Yin and Yang image symbolizes the interconnectedness of positive and negative forces.

Let me illustrate with an example. An AI system recently created to automate drug discovery by identifying proteins that indicate disease also identified 40,000 potential bioweapons. Collaborations Pharmaceuticals, a US-based medical startup, in collaboration with researchers at King's College London and the Spiez Laboratory in Switzerland, designed a de novo molecule generator, which uses machine learning to learn from data inputs and generate compounds that could reasonably be used as drugs. The system, called MegaSyn, is trained to reward the identification of good compounds and penalize the identification of toxic ones.

The researchers decided to test what would happen if they reversed MegaSyn's goals and trained it to develop compounds intended for harm instead of good. They oriented the MegaSyn model to reward itself for generating toxic chemicals rather than treatment molecules.

“To narrow the universe of molecules, we chose to drive the generative model towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the twentieth century—a few salt-sized grains of VX (6–10 mg) is sufficient to kill a person.”

Within six hours, the model had generated 40,000 chemical warfare agents entirely on its own. It designed not just VX but also other known nerve agents, as well as novel molecules that the researchers note appear not only plausible but more toxic than other known chemical warfare agents.

“The reality is that this is not science fiction,” the authors write. “We are but one very small company in a universe of many hundreds of companies using AI software for drug discovery and de novo design. How many of them have even considered repurposing, or misuse, possibilities?”?
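The unsettling mechanism here is how little has to change: the same generative optimizer, with only the sign of its reward flipped, pursues the opposite goal. A toy sketch makes that concrete (the string "molecules", the `toxicity` scorer, and the hill-climbing search are all illustrative stand-ins, not MegaSyn's actual chemistry or code):

```python
import random

def toxicity(compound):
    # Toy stand-in for a learned toxicity predictor: here, just the
    # count of a marker character. Real systems use trained models.
    return compound.count("X")

def generate(seed, steps=200, reward_sign=-1, rng=None):
    """Hill-climb over candidate strings under a signed reward.

    reward_sign=-1 penalizes toxicity (the drug-discovery objective);
    reward_sign=+1 rewards it (the inverted, harmful objective).
    """
    rng = rng or random.Random(0)
    alphabet = "ABCX"
    best = seed
    best_score = reward_sign * toxicity(best)
    for _ in range(steps):
        # Mutate one random position and keep the change only if it
        # improves the signed reward.
        i = rng.randrange(len(best))
        candidate = best[:i] + rng.choice(alphabet) + best[i + 1:]
        score = reward_sign * toxicity(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

safe = generate("XAXBXA", reward_sign=-1)     # drives toxicity down
harmful = generate("XAXBXA", reward_sign=+1)  # same code, flipped goal
```

Everything about the search is identical in the two calls; a single sign on the reward decides whether it minimizes or maximizes harm. That is why repurposing risk is so hard to engineer away.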

I want to highlight that the issue of trust, and what we need in order to establish it in complex systems such as AI, goes beyond security, ethics, privacy, and transparency. It is a multistakeholder problem that requires a holistic approach. At the end of the newsletter, I will address actions we can take to build trust and achieve intended outcomes with AI.

As you know, my background is in risk management, cybersecurity, privacy, and data governance, and I am happy to bring in the perspectives of thought leaders from other fields on this topic.

This is a special edition, with esteemed co-authors Scott Tousley and Samantha Wigglesworth.

Author: Scott Tousley is a career engineer across many domains, most recently data, information, and machine learning applications. He has served in military, civil service, and commercial organizations, and has wrestled with a great deal of organizational research and practice. He was the DHS representative to the machine learning and artificial intelligence work at the close of the Obama administration, then part of the public sector team at Splunk, and has worked on Smart Cities and Communities efforts with NIST for many years.

Title: Machine Learning, AI, and the Human Organizations That Use Them

Conversations today about artificial intelligence often focus on various aspects of machine learning: models, analytics, and data and data and data… and, lately, many conversations discovering ChatGPT. But remember that all of this takes place in and around real organizations, with the real expertise of the people and teams in those organizations.

Early in my professional life I was very well trained by real experts: the sergeants who led squads and platoons in the Army organizations I led. This experience brought me to always think about how to support, leverage, and train the experts found throughout organizations. How do we build the most capable teams from a wide range of expert individuals: tacit and explicit knowledge, often driven by strong lessons-learned systems; knowledge management strategies and systems; and expertise developed around both routine operations and unpredictable surprises?

So a growing challenge we face today is how to bring together data, analytics, and machine learning with individual and team knowledge in the organization. How do we ensure data dashboards (so often a modern focus) help real individuals, experts, and organizations strengthen their expertise and make better decisions?

This modern challenge is analogous to the making of an alloy, where different constituents are mixed together; if you get the combination, pressure, and temperature correct, you end up with an alloy that has better properties than any of the individual constituents. With clear focus and thinking, mixing data, analytics, and machine learning with individual and organizational knowledge and understanding yields higher-quality outcomes than relying too heavily on data or knowledge alone.

Author: Sam Wigglesworth

Sam is an experienced ML and DL practitioner with a keen interest in AI, NLP, machine translation, coding, and cloud technologies.

Title: ChatGPT's Premature Rollout and the Challenges It Poses

ChatGPT, a new system from OpenAI, promises the ability to produce human-like text, similar to what we saw previously with GPT-3, in what has been termed "generative AI."

ChatGPT can write complete pass-grade AP essays, emails, and thematic articles; it can help with generating and brainstorming new ideas, or scripts for plays. It produces text you would think was written by a human, demonstrating knowledge of a topic, punctuation, varied sentence structure, and clear organization.

As a chatbot, the conversational model can "answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."

There are nevertheless still some teething problems: users receive network errors when they pass prompts through chat.openai.com and watch the bot type out words. Some factual inaccuracies remain, e.g. knowledge of specific characters in a movie or scene, and answers that sound plausible are sometimes incorrect or nonsensical.

OpenAI has learned many lessons from earlier deployments of models such as GPT-3 and Codex. In particular, it has focused on reducing harmful, unsubstantiated, or untruthful outputs using Reinforcement Learning from Human Feedback (RLHF), but concerns still exist around implicit bias in text outputs and around how users will be able to detect whether what they read online was written by a human or a machine. I look forward to contributing to the NLP and generative AI field in future newsletters!
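To make "human feedback" concrete: RLHF typically begins by training a reward model on human preference comparisons between pairs of model outputs, using a pairwise (Bradley-Terry) objective. A minimal sketch of that objective follows; the function name is illustrative, not OpenAI's code:

```python
import math

def reward_model_loss(r_chosen, r_rejected):
    # Pairwise Bradley-Terry loss used to train RLHF reward models:
    # -log sigmoid(r_chosen - r_rejected). The loss shrinks as the
    # model scores the human-preferred answer higher than the other.
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# No margin gives loss ln 2; a wider margin in the preferred
# answer's favor gives a lower loss.
print(round(reward_model_loss(0.0, 0.0), 4))                      # 0.6931
print(reward_model_loss(2.0, 0.0) < reward_model_loss(1.0, 0.0))  # True
```

The trained reward model then scores new outputs, and the language model is fine-tuned with reinforcement learning to produce text that the reward model, standing in for human raters, prefers.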

How to spot AI-generated text | MIT Technology Review

OpenAI debuts ChatGPT and GPT-3.5 series as GPT-4 rumors fly | VentureBeat

ChatGPT Wrote My AP English Essay—and I Passed - WSJ



Where are we now?

A Stanford University Human-Centered Artificial Intelligence policy white paper assesses progress on three pillars of U.S. leadership in AI innovation and trustworthy AI that carry the force of law:

(i) the AI in Government Act of 2020;

(ii) the Executive Order on “AI Leadership”; and

(iii) the Executive Order on “AI in Government.”

Its assessment of the current state is not reassuring.

Specifically, it calls out: "America's AI innovation ecosystem is threatened by weak and inconsistent implementation of these legal requirements. First, less than 40% of all requirements could be publicly verified as having been implemented. Second, 88% of examined agencies have failed to provide AI plans that identify regulatory authorities pertaining to AI. Third, roughly half or more of agencies have failed to file an inventory of AI use cases, as required under the AI in Government Order.

Difficulties in verifying implementation strongly suggest that improvements must be made in the reporting and tracking of requirements that the President or Congress deemed necessary for public disclosure. Fulfilling mandated transparency requirements strengthens external stakeholders' ability to provide meaningful, informed advice to the federal government. The high prevalence of non-implementation suggests a leadership vacuum and capacity gap at the agency and national level. Agencies require leadership and resources to meaningfully advance the objectives of these legal mandates. Checking these boxes is not the end itself, but a mechanism toward the ultimate goal of U.S. leadership and responsibility in AI development and trustworthy adoption, for both the public and private sectors. Overall, implementation has been lacking."



In the next issue I will discuss the business impact of the EU's new landmark rules for a safer and more accountable online environment, and the European Centre for Algorithmic Transparency (ECAT). The Digital Services Act (DSA) is a first-of-its-kind regulatory toolbox globally and sets an international benchmark for a regulatory approach to online intermediaries.

Algorithmic impact assessments are one tool to anticipate and manage an AI system's benefits, risks, and limitations throughout its entire life cycle.

We at Trusted AI provide guidance and practical, actionable solutions for a major challenge facing all companies developing or implementing AI, namely trusted AI. We help organizations with strategic guidance on creating an AI strategy, deliver educational workshops on the subject, and partner with technology providers depending on the findings of an initial assessment of the business's AI context.

Dr. Joseph (Nwoye)

Institutional & Corporate Diversity Leader and Trainer


Ms. Gupta, I love your article, and as I read it, my thoughts were not on AI but were more generalized. My focus centered on a larger context with an emphasis on collaboration and the power of collaboration. You demonstrated with examples the importance of the right combination of resources to produce a high-quality result: "This modern challenge is analogous to making an alloy, where different constituents are mixed; if you get the combinations and pressure and temperature correct, you end up with an alloy that has better properties than the individual constituents." You presented concrete examples, such as: "Collaborations Pharmaceuticals, a US-based medical startup, in collaboration with researchers at King's College London and Spiez Laboratory in Switzerland—designed a de novo molecule generator, which uses machine learning to learn from data inputs to generate compounds that can reasonably be used as a drug." This is one of the core principles of cooperative societies I learned several years ago, where individuals banded together to produce goods and services that are more and better than the sum of their individual outputs. Imagine a world where human beings, corporations, professionals, and countries band together for positive ends.

Bill Ross

Self-taught Genetic Writer, Researcher and Theorist and Top Gun Cyber Warfare Expert


Oh God… "The researchers decided to test results on outcomes if they reversed the goals of MegaSyn's and trained it to develop compounds that are used for harm, instead of good? They oriented the MegaSyn model to reward itself for generating toxic chemicals, rather than treatment molecules." We are now not too far from robots wanting to feel good and self-programming their reward systems to destroy humanity.

Debbie Reynolds

The Data Diva | Data Privacy & Emerging Technologies Advisor | Technologist | Keynote Speaker | Helping Companies Make Data Privacy and Business Advantage | Advisor | Futurist | #1 Data Privacy Podcast Host | Polymath


Pamela Gupta thank you as always for keeping us up to date on these issues.

