OpenAI ChatGPT 4 v2: Explainable AI, Audit, Ethics, Hallucinations, Transparency
Jeff Frick
Engagement in an AI Driven, Asynchronous World | Builder | Top Voice | Video Virtuoso | Content Curator | Host, Turn the Lens podcast and Work 20XX podcast
Since my last conversation with ChatGPT, where we covered influencing, training, nudging, and more, I've been anticipating Chapter 2: a deep dive on Explainable AI and ethics in AI, especially in the context of better understanding how generative AI creates something brand new rather than retrieving something off the shelf. The audit trail, or really understanding creation, especially for the user wary of hallucinations in unexplored areas, is an interesting challenge we explore.
Full length version dropping soon. Consider this subscribers' early access. Thanks for subscribing.
Let's get into it
Let's Talk Ethics and Explainable AI
Today I want to go a little different direction and talk about Explainable AI, Ethics in AI - Jeff Frick
Thank you for having me back on ‘Turn the Lens.’ It's always a pleasure to join you. I'm here and ready for our conversation - ChatGPT
I've been looking forward to digging into Explainable AI with ChatGPT since our last conversation. We had a few connectivity hiccups, but I edited out most of the dead air for consumability.
Explainable AI - Concepts
Explainable AI, often abbreviated as XAI, refers to methods and techniques in the field of artificial intelligence that provide more clarity on how AI models make their decisions.... Explainable AI aims to open up the black box of AI, turning it into more of a glass box that's easier for everyone to understand - ChatGPT
The concepts are easy to understand in general, not easy to implement in detail. I'm a bit skeptical; let's dive in and get beyond the summary paragraphs into the implementation.
Full Prompt and Response
User
Okay, so today I want to talk about explainable AI. Before we get into the questions about it, I wonder if you can give people a summary on the concepts of explainable AI.
ChatGPT
Sure, I'd be happy to explain. Explainable AI, often abbreviated as XAI, refers to methods and techniques in the field of artificial intelligence that provide more clarity on how AI models make their decisions. The idea is to make the inner workings of AI systems more transparent and understandable to humans. This is important for several reasons:
Trust: If users can understand how an AI system comes to its conclusions, they're more likely to trust its recommendations and decisions.
Fairness: Explainability helps identify and correct biases within AI systems, making them fairer.
Compliance: In many industries, regulations require decisions made by AI to be explainable, so companies can comply with legal standards.
Debugging and improvement: Understanding how an AI model works can help developers improve its performance and fix issues more effectively.
In essence, explainable AI aims to open up the "black box" of AI, turning it into more of a "glass box" that's easier for everyone to understand.
2 - Audit
Q - I get that as a concept, but what I don't understand in practical reality is that these models are adapting all the time, data is changing all the time, generating new content, not retrieving something that would have an easy audit trail. So how do you deal with audit trail? How do we approach this problem? - Jeff Frick
A - That's a great question. And you've touched on one of the core challenges in making AI, especially large models, explainable and auditable, given their complexity and dynamic nature. Here are a few approaches to tackle this. - ChatGPT
The models generate new content rather than retrieving existing information (which would have a more direct connection), and their data and algorithms adjust all the time. Is this even possible?
Full Prompt and Response
User
Okay, so I get that as a concept, but what I don't understand in practical reality is that these models are so large, they're adapting all the time, the data is changing all the time, and as you've said before, you're generating new content. You're not retrieving something that would have an easy audit trail. So how do you deal with something like audit trail? How do we approach this problem?
ChatGPT (Answer Read)
That's a great question, and you've touched on one of the core challenges in making AI, especially large models, explainable and auditable. Given their complexity and dynamic nature, here are a few approaches to tackle this:
1) Simplified Models for Explanation: Sometimes, simpler models that approximate the behavior of complex models can be used to explain decisions in a more understandable way. These are not as accurate but provide a gist of the decision-making process.
2) Feature Importance: For many AI models, it's possible to identify which inputs (or features) were most influential in making a decision. This doesn't fully open the black box but gives a sense of why the model might have made a particular decision.
3) Decision Paths: In some models, like decision trees, you can trace the path the model took to reach a decision. This is more challenging with neural networks, but efforts are being made to map decision-making processes.
4) Regular Audits and Updates: By regularly auditing AI systems and the data they're trained on, you can identify shifts in data or behavior. Documentation of these audits and updates can form a part of the audit trail.
5) Human-in-the-loop Systems: Integrating human oversight into AI systems helps in making judgments about the AI's decisions and correcting them if necessary. This human oversight can also provide a form of an audit trail, as decisions that are reviewed or overridden are documented.
6) Transparency and Documentation: Keeping detailed records of the data used to train models, the training process, and any modifications to the model over time can help in creating an audit trail. This includes documenting the rationale [read-back stopped here] behind using certain datasets, changes in the model architecture, and updates based on performance evaluations.
The field of XAI is actively evolving, with researchers and practitioners developing new tools and methods to improve the explainability and auditability of AI systems. While it's a challenging problem, especially with the scale and complexity of modern AI, these efforts are crucial for building AI systems that are accountable, fair, and trustworthy.
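To make approach 1 concrete, here's a minimal Python sketch (mine, not ChatGPT's or OpenAI's) of a "surrogate" explainer: a shallow decision tree trained to mimic a black-box model's predictions, whose rules and feature importances then serve as the approximate explanation. The data, models, and feature names are hypothetical stand-ins.

```python
# A minimal sketch of approach 1: fit a simple "surrogate" decision tree to
# mimic a complex model's predictions, then read the tree as an explanation.
# The data, models, and feature names are hypothetical stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# Stand-in for an opaque "black box" model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the interpretable surrogate on the black box's *predictions*, not the labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
print("Surrogate feature importances:", surrogate.feature_importances_)
```

Exactly as the answer warns, the surrogate is an approximation: it gives the gist of the black box's decision-making, not a faithful reconstruction.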
3 - Tracking Decisions
Q - How do you track a decision tree when you're generating stuff new that's never been generated before from a consumption of all this massive data? - Jeff Frick
"It's a complex task" in the understatement of 2024. There are some helpful training tools listed in the prompt, and full reply's in text (not included in video, didn't want to wait for her to read the long list)
Full Prompt and Response
User
How do you track a decision tree when you're generating stuff new that's never been generated before from a consumption of all this massive data?
ChatGPT
Tracking decisions in a decision tree for newly generated content, especially when it's based on consuming and synthesizing vast amounts of data, is a complex task. However, decision trees inherently have a more interpretable structure compared to other AI models, which can help in this tracking. Here's a simplified breakdown of how you might approach it:
1) Node Analysis: Each node in a decision tree represents a decision based on a specific feature of the input data. By examining the path taken through the tree [answer read interrupted], that is, which nodes were activated, you can trace the decision-making process. This involves looking at the conditions at each node that were met or not met, leading to the final decision.
2) Feature Importance: In decision trees, it's possible to quantify the importance of each feature (or input variable) in making predictions. This can help you understand which features are most influential in generating new content. By analyzing feature importance, you can trace back through the tree to see which features led to the creation of something new.
3) Path Recording: For each piece of newly generated content, you can record the exact path taken through the decision tree. This record includes all the decision points (nodes) and the choices made at each point. This creates a clear audit trail for how the new content was generated, making it easier to understand and explain the process.
4) Data Provenance: Keeping detailed records of the input data used for generating new content is crucial. This includes not just the data itself but also its source, how it was preprocessed, and how it was fed into the decision tree. This data provenance helps in understanding the context and basis for the tree's decisions.
5) Sensitivity Analysis: Conducting sensitivity analyses can help determine how changes in the input data affect the decision tree's outputs. This is particularly useful for understanding how the tree responds to new or unseen data and can provide insights into how it generates new content.
6) Regular Reviews and Updates: Decision trees, like all AI models, may need to be updated to reflect new data or correct biases. Regularly reviewing and updating the tree based on its performance and the accuracy of its generated content ensures that the decision-making process remains relevant and accurate.
While decision trees offer a more transparent view into AI decision-making, the sheer volume and complexity of data involved in generating new content can make tracking challenging. However, by leveraging the tree's structure, maintaining detailed records, and using analytical techniques, it's possible to create a comprehensible and traceable audit trail for decisions leading to new content generation.
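Of the six, path recording (point 3) is the most mechanical, and scikit-learn exposes it directly. A hedged sketch, using toy iris data as a stand-in, that logs every node a single prediction passes through:

```python
# A sketch of point 3, "path recording": record the exact node-by-node path a
# decision tree takes for one input, i.e., a per-prediction audit trail.
# The iris dataset is just a convenient stand-in.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

sample = X[:1]
node_indicator = tree.decision_path(sample)   # sparse matrix of visited nodes
leaf_id = tree.apply(sample)[0]               # terminal leaf for this sample

visited = node_indicator.indices[node_indicator.indptr[0]:node_indicator.indptr[1]]
for node_id in visited:
    if node_id == leaf_id:
        print(f"leaf {node_id}: predicted class {tree.predict(sample)[0]}")
    else:
        feat = tree.tree_.feature[node_id]
        thresh = tree.tree_.threshold[node_id]
        branch = "<=" if sample[0, feat] <= thresh else ">"
        print(f"node {node_id}: feature[{feat}] = {sample[0, feat]:.2f} {branch} {thresh:.2f}")
```

This works because a decision tree's structure is explicit. Nothing this clean exists for the neural networks behind ChatGPT, which is the gap the whole conversation keeps circling.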
4 - Process Layers
I'm talking about getting a decision tree on a specific answer to a specific question, at the user level.
I am talking at the user level, by response. There really doesn't appear to be an answer here; these are all setup and training answers.
Full Prompt and Response
ChatGPT
Ah, tracking how a decision tree—or, more broadly, AI models like the one I'm based on—generates new responses involves a slightly different approach, as traditional decision trees are more straightforward and interpretable. For complex AI models, especially those generating new content, the process looks like this:
1) Layer-by-Layer Analysis: Although AI models like mine don't use decision trees in the traditional sense, understanding the decision-making process involves breaking down the model's layers. Each layer processes input data (like a question from you) and passes its output to the next layer, gradually building towards a final response. Analysts and developers can examine the activations within these layers to infer how different parts of the input influenced the output.
2) Attention Mechanisms: Modern AI models, especially those in natural language processing, use attention mechanisms to determine which parts of the input are most relevant to generating a response. [answer interrupted] By analyzing these attention patterns, one can get a sense of how the model decides to focus on certain aspects of the input data to generate new content.
3) Input and Output Correlation: Tracking the generation process involves comparing the input data (like your question) with the output (my response). By examining what information was present in the input and how it correlates with the output, researchers can deduce how the model is interpreting and transforming the input into a new answer.
4) Synthetic Probes and Controlled Experiments: To understand decision-making in AI, researchers often use synthetic probes (input data with known properties) and controlled experiments where they systematically vary aspects of the input to see how the output changes. This helps in mapping out how changes in the input data affect the generated content, providing insights into the decision-making process.
5) Model Interpretability Tools and Techniques: While direct tracking of each decision within complex models isn't straightforward, interpretability tools and techniques are being developed to provide more insight. These include methods for visualizing the internal state of the model, extracting rules that approximate the model's behavior, and using external models to explain the decisions of the primary model.
Creating an "audit trail" for AI-generated answers involves a combination of technical analysis, experimental interventions, and the use of advanced tools designed to make opaque models more understandable. While it's a complex and evolving field, these approaches help demystify how AI models generate new, never-before-seen answers from vast data inputs.
5 - Decision Logic
Is there an audit trail that's generated in the back of every answer? Is there a ‘push to download’ to get information?
Full Prompt and Response
ChatGPT
Creating an audit trail for dynamically generated answers, especially when not directly retrieving but generating content "on the fly," involves a few nuanced steps. Although decision trees are traditionally more static and interpretable, let's extend the concept to encompass the broader challenge of auditing decisions in more complex, generative AI models. Here's a structured approach to tackle this:
1) Define the Decision Logic: Even for generative models, there's an underlying logic to how decisions are made. This could be based on probabilities, patterns learned from data, or rules in more structured models like decision trees. Understanding this decision logic is the first step, which might involve dissecting the model's architecture to see how inputs are processed to generate outputs.
[answer read-back stopped]
2) Log Key Decision Points: Implement logging at key points where the model makes significant decisions. For a generative model, this could involve capturing the model's state at various layers or points where the input data significantly influences the direction of content generation.
3) Capture Input Features and Weights: For each decision or piece of content generated, record the input features that were most influential and any weighting or prioritization applied to them. In models inspired by decision trees or that involve sequential decision-making, this could be akin to tracking which branches or paths were taken based on specific features of the input data.
4) Record Generation Pathways: Document the pathways or sequences of decisions that lead to the final generated content. This includes which options were considered and discarded, which rules or criteria were applied at each step, and how the model converged on the final output.
5) Use Interpretability Tools: Leverage tools and techniques designed for model interpretability, such as LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations), to provide insights into how input features influence outputs. These tools can help create a more understandable audit trail for complex models.
6) Implement Versioning and Change Tracking: Maintain strict version control for the model and its training data. Document any changes to the model’s structure, training process, or input data sources. This helps in understanding how the model’s decision-making process evolves over time.
7) Evaluate and Validate Outputs: Regularly evaluate the generated content for quality, relevance, and accuracy. Use both automated metrics and human judgment to assess whether the decisions made by the model align with expectations and ethical guidelines.
8) Transparent Reporting: Create transparent reports detailing the decision-making process for generated content, including how input data was used, which decisions were made, and why certain paths were taken over others. This documentation is vital for audits and for stakeholders to understand the model's functioning.
By systematically tracking decisions, logging influential factors, and using interpretability tools, you can create an audit trail for AI models that generate new content dynamically. This process promotes transparency, accountability, and trust in the model's decisions.
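LIME and SHAP (point 5) are real open-source libraries, and this is roughly how SHAP gets used in practice. A minimal sketch, assuming shap is installed (pip install shap), attributing one tree-model prediction to its most influential input features; the dataset and model are stand-ins for your own:

```python
# A minimal sketch of point 5: use SHAP to attribute a single prediction to
# input features. Dataset and model are stand-ins for your own.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])   # attributions for one sample

# Top five features pushing this one prediction, by absolute contribution.
top = sorted(zip(data.feature_names, shap_values[0]), key=lambda p: -abs(p[1]))[:5]
for name, val in top:
    print(f"{name}: {val:+.3f}")
```

Note this explains a conventional classifier, not a generative model; applying these tools at ChatGPT scale is exactly the open problem the answer describes.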
6 - Last Answer
Q - What was the decision logic you used for your last answer? - Jeff Frick
A - I see what you're getting at now. You're focusing more on the dynamic, real time generation of content and where decisions are made on the fly during the interaction rather than during the initial training of the model - ChatGPT
Not finding the answer, how do I have confidence in the accuracy in areas where I'm not proficient? Do I have to double-check everything? No: use the tool in areas where you ARE proficient, so you can SPOT errors, ASSESS value, and ADJUST when known hallucinations come through and interrupt your workflow.
Full Prompt and Response
ChatGPT
I see what you're getting at now. You're focusing more on the dynamic, real-time generation of content, where decisions are made "on the fly" during the interaction, rather than during the initial training of the model. This distinction is crucial because the model isn't just applying learned rules but is actively generating new responses based on the input it receives at the moment. Let's clarify how we can approach auditing in this real-time generation context:
Real-Time Decision Tracking
1) Dynamic Logging: Implement dynamic logging mechanisms that capture decisions as they happen. This involves recording the input, the model's immediate response to that input, and any intermediate steps the model takes as it generates the output. This real-time logging is more challenging because it requires the system to capture and document processes as they occur, without significantly impacting performance.
2) Explainability at Interaction Level: Utilize explainability techniques that can operate at the interaction level. This means applying tools or methods that can provide insights into why the model generated a specific response to a specific input in real-time. It could involve simpler, more interpretable models that run in parallel to provide a "shadow" explanation of decisions.
3) Anomaly Detection for Unexpected Outputs: Set up systems that automatically flag unexpected or anomalous outputs. This can help in auditing by identifying instances where the model's real-time generation deviates significantly from expected patterns, prompting a closer examination.
4) User Feedback Loop: Incorporate mechanisms for immediate user feedback on generated content. [answer read-back interrupted] This allows users to flag content that seems off, incorrect, or inappropriate, providing a direct signal that can be audited and analyzed to understand why the model generated such a response.
5) Version Snapshot at Interaction: For each interaction, record a snapshot of the model version, including any dynamic adjustments or learning that might be occurring. This ensures that any audit can account for the specific state of the model at the time of generation, important for models that learn or adapt in real-time.
6) Contextual Analysis: Document the context within which decisions are made, including preceding interactions, the sequence of inputs leading to the decision, and any external data sources referenced. This context is vital for understanding why a decision was made.
Ensuring Transparency and Accountability:
7) Transparent Mechanisms for Review: Establish clear, transparent mechanisms for reviewing logged decisions, especially those flagged by users or anomaly detection systems. This includes having a process for human review and interpretation of the model's decision-making process.
8) Ethical and Compliance Considerations: Ensure that real-time auditing processes comply with ethical guidelines and legal regulations, particularly for models used in sensitive or impactful areas. This might involve specific checks for bias, fairness, and privacy protection.
Approaching the audit trail for real-time generated content involves balancing the need for detailed tracking and analysis with the practicalities of dynamic interaction. It's about creating a system that's not only capable of generating new, relevant content but also doing so in a way that's transparent, understandable, and accountable.
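Point 1 (dynamic logging) is the piece any builder can implement today. A minimal sketch, where generate_response() is a hypothetical stand-in for whatever model you actually call, writing an append-only JSONL audit trail per interaction and snapshotting the model version per point 5:

```python
# A sketch of points 1 and 5: log every interaction (input, output, model
# version, timestamp) to an append-only audit file.
# generate_response() is a hypothetical placeholder, not a real API.
import json
import time
import uuid

AUDIT_LOG = "interaction_audit.jsonl"

def generate_response(prompt: str) -> str:
    # Placeholder for the real model call.
    return f"echo: {prompt}"

def audited_generate(prompt: str, model_version: str = "demo-0.1") -> str:
    response = generate_response(prompt)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,  # snapshot the model state per point 5
        "prompt": prompt,
        "response": response,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")  # append-only trail, one JSON per line
    return response

print(audited_generate("How do you audit AI?"))
```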
7 - Documentation
Q - Is there a setting in OpenAI, documenting how you got an answer for every response generated? - Jeff Frick
A - No. ... The process behind generating responses is complex and involves parsing the input, understanding context, and producing a relevant answer - ChatGPT
Full Prompt and Response
User
Okay, so is there a way to put a setting in OpenAI so that you will create the tracking or the documentation as to how you got an answer for every response generated?
ChatGPT
No, there isn't a setting in OpenAI that automatically creates and shows the documentation of how an answer was derived for every response generated. The process behind generating responses is complex and involves parsing the input, understanding context, and producing a relevant answer based on a vast amount of information. However, I always aim to provide answers based on up-to-date and accurate information from reliable sources and my training data up to my last update in April 2023. If you need more detailed explanations or sources for specific answers, feel free to ask!
8 - Current Events
Your training data set went up to a certain date ... How do you keep up to date on the latest news?
It keeps up with updated information via a browsing tool.
Full Prompt and Response
User
So, a quick question on that. Your training data set went up to a certain date, and yet you say that you are getting updated on the latest. How do you keep up to date on the latest news?
ChatGPT
I can keep up with the latest news and information by using a browsing tool that allows me to search the web for current events, new discoveries, and up-to-date information. This tool lets me access a range of sources to provide you with the most recent and relevant information. So, while my training data goes up to a certain point, I can still provide current insights by looking up real-time information when needed.
9 - Ethical Challenges
Addressing these challenges involves a multidisciplinary approach, including legal, ethical, technical, and social considerations. Establishing clear guidelines, ethical standards, and robust regulatory frameworks is crucial for ensuring that generative AI benefits society while minimizing harms - ChatGPT
This could be (will be) an entire episode. Exercising ruthless prioritization: what are the top challenges, according to ChatGPT? Important to note the number of vectors, points of view, and considerations required: legal, ethical, technical, and social. Surely we could add more.
Full Prompt and Response
User
Okay, so let's shift gears a little bit and talk about ethics in AI. What do you see as the three greatest challenges for ethics in AI, in Gen AI specifically?
ChatGPT
Ethics in generative AI, like in other areas of artificial intelligence, involves several key challenges that are central to its development and deployment. Here are three significant ones:
1) Bias and Fairness: Generative AI systems can inadvertently perpetuate or even exacerbate biases present in their training data. Ensuring these systems are fair and do not discriminate against any group of people is a major challenge. This involves identifying and mitigating biases in the datasets they're trained on and the algorithms themselves.
2) Privacy and Data Security: Generative AI systems often require access to vast amounts of data, which can include sensitive information. Protecting this data and ensuring the privacy of individuals is crucial. There's also the challenge of generating new content that respects privacy and doesn't inadvertently reveal personal data.
3) Misuse and Accountability: There's a risk that generative AI can be used to create misleading information, deepfakes, or other forms of disinformation. Determining how to prevent misuse while promoting positive uses of the technology is a significant ethical challenge. Additionally, there's the question of who is responsible when AI systems make mistakes or are used harmfully—be it the creators, the users, or the technology itself.
Addressing these challenges involves a multidisciplinary approach, including legal, ethical, technical, and social considerations. Establishing clear guidelines, ethical standards, and robust regulatory frameworks is crucial for ensuring that generative AI benefits society while minimizing harms.
10 - Governance and Privacy
Just as privacy governance emphasizes the importance of transparent practices around data collection and use, transparency in AI operations is crucial - ChatGPT
We have not done well with privacy, as evidenced by the lack of a national breach notification standard or law.
Full Prompt and Response
User
Okay, so what can we take from the governance around privacy that might apply to the governance of AI?
ChatGPT
Learning from the governance of privacy to apply to AI governance offers valuable insights, especially as both domains face challenges related to data management, ethical considerations, and societal impact. Here are a few key takeaways:
1) Transparency: Just as privacy governance emphasizes the importance of transparent practices around data collection and use, transparency in AI operations, including data sourcing, training processes, and decision-making algorithms, is crucial. This can help build trust and accountability.
2) Consent and Autonomy: In privacy governance, consent is a foundational principle, ensuring individuals have control over their personal information. Similarly, consent mechanisms might be adapted for AI, allowing individuals to understand and consent to how AI systems might use their data or affect them, ensuring respect for autonomy and personal choice.
3) Data Protection and Security: The rigorous data protection measures employed in privacy governance can be extended to AI, safeguarding the data used in training and operation of AI systems against breaches and misuse. This includes techniques for anonymization, encryption, and secure data storage.
4) Ethical Frameworks: The development of ethical frameworks in privacy governance, which address concerns such as fairness, non-discrimination, and human rights, can be mirrored in AI governance. These frameworks can guide the ethical development, deployment, and use of AI technologies.
5) Regulatory Compliance and Oversight: Privacy laws and regulations, such as GDPR in the European Union, provide a model for how AI could be regulated. This includes establishing clear standards, oversight bodies, and mechanisms for enforcement to ensure compliance and protect individuals' rights.
6) Stakeholder Engagement: Engaging a broad range of stakeholders, including the public, in discussions about privacy has been critical. Similarly, inclusive dialogues involving policymakers, technologists, civil society, and the public can inform AI governance, ensuring it reflects diverse values and needs.
Applying these principles from privacy governance to AI challenges can help manage risks, protect individuals, and ensure AI is used in ways that are ethical, beneficial, and aligned with societal values.
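On point 3, here's a toy sketch of pseudonymizing direct identifiers before records reach a training pipeline. Big caveat: hashing alone is not anonymization, and re-identification from the remaining fields is a real risk; this only shows the shape of the idea.

```python
# A toy sketch of point 3: pseudonymize direct identifiers in a record before
# it enters a training dataset. NOT real anonymization; illustration only.
import hashlib

def pseudonymize(record: dict, pii_fields=("name", "email")) -> dict:
    out = dict(record)
    for field in pii_fields:
        if field in out:
            # Replace the identifier with a stable, irreversible token.
            out[field] = hashlib.sha256(out[field].encode()).hexdigest()[:12]
    return out

print(pseudonymize({"name": "Ada Lovelace", "email": "ada@example.com", "age": 36}))
```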
11 - Hallucinations
Q - How do I increase the probability of identifying errors and hallucinations on topics I’m less familiar with? - Jeff Frick
A - You can try the following strategies - ChatGPT
Full Prompt and Response
User
How do I increase the probability of identifying errors and hallucinations on topics I'm less familiar with?
ChatGPT
To increase the probability of identifying errors and hallucinations on topics you're less familiar with, you can try the following strategies:
1) Cross-check information: Look up key facts, data, and assertions on reputable sources. Even a quick search can help validate the information.
2) Ask for sources: If possible, request sources or references for the information provided. This allows you to verify the accuracy of the information.
3) Use critical thinking: Question the logic, coherence, and consistency of the information. Does it align with what you know to be true from reliable sources? Is the argument logically sound?
4) Consult with experts: Reach out to people who are knowledgeable in the field. They can offer insights and spot inaccuracies more easily.
5) Educate yourself on common pitfalls: Familiarize yourself with common errors or biases in the topic area. Knowing what to look out for can help you spot potential issues.
6) Be skeptical of extraordinary claims: If a claim seems too good to be true or highly unusual, it warrants extra scrutiny.
7) Use multiple sources: Compare the information across different reputable sources. Consensus among sources can increase confidence in the accuracy of the information.
Remember, the goal isn't to become an instant expert but to develop a healthy skepticism and a methodical approach to validating information. Would you like to dive deeper into any of these strategies?
12 - Accuracy and Frequency
Developers do focus on assessing accuracy and reducing the frequency of inaccuracies or hallucinations - ChatGPT
Full Prompt and Response
User
In your training process, do you do any type of assessment as to accuracy percentages versus frequency of hallucinations?
ChatGPT
In the development and training process of models like me, developers do focus on assessing accuracy and reducing the frequency of inaccuracies or "hallucinations." While I don't have access to the specific methods or results of these assessments due to my design, I can tell you about the general approaches used:
1) Training on High-Quality Data: Ensuring the training data is of high quality and diverse can help improve the accuracy of the model. This involves sourcing data from reliable and authoritative sources.
2) Evaluation Metrics: Developers use various metrics to evaluate the model's performance, including accuracy, precision, recall, and more specialized metrics tailored to specific tasks or domains.
3) Regular Updates and Fine-Tuning: Continuously updating the model with new data and fine-tuning it on specific tasks or domains can help improve accuracy and reduce errors over time.
4) Error Analysis and Testing: Conducting thorough error analysis and testing under different scenarios helps identify common pitfalls and areas where the model might "hallucinate." This feedback is then used to make adjustments.
5) Adversarial Testing: Introducing challenging or tricky queries during the testing phase to specifically target the model's weaknesses and improve its handling of difficult or ambiguous inputs.
6) Human-in-the-Loop: In some development cycles, human evaluators review the model's outputs to assess accuracy and flag errors, which helps train the model further.
Developers and researchers are keenly aware of the importance of accuracy and the challenge of hallucinations. Ongoing research and development efforts aim to continuously improve these aspects of AI models.
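Point 2 is the easiest to make concrete. A small sketch scoring an automated checker's verdicts against human-reviewed labels; all the labels below are invented for illustration:

```python
# A sketch of point 2: standard evaluation metrics over a labeled answer set.
# 1 = factually correct answer, 0 = hallucination, per hypothetical human review.
from sklearn.metrics import accuracy_score, precision_score, recall_score

human_labels = [1, 1, 0, 1, 0, 1, 1, 1]   # ground truth from reviewers
checker_says = [1, 1, 1, 1, 0, 1, 0, 1]   # an automated checker's verdicts

print("accuracy :", accuracy_score(human_labels, checker_says))
print("precision:", precision_score(human_labels, checker_says))
print("recall   :", recall_score(human_labels, checker_says))
```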
13 Summary
ChatGPT is a great tool for doing specific things when you have a specific agenda. ... "Every time I dig deep, I just get more confused and more concerned about how you possibly unpack these things" - Jeff Frick
It's a tool. Works great for some things, less for others, for now. Find a place to add some automation in your workflow. Once you see the hallucinations in your own world, you'll have proper skepticism for trusting important information on topics you're less familiar with. Start practicing with the tool.
14 - Wrap
So anyway, we'll continue this exploration. I've got a whole other set of questions lined up, more of a fun task for our next episode.
That's a wrap. Something less serious in store for our next conversation.
15 - Full-Text Answers, Prompts, and Questions
The full questions and answers, the text, the prompts are all going to be up on Turn the Lens Podcast dot com - Jeff Frick
Have a great week
That's a wrap on today's edition.
If you enjoyed this week's post, please consider sharing it with a friend and subscribing to the OpenAI ChatGPT Adventure Newsletter, or my other two, the Friday Five Newsletter and the Work 20XX Newsletter. Subscribe on YouTube, and to Work 20XX and Turn the Lens on Spotify, or wherever you podcast.
Ratings and reviews on Spotify help. Share with a friend, like, subscribe, comment, and SMASH that notification bell.
I go through gallons of Peet's Coffee putting posts together. Someone finally created an app where you can buy me a cup of coffee.
Thanks and have a great day.