When it comes to transparent and explainable AI, there is no one-size-fits-all solution; different approaches and techniques should be applied depending on the context and purpose of the AI model. When designing AI models, for instance, one can use simpler or modular architectures, incorporate human feedback, or add documentation and metadata. Post-hoc methods, such as visualizations, saliency maps, feature importance scores, and counterfactual examples, can be used to analyze and interpret a model's behavior after it has been trained. To evaluate and report transparency and explainability, standards and frameworks can be developed around metrics, indicators, or checklists. Finally, it is essential to engage with stakeholders and users to communicate and collaborate on the transparency and explainability of AI models, for example through natural language explanations, narratives, or interactive interfaces.
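
To make the post-hoc idea concrete, here is a minimal sketch of one such technique, permutation feature importance, assuming a tabular classification task and the scikit-learn library; the dataset and model choices are illustrative only, and other explanation methods (saliency maps, counterfactuals) would follow a different workflow.

```python
# A minimal sketch of a post-hoc feature-importance analysis, assuming a
# tabular classification task and scikit-learn; the dataset and model
# choices here are illustrative, not prescriptive.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a public tabular dataset and hold out a test split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an opaque model whose behavior we want to explain post hoc.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle each feature on held-out data and
# measure how much the model's score drops -- a model-agnostic signal
# of which inputs the predictions actually depend on.
result = permutation_importance(
    model, X_test, y_test, n_repeats=20, random_state=0
)

# Report the most influential features with their mean score drop.
ranking = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda item: item[1],
    reverse=True,
)
for name, mean_drop, std in ranking[:5]:
    print(f"{name}: {mean_drop:.3f} +/- {std:.3f}")
```

Because it only requires query access to the fitted model, a score-drop analysis like this can be reported alongside the model's documentation, which is one way the design-time and post-hoc approaches above can reinforce each other.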