Building trust-first AI software
Artificial intelligence (AI) is already driving profound innovation and societal transformation. Yet as systems like Genway AI gain traction in their respective areas (for us, AI-led, insight-driven decision-making), legitimate concerns arise around transparency, safety, and accountability. The answer to these concerns doesn't lie in halting technological progress, but in crafting a foundation of trust from the outset. Building trust-first AI software means ensuring it aligns with foundational humanistic values, safeguards user data, and operates within ethical boundaries. But how do you actually do it?
The core trust challenge when building AI products
AI technologies operate on unseen patterns within massive datasets, leading to decision-making processes that are often opaque, even to us as the team building the platform. This "black box" nature raises questions: Is our system reliable? Are the insights it’s generating sound? Is it biased in ways that exacerbate societal inequalities?
When things go wrong, the resulting mistrust erodes adoption and limits the value AI can offer. Consider our realm of insight-gathering within tech: when researchers can't understand the basis for a research synthesis and the action items tied to it, they naturally lack the confidence to roll out an AI-led research tool more widely.
Because companies can easily squander the transformative potential of AI by ignoring these challenges, at Genway AI we've chosen to build with a "trust-first" approach.
Our trust-first approach encompasses three key areas of focus: transparency, safety, and accountability.
The frameworks that guide a trust-first AI approach
We draw on a diverse set of frameworks to put our trust-first approach into practice across these three areas of focus.
Governance frameworks have helped us establish our guiding values and processes. We've reviewed and implemented documents like the OECD Principles on AI, reflecting in particular on the principle of inclusive growth, sustainable development, and well-being. This principle "recognizes that guiding the development and use of AI toward prosperity and beneficial outcomes for people and planet is a priority."
Genway AI naturally plays an important role in advancing the mission of inclusive growth, sustainable development, and well-being by bridging the gaps that have led to disparities in who we focus on as we build technology. Too often, our limited capacity to conduct qualitative research at scale has perpetuated existing biases. AI-led research can and should be used to give all members of society a voice and help reduce those biases. In turn, as the OECD terms it, this "responsible stewardship" will drive the development of more human-centered products with beneficial outcomes for all.
AI review boards at publicly traded companies, a relatively novel form of oversight, have already approved Genway AI for deployment. These internal bodies assess the ethical implications of an AI system before deployment and proactively evaluate its potential harms and how to mitigate them.
Technical frameworks have helped us make Genway AI more reliable and trustworthy. One of our core areas of focus here has been transparency. In the context of research, transparency helps insight-gathering functions understand the outputs of a system like Genway AI, namely the analysis and synthesis of rapid interview data. Genway AI utilizes complex algorithms operating on vast datasets of potentially thousands of interviews. Transparency provides visibility into how our AI arrived at a specific output (such as an insight or research summary). The simplest manifestation of in-product transparency is the fact that we deep-link to the specific parts of interview transcripts that were included in an AI-generated insight or summary. This approach is crucial in decision-support situations where researchers need to comprehend why an AI system suggested a particular insight or follow-up action item.
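To make the deep-linking idea concrete, here is a minimal sketch of how an AI-generated insight could carry citations back to the transcript evidence that supports it. All names here (`TranscriptSpan`, `Insight`, the URL shape) are hypothetical illustrations, not Genway AI's actual data model:

```python
from dataclasses import dataclass

@dataclass
class TranscriptSpan:
    """A quoted segment of one interview recording (hypothetical type)."""
    interview_id: str   # which interview the evidence came from
    start_sec: float    # start of the quoted segment, in seconds
    end_sec: float      # end of the quoted segment, in seconds

    def deep_link(self, base_url: str) -> str:
        # Build a URL that jumps straight to the quoted segment,
        # so a researcher can verify the claim in context.
        return (f"{base_url}/interviews/{self.interview_id}"
                f"?t={self.start_sec:.0f}-{self.end_sec:.0f}")

@dataclass
class Insight:
    """An AI-generated claim that must cite its transcript sources."""
    summary: str
    evidence: list[TranscriptSpan]

    def render(self, base_url: str) -> str:
        # Every rendered insight lists a deep link per piece of evidence.
        links = "\n".join(f"  - {s.deep_link(base_url)}" for s in self.evidence)
        return f"{self.summary}\nSources:\n{links}"

insight = Insight(
    summary="Users abandon onboarding at the permissions step.",
    evidence=[
        TranscriptSpan("iv-042", 312.0, 348.0),
        TranscriptSpan("iv-107", 95.0, 121.0),
    ],
)
print(insight.render("https://example.com"))
```

The design choice worth noting is that the evidence list lives on the insight itself: a claim without at least one `TranscriptSpan` simply cannot be rendered with sources, which nudges the system toward "no citation, no claim."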
Finally, Responsible AI frameworks are guiding our thinking on accountability, meaning that from day one we’ve been creating mechanisms to establish who is responsible for the proper functioning of our AI system and how we address any negative or unforeseen consequences that arise from its use.
In this realm, we employ two types of measures, and we will continue to evolve them.
We’ve been inspired by companies like Microsoft and Google, who have been sharing their responsible AI journeys for years. In fact, this article was specifically inspired by Microsoft's principle of sharing learnings about developing and deploying AI responsibly. We’ll continue to do so as we go through our own trust-first, responsible AI journey.
As always, if you’d like to learn more about what we’re up to at Genway AI, check out our website at www.genway.ai. We’re working hard to leverage AI in ways that benefit society and help us build technology inclusively.
We’re perfecting the end-to-end process of conducting interviews by leveraging AI to refine and enhance how research teams schedule research, synthesize their learnings, and integrate them into their workflows for maximal impact, for everyone.
We’re always looking for feedback. If you’d like to try us out, reach out at [email protected] or DM me on LinkedIn.