The Senate Select Committee's recommendations on AI guardrails are a welcome step toward ensuring the safe and ethical use of artificial intelligence in Australia. We commend the Committee's vision, particularly its emphasis on transparency, accountability, and safeguards for high-risk applications. We also see an opportunity for a more nuanced and adaptive framework.

First, the common ground:

- Transparency and Traceability: Both we and the Committee agree on the need for robust transparency mechanisms. High-risk AI applications demand rigorous traceability, but scaling requirements to risk levels would avoid unnecessary burdens on low-stakes applications.
- Humans in the Loop: Oversight is critical in high-stakes decisions such as healthcare or law enforcement. We see digital attestations, tamper-proof records of human approvals, as a potential tool to formalise this oversight while maintaining accountability (a rough sketch of such a record follows below this post).
- Global Alignment: The Committee's recognition of international standards such as the EU AI Act is vital. Harmonising frameworks keeps Australian innovators competitive on the global stage.

But we must also ask: are we overregulating the tools at the expense of innovation? Here is where we see room for improvement:

- General-Purpose AI as High-Risk: Labelling general-purpose AI (GPAI) as inherently high-risk may stifle innovation in low-risk areas, such as using AI for writing assistance or automating mundane tasks. Could a use-case-specific framework better balance oversight with creative freedom?
- Modernising Compliance: SMEs need help navigating complex compliance frameworks. Could technologies like blockchain and zero-knowledge proofs (ZKPs) provide tamper-proof transparency while easing administrative burdens?
- Industry-Led Standards: A Hub and Spoke model, where government sets high-level principles (the Hub) and industry-specific bodies define tailored standards (the Spokes), could keep regulations practical and adaptable to sector-specific challenges.

A Path Forward: we applaud the Committee for taking the first steps toward ethical AI governance. But as technology evolves, we need a framework flexible enough to evolve with it. Could we:

1. Pilot risk-based regulation to ensure low-risk AI applications aren't overburdened?
2. Test cutting-edge compliance tools in key sectors to modernise record-keeping?
3. Empower industry bodies to co-develop standards that reflect sector-specific realities?

AI is advancing rapidly, and Australia has the chance to lead by example. By balancing safety with innovation, we can foster a regulatory framework that safeguards society and unlocks the full potential of AI.

What do you think? How can we balance these goals so Australia remains a hub for responsible AI innovation? Let's keep the conversation going below.

Mark Dwyer Ivan Chan, CPA CA Timothy Murphy Arturo R. Mark Monfort

#AI #Innovation #EthicalAI #Regulation #Australia #sikeAI
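The post above mentions digital attestations as tamper-proof records of human approvals. Purely as a minimal illustrative sketch, and not sike.ai's implementation: the `Attestation` fields, SHA-256 chaining, and function names below are our assumptions about what such a record could look like.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class Attestation:
    """A single human-approval record for a high-risk AI decision."""
    decision_id: str   # identifier of the AI output being approved
    approver: str      # the human in the loop
    verdict: str       # e.g. "approved" or "rejected"
    timestamp: str
    prev_hash: str     # hash of the previous attestation, forming a chain


def attest(decision_id: str, approver: str, verdict: str, prev_hash: str) -> tuple[Attestation, str]:
    """Create an attestation and return it with its tamper-evident hash."""
    record = Attestation(
        decision_id=decision_id,
        approver=approver,
        verdict=verdict,
        timestamp=datetime.now(timezone.utc).isoformat(),
        prev_hash=prev_hash,
    )
    digest = hashlib.sha256(json.dumps(asdict(record), sort_keys=True).encode()).hexdigest()
    return record, digest


# Usage: chain two approvals; altering either record changes every later hash.
first, h1 = attest("loan-decision-42", "analyst@example.com", "approved", prev_hash="0" * 64)
second, h2 = attest("loan-decision-43", "analyst@example.com", "rejected", prev_hash=h1)
print(h1, h2)
```

Chaining each record to the previous hash is one simple way to make after-the-fact edits detectable; anchoring the latest hash to an external ledger would strengthen the guarantee.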
sike.ai
Embedded Software Products
New York City, New York · 162 followers
Private GenAI GPT for highly regulated enterprises.
About us
sike.ai is a private Generative AI GPT platform for the regulated enterprise. Regulated industries such as Financial Services, Healthcare and Public Companies face significant challenges in adopting GenAI tools, including:

- Security: ensuring robust data protection and user access controls.
- Transparency: providing clear, explainable AI-powered decisions.
- Risk: managing potential AI biases and errors.

sike.ai is Secure: sike.ai's GenAI engine secures confidential data and prevents leaks with sophisticated access controls and permissions.
sike.ai is Auditable: ZKP MicroSignatures create an immutable audit trail by tokenizing interactions and outputs.
sike.ai is Trusted: advanced automation and tagging ensure accurate responses, reducing errors while preserving human intervention.
www.sike.ai
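The "Auditable" claim above refers to ZKP MicroSignatures that tokenize interactions and outputs into an immutable audit trail. The actual scheme is proprietary and not described on this page; as a loose sketch of only the underlying idea (recording integrity without exposing content), one could log a salted hash commitment per interaction. Everything below, including the function names and the salted SHA-256 commitment, is our assumption, not sike.ai's design, and a hash commitment is not itself a zero-knowledge proof.

```python
import hashlib
import os


def commit_interaction(prompt: str, response: str) -> tuple[bytes, bytes]:
    """Return (salt, commitment) for a prompt/response pair.

    Only the commitment is written to the shared audit trail; the salt and
    the plaintext stay with the enterprise, so the record proves integrity
    without leaking confidential content.
    """
    salt = os.urandom(16)
    commitment = hashlib.sha256(salt + prompt.encode() + b"\x00" + response.encode()).digest()
    return salt, commitment


def verify_interaction(prompt: str, response: str, salt: bytes, commitment: bytes) -> bool:
    """Re-derive the commitment from the claimed plaintext and compare."""
    expected = hashlib.sha256(salt + prompt.encode() + b"\x00" + response.encode()).digest()
    return expected == commitment


# Usage: commit at inference time, verify later during an audit.
salt, c = commit_interaction("Summarise the Q3 risk report", "Key risks are ...")
assert verify_interaction("Summarise the Q3 risk report", "Key risks are ...", salt, c)
```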
- Website: https://www.sike.ai (external link for sike.ai)
- Industry: Embedded Software Products
- Company size: 11-50 employees
- Headquarters: New York City, New York
- Type: Privately held
- Founded: 2022
Locations
sike.ai employees
Updates
Great to see our co-founder Arturo R. up on stage at the Actuaries Institute AI Conference yesterday. He said "AI can almost answer any question. It's much more important to figure out what is the question" and, alongside Henry Cheang of Platform One, was one of a group of entrepreneurs showcasing the challenges of working with AI systems and sharing their unfiltered truths (as the Actuaries Institute noted). Here are some shots from the event, along with Joanna Marsh and Vincent Po Li, who also shared the unfiltered truth about AI entrepreneurship. It was also great to see Google Chief Data Scientist Cassie Kozyrkov presenting.
Here's our response to the Australian Government Department of Health and Aged Care's Safe and Responsible Artificial Intelligence in Health Care – Legislation and Regulation Review. Key highlights include our push for transparent and explainable AI, plus the use of blockchain/ZKPs to prove humans are in the loop for high-risk AI usage.

Mark Monfort Arturo R. Mark Dwyer Ivan Chan, CPA CA Timothy Murphy
At sike.ai, we are proud to have submitted our response to the Department of Industry, Science and Resources consultation on mandatory AI guardrails. Our submission focuses on ensuring the safe, transparent, and responsible deployment of AI across industries while fostering innovation and reducing unnecessary regulatory burden.

Key highlights from our submission:

- We propose a Hub and Spoke model where industry-specific bodies like APRA and ASIC define high-risk AI applications, ensuring sector-specific AI regulations tailored to individual risks and evolving needs.
- We advocate a shift toward regulating AI outcomes and applications rather than the technology itself, allowing AI to evolve without being stifled by outdated regulations.
- Our response underscores the importance of traceability and explainability, particularly in high-risk sectors such as finance and healthcare, using Retrieval-Augmented Generation (RAG) systems and digital attestations to enhance trust (a rough sketch follows below this post).
- We emphasise the critical role of blockchain and Zero-Knowledge Proofs (ZKPs) in providing tamper-proof records and ensuring data integrity in AI-driven processes.
- Our approach focuses on collaboration, aligning with international standards such as ISO/IEC 42001:2023, and ensuring that regulatory frameworks remain adaptable and responsive to industry feedback.
- We believe in AI as a companion technology that enhances human decision-making, and we've outlined a step-by-step action plan to trial these guardrails across sectors such as finance, law enforcement, and healthcare.

Interested in learning more? Let's continue the conversation around building a responsible AI future!

Mark Monfort Arturo R. Mark Dwyer Timothy Murphy Ivan Chan, CPA CA

#AIRegulation #ArtificialIntelligence #AIInnovation #Blockchain #ZKP #DigitalTransformation #AICompliance #sikeai #ResponsibleAI
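On the traceability point above: RAG supports explainability because an answer can be returned together with the source passages it was grounded on. The sketch below is generic and illustrative only; the toy keyword-overlap retriever, the corpus, and the stubbed answer are our assumptions, not the submission's or sike.ai's implementation (a production system would use a vector search and a real model call).

```python
from dataclasses import dataclass


@dataclass
class Passage:
    doc_id: str
    text: str


def retrieve(query: str, corpus: list[Passage], k: int = 2) -> list[Passage]:
    """Rank passages by naive keyword overlap with the query (stand-in for a vector search)."""
    q_terms = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: len(q_terms & set(p.text.lower().split())), reverse=True)
    return scored[:k]


def answer_with_citations(query: str, corpus: list[Passage]) -> dict:
    """Return the retrieved evidence alongside a (stubbed) generated answer.

    In a real RAG pipeline the passages would be injected into the model prompt;
    keeping their doc_ids in the response is what makes the answer traceable.
    """
    evidence = retrieve(query, corpus)
    return {
        "answer": f"[model answer grounded on {len(evidence)} passages]",
        "citations": [p.doc_id for p in evidence],
    }


# Usage with a toy corpus.
corpus = [
    Passage("APRA-CPS-230", "Operational risk management requirements for regulated entities."),
    Passage("ISO-42001", "AI management system standard covering governance and accountability."),
    Passage("internal-memo-7", "Quarterly update on office relocation."),
]
print(answer_with_citations("What governance requirements apply to AI risk management?", corpus))
```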
In the world of finance and compliance, Know Your Customer (KYC) processes are crucial but often cumbersome. Traditional methods can be time-consuming, error-prone, and resource-intensive. But what if you could streamline your KYC process with the power of AI?

At sike.ai, we're revolutionizing the way businesses handle KYC. Our secure intelligent knowledge engine is designed to integrate seamlessly into your existing workflows, making your KYC process faster, more accurate, and more efficient. With sike.ai, you can:

- Automate Data Collection: Reduce manual entry and human error by automating the collection and verification of customer information (a toy example follows below this post).
- Enhance Accuracy: Utilize advanced AI models to cross-reference and validate data, ensuring compliance and reducing the risk of fraud.
- Customize Workflows: Tailor the KYC process to fit your specific needs, ensuring that every step aligns with your regulatory requirements and business goals.
- Maintain Privacy: Keep your data secure with our privacy-first approach, which ensures that sensitive information is handled with the utmost care.

Imagine a KYC process that adapts to your needs, not the other way around. With sike.ai, it's "Your Workflows, Your Way." Join us in transforming the future of KYC. Learn more at sike.ai.

Mark Dwyer Arturo R. Timothy Murphy Ivan Chan, CPA CA Mark Monfort

#AI #KYC #Compliance #WorkflowAutomation #sikeai #YourWorkflowsYourWay
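To illustrate the kind of rule-based check an AI-assisted KYC workflow typically sits on top of, here is a minimal sketch. The field names, validation rules, and escalation logic are our assumptions for illustration only, not sike.ai's product behaviour or any specific regulatory requirement.

```python
import re
from datetime import date


def validate_kyc_record(record: dict) -> list[str]:
    """Return a list of issues; an empty list means the record can proceed automatically."""
    issues = []
    if not record.get("full_name", "").strip():
        issues.append("missing full name")
    if not re.fullmatch(r"[A-Z0-9]{6,12}", record.get("document_number", "")):
        issues.append("document number has unexpected format")
    expiry = record.get("document_expiry")
    if expiry is None or expiry < date.today():
        issues.append("identity document expired or missing expiry date")
    return issues


# Usage: clean records pass straight through; anything else is routed to a human reviewer,
# which is where the human-in-the-loop oversight discussed elsewhere on this page comes in.
record = {"full_name": "Jane Citizen", "document_number": "PA1234567", "document_expiry": date(2031, 5, 1)}
issues = validate_kyc_record(record)
print("escalate to reviewer:" if issues else "auto-approved", issues)
```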
Great insights from Sarah Kaur and our CXO Mark Monfort on how #AI and #blockchain can transform how we govern.
Winner of the Women in AI Asia Pacific Awards in Creative Industries. Human-centred AI is my jam. I contribute to impact by supporting forward-thinking leaders to use emerging technology thoughtfully. Views are my own.
After attending the CEDA - Committee for Economic Development of Australia roundtable on AI governance last week, I've been exploring what it means to try to manage a "digital shapeshifter" like AI in a complex ecosystem like an organisation. I think traditional governance models are proving inadequate. Collaborating with Mark Monfort, here are two articles for lunchtime reading on governing for and with AI as a shapeshifter!

Part 1 frames AI as something "novel" with unique characteristics that demand new governance approaches, drawing on complexity theory for principled responses.
Part 2 explores using AI as part of governance, with use cases like enhancing explainability through Retrieval-Augmented Generation and leveraging blockchain for transparent AI decision tracking.

Enjoy! https://lnkd.in/gQa7shXv https://lnkd.in/gT_EpZAH
sike.ai is excited to share that we've been selected for the x15ventures Community Day! Australia's Commonwealth Bank venture-scaling arm, x15ventures, is seeking startups reimagining the customer and employee experience with data and artificial intelligence (AI) for its annual Xccelerate program, which helps early-stage founders explore pathways to partnership with the Bank.

Arturo R. Mark Dwyer Nick Bishop CFA Mark Monfort Ivan Chan, CPA CA