November 23, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
The first thing to note about AI compliance today is that few laws and other regulations are currently on the books that impact the way businesses use AI. Most regulations designed specifically for AI remain in draft form. That said, there are a host of other regulations — like the General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA), and the Personal Information Protection and Electronic Documents Act (PIPEDA) — that have important implications for AI. These compliance laws were written before the emergence of modern generative AI technology placed AI onto the radar screens of businesses (and regulators) everywhere, and they mention AI sparingly, if at all. But these laws do impose strict requirements related to data privacy and security. Since AI and data go hand-in-hand, you can't deploy AI in a compliant way without ensuring that you manage and secure data as current regulations require. This is why businesses shouldn't think of AI as an anything-goes space due to the lack of regulations focused on AI specifically. Effectively, AI regulations already exist in the form of data privacy rules.
Like most types of hardware, AI accelerators can run either on-prem or in the cloud. An on-prem accelerator is one that you install in servers you manage yourself. This requires you to purchase the accelerator and a server capable of hosting it, set them up, and manage them on an ongoing basis. A cloud-based accelerator is one that a cloud vendor makes available to customers over the internet using an IaaS model. Typically, to access a cloud-based accelerator, you'd choose a cloud server instance designed for AI. For example, Amazon offers EC2 cloud server instances that feature its Trainium AI accelerator chip. Google Cloud offers Tensor Processing Units (TPUs), another type of AI accelerator, as one of its cloud server options. ... Some types of AI accelerators are only available through the cloud. For instance, you can't purchase the AI chips developed by Amazon and Google for use in your own servers. You have to use cloud services to access them. ... Like most cloud-based solutions, cloud AI hardware is very scalable. You can easily add more AI server instances if you need more processing power. This isn't the case with on-prem AI hardware, which is costly and complicated to scale up.
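The difference between the two models can be made concrete: acquiring a cloud-based accelerator is an API call rather than a hardware purchase. A minimal sketch using boto3, assuming an AWS account with credentials configured — `trn1.2xlarge` is a real Trainium-backed EC2 instance type, but the AMI ID here is a placeholder, not a real image:

```python
# Sketch: "buying" a cloud AI accelerator is a provisioning request, not a
# hardware purchase. This composes an EC2 RunInstances request for a
# Trainium-backed (trn1 family) instance. The AMI ID is a placeholder.

def trainium_request(ami_id: str, count: int = 1) -> dict:
    """Compose the parameters for an EC2 RunInstances call that asks for
    a Trainium-backed instance."""
    return {
        "ImageId": ami_id,               # e.g. an AWS Deep Learning AMI
        "InstanceType": "trn1.2xlarge",  # smallest Trainium instance size
        "MinCount": count,
        "MaxCount": count,               # scaling up = raising this number
    }

request = trainium_request("ami-xxxxxxxx")
# With credentials configured, this would be passed to boto3:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   ec2.run_instances(**request)
print(request["InstanceType"])
```

The same request with a larger `count` illustrates the scalability point above: adding accelerator capacity in the cloud is a parameter change, whereas scaling on-prem hardware means buying and racking more servers.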
Platform engineering has provided a useful escape hatch at just the right time. Its popularity has grown strongly, with a well-attended inaugural platform engineering day at KubeCon Paris in early 2024 confirming attendee interest. A platform engineering day was part of the KubeCon NA schedule this past week and will also be included at next year’s KubeCon in London. “I haven't seen platform engineering pushed top down from a C-suite. I've seen a lot of guerilla stuff with platform and ops teams just basically going out and doing a skunkworks thing and sneaking it into production and then making a value case and growing from there,” said Keith Babo, VP of product and marketing at Solo.io. ... “If anyone ever asks me what’s my definition of platform engineering, I tend to think of it as DevOps at scale. It’s how DevOps scales,” says Kennedy. The focus has shifted away from building cloud native technology, done by developers, to using cloud native technology, which is largely the realm of operations. That platform engineering should start to take over from DevOps in this ecosystem may not be surprising, but it does highlight important structural shifts.
According to the OECD, AI is defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments.” The vision for Responsible AI is clear: establish global auditing standards, ensure transparency, and protect privacy through secure data governance. Yet, achieving Responsible AI requires more than compliance checklists; it demands proactive governance. For example, the EU’s AI Act takes a hardline approach to regulating high-risk applications like real-time biometric surveillance and automated hiring processes, whereas the U.S., under President Biden’s Executive Order on Safe, Secure, and Trustworthy AI, emphasizes guidelines over strict enforcement. ... AI is becoming the linchpin of cybersecurity and national security strategies. State-backed actors from China, Iran, and North Korea are weaponizing AI to conduct sophisticated cyber-attacks on critical infrastructure. The deployment of Generative Adversarial Networks (GANs) and WormGPT is automating cyber operations at scale, making traditional defenses increasingly obsolete. In this context, a cohesive, enforceable framework for AI governance is no longer optional but essential.
Voice biometrics are making waves across multiple industries. Here’s a look at how different sectors can leverage this technology for a competitive edge:
Financial services: Banks and financial institutions are actively integrating voice verification into call centers, allowing customers to authenticate themselves with their voice and eliminating the need for secret words or PIN codes. This strengthens security, reduces the time and cost per customer call, and enhances the customer experience.
Automotive: With the rise of connected vehicles, voice is already heavily used through integrated digital assistants that provide hands-free access to in-car services like navigation, settings, and communications. Adding voice recognition allows such in-car services to be personalized for the driver and opens the possibility of further enhancements such as commerce. Automotive brands can integrate voice recognition to offer seamless access to new services like parking, fueling, charging, and curbside pick-up by utilizing in-car payments that boost security, convenience, and customer satisfaction.
Healthcare: Healthcare providers can use voice authentication to securely verify patient identities over the phone or via telemedicine. This ensures that sensitive information remains protected while providing a seamless experience for patients who may need hands-free options.
While rate-limiting is an essential tool for protecting your system from traffic overloads, applying it directly at the application layer — whether for microservices or legacy applications — is often a suboptimal strategy. ... Legacy systems operate differently. They often rely on vertical scaling and have limited flexibility to handle increased loads. While it might seem logical to apply rate-limiting directly to protect fragile legacy systems, this approach usually falls short. The main issue with rate-limiting at the legacy application layer is that it’s reactive. By the time rate-limiting kicks in, the system might already be overloaded. Legacy systems, lacking the scalability and elasticity of microservices, are more prone to total failure under high load, and rate-limiting at the application level can’t stop this once the traffic surge has already reached its peak. ... Rate-limiting should be handled further upstream rather than deep in the application layer, where it either conflicts with scalability (in microservices) or arrives too late to prevent failures. This leads us to the API gateway, the strategic point in the architecture where traffic control is most effective.
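The "handle it upstream" argument can be sketched with a classic token-bucket limiter of the kind a gateway applies in front of a fragile backend. This is an illustrative Python sketch, not the implementation of any particular gateway product; the class and method names are my own:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter of the kind an API gateway applies
    upstream, so excess traffic is rejected before it ever reaches a
    fragile legacy backend."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True      # forward the request downstream
        return False         # shed the request (e.g. HTTP 429) at the edge

# Gateway-side usage: check the bucket before proxying to the legacy app.
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
print(results.count(True))  # an initial burst passes; the excess is shed
```

Because the check runs before the request is proxied, the overload never reaches the legacy system — which is exactly the proactive behavior the application-layer approach cannot provide.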