Iron Clad Security Credits

Original post on Medium: Iron Clad Security Credits | by Sam Bobo | Jun 2024

Security is a feature we as humans take for granted, with the exception of highly sensitive or personal aspects of our lives (home security, the safety of our children while away from us, and private information such as Social Security numbers or health records), and this passive trust in organizations persists until a breach occurs that personally affects you. Taking out a credit card to pay for a meal at a restaurant, we do not typically think to ourselves, "Is this terminal and the software underpinning it PCI compliant?" Nor, sitting in our doctor's office as they take notes in an Electronic Medical Record, do we think, "Hmmm, I hope this service is HIPAA compliant!" Again, we passively trust that organizations across the entire value chain of a service we use comply with the latest security practices and the laws put in place to protect one's privacy and security.

In the wake of Artificial Intelligence and the headstrong race for training data (particularly for LLMs), or even peering back to the early algorithmic days of Facebook and Netflix building personalized ads and recommendations from our information to target us and show the power of Recommendation Systems, society has taken a keen eye to data ownership, privacy, and the security of our information. In recent years, Europe has taken a strong stance on data ownership by passing GDPR (the General Data Protection Regulation), and California has passed laws such as the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) to protect consumer data. Furthermore, new precedent about one's own likeness is being crafted in the courts as they grapple with low-data-cost replication of media such as audio, pictures, video, and more!

These laws and regulations have come from increased awareness of how personal data is used by algorithms for recommendation or generation. We as consumers are more aware than ever of our personal data, security, and privacy. I have been in rooms where guests stated, "I will never put a Google Home in my house; it is always listening to me!" While that statement is only true insofar as the device listens for a wake word, the fact of the matter is that consumer trust has been violated in specific instances and significantly diminished as a result. I have always operated under the philosophy that trust is extremely easy to erase and massively difficult to build up.

Artificial Intelligence as an entire technological practice faces immense friction in adoption because of trust. Should trust be eroded among the consumer base (and, per the Google Home example, it already has been), gaining widespread adoption will be extremely difficult. Two companies hardening their security postures and leveraging security "credits" to gain the trust of the general populace are Microsoft and Apple. Until Apple's Worldwide Developer Conference ("WWDC") keynote on June 10th, I had not drawn the strategy comparison, but it has become ever more clear, and I seek to share it with you in this article!

Microsoft

On May 3, 2024, CEO Satya Nadella issued a memo to all Microsoft employees detailing the absolute need for a security-first mindset in approaching current, legacy, and future products and services backed by the company's brand. From the internal memo, covered by The Verge:

Last November, we launched our Secure Future Initiative (SFI) with this responsibility in mind, bringing together every part of the company to advance cybersecurity protection across both new products and legacy infrastructure. I’m proud of this initiative, and grateful for the work that has gone into implementing it. But we must and will do more.

and later in the memo

If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security. In some cases, this will mean prioritizing security above other things we do, such as releasing new features or providing ongoing support for legacy systems. This is key to advancing both our platform quality and capability such that we can protect the digital estates of our customers and build a safer world for all.

Note: in full transparency, I am a Microsoft employee. I am taking an analyst-like view of public information (again, published by The Verge among other news outlets) and will not provide any commentary outside of that view.

Microsoft, like many high-profile hyperscale organizations, is a frequent target of state-sponsored espionage, cyber attacks, and other domestic and foreign meddling, given its service to government entities and broad societal reach, which yields a high impact if compromised. When a company is compromised, the financial impact can run into the millions; the costs to brand equity and goodwill, however, are detrimental, even near crippling depending on the size of the company.

Fast forward to Microsoft Build, where Satya announced in the opening keynote the concept of the Copilot+ PC: a threshold specification requiring enough processing power to handle on-device Artificial Intelligence computations. For OEM manufacturers who build Copilot+ compliant PCs, Windows will enable new AI-powered capabilities by default; one of those announced was Recall:

With Recall, you have an explorable timeline of your PC’s past. Just describe how you remember it and Recall will retrieve the moment you saw it. Any photo, link, or message can be a fresh point to continue from. As you use your PC, Recall takes snapshots of your screen. Snapshots are taken every five seconds while content on the screen is different from the previous snapshot. Your snapshots are then locally stored and locally analyzed on your PC. Recall’s analysis allows you to search for content, including both images and text, using natural language. Trying to remember the name of the Korean restaurant your friend Alice mentioned? Just ask Recall and it retrieves both text and visual matches for your search, automatically sorted by how closely the results match your search. Recall can even take you back to the exact location of the item you saw.
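
Recall's internals are not public, but the quoted description implies a simple loop: snapshot the screen when its content changes, analyze it locally, and later search the local index in natural language. Below is a minimal, hypothetical sketch of that pattern in Python; real screen capture and OCR are stubbed out as plain text, and the similarity ranking merely stands in for whatever semantic search Recall actually uses.

```python
# Hypothetical sketch of a Recall-style local snapshot index.
# Recall's real implementation is not public; plain strings stand in
# for platform screenshot and OCR output.
import time
from dataclasses import dataclass, field
from difflib import SequenceMatcher

@dataclass
class Snapshot:
    timestamp: float
    text: str  # screen content, stored and analyzed locally

@dataclass
class LocalIndex:
    snapshots: list[Snapshot] = field(default_factory=list)

    def add(self, text: str) -> None:
        # Only store a snapshot when the screen content has changed.
        if self.snapshots and self.snapshots[-1].text == text:
            return
        self.snapshots.append(Snapshot(time.time(), text))

    def search(self, query: str) -> list[Snapshot]:
        # Rank snapshots by textual similarity to the query, a crude
        # stand-in for Recall's natural-language search.
        def score(s: Snapshot) -> float:
            return SequenceMatcher(None, query.lower(), s.text.lower()).ratio()
        return sorted(self.snapshots, key=score, reverse=True)

index = LocalIndex()
index.add("Chat with Alice: try the Korean restaurant Bibim House")
index.add("Spreadsheet: Q3 budget review")
best = index.search("Korean restaurant Alice mentioned")[0]
print(best.text)
```

The property that matters for the trust discussion is that the index never leaves the machine; the controversy arose over whether that local store was itself adequately protected.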

In order to permit a company to take ongoing snapshots of a computer for indexing and recall (no pun intended), there must be immense trust between the software provider and the user. For many, the computer or personal tablet fits the earlier categorization of a device for which users move from a passive assumption of security to active awareness, namely when the concept of "surveillance" enters the picture, as it did with Recall for many users. Many who dug deep into Recall's technical implementation shared reasons not to enable the capability, leading Microsoft to pull the release on June 10th.

Apple

Apple, by contrast, was extremely well positioned to capitalize on the security credits required to introduce Artificial Intelligence capabilities into the hands of consumers who, at this juncture, have taken a critical eye to AI amid the ongoing conversation about data privacy and security clouding the technological landscape.

Apple has historically taken a security-first mentality (some may argue too much) for years now:

  • App Tracking Transparency (ATT)
  • App Store Requirements & Payments Platform
  • Lockdown mode & Stolen Device Protection
  • Hide-My-Email on iCloud
  • End-to-End Encryption
  • Refusing to unlock iPhones for Law Enforcement

The above are a small subset of Apple's overall security-focused portfolio. In fact, Apple's marketing team has spent a considerable amount of advertising real estate boasting about the security features of the iPhone and the entire Apple ecosystem, making security one of its differentiating factors against the more open Android ecosystem.

Many analysts and critics criticized Apple for a delayed entry into the Artificial Intelligence space and took a critical eye to its dual strategy of partnership (OpenAI) and proprietary AI models. However, at the Apple Worldwide Developer Conference (WWDC), Apple Intelligence (AI) debuted as the focal point underpinning iOS 18. Apple Intelligence is an umbrella brand name encompassing AI features and functionality across the Apple portfolio without attaching the "Artificial" nomenclature to AI, for obvious reasons, with security and trust at the forefront. Readers can learn more about the Apple Intelligence announcements on Apple's website, but what particularly intrigued me was personal context.

Personal context uses on-device processing, encrypted within Apple Silicon, to learn about the user across all facets of an iPhone (writing style, usage trends, and other personal information) for everything from recommendation-like activities, such as surfacing more urgent notifications, to generative capabilities, like aggregating and summarizing multiple email threads on a topic.
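
Apple has not published how personal context is implemented, but the described behavior (learning usage trends on-device and surfacing more urgent notifications) can be illustrated with a small, entirely hypothetical heuristic. The signals and weights below are illustrative assumptions only, and nothing in this sketch leaves the device.

```python
# Hypothetical sketch of on-device "personal context" scoring.
# Apple has not disclosed this logic; the signals and weights are
# illustrative assumptions, and all state stays on-device.
from dataclasses import dataclass

@dataclass
class Notification:
    sender: str
    text: str

def urgency(note: Notification, frequent_contacts: set[str]) -> float:
    """Score a notification using locally learned signals."""
    score = 0.0
    if note.sender in frequent_contacts:   # learned from on-device usage trends
        score += 0.5
    if any(w in note.text.lower() for w in ("today", "urgent", "asap")):
        score += 0.5                       # time-sensitive language
    return score

contacts = {"alice@example.com"}           # built up locally over time
notes = [
    Notification("alice@example.com", "Pickup moved to today at 3pm"),
    Notification("deals@shop.example", "Weekend sale starts now"),
]
notes.sort(key=lambda n: urgency(n, contacts), reverse=True)
print([n.sender for n in notes])  # personal, time-sensitive items surface first
```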

Apple importantly made the following security statement, tracking with its brand and promise of security:

Apple Intelligence is designed to protect your privacy at every step. It’s integrated into the core of your iPhone, iPad, and Mac through on-device processing. So it’s aware of your personal information without collecting your personal information. And with groundbreaking Private Cloud Compute, Apple Intelligence can draw on larger server-based models, running on Apple silicon, to handle more complex requests for you while protecting your privacy.

Apple earned the right to debut these AI-focused features through both its strict focus on security and its tight integration between hardware (Apple Silicon) and software (iOS). I anticipate that Apple will be markedly more successful with iOS 18 upgrades and usage of its A(pple)I capabilities than Microsoft was with Recall.

The Hybrid Cloud Approach

Where both companies thrive is in a new architectural pattern emerging to overcome the trust obstacle and to optimize for latency: the hybrid cloud. For both Microsoft and Apple, sensitive computing occurs on-device with a portfolio of Machine Learning models (for example, the Phi models from Microsoft, and Apple's proprietary models) to reduce latency (a critical factor in user experience) and maintain privacy (well… so long as the information is encrypted). Workloads that are much larger in nature, such as long document summaries and non-personal open-ended questions like a search query, are outsourced to cloud-hosted and third-party models: OpenAI, for both companies.
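
As a concrete illustration of this routing pattern, here is a hedged sketch. The functions run_on_device and run_in_cloud are hypothetical placeholders, not real Microsoft or Apple APIs, and the routing heuristics (personal data stays local, oversized prompts go to the cloud) are my own reading of the pattern described above.

```python
# Hedged sketch of the hybrid on-device/cloud routing pattern.
# run_on_device / run_in_cloud are placeholders, not vendor APIs;
# the thresholds and rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    contains_personal_data: bool

ON_DEVICE_WORD_BUDGET = 2_000  # small language models have tight context limits

def run_on_device(req: Request) -> str:
    return f"[on-device SLM] {req.prompt[:40]}..."

def run_in_cloud(req: Request) -> str:
    return f"[cloud LLM] {req.prompt[:40]}..."

def route(req: Request) -> str:
    if req.contains_personal_data:
        return run_on_device(req)   # privacy: personal data stays local
    if len(req.prompt.split()) > ON_DEVICE_WORD_BUDGET:
        return run_in_cloud(req)    # capacity: big jobs need bigger models
    return run_on_device(req)       # latency: default to local

print(route(Request("Summarize my email thread with Bob", True)))
print(route(Request("long document " * 3000, False)))
```

The design choice worth noting is that privacy, not capability, is the first gate: a request only becomes a candidate for the cloud once it contains nothing personal.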

Hence the heavy investment in chips and small language models, enabling this hybrid computing approach to deliver Machine Learning and AI capabilities closer to consumers: personalized and "trusted" (quotes intentional), where consumers need the information most. I do believe this will become a dominant design paradigm that all hardware-software manufacturers employ (even if OEMed through third parties, as with the Copilot+ PCs).

More and more, we will see specialized computing. I highly anticipate that Edge Computing will take the forefront in many adjacent accessories and processes, such as medical services or personal fitness. Again, I continue to reiterate that this is a fast-growing and evolving field, and I remain committed in my enthusiasm to continue to learn, draw patterns, and analyze this space for you!

Sam Bobo

Product Manager of Artificial Intelligence @ Microsoft specializing in Conversational AI and Generative AI technologies | Former IBM Watson
