September 16, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
The world has now caught up to what was previously science fiction. We are now designing AI that is in some ways far more advanced than anything Isaac Asimov could have imagined, and in other ways far more limited. Though originally conceived as fictional principles, Asimov’s Three Laws of Robotics have inspired efforts to adapt and extend them for modern enterprise AI-based solutions. Here are some notable examples: Human-Centric AI Principles - Modern AI ethics frameworks often emphasize human safety and well-being, echoing Asimov’s First Law. ... Ethical AI Guidelines - Enterprises are increasingly developing ethical guidelines for AI that align with Asimov’s Second Law. These guidelines ensure that AI systems obey human instructions while prioritizing ethical considerations. ... Bias Mitigation and Fairness - In line with Asimov’s Third Law, there is a strong focus on protecting the integrity of AI systems. This includes efforts to mitigate biases and ensure fairness in AI outputs. ... Enhanced Ethical Frameworks - Some modern adaptations include additional principles, such as the “Zeroth Law,” which prioritizes humanity’s overall well-being.
Neurodiversity, which includes ADHD, autism spectrum disorder, and dyslexia, presents unique challenges for individuals, yet it also comes with distinctive strengths. People on the autism spectrum often excel at logical thinking, individuals with ADHD can demonstrate exceptional attention to detail when engaged in areas of interest, and those with dyslexia frequently display creative thinking. However, software design often fails to accommodate neurodiverse users. Websites or apps with cluttered interfaces can overwhelm users with ADHD, sites that rely heavily on text make it harder for individuals with dyslexia to process information, and certain stimuli, such as sounds or bright colors, may overwhelm someone with autism. Users should not have to adapt to poorly designed software; instead, software designers must create products that meet these users’ needs. Waiting to receive accessibility training on the job may be too late, because by then designers and developers will need to relearn foundational skills. Moreover, accessibility still does not seem to be a priority in the workplace: most job postings for relevant positions do not require these skills.
When you know that provenance is a vector for a software supply chain attack, you can take action to protect it. The first step is to collect the provenance data for your dependencies, where it exists; projects that meet SLSA level 1 or higher produce provenance data you can inspect and verify. Ensure that trusted identities generate provenance: if you can prove that provenance data came from a system you own and secured, or from a known good actor, it is easier to trust. Cryptographic signing of provenance records provides assurance that the record was produced by a verifiable entity, either a person or a system holding the appropriate cryptographic key. Store provenance data in a write-once repository so you can verify later whether any provenance data was modified; modification, whether malicious or accidental, is a warning sign that your dependencies may have been tampered with. It is also important to protect the provenance you produce for yourself and any downstream users. Implement strict access and authentication controls to ensure only authorized users can modify provenance records.
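As a minimal sketch of what that verification might look like in Python, assuming Ed25519-signed provenance records and a digest field named subject_sha256 (both are illustrative assumptions; real SLSA provenance uses the in-toto statement format, with digests nested under subject entries):

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_provenance(record_bytes: bytes, signature: bytes,
                      trusted_key_bytes: bytes) -> dict:
    """Accept a provenance record only if a trusted identity signed it."""
    key = Ed25519PublicKey.from_public_bytes(trusted_key_bytes)
    try:
        key.verify(signature, record_bytes)  # raises if the record was altered
    except InvalidSignature:
        raise ValueError("signature check failed; do not trust this record")
    return json.loads(record_bytes)


def artifact_matches(record: dict, artifact_path: str) -> bool:
    """Compare the downloaded artifact's SHA-256 digest to the signed record.
    The field name 'subject_sha256' is a placeholder, not a standard."""
    with open(artifact_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == record.get("subject_sha256")
```

Pairing the signature check with a digest comparison ties the record to the exact bytes you downloaded; keeping both in a write-once repository then lets you detect any later modification.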
The term “technical” can introduce bias into hiring and career development, leading to decisions swayed more by perception than by a candidate’s qualifications. Hiring decisions can reflect personal biases when candidates do not fit a stereotypical image or lack qualifications that are not actually essential for the role. For instance, a candidate might be viewed as not technical enough for lacking server administration experience, even when the job primarily involves software development; unconscious bias thus skews evaluations toward perception rather than actual skill. To address this, it is important to clearly define the skills a position requires. Rather than broadly labeling a candidate “not technical enough,” it is more effective to specify areas for improvement, such as “needs advanced database management skills.” This approach highlights areas where candidates excel, such as developing user-centric reports, while clarifying specific shortcomings. Stating requirements concretely, such as “requires experience building scalable applications with technology Y,” makes the hiring process more transparent and objective.
The single biggest thing enterprises are doing to address energy concerns is moving toward more energy-efficient second-generation chips, says Duncan Stewart, a research director with advisory firm Deloitte Technology, via email. "These chips are a bit faster at accelerating training and inference -- about 25% better than first-gen chips -- and their efficiency is almost triple that of first-generation chips." He adds that almost every chipmaker is now targeting efficiency as the most important chip feature. In the meantime, developers will continue to play a key role in optimizing AI energy needs, as well as validating whether AI is even required to achieve a particular outcome. "For example, do we need to use a large language model that requires lots of computing power to generate an answer from enormous data sets, or can we use more narrow and applied techniques, like predictive models that require much less computing because they’ve been trained on much more specific and relevant data sets?" Warburton asks. "Can we utilize compute instances that are powered by low-carbon electricity sources?"
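A minimal sketch of the narrow-model idea Warburton describes, using scikit-learn; the support-ticket examples and labels below are invented placeholders, but the pattern shows how a small, task-specific classifier can answer a question that might otherwise be routed to a far more compute-hungry LLM:

```python
# A tiny text classifier: TF-IDF features plus logistic regression,
# trained on task-specific examples instead of calling an LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tickets = ["password reset not working", "invoice charged twice",
           "app crashes on startup", "refund for duplicate payment"]
labels = ["account", "billing", "bug", "billing"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, labels)

# Classify a new ticket; expected label here is 'billing'.
print(model.predict(["charged two times for one order"]))
```

Trained on real historical data, a pipeline like this serves predictions on a single CPU core, with no accelerator required.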
As for their use of private cloud, some of the rationale is purely a cost calculation. For some workloads, it’s cheaper to run on premises. “The cloud is not cheaper. That’s a myth,” one of the IT execs told me, while acknowledging cost wasn’t their primary reason for embracing cloud anyway. I’ve been noting this for well over a decade. Convenience, not cost, tends to drive cloud spend, and it leads to a great deal of cloud sprawl, as Osterman Research has found. ... You want developers, architects, and others to feel confident with new technology. You want to turn them into allies, not holdouts. Jassy declared, “Most of the big initial challenges of transforming the cloud are not technical” but rather “about leadership -- executive leadership.” That’s only half true. It’s true that developers thrive when they have executive air cover. This support makes it easier for them to embrace a future they likely already want. But they also need that executive support to include time and resources to learn the technologies and techniques necessary for executing that new direction. If you want your company to embrace new directions faster, whether cloud or AI or whatever it may be, make it safe for them to learn.