September 22, 2023

HR Leaders’ strategies for elevating employee engagement in global organisations

In the age of AI, HR technologies have emerged as powerful tools for enhancing employee engagement by streamlining HR processes, improving communication, and personalising the employee experience. Sreedhara added: "By embracing HR Tech, we can enhance the employee experience by reducing administrative burdens, improving access to information, and enabling employees to focus on more meaningful aspects of their work. Moreover, these technologies can contribute to greater employee engagement. Enhancing the employee experience via HR tech and tools can improve efficiency and empower employees to take more control of their work-related tasks. We have also enabled some self-service technologies, such as: an employee portal that serves all HR-related tasks and provides access to policies and processes across the employee life cycle (onboarding, performance management, benefits enrolment, and expense management); employee feedback and surveys; and a databank for predictive analysis (early-warning systems) and for managing employee engagement."


Bolstering enterprise LLMs with machine learning operations foundations

Risk mitigation is paramount throughout the entire lifecycle of the model. Observability, logging, and tracing are core components of MLOps processes, which help monitor models for accuracy, performance, data quality, and drift after their release. This is critical for LLMs too, but there are additional infrastructure layers to consider. LLMs can "hallucinate," occasionally outputting false information. Organizations need proper guardrails—controls that enforce a specific format or policy—to ensure LLMs in production return acceptable responses. Traditional ML models rely on quantitative, statistical approaches to apply root cause analyses to model inaccuracy and drift in production. With LLMs, this is more subjective: it may involve running a qualitative scoring of the LLM's outputs, then running them against an API with pre-set guardrails to ensure an acceptable answer. Governance of enterprise LLMs will be both an art and a science, and many organizations are still learning how to codify it into actionable risk thresholds.
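The guardrail idea described above can be sketched in a few lines. This is a minimal, illustrative example, not a production policy engine: the `BLOCKLIST` terms, the sentence-terminator format rule, and the `apply_guardrails` function are all hypothetical choices made for demonstration, standing in for whatever format and policy checks an organization actually codifies.

```python
import re

# Illustrative policy terms only; a real deployment would use a curated,
# organization-specific policy (PII patterns, topics, output schemas, etc.).
BLOCKLIST = {"ssn", "password"}


def apply_guardrails(response: str) -> tuple[bool, str]:
    """Return (accepted, reason) for an LLM response.

    Two toy guardrails: a format check (the response must end like a
    complete sentence) and a policy check (no blocklisted terms).
    """
    # Format guardrail: require a sentence terminator at the end.
    if not re.search(r"[.!?]\s*$", response):
        return False, "format: response is not a complete sentence"

    # Policy guardrail: reject responses containing disallowed terms.
    lowered = response.lower()
    for term in BLOCKLIST:
        if term in lowered:
            return False, f"policy: disallowed term '{term}'"

    return True, "accepted"
```

In practice this check would sit between the model and the caller, so a rejected response is regenerated, redacted, or escalated rather than returned to the user.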


Reimagining Application Development with AI: A New Paradigm

AI-assisted pair programming is a collaborative coding approach in which an AI system, like GitHub Copilot or TestPilot, assists developers during coding. It's an increasingly common approach that significantly impacts developer productivity. In fact, GitHub Copilot is now behind an average of 46 percent of developers' code, and users are seeing 55 percent faster task completion on average. For new software developers, or those interested in learning new skills, AI-assisted pair programming acts as training wheels for coding. With code snippet suggestions, developers can avoid struggling with beginner pitfalls like language syntax. Tools like ChatGPT can act as a personal, on-demand tutor, answering questions, generating code samples, and explaining complex code syntax and logic. These tools dramatically speed the learning process and help developers gain confidence in their coding abilities. Building applications with AI tools hastens development and produces more robust code.


Don't Let AI Frenzy Lead to Overlooking Security Risks

"Everybody is talking about prompt injection or backporting models because it is so cool and hot. But most people are still struggling with the basics when it comes to security, and these basics continue to be wrong," said John Stone - whose title at Google Cloud is "chaos coordinator" - while speaking at Information Security Media Group's London Cybersecurity Summit. Successful AI implementation requires a secure foundation, meaning that firms should focus on remediating vulnerabilities in the supply chain, source code, and larger IT infrastructure, Stone said. "There are always new things to think about. But the older security risks are still going to happen. You still have infrastructure. You still have your software supply chain and source code to think about." Andy Chakraborty, head of technology platforms at Santander U.K., told the audience that highly regulated sectors such as banking and finance must especially exercise caution when deploying AI solutions that are trained on public data sets.


The second coming of Microsoft's do-it-all laptop is more functional than ever

Microsoft's Surface Laptop Studio 2 is really unlike any other laptop on the market right now. The screen is held up by a tiltable hinge that lets it switch from what I'll call "regular laptop mode" to stage mode (the display is angled like the image above) to studio mode (the display is laid flat, screen-side up, like a tablet). The closest thing I can think of is, well, the previous Laptop Studio model, which sports the same shape-shifting form factor. But after today, if you're the customer for Microsoft's screen-tilting Surface device, then your eyes will be all over the latest model, not the old. That's a big deal, because, unlike its predecessor, the new Surface Laptop Studio 2 features an improved 13th Gen Intel Core H-class processor, NVIDIA's latest RTX 4050/4060 GPUs, and an Intel NPU on Windows for video calling optimizations (which never hurts to have). Every Microsoft expert on the demo floor made it clear to me that gaming and content creation workflows are still the focus of the Studio laptop, so the changes under the hood make sense.


Why more security doesn’t mean more effective compliance

Worse, the more tools there are to manage, the harder it might be to prove compliance with an evolving patchwork of global cybersecurity rules and regulations. That’s especially true of legislation like DORA, which focuses less on prescriptive technology controls and more on providing evidence of why policies were put in place, how they’re evolving, and how organizations can prove they’re delivering the intended outcomes. In fact, it explicitly states that security and IT tools must be continuously monitored and controlled to minimize risk. This is a challenge when organizations rely on manual evidence gathering. Panaseer research reveals that while 82% are confident they’re able to meet compliance deadlines, 49% mostly or solely rely on manual, point-in-time audits. This simply isn’t sustainable for IT teams, given the number of security controls they must manage, the volume of data they generate, and continuous, risk-based compliance requirements. They need a more automated way to continuously measure and evidence KPIs and metrics across all security controls.
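The automated, continuous measurement the passage calls for can be sketched simply: gather per-control status data, compute a coverage KPI for each control, and flag anything below a threshold instead of waiting for a point-in-time audit. The `ControlCheck` record, the 95% threshold, and the `coverage_kpis` function below are hypothetical illustrations, not any vendor's actual API.

```python
from dataclasses import dataclass


@dataclass
class ControlCheck:
    """Status snapshot for one security control (illustrative shape)."""
    control: str        # e.g. "mfa", "endpoint-av"
    assets_total: int   # assets the control should cover
    assets_passing: int # assets where the control is verified working


def coverage_kpis(checks, threshold=0.95):
    """Compute per-control coverage and flag controls below the threshold.

    Returns a dict mapping control name -> {"coverage", "compliant"},
    the kind of continuously refreshed evidence an auditor could query.
    """
    report = {}
    for c in checks:
        coverage = c.assets_passing / c.assets_total if c.assets_total else 0.0
        report[c.control] = {
            "coverage": round(coverage, 3),
            "compliant": coverage >= threshold,
        }
    return report
```

Run on a schedule against live telemetry rather than spreadsheets, a report like this turns compliance evidence into a continuous, risk-based measurement instead of a manual snapshot.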
