Tech CEOs Share 8 Top AI Trends to Watch in 2024
Paul Young
The Rise of AI-Fueled Malware
We will soon see the rise of generative AI-fueled malware that can essentially think and act on its own. This is a threat the U.S. should be particularly concerned about coming from nation-state adversaries.
We will see increasingly polymorphic attack patterns, meaning the artificial intelligence (AI) carefully evaluates the target environment, identifies the weakest point in the network or the best area to exploit, and transforms itself accordingly.
Passkey Adoption Will Increase
There’s a dark side to the AI boom that few consumers or businesses have realized: cybercriminals can now make their phishing attacks more credible, frequent, and sophisticated by leveraging the power of generative AI, such as WormGPT. As we enter 2024, this threat will grow in size and scale.
Against this backdrop, we’ll reach the tipping point for mass passkey adoption (although there will still be a significant period of transition before we reach a truly passwordless future).
However, passkeys will ultimately surpass passwords as the status quo technology once the consequences of not adopting a more secure, phishing-resistant form of authentication become clear in the wake of increasingly harmful and costly cyberattacks.
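The phishing resistance the authors describe comes from how passkeys (WebAuthn) bind each credential to the site it was registered for: the authenticator signs the server's challenge together with the relying-party ID, so a credential for the real site can never answer a challenge relayed through a look-alike domain. A toy sketch of that origin-binding property, using HMAC purely as a stand-in for the real public-key signature (all names here are illustrative, not the actual WebAuthn API):

```python
import hashlib
import hmac
import secrets

# Toy illustration of why passkeys resist phishing: the authenticator's
# response is bound to BOTH the challenge and the relying-party ID, so a
# credential registered for one origin cannot satisfy another.
# HMAC stands in for the real asymmetric signature used by WebAuthn.

def register(rp_id: str) -> bytes:
    """Create a per-site credential (here: a random per-site secret)."""
    return hashlib.sha256(rp_id.encode() + secrets.token_bytes(32)).digest()

def sign_assertion(credential: bytes, rp_id: str, challenge: bytes) -> bytes:
    """Authenticator response, bound to the challenge and the site."""
    return hmac.new(credential, rp_id.encode() + challenge, hashlib.sha256).digest()

def verify(credential: bytes, rp_id: str, challenge: bytes, assertion: bytes) -> bool:
    expected = hmac.new(credential, rp_id.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

cred = register("bank.example")
challenge = secrets.token_bytes(16)

# Legitimate origin: the assertion verifies.
ok = verify(cred, "bank.example", challenge,
            sign_assertion(cred, "bank.example", challenge))

# Phishing origin: the response is bound to the attacker's domain,
# so verification against the real site fails.
phished = verify(cred, "bank.example", challenge,
                 sign_assertion(cred, "bank-example.phish", challenge))
```

A stolen password works anywhere it is replayed; the sketch shows why a passkey assertion does not.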
Adding Safeguards to AI Models
Safety and privacy must continue to be a top concern for any tech company, regardless of whether it is AI-focused or not. When it comes to AI, ensuring that the model has the necessary safeguards, feedback loop, and, most importantly, mechanism for highlighting safety concerns is critical.
As organizations continue to rapidly adopt AI in 2024 for its efficiency, productivity, and data-democratization benefits, it’s important to ensure that as concerns are identified, there is a reporting mechanism to surface them, in the same way a security vulnerability would be identified and reported.
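A minimal sketch of what such a reporting mechanism could look like, treating flagged model outputs like vulnerability reports with a severity and a triage queue. The class and field names are hypothetical, not from any particular product:

```python
import datetime
from dataclasses import dataclass, field

# Hypothetical sketch: model outputs flagged by users or filters are
# captured as structured safety reports and triaged by severity, the
# same way a security vulnerability would be filed and reviewed.

@dataclass
class SafetyReport:
    prompt: str
    output: str
    reason: str
    severity: str  # e.g. "low" / "medium" / "high"
    reported_at: str = field(
        default_factory=lambda: datetime.datetime.now(
            datetime.timezone.utc).isoformat())

class SafetyReportQueue:
    def __init__(self) -> None:
        self._reports: list[SafetyReport] = []

    def file(self, report: SafetyReport) -> None:
        self._reports.append(report)

    def triage(self, severity: str) -> list[SafetyReport]:
        """Surface reports at a given severity for human review."""
        return [r for r in self._reports if r.severity == severity]

queue = SafetyReportQueue()
queue.file(SafetyReport("<prompt>", "<output>", "possible PII leak", "high"))
queue.file(SafetyReport("<prompt>", "<output>", "off-topic refusal", "low"))
high_priority = queue.triage("high")
```

The point is the feedback loop itself: concerns get a structured path to the people who can act on them, rather than disappearing into support tickets.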
LLMs Will Reshape Cloud Security
In 2024, the evolution of Generative AI (Gen AI) and Large Language Models (LLMs), initiated in 2023, is poised to redefine the cybersecurity chain, elevating efficiency and minimizing manpower dependencies in cloud security.
One example is detection tools fortified by LLMs. We’ll see LLMs bolster log analysis, providing early, accurate, and comprehensive detection of both known and elusive zero-day attacks.
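One concrete step such pipelines typically automate, sketched below under illustrative assumptions: collapse log lines into templates and flag the rare ones as candidates for deeper (LLM-driven) analysis, so the expensive model only sees the anomalies. The masking rules and threshold are hypothetical:

```python
import re
from collections import Counter

# Sketch of a log-triage pre-filter for LLM-backed detection:
# structurally identical lines are grouped into templates, and lines
# whose template is rare get escalated for deeper analysis.

def to_template(line: str) -> str:
    """Mask usernames, IPs, and numbers so similar lines group together."""
    line = re.sub(r"for \S+", "for <USER>", line)
    line = re.sub(r"\b\d{1,3}(\.\d{1,3}){3}\b", "<IP>", line)
    return re.sub(r"\d+", "<N>", line)

def rare_lines(logs: list[str], max_count: int = 1) -> list[str]:
    """Return lines whose template appears at most max_count times."""
    counts = Counter(to_template(l) for l in logs)
    return [l for l in logs if counts[to_template(l)] <= max_count]

logs = [
    "Accepted password for alice from 10.0.0.5 port 51234",
    "Accepted password for bob from 10.0.0.6 port 51240",
    "Accepted password for carol from 10.0.0.7 port 51301",
    "segfault in sshd at address 0x7f3a21",
]
anomalies = rare_lines(logs)  # only the one-off segfault line survives
```

The three login lines collapse to one common template and are dropped; the one-off crash line is what gets handed to the LLM for interpretation.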
Data Security ‘Risk Reduction’ Will Evolve
The concept of ‘risk reduction’ in data security will evolve in the next few years, in line with the rise in the use of Generative AI technologies.
Until recently, organizations implemented data retention and deletion policies to ensure minimal risk to their assets. As GenAI capabilities become more widespread and valuable for organizations, they will become more motivated to hold on to data for as long as possible in order to use it for training and testing these new capabilities.
Data security teams will, therefore, no longer be able to address risk by deleting unnecessary data, since the new business approach will be that any and all data may be needed at some point. This will bring about a change in how organizations perceive, assess, and address risk reduction in data security.
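The shift described above can be sketched as a disposition policy: instead of deleting everything past its retention window, high-training-value data is moved to a controlled tier (anonymized if it contains PII) rather than removed. Field names and thresholds below are hypothetical illustrations, not a real policy engine:

```python
from dataclasses import dataclass

# Illustrative sketch of retention policy evolving for GenAI: old data
# is no longer deleted by default; high-value data is retained with
# stronger controls to offset the added risk.

@dataclass
class Dataset:
    name: str
    contains_pii: bool
    age_days: int
    training_value: str  # "low" / "high"

def disposition(ds: Dataset, retention_days: int = 365) -> str:
    if ds.age_days <= retention_days:
        return "retain"
    # Previously: anything past retention_days was deleted outright.
    if ds.training_value == "high":
        return "anonymize-and-retain" if ds.contains_pii else "retain-for-training"
    return "delete"

old_transcripts = Dataset("chat_transcripts", contains_pii=True,
                          age_days=900, training_value="high")
stale_exports = Dataset("tmp_exports", contains_pii=False,
                        age_days=900, training_value="low")
decisions = (disposition(old_transcripts), disposition(stale_exports))
```

Risk reduction moves from "delete it" to "control it": anonymization, access restriction, and tiering become the levers where deletion used to be.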
An Erosion of Trust Surrounding AI Decision-Making
In a rapidly evolving technological landscape, the parallels between the adoption of cloud services and the current surge in artificial intelligence (AI) implementation are both striking and cautionary.
Just as organizations eagerly embraced cloud solutions for their transformative potential, the rush to adopt outpaced the development of robust security controls and compliance tools.
Developers Will Be More Efficient
This is a two-pronged topic for leadership to really think about in 2024. On one hand, CISOs and IT leaders need to think about how they’re going to securely consume generative AI within their own source code “kingdoms” across the enterprise.
With the likes of GitHub Copilot and ChatGPT, developers and organizations will be a lot more efficient, but these tools also introduce more risk of potential vulnerabilities we need to worry about.
On the other side, we need to think about how Application Security vendors will allow CISOs and IT leadership to leverage generative AI in their tools to run their programs more efficiently and drive productivity: using AI to speed up security outcomes such as security policy generation, identifying patterns and anomalies, finding and prioritizing vulnerabilities faster, and assisting with the incident response process.
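One of the outcomes named above, prioritizing vulnerabilities faster, can be illustrated with a simple ranking heuristic: weight the CVSS base score by whether the vulnerable code is actually reachable and whether exploitation is known in the wild. The weights and field names below are illustrative assumptions, not any vendor's actual scoring model:

```python
from dataclasses import dataclass

# Illustrative vulnerability-prioritization sketch: raw severity is
# boosted for known-exploited findings and discounted when the
# vulnerable code is not reachable from the application.

@dataclass
class Finding:
    id: str
    cvss: float      # 0.0 - 10.0 base severity score
    reachable: bool  # vulnerable function reachable from app code
    exploited: bool  # known exploitation in the wild

def priority(f: Finding) -> float:
    score = f.cvss
    score *= 1.5 if f.exploited else 1.0   # boost actively exploited issues
    score *= 1.0 if f.reachable else 0.3   # discount unreachable code paths
    return round(score, 2)

findings = [
    Finding("CVE-A", cvss=9.8, reachable=False, exploited=False),
    Finding("CVE-B", cvss=7.5, reachable=True, exploited=True),
    Finding("CVE-C", cvss=5.0, reachable=True, exploited=False),
]
ranked = sorted(findings, key=priority, reverse=True)
order = [f.id for f in ranked]
```

Note how the "critical" CVE-A drops below a medium-severity but reachable, actively exploited finding; that reordering is the productivity gain the paragraph describes.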
Video Generation Goes Mainstream
Over the past year, video generative models (text-to-video, image-to-video, video-to-video) became publicly available for the first time.
Additionally, as large models become faster to run and we develop more structured ways of controlling them, we’ll start to see more kinds of novel interfaces and products emerge around them that go beyond the standard prompt-to-X or chat assistant paradigms.
Many thanks to the following for their contribution to this article
Lior Levi, CEO and co-founder at Cycode
Varun Badhwar, CEO and co-founder at Endor Labs
Liat Hayun, CEO and co-founder at Eureka Security (acquired by Tenable)
Chen Burshan, CEO of Skyhawk Security
Dave Gerry, CEO at Bugcrowd
John Bennett, CEO at Dashlane
Patrick H., CEO at SlashNext
Anastasis Germanidis, CTO and co-founder at Runway
I will add a bit more color around the above:
Paul Young, CPA, CGA
Paul Young is a former IBM Customer Success Manager who has deployed over 300 data and AI solutions across geographies and industries over the past 8 years. Paul is an expert when it comes to the data journey as part of driving better business outcomes.