October 04, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
M-ishing (mobile-targeted phishing) was highlighted as the top security challenge plaguing the mobile space, in both the public sector (10%) and the private sector. More importantly, 76% of phishing sites now use HTTPS, giving users a false sense of security. “Phishing using HTTPS is not completely new,” said Krishna Vishnubhotla, vice president for product strategy at Zimperium. “Last year’s report revealed that, between 2021 and 2022, the percentage of phishing sites targeting mobile devices increased from 75% to 80%. Some of them were already using HTTPS, but the focus was converting campaigns to target mobile.” “This year, we are seeing a meteoric rise in this tactic for mobile devices, which is a sign of maturing tactics on mobile, and it makes sense. The mobile form factor is conducive to deceiving the user because we rarely see the URL in the browser or the quick redirects. Moreover, we are conditioned to believe a link is secure if it has a padlock icon next to the URL in our browsers. Especially on mobile, users should look beyond the lock icon and carefully verify the website’s domain name before entering any sensitive information,” Vishnubhotla said.
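To make Vishnubhotla's advice concrete, here is a minimal Python sketch of why the padlock alone proves nothing: a page can be served over valid HTTPS while the registrable domain belongs to the attacker. The lookalike URL below is hypothetical, invented purely for illustration.

```python
from urllib.parse import urlsplit

# Hypothetical lookalike URL: served over HTTPS (padlock shows), but the
# domain that actually owns the page is "evil.example", not "paypal.com".
url = "https://paypal.com.login.evil.example/verify"

host = urlsplit(url).hostname  # "paypal.com.login.evil.example"

# Naive check of the last two labels; real code should resolve the
# registrable domain against the Public Suffix List (e.g., via tldextract).
registrable = ".".join(host.split(".")[-2:])
print(registrable)                   # "evil.example" -- the page's real owner
print(registrable == "paypal.com")   # False: HTTPS and the lock icon say
                                     # nothing about who you're talking to
```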
OpenAI’s latest model, GPT-4o, is designed to identify and stop these growing threats. It is an “autoregressive omni model, which accepts as input any combination of text, audio, image and video,” as described in its system card published on Aug. 8. OpenAI writes, “We only allow the model to use certain pre-selected voices and use an output classifier to detect if the model deviates from that.” Identifying potential deepfake multimodal content is one of the benefits of the design decisions that together define GPT-4o. Also noteworthy is the amount of red teaming done on the model, which is among the most extensive of any recent-generation AI model release industry-wide. All models need to train on and learn from attack data continuously to keep their edge, especially to keep pace with attackers’ deepfake tradecraft, which is becoming indistinguishable from legitimate content. ... GANs most often consist of two neural networks: a generator that produces synthetic data (images, video or audio) and a discriminator that evaluates its realism. The generator’s goal is to improve the content’s quality until it deceives the discriminator. This adversarial technique creates deepfakes nearly indistinguishable from real content.
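To make the generator/discriminator relationship concrete, here is a minimal, illustrative PyTorch sketch of one GAN training step. The layer sizes, noise dimension, and data shapes are placeholder assumptions, not details from the article or from any deepfake system.

```python
import torch
import torch.nn as nn

# Generator: maps random noise to a synthetic sample (here, a flat 28x28 "image").
G = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels = torch.ones(n, 1)
    fake_labels = torch.zeros(n, 1)

    # 1) Train the discriminator to separate real from generated samples.
    fake_batch = G(torch.randn(n, 64)).detach()  # detach: don't update G here
    d_loss = loss(D(real_batch), real_labels) + loss(D(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator: it is rewarded when
    #    D labels its output "real", which is exactly the adversarial game.
    g_loss = loss(D(G(torch.randn(n, 64))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage: train_step(torch.randn(32, 784))  # stand-in for a batch of real images
```

As the generator improves, the discriminator's job gets harder, which is why mature GAN outputs become nearly indistinguishable from real content.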
Known unknowns can be used to describe the second stage. The label fits because we’re looking at things we know we don’t know, and we’re trying to see how well we can develop our understanding of those unknowns; if these were unknown unknowns, we wouldn’t even know where to start. If the first stage is where most of your observability tooling lies, then this is the era of service-level objectives (SLOs); this is also the stage where observability starts being phrased in a “yes, and” manner. ... Having developed the ability to ask questions about what happened in a system in the past, you’re probably now primarily concerned with statistical questions and with developing more comprehensive correlations. ... Additionally, one of the most interesting developments here is when your incident reports change: they stop being concerned with what happened and start being concerned with how unusual or surprising it was. You’re seeing this stage of the observability journey firsthand if you’ve ever read a retrospective that said something like, “We were surprised by the behavior, so we dug in. Even though our alerts were telling us that this other thing was the problem, we investigated the surprising thing first.”
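Since this is the stage where SLOs dominate, a small sketch of the error-budget arithmetic behind them may help; the target, window, and failure counts below are made-up illustrative numbers.

```python
# Minimal SLO error-budget math (illustrative numbers, not from the article).
slo_target = 0.999           # 99.9% of requests should succeed over the window
window_requests = 2_000_000  # total requests observed in a 30-day window
failed_requests = 1_400      # observed failures in that window

error_budget = (1 - slo_target) * window_requests  # failures we can afford: 2,000
budget_consumed = failed_requests / error_budget   # 0.70 -> 70% of budget burned

print(f"Error budget: {error_budget:.0f} failures")
print(f"Budget consumed: {budget_consumed:.0%}")
```

Framing reliability this way is what enables the "yes, and" posture: the question shifts from "did something break?" to "how much of our budget did this surprise cost us?"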
At one point or another, all of us are probably guilty of posing a question without offering a solution. Often we may feel that others are more qualified to address an issue than we are, and that as long as we bring the matter to someone’s attention, that’s as far as we need to go. While this is well and good – and certainly not every scenario can be dealt with single-handedly – it can be good practice to brainstorm ideas for the problems you identify. It’s important to loop people in and utilise the expertise of others, but you should also have confidence in your ability to tackle an issue. Identifying the problem is half the battle, so why not keep going and see what you come up with? ... Some are born with confidence to spare and some are not; luckily, it is a skill that can be learned over time. Working on improving your confidence level, being more vocal and presenting yourself as an expert in your field are crucial to improving your ability to show initiative, as they make you far more likely to take the reins and lead the way. Taking the initiative, or going out on a limb, can in many scenarios be nerve-wracking, and you may doubt that you are the best person for the job.
RPA is often touted as a mechanism to bolster ROI or reduce costs, but it can also be used to improve customer experience. For example, enterprises such as airlines employ thousands of customer service agents, yet customers still wait in queues to have their calls fielded. A chatbot could help alleviate some of that wait. ... COOs were some of the earliest adopters of RPA. In many cases, they bought RPA and hit a wall during implementation, prompting them to ask for IT’s help (and forgiveness). Now citizen developers without technical expertise are using cloud software to implement RPA in their business units, and often the CIO has to step in and block them. Business leaders must involve IT from the outset to ensure they get the resources they require. ... Many implementations fail because design and change are poorly managed, says Sanjay Srivastava, chief digital officer of Genpact. In the rush to get something deployed, some companies overlook the communication exchanges between the various bots, which can break a business process. “Before you implement, you must think about the operating model design,” Srivastava says. “You need to map out how you expect the various bots to work together.”
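One hedged sketch of what "mapping out how the bots work together" might look like in practice: represent the handoffs explicitly and validate them before deployment. The bot names and handoffs below are hypothetical, not from Genpact or the article.

```python
# Hypothetical bot-to-bot handoff map for an invoice process. Writing this
# down before implementation is one way to apply Srivastava's advice.
handoffs = {
    "invoice_reader": ["data_validator"],
    "data_validator": ["erp_poster", "exception_router"],
    "exception_router": ["human_review_queue"],  # target bot never defined
    "erp_poster": [],
}

# A handoff to a bot nobody has built is exactly the kind of overlooked
# exchange that silently breaks a business process in production.
broken = [(src, dst)
          for src, targets in handoffs.items()
          for dst in targets
          if dst not in handoffs]
print("Broken handoffs:", broken)  # [('exception_router', 'human_review_queue')]
```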
Threat exposure management is the evolution of traditional vulnerability management, and several trends are making it a priority for modern security teams. The first is an increase in findings that overwhelms resource-constrained teams: as the attack surface expands to cloud and applications, the volume of findings is compounded by more fragmentation. Cloud, on-prem, and AppSec vulnerabilities come from different tools; identity misconfigurations come from others still. This leads to enormous manual work to centralize, deduplicate, and prioritize findings using a common risk methodology. Finally, all of this is happening while attackers are moving faster than ever, with recent reports showing the median time to exploit a vulnerability is less than one day! Threat exposure management is essential because it continuously identifies and prioritizes risks, such as vulnerabilities and misconfigurations, across all assets, using the risk context applicable to your organization. By integrating with existing security tools, TEM offers a comprehensive view of potential threats, empowering teams to take proactive, automated actions to mitigate risks before they can be exploited.
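As a sketch of the centralize/deduplicate/prioritize work that TEM automates, here is a minimal Python example. The tool names, field layout, severity scores, and criticality weights are all assumptions chosen for illustration, not part of any TEM product.

```python
# Findings as they might arrive from several scanners (illustrative data).
findings = [
    {"tool": "cloud_scanner", "asset": "web-01", "issue": "CVE-2024-1234", "severity": 9.8},
    {"tool": "vm_scanner",    "asset": "web-01", "issue": "CVE-2024-1234", "severity": 9.8},
    {"tool": "appsec",        "asset": "api",    "issue": "SQLi in /login", "severity": 8.6},
    {"tool": "iam_audit",     "asset": "role-x", "issue": "wildcard admin policy", "severity": 7.0},
]

# Business-context weight per asset: the "risk context applicable to your
# organization" (values are made up for the example).
asset_criticality = {"web-01": 1.0, "api": 1.5, "role-x": 1.2}

# Deduplicate on (asset, issue) so the same flaw reported by two tools
# counts once, then rank by severity weighted by asset criticality.
unique = {(f["asset"], f["issue"]): f for f in findings}
ranked = sorted(unique.values(),
                key=lambda f: f["severity"] * asset_criticality[f["asset"]],
                reverse=True)
for f in ranked:
    risk = f["severity"] * asset_criticality[f["asset"]]
    print(f'{f["asset"]:8} {f["issue"]:25} risk={risk:.1f}')
```

Even this toy version shows why a common risk methodology matters: the duplicate CVE collapses to one row, and the SQLi on the higher-criticality API asset outranks a nominally higher CVSS score elsewhere.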