November 21, 2023
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Observing end-users and recognizing different stakeholder needs is a learning process. Data scientists may feel the urge to dive right into problem-solving and prototyping but design thinking principles require a problem-definition stage before jumping into any hands-on work. “Design thinking was created to better solutions that address human needs in balance with business opportunities and technological capabilities,” says Matthew Holloway, global head of design at SnapLogic. To develop “better solutions,” data science teams must collaborate with stakeholders to define a vision statement outlining their objectives, review the questions they want analytics tools to answer, and capture how to make answers actionable. Defining and documenting this vision up front is a way to share workflow observations with stakeholders and capture quantifiable goals, which supports closed-loop learning. Equally important is to agree on priorities, especially when stakeholder groups may have common objectives but seek to optimize department-specific business workflows.
Conventionally, audit judgements rely solely on evidence sourced from structured datasets in an organization’s financial records. But technological advances in data storage, processing power and analytics tools have made it easier to obtain unstructured data to support audit evidence. Big data can be used for prediction, applying complex analytics to glean audit evidence from datasets spanning organizations, industries, nature, internet clickstreams, social media, market research and numerous other sources. ... An innovative system will not only apply artificial-intelligence-embedded natural language processing (NLP) to streamline unstructured data but also integrate it with optical character recognition (OCR). These and other cutting-edge technologies will help convert both structured and unstructured data into meaningful insights that drive the audit. Big data thus makes it easier to eliminate human error, flag risks in time and spot fraudulent transactions, modernizing audit operations and improving the efficiency and accuracy of the financial reporting process.
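To make the OCR-plus-NLP idea concrete, here is a minimal sketch, assuming Python with pytesseract (backed by a local Tesseract install) and spaCy’s en_core_web_sm model; the file name and risk keywords are illustrative assumptions, not features of any specific audit platform.

```python
# Minimal sketch of an OCR-to-NLP audit-evidence pipeline.
# Assumes pytesseract (with a local Tesseract install) and spaCy's
# en_core_web_sm model; the risk terms below are hypothetical.
from PIL import Image
import pytesseract
import spacy

nlp = spacy.load("en_core_web_sm")
RISK_TERMS = {"write-off", "related party", "round-tripping"}  # hypothetical

def extract_evidence(scan_path: str) -> dict:
    """OCR a scanned document, then pull entities and risk flags."""
    text = pytesseract.image_to_string(Image.open(scan_path))
    doc = nlp(text)
    return {
        "amounts": [e.text for e in doc.ents if e.label_ == "MONEY"],
        "parties": [e.text for e in doc.ents if e.label_ == "ORG"],
        "risk_flags": sorted(t for t in RISK_TERMS if t in text.lower()),
    }

print(extract_evidence("invoice_0042.png"))  # illustrative file name
```

In practice such a pipeline would feed the extracted amounts, parties and flags into downstream analytics rather than printing them, but the structure is the same: OCR turns scans into text, NLP turns text into structured evidence.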
A key feature supported by the latest technology is passive or ambient IoT, which aims to connect sensors and devices to cellular networks without a dedicated power source and could dramatically increase the number of cellular IoT devices. This capability is increasingly appealing to several enterprise verticals. NB-IoT and LTE-M are backed by major mobile operators, offering standardised connectivity with global reach. Yet Juniper warned that a key technical challenge for operators is their inefficiency in detecting low-power devices roaming on their networks, meaning they lose potential revenue from these undetected devices. Because of their low data usage and intermittent connectivity, these devices require constant network monitoring to maximise roaming revenue. ... “Operators must fully leverage the insights gained from AI-based detection tools to introduce premium billing of roaming connections to further maximise roaming revenue,” said research author Alex Webb. “This must be done by implementing roaming agreements that price roaming connectivity on network resources used and time connected to the network.”
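As a rough illustration of the pricing model Webb describes, the sketch below charges a roaming session on both data volume and time connected; the rates, currency and session record are hypothetical assumptions, not any operator’s actual tariff.

```python
# Minimal sketch of usage-plus-duration roaming billing for low-power
# IoT devices. Rates and the session record are illustrative only.
from dataclasses import dataclass

@dataclass
class RoamingSession:
    device_id: str
    bytes_used: int         # network resources consumed
    seconds_connected: int  # time attached to the visited network

RATE_PER_MB = 0.05       # hypothetical premium data rate, USD
RATE_PER_MINUTE = 0.002  # hypothetical connection-time rate, USD

def roaming_charge(s: RoamingSession) -> float:
    """Price a session on data volume plus time connected."""
    data_fee = (s.bytes_used / 1_000_000) * RATE_PER_MB
    time_fee = (s.seconds_connected / 60) * RATE_PER_MINUTE
    return round(data_fee + time_fee, 4)

session = RoamingSession("nbiot-0042", bytes_used=12_000, seconds_connected=900)
print(roaming_charge(session))  # 0.0306
```

Note how the time component dominates for a typical low-data device: this is exactly why operators who cannot detect intermittently connected roamers leave revenue uncollected.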
The biggest challenge in evaluating cyber risk is that we always underestimate it. The impact is almost always worse than what was estimated. A lot of us are professional risk mitigators and managers, and we still get it wrong. Going back to the MGM Resorts cyber attack, I refuse to believe that MGM expected its ransomware breach to cost US$1 billion between lost revenues, lost valuation and loss of confidence from both the market and customers. That, to me, is the biggest issue. There is a huge gap there. Even though there are a lot of numbers surrounding the cost of a data breach, they all still significantly underestimate it. So that, to me, is the biggest area. ... We are spending a lot of time talking about the tools these actors use, whether it is artificial intelligence (AI), ransomware, hacking, national security threats and so on. To make an impact against this threat we must focus on resilience: what you can tolerate, then what you can withstand and under what conditions you can withstand it.
Microsoft acquires what's left of OpenAI and kicks OpenAI's current board of directors to the curb. Much of OpenAI's current technology runs on Azure already, so this might make a lot of sense from an infrastructure point of view. It also makes a lot of sense from a leadership point of view, given that Microsoft now has OpenAI's spiritual and, possibly soon, technical leadership. Plus, if OpenAI employees were already planning to defect, it makes a lot of sense for Microsoft to simply fold OpenAI into the company's gigantic portfolio. I think this may be the only practical way forward for OpenAI to survive. If OpenAI were to lose the bulk of its innovation team, it would be a shell operating on existing technology in a market that's running at warp speed. Competitors would rapidly outpace it. But if it were brought into Microsoft, it could keep moving at pace, under the guidance of leadership it is already comfortable with, and continue executing on plans it already has.
Botnets are typically more prevalent in cybercrime than in APT activity, yet Kaspersky expects APT actors to start using them more. The first reason is to confuse defenders: attacks leveraging botnets might “obscure the targeted nature of the attack behind seemingly widespread assaults,” according to the researchers. In that case, defenders might find it harder to attribute the attack to a specific threat actor and might believe they face a generic, widespread attack. The second reason is to mask the attackers’ infrastructure: a botnet can act both as a network of proxies and as a layer of intermediate command-and-control servers. ... The global increase in the use of chatbots and generative AI tools has benefited many sectors over the last year. Cybercriminals and APT threat actors have also started using generative AI in their activities, including large language models explicitly designed for malicious purposes. These generative AI tools lack the ethical constraints and content restrictions inherent in legitimate AI implementations.
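To see the attribution problem from the defender’s side, here is a minimal sketch, with a hypothetical log format and threshold, of how traffic relayed through many compromised hosts can make a single targeted campaign look like widespread commodity noise.

```python
# Minimal sketch: grouping connection logs by target shows how a
# botnet-proxied campaign presents as many unrelated sources.
# The event records and the fan-in threshold are illustrative only.
from collections import defaultdict

events = [  # (source_ip, target_host) pairs from hypothetical logs
    ("203.0.113.5", "crm.example.com"),
    ("198.51.100.7", "crm.example.com"),
    ("192.0.2.9", "crm.example.com"),
    ("203.0.113.5", "mail.example.com"),
]

sources_per_target: dict[str, set[str]] = defaultdict(set)
for src, target in events:
    sources_per_target[target].add(src)

for target, srcs in sources_per_target.items():
    # Many unrelated-looking sources hitting one target may be one
    # campaign relayed through a botnet, not a generic widespread attack.
    label = "possible proxied campaign" if len(srcs) >= 3 else "low fan-in"
    print(f"{target}: {len(srcs)} distinct sources -> {label}")
```

The point is not the threshold itself but the inversion it highlights: per-source indicators stop identifying the actor once every bot supplies a fresh source, so defenders must pivot to per-target patterns and behavioural signals.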