February 16, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
With cybercriminals largely sticking to the same tactics, it is critical that security starts with the developer. "You can buy tools to prevent and detect vulnerabilities, but the first thing you need to do is help developers ensure they're building secure applications," Hanley said in a video interview with ZDNET. Major software tools, including those that power video-conferencing calls and autonomous cars, are built on libraries made available on GitHub; if the accounts of the people maintaining those libraries are not properly secured, malicious hackers can take the accounts over and compromise a library. The damage can be wide-reaching and lead to another third-party breach on the scale of SolarWinds or Log4j, he noted. Hanley joined GitHub in 2021, taking on the newly created role of CSO as news of the colossal SolarWinds attack spread. "We still tell people to turn on 2FA...getting the basics is a priority," he said. He pointed to GitHub's efforts to mandate the use of 2FA for all users, a process that has been underway for the past year and a half and is due to be completed early this year.
“An ERP solution like ours is massive,” he says, highlighting that this can make it difficult to keep track of everything you are, and are not, using. For instance, he says, if you’re getting charged $20,000 for electricity, you might want to check your meter and verify that your usage and bill align. “If your electricity meter is locked away and you just get a piece of paper at the end of the month telling you everything’s fine and you owe $20,000, you’re probably going to ask some questions,” he says. Tomago was told everything was secure and running as it should, but it had no way to verify that what it was being told was accurate. “We essentially had a swarm of big black boxes,” he says. “We put dollars in and got services out, but couldn’t say to the board, with confidence, that we were really in control of things like compliance, security, and due diligence.” Then in 2020, Tomago moved its ERP system back on-prem, a decision that’s paying dividends. “We now know what our position is from a cyber perspective because we know exactly what our growth rates are, and we know that our systems are up-to-date, and what our cost is because it’s the same every month,” he says.
Threat actors linked to Iran and North Korea also used GPT-4, OpenAI said. Nation-state hackers primarily used the chatbot to query open-source information, such as satellite communication protocols, to translate content into victims' local languages, to find coding errors, and to run basic coding tasks. "The identified OpenAI accounts associated with these actors were terminated," OpenAI said. It conducted the operation in collaboration with Microsoft. "Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI," the Redmond, Washington-based technology giant said. Microsoft's relationship with OpenAI is under scrutiny by multiple national antitrust authorities. A British government study published earlier this month concluded that large language models may boost the capabilities of novice hackers but so far are of little use to advanced threat actors. China-affiliated Charcoal Typhoon used ChatGPT to research companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.
Recognizing disruption requires an open mind. In many instances, people can't believe or see that something is disruptive at first; they think the idea is foolish or won't work. Disruption is usually caused by something new, something that hasn't existed before. Airbnb is a great example here as well. Its founders are said to have gone to every venture capitalist in Silicon Valley and were famously laughed out of meetings. People couldn't see what they saw; it hadn't been invented yet. Even the most seasoned business leaders can misread disruption or fail to recognize it. Disruption doesn't always mean extinction. History has proven this for countless companies, processes, products, services, and ideas. Organizations can collapse after big changes when they do not, or cannot, adapt, but something new or different tends to fill the gap. It's often better, and the cycle continues. I have been on both sides of disruption at my company, BriteCo. We are one of the jewelry industry's disruptors: we were the first to move jewelry consumers to 100% paperless processes with technology and the internet. We also give our customers different ways to buy our coverage, unique to BriteCo, versus an outdated analog process at the retail point of sale.
Lee Mallon, the chief technology officer at AI vendor Humanity.run, sees an LLM cybersecurity threat that goes well beyond quickly producing false documents. He worries that thieves could use LLMs to create deep back stories for their frauds in case someone at a bank or government agency reviews social media posts and websites to see whether a person truly exists. “Could social media platforms be getting seeded right now with AI-generated life histories and images, laying the groundwork for elaborate KYC frauds years down the line? A fraudster could feasibly build a ‘credible’ online history, complete with realistic photos and life events, to bypass traditional KYC checks. The data, though artificially generated, would seem perfectly plausible to anyone conducting a cursory social media background check,” Mallon says. “This isn’t a scheme that requires a quick payoff. By slowly drip-feeding artificial data onto social media platforms over a period of years, a fraudster could create a persona that withstands even the most thorough scrutiny. By the time they decide to use this fabricated identity for financial gains, tracking the origins of the fraud becomes an immensely complex task.”
A new category called "AI Risk Decisioning" is poised to transform the landscape of fraud detection. It leverages the strengths of generative AI, combining them with traditional machine learning techniques to create a robust foundation for safeguarding online transactions. ... The first pillar involves creating a comprehensive knowledge fabric that serves as the foundation for the entire platform. This fabric integrates various internal data sources unique to the company, such as transaction records and real-time customer profiles. ... The third pillar of the AI Risk Decisioning approach focuses on automatic recommendations, offering powerful capabilities for real-time and effective risk management. It can automatically monitor transactions and identify trends or anomalies, suggest relevant features for risk models, conduct scenario analyses independently, and recommend the next best action to optimize performance. ... The fourth pillar of the AI Risk Decisioning approach emphasizes human-understandable reasoning. This pillar aims to make every decision, recommendation, or insight provided by the AI system easily understandable to human users.
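The pillars above are described at a high level, so here is a minimal Python sketch of how they could fit together in practice. All names (CustomerProfile, Decision, score_transaction) and the simple z-score rule are assumptions made for illustration, not the actual platform's implementation; a production system would combine trained risk models with generative AI for the recommendation and reasoning layers.

```python
# Hypothetical sketch only: the class and function names below are invented
# for illustration and are not part of any vendor's AI Risk Decisioning API.
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class CustomerProfile:
    """Pillar 1: internal data (here, recent spend history) joined into one 'fabric'."""
    customer_id: str
    recent_amounts: list[float]


@dataclass
class Decision:
    """Pillars 3 and 4: a recommended next action plus a human-readable reason."""
    action: str
    reason: str


def score_transaction(profile: CustomerProfile, amount: float) -> Decision:
    """Stand-in for the ML risk model: flag amounts far outside the customer's norm."""
    mu = mean(profile.recent_amounts)
    sigma = stdev(profile.recent_amounts)
    z = (amount - mu) / sigma if sigma else 0.0
    if z > 3:  # assumed threshold; a real system would use a trained model
        return Decision(
            action="hold_for_review",
            reason=(f"Amount {amount:.2f} is {z:.1f} standard deviations above "
                    f"this customer's average spend of {mu:.2f}."),
        )
    return Decision(action="approve",
                    reason="Amount is within the customer's normal spending range.")


if __name__ == "__main__":
    profile = CustomerProfile("c-123", [42.0, 55.5, 38.0, 61.0, 47.5])
    print(score_transaction(profile, 49.0))    # routine amount -> approve
    print(score_transaction(profile, 900.0))   # out-of-pattern -> hold_for_review
```

In this sketch a routine purchase is approved while an out-of-pattern amount is held with a plain-language explanation attached, which mirrors the human-understandable reasoning pillar described above.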