December 01, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Migrating to a microservice architecture is known to cause complex interactions between services, circular calls, and data integrity issues; in truth, it is almost impossible to get rid of the monolith completely. Let's discuss why some of these issues occur after migrating to a microservices architecture. ... When moving to a microservices architecture, each client needs to be updated to work with the new service APIs. However, because clients are so tightly coupled to the monolith's business logic, the migration requires refactoring their logic as well. Untangling these dependencies without breaking existing functionality takes time. Client updates are often delayed because of this complexity, leaving some clients still using the monolith database after migration. To avoid this, engineers may create new data models in a new service while keeping existing models in the monolith. When the models are deeply linked, data and functions end up split between services, causing multiple inter-service calls and data integrity issues. ... Data migration is one of the most complex and risky elements of moving to microservices. It is essential to accurately and completely transfer all relevant data to the new microservices.
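The data-split problem above can be sketched in a few lines. This is a minimal, hypothetical illustration (the service names, stores, and schema are invented, not from the article): once the Order model moves to a new microservice while Customer data stays in the monolith, a single read fans out into multiple inter-service calls, and no one transaction spans both stores.

```python
# Hypothetical example: two data stores standing in for the monolith DB and a
# newly extracted order microservice. In production these would be network calls.
monolith_customers = {1: {"name": "Acme Corp", "tier": "gold"}}   # stays in the monolith
order_service_db = {42: {"customer_id": 1, "total": 250.0}}       # migrated microservice

def get_order_summary(order_id):
    """What used to be one local JOIN is now two calls to two systems."""
    order = order_service_db[order_id]                    # call 1: order microservice
    customer = monolith_customers[order["customer_id"]]   # call 2: monolith API
    # If the customer is deleted or updated between the two calls, this join
    # can return stale or inconsistent data -- the integrity issue described above.
    return {"order": order_id, "customer": customer["name"], "total": order["total"]}
```

The latency and failure modes of the extra hop, plus the lack of a shared transaction, are exactly why deeply linked models are risky to split mid-migration.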
Researchers found that both prefix caching and semantic caching, which are used by many major LLM providers, can leak information about what users type without them meaning to. Attackers can potentially reconstruct private user queries with alarming accuracy by measuring the response time. The lead researcher said, “Our work shows the security holes that come with improving performance. This shows how important it is to put privacy and security first along with improving LLM inference.” “We propose a novel timing-based side-channel attack to execute input theft in LLM inference. The cache-based attack faces the challenge of constructing candidate inputs in a large search space to hit and steal cached user queries. To address these challenges, we propose two primary components.” “The input constructor uses machine learning and LLM-based methods to learn how words are related to each other, and it also has optimized search mechanisms for generalized input construction.” ... The research team emphasizes the need for LLM service providers and developers to reassess their caching strategies. They suggest implementing robust privacy-preserving techniques to mitigate the risks associated with timing-based side-channel attacks.
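The core mechanism of such a timing side channel can be illustrated with a toy simulation. This is a sketch only, with assumed latencies and a simulated server rather than any real LLM API: a prefix cache answers previously seen prefixes faster, and that latency gap is the signal an attacker measures when testing candidate inputs.

```python
# Illustrative simulation, not a real attack tool. Latency values are assumed.
CACHE_HIT_MS, CACHE_MISS_MS = 20, 180

# Pretend another user's query prefix is sitting in the shared prefix cache.
cached_prefixes = {"my bank account number is"}

def serve(prompt):
    """Simulated server: return response latency in ms; cached prefixes are fast."""
    hit = any(prompt.startswith(p) for p in cached_prefixes)
    return CACHE_HIT_MS if hit else CACHE_MISS_MS

def probe(candidate, threshold_ms=100):
    """Attacker-side check: below-threshold latency suggests the candidate
    matches a query another user recently submitted."""
    return serve(candidate) < threshold_ms

# Testing candidate phrases reveals which one hits the victim's cached entry.
candidates = ["my bank account number is", "what is the weather today"]
leaked = [c for c in candidates if probe(c)]
```

The hard part in practice, as the researchers note, is generating good candidates in a huge search space; the timing measurement itself is this simple, which is why mitigations such as per-tenant cache isolation or padding response times matter.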
As cybercriminal groups grow, specialization is a necessity. In fact, as cybercriminal gangs grow, their business structures increasingly resemble a corporation, with full-time staff, software development groups, and finance teams. By creating more structure around roles, cybercriminals can boost economies of scale and increase profits. ... some groups required specialization in roles based on geographical need — one of the earliest forms of contract work for cybercriminals is for those who can physically move cash, a way to break the paper trail. "Of course, there's recruitment for roles across the entire attack life cycle," Maor says. "When you're talking about financial fraud, mule recruitment ... has always been a key part of the business, and of course, development of the software, of malware, and end of services." Cybercriminals' concerns over software security boil down to self-preservation. In the first half of 2024, law enforcement agencies in the US, Australia, and the UK — among other nations — arrested prominent members of several groups, including the ALPHV/BlackCat ransomware group and seized control of BreachForums. The FBI was able to offer a decryption tool for victims of the BlackCat group — another reason why ransomware groups want to shore up their security.
Hybrid isn’t just about cutting costs — it boosts speed, security, and performance. Agile applications run faster in the cloud, where teams can quickly spin up, test, and launch without the limits of on-prem systems. This agility becomes especially valuable when delivering software quickly to meet market demands without compromising the core stability of the entire system. Security and compliance are also critical drivers of hybrid adoption. Regulatory mandates often require data to remain on-premises to ensure compliance with local data residency laws. Hybrid infrastructure allows companies to move customer-facing applications to the cloud while keeping sensitive data on-prem. This separation of data from the front-end layers has become common in sectors like finance and government, where compliance demands and data security are non-negotiable. I have been speaking regularly to the CTOs of two very large banks in the US. They currently manage 15-20% of their workloads in the cloud and estimate the most they will ever have in the cloud would be 40-50%. They tell me the rest will stay on-prem — always — so they will always need to manage a hybrid environment.
Growing dependence on the cloud expands the attack surface: the set of potential entry points, including network devices, applications, and services, that attackers can exploit to infiltrate the cloud and access systems and sensitive data. ... Cloud services rely on APIs for seamless integration with third-party applications and services. As the number of APIs grows, so does the attack surface. Hackers can target insecure or poorly designed APIs that lack encryption or robust authentication mechanisms to access data resources, leading to data leaks and account takeover. ... A device or application not approved or supported by the IT team is called shadow IT. Because many of these devices and apps do not undergo the same security controls as corporate ones, they are more vulnerable to hacking, putting the data stored within them at risk of manipulation. ... Unaddressed security gaps and errors threaten cloud assets and data. Attackers can exploit misconfigurations and vulnerabilities in cloud-hosted services, resulting in data breaches and other cyberattacks.
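The "robust authentication" the passage calls for can be sketched at its simplest. This is a hedged illustration (the token value and header layout are invented, not tied to any specific cloud provider's API): reject any request that does not carry a valid bearer token, comparing tokens in constant time so the check itself does not leak information.

```python
# Minimal sketch of an API authentication gate. Real deployments would use
# issued, rotated credentials (e.g., OAuth tokens) rather than a fixed string.
import hmac

EXPECTED_TOKEN = "s3cr3t-api-token"  # hypothetical per-client credential

def authorize(headers):
    """Return True only for requests carrying the expected bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    supplied = auth[len("Bearer "):]
    # compare_digest avoids timing differences between near-miss and far-miss tokens.
    return hmac.compare_digest(supplied, EXPECTED_TOKEN)
```

An API that enforces a check like this at every endpoint removes the "no authentication" entry point; pairing it with TLS addresses the missing-encryption half of the same risk.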
The key word here is “structured” (its synonyms include organized, precise, and efficient). When “structured” precedes the word “cabling,” it immediately points to a standardized way to design and install a cabling system that is compliant with international standards, whilst providing a flexible and future-ready approach capable of supporting multiple generations of AI hardware. Typically, an AI data center’s structured cabling will connect pieces of IT hardware together using high-performance, ultra-low-loss optical fiber and Cat6A copper. ... What do we know about AI? Network speeds are constantly changing, and it feels like it’s happening on a daily basis. 400G and 800G are a reality today, with 1.6T coming soon. Just a few years ago, who would have believed that was possible? Structured cabling offers the scalability and flexibility needed to accommodate these speed changes and the future growth of AI networks. ... Data centers are the “factory floor” of AI operations, and as AI continues to impact all areas of our lives, it will become increasingly integrated into emerging technologies like 5G, IoT, and edge computing. This trend will only further emphasize the need for robust and scalable high-speed cabling systems.