February 06, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
When security solutions are crafted with privacy as a central consideration, organisations can deploy robust security measures while safeguarding the personal data of their customers and employees. A comprehensive cost-benefit analysis reveals significant advantages in adopting a privacy-first approach to security. For instance, proactively blocking malware before it infiltrates an organisation’s systems can avert a potential data breach. With the average cost of a data breach reaching US$4.45 million in 2023, coupled with the consequential impact on brand reputation and the legal ramifications, preventing even a single breach becomes paramount for any company. Hence, the importance of industry-leading security measures is indisputable. Any reputable security company should provide solutions that limit its access to sensitive data and ensure the protection of the personal data entrusted to its care. ... A privacy-first security program assesses the risks associated with both implementing and not implementing security measures. If the advantages of deploying a security solution, such as email scanning, outweigh the drawbacks – which is highly probable – the organisation should proceed with the careful implementation of this capability.
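The cost-benefit reasoning above can be made concrete with simple expected-loss arithmetic. Below is a minimal sketch in Python; apart from the US$4.45 million average breach cost cited above, every figure (breach probabilities, solution cost) is a hypothetical placeholder:

```python
# Illustrative expected-loss comparison for deploying a security control.
# Only the US$4.45M average breach cost comes from the article; the
# probabilities and solution cost are hypothetical assumptions.

AVG_BREACH_COST = 4.45e6      # average cost of a data breach in 2023 (USD)
P_BREACH_WITHOUT = 0.20       # assumed annual breach probability without the control
P_BREACH_WITH = 0.05          # assumed annual breach probability with the control
SOLUTION_COST = 250_000       # hypothetical annual cost of the security solution

expected_loss_without = P_BREACH_WITHOUT * AVG_BREACH_COST
expected_loss_with = P_BREACH_WITH * AVG_BREACH_COST + SOLUTION_COST

print(f"Expected annual loss without control: ${expected_loss_without:,.0f}")
print(f"Expected annual loss with control:    ${expected_loss_with:,.0f}")
print(f"Net benefit of deploying:             ${expected_loss_without - expected_loss_with:,.0f}")
```

On these assumed numbers the control pays for itself several times over, which is the shape of the argument the excerpt makes.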
Far memory is a memory tier between DRAM and Flash, with a lower cost per GB than DRAM and higher performance than Flash. It works by disaggregating memory, allowing a node or machine to access the memory of a remote node or machine via Compute Express Link (CXL). Memory is the most contested and least elastic resource in a data center. Currently, servers can only use local memory, which may be scarce on the local system yet abundant on other, underutilized servers. With far memory, a local machine can use a remote machine’s memory. By introducing far memory into the memory hierarchy and moving less frequently accessed data to it, the system can perform efficiently with less local DRAM, reducing the total cost of ownership. Far memory uses a remote machine’s memory as a swap device, either by using idle machines or by building memory appliances whose sole purpose is to provide a pool of memory shared by many servers. This approach optimizes memory usage and reduces over-provisioning. However, far memory also has its own challenges. Swapping memory pages out to remote machines enlarges the failure domain of each machine, which can lead to a catastrophic failure of the entire cluster.
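To make the tiering mechanism concrete, here is a minimal toy sketch of the core policy: hot pages stay in local DRAM, and the coldest pages are demoted to a far-memory pool and promoted back on access. The capacity, data structures and eviction rule are all illustrative, not taken from any real kernel or CXL implementation:

```python
# Toy two-tier memory model: hot pages in local DRAM, cold pages demoted
# to a far-memory pool (e.g. CXL-attached remote memory). All sizes and
# policies are illustrative placeholders.

DRAM_CAPACITY = 4  # pages of local DRAM (tiny, for illustration)

class TieredMemory:
    def __init__(self):
        self.dram = {}       # page_id -> data (hot tier)
        self.far = {}        # page_id -> data (far tier)
        self.last_used = {}  # page_id -> logical time of last access
        self.clock = 0

    def access(self, page_id, data=None):
        self.clock += 1
        if page_id in self.far:          # remote "page fault": promote to DRAM
            self.dram[page_id] = self.far.pop(page_id)
        elif page_id not in self.dram:   # first touch: allocate in DRAM
            self.dram[page_id] = data
        self.last_used[page_id] = self.clock
        self._demote_cold_pages()
        return self.dram[page_id]

    def _demote_cold_pages(self):
        # When DRAM is over capacity, push the least recently used pages
        # out to the far tier instead of discarding them.
        while len(self.dram) > DRAM_CAPACITY:
            coldest = min(self.dram, key=lambda p: self.last_used[p])
            self.far[coldest] = self.dram.pop(coldest)

mem = TieredMemory()
for i in range(8):
    mem.access(i, data=f"page-{i}")
print("DRAM:", sorted(mem.dram), "| Far:", sorted(mem.far))
```

This LRU-style demotion is what lets a system run with less local DRAM: cold data sits in the cheaper remote tier and only costs a promotion latency when touched again.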
There are four key considerations for integrating security architecture effectively in an Agile environment:
- Cross-Functional Collaboration: Security experts must actively engage with developers, testers, and product owners. Collaborating with these experts helps create a shared understanding of security requirements and facilitates quick resolution of security-related issues. Embedding security professionals within Agile teams can enhance real-time collaboration and ensure consistent security controls.
- Security Training and Awareness: Given the rapid pace of an Agile sprint, all team members should be equipped with the knowledge to write secure code. ...
- Foster a Security Culture: Foster a culture where security is seen as everyone's responsibility, not just the security team's. Adapt the organizational mindset to value security equally with other business objectives. ...
- Security Champions within Agile Teams: Identify and nurture 'Security Champions' within each Agile team. These individuals, with a keen interest in security, act as a bridge between the security team and their respective Agile teams. They help promote security best practices, ensuring security is not overlooked amidst other technical considerations.
Artificial intelligence (AI) tools are so easy to leverage that they can be used by anyone within your organization without technical support. This means you need to keep a careful eye not just on the authorized applications you leverage, but also on what AI tools your colleagues could be using without authorization. In leveraging AI tools to generate content for your organization, your employees could unwittingly input private data into the public instance of ChatGPT. Not only does this share that data with ChatGPT's vendor, OpenAI, but it can also train ChatGPT on that content, meaning the AI tool could potentially output that information to another user outside your organization. Alternatively, overuse of generative AI tools without proper supervision could lead to factual or textual errors being published to your customers. Gen AI tools need careful supervision to ensure they don't "hallucinate" or produce mistakes, as they are unable to self-edit. It's equally important to be able to report back to regulators on what AI is being used across your company, so they can see you're compliant. This will likely become a regulatory requirement in the near future.
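One concrete control for the data-leakage risk described above is to screen prompts for sensitive patterns before they ever reach a public AI service. The sketch below uses a few hypothetical regexes for illustration; a production DLP gateway would rely on vetted detectors and far more categories:

```python
import re

# Hypothetical detectors for common sensitive data. Illustrative only;
# real DLP policies use vetted, much more robust pattern libraries.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card number":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api key":       re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list:
    """Return the sensitive-data categories detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarise this: contact jane.doe@example.com, card 4111 1111 1111 1111"
findings = screen_prompt(prompt)
if findings:
    print("Blocked before submission:", ", ".join(findings))
else:
    print("Prompt cleared for submission")
```

A gate like this sits in front of the AI tool's API call, so the unauthorized-data path is blocked regardless of which employee or team is using the tool.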
The first option is to set up your own secondary DR data center in a different location from your primary site. Many large enterprises go this route; they build out DR infrastructure that mirrors what they have in production so that, at least in theory, it can take over instantly. The appeal here lies in control. Since you own and operate the hardware, you dictate compatibility, capacity, security controls and every other aspect. You’re not relying on any third party. The downside, of course, lies in cost. All of that redundant infrastructure sitting idle doesn’t come cheap. ... The second approach is to engage an external DR service provider to furnish and manage a recovery site on your behalf. Companies like SunGard built their business around this model. The appeal lies in offloading responsibility. Rather than build out your own infrastructure, you essentially reserve DR data center capacity with the provider. ... The third option for housing your DR infrastructure is leveraging the public cloud. Market leaders like AWS and Azure offer seemingly limitless capacity that can scale to meet even huge demands when disaster strikes.
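The cost trade-off between the three options can be roughed out numerically. The sketch below compares them using entirely hypothetical annual figures; the point is the structure of the comparison (idle owned capacity versus pay-mostly-on-invocation cloud), not the numbers:

```python
# Hypothetical annual-cost comparison of the three DR hosting options.
# Every figure is an illustrative placeholder, not vendor pricing.

OWNED_SITE_ANNUAL = 1_200_000          # facilities, mirrored hardware, staff
PROVIDER_RESERVATION_ANNUAL = 600_000  # reserved capacity at a DR provider
CLOUD_STANDBY_ANNUAL = 120_000         # replication plus a minimal standby footprint
CLOUD_ACTIVE_PER_DAY = 15_000          # full-scale cloud spend while failed over
ASSUMED_FAILOVER_DAYS = 5              # assumed days per year running in DR

cloud_annual = CLOUD_STANDBY_ANNUAL + CLOUD_ACTIVE_PER_DAY * ASSUMED_FAILOVER_DAYS

for name, cost in [
    ("Owned secondary site", OWNED_SITE_ANNUAL),
    ("External DR provider", PROVIDER_RESERVATION_ANNUAL),
    ("Public cloud (standby + usage)", cloud_annual),
]:
    print(f"{name:32s} ${cost:>12,.0f} per year")
```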
Simply speaking, if existing network controls are now being moved to the cloud, the scope of technical controls does not drastically differ from legacy approaches. The technology, however, has massively evolved towards platform-centric controls, and for good reason. Isolated controls cause complexity, and if you are moving your perimeter to a hyperscaler, both your users and their devices will no longer be managed by the corporate on-prem security controls either. A good CASB to broker between user and data is key, as is identity and access management. What is new are workload protection requirements à la CSAP technology. In addition to the increasing sophistication and number of security threats and successful breaches, most enterprises further increase risk through “rogue IT” teams leveraging cloud environments without the awareness or oversight of security teams. Cloud deployments are typically rolled out faster and with less planning and oversight than data center or on-site deployments. Cloud security tools should be an extension of your other premises-based tools, for ease of management, consistency of policy enforcement and cost savings from avoiding duplicate purchase commitments, training, and certification.
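The “rogue IT” exposure mentioned above is typically surfaced by mining egress or proxy logs for unsanctioned cloud services, which is essentially a CASB's discovery function. A minimal sketch, assuming a simple "<user> <destination_domain>" log format and a hypothetical sanctioned-service list:

```python
from collections import Counter

# Hypothetical allow-list of sanctioned cloud services.
SANCTIONED = {"office365.com", "salesforce.com", "aws.amazon.com"}

# Assumed proxy-log format: one "<user> <destination_domain>" pair per line.
proxy_log = """\
alice office365.com
bob dropbox.com
carol salesforce.com
bob dropbox.com
dave random-saas.io
"""

unsanctioned = Counter()
for line in proxy_log.splitlines():
    user, domain = line.split()
    if domain not in SANCTIONED:
        unsanctioned[domain] += 1

for domain, hits in unsanctioned.most_common():
    print(f"Unsanctioned cloud service: {domain} ({hits} requests)")
```

Feeding findings like these back into the same policy engine as the premises-based tools is exactly the consistency-of-enforcement argument the excerpt closes on.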