February 21, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Kubernetes configurations are primarily defined in YAML files, a human-readable data serialization format. However, YAML's simplicity is deceptive, as small errors can lead to significant security vulnerabilities. One common mistake is improper indentation or formatting, which can cause the configuration to be applied incorrectly or not at all. ... The ransomware attack on the Toronto Public Library revealed the critical importance of network microsegmentation in Kubernetes environments. By limiting network access to necessary resources only, microsegmentation is pivotal in preventing the spread of attacks and safeguarding sensitive data. ... eBPF is the basis for creating a universal “security blanket” across Kubernetes clusters, and is applicable on premises, in the public cloud and at the edge. Its integration at the kernel level allows for immediate detection of monitoring gaps and seamless application of security measures to new and changing clusters. eBPF can automatically apply predefined security policies and monitoring protocols to any new cluster within the environment.
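As a minimal sketch of the indentation point above (a hypothetical Pod manifest, assuming PyYAML is available), the two snippets below differ only in how far securityContext is indented, yet the container-level hardening quietly disappears in the second one:

```python
# Sketch: how a small YAML indentation slip changes what a Kubernetes
# manifest actually says. Requires PyYAML (pip install pyyaml).
import yaml

correct = """
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      allowPrivilegeEscalation: false
"""

# securityContext is out-dented two spaces, so it parses as a field of the
# pod spec rather than of the container -- the container-level restriction
# is silently gone, and the YAML still loads without any error.
mis_indented = """
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
  securityContext:
    allowPrivilegeEscalation: false
"""

good = yaml.safe_load(correct)
bad = yaml.safe_load(mis_indented)

print(good["spec"]["containers"][0].get("securityContext"))
# -> {'allowPrivilegeEscalation': False}
print(bad["spec"]["containers"][0].get("securityContext"))
# -> None: the setting drifted up to the pod-level securityContext,
#    which does not accept this container-only hardening field
```

Strict schema validation at admission time may catch the stray field, but a linter or policy check in CI is the more dependable guard against this class of mistake.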
The best strategy, says Sam Lucero, chief quantum analyst at Omdia, would be to combine multiple approaches to get the error rates down even further. ... The bigger question is which type of qubit is going to become the standard – if any. “Different types of qubits might be better for different types of computations,” he says. This is where early testing can come in. High-performance computing centers can already buy quantum computers, and anyone with a cloud account can access one online. Using quantum computers via a cloud connection is much cheaper and quicker. Plus, it gives enterprises more flexibility, says Lucero. “You can sign on and say, ‘I want to use IonQ’s trapped ions. And, for my next project, I want to use Rigetti, and for this other project, I want to use another computer.’” But stand-alone quantum computers aren’t necessarily the best path forward for the long term, he adds. “If you’ve got a high-performance computing capability, it will have GPUs for one type of computing, quantum processing units for another type of computing, CPUs for another type of computing – and it’s going to be transparent to the end user,” he says. “The system will automatically parcel it out to the appropriate type of processor.”
One of the biggest debates is how much security hybridization offers. Much depends on the details, and algorithm designers can take any number of approaches with different benefits. There are several models for hybridization, and not all the details have been finalized. Encrypting the data first with one algorithm and then with a second combines the strength of both, essentially putting a digital safe inside a digital safe: any attacker would need to break both algorithms. However, the combinations don’t always deliver in the same way. For example, hash functions are designed to make it hard to find collisions, that is, two different inputs x_1 and x_2 that produce the same output, h(x_1) = h(x_2). If the output of the first hash function is fed into a second, different hash function (say g(h(x))), it may not get any harder to find a collision, at least if the weakness lies in the first function. If two inputs to the first hash function produce the same output, that same output is fed into the second hash function, generating a collision for the hybrid construction: g(h(x_1)) = g(h(x_2)) whenever h(x_1) = h(x_2). Digital signatures are also combined differently than encryption. One of the simplest approaches is to calculate multiple signatures independently of each other.
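To make the collision argument concrete, here is a toy sketch (a deliberately weak, made-up inner hash; SHA-256 standing in for the strong outer hash g) showing that the composed construction g(h(x)) inherits any collision in h:

```python
# Toy illustration: a collision in the inner hash h survives composition
# with a strong outer hash g, i.e. g(h(x1)) == g(h(x2)) whenever h(x1) == h(x2).
import hashlib

def weak_hash(data: bytes) -> bytes:
    """Deliberately weak inner hash: only the first byte of SHA-256, so
    collisions can be found by trivial brute force."""
    return hashlib.sha256(data).digest()[:1]

def strong_hash(data: bytes) -> bytes:
    """Stand-in for a strong outer hash g."""
    return hashlib.sha256(data).digest()

def hybrid(data: bytes) -> bytes:
    """The composed construction g(h(x))."""
    return strong_hash(weak_hash(data))

# Brute-force a collision in the weak inner hash.
x1 = b"message-0"
x2 = None
i = 1
while x2 is None:
    candidate = f"message-{i}".encode()
    if weak_hash(candidate) == weak_hash(x1):
        x2 = candidate
    i += 1

print("inner collision:  ", weak_hash(x1) == weak_hash(x2))    # True
print("hybrid collision: ", hybrid(x1) == hybrid(x2))          # True -- carried through g
print("outer hash alone: ", strong_hash(x1) == strong_hash(x2))  # False
```

The outer hash on its own still distinguishes the two inputs, but the composition cannot, because it only ever sees the already-colliding inner digests.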
The MSSPs have a significant opportunity for growth, with an increasing number of partners showing interest in this domain. What’s notable is that our focus isn’t solely on partners delivering network security solutions but also extends to other offerings. For instance, our SIEM solutions now feature a consumption-based model, attracting more partners to explore the realm of MSSP partnerships. This trend has already gained momentum over the past year, indicating a promising trajectory for the future. As the market continues to expand, catering to a diverse range of customers across various sizes and sectors, the demand for managed security services will only intensify. Here, our integrator partners play a crucial role, positioned to capitalise on the growing requirements of clients. Moreover, selected MSSP partners have the opportunity to develop specialised services around Fortinet solutions, leveraging programs like FortiDirect, FortiEDR, FortiWeb, and FortiMail. Our offerings, such as the MSSP Monitor program and Flex VM program, provide flexible consumption models tailored to the evolving needs of MSP partners.
One in four organizations say gen AI is critically important to gaining increased productivity and efficiency. Thirty percent say improving customer experience and personalization is their highest priority, and 26% say it’s the technology’s potential to improve decision-making that matters most. ... “The generative AI phenomenon has captured the attention of the market—and the world—with both positive and negative connotations,” said Howard Dresner, founder and chief research officer at Dresner Advisory. “While generative AI adoption remains nascent in the near term, a strong majority of respondents indicate intentions to adopt it early or in the future.” ... Nearly half of organizations consider data privacy to be a critical concern in their decision to adopt gen AI. Legal and regulatory compliance, the potential for unintended consequences, and ethics and bias concerns are also significant. Less than half of respondents—46% and 43%, respectively—consider costs and organizational policy important to generative AI adoption. Weaponized LLMs and attacks on chatbots fuel fears over data privacy. More organizations are fighting back and using gen AI to protect against chatbot leaks.
Is it the data set, i.e. the volume of data? The number of parameters used? The transformer model? The encoding, decoding, and fine-tuning? The processing time? The answer is, of course, a combination of all of the above. It is often said that GenAI Large Language Models (LLMs) and Natural Language Processing (NLP) require large amounts of training data. However, measured in terms of traditional data storage, this is not actually the case. ... It is thought that GPT-3 was trained on 45 terabytes of Common Crawl plaintext, filtered down to 570GB of text data. The Common Crawl corpus is hosted on AWS for free as a contribution to open-source AI data. But storage volumes, the billions of web pages or data tokens scraped from the web, Wikipedia, and elsewhere and then encoded, decoded, and fine-tuned to train ChatGPT and other models, should have no major impact on a data center. Similarly, the terabytes or petabytes of data needed to train a text-to-speech, text-to-image, or text-to-video model should put no extraordinary strain on the power and cooling systems of a data center built for hosting IT equipment that stores and processes hundreds or thousands of petabytes of data.
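A back-of-the-envelope sketch of the storage argument (the 45TB and 570GB figures come from the text; the 100PB facility size is an assumed round number standing in for "hundreds of petabytes"):

```python
# Rough scale comparison of LLM training text versus data-center storage.
TB = 10**12
GB = 10**9
PB = 10**15

raw_corpus = 45 * TB       # Common Crawl plaintext reportedly used for GPT-3
filtered_text = 570 * GB   # after filtering down to training text
data_center = 100 * PB     # hypothetical facility storing hundreds of petabytes

print(f"filtered corpus as share of the raw crawl: {filtered_text / raw_corpus:.2%}")
# -> about 1.27%
print(f"filtered corpus as share of a 100 PB facility: {filtered_text / data_center:.4%}")
# -> roughly 0.0006% of the facility's storage
```

On these numbers, the training text itself is a rounding error in a large facility's storage budget; the real load comes from the compute used to train and serve the models, not from holding the corpus.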