March 12, 2022
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Even though ITIL has been around for many years and is considered the de facto best-practice framework for IT service management (ITSM), VeriSM emerged in 2018 to find its place in the market, and it did so before the launch of ITIL 4 from AXELOS in February 2019. VeriSM’s publication introduced modern approaches to service management such as Agile and shift-left, among others; ITIL 4, once released, also incorporated these concepts, which have conquered the IT world over the last few years. VeriSM claims not to be a body of service management best practice but an approach whose key facet (it is not a process flow, nor a set of procedures) is the Management Mesh, in which all the popular management practices (ITIL, COBIT, ISO/IEC 20000, CMMI-SVC, DevOps, Agile, Lean, SIAM, etc.) and emerging technologies and trends (artificial intelligence (AI), containerization, the Internet of Things (IoT), big data, cloud, shift-left, continuous delivery, CX/UX, etc.) are included. Maybe there’s some truth in that statement.
Introduced last year, Gloo Mesh Enterprise is an Istio-based, Kubernetes-native solution for multicluster and multimesh service mesh management. New features in 2.0, such as multitenant workspaces, let users set fine-grained access control and editing permissions based on roles for shared infrastructure, enabling teams to collaborate in large environments. Users can manage traffic, establish workspace dependencies, define cluster namespaces, and control destinations directly in the UI, and policies can be reused and adapted using labels. Gloo Mesh Enterprise 2.0 also features a new Gloo Mesh API for Istio management that enables developers to configure rules and policies for both north-south and east-west traffic from a single, unified API. The new API also simplifies the process of expanding from a single cluster to dozens or hundreds of clusters. And the new Gloo Mesh UI for observability provides service topology graphs that highlight network traffic, latency, and speeds while automatically saving the new state when you move clusters or nodes.
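As a rough illustration of the multitenant workspace idea, the sketch below creates a Workspace-style custom resource with the official Kubernetes Python client. The API group, version, plural, and spec fields are assumptions made for illustration, not the verified Gloo Mesh 2.0 schema; check the product documentation for the actual resource definition.

```python
# Rough sketch: creating a Gloo Mesh "Workspace"-style custom resource for a
# team using the Kubernetes Python client. The group/version, plural, and
# spec fields below are illustrative assumptions, not the verified schema.
from kubernetes import client, config

config.load_kube_config()              # use the local kubeconfig
api = client.CustomObjectsApi()

workspace = {
    "apiVersion": "admin.gloo.solo.io/v2",   # assumed API group/version
    "kind": "Workspace",
    "metadata": {"name": "team-a", "namespace": "gloo-mesh"},
    "spec": {
        # Which clusters and namespaces this team's workspace spans.
        "workloadClusters": [
            {"name": "cluster-1", "namespaces": [{"name": "team-a"}]},
            {"name": "cluster-2", "namespaces": [{"name": "team-a"}]},
        ],
    },
}

api.create_namespaced_custom_object(
    group="admin.gloo.solo.io",              # assumed group
    version="v2",
    namespace="gloo-mesh",
    plural="workspaces",
    body=workspace,
)
```

The point of the sketch is simply that a workspace is declared once per team and then scoped to specific clusters and namespaces, which is what makes role-based collaboration on shared infrastructure manageable.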
You can use CSA to further investigate high-fidelity security findings from Security Command Center (SCC) and correlate them with logs for decision-making. For example, you may use a CSA query to get the list of admin activities performed by a newly created service account key flagged by Security Command Center, in order to validate any malicious activity. It’s important to note that the detection queries provided by CSA are self-managed, and you may need to tune them to minimize alert noise. If you’re looking for managed and advanced detections, take a look at SCC Premium’s growing threat detection suite, which provides a list of regularly updated managed detectors designed to identify threats within your systems in near real time. CSA is not meant to be a comprehensive, managed set of threat detections, but a collection of community-contributed sample analytics that give examples of essential detective controls based on cloud techniques. Use CSA in conjunction with both our threat detection and response capabilities and our threat prevention capabilities.
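To make the investigation workflow concrete, here is a minimal sketch of the kind of follow-up query described above, run against Admin Activity audit logs exported to BigQuery and filtered to the service account flagged by an SCC finding. The project, dataset, and table names are placeholders, and the query is illustrative rather than an actual CSA-published detection.

```python
# Minimal sketch: list recent Admin Activity log entries for a service account
# flagged by an SCC finding. Project/dataset/table names are placeholders.
from google.cloud import bigquery

FLAGGED_PRINCIPAL = "suspect-sa@my-project.iam.gserviceaccount.com"  # from the SCC finding

QUERY = f"""
SELECT
  timestamp,
  protopayload_auditlog.methodName  AS method,
  protopayload_auditlog.serviceName AS service,
  resource.type                     AS resource_type
FROM `my-project.my_log_dataset.cloudaudit_googleapis_com_activity`
WHERE protopayload_auditlog.authenticationInfo.principalEmail = '{FLAGGED_PRINCIPAL}'
  AND timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
ORDER BY timestamp DESC
"""

client = bigquery.Client()
for row in client.query(QUERY).result():      # run the query and iterate rows
    print(row.timestamp, row.method, row.service, row.resource_type)
```

In practice you would tighten the time window and method filters to the specific finding, which is exactly the kind of tuning the CSA queries expect from their users.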
Our theory of scaling enables a procedure for transferring training hyperparameters across model sizes. If, as discussed above, μP networks of different widths share similar training dynamics, they likely also share similar optimal hyperparameters. Consequently, we can simply apply the optimal hyperparameters of a small model directly to a scaled-up version. We call this practical procedure μTransfer. If our hypothesis is correct, the training loss versus hyperparameter curves for μP models of different widths would share a similar minimum. Conversely, our reasoning suggests that no scaling rule of initialization and learning rate other than μP can achieve the same result. This is supported by the animation below. Here, we vary the parameterization by interpolating the initialization scaling and the learning rate scaling between the PyTorch default and μP. As shown, μP is the only parameterization that preserves the optimal learning rate across width, achieves the best performance for the model with width 2^13 = 8192, and ensures that wider models always do better for a given learning rate (that is, graphically, the curves don’t intersect).
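For readers who want to see what the procedure looks like in code, below is a minimal sketch of μTransfer using Microsoft’s open-source mup package. The toy MLP, widths, and learning rate are illustrative assumptions, not the setup used in the paper.

```python
# Minimal sketch of muTransfer with the open-source `mup` package
# (pip install mup). Model architecture and hyperparameters are illustrative.
import torch.nn as nn
from mup import MuReadout, set_base_shapes, MuAdam

class MLP(nn.Module):
    def __init__(self, width, d_in=784, d_out=10):
        super().__init__()
        self.hidden = nn.Linear(d_in, width)
        # The output layer uses MuReadout so muP's scaling rules apply to it.
        self.readout = MuReadout(width, d_out)

    def forward(self, x):
        return self.readout(self.hidden(x).relu())

# Base and delta models tell mup which dimensions count as "width" to scale.
base_model = MLP(width=64)
delta_model = MLP(width=128)

# Target model: much wider, but trained with hyperparameters tuned on the
# small proxy model, which is the essence of muTransfer.
model = MLP(width=8192)
set_base_shapes(model, base_model, delta=delta_model)

# The learning rate tuned on the small proxy is reused directly at large width.
optimizer = MuAdam(model.parameters(), lr=1e-3)
```

The key design choice is that only the parameterization changes with width; the hyperparameters found on the cheap proxy model are carried over unchanged.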
Transformers quickly became the front-runner for applications like word recognition that focus on analyzing and predicting text. They led to a wave of tools, like OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), which trains on hundreds of billions of words and generates coherent new text to an unsettling degree. The success of transformers prompted the AI crowd to ask what else they could do. The answer is unfolding now, as researchers report that transformers are proving surprisingly versatile. In some vision tasks, like image classification, neural nets that use transformers have become faster and more accurate than those that don’t. Emerging work in other AI areas, like processing multiple kinds of input at once or planning tasks, suggests transformers can handle even more. “Transformers seem to really be quite transformational across many problems in machine learning, including computer vision,” said Vladimir Haltakov, who works on computer vision related to self-driving cars at BMW in Munich. Just 10 years ago, disparate subfields of AI had little to say to each other. But the arrival of transformers suggests the possibility of a convergence.
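To ground the image classification claim, here is a short sketch of running a pretrained Vision Transformer from torchvision on a single image. The image path is a placeholder, and the preprocessing values are the standard ImageNet defaults, assumed for illustration.

```python
# Sketch: classifying one image with a pretrained Vision Transformer (ViT-B/16).
# "example.jpg" is a placeholder; preprocessing uses standard ImageNet stats.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models import vit_b_16

model = vit_b_16(pretrained=True)   # ViT pretrained on ImageNet-1k
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")   # placeholder image
batch = preprocess(img).unsqueeze(0)             # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)                        # (1, 1000) ImageNet classes
print("Predicted class index:", logits.argmax(dim=1).item())
```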
In February 2022, an op-ed titled “Revisiting Bitcoin’s Carbon Footprint” was published in the scientific journal “Joule,” authored by four researchers: Alex de Vries, Ulrich Gallersdörfer, Lena Klaaßen and Christian Stoll. Their written commentary, which admits limitations in their estimates, states that as bitcoin miners migrated from China to Kazakhstan and the United States in 2021, the network’s carbon footprint increased to 0.19% of global emissions. What went unnoticed by the media was that the researchers have professional motives to overstate Bitcoin’s relatively tiny environmental impact. The op-ed’s lead author, Alex de Vries, failed to disclose that he is employed by De Nederlandsche Bank (DNB), the Dutch central bank. Central banks are no fans of open, global payment rails, which bypass monopolistic government settlement layers. De Vries first released his “Bitcoin Energy Consumption Index” in November 2016, which coincides with his first round of employment with DNB, giving the appearance that DNB encouraged his critique of Bitcoin’s energy consumption.