March 06, 2021

From Agile to DevOps to DevSecOps: The Next Evolution

While Agile and DevOps share common goals, they have not always agreed on how to achieve them. DevOps differs from Agile in many respects, but at its best it applies Agile methodologies, along with lean manufacturing principles, to speed up software deployment. One area of particular tension is tooling: DevOps relies heavily on tools, especially for automating testing and deployment. DevOps can overcome the resistance of Agile developers to tool usage by applying Agile principles themselves. In effect, DevOps proponents must convince Agile teams that dogmatic adherence to Agile's underlying principles is itself inconsistent with Agile. Ironically, Agile developers who insist that process is always bad violate the Agile principle of welcoming change by refusing to acknowledge the benefits that automation brings. The challenge is to get Agile development teams to trust the automation efforts of DevOps, while encouraging the DevOps team to keep the business goals of deployment in view rather than pursuing speed of deployment for its own sake.


Geoff Hinton’s Latest Idea Has The Potential To Break New Ground In Computer Vision

According to Dr. Hinton, the obvious way to represent a part-whole hierarchy is by combining dynamically created graphs with neural network learning techniques. But if the whole computer is a neural network, he explained, it is unclear how to represent part-whole hierarchies that differ for every image while keeping the structure of the neural net identical for all images. Capsule networks, introduced by Dr. Hinton a couple of years ago, offer one solution: a group of neurons, called a capsule, is set aside for each possible type of object or part in each region of the image. The problem with capsules, however, is that they use a mixture to model the set of possible parts, which makes it hard for the computer to answer questions like “Is the headlight different from the tyres?” Recent work on neural fields offers a simple way to represent values such as depth or intensity across an image: a neural network takes as input a code vector representing the image along with an image location, and outputs the predicted value at that location.
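As a rough sketch of that neural field idea (not Dr. Hinton's GLOM model itself), the PyTorch example below defines a small MLP that takes an image code vector together with an (x, y) location and predicts the value at that location; the layer sizes and the random code vector are assumptions made purely for illustration.

```python
# Minimal neural-field sketch (illustrative only, not GLOM or any published model):
# a single MLP predicts the value at a queried (x, y) location, conditioned on a
# code vector that summarizes the whole image.
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    def __init__(self, code_dim: int = 64, hidden: int = 128):
        super().__init__()
        # Input = image code vector concatenated with a 2-D location (x, y).
        self.net = nn.Sequential(
            nn.Linear(code_dim + 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted intensity (or depth) at that location
        )

    def forward(self, code: torch.Tensor, xy: torch.Tensor) -> torch.Tensor:
        # code: (batch, code_dim), xy: (batch, 2) with coordinates in [0, 1]
        return self.net(torch.cat([code, xy], dim=-1))

# Query the field at one location for a random image code.
field = NeuralField()
code = torch.randn(1, 64)            # stand-in for a learned image embedding
xy = torch.tensor([[0.25, 0.75]])    # normalized pixel coordinates
print(field(code, xy))               # predicted value at that location
```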


Addressing Security Throughout the Infrastructure DevOps Lifecycle

Keep in mind that developer-first security doesn’t preclude “traditional” cloud security methods, namely monitoring running cloud resources for security and compliance misconfigurations. First, unless you’ve achieved 100% parity between IaC and the cloud (unlikely), runtime scanning is essential for complete coverage. You probably still have teams or parts of your environment, perhaps legacy resources, that are provisioned manually via legacy systems or directly in the console and so need to be continuously monitored. Even if you are mostly covered by IaC, humans make mistakes and SRE emergencies are bound to happen. We recently wrote about the importance of cloud drift detection to catch manual changes that result in unintentional deltas between code configuration and running cloud resources. Insight into those resources in production is essential to identify those potentially risky gaps. Runtime scanning also has advantages of its own. Because it follows the actual state of configurations, it is the only viable way to evaluate configuration changes over time when configuration is managed in multiple ways. Relying solely on build-time findings without attributing them to actual runtime configuration states could result in configuration clashes.
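As a toy illustration of the drift-detection idea mentioned above (not the actual tooling the article refers to), the sketch below diffs a configuration declared in code against the state observed on the running resource and reports any deltas; the resource fields and values are invented for the example.

```python
# Toy drift detector (illustrative only): compare an IaC-declared configuration
# with the configuration observed on the running resource and report deltas.
from typing import Any, Dict

def detect_drift(declared: Dict[str, Any], observed: Dict[str, Any]) -> Dict[str, Any]:
    """Return {field: (declared_value, observed_value)} for every mismatch."""
    drift = {}
    for key in declared.keys() | observed.keys():
        if declared.get(key) != observed.get(key):
            drift[key] = (declared.get(key), observed.get(key))
    return drift

# Hypothetical example: a storage bucket whose public-access setting was
# flipped manually in the console after it was provisioned from code.
declared_state = {"versioning": True, "encryption": "aes256", "public_access": False}
observed_state = {"versioning": True, "encryption": "aes256", "public_access": True}

for field, (want, got) in detect_drift(declared_state, observed_state).items():
    print(f"drift on '{field}': declared={want!r}, observed={got!r}")
```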


Privacy breaches in digital transactions: Examination under Competition Law or Data Protection Law?

As long as search engines’ search data is kept secret, no rival, would-be rival, or new entrant will have access to this critical ‘raw material’ for search innovation. Further, when transactions take place in the digital economy, firms generally collect both personal and non-personal data from users in exchange for the services provided. While it can be argued that personal data is usually collected with the user’s consent, the collection of non-personal data typically happens without the consumer’s consent or knowledge. Data is further compromised when businesses holding large amounts of data merge or amalgamate, and when dominant firms abuse their market position and resort to unethical practices. Traditional Competition Law analysis focuses largely on ‘pricing models’, i.e., the methods business players use to determine the price of their goods or services. User data forms part of the ‘non-pricing model’. With the Competition Act, 2002 undergoing a number of changes owing to technological developments, there is a possibility that non-pricing models will also be brought within the ambit of the Act.


GraphQL: Making Sense of Enterprise Microservices for the UI

GraphQL has become an important tool for enterprises looking for a way to expose services via connected data graphs. These graph-oriented ways of thinking offer new advantages to partners and customers looking to consume data in a standardized way. Beyond the external consumption benefits, using GraphQL at Adobe has given our UI engineering teams a way to grapple with the challenges of an increasingly complicated world of distributed systems. Adobe Experience Platform itself offers dozens of microservices to its customers, and our engineering teams also rely on a fleet of internal microservices for things like secret management, authentication, and authorization. Breaking services into smaller components in a service-oriented architecture brings a lot of benefits to our teams, but there are drawbacks that must be mitigated to realize those benefits: more layers mean more complexity, and more services mean more communication. GraphQL has been a key component for the Adobe Experience Platform user experience engineering team, one that allows us to embrace the advantages of SOA while helping us navigate the complexities of a microservice architecture.
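As a minimal sketch of the aggregation pattern described above (not Adobe's actual schema or services), the Python example below uses the graphene library to expose a single `user` field whose resolver stitches together data that would come from separate profile and authorization microservices; the type, field names, and stubbed responses are all invented for illustration.

```python
# Minimal GraphQL-over-microservices sketch using graphene (illustrative only).
# A single query field hides the fact that the data comes from several services.
import graphene

class User(graphene.ObjectType):
    id = graphene.ID()
    name = graphene.String()
    roles = graphene.List(graphene.String)

class Query(graphene.ObjectType):
    user = graphene.Field(User, id=graphene.ID(required=True))

    def resolve_user(root, info, id):
        # In a real system these would be HTTP/gRPC calls to separate services;
        # here they are stubbed with hard-coded data.
        profile = {"id": id, "name": "Ada"}   # e.g. from a profile microservice
        roles = ["admin", "editor"]           # e.g. from an authorization microservice
        return User(id=profile["id"], name=profile["name"], roles=roles)

schema = graphene.Schema(query=Query)

# The UI asks for exactly the fields it needs in one request.
result = schema.execute('{ user(id: "42") { name roles } }')
print(result.data)
```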


Serverless Functions for Microservices? Probably Yes, but Stay Flexible to Change

When serverless functions are idle they cost nothing (the “pay per use” model). If a serverless function is called by 10 clients at the same time, 10 instances of it are spun up almost immediately (at least in most cases). The entire provisioning of infrastructure, its management, high availability (at least up to a certain level) and scaling (from 0 to the limits defined by the client) are provided out of the box by teams of specialists working behind the scenes. Serverless functions provide elasticity on steroids and allow you to focus on what differentiates your business. ... A “new service” needs to go out to market fast, with the lowest possible upfront investment, and needs to be a “good service” from the start. When we want to launch a new service, a FaaS model is likely the best choice. Serverless functions can be set up quickly and minimise infrastructure work. Their “pay per use” model means no upfront investment. Their scaling capabilities provide consistently good response times under different load conditions. If, after some time, the load becomes more stable and predictable, the story can change, and a more traditional model based on dedicated resources, whether Kubernetes clusters or VMs, can become more convenient than FaaS.
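To make the FaaS model concrete, here is a minimal serverless function sketched in the AWS Lambda Python handler style; the event fields and response shape follow the common API Gateway proxy convention, and the greeting logic is purely illustrative, not taken from the article.

```python
# Minimal serverless function sketch in the AWS Lambda Python handler style.
# The platform provisions, scales (including to zero), and bills per invocation;
# the code only has to handle a single request.
import json

def handler(event, context):
    # 'event' carries the request payload; the field below follows the usual
    # API Gateway proxy convention and is used here only for illustration.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local smoke test (in production the platform invokes handler() for each request).
if __name__ == "__main__":
    print(handler({"queryStringParameters": {"name": "reader"}}, None))
```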

Read more here ...