November 10, 2022
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Prior to the rise of open source CD solutions, companies often relied on point automation using scripts. These could improve efficiency a bit, but when companies moved from the monolithic architecture of a mainframe or on-premises servers to a microservices-based production environment, the scripts could not be easily adapted or scaled to cope with the more complex environment. This led to the development of continuous delivery orchestration solutions that could ensure code updates would flow to their destination in a repeatable, orderly manner. Two highly popular open source CD solutions have emerged: Spinnaker and Argo. Spinnaker was developed by Netflix and extended by Google, Microsoft and Pivotal. It was made available on GitHub in 2015. Spinnaker creates a “paved road” for application delivery, with guardrails to ensure only valid infrastructure and configurations reach production. It facilitates the creation of pipelines that represent a software delivery process. These pipelines can be triggered in a variety of ways: manually, via a cron expression, or at the completion of a Jenkins job or another pipeline, among other methods.
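To make the trigger idea concrete, here is a minimal sketch (not from the article) of how triggers are declared in a Spinnaker pipeline definition, written as a Python dict mirroring the pipeline JSON. The field names follow Spinnaker's trigger format as I understand it; the pipeline, master and job names are hypothetical placeholders.

```python
# Minimal sketch of Spinnaker pipeline triggers, expressed as a Python dict
# mirroring the pipeline JSON. "my-jenkins" and "app-build" are hypothetical.
pipeline = {
    "name": "deploy-to-prod",
    "triggers": [
        {
            # Fire on a schedule via a cron expression.
            "type": "cron",
            "cronExpression": "0 0 10 ? * MON-FRI",
            "enabled": True,
        },
        {
            # Fire when a Jenkins job completes.
            "type": "jenkins",
            "master": "my-jenkins",
            "job": "app-build",
            "enabled": True,
        },
    ],
    "stages": [],  # delivery stages (bake, deploy, verify, ...) would go here
}
```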
There are many things about technical debt that can be quantified. Henney mentioned that we can list and count specific issues in code and, if we take the intentional sense in which technical debt was originally introduced, we can track the decisions we have made whose implementations need to be revisited. If we focus on unintentional debt, we can look at a variety of metrics that tell us about qualities in code. There’s a lot that we can quantify when it comes to technical debt, but the actual associated financial debt is not one of them, as Henney explained: “The idea that we can run a static analysis over the code and come out with a monetary value that is a meaningful translation of technical debt into a financial debt is both a deep misunderstanding of the metaphor – and how metaphors work – and an impossibility.” According to Henney, quantifying how much financial debt is present in the code doesn’t work. At the very least, we would need a meaningful conversion function that takes one kind of concept, e.g., “percentage of duplicate code” or “non-configurable database access”, and translates it to another, e.g., euros and cents.
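To make that missing step concrete, any such translation would have to look something like the purely illustrative stub below. Every coefficient in it is invented, which is exactly Henney's point: the function produces a number, not a meaning.

```python
# Purely illustrative: what a metric-to-money "conversion function" would have
# to look like. Every constant here is invented; no principled basis exists.
EUR_PER_DUPLICATE_LINE = 0.37      # arbitrary
EUR_PER_HARDCODED_ACCESS = 42.00   # arbitrary

def technical_debt_in_euros(duplicate_lines: int, hardcoded_db_accesses: int) -> float:
    """Translate code metrics into euros. The output is a number,
    but with arbitrary coefficients it carries no financial meaning."""
    return (duplicate_lines * EUR_PER_DUPLICATE_LINE
            + hardcoded_db_accesses * EUR_PER_HARDCODED_ACCESS)
```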
IIoT is redefining the types of data that enterprises use and how networks process this data. For example, an IIoT network primarily transmits and processes unstructured data, not fixed-record transactional data. In contrast, the corporate network processes data that is far more predictable, digestible and manageable. The sheer volume of IIoT data and the traffic it generates virtually make it a necessity to dedicate a single, private network to each manufacturing facility for its IIoT. Security is also a concern, because the networks that operate on the edges of the enterprise must often be maintained and administered by non-IT personnel who have no training in IT security practices. It’s not uncommon for someone on a production floor to shout a password to another employee so they can access a network resource — nor is it uncommon for someone on the floor to admit another individual into a network equipment cage that is supposed to be physically secured and accessible by only a few authorized personnel.
As humans, we’re just not that good. While we have experience driving cars and can look out the front window, we don’t have a perfect understanding of current data, past data, and what this data likely means in the operation and driving of the vehicle. Properly configured automation systems do. For the same reasons that we are anxious when our cars drive away without us actively turning the wheel, we are slow to adopt automation for cloud deployments. Those charged with making core decisions about automating security, operations, finops, etc., are actively avoiding automation, largely because they are uncomfortable with critical processes being carried out without humans looking on. I get it. At the end of the day, automation is a leap of faith that the automated systems will perform better than humans. I understand the concern that they won’t work. The adage is true: “To really screw things up requires a computer.” If you make a mistake in setting these systems up, you can indeed do real damage. So, don’t do that. However, as many people also say: “The alternative sucks.” Not using automation means you’re missing out on approaches and mechanisms to run your cloud systems more cheaply and efficiently.
As the scale of software development accelerates, and with ongoing AI developments in programming and engineering, the role requirements of software development also look set to change. "AI/ML are changing the world of programming much like the calculator and the computer changed the world," says Stormy Peters, VP of Communities at GitHub. "These technological advancements are taking care of a lot of the mundane grunt work that developers once had to devote all their time to. Development looks different now." ... As we enter 2023 and software development remains at the heart of business strategies, problem-solving, critical thinking and other human skills will prove integral. "While emerging technologies will increasingly enable them to stay in the flow and solve challenging problems, the technicalities of being able to program, engineer, and develop code through a high-level understanding of AI, DevOps, and programming languages will also stay central in importance to the discipline," she adds.
The best metrics to compare are the ones most applicable to the applications and workloads you will run. If the application is an Oracle database, the performance metric most applicable is 8 KB mixed read/write random IOPS. When the vendor only provides the 4 KB variation, there is a way to roughly estimate the 8 KB results -- simply divide the 4 KB results in half. If the vendor objects, ask for actual 8 KB test results. Use this same simple math for other I/O sizes. Throughput is somewhat more difficult to standardize, especially if vendors don't supply it. You can roughly calculate it by multiplying the sequential read IOPS by the size of the I/O. Latency is the most difficult to standardize, especially when vendors measure it differently. There are many factors that affect application latency, such as storage system load, storage capacity utilization, storage media, storage DRAM caching, storage network congestion, application server load, application server utilization and application server contention. The most important question to ask is how the vendor measured the latency, under what loads and from where.
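As a back-of-the-envelope sketch of the arithmetic described above, the snippet below halves a vendor's 4 KB IOPS figure to approximate the 8 KB result and multiplies sequential read IOPS by I/O size to approximate throughput. All input values are hypothetical, chosen only to illustrate the calculation.

```python
# Rough estimates from vendor-quoted numbers (all values hypothetical).
vendor_4k_iops = 1_000_000   # vendor-quoted 4 KB random read/write IOPS
seq_read_iops = 50_000       # vendor-quoted sequential read IOPS
io_size_bytes = 128 * 1024   # sequential I/O size used in the vendor test

# Rough 8 KB estimate: divide the 4 KB result in half, as suggested above.
est_8k_iops = vendor_4k_iops / 2

# Rough throughput: sequential read IOPS multiplied by the I/O size.
est_throughput_gbps = seq_read_iops * io_size_bytes / 1e9  # GB/s

print(f"Estimated 8 KB IOPS: {est_8k_iops:,.0f}")
print(f"Estimated sequential read throughput: {est_throughput_gbps:.1f} GB/s")
```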