April 16, 2022
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Most teams have insufficient insight into the current environment at each endpoint; therefore, failures take time to investigate, and often unique tweaks and fixes are needed to handle each change in the state of the distributed system. That’s why DevOps engineers are doing so much hand-coding. Engineers are finding they must stop the normal CI/CD flow, investigate what part of an endpoint infrastructure is not running, and then make manual tweaks to the software and deployment code to compensate for the change. Here’s the thing: there will always be changes to the system. Infrastructure environments never stay static, and therefore a lot of “continuous deployment” systems aren’t really continuous at all. Because DevOps engineers don’t always know the state of each endpoint environment in a distributed system, the CI/CD pipeline can’t possibly be adaptive enough. In the end, the process of ensuring continuous deployment in distributed environments can be extremely burdensome and complicated, slowing the pace of business innovation.
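As a rough illustration of what a more adaptive pipeline step could look like, the sketch below (all names, states, and the desired-state schema are hypothetical, not any particular CI/CD product) queries each endpoint's observed state before deploying and routes drifted endpoints to remediation instead of failing the whole flow:

```python
# Hypothetical sketch: gate deployment on observed endpoint state
# instead of assuming every environment still matches the last rollout.
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    observed_state: dict  # e.g. reported by an agent or health probe

# Invented desired state for illustration.
DESIRED_STATE = {"runtime": "python3.10", "service": "running"}

def drift(endpoint: Endpoint) -> dict:
    """Return the settings where the endpoint deviates from the desired state."""
    return {
        key: endpoint.observed_state.get(key)
        for key, want in DESIRED_STATE.items()
        if endpoint.observed_state.get(key) != want
    }

def plan_deploy(endpoints: list[Endpoint]) -> tuple[list[str], dict[str, dict]]:
    """Split the fleet into 'deploy now' and 'remediate first' buckets."""
    ready, blocked = [], {}
    for ep in endpoints:
        delta = drift(ep)
        if delta:
            blocked[ep.name] = delta  # remediate automatically, don't hand-patch
        else:
            ready.append(ep.name)
    return ready, blocked

fleet = [
    Endpoint("edge-1", {"runtime": "python3.10", "service": "running"}),
    Endpoint("edge-2", {"runtime": "python3.9", "service": "stopped"}),
]
ready, blocked = plan_deploy(fleet)
```

The point of the sketch is the split: drifted endpoints become remediation work items with a precise diff, so the pipeline keeps flowing for healthy endpoints instead of stopping for manual investigation.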
To truly ensure the organization’s stability, CTOs need to pay as much attention to the seemingly smaller tasks as they do to the big transformational changes. This starts with a rigorous, diligent process: understanding where the business is today and looking in depth for any weak spots. To do this, CTOs need to look to the specialist solutions provided by the right vendor. Adopting a configuration management tool gives CTOs oversight of the whole IT estate: it can identify and track changes against a defined set of policies and flag any deviations for rectification. Policies derived from Center for Internet Security (CIS) guidelines give CTOs an established standard of security measures to work from, providing the visibility and control to make required changes and pursue a continuous improvement strategy toward best-practice configuration. For critical legacy applications that need to move successfully to a newer operating system version, application compatibility packaging can allow them to be transplanted to an on-prem, hybrid or cloud system without the need for any code modifications.
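The policy-drift check such a tool performs can be sketched minimally as follows; the policy IDs and settings below are invented for illustration and are not actual CIS benchmark rules:

```python
# Hypothetical sketch: evaluate a host configuration against a small
# CIS-style policy baseline and flag deviations for rectification.
# Policy IDs and settings are made up for illustration.
POLICIES = [
    # (policy id, setting key, compliant value)
    ("CIS-5.2.8", "ssh_root_login", "no"),
    ("CIS-1.5.1", "aslr_enabled", True),
    ("CIS-3.4.2", "firewall_default", "deny"),
]

def audit(config: dict) -> list[dict]:
    """Return one finding per policy the configuration violates."""
    findings = []
    for policy_id, key, expected in POLICIES:
        actual = config.get(key)
        if actual != expected:
            findings.append({"policy": policy_id, "setting": key,
                             "expected": expected, "actual": actual})
    return findings

# A host that is compliant except for one setting.
host = {"ssh_root_login": "yes", "aslr_enabled": True, "firewall_default": "deny"}
deviations = audit(host)
```

Each finding carries the expected and actual values, which is what gives the CTO both the visibility (what drifted) and the control (what to set it back to) described above.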
Despite the promise agile methodologies hold for the public sector, certain characteristics can make government entities a difficult fit for the agile model. Government budgets tend to follow longer time horizons (often annual) than agile cadences; internal competition between agencies for a fixed pool of funding can discourage collaboration across government; and because the returns on investments in change are often dispersed within the government and to the public, it can be difficult to motivate employees to work for an upside they cannot necessarily see or experience. The public sector’s hierarchical structure, along with its accompanying culture and ways of working, can also make agile practices such as flat organizations and fast iterations difficult to implement. ... Agile operating models configure teams around facilitating outcomes rather than around function and expertise. This orientation can boost productivity and engagement by limiting handoffs between functional silos and focusing a wider array of skills on a shared objective.
Architecting modern software applications is a fundamentally explorative activity. Teams building today’s applications encounter new challenges every day: unprecedented technical problems, as well as customers who need new ways of solving new and different problems. This continuous exploration means that the architecture can’t be determined up front based on past experience; teams have to find new ways of satisfying quality requirements. ... Some decisions will, inevitably and unavoidably, create technical debt; for example, the decision to meet reliability goals by using a SQL database has side effects on technical debt (see Figure 1). The now long-past “Y2K problem” stemmed from a conscious decision developers made at the time: by not storing century data as part of standard date representations, they reduced data storage, memory use, and processing time needs. The problem was that they did not expect the applications to remain in use for so long, well after those constraints had become irrelevant.
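A tiny sketch of the Y2K trade-off described above: storing only the last two digits of the year saves space but makes the year 2000 compare as "earlier" than 1999, and a common after-the-fact fix was a windowing pivot (the values and pivot here are illustrative):

```python
# The space-saving representation: keep only the last two digits of the year.
y1999, y2000 = 99, 0

# The latent bug: with two-digit years, 2000 sorts before 1999.
naive_order = y2000 < y1999  # True, which is wrong chronologically

# A common remediation ("windowing"): interpret two-digit years relative
# to a pivot. With pivot=70, values 70..99 mean 1970..1999 and 0..69
# mean 2000..2069. The pivot choice is illustrative.
def parse_two_digit_year(yy: int, pivot: int = 70) -> int:
    return 1900 + yy if yy >= pivot else 2000 + yy
```

Windowing itself is more technical debt: it only defers the ambiguity to the next pivot boundary, which is exactly the pattern of a constraint-driven decision outliving its constraints.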
Digital identity enables greater cybersecurity and data ownership. While this use case speaks volumes about how the future of the energy market may take shape, the application of DIDs ultimately enables better cybersecurity for grid operators. For instance, Morris explained that, compared with traditional Web1 or Web2 approaches, most grid operators use a centralized database in which they manually enter information about sensors or hardware located on utilities within their network. Such an approach could allow grid operators to collect user data and even gain control of those sensors. “This level of centralization is a cybersecurity risk, which is why our solution with Stedin also proves to be a cybersecurity application,” Morris remarked. Jongepier added that Stedin was indeed looking to raise the bar on its cybersecurity: “Blockchain is effective for this because it provides the ground rules for utilizing decentralized identifiers for Stedin’s IoT assets, serving as a solution for raising the bar on security.”
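For readers unfamiliar with DIDs, a minimal W3C-style DID document for a grid sensor might look like the sketch below; the method name, identifier, and key material are placeholders for illustration, not Stedin's actual scheme:

```python
# Hypothetical sketch of a W3C DID-Core-style document for a grid sensor,
# resolvable from a ledger rather than a central operator database.
# "did:example", the sensor name, and the key value are placeholders.
import json

sensor_did = "did:example:grid-sensor-0421"

did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": sensor_did,
    "verificationMethod": [{
        "id": f"{sensor_did}#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": sensor_did,        # the device controls its own identifier
        "publicKeyMultibase": "zPLACEHOLDER",  # placeholder key material
    }],
    "authentication": [f"{sensor_did}#key-1"],
}

doc_json = json.dumps(did_document, indent=2)
```

The cybersecurity point is visible in the structure: the sensor authenticates with its own key under its own identifier, so there is no single central registry whose compromise exposes or controls every device.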
The IAM process is a critical base for secure, cost-effective and efficient business operations. The foundation of IAM comprises process first, then people, then technology. Zero trust has gained sizeable traction, but most do not realize that getting that model off the ground depends on the identity process playing its vital role: there is no zero-trust model without a rock-solid identity process. Complex access permissions, loose access-management processes and insider threats are the most common reasons for a breach. A study sponsored by the Identity Defined Security Alliance found that 99% of security and identity professionals believed that identity-related breaches were preventable. And yes, they are preventable. Can you imagine not having a process to revoke the access of a disgruntled, or even merely gullible, employee immediately after their employment is discontinued? The longer it takes to revoke access because there is no set protocol or process, the higher the chance of the organization being exposed.
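What a "set protocol" for revocation might look like can be sketched minimally as follows, with hypothetical users and grants: a single offboarding event removes every entitlement at once and leaves an audit trail, rather than relying on ad hoc cleanup.

```python
# Hypothetical sketch: revoke all of a leaver's access the moment the
# termination event fires. Users and entitlements are invented examples.
from datetime import datetime, timezone

ACCESS_GRANTS = {
    "jdoe": {"vpn", "crm", "payroll-admin"},
    "asmith": {"vpn", "crm"},
}
AUDIT_LOG: list[dict] = []

def offboard(user: str) -> set[str]:
    """Revoke every grant for `user` and record an audit-trail entry."""
    revoked = ACCESS_GRANTS.pop(user, set())
    AUDIT_LOG.append({
        "event": "access_revoked",
        "user": user,
        "grants": sorted(revoked),
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return revoked

# Triggered by the HR termination event, not by a manual ticket queue.
revoked = offboard("jdoe")
```

Because revocation is a single atomic step tied to the HR event, the exposure window the paragraph warns about shrinks from "however long the ticket takes" to effectively zero.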