October 09, 2021
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
While established companies invest in new APIs to support digital transformation projects, early startups build on top of the latest technology stacks. This trend is turning the Internet into a growing fabric of interconnected technologies the likes of which we've never seen. As the number of new technologies peaks, the underlying fabric, otherwise known as the API economy, fuels the market to undergo technology consolidations with a historic-high number of acquisitions. There are two interesting consequences of this trend. The first is that all of this drives the need for better, faster, and easier-to-understand APIs. Many Integration-Platform-as-a-Service (iPaaS) vendors understand this quite well. Established iPaaS solutions, such as those from Microsoft, MuleSoft, and Oracle, are continually improved with new tools, while new entrants, like Zapier and Workato, continue to emerge. All invest in simplifying the integration experience on top of APIs, essentially speeding time-to-integration. Some call these experiences "connectors" while others call them "templates."
Because of their utility, AI tools have been widely and rapidly adopted by all but the most stubborn DevOps teams. Indeed, for teams now running several different clouds (and that's pretty much all teams), AI interfaces have become almost a necessity as they evolve and scale their DevOps programs. The most obvious and tangible outcome of this shift has been in the data and systems that developers spend their time looking at. It used to be that a major part of the operations team's role, for instance, was to build and maintain a dashboard that all staff members could consult, and which contained all of the apposite data on a piece of software. Today, that central task has become largely obsolete. As software has grown more complex, the idea of a single dashboard containing all the relevant information on a particular piece of software has begun to sound absurd. Instead, most DevOps teams now use AI tools that "automatically" monitor the software they are working on and only surface data when it is clear that something has gone wrong.
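To ground that shift, here is a minimal sketch of the alert-only-on-anomaly idea that such tools implement in far more sophisticated ways: watch a metric, compare each new sample against a trailing baseline, and surface it only when it deviates sharply. The rolling z-score approach, the latency numbers, and the threshold below are illustrative assumptions, not any specific vendor's method.

```python
# Minimal sketch: surface a metric only when it looks wrong, instead of
# showing everything on a dashboard. Metric source and threshold are illustrative.
from statistics import mean, stdev

def detect_anomalies(samples, window=30, z_threshold=3.0):
    """Yield (index, value, z_score) for samples that deviate sharply
    from the trailing window's baseline."""
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue
        z = (samples[i] - mu) / sigma
        if abs(z) >= z_threshold:
            yield i, samples[i], z

# Example: steady latency samples in milliseconds; only the spike is reported.
latencies = [120, 118, 125, 122, 119] * 10 + [480]
for idx, value, z in detect_anomalies(latencies):
    print(f"alert: sample {idx} = {value} ms (z = {z:.1f})")
```

The point is the inversion of the dashboard model: nothing is shown unless the data itself indicates a problem.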
Defending against cybersecurity threats is very expensive, said Michael Rogers, operating partner at venture capital firm Team8 and former director of the U.S. National Security Agency. But the costs for attackers are low, he told Data Center Knowledge. "Prioritizing cybersecurity solutions that provide smart, cost-effective ways to reduce, mitigate or even prevent cyberattacks is key," he said. "Inevitably, as we move to an increasingly digital world, these options are game-changers in safeguarding our society and digital future." Some areas where cybersecurity automation is making a particular difference include incident response, data management, attack simulation, API and certificate management, and application security. ... "A lot of machine learning is being thrown at huge data sets," he said. "The analytics are getting better. And what do you do with that analysis? You want to do threat detection and response, you want to bring the environment back to a safer operating state. Now, these new tools are able to do a lot of this automatically."
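As a deliberately simplified illustration of automated detection and response, the sketch below flags source IPs with a burst of failed logins and triggers a blocking action. The event format, the threshold, and the block_ip() placeholder are assumptions made for the example; real platforms combine far richer analytics with orchestrated response playbooks.

```python
# Illustrative sketch of automated detection-and-response: quarantine source
# IPs that generate a burst of failed logins. The event format and block_ip()
# action are assumptions, not a real product's API.
from collections import Counter

FAILED_LOGIN_LIMIT = 5

def block_ip(ip):
    # Placeholder for a real response action (firewall rule, EDR isolation, ...).
    print(f"blocking {ip}")

def respond_to_failed_logins(events):
    failures = Counter(e["src_ip"] for e in events if e["event"] == "login_failed")
    for ip, count in failures.items():
        if count >= FAILED_LOGIN_LIMIT:
            block_ip(ip)

# Example event stream: one noisy attacker, one normal login.
events = (
    [{"event": "login_failed", "src_ip": "203.0.113.7"}] * 6
    + [{"event": "login_ok", "src_ip": "198.51.100.2"}]
)
respond_to_failed_logins(events)
```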
To deliver software rapidly, frequently, and reliably, you need what I call the success triangle. You need a combination of three things: process, organization, and architecture. The process, which is DevOps, embraces concepts like continuous delivery and deployment, and delivers a stream of small changes frequently to production. You must structure your organization as a network of autonomous, empowered, loosely coupled, long-lived product teams. You need an architecture that is loosely coupled and modular. Once again, loose coupling is playing a role. If you have a large team developing a large, complex application, you must typically use microservices. That's because the microservice architecture gives you the testability and deployability that you need in order to do DevOps, and it gives you the loose coupling that enables your teams to be loosely coupled. I've talked a lot about loose coupling, but what is that exactly? Coupling between services is the degree of connectedness between them, and operations that span multiple services are what create that coupling.
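To make loose coupling concrete, here is a small sketch in which one service publishes an event and another reacts to it, rather than the two calling each other directly; neither service needs to know the other exists, so each can be tested and deployed independently. The in-process EventBus, service names, and event names are illustrative assumptions, not something prescribed by the article.

```python
# Sketch of loose coupling between services: the order service publishes an
# event instead of calling the billing service directly, so either side can
# change, be tested, and be deployed on its own.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()

# Billing service: reacts to events; knows nothing about the order service.
def charge_customer(order):
    print(f"billing: charging customer for order {order['id']}")

bus.subscribe("order_placed", charge_customer)

# Order service: publishes an event; knows nothing about who consumes it.
def place_order(order_id):
    bus.publish("order_placed", {"id": order_id})

place_order("A-1001")
```

In a real system the bus would be a message broker rather than an in-process object, but the dependency structure, with services coupled only to event contracts, is the point.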
Although D-Wave was the first company to build a working quantum computer, it has struggled to gain commercial traction. Some researchers, most notably computer scientist Scott Aaronson at the University of Texas at Austin, faulted the company for over-hyping what its machines were capable of. (For a long time, Aaronson cast doubt on whether D-Wave's annealer was harnessing any quantum effects at all in making its calculations, although he later conceded that the company's machine was a quantum device.) In the past few years, the company has also had trouble exciting investors: in March, it secured a $40 million grant from the Canadian government. But that came after The Globe & Mail newspaper reported that a financing round in 2020 had valued the company at just $170 million, less than half of its previous $450 million valuation. The company's decision to add gate-model quantum computers to its lineup may be an acknowledgment that commercial momentum seems to be far greater for those machines than for the annealers that D-Wave has specialized in.
Facebook was clearly prepared to respond to this incident quickly and efficiently. If it wasn't, it would no doubt have taken days to restore service following a failure of this magnitude rather than just hours. Nonetheless, Facebook has reported that troubleshooting and resolving the network connectivity issues between data centers proved challenging for three main reasons. First and most obviously, engineers struggled to connect to data centers remotely without a working network. That's not surprising: as an SRE, you're likely to run into an issue like this sooner or later. Ideally, you'll have some kind of secondary remote-access solution, but that's hard to implement within the context of infrastructure like this. The second challenge is more interesting. Because Facebook's data centers "are designed with high levels of physical and system security in mind," according to the company, it proved especially difficult for engineers to restore networking even after they went on-site at the data centers.