January 05, 2021
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
IoT adds smarts to IT asset monitoring
The market for IoT tools that can monitor IT assets (as well as many other devices) has attracted major technology vendors including Cisco, Dell, HPE, Huawei, IBM, Microsoft, Oracle, SAP, and Schneider Electric, along with IoT specialists including Digi, Gemalto, Jasper, Particle, Pegasystems, Telit, and Verizon. IoT is often deployed in existing physical systems to increase the contextual understanding of the status of those systems, says Ian Hughes, senior analyst covering IoT at research firm 451 Research. "Compute resources tend to already have lots of instrumentation built in that is used to manage them, such as in data centers," he says. Companies can use IoT to provide additional information about the physical infrastructure of a building, such as heating, ventilation, and air conditioning (HVAC) systems, Hughes says. Data centers tend to need building- and environment-related IoT equipment to measure environmental conditions and possible security threats, he says. As with any IoT rollout, preparation is key. "Some approaches yield too much data, or non-useful content," Hughes says. "So understanding the context for measurement is important."
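To make the "context for measurement" point concrete, here is a minimal Python sketch (my own illustration, not from the article) of a data-center environmental reading that carries its context and only forwards values worth acting on. The sensor function, asset names, and threshold are assumed placeholders.

```python
# Illustrative sketch: wrap a raw sensor value with the context that makes
# it useful downstream, and filter out readings nobody needs to see.
import json
import random
import time

def read_temperature_c():
    # Stand-in for a real sensor driver (hypothetical).
    return round(random.uniform(18.0, 32.0), 1)

ALERT_THRESHOLD_C = 27.0  # assumed upper bound for this room

def build_reading():
    temp = read_temperature_c()
    return {
        "asset_id": "rack-07-pdu-2",       # which asset the measurement belongs to
        "site": "dc-east",                  # physical context
        "metric": "temperature_c",
        "value": temp,
        "alert": temp > ALERT_THRESHOLD_C,  # send context, not just raw numbers
        "ts": int(time.time()),
    }

if __name__ == "__main__":
    reading = build_reading()
    # Only forward readings that matter, to avoid the "too much data" problem.
    if reading["alert"]:
        print("publish to monitoring backend:", json.dumps(reading))
    else:
        print("within normal range, kept locally:", reading["value"])
```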
Three ways formal methods can scale for software security
FM is a type of mathematical modelling where the system design and code are the subjects of the model. By applying mathematical reasoning, FM tools can answer security questions with mathematical certainty. For example, FM tools can determine whether a design has lurking security issues before implementation begins; show that an implementation matches the system design; and prove that the implementation is free of introduced defects such as low-level memory errors. That certainty distinguishes FM from other security technologies: unlike testing and fuzzing, which can only trigger a fraction of all system executions, an FM model can examine every possible system behavior. Like machine learning, the roots of formal methods lie in the 1970s, and also like machine learning, recent years have seen rapid adoption of FM technologies. Modern FM tools have been refined by global-scale companies like Microsoft, Facebook, and Amazon. As a result, these tools reflect the engineering practices of these companies: rapid pace of iteration, low cost of entry, and interoperability between many complementary tools.
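As a rough illustration of examining every possible behavior (my own sketch, not one of the FM tools the article refers to), the following uses the Z3 SMT solver's Python bindings (the z3-solver package) to check a small bounds-clamping property for all integer inputs rather than a sampled few; the clamping routine itself is an assumed example.

```python
# Minimal sketch with z3-solver (pip install z3-solver). The index-clamping
# routine is an invented example; the point is that the solver reasons over
# *all* inputs, unlike a finite set of test cases.
from z3 import Int, Solver, If, And, Not, sat

i, length = Int("i"), Int("length")

# Symbolic model of a simple index-clamping routine.
clamped = If(i < 0, 0, If(i >= length, length - 1, i))

s = Solver()
s.add(length > 0)
# Ask the solver for any input where the clamped index escapes [0, length).
s.add(Not(And(clamped >= 0, clamped < length)))

if s.check() == sat:
    print("counterexample:", s.model())
else:
    print("property holds for every possible input")
```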
Why IATA is banking on cloud to help the airline industry weather the Covid-19 crisis
“The Covid-19 crisis is impacting the way we are responding and means we have to adjust our resources to what we can afford at the moment,” he says. “Our team understands that we need to change the way we are working to avoid wasting time and resources.” By his own admission, running an airline in 2020 is a very different business to what it was in 2019, and this, in turn, has created an additional need for new, artificial intelligence-based predictive models that factor in the impact of the pandemic. “Now our airlines are asking us [if we can] use the data for the last month to tell us what will happen in the next three months, and that means we have to build new predictive models,” he says. “We have to use technology like artificial intelligence, we have to use a lot of innovation and we need an environment that will allow us to do that.” It is worth noting that when this body of work began, around 60% of the organisation’s IT footprint was already in the Amazon Web Services (AWS) cloud, but there was definite room for improvement with regard to how that environment was being managed and used, says Buchner. “The way we were using AWS in the past is different from the way that we want to use it today. ...”
Modern Operations Best Practices From Engineering Leaders at New Relic and Tenable
Beyond the technical challenges of creating RCAs, there is a human layer as well. Many organizations use these documents to communicate about incidents to the customers involved. However, this may require adding a layer of obfuscation. Nic shares, "The RCA process is a little bit of a bad word inside of New Relic. We see those letters most often accompanied by ‘Customer X wants an RCA.’ Engineers hate it because they are already embarrassed about the failure and now they need to write about it in a way that can pass Legal review." Dheeraj agrees, and believes that RCAs should have value to the customers reading them. "Today, the industry has become more tolerant to accepting the fact that if you have a vendor, either a SaaS shop or otherwise, it is okay for them to have technical failures. The one caveat is that you are being very transparent to the customer. That means that you are publishing your community pages, and you have enough meat in your status page or updates." If Legal has strict rules about what is publishable, RCAs can still be valuable. "We try to run a meaningful process internally. I use those customer requests as leverage to get engineering teams to really think through what's happened."
What the critics get wrong about serverless costs
There are a few main areas where people misunderstand serverless costs. They often fail to account for the total cost of running services on the web, which includes both the personnel requirements and the direct payments to the cloud provider I just discussed. Other times, they build bad serverless architectures. Serverless, like cloud, is not a panacea. It requires knowledge and experience about what works and what doesn't -- and why. If you use serverless correctly, it shifts significant costs to the cloud provider. They keep your services running, scaling up and down, and recovering from hardware, software and patching failures. Most companies that run mission-critical web applications and/or APIs have operations staff who do exactly this. This is not to say that adopting serverless means putting people out of work. Charity Majors, co-founder and CTO of Honeycomb, wrote a great article on how operations jobs are changing rather than going away. But if you can hand off patching operating system and software vulnerabilities to a cloud provider, then the people on your staff who previously handled those tasks become available for more strategic and differentiated work for your organization. There also seem to be a shocking number of people who try to build something with serverless without fully understanding the technology first.
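For context, this is roughly what that hand-off looks like in code: a minimal sketch assuming an AWS Lambda function behind an API Gateway proxy integration, where the application supplies only the handler and the provider owns the servers, scaling, patching, and failover. The handler contents are illustrative.

```python
# Illustrative AWS Lambda handler (API Gateway proxy integration). Note the
# absence of any server, OS patching, scaling, or restart logic here -- the
# cloud provider is responsible for all of that.
import json

def handler(event, context):
    # 'event' carries the incoming HTTP request; 'context' carries runtime metadata.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```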
Hack your APIs: interview with Corey Ball - API security expert
In Corey’s opinion, because most APIs are primarily used and consumed by developers and machines, they often get overlooked during security assessments. Compounding this problem, many organizations would struggle to actually list all the APIs they have on their systems. Worse still, because APIs are so varied, they’re difficult to scan. Even within a single organization, similar-looking endpoints could have completely different specifications from one another. Corey points out that many vulnerability scanners lack the features to properly test APIs, and are consequently bad at detecting API vulnerabilities. If your API security testing is limited to running one of these scanners, and it comes back with no results, then you run the risk of accepting false negatives. You can see the results of this in the news. The 2018 USPS incident happened because security was simply not taken into consideration during an API’s design. A researcher was able to compromise the USPS application’s security using trivial methods, despite a vulnerability assessment having been carried out a month beforehand. The assessment had failed to spot the glaring issue. ... You can define business logic vulnerabilities as “deliberately designed application functionality that can be used against the application to compromise its security”.
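As a hypothetical illustration of that definition (not the actual USPS code), the sketch below shows a Flask endpoint that "works as designed" yet would let any authenticated caller read other users' records unless an explicit ownership check is in place; the app, data, and current_user_id() helper are invented for the example.

```python
# Hypothetical sketch of a business-logic / broken object level authorization
# flaw. All names and data here are invented placeholders.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Toy data store: account ID -> record, including who owns it.
ACCOUNTS = {
    1: {"owner": 1, "address": "1 Main St"},
    2: {"owner": 2, "address": "2 Elm St"},
}

def current_user_id():
    # Stand-in for real authentication; pretend the caller is user 1.
    return 1

@app.route("/api/accounts/<int:account_id>")
def get_account(account_id):
    account = ACCOUNTS.get(account_id)
    if account is None:
        abort(404)
    # Without this ownership check the endpoint still "works as designed" for
    # its intended user, but any authenticated caller could walk the ID space
    # and read everyone else's records -- the kind of flaw a generic
    # vulnerability scanner rarely flags.
    if account["owner"] != current_user_id():
        abort(403)
    return jsonify(account)
```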
Read more here ...