November 04, 2020

Reworking the Taxonomy for Richer Risk Assessments

In pre-assessment and planning, you need to think about the desired outcome (i.e., identifying the risks to the facility) and determine the actions necessary to mitigate or eliminate those risks and their associated vulnerabilities. The flow chart above gives a detailed view of this phase, which includes collecting and digesting documents, identifying the team members and the skill sets needed, and preparing for travel. Of course, contacting the "customer" and setting up the necessary on-site logistics are also important. ... Don't forget that these threats and vulnerabilities can be cyber or physical. They can also stem from site management and culture. What about training, or the lack thereof? All of these contribute to the risk profile of the facility. The graphic above shows some of the on-site activities: inspections, observations, photographs, and reviews of the site network and architecture. Even a cyber-vulnerability scan may be part of the site assessment. These activities should be part of the site assessment plan. However, don't let the plan put barriers around your site risk reviews. Feel free to follow leads and evidence of problems, since that is why you are on-site rather than doing a remote risk assessment via Zoom.


How blockchain is set to revolutionize the healthcare sector

Despite its potential, data portability across multiple systems and services is a real issue. There is little more valuable and personal to an individual than their medical records, so making that data shareable across services will inevitably raise concerns around the spectre of misuse. Currently, data does not flow seamlessly across technology solutions within healthcare. For example, in the UK your hospital records do not form part of your GP records, yet the advantages for treatment and preventative care would be clear if they did. Unfortunately, a centralised storage and delivery system is unlikely to gain traction until one can ensure appropriate encryption and security; the risks are simply too high. Yet it is an issue that a technology like blockchain can tackle, because the purpose of the chain is to store a series of transactions in a way that cannot be altered. What renders it immutable is the combination of two opposing things: the cryptography and its openness. Each transaction is signed with a private key and then distributed amongst a peer-to-peer set of participants. Without a valid signature, new blocks created by data changes are ignored and not added to the chain.
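As a rough illustration of that signing-and-verification mechanism, here is a minimal TypeScript sketch using Node's built-in crypto module. The Block shape, field names, and payload strings are invented for illustration; this is not any production blockchain format:

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// Hypothetical block shape, for illustration only.
interface Block {
  prevHash: string;   // hash of the previous block, linking the chain
  payload: string;    // e.g. a reference to an encrypted medical record
  signature: Buffer;  // author's signature over prevHash + payload
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function makeBlock(prevHash: string, payload: string): Block {
  // Sign the block contents with the author's private key.
  const signature = sign(null, Buffer.from(prevHash + payload), privateKey);
  return { prevHash, payload, signature };
}

function hashBlock(block: Block): string {
  return createHash("sha256").update(block.prevHash + block.payload).digest("hex");
}

// Peers only accept blocks whose signature verifies; a tampered or
// unsigned block is simply ignored and never joins the chain.
function isValid(block: Block): boolean {
  return verify(
    null,
    Buffer.from(block.prevHash + block.payload),
    publicKey,
    block.signature,
  );
}

const genesis = makeBlock("0".repeat(64), "record-ref-1");
const next = makeBlock(hashBlock(genesis), "record-ref-2");
console.log(isValid(next)); // true; altering any byte makes this false
```

Because each block embeds the hash of its predecessor, changing any historical payload invalidates every later hash, which is what makes the chain tamper-evident.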


UX Patterns: Stale-While-Revalidate

Stale-while-revalidate (SWR) caching strategies provide faster feedback to users of web applications while still allowing eventual consistency. Faster feedback reduces the need to show spinners and may result in a better perceived user experience. ... Developers may also use stale-while-revalidate strategies in single-page applications that make use of dynamic APIs. In such applications, a large part of the application state often comes from remotely stored data (the source of truth). As that remote data may be changed by other actors, fetching it anew on each request guarantees that the freshest available data is always returned. Stale-while-revalidate strategies trade the requirement of always having the latest data for that of having the latest data eventually. The mechanism works in single-page applications much as it does for HTTP requests. The first time the application sends a request to the API server endpoint, it caches and returns the resulting response. The next time the application makes the same request, the cached response is returned immediately, while the request itself proceeds asynchronously in the background. When the response arrives, the cache is updated and the appropriate changes are made to the UI.
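A minimal sketch of that flow in TypeScript follows. The cache shape and the swr, fetcher, and onUpdate names are illustrative assumptions, not any specific library's API:

```typescript
type Fetcher<T> = () => Promise<T>;

// Naive in-memory cache keyed by request identifier (e.g. a URL).
const cache = new Map<string, unknown>();

// Return the cached (stale) value immediately when present, while
// revalidating in the background and notifying the UI of fresh data.
async function swr<T>(
  key: string,
  fetcher: Fetcher<T>,
  onUpdate: (fresh: T) => void,
): Promise<T> {
  const revalidate = fetcher().then((fresh) => {
    cache.set(key, fresh); // refresh the cache for subsequent calls
    onUpdate(fresh);       // e.g. trigger a re-render with the new data
    return fresh;
  });

  if (cache.has(key)) {
    revalidate.catch(() => { /* keep serving the stale value on failure */ });
    return cache.get(key) as T; // stale response, returned at once
  }
  return revalidate; // first request: nothing cached yet, so wait
}

// Usage: the first call awaits the network; later calls resolve
// instantly from cache while the fetch proceeds asynchronously.
// const users = await swr("/api/users",
//   () => fetch("/api/users").then((r) => r.json()), render);
```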


The Inevitable Rise of Intelligence in the Edge Ecosystem

Edge computing is becoming an integral part of the distributed computing model, says Nishith Pathak, global CTO for analytics and emerging technology at DXC Technology. He says there is ample opportunity to employ edge computing across industry verticals that require near-real-time interactions. “Edge computing now mimics the public cloud,” Pathak says, in some ways offering localized versions of cloud capabilities for compute, networking, and storage. Benefits of edge-based computing include avoiding latency issues, he says, and anonymizing data so that only relevant information moves to the cloud. This is possible because “a humungous amount of data” can be processed and analyzed by devices at the edge, Pathak says. These include connected cars, smart cities, drones, wearables, and other internet-of-things applications that consume on-demand compute. The population of devices and the scope of infrastructure that support the edge are expected to grow quickly, says Jeff Loucks, executive director of Deloitte’s center for technology, media and telecommunications. He says 5G implementations have exceeded initial predictions that there would be 100 private 5G network deployments by the end of 2020. “I think that’s going to be closer to 1,000,” he says.


Take a Dip into Windows Containers with OpenShift 4.6

Windows Operating System in a container? Who would have thought?!? If you had asked me that question a few years back, I would have told you with conviction that it would never happen! But if you ask me now, I will answer you with a big, emphatic yes and even show you how to do so! In this article, I will demonstrate how you can run Windows workloads in OpenShift 4.6 by deploying a Windows container on a Windows worker node. In addition, I will highlight some of the issues and challenges that I see from a system administrator's perspective. ... For customers who have heterogeneous environments with a mix of Linux and Windows workloads, the announcement of a supported Windows container feature in OpenShift 4.6 is exciting news. As of this writing, the supported workloads for Windows containers are .NET Core applications, traditional .NET Framework applications, and other Windows applications that run on Windows Server. So when did the work to make Windows containers run on top of OpenShift begin? In 2018, Red Hat and Microsoft announced a joint engineering collaboration with the goal of bringing a supported Windows containers feature to OpenShift.


GPS and water don't mix. So scientists have found a new way to navigate under the sea

Underwater tracking devices already exist, for example trackers fitted to whales, but they typically act as sound emitters. The acoustic signals they produce are intercepted by a receiver, which can in turn work out the origin of the sound. Such devices require batteries to function, which means they need to be replaced regularly, and when it is a migrating whale wearing the tracker, that is no simple task. The underwater backscatter localization (UBL) system developed by MIT's team, by contrast, reflects signals rather than emitting them. The technology builds on so-called piezoelectric materials, which produce a small electrical charge in response to vibrations. The device can use this electrical charge to reflect the vibration back in the direction from which it came. In the researchers' system, therefore, a transmitter sends sound waves through the water towards a piezoelectric sensor. When the acoustic signals hit the device, they cause the material to store an electrical charge, which is then used to reflect a wave back to a receiver. Based on how long it takes for the sound wave to reflect off the sensor and return, the receiver can calculate the distance to the UBL. "In contrast to traditional underwater acoustic communication systems, which require each sensor to generate its own signals, backscatter nodes communicate by simply reflecting acoustic signals in the environment," said the researchers.
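The ranging step itself is simple round-trip timing. A minimal TypeScript sketch, assuming a nominal sound speed of roughly 1,500 m/s in seawater (the constant and function names here are illustrative, not from the MIT system):

```typescript
// Nominal speed of sound in seawater; the true value varies with
// temperature, salinity, and depth.
const SOUND_SPEED_SEAWATER_M_PER_S = 1500;

// The signal travels transmitter -> backscatter node -> receiver, so
// the one-way distance is half the round trip (assuming the
// transmitter and receiver are co-located).
function distanceToNodeMeters(roundTripSeconds: number): number {
  return (SOUND_SPEED_SEAWATER_M_PER_S * roundTripSeconds) / 2;
}

// Example: a reflection arriving 0.2 s after transmission places the
// node about 150 m away.
console.log(distanceToNodeMeters(0.2)); // 150
```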
