November 18, 2022

How connected automation will release the potential of IoT

Connected automation is an industry-first, no-code, highly secure, software-as-a-service layer that IoT devices can easily connect to. It intelligently orchestrates multi-vendor software robots, API mini-robots, AI, and staff, all operating together in real time as an augmented digital workforce. This hyper-productive digital workforce delivers high-speed, data-rich, end-to-end processes that enable IoT devices to instantly intercommunicate and securely work with physical and digital systems of all ages, sizes, and complexities, at scale. So, for the first time, investments in IoT can deliver their true potential, without huge investments in changing existing systems. ... When human judgement is required, handoffs arrive via robot-created, sophisticated, intuitive digital user interfaces, all in real time. Where augmented insights are needed within IoT-initiated processes, AI or other smart tools escalate with predictive analysis and problem-solving capabilities in real time. And once decisions are made, by people or AI, they can be actioned immediately, without major changes to existing systems or processes.


Internet Outages Could Spread as Temperatures Rise. Here's What Big Tech Is Doing

We need data centers to be close to populations, but that means their climatological impact is local, too. "If we don't address climate change, we really will be toast," former Google CEO and chairman Eric Schmidt told CNBC in April. He left the tech giant in 2017 to launch his own philanthropic firm to support research in future-looking fields -- and found climate change harder to ignore. "We really are putting the jeopardy of our grandchildren, great-grandchildren and great-great-grandchildren at risk." Experts say that data centers can be built to be kinder to the climate. But it's going to be tough to pull off. When selecting a site for their data centers, companies like Microsoft and Amazon prioritize access to low-cost energy, which they've historically found in places like Silicon Valley, northern Virginia and Dallas/Fort Worth, though Atlanta and Phoenix have been growing. They also look for internet infrastructure from telecoms AT&T, Verizon and CenturyLink, along with fiber providers like Charter and Comcast, to keep data flowing.


Google AI — Reincarnating Reinforcement Learning

To overcome the inefficiencies of tabula rasa RL, Google AI introduces Reincarnating RL, an alternative approach to RL research in which prior computational work, such as learned models, policies, or logged data, is reused or transferred between design iterations of an RL agent or from one agent to another. Some sub-areas of reinforcement learning already leverage prior computation, but most RL agents are still trained from scratch, and until now there has been no broader effort to leverage prior computational work in the RL training workflow. The code and trained agents have been released so researchers can build on this work. Reincarnating RL is a more efficient way to train RL agents than training from scratch, allowing more complex RL problems to be tackled without excessive computational resources. Furthermore, RRL can enable a benchmarking paradigm where researchers continually improve and update existing trained agents. Real-world RL use cases will likely be in domains where prior computational work is available.
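As a rough illustration of the core idea, here is a minimal tabular Q-learning sketch in Python. The actual Google AI work uses deep RL transfer methods (for example, distilling a teacher policy); the grid size and the stand-in teacher table below are illustrative assumptions, not the released code:

import numpy as np

n_states, n_actions = 100, 4

# Tabula rasa: every design iteration starts from zeros and pays the
# full training cost again.
q_scratch = np.zeros((n_states, n_actions))

# Reincarnating RL: reuse prior computational work to warm-start the
# next iteration. A stand-in teacher table is generated here; in
# practice this would be a checkpoint from an earlier training run.
q_teacher = np.random.default_rng(0).normal(size=(n_states, n_actions))
q_student = q_teacher.copy()  # warm start instead of zeros

# Standard Q-learning updates then proceed as usual from the warm start.
def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])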


Best practices for bolstering machine learning security

Given the proliferation of businesses using ML and the nuanced approaches for managing risk across these systems, how can organizations ensure their ML operations remain safe and secure? When developing and implementing ML applications, Hanif and Rollins say, companies should first use general cybersecurity best practices, such as keeping software and hardware up to date, ensuring their model pipeline is not internet-exposed, and using multi-factor authentication (MFA) across applications. After that, they suggest paying special attention to the models, the data, and the interactions between them. “Machine learning is often more complicated than other systems,” Hanif says. “Think about the complete system, end-to-end, rather than the isolated components. If the model depends on something, and that something has additional dependencies, you should keep an eye on those additional dependencies, too.” Hanif recommends evaluating three key things: your input data, your model’s interactions and output, and potential vulnerabilities or gaps in your data or models.
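To make "evaluate your input data and your model's output" concrete, here is a minimal, hedged Python sketch; the schema, field names, and the [0, 1] score range are illustrative assumptions, not the specific practices Hanif and Rollins describe:

from dataclasses import dataclass

@dataclass
class Schema:
    fields: dict  # field name -> (min, max) allowed range

SCHEMA = Schema(fields={"age": (0, 120), "income": (0.0, 1e7)})

def validate_input(row: dict, schema: Schema) -> None:
    """Reject out-of-schema rows before they ever reach the model."""
    for name, (lo, hi) in schema.fields.items():
        if name not in row:
            raise ValueError(f"missing field: {name}")
        if not lo <= row[name] <= hi:
            raise ValueError(f"{name}={row[name]} outside [{lo}, {hi}]")

def validate_output(score: float) -> None:
    """Sanity-check model output; sudden drift can signal tampering."""
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"score {score} outside expected [0, 1]")

validate_input({"age": 34, "income": 52_000.0}, SCHEMA)
validate_output(0.87)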


How To Be Crypto-Agile Before Quantum Computing Upends The World

To be crypto-agile means to be able to make cryptographic changes quickly and without the burden of massive projects. That means adopting tools and technologies that abstract away underlying cryptographic primitives and that can change readily. To be crypto-agile is to acknowledge that change is on the horizon and that anything built today needs to be able to adapt to coming changes. Smart organizations are already updating existing systems and enforcing crypto-agility requirements for all new projects. This is an opportunity for security teams to re-examine not just what algorithms they are using but also their data protection strategies in general. Most data today is “protected” using transparent disk or database encryption. This is low-level encryption that makes sure the bytes are scrambled before they hit the disk but is invisible while the machine is on. Servers stay on around the clock. A better approach is to use application-layer encryption (ALE). ALE is an architectural approach where data is encrypted before going to a data store. When someone peeks at the data in the data store, they see random bytes that have no meaning without the correct key.
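As a rough sketch of the ALE pattern (not the article's specific design), the snippet below uses the Python cryptography package's Fernet recipe; the key-versioning setup and the example record are illustrative assumptions:

from cryptography.fernet import Fernet, MultiFernet

# Newest key listed first: used for all new encryptions. The older key
# is kept so existing records can still be decrypted during migration.
key_v2 = Fernet(Fernet.generate_key())
key_v1 = Fernet(Fernet.generate_key())
crypto = MultiFernet([key_v2, key_v1])

record = b"alice@example.com"        # plaintext never reaches the store
ciphertext = crypto.encrypt(record)  # what the data store actually holds

# Anyone peeking at the store sees only opaque bytes.
assert crypto.decrypt(ciphertext) == record

# When key_v1 is retired, re-encrypt existing rows under the newest key.
rotated = crypto.rotate(ciphertext)

Because the data store only ever holds ciphertext, rotating keys (or, with a broader wrapper, swapping the underlying algorithm) becomes an application-level change rather than a storage migration, which is exactly the kind of agility the article argues for.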


What Happens if Microservices Vanish -- for Better or for Worse

The modern cloud has really accelerated the move towards those architectures. There are benefits and drawbacks to those architectures. There are a lot more moving pieces, a lot more complexity, and yet microservices offer a way to tame some of that complexity by putting services behind API boundaries. Amazon was very famous in the early days because Jeff Bezos required that teams communicate through APIs. That created this notion that each team was running a different service, and the services were connected through software: APIs, not human beings. That helps different teams move independently and codifies the contract between the teams, and yet there is no question that it can be massively overdone and can be used as a tool to sweep complexity under the rug and pretend it doesn’t exist. Once it’s behind an API, it’s easy to just set it and forget it. The reality is, I see companies with thousands of microservices when they probably should have had five. It can definitely be overdone, but a spectrum is the way I think of it.
