January 05, 2022
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
With the emphasis on cybersecurity, I expect to see open source projects and commercial offerings squarely focused on cloud native security. Two areas will get attention: the software supply chain and eBPF. The software supply chain closely mimics the supply chain of real-world commerce, where resources are consumed, then transformed through a series of steps and processes, and finally supplied to the customer. Modern software development is about assembling and integrating various components available in the public domain as open source projects. In such a complex supply chain, a single compromised piece of software can cause severe damage across many deployments. Recent incidents involving Codecov, SolarWinds, Kaseya, and the ua-parser-js NPM package highlight the need to secure the software supply chain. In 2022, there will be new initiatives, projects, and even new startups focusing on secure software supply chain management. The other exciting trend is eBPF, which enables cloud native developers to build secure networking, service mesh, and observability components.
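One basic building block of supply chain security is verifying that a fetched artifact matches a digest pinned at the time it was vetted. The sketch below illustrates the idea in Python; the file path and pinned digest are hypothetical stand-ins, not taken from any of the incidents above.

```python
import hashlib
import sys

# Hypothetical pinned digest, e.g. recorded in a lockfile when the
# dependency was first reviewed and approved.
PINNED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def sha256_of(path: str) -> str:
    """Stream the file so large artifacts need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    artifact = sys.argv[1]  # e.g. a downloaded package tarball
    if sha256_of(artifact) != PINNED_SHA256:
        sys.exit(f"Checksum mismatch for {artifact}: refusing to install")
    print(f"{artifact} matches its pinned digest")
```

Lockfile-based pinning in package managers applies the same check automatically; the point is that a compromised upstream artifact fails loudly instead of flowing downstream.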
Many chronic diseases follow a dynamic trajectory, which makes their progression hard to predict. First-generation AI systems often disregard this, because accounting for it requires constant adaptation of therapeutic regimens; moreover, loss of response to a therapy may not become apparent for several months. Second-generation AI systems are designed to improve response to therapies and to analyse inter-subject and intra-subject variability in response over time. Most first-generation AI systems extract data from large databases and artificially impose a rigid “one for all” algorithm on all subjects. Attempts to constantly amend treatment regimens based on big data analysis may be irrelevant for an individual patient, and imposing a “close to optimal” fit on all subjects does not resolve the difficulties created by the dynamism and inherent variability of biological systems. Second-generation AI systems place the individual patient at the epicentre of the algorithm and adapt their output in a timely manner.
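To make the contrast concrete, here is a deliberately simplified sketch, not drawn from the article, of a fixed “one for all” rule versus a per-patient rule that adapts as responses arrive over time. The doses, target, and update rule are illustrative assumptions only.

```python
def fixed_dose(_history):
    # First-generation style: one population-level rule for every subject.
    return 50.0  # mg, a "close to optimal" dose imposed on all patients

def adaptive_dose(history, target=1.0, step=5.0, base=50.0):
    """Second-generation style: start from the population dose, then
    nudge it after each observed response for this one patient."""
    dose = base
    for response in history:           # responses measured over time
        if response < target:          # losing response -> titrate up
            dose += step
        elif response > target * 1.2:  # over-responding -> titrate down
            dose -= step
    return dose

# Intra-subject variability: the same patient's response drifts over months.
patient_responses = [1.1, 1.0, 0.8, 0.7, 0.9]
print(fixed_dose(patient_responses))     # always 50.0
print(adaptive_dose(patient_responses))  # 65.0, adjusted to this trajectory
```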
A public blockchain network — one that anyone can join without asking for permission — allows unlimited viewing of information stored on it, eliminates intermediaries, and operates independently of any governing party. It is well-suited for digital consumer offerings (like NFTs), cryptocurrencies, and certifying information such as individuals’ degrees or certificates. But private networks — those that require a party to be granted permission to join — are often far better suited for businesses, because access is restricted to verified members and only parties directly working together can see the specific information they exchange. This better satisfies industrial-grade security requirements. For these reasons, Walmart decided to go with a private network built on Hyperledger Fabric, an open-source platform. ... For Walmart and its carriers, this meant working with each carrier’s unique data (vendor name, payment terms, contract duration, and general terms and conditions), combined with governing master tables of information such as fuel rates and tax rates. The parties then jointly agree on the formulas that the blockchain will use to calculate each invoice.
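The article does not show the formulas themselves, but the agreed-on calculation might look conceptually like the sketch below. The field names, rates, and formula are hypothetical stand-ins, not Walmart’s actual chaincode; the point is that the function is deterministic, so carrier and shipper compute the same invoice from the same shared inputs.

```python
# Governing master tables (shared and versioned on the ledger).
FUEL_RATE_PER_MILE = 0.55   # from the fuel-rate master table (assumed)
TAX_RATE = 0.08             # from the tax-rate master table (assumed)

def calculate_invoice(shipment: dict, carrier_terms: dict) -> float:
    """Deterministic formula: same inputs -> same invoice on every node."""
    base = shipment["miles"] * carrier_terms["rate_per_mile"]
    fuel_surcharge = shipment["miles"] * FUEL_RATE_PER_MILE
    subtotal = base + fuel_surcharge + carrier_terms["fixed_fees"]
    return round(subtotal * (1 + TAX_RATE), 2)

shipment = {"miles": 420}
carrier_terms = {"rate_per_mile": 2.10, "fixed_fees": 75.0}
print(calculate_invoice(shipment, carrier_terms))  # 1283.04
```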
A property graph uses nodes, relationships, labels, and properties. Both the relationships and the nodes they connect are named and capable of storing properties. Nodes can be labelled to mark them as members of a group. Property graphs use directed edges: each relationship has a start node and an end node. Relationships can also be assigned properties, which is useful for attaching additional metadata to the relationships between nodes. ... Knowledge graphs are very useful in building a data fabric. Their semantics (and the use of graphs) support the discovery layers and data orchestration in a data fabric. Combining the two makes the data fabric easier to build out incrementally and more flexible, which lowers risk and speeds up deployment: an organization can develop the fabric in stages, starting with a single domain or a high-value use case and gradually expanding with more data, users, and use cases. A data fabric architecture, combined with a knowledge graph, supports useful capabilities in many key areas.
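A minimal in-memory sketch of the property-graph model described above — nodes carrying labels and properties, plus directed, named relationships that carry their own properties. The schema and data are illustrative, not tied to any particular graph database.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    labels: set = field(default_factory=set)        # group membership
    properties: dict = field(default_factory=dict)  # key/value metadata

@dataclass
class Relationship:
    start: str          # directed edge: from this node id...
    end: str            # ...to this node id
    rel_type: str       # relationships are named
    properties: dict = field(default_factory=dict)  # edge-level metadata

nodes = {
    "p1": Node({"Person"}, {"name": "Ada"}),
    "c1": Node({"Company"}, {"name": "Acme"}),
}
rels = [Relationship("p1", "c1", "WORKS_AT", {"since": 2019})]

# Traverse: who works where, and since when?
for r in rels:
    if r.rel_type == "WORKS_AT":
        print(nodes[r.start].properties["name"], "->",
              nodes[r.end].properties["name"], r.properties)
```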
The world of ten years ago was dominated by structured data. After 2012, though, sensors became cheaper, cell phones evolved into smartphones, and cameras made capturing images effortless. With this came a flood of unstructured data, and enterprises entered uncharted territory where progress has been slow. Inhibitors to progress in this area include: Complexity: unlike structured data, which can be analyzed directly, unstructured data must first be processed before it can be analyzed, usually with artificial intelligence; machine learning algorithms classify and label its content. However, the sheer volume and complexity of unstructured data make it hard to identify high-quality data within a data set -- this has been painful for developer teams and a key challenge for data architectures that are already complex. Cost: although enterprises recognize the value of unstructured data, cost can be an obstacle to exploiting it; the cost of infrastructure, human resources, and time can hinder the implementation and development of AI and the data it analyzes.
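As a toy illustration of the classify-and-label step mentioned above, the sketch below trains a tiny text classifier and applies it to a new document. It assumes scikit-learn is installed; the categories and documents are made up for the example.

```python
# Toy illustration: labeling unstructured text so it can be analyzed
# like structured data. Assumes scikit-learn is available.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "invoice attached for last month's shipment",
    "server crashed with out-of-memory error",
    "please find the quarterly sales report",
    "application throws null pointer exception",
]
labels = ["finance", "it", "finance", "it"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["database timeout during the nightly job"]))  # ['it']
```

Real pipelines add the hard parts the paragraph warns about: curating high-quality training data at scale, and paying the infrastructure and labeling costs that come with it.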
“It’s a digital representation of the physical supply chain,” said Hans Thalbauer, managing director for supply chain and logistics at Google Cloud. “You model all the different locations of your enterprise. Then you model all your suppliers, not just the tier one but tier two, three, and four. You bring in the logistic service providers. You bring in manufacturing partners. You bring in customers and consumers so that you have really the full view.” Once a network of supply chain players has been built out, the customer then starts loading data into their digital twin, beginning with their private enterprise data, which typically includes past orders, pricing, costs, and supply and demand forecasts, Thalbauer said. “Then you also want to get information from your business partners,” Thalbauer told Datanami last year. “You share your demands with your suppliers. And they actually loop back to you what is the supply situation. You share the information with the logistics service providers. You share sustainability information with the service provider.”
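A simplified sketch of the kind of multi-tier network model Thalbauer describes, with private data loaded first and partner data shared back in. The names and structures are illustrative assumptions, not Google Cloud’s actual data model or API.

```python
# Illustrative multi-tier supply-chain network for a digital twin.
network = {
    "enterprise": {"plants": ["plant_A", "plant_B"]},
    "suppliers": {
        1: ["sup_11", "sup_12"],  # tier-one suppliers
        2: ["sup_21"],            # tier two...
        3: ["sup_31"],
        4: ["sup_41"],            # ...down to tier four
    },
    "logistics": ["carrier_X"],
    "customers": ["retailer_Y"],
}

# Private enterprise data is loaded into the twin first...
twin_data = {"past_orders": [], "pricing": {}, "demand_forecast": {}}

# ...then partners loop data back, e.g. supply status from a tier-two supplier.
def ingest_supplier_status(twin: dict, supplier_id: str, status: dict) -> None:
    twin.setdefault("supply_status", {})[supplier_id] = status

ingest_supplier_status(twin_data, "sup_21", {"on_time": False, "eta_days": 12})
print(twin_data["supply_status"])
```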
Empowering Digital Transformation through Data Strategy & AI Innovation | Data & Privacy Leader | Speaker & Author
Nice thoughts, Kannan Subbiah, on property graphs & knowledge graphs. While they have become buzzwords, I would emphasize the problem to be solved & the approach to simplifying the complexity of a business-data landscape. A strong metadata management operating model can make it easier to define a semantic layer that makes data simple to understand & interoperable across domains. Sharing similar thoughts - https://www.dataversity.net/data-democratization-and-the-data-fabric/