August 17, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
There is no point in having IoT if the connectivity is weak. Without reliable connectivity, the data that sensors and devices are meant to collect and analyse in real time may arrive too late to be useful. In healthcare, for example, connected devices monitor the vital signs of patients in an intensive-care ward in real time and alert the physician to any readings outside the specified limits. ... The future evolution of connectivity technologies will combine with IoT to significantly expand its capabilities. The arrival of 5G will enable high-speed, low-latency connections, ushering in IoT systems that were previously impossible, such as self-driving vehicles that instantaneously analyse vehicle state and provide real-time collision avoidance. The evolution of edge computing will bring data processing closer to the IoT devices themselves, thereby significantly reducing latency and bandwidth costs. Connectivity underpins almost everything we value in IoT: the data exchange, real-time use, scale and interoperability we expect from our systems.
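The intensive-care example above, alerting only when vitals leave specified limits, is also a simple illustration of edge processing: the device filters readings locally and forwards only what needs attention. A minimal sketch, in which the vital names, thresholds, and reading format are illustrative assumptions rather than any real device protocol:

```python
# Sketch of edge-side filtering: the device checks vitals locally and
# forwards only out-of-range readings, reducing latency-sensitive traffic.
# The vitals, limits, and reading format are illustrative assumptions.

VITAL_LIMITS = {
    "heart_rate": (50, 110),  # beats per minute
    "spo2": (92, 100),        # % oxygen saturation
}

def out_of_range(reading: dict) -> list:
    """Return the names of vitals in `reading` outside their limits."""
    alerts = []
    for vital, (low, high) in VITAL_LIMITS.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(vital)
    return alerts

def process_at_edge(reading: dict):
    """Forward the reading upstream only if it needs a physician's attention."""
    alerts = out_of_range(reading)
    if alerts:
        return {"reading": reading, "alert_on": alerts}
    return None  # normal reading: stays on the device, saving bandwidth
```

The design choice is the one the paragraph describes: routine data never crosses the network, so the alert path stays fast even when connectivity is constrained.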
When it comes to enterprise development, platforms alone can’t address the critical challenge of maintaining consistency between development, test, staging, and production environments. What teams really need is seamless propagation of changes between environments, kept production-like through synchronization, with full control over the process. This control enables the integration of crucial safety steps such as approvals, scans, and automated testing, ensuring that issues are caught and addressed early in the development cycle. Many enterprises are implementing real-time visualization capabilities to give administrators and developers immediate insight into differences between instances, including scoped apps, store apps, plugins, update sets, and even versions, across the entire landscape. This visibility is invaluable for quickly identifying and resolving discrepancies before they cause problems in production. A lack of real-time multi-environment visibility is akin to performing a medical procedure without an X-ray, CT, or MRI of the patient.
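The cross-instance visibility described above, spotting where dev, test, and production have drifted apart, can be sketched as a diff over per-environment manifests. The manifest shape, component names, and version strings below are illustrative assumptions, not a real platform API:

```python
# Sketch of cross-environment drift detection: compare the component
# versions recorded for each instance and report anything that differs.
# Manifest structure, names, and versions are illustrative assumptions.

def find_drift(envs: dict) -> dict:
    """Return {component: {env: version}} for components whose versions differ."""
    components = set()
    for manifest in envs.values():
        components.update(manifest)
    drift = {}
    for comp in sorted(components):
        versions = {env: manifest.get(comp, "<missing>")
                    for env, manifest in envs.items()}
        if len(set(versions.values())) > 1:
            drift[comp] = versions
    return drift

envs = {
    "dev":  {"scoped_app_a": "2.1", "plugin_x": "1.4"},
    "test": {"scoped_app_a": "2.1", "plugin_x": "1.3"},
    "prod": {"scoped_app_a": "2.0", "plugin_x": "1.3"},
}
```

Here `find_drift(envs)` flags both components and shows exactly which environment lags, the kind of discrepancy a team would want surfaced before it reaches production.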
So are we doomed to live in a world where staging is eternally broken? As we’ve seen, traditional approaches to staging environments are fraught with challenges. To overcome them, we need to think differently. This brings us to a promising new approach: canary-style testing in shared environments. This method allows developers to test their changes in isolation within a shared staging environment. It works by creating a “shadow” deployment of the services affected by a developer’s changes while leaving the rest of the environment untouched. The approach is similar to canary deployments in production, but applied to staging. The key benefit is that developers can share an environment without affecting each other’s work. When a developer wants to test a change, the system creates a unique path through the environment that includes their modified services while using the existing versions of all other services. Moreover, this approach enables testing at the granularity of every code change or pull request, meaning developers can catch issues very early in the development process, often before the code is merged into the main branch.
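The “unique path through the environment” can be sketched as request routing: a request tagged with a developer’s sandbox ID resolves to that developer’s shadow services, and everything else falls back to the shared baseline. The service names, sandbox IDs, and header name below are illustrative assumptions:

```python
# Sketch of canary-style routing in a shared staging environment:
# a request carrying a sandbox ID is routed to that developer's shadow
# deployments; every other service resolves to the shared baseline.
# Service names, sandbox IDs, and the header are illustrative assumptions.

BASELINE = {"cart": "cart-v42", "payments": "payments-v17", "search": "search-v9"}

# Shadow deployments registered per sandbox: only the services a
# developer actually changed get a shadow instance.
SHADOWS = {
    "pr-123": {"payments": "payments-pr-123"},
}

def resolve(service: str, headers: dict) -> str:
    """Pick the instance that should serve this request."""
    sandbox = headers.get("x-sandbox-id")
    if sandbox and service in SHADOWS.get(sandbox, {}):
        return SHADOWS[sandbox][service]
    return BASELINE[service]
```

With this routing rule, `resolve("payments", {"x-sandbox-id": "pr-123"})` reaches the shadow build under test, while the same request’s calls to `cart` and `search`, and all requests without the header, keep using the untouched baseline, which is what lets many developers share one environment safely.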
The act contains a list of prohibited high-risk systems. This list includes AI systems that use subliminal techniques to manipulate individual decisions. It also includes unrestricted, real-time facial recognition systems used by law enforcement authorities, similar to those currently used in China. Other AI systems, such as those used by government authorities or in education and healthcare, are also considered high risk. Although these aren’t prohibited, they must comply with many requirements. ... The EU is not alone in taking action to tame the AI revolution. Earlier this year the Council of Europe, an international human rights organisation with 46 member states, adopted the first international treaty requiring AI to respect human rights, democracy and the rule of law. Canada is also discussing the AI and Data Bill. Like the EU laws, this will set rules for various AI systems, depending on their risks. Instead of a single law, the US government recently proposed a number of different laws addressing different AI systems in various sectors. ... The risk-based approach to AI regulation, used by the EU and other countries, is a good start when thinking about how to regulate diverse AI technologies.
The finance team needs to have a ‘seat at the table’ from the very beginning to overcome these challenges and effect successful transformation. Too often, finance only becomes involved when it comes to the cost and financing of the project, and when finance leaders do try to become involved, they can have difficulty gaining access to the needed data. This was recently confirmed by members of the Future of Finance Leadership Advisory Group, where almost half of those polled (47%) noted challenges gaining access to needed data. Because finance professionals understand the needs of stakeholders within the business, they are in the best position to outline what IT needs to create an effective, efficient structure. Finance professionals are in-house consultants who collaborate with other functions to understand their workings and end-to-end procedures, discover where both problems and opportunities exist, identify where processes can be improved, and ultimately find solutions. Digital transformation projects rely on harmonizing processes and standardizing systems across different operations.
The core of DevSecOps is ‘security as code’, a principle that dictates embedding security into the software development process. To keep every release tight on security, we weave those practices into the heart of our CI/CD flow. Automation is key here: it smooths security into the development process, keeping us safe from the get-go without slowing us down. A shared responsibility model is another pillar of DevSecOps. Security is no longer the sole domain of a separate security team but a shared concern across all teams involved in the development lifecycle. Working together, security isn’t just slapped on at the end but baked into every step from start to finish. ... Adopting DevSecOps is not without its challenges. Shifting to DevSecOps means knocking down the walls that have long kept dev, ops and security folks in separate corners. Balancing the need for rapid deployment with security considerations can be difficult. To nail DevSecOps, teams must level up their skills through targeted training, and weaving together seasoned systems with cutting-edge DevSecOps tactics calls for a sharp, strategic approach.
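“Security as code” in a CI/CD flow can be sketched as a pipeline of automated gates, for example secret scanning and dependency checks, that fail the build before a merge. The checks below are deliberately simplified stand-ins for real scanners, not any particular tool’s behaviour:

```python
# Sketch of "security as code": automated gates run on every build and
# fail fast, so issues surface before merge. These checks are simplified
# illustrative stand-ins for real scanners, not a production tool.
import re

def scan_for_secrets(files: dict) -> list:
    """Flag files containing things that look like hard-coded credentials."""
    pattern = re.compile(r"(api[_-]?key|password|secret)\s*=\s*['\"]\w+", re.I)
    return [name for name, text in files.items() if pattern.search(text)]

def check_dependencies(deps: dict, advisories: dict) -> list:
    """Flag pinned dependencies whose version matches a known advisory."""
    return [f"{pkg}=={ver}" for pkg, ver in deps.items()
            if advisories.get(pkg) == ver]

def security_gate(files, deps, advisories):
    """Run all gates; the build passes only if no gate reports a finding."""
    findings = scan_for_secrets(files)
    findings += check_dependencies(deps, advisories)
    return (not findings, findings)
```

Wiring gates like these into the pipeline is what makes security a shared, automated concern rather than a manual review bolted on at the end: every push runs the same checks, and a failing gate stops the release.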