What are the pitfalls to avoid when managing a hybrid cloud ecosystem?
A hybrid cloud environment promises a business an optimum solution from a security, cost and accessibility perspective. It means that organisations can move workloads between resources according to the needs of the business.
Valued by organisations that want to continue operating ‘business as usual’ whilst driving their digital transformation initiatives, hybrid cloud offers an agile and scalable approach to hosting applications in multiple environments.
Experts have estimated that the global hybrid cloud market was valued at USD 63,658.0 million in 2021 and is projected to reach USD 667,916.4 million by 2030, growing at a CAGR of 29.8% over the forecast period.
Many organisations start off with a ‘lift and shift’ approach, moving existing applications over to their new cloud environment, whether through choice or pandemic-provoked necessity. But most of these companies soon find that this approach creates compatibility problems. Hybrid cloud introduces some major risks that need to be carefully mitigated and managed if a cloud project is to succeed. And in some cases, a lack of effective cloud management sees cloud budgets spiral; in others, environments are left under-optimised, with current estimates suggesting that up to 30% of cloud budgets are wasted.
So how do you design, integrate and manage a hybrid cloud ecosystem so that it delivers all the security, cost savings and efficiencies that were promised? This article will explain more.
What considerations need to be made?
Before starting your cloud journey, it’s important to consider the service infrastructure you are trying to replicate in the cloud environment. Working to a cloud adoption framework will give you a clear line of best practice.
Each cloud platform has its own framework of excellence and is consumed in different ways. This demands a review of the reasons why a business is moving to the cloud in the first place. A detailed benefits and cost analysis should be undertaken to decide whether an application would be better off in a cloud environment than on premises. For example, do the desired benefits come down to revenue generation, internal infrastructure, employee productivity or another reason? A rough comparison, as in the sketch below, can anchor that analysis.
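As a crude illustration of that benefits-and-cost analysis, the Python sketch below compares a hypothetical on-premises deployment with a cloud equivalent over a five-year horizon. Every figure is a placeholder to be swapped for your own numbers; the value is in the shape of the comparison, not the output.

```python
# Illustrative only: all figures below are hypothetical placeholders,
# not benchmarks. Swap in your own estimates.

def on_prem_cost(years: int, hardware: float, annual_ops: float) -> float:
    """Up-front hardware plus recurring operations (power, staff, maintenance)."""
    return hardware + annual_ops * years

def cloud_cost(years: int, monthly_spend: float, migration: float) -> float:
    """One-off migration effort plus pay-as-you-go consumption."""
    return migration + monthly_spend * 12 * years

HORIZON = 5  # years
onprem = on_prem_cost(HORIZON, hardware=120_000, annual_ops=40_000)
cloud = cloud_cost(HORIZON, monthly_spend=4_500, migration=30_000)

print(f"On-premises over {HORIZON} years: {onprem:,.0f}")
print(f"Cloud over {HORIZON} years: {cloud:,.0f}")
print("Cloud favourable" if cloud < onprem else "On-premises favourable")
```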
It’s the ecosystem of the platform that differentiates where you place the workload. A lot of customer-facing businesses lean towards AWS because of its affiliate ecosystem and marketplace, which allows you to bring in a multitude of third parties with ease. This is beneficial if you have broken your services down into lines of code and looked at which elements other providers could supply, ultimately reducing your time to value in delivering a service back out to customers.
In comparison, Microsoft’s cloud environment is based on an infrastructure-led way of working, while GCP tends to be used for its data-led capabilities. It’s important to choose a cloud provider that best suits your business’s operating model. Is it more important that the platform allows you to manage revenue streams, or to boost employee productivity and experience? How you intend to improve efficiencies to boost the bottom line is an important consideration here.
What are the pitfalls of hybrid cloud management?
Some of the first risks an organisation will encounter centre on governance and risk: who has access to what data, and how do you maintain an audit trail? From a security perspective there are also challenges around system vulnerabilities and insufficient identity, credential and access management. There is a real need for effective hybrid cloud management to mitigate these challenges.
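To make those two challenges concrete, here is a minimal sketch in AWS terms, assuming boto3 and valid credentials: a least-privilege IAM policy for the ‘who has access to what data’ question, and a CloudTrail trail for the audit question. The bucket, policy and trail names are hypothetical placeholders; a real estate would manage these via infrastructure-as-code rather than one-off scripts.

```python
# Sketch only: names are hypothetical; assumes boto3 and AWS credentials.
import json
import boto3

iam = boto3.client("iam")
cloudtrail = boto3.client("cloudtrail")

# Least privilege: read-only access to a single S3 prefix, nothing more.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-data-bucket/reports/*",
    }],
}
iam.create_policy(
    PolicyName="ReadOnlyReportsAccess",
    PolicyDocument=json.dumps(policy_document),
)

# Audit trail: record API activity to S3 for later review.
cloudtrail.create_trail(Name="org-audit-trail", S3BucketName="example-audit-logs")
cloudtrail.start_logging(Name="org-audit-trail")
```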
The risks differ depending on which multi-cloud strategy is taken. Each cloud service – whether it’s Azure, AWS, GCP or another – has its own characteristics and thus is consumed in distinct ways. For example, AWS is a business revenue-generating platform, so the risk to revenue and reputation is massive if a business lifts and shifts a workload to this environment inefficiently, without a decent adoption framework or a centre of excellence.
If the foundation isn’t right in the first place, the results can be catastrophic: loss of revenue, loss of data and loss of reputation. The risk is that a business ends up spending more to fix mistakes than it cost to move to the cloud in the first place.
The other key issue is adoption. There’s little point in moving applications and services to the cloud if no one ends up using them. They need to be appropriate and relevant as well as functional.
How can an organisation avoid these pitfalls?
Take your time from the outset in building out a centre of excellence. Research and employ good developers who will talk through a cloud adoption framework with you. Remember that no matter how quickly you want to migrate your services to the cloud, too much haste will lead to operational inefficiencies that hold you back from the true benefits it promises.
Cloud transformation affects the whole business, from finance to HR, so while the overall strategy needs to be led from the top down, every department needs to be listened to in order to make sure its requirements are met.
Companies will be investing either in people or in tools, but the market currently isn’t strong enough from a tooling perspective to remove the need for people. The exception is the 1:1 case: hybrid cloud management between an on-premises environment and a single cloud platform is quite advanced. True multi-cloud management, however, is probably still a year or two away.
The pitfalls of trying to optimise a hybrid cloud environment come down to workload behaviours: are the applications you’re moving to the cloud ready? If not, you’re probably moving the problem to a different location without resolving it, and your cloud project will run up unnecessary costs. A lot of maturing needs to happen before there is true hybrid cloud management across workloads and spaces. It comes down to infrastructure, the type of services, how employees communicate with the business and the overall operating model.
A move to the cloud is a good chance to consolidate and optimise your applications and services.
Once a hybrid cloud environment is optimised, what’s next?
Most companies will initially lift a service that they have on premises into the cloud and then they will start to look at redesigning those applications natively by leveraging PaaS components instead of IaaS, thus reaping the true benefits of moving to a cloud provider from on premise. They can then look at bringing third party platforms, API integrations and managed services into this, leading to second generation optimisation and app modernisation programmes.
A final piece of advice if you’re looking at hybrid cloud management, is that you need to engage with a provider that knows both worlds as they will be able to deliver tried and tested capabilities. Having a partner that is solely cloud or solely on-premises will ultimately do you a disservice as they don’t understand the other side of the coin. A suitably designed, integrated and managed hybrid cloud ecosystem can be as secure as conventional on-premises IT while delivering improved productivity and business outcomes.
Sustainability in the tech industry: The hidden problem and how to tackle it.
Businesses are constantly innovating towards a better tomorrow—to compete, win customers, and expand into new territories. However, to exist and grow in tomorrow’s world, organisations must adopt sustainability. Priyanka Roy, enterprise evangelist at ManageEngine, outlines the steps that organisations in the IT sector can take to adopt a sustainable approach.
Technology has helped businesses move towards a better tomorrow—to innovate, to compete, to win customers. The digitisation of value chains has accelerated new business models, cost efficiencies, and revenue streams.
But beneath the glossy exterior lies a silent enemy: technology’s contribution to the carbon footprint. A recent study predicted that if the trends in the information and communication technology (ICT) industry continue at the current rate, the field will constitute 14% of global carbon emissions by 2040. While some tech organisations have begun to opt for green energy sources to combat carbon emissions, the industry is still primarily fossil-fuel driven.
Software giants such as Microsoft, Google, and Infosys are on the path to net-zero environments. As the first to go carbon-neutral in 2007, Google is the front runner in committing to utilising renewable energy sources. Infosys claims that it is already 30 years ahead of the climate change target set by the Paris Agreement, getting there by optimising its energy sources and using solar panels in its offices.
Eco-friendliness in the tech industry is falling short
Compared to what we’re seeing in industries such as food, fashion, and manufacturing, eco-friendliness in the largely intangible ICT industry is lagging. For example, the fashion industry could curb its carbon footprint by 40% using AI-based tools to predict demand and increase profits. Using these models, organisations in this industry can predict which designs will be popular and which won't sell as much, thereby reducing waste by making fewer versions of less popular designs. That said, while AI reduces carbon emissions, it has its own environmental impact, and ignoring this hinders the goals of sustainability and the move towards a circular economy.
Generally, there are five accessible ways tech companies can adopt a better, more sustainable approach:
Internal audits: Evaluate the current processes followed across the value chain to highlight any unsustainable practices that may contribute to the carbon footprint. For example, assess the source of power, turn off unused appliances, and optimise data centres.
Green upgrades: Track the consumption of resources across the organisation. This can also include upgrades that range from energy-efficient equipment to implementing eco-friendly software within the organisation.
Renewable power sources: Look for renewable sources of energy that can power the company. Some companies are strategically located close to power plants that supply renewable energy, and some can even install solar power units within their premises. Location is key since renewable energy sources are not available in abundance yet.
Operational efficiency: Look for ways to reduce storage and power use at data centres. Data centre management software can improve operational efficiency by analysing bottlenecks and the performance and organisation of data to ensure optimum usage.
Data centre temperature monitoring: Install smart temperature control devices to reduce energy use by monitoring data centre temperature and turning cooling devices off when the data centre reaches the right temperature.
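As a minimal sketch of the temperature-monitoring item above: a thermostat-style loop with a hysteresis band so cooling is not constantly switching on and off. The `read_temperature` and `set_cooling` functions are hypothetical placeholders for whatever sensor and actuator APIs a real facility exposes.

```python
# Sketch only: sensor/actuator functions are hypothetical placeholders.
import random
import time

TARGET_C = 24.0     # desired aisle temperature
HYSTERESIS_C = 1.0  # dead band to avoid rapid on/off cycling

def read_temperature() -> float:
    """Placeholder: a real deployment would query sensor hardware."""
    return 22.0 + random.random() * 6.0

def set_cooling(on: bool) -> None:
    """Placeholder: a real deployment would drive the cooling units."""
    print("cooling", "ON" if on else "OFF")

cooling_on = False
for _ in range(10):  # a real control loop would run continuously
    temp = read_temperature()
    if temp > TARGET_C + HYSTERESIS_C and not cooling_on:
        cooling_on = True
        set_cooling(True)
    elif temp < TARGET_C - HYSTERESIS_C and cooling_on:
        cooling_on = False
        set_cooling(False)
    time.sleep(0.1)  # shortened for the example; poll sensibly in practice
```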
There is yet another dimension to sustainability: the human aspect. By moving corporate facilities to smaller towns, organisations can offset the tremendous strain placed on resources, such as energy, as a result of employees and their families moving to urban centres.
Here at Zoho Corporation, we power our Indian offices with a 5-megawatt solar power plant, and our IT management division, ManageEngine, has set up a Zoho Farm near Austin, Texas. This enables employees to work from the Wi-Fi-enabled farmhouse as well as step outside to harvest organically grown crops for themselves and their families. Excess produce from the farm is contributed to a local food bank.
Extending sustainability to users
Sustainability practices can also be extended to users. Green coding produces algorithms that reduce energy consumption during the use of software. This leads to simpler processes and superior performance, increasing efficiency for the user by reducing glitches and processing time. Similarly, deploying software upgrades that avoid inflating memory consumption or slowing devices down can increase the life span of those devices.
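A tiny Python illustration of the green-coding principle: two ways to compute the same result. The generator version streams values instead of materialising a million-element list, cutting memory pressure and, at scale, energy per computation.

```python
N = 1_000_000

# Wasteful: builds a million-element list purely to sum it once.
total_list = sum([i * i for i in range(N)])

# Leaner: a generator streams values one at a time; no list is allocated.
total_gen = sum(i * i for i in range(N))

assert total_list == total_gen  # same answer, smaller footprint
```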
Customer reviews can also be analysed to find any hidden sustainability problems. For example, a customer might talk about increased battery consumption due to a software upgrade for their device, a possible signal that the upgrade itself is not eco-friendly. Customer education is key, too. How many times have we heard people say they never shut down their laptops? Customers often feel that they aren’t directly responsible for sustainability until they are made aware of the alarming impact of individual actions. Raising awareness is key to changing the mindset and habits of customers towards being more environmentally conscious.
Technology can be a powerful force in driving progress, but we can't ignore the detrimental impact it is having on sustainability. In its continuous cycle of consuming and releasing energy, the tech industry must adopt sustainable practices and contribute more to a cleaner, greener future.
DeepMind unveils new mathematics technique using AI
The Lead
[1] DeepMind claims AI has aided new discoveries and insights in mathematics
[2] Amazon debuts IoT TwinMaker and FleetWise
[3] Microsoft launches fully managed Azure Load Testing service
The Follow
[1] DeepMind, the AI research laboratory funded by Google’s parent company, Alphabet, today published the results of a collaboration with mathematicians to apply AI toward discovering new insights in areas of mathematics. DeepMind claims that its AI technology helped to uncover a new formula for a previously unsolved conjecture, as well as a connection between different areas of mathematics elucidated by studying the structure of knots.
“At DeepMind, we believe that AI techniques are already sufficient to have a foundational impact in accelerating scientific progress across many different disciplines,” Alex Davies, DeepMind machine learning specialist, said in a statement. “Pure maths is one example of such a discipline, and we hope that [our work] can inspire other researchers to consider the potential for AI as a useful tool in the field.”
What ostensibly sets DeepMind’s work apart is its detection of the existence of patterns in mathematics with supervised learning — and giving insight into these patterns with attribution techniques from AI.
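This is not DeepMind’s actual pipeline, but the pattern it describes can be sketched in a few lines: fit a supervised model to pairs of quantities (synthetic stand-ins here for mathematical invariants), then apply an attribution technique, permutation importance in this sketch, to see which inputs the learned relationship actually depends on.

```python
# Sketch of the supervised-learning-plus-attribution pattern; the data is
# synthetic and stands in for real mathematical invariants.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))  # five candidate invariants
# A hidden relationship using only invariants 1 and 3, plus noise.
y = 3.0 * X[:, 1] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=2000)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Attribution: which invariants drive the learned prediction?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"invariant {i}: importance {score:.3f}")
# Invariants 1 and 3 should dominate, pointing a mathematician at the
# inputs worth examining when forming a conjecture.
```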
In a paper published in the journal Nature, DeepMind describes how it used AI to help discover a new approach to a longstanding conjecture in representation theory. >> Read more.
[2] Yesterday at its re:Invent 2021 conference, Amazon announced the Amazon Web Services (AWS) IoT TwinMaker, a service designed to make it easier for developers to create digital twins of real-time systems like buildings, factories, industrial equipment, and product lines. The company also debuted AWS IoT FleetWise, an offering that makes it ostensibly easier and more cost-effective for automakers to collect, transform, and transfer vehicle data in the cloud in near-real-time.
With IoT TwinMaker, Amazon says that customers can leverage prebuilt connectors to data sources like equipment, sensors, video feeds, and business applications to automatically build knowledge graphs and 3D visualizations. IoT TwinMaker supplies dashboards to help visualize operational states and updates in real time, mapping out the relationships between data sources.
IoT FleetWise enables AWS customers to collect and standardize data across fleets of upwards of millions of vehicles. IoT FleetWise can apply intelligent filtering to extract only what’s needed from connected vehicles to reduce the volume of data being transferred. Moreover, it features tools that allow automakers to perform remote diagnostics, analyze fleet health, prevent safety issues, and improve autonomous driving systems. >> Read more.
[3] Microsoft is rolling out a fully managed load testing service for Azure, helping quality assurance testers and developers optimize their app’s performance and scalability.
Load testing fits into the broader software performance testing and quality assurance sectors, which might include everything from cross-platform web testing to continuous profiling for cutting cloud bills — it’s all about ensuring that an application is robust and optimized for every potential scenario, minimizing outages and downtime for software in production environments.
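As a minimal illustration of what any load test does (standard-library Python, not Azure Load Testing itself), the sketch below fires concurrent requests at an endpoint and reports latency percentiles. The URL is a placeholder and should point at a test environment, never production.

```python
# Sketch only: URL is a placeholder; aim it at a test environment.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"
REQUESTS = 50
CONCURRENCY = 10

def timed_get(_: int) -> float:
    """Issue one GET and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_get, range(REQUESTS)))

print(f"median: {statistics.median(latencies) * 1000:.1f} ms")
print(f"p95:    {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")
```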
But as its name suggests, Azure Load Testing is designed with Azure customers in mind. This includes integrated Azure resource management and billing, and integrations with related products such as Azure Monitor, Microsoft’s monitoring tool for applications, infrastructure, and networks.
“Azure Load Testing is designed from the ground up with a specific focus on Azure customers and delivering Azure-optimized capabilities,” said Mandy Whaley, Microsoft’s partner director of product for Azure dev tools.
Amazon pushes further into the robotics sphere
The Lead
[1] Amazon launches AWS RoboRunner to support robotics apps
[2] NinjaOne expands data backup, security features to thwart ransomware
[3] Starburst launches fully managed cross-cloud analytics
The Follow
[1] At a keynote during its Amazon Web Services (AWS) re:Invent 2021 conference today, Amazon launched AWS IoT RoboRunner, a robotics service designed to make it easier for enterprises to build and deploy apps that enable fleets of robots to work together.
Alongside IoT RoboRunner, Amazon announced the AWS Robotics Startup Accelerator, an incubator program in collaboration with nonprofit MassRobotics to tackle challenges in automation, robotics, and industrial internet of things (IoT) technologies.
The adoption of robotics — and automation more broadly — in enterprises has accelerated as the pandemic prompts digital transformations. Amazon is a heavy investor in robotics itself and hasn’t been shy about its intent to capture a larger part of a robotics software market that is anticipated to be worth over $7.52 billion by 2022.
IoT RoboRunner, currently in preview, builds on the technology already in use at Amazon warehouses for robotics management. It allows AWS customers to connect robots and existing automation software to orchestrate work across operations, combining data from each type of robot in a fleet and standardizing data types like facility, location, and robotic task data in a central repository. >> Read more.
[2] IT monitoring and management software firm NinjaOne today announced an expansion of its data protection and security capabilities to better enable ransomware recovery and prevention.
The Ninja Data Protection offering has added image backup capabilities, a top request from customers, according to Lewis Huynh, chief security officer (CSO) at NinjaOne. Founded in 2013, the company had been known as NinjaRMM until its rebranding last month to reflect the company’s expansion beyond remote monitoring and management (RMM).
This year’s spate of high-profile ransomware attacks, including Colonial Pipeline, JBS Foods, and Kaseya, has raised awareness of this variety of cyberthreat. A recent survey from fraud prevention software firm SpyCloud found 72% of respondents saying their organization had been affected by ransomware in the previous 12 months.
Ransomware is notoriously difficult to prevent because the root cause is often people-related — frequently the result of social engineering and phishing attacks. Data backup and restore capabilities are thus considered critical for risk management, as a way to prevent major business disruption and recover quickly in the wake of a ransomware breach.
NinjaOne furnishes managed services providers and corporate IT teams with a unified view of both RMM and the Ninja Data Protection backup and disaster recovery offering. >> Read more.
[3] Starburst, the commercial entity behind the open source Presto-based SQL query engine Trino, has announced a new fully managed, cross-cloud analytics product that allows companies to query data hosted on any of the “big three’s” infrastructure — without moving the data from its original location.
While many of the big cloud data analytics vendors support the burgeoning multicloud movement by making their products available for each platform, problems remain in terms of making data stored in multiple environments easy to access. Companies still have to find a way to “pool” data from these different silos, be it through moving data to a single cloud or data warehouse, which is not only time-consuming but can also incur so-called “egress” fees for transferring data. And this is what Starburst is now addressing, by extending its fully managed SaaS product to allow its customers to analyze data across the major clouds with a single SQL query.
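What a single query across clouds can look like is sketched below using the open source Trino engine that Starburst builds on, via its Python client. The coordinator address, catalogs, schemas and table names are all hypothetical; in a real setup each catalog would be configured to point at storage in a different cloud.

```python
# Sketch only: host, catalog, schema and table names are hypothetical.
# Assumes the `trino` Python client package is installed.
import trino

conn = trino.dbapi.connect(
    host="trino.example.com",  # placeholder coordinator address
    port=8080,
    user="analyst",
)
cur = conn.cursor()
# One query spanning two catalogs: e.g. `aws_lake` backed by S3 and
# `gcp_lake` backed by GCS, joined without moving either dataset.
cur.execute("""
    SELECT o.region, count(*) AS orders, sum(c.lifetime_value) AS ltv
    FROM aws_lake.sales.orders AS o
    JOIN gcp_lake.crm.customers AS c ON o.customer_id = c.id
    GROUP BY o.region
""")
for row in cur.fetchall():
    print(row)
```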
Starburst Galaxy was originally available only for AWS, but to support Starburst’s push into cross-cloud analytics, the company is now extending support to Microsoft’s Azure and Google Cloud Platform (GCP). It’s worth noting that Starburst had previously introduced a cross-cloud analytics product called Gateway for the self-managed incarnation. Now Starburst is bringing this same functionality to its fully managed service, where it handles all the infrastructure and the customer doesn’t have to worry about what’s going on under the hood.
Hold on, we’re entering the age of the AI-accelerator.
Technology accelerators speed things up, obviously. But then so does extra processing power, additional server space and drinking too much coffee. Here we are talking about the more considered use of IT accelerators in the new world of AI and ML.
Technology accelerators speed things up, obviously. But then so does extra processing power, additional server space, GPU-charged super boosting and drinking too much coffee.
Simply pumping more juice into an enterprise IT system is not regarded as an accelerator per se. We are talking about the more considered use of IT accelerators in the new and far more algorithmically advanced world of Artificial Intelligence (AI) and Machine Learning (ML).
Industry accelerators have actually been around for most of our post-millennial existence. SAP has championed their use across its various platform guises as a means of getting customers running with live production systems faster. Through templates and pre-architected application and data services design, customers can start with what is clearly rather more than a blank first sheet of paper.
Sometimes run with obfuscated and anonymised datasets at the system-test stage, accelerators can get organisations to market faster, but only if used prudently, not as some sort of blanket deployment panacea.
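As a minimal sketch of that anonymisation step, the snippet below replaces direct identifiers with deterministic one-way pseudonyms before data reaches a test system. The field names and salt handling are illustrative only; production pseudonymisation needs proper key management.

```python
# Sketch only: field names and salt handling are illustrative.
import hashlib

SALT = b"rotate-me-per-environment"  # keep out of source control in practice

def pseudonymise(value: str) -> str:
    """Deterministic one-way token, so joins across tables still line up."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

record = {"customer_id": "C-10293", "email": "jane@example.com", "basket": 42.50}
safe_record = {
    "customer_id": pseudonymise(record["customer_id"]),
    "email": pseudonymise(record["email"]),
    "basket": record["basket"],  # non-identifying fields pass through
}
print(safe_record)
```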
The weight of a thousand clouds
Among the firms now tabling accelerator-flavoured enrichment is Accenture. As part of an extended relationship with AWS, the IT services and consulting gurus at Accenture claim to have worked on ‘thousands of cloud projects’ in recent times. This, the company says, gives it the ability to understand the ‘human and business dimensions’ of cloud change at scale with greater speed and certainty.
This human-business duality is an (arguably) refreshing way of expressing cloud computing deployment challenges. In live production environments we know that all too many of them suffer from poor integration, clunky alignment and misconfiguration headaches, the latter in particular being one of the problems that key players like Qualys are working to address through the use of Infrastructure-as-Code technologies.?
Over the next five years, Accenture will develop a range of new accelerators to address the biggest challenges in cloud migration, with the goal of enabling AWS innovations to be adopted up to 50% faster, as is promised at this stage.
To date, Accenture and AWS have co-created nearly 40 solutions for 16 industries with proven use-case relevance in order to ‘jumpstart’ client value. In today’s hyper-competitive era of compressed transformation, organisations must implement change under tremendous time pressure.
Generali Vitality, a subsidiary of one of the largest global insurance and asset management providers, needed its wellness business group to scale quickly to reach new customers. Working with Accenture and AWS to tap into cloud-native technology, Generali Vitality is now able to roll out new features at the push of a button to continuously improve its products and engage customers.
So what really makes an accelerator work at the software code level? How much do accelerator technologies depend on AI & ML algorithmic strength? When is an IT accelerator not an IT accelerator… and where do accelerators go next?
More cores does not equal smart acceleration
Simon Ponsford, CEO at cloud workload management software company YellowDog, says that his firm has witnessed the growth of data analytics in the shadow of ever-increasing iterations of high-performance servers, each faster than the last.
“The number of cores increases and network throughput is ever improving. Nevertheless, enterprise systems are typically used for running the same applications as they were 20 years ago, only faster... and with more data. This works for some processes, but in many cases, once results data has been generated it requires humans to interpret it,” he said.
At YellowDog, Ponsford and team say they have come across many organisations that accelerate processes only to find it then takes their experts days, weeks or even months to interpret the results. This is where ML and AI can play a key part: learning from the data and giving viable results in minutes to truly accelerate discovery and provide a shortcut to so-called ‘great business outcomes’, as the marketing people like to say.
Chris Pope, an enterprise applications and data platform specialist with a history of working for firms including ServiceNow, Cognizant and the NYSE, agrees with the ‘acceleration does not equal intelligence’ sentiment. He says that software accelerators are only appropriately and effectively deployed if they deliver a distinct benefit to the business or solve a clearly identified use problem.
“The practical aspects of software accelerator implementation need some care. Should we be asking ourselves whether humans also need to accelerate if we are being given faster results and insights – and, if so, at what point do we accept that we need to hand over some of that decision making to machine intelligence? There are questions to be asked here,” said Pope.
It’s a point well made. As we start to apply more of these super-charged, pre-templated, auto-tuned accelerator controls to the coalface of business, we should be looking introspectively at the engineering DNA we are feeding into these self-learning engines. Having seen a wide variety of customer use cases run in every industry vertical imaginable, Pope offers some cautionary advice and questions whether badly built systems will start to look for problems that aren’t actual business problems.
“Crafting human ingenuity, empathy and problem-solving scope into the new software constructs being built to accelerate our future operations is not a one-click task. The old adage of ‘trust and verify’ still applies if we are to create acceleration controls that really work… especially as we look 10 years into the future and start to accelerate our accelerators,” said Pope.
Our accelerated future
We may not quite get to ‘push of a button’ simplicity in every deployment scenario, but the drive to apply IT accelerators to the coalface of business is on.
Undeniably, the advancements in algorithmic logic and power are a big part of the way these technologies can now be applied. We are also at a far more advanced stage in terms of our understanding of the ‘shape’ of data, so much so that so-called ‘data exchange’ platforms now exist where a business can trade its anonymised datasets with other firms inside its business vertical and specialism.
Like any physical automobile accelerator, nobody should be driving with a heavy right foot that stamps too hard on the gas, so prudence and patience are still needed for enterprises that wish to accelerate inside their IT stacks and drive at speed.
We’re building IT accelerators, but we’re not necessarily building a particularly sophisticated braking system or any formalised version of a reverse gear, so please look both ways.