Harnessing the power of data through digital transformation - a roundtable discussion among IT peers in London

I attended a roundtable event in London a couple of months ago. What happens when you start a discussion around digital transformation with a Big Data analytics audience? If you are interested, read the following write-up, which was published by our media partner National Technology News in a newsletter after the event.

Artificial intelligence and the cloud are seemingly everywhere, but how can they be practically used to drive enterprises forward?

As the volume of data grows exponentially, businesses must harness it to develop better products, empower employees with insight and deliver personalised customer service. But regardless of size and sector, those that lack the means to store, process and unlock the value within their data are at risk of falling behind in the race to digital transformation.

Modernised data infrastructure - whether in-house or outsourced - makes use of automation to sidestep errors in manual processes and reduce costs, while channelling the data to where it needs to be.

However, increased data flows can also slow businesses and existing applications down. As a result, the way businesses store their data is crucial to their ability to leverage artificial intelligence (AI) and machine learning. Modernised data centres sit at the heart of this transformation, enabling firms to store more data types from more sources, perform high-level analysis and improve compliance.

This event brought together relevant industry peers to discuss key pain points, best practice and available solutions when it comes to data centre modernisation.

Tom Christensen, chief technology officer for northern EMEA at Hitachi Vantara, began with a presentation looking at some of the top trends impacting operations, as well as the need to rethink AI operations and automation strategy.

The Internet of Things, edge infrastructure, the variety of cloud storage solutions, and of course people’s personal devices all mean company data is spread across several locations at any one time. Christensen commented that while it’s great people can easily access information on the go, this increases the potential for security breaches and non-compliance with regulations.

Moving on to AI, he predicted that humans have a good 25 to 30 years before they’re at real risk of being replaced by robots, but until then, augmented intelligence will become increasingly prevalent. “These technologies can help people work with data faster and easier, freeing them up to do more creative and customer-facing work.”

Cloud is flexible, so is infrastructure

When it comes to storage infrastructure, customers are looking for a flexible and programmable architecture - one that can scale from a few terabytes to many petabytes and allows whichever resource is required, such as connectivity, central processing unit capacity or storage media, to be scaled as needed.

“It requires an architecture where controller nodes are disconnected from the storage medium,” Christensen commented. “So, you can do online data migration, expand with whatever resource you need or simply upgrade components forever without thinking of new technology that may come up around the corner.”

When IT hardware assets are bought today, they depreciate in value over time as they are used. AI operations software is the opposite in nature: it increases in value as an asset over time. The more the software is used, the smarter it gets. “It has reached the tipping point where it will automate and guide you to make the right decision and a whole lot more.”


Christensen then talked about domain-centric AI and domain-agnostic AI in the data centre.

Domain-centric AI: “imagine a storage infrastructure that can change behaviour while it runs in production – do real-time data reduction and change it to a post-process job if internal resources are required elsewhere, or do root cause analysis with guided recommendations.” These kinds of internal AI capabilities guide administrators to do the right thing.
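To make the idea concrete, here is a minimal sketch of the kind of policy logic described above: an array that defers data reduction to a post-process job when its controller is under pressure. The class, thresholds and field names are illustrative assumptions only, not Hitachi’s actual implementation.

```python
# Hypothetical sketch of a domain-centric AI policy: switch data reduction
# from inline (real time) to post-process when the controller is under
# pressure, then switch back once resources free up. Names and thresholds
# are illustrative only.

from dataclasses import dataclass

@dataclass
class ArrayTelemetry:
    controller_cpu_pct: float   # current controller CPU utilisation
    write_latency_ms: float     # observed host write latency

def choose_reduction_mode(t: ArrayTelemetry,
                          cpu_limit: float = 80.0,
                          latency_limit_ms: float = 2.0) -> str:
    """Return 'inline' or 'post-process' based on current resource pressure."""
    if t.controller_cpu_pct > cpu_limit or t.write_latency_ms > latency_limit_ms:
        # Internal resources are needed elsewhere: defer dedupe/compression
        # to a background (post-process) job.
        return "post-process"
    return "inline"

if __name__ == "__main__":
    busy = ArrayTelemetry(controller_cpu_pct=92.0, write_latency_ms=3.5)
    quiet = ArrayTelemetry(controller_cpu_pct=35.0, write_latency_ms=0.8)
    print(choose_reduction_mode(busy))   # post-process
    print(choose_reduction_mode(quiet))  # inline
```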

Automation also extends to entire data centres, where human-developed scripts hold back the ability to scale. Christensen said automation needs to be developed to the level where the entire data centre can be automated.

Domain-agnostic AI: “is where we combine native AI-information from different vendors in a holistic approach, like valuable insights from hypervisors, servers, network and storage systems into a common self-service portal for automation,” he explained. “So, when you create a virtual machine, you might be informed that a performance bottleneck is coming up next Wednesday between 1pm and 2pm on the selected compute node.”

This is possible because Hitachi's AI operations software has learned your workload behaviour in the data centre and can therefore guide you in the right direction. As time goes by, it will learn more and more about your data centre and thereby add more value to your day-to-day operations. “At the end of the day, we are human, and we tend to forget or make mistakes, but machines do not.”
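As a rough illustration of the domain-agnostic idea - pooling utilisation data from several layers and warning about a future hot spot - here is a simplified sketch. A naive linear trend stands in for whatever models a real AIOps platform would use; all names, numbers and thresholds are assumptions for illustration.

```python
# Illustrative sketch only: pooling utilisation metrics from hypervisor,
# server, network and storage layers into one view and flagging the hour
# in which a component is forecast to exceed a utilisation threshold.

from statistics import mean

def forecast_next_hours(history: list[float], hours_ahead: int) -> list[float]:
    """Naive linear extrapolation of hourly utilisation samples."""
    if len(history) < 2:
        return [history[-1]] * hours_ahead
    slope = mean(history[i + 1] - history[i] for i in range(len(history) - 1))
    return [history[-1] + slope * (h + 1) for h in range(hours_ahead)]

def predicted_bottlenecks(telemetry: dict[str, list[float]],
                          threshold: float = 90.0,
                          hours_ahead: int = 24) -> list[tuple[str, int]]:
    """Return (layer, hours-from-now) pairs where a threshold breach is forecast."""
    alerts = []
    for layer, history in telemetry.items():
        for hour, value in enumerate(forecast_next_hours(history, hours_ahead), start=1):
            if value >= threshold:
                alerts.append((layer, hour))
                break
    return alerts

if __name__ == "__main__":
    telemetry = {
        "compute-node-07 CPU %": [60, 64, 69, 73, 78],
        "storage-pool-A busy %": [40, 41, 40, 42, 41],
        "core-switch-1 port %": [55, 56, 58, 57, 59],
    }
    for layer, hour in predicted_bottlenecks(telemetry):
        print(f"Possible bottleneck on {layer} in ~{hour} hour(s)")
```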

This led to a discussion on ‘hyperconverged’ solutions, which can drive changes in storage architecture, with hybrid cloud, Big Data, machine learning workloads and ‘one click’ management among the key motivations.

The quantum future

There was a question from one person around the table about how quantum computing will affect the industry. Christensen admitted that it is a hot topic. “It will do great in a complex world of billions of possibilities and find the one answer, in close to real time,” he stated, suggesting that it is going to solve problems in biomimetics, optimisation, material science, chemistry, personalised medicine, etc.

However, it will not come to fruition before 2028. A simplified layman’s description would be: if one computer can search or read through one book in one process, quantum computing can search or read all available books in a parallel process.

Christensen pointed out that machine learning is good at coming up with recommendations based on what happened in the past. “But if we have a scenario where we need to deliver 100 packages with 15 trucks in a couple of hours, quantum computing can easily calculate the optimal delivery route for 15 trucks – that is not easy with the current computer power we have today or even the human mind.” He continued that this was why Hitachi is looking at quantum computing, with research already underway on possible solutions.
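A quick back-of-the-envelope calculation (my own illustration, not from the talk) shows why the naive search space is out of reach for classical brute force: just assigning 100 packages to 15 trucks, before even ordering the stops on each route, already allows 15^100 combinations.

```python
# Back-of-the-envelope illustration: the size of the naive search space for
# assigning 100 packages to 15 trucks, ignoring the ordering of stops on
# each route, which makes the real problem even larger.

packages, trucks = 100, 15

assignments = trucks ** packages          # every package can go on any truck
print(f"Possible assignments: {assignments:.3e}")   # ~4.1e+117

# For comparison: checking a billion assignments per second would still take
seconds_per_year = 60 * 60 * 24 * 365
years = assignments / 1e9 / seconds_per_year
print(f"Years to enumerate at 1e9/s: {years:.3e}")  # ~1.3e+101 years
```

In practice, classical solvers fall back on heuristics that find good-enough routes; the point made at the table was that quantum approaches promise to explore such spaces far more directly.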


His colleague Jason Beckett, solutions consultant manager for the UK and Ireland, took over and began by explaining the SEAM (Store, Enrich, Activate and Monetize) methodology - a four-stage ‘stairway to value’ concept for managing data. He quoted a Gartner study suggesting that by 2025 more than 90 per cent of enterprises will have an automation architect, up from less than 20 per cent today, and referred to Hitachi Vantara chief technology officer Bill Schmarzo’s points about automating processes to make data usable. “This frees up data scientists to work on more valuable tasks.”
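For illustration, here is a hypothetical sketch of what the first two SEAM steps, Store and Enrich, might look like in code: a stored object is wrapped with business metadata so the later Activate and Monetize stages can find and use it intelligently. The field names and tiers are my assumptions, not part of the methodology itself.

```python
# Illustrative sketch of 'Store' and 'Enrich': a raw stored object is
# wrapped with descriptive metadata so later integration and analytics
# can locate and use it intelligently. Fields and tiers are assumptions.

from dataclasses import dataclass, field

@dataclass
class DataAsset:
    uri: str                      # where the object lives (on-prem, public or hybrid cloud)
    tier: str                     # cost-appropriate storage tier, e.g. "archive" or "hot"
    metadata: dict = field(default_factory=dict)

def enrich(asset: DataAsset, **context) -> DataAsset:
    """Attach business context (owner, sensitivity, retention, ...) to a stored asset."""
    asset.metadata.update(context)
    return asset

def find(assets: list[DataAsset], **criteria) -> list[DataAsset]:
    """Simple metadata query: the later 'integrate and orchestrate' work builds on this."""
    return [a for a in assets
            if all(a.metadata.get(k) == v for k, v in criteria.items())]

if __name__ == "__main__":
    catalogue = [
        enrich(DataAsset("s3://bucket/tx-2023.parquet", "hot"),
               owner="payments", sensitivity="pii", retention_years=7),
        enrich(DataAsset("file:///archive/logs-2015.tar", "archive"),
               owner="it-ops", sensitivity="internal", retention_years=1),
    ]
    for a in find(catalogue, sensitivity="pii"):
        print(a.uri, a.metadata)
```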

There was another question about whether all data was valuable, given how nothing seems to get deleted anymore due to regulatory paranoia.

What to delete, what to keep?

Beckett responded that, when working with a tier one organisation in the US to look into its data storage challenges, his team found that much of the problem came down to it simply storing an inordinate number of JPEG images of Saddam Hussein and Kylie Minogue.

His practical suggestion was to delete what is no longer necessary, but to use deletion certificates, which ensure there is a clear audit trail.
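For illustration, a deletion certificate could be as simple as a small, hashed record of what was removed, when, by whom and under which policy, kept long after the data itself is gone. The sketch below is hypothetical; the field names are not any specific product’s schema.

```python
# Minimal, hypothetical sketch of a deletion certificate: a tamper-evident
# record of what was deleted, when, by whom and under which policy, retained
# for the audit trail after the data itself is removed.

import hashlib
import json
from datetime import datetime, timezone

def deletion_certificate(object_id: str, object_sha256: str,
                         deleted_by: str, policy: str) -> dict:
    record = {
        "object_id": object_id,
        "object_sha256": object_sha256,     # hash of the content, not the content
        "deleted_by": deleted_by,
        "policy": policy,
        "deleted_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hash the record itself so later tampering with the certificate is detectable.
    record["certificate_sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

if __name__ == "__main__":
    cert = deletion_certificate(
        object_id="archive/2009/marketing/img_0451.jpg",
        object_sha256="ab12...",            # hash recorded before deletion
        deleted_by="records.manager@example.com",
        policy="retention-policy-7y",
    )
    print(json.dumps(cert, indent=2))
```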

Someone else pointed out that when data deletion was raised in the run-up to the General Data Protection Regulation (GDPR), no senior executives at their firm wanted to make a decision – with Subject Access Requests seeming to be the biggest issue for risk and legal teams.

Beckett explained that data should first be stored and protected at the appropriate cost across on-premise, public and hybrid cloud; it should then be abstracted, with metadata used to enhance context and enable more intelligent use; the next move is to integrate and orchestrate data assets, leveraging analytics to generate insight; and finally those insights should be turned into business value.

He then handed the baton to his colleague, technology consultant Cliff Prescott, who talked through storage infrastructure refresh work he had undertaken with a range of global banks, technology platforms and stock exchange clients. Such an update would aim to move infrastructure onto a digital economics or utility consumption model, with potential business outcomes including total cost of ownership reductions and data centre modernisation.

Among the advantages he gave for using something like Hitachi’s Flex Consumption model were removing risk, aligning costs to usage and enabling better prediction of future costs.

Prescott mentioned specific work undertaken with a financial services provider, which adopted a storage-as-a-service model with full remote managed service that improved agility and led to a £13 million saving over five years.
