Who's afraid of the big bad cloud: demystifying cloud computing

Cloud computing has reached a tipping point in enterprise adoption, with the global market surpassing $200 billion and nearly 60% of enterprises deploying some form of cloud solution. The cloud offers scalability and cost savings that have driven exponential rates of adoption in recent years. Some studies anticipate that as much as 83% of all enterprise workloads will be in the cloud by 2020.

The rapidly increasing volume, variety, and velocity of data facing legal practitioners, coupled with the at-times debilitating cost of keeping pace with infrastructure demands, make the ediscovery industry ripe to make the leap to the cloud. The time to adapt is now, and those who lag will be stuck playing catch-up.

So… what exactly is the cloud? 

Cloud computing, despite all the hype and confusion, is at its core a very simple resource-sharing model. The cloud is simply on-demand computing resources, generally storage and computational power.

Cloud providers offer enterprises a pay-as-you-go, consumption-based model. This eliminates the burden of each individual enterprise building its own data center or infrastructure on premises, staffing it with system administrators and other expensive IT professionals, and worrying about the ever-looming threat of cyber attacks. Basically, a well-executed cloud-based IT program allows enterprises to focus more on the business of doing business and less on the headaches of IT management.
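To make the on-demand, pay-as-you-go idea concrete, here is a minimal sketch of provisioning cloud storage with a few API calls instead of racks of owned hardware. It is an illustration, not something drawn from this article: it assumes an AWS S3 bucket managed through the boto3 Python SDK with credentials already configured, and the bucket and file names are purely hypothetical.

# A minimal sketch of on-demand cloud storage (assumption: AWS S3 via the
# boto3 SDK, with credentials already configured in the environment; the
# bucket and key names are illustrative only).
import boto3

s3 = boto3.client("s3")

# Provision storage on demand -- no data center to build, no hardware to buy.
s3.create_bucket(Bucket="example-ediscovery-archive")

# Store a document; billing covers only the bytes stored and requests made.
s3.put_object(
    Bucket="example-ediscovery-archive",
    Key="matters/example-matter/custodian-01/email-0001.eml",
    Body=b"Example document contents",
)

# When the matter closes, remove the object and the bucket, and the cost stops.
s3.delete_object(
    Bucket="example-ediscovery-archive",
    Key="matters/example-matter/custodian-01/email-0001.eml",
)
s3.delete_bucket(Bucket="example-ediscovery-archive")

The same consumption model applies to compute: instances can be spun up for a processing or review spike and shut down when it passes, rather than sizing an on-premises data center for the peak.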

Birth of the cloud

Although there is quite a bit of buzz around cloud computing today, the underlying concept dates back to the mid-1950s.

The earliest versions of computing relied on something called a mainframe, an inconceivably expensive (millions of dollars in the 1950s) and massive central computer. Since it was not a scalable model to have a mainframe for every employee, or even for every corporation, multiple users accessed this central computer through dumb terminals. These dumb terminals possessed no standalone processing power; their sole purpose was to provide access back to the mainframe.

The UNIVAC I Mainframe from the 1950s

Eventually, the cost- and space-saving benefits of virtual memory and the advent of personal computers displaced the mainframe model. But this was not the end of the shared-resource approach to computation.

J.C.R. Licklider conceptualized a global, decentralized computer network he dubbed an intergalactic computer network. Licklider's big vision of interconnected communication and interaction with computers drove his development of the Advanced Research Projects Agency Network (ARPANET), the predecessor to the internet.

Despite the limited abilities of its nascent technology, ARPANET allowed globally disparate researchers to connect to the limited number of supercomputer mainframes and to each other. The system had the additional benefit of increased security, because no single node, if compromised, would destabilize the whole ecosystem.

ARPANET started with four nodes and ballooned to hundreds over the decades of its development. But with the advent and adoption of the internet, it faded into obscurity and was decommissioned in the '90s.
