- On-demand self-service: Customers get computing resources on demand, without waiting on the provider. Cloud-computing customers use an automated interface to provision the processing power, storage, and network they need, with no human intervention required.
- Access over network: Users can access these resources over the network (internet).
- Big resource pool: The cloud provider maintains a large pool of resources and allocates them to customers out of that pool. That lets the provider achieve economies of scale by buying in bulk, and customers don’t have to know or care about the exact physical location of those resources. The idea behind resource pooling is that, through the modern scalable systems behind cloud computing and software as a service (SaaS), providers can create a sense of effectively unlimited, immediately available resources by managing adjustments across the whole pool rather than per machine. This lets customers change their service levels at will without being bound by the limits of any single physical or virtual resource.
- Elastic resources: Customers who need more resources can get them rapidly; when they need less, they can scale down. Elastic resources are applications and infrastructure that can be summoned on demand when traffic or workloads spike. This supply-and-demand cycle is the economic underpinning of the cloud ecosystem. A simple example: a business needs only 2 servers to run its website, but sees a holiday traffic spike; it can allocate additional elastic resources by increasing its VMs from 2 to 4 to handle the holiday load. Once that traffic dies down, it can deprovision back to 2. That is an elastic resource (a sketch of this scaling decision follows this list).
- Pay for what you use: Customers pay only for what they use or reserve, as they go. If they stop using resources, they stop paying (a small billing example also follows this list).
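
As a rough illustration of the elasticity point above, here is a minimal sketch of the scaling decision for the 2-to-4-server example. It uses no real cloud SDK; the constants and the per-server capacity figure are made-up assumptions, and a real deployment would feed the result into the provider's autoscaling API.

```python
# Minimal sketch of an elastic scaling decision (no real cloud SDK used).
# Capacity and server counts below are illustrative assumptions only.

BASELINE_SERVERS = 2       # normal load: 2 VMs run the website
PEAK_SERVERS = 4           # holiday spike budget: scale out to at most 4 VMs
REQUESTS_PER_SERVER = 500  # assumed capacity of one VM (requests/sec)


def desired_server_count(current_requests_per_sec: int) -> int:
    """Pick how many VMs are needed for the current traffic level."""
    needed = -(-current_requests_per_sec // REQUESTS_PER_SERVER)  # ceiling division
    # Never drop below the baseline, never exceed the peak budget.
    return max(BASELINE_SERVERS, min(needed, PEAK_SERVERS))


if __name__ == "__main__":
    print(desired_server_count(800))   # normal day -> 2
    print(desired_server_count(1900))  # holiday spike -> 4
    print(desired_server_count(600))   # traffic dies down -> back to 2
```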
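
And for the pay-as-you-go point, the snippet below shows the basic arithmetic: cost is the hours actually consumed times an hourly rate. The rate is a hypothetical example, not a real provider's price.

```python
# Minimal sketch of pay-as-you-go billing: bill only the hours actually used.
HOURLY_RATE_PER_VM = 0.05  # hypothetical $/hour for one VM, not a real price


def monthly_cost(vm_hours_used: float) -> float:
    """Charge only for the VM-hours that were actually consumed."""
    return vm_hours_used * HOURLY_RATE_PER_VM


# 2 VMs running all month (~730 h each) plus 2 extra VMs for a 72-hour spike:
always_on = 2 * 730
spike = 2 * 72
print(f"${monthly_cost(always_on + spike):.2f}")  # -> $80.20
```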