Introduction
In today's fast-paced digital landscape, businesses face the challenge of efficiently handling fluctuating workloads and user demands. Elastic scaling has emerged as a powerful solution to this problem, allowing organizations to dynamically adjust their computing resources based on real-time requirements. This article delves into the concept of elastic scaling, exploring its benefits, use cases, case studies, metrics, implementation roadmap, return on investment, challenges, and future outlook.
What is Elastic Scaling?
Elastic scaling, also known as autoscaling, is the ability of a system to automatically adapt its capacity by provisioning or deprovisioning resources in response to changes in demand. It enables applications to scale up or down based on predefined metrics and thresholds, ensuring optimal performance and cost-efficiency.
The core concept behind elastic scaling is the dynamic allocation of resources. Instead of statically provisioning a fixed amount of computing power, elastic scaling allows systems to intelligently adjust their resources based on workload variations. This flexibility ensures that applications have the necessary capacity to handle peak loads while minimizing resource wastage during periods of low demand.
Benefits of Elastic Scaling
Elastic scaling offers numerous benefits to organizations, making it an attractive solution for managing modern applications:
- Cost Optimization: By automatically scaling resources based on demand, elastic scaling helps organizations optimize their infrastructure costs. Instead of overprovisioning resources to handle peak loads, which can lead to underutilization during off-peak periods, elastic scaling ensures that resources are allocated only when needed. This pay-per-use model allows businesses to align their IT expenditure with actual usage, resulting in significant cost savings.
- Improved Performance: Elastic scaling enables applications to maintain optimal performance even during periods of high traffic or increased workload. By automatically provisioning additional resources when needed, elastic scaling prevents performance degradation and ensures a seamless user experience. This is particularly crucial for applications that experience sudden spikes in traffic, such as e-commerce websites during sales events or news portals during breaking news.
- Scalability and Flexibility: Elastic scaling provides applications with the ability to scale horizontally and vertically based on demand. Horizontal scaling involves adding or removing instances of an application, while vertical scaling involves adjusting the resources allocated to individual instances. This flexibility allows organizations to adapt to changing business requirements, whether it's accommodating growth or handling seasonal variations in workload.
- High Availability: Elastic scaling contributes to the high availability of applications by automatically replacing failed instances and distributing traffic across healthy instances. By dynamically adjusting the number of instances based on demand, elastic scaling ensures that there are always sufficient resources available to handle user requests, even in the face of hardware failures or network disruptions.
- Automated Operations: Elastic scaling automates the process of resource allocation, eliminating the need for manual intervention. By defining scaling policies and metrics, organizations can let the system automatically make scaling decisions based on real-time data. This automation reduces operational overhead, minimizes human error, and allows IT teams to focus on higher-value tasks.
Use Cases
Elastic scaling applies across a wide range of industries and use cases. Let's explore some common scenarios where elastic scaling proves beneficial:
- E-commerce: Online retail platforms experience significant variations in traffic, especially during sales events, holidays, or product launches. Elastic scaling allows e-commerce applications to handle sudden surges in demand by automatically provisioning additional resources. This ensures that the website remains responsive and provides a smooth shopping experience for customers, even during peak periods.
- Web Applications: Web applications, such as content management systems, social media platforms, or online services, can benefit from elastic scaling. As user traffic fluctuates throughout the day or based on specific events, elastic scaling dynamically adjusts the resources allocated to the application. This helps maintain optimal performance, reduces latency, and prevents service disruptions due to resource constraints.
- Big Data Analytics: Big data workloads often involve processing large volumes of data in batch or real-time scenarios. Elastic scaling enables big data platforms to scale horizontally by adding or removing nodes in a cluster based on the processing requirements. This allows organizations to efficiently handle data-intensive tasks, such as data ingestion, transformation, and analysis, while optimizing resource utilization.
- Media Streaming: Media streaming services, such as video-on-demand platforms or live streaming applications, experience variable demand based on the popularity of content and viewer engagement. Elastic scaling helps these services accommodate sudden spikes in viewership by automatically scaling resources to handle increased traffic. This ensures a smooth streaming experience for users and prevents buffering or interruptions.
- Gaming: Online gaming platforms require low latency and high responsiveness to provide an immersive gaming experience. Elastic scaling allows gaming servers to dynamically adjust their capacity based on the number of active players. During peak gaming hours or when new game releases attract a large number of players, elastic scaling ensures that the gaming infrastructure can handle the increased load without compromising performance.
- Internet of Things (IoT): IoT deployments involve a large number of connected devices generating and transmitting data in real-time. Elastic scaling enables IoT platforms to handle the massive influx of data by automatically scaling the backend infrastructure. This allows for efficient data ingestion, processing, and storage, ensuring that the IoT system remains responsive and can handle the growing volume of data.
Case Studies
To further illustrate the effectiveness of elastic scaling, let's examine a few real-world case studies:
- Netflix: Netflix, the popular video streaming service, relies heavily on elastic scaling to handle its massive user base and variable workload. Netflix uses Amazon Web Services (AWS) Auto Scaling to automatically adjust the number of server instances based on the incoming traffic. During peak viewing hours or when new content is released, Netflix's infrastructure seamlessly scales up to accommodate the increased demand, ensuring a smooth streaming experience for millions of users worldwide.
- Airbnb: Airbnb, the online marketplace for short-term rentals, experiences significant traffic fluctuations based on seasonal travel patterns and popular events. Airbnb leverages elastic scaling to dynamically adjust its infrastructure capacity. By utilizing AWS Auto Scaling, Airbnb can automatically provision additional server instances during high-demand periods and scale down during off-peak times. This allows Airbnb to maintain optimal performance, handle bookings efficiently, and provide a seamless user experience.
- Expedia: Expedia, a leading online travel booking platform, faces the challenge of handling a large volume of search queries and booking transactions. Expedia implemented elastic scaling using Kubernetes, an open-source container orchestration platform. By leveraging Kubernetes' autoscaling capabilities, Expedia can automatically scale its microservices based on CPU utilization and request metrics. This ensures that the platform can handle sudden spikes in traffic, maintain low latency, and provide a reliable booking experience for travelers.
- HubSpot: HubSpot, a cloud-based marketing and sales platform, relies on elastic scaling to handle its growing user base and varying workload. HubSpot utilizes Google Cloud Platform's autoscaling features to dynamically adjust its infrastructure based on demand. By setting up scaling policies and defining target metrics, HubSpot can automatically scale its application servers, databases, and other resources. This enables HubSpot to handle increased traffic during peak usage periods and optimize costs during low-demand times.
Metrics for Elastic Scaling
To effectively implement and monitor elastic scaling, organizations need to define and track relevant metrics. These metrics help determine when to trigger scaling actions and assess the performance of the scaling mechanism. Some commonly used metrics for elastic scaling include:
- CPU Utilization: CPU utilization is a key metric that indicates the percentage of CPU resources being used by an application. By monitoring CPU utilization, organizations can set thresholds to trigger scaling actions. For example, if the average CPU utilization across instances exceeds a certain threshold (e.g., 80%), the system can automatically provision additional instances to handle the increased workload.
- Memory Utilization: Memory utilization measures the amount of memory being consumed by an application. Similar to CPU utilization, monitoring memory utilization helps identify when additional resources are needed. If the memory utilization consistently reaches a high threshold, elastic scaling can be triggered to allocate more memory to the application instances.
- Request Rate: The request rate represents the number of incoming requests per second that an application receives. Monitoring the request rate helps determine the load on the system and can be used as a scaling metric. If the request rate exceeds a predefined threshold, elastic scaling can be triggered to add more instances to handle the increased traffic.
- Response Time: Response time measures the time taken by an application to process and respond to user requests. It is an important metric for assessing the performance and user experience of an application. If the response time consistently exceeds acceptable levels, elastic scaling can be used to allocate more resources or distribute the load across additional instances to improve responsiveness.
- Queue Length: In some scenarios, applications may rely on message queues to handle asynchronous processing. Monitoring the queue length helps identify when the system is struggling to process tasks promptly. If the queue length grows beyond a certain threshold, elastic scaling can be triggered to add more worker instances to process the backlog of tasks.
- Network Bandwidth: Network bandwidth measures the amount of data transferred to and from an application. Monitoring network bandwidth helps identify when the system is reaching its capacity limits. If the network bandwidth consistently reaches a high threshold, elastic scaling can be used to distribute the load across multiple instances or allocate more network resources.
By defining and monitoring these metrics, organizations can establish scaling policies that automatically trigger scaling actions based on predefined thresholds. These metrics provide valuable insights into the performance and resource utilization of the application, enabling proactive scaling decisions to maintain optimal performance and user experience.
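To make the threshold logic concrete, here is a minimal sketch of the target-tracking approach used by many autoscalers (including Kubernetes' Horizontal Pod Autoscaler): the fleet is resized so that the per-instance metric moves toward its target. The metric values, targets, and bounds below are illustrative assumptions, not recommendations:

```python
import math

def desired_instances(current_instances: int,
                      observed_metric: float,
                      target_metric: float,
                      min_instances: int = 1,
                      max_instances: int = 20) -> int:
    """Target-tracking scaling: size the fleet so the per-instance
    metric (e.g. average CPU utilization) moves toward the target."""
    if current_instances == 0:
        return min_instances
    desired = math.ceil(current_instances * observed_metric / target_metric)
    # Clamp to the configured bounds so scaling stays within policy.
    return max(min_instances, min(max_instances, desired))

# Example: 4 instances averaging 90% CPU against an 80% target
# suggests scaling out to 5 instances.
print(desired_instances(4, observed_metric=90.0, target_metric=80.0))  # 5
```

The same formula works for any of the metrics above (request rate per instance, queue length per worker, and so on) by substituting the observed and target values.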
Roadmap for Implementing Elastic Scaling
Implementing elastic scaling requires careful planning and execution. Here's a roadmap that organizations can follow to successfully adopt elastic scaling:
- Assess Scalability Requirements: Start by analyzing the scalability requirements of your application. Identify the key performance indicators (KPIs) and determine the expected workload patterns. Consider factors such as peak traffic periods, seasonal variations, and future growth projections. This assessment will help define the scaling objectives and metrics.
- Architect for Scalability: Design your application architecture with scalability in mind. Break down monolithic applications into smaller, loosely coupled components or microservices. Ensure that the application components can operate independently and scale horizontally. Use stateless designs whenever possible to facilitate easier scaling.
- Choose the Right Technology Stack: Select a technology stack that supports elastic scaling. Consider using cloud platforms like Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform, which provide built-in autoscaling capabilities. Evaluate container orchestration platforms like Kubernetes or Docker Swarm for managing and scaling containerized applications.
- Define Scaling Policies: Establish clear scaling policies that define when and how to scale resources. Determine the metrics and thresholds that will trigger scaling actions. Consider factors such as CPU utilization, memory usage, request rate, and response time. Define the minimum and maximum number of instances and the scaling increment or decrement steps.
- Implement Monitoring and Alerting: Set up comprehensive monitoring and alerting systems to track the performance and resource utilization of your application. Use monitoring tools like Prometheus, Grafana, or cloud-native monitoring solutions to collect and visualize metrics. Configure alerts to notify the team when scaling thresholds are breached or when anomalies are detected.
- Test and Validate: Thoroughly test and validate the elastic scaling implementation. Conduct load testing to simulate various workload scenarios and observe how the system scales in response to increased demand. Verify that the scaling policies work as expected and that the application maintains its performance and stability during scaling events.
- Optimize and Fine-tune: Continuously monitor and analyze the performance of the elastic scaling implementation. Fine-tune the scaling policies based on real-world observations and feedback. Optimize the scaling thresholds and resources allocated to achieve the desired balance between performance and cost-efficiency.
- Automate and Integrate: Automate the scaling process as much as possible to minimize manual intervention. Integrate the scaling mechanism with your continuous integration and continuous deployment (CI/CD) pipeline. Ensure that the scaling process is triggered automatically based on predefined policies and metrics.
- Monitor and Iterate: Regularly monitor the effectiveness of the elastic scaling implementation. Analyze the metrics, logs, and performance data to identify areas for improvement. Iterate on the scaling policies and architecture based on the insights gained from monitoring and user feedback.
- Educate and Train: Provide training and education to the development and operations teams on elastic scaling concepts and best practices. Ensure that everyone understands the scaling policies, metrics, and procedures. Foster a culture of collaboration and continuous improvement to optimize the elastic scaling implementation over time.
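The "Define Scaling Policies" step above can be sketched as a small data structure that captures thresholds, bounds, step size, and a cooldown period. All names and numbers here are illustrative assumptions rather than any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    scale_out_threshold: float = 80.0   # e.g. average CPU % to add capacity
    scale_in_threshold: float = 30.0    # e.g. average CPU % to remove capacity
    min_instances: int = 2
    max_instances: int = 10
    step: int = 1                       # instances added/removed per action
    cooldown_seconds: int = 300         # wait between scaling actions

    def decide(self, current: int, metric: float,
               seconds_since_last_action: float) -> int:
        """Return the new instance count for the observed metric value."""
        if seconds_since_last_action < self.cooldown_seconds:
            return current  # still cooling down; avoids thrashing
        if metric > self.scale_out_threshold:
            return min(self.max_instances, current + self.step)
        if metric < self.scale_in_threshold:
            return max(self.min_instances, current - self.step)
        return current

policy = ScalingPolicy()
print(policy.decide(current=4, metric=85.0, seconds_since_last_action=600))  # 5
```

The cooldown is the detail most often missed: without it, a fleet can oscillate as each scaling action changes the very metric that triggered it.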
By following this roadmap, organizations can effectively implement elastic scaling and reap the benefits of dynamic resource allocation. It is important to remember that elastic scaling is an ongoing process that requires continuous monitoring, optimization, and adaptation to changing business requirements and workload patterns.
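The "Test and Validate" step can be sketched as a minimal load test that fires concurrent requests and reports latency percentiles. The handler here is a local stand-in with a simulated service time; in practice you would point a dedicated load-testing tool at a staging endpoint:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> float:
    """Stand-in for a real request; replace with an HTTP call in practice."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service time
    return time.perf_counter() - start

def load_test(total_requests: int, concurrency: int) -> dict:
    """Run requests concurrently and summarize observed latencies."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: handle_request(),
                                  range(total_requests)))
    latencies.sort()
    return {
        "requests": total_requests,
        "p50_ms": 1000 * statistics.median(latencies),
        "p95_ms": 1000 * latencies[int(0.95 * (len(latencies) - 1))],
    }

print(load_test(total_requests=50, concurrency=10))
```

Running the same test while a scaling event is in progress is a useful check that instances join and leave the fleet without visible latency spikes.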
Return on Investment (ROI)
Implementing elastic scaling can provide significant return on investment (ROI) for organizations. Let's explore the key factors that contribute to the ROI of elastic scaling:
- Cost Optimization: Elastic scaling allows organizations to optimize their infrastructure costs by aligning resource allocation with actual demand. Instead of overprovisioning resources to handle peak loads, elastic scaling dynamically adjusts capacity based on workload variations. This pay-per-use model ensures that organizations only pay for the resources they actually consume, reducing waste and minimizing costs during off-peak periods.
- Improved Resource Utilization: By automatically scaling resources up and down based on demand, elastic scaling maximizes resource utilization. Instead of having idle resources during low-demand periods, elastic scaling allows organizations to efficiently utilize their infrastructure. This leads to better resource utilization and cost savings, as resources are not unnecessarily provisioned and can be allocated to other applications or workloads.
- Increased Agility and Time-to-Market: Elastic scaling enables organizations to quickly respond to changing business requirements and market demands. With the ability to rapidly scale resources, organizations can launch new applications or services faster, without the need for extensive capacity planning or infrastructure provisioning. This agility allows businesses to seize opportunities, innovate faster, and gain a competitive edge in the market.
- Enhanced User Experience: Elastic scaling ensures that applications maintain optimal performance even during periods of high traffic or increased workload. By automatically provisioning additional resources when needed, elastic scaling prevents performance degradation and ensures a seamless user experience. This leads to higher user satisfaction, increased engagement, and improved customer retention, which can translate into increased revenue and brand loyalty.
- Reduced Downtime and Business Losses: Elastic scaling contributes to the high availability and resilience of applications. By automatically replacing failed instances and distributing traffic across healthy instances, elastic scaling minimizes the impact of hardware failures or network disruptions. This reduction in downtime and service interruptions prevents potential business losses and maintains the reliability and reputation of the application.
- Operational Efficiency: Elastic scaling automates the process of resource allocation, eliminating the need for manual intervention and reducing operational overhead. IT teams can focus on higher-value tasks, such as application development, optimization, and innovation, rather than spending time on manual scaling and resource management. This increased operational efficiency leads to cost savings and allows organizations to allocate their resources more effectively.
To calculate the ROI of elastic scaling, organizations need to consider the following factors:
- Cost savings from optimized resource allocation and reduced overprovisioning
- Increased revenue resulting from improved user experience and faster time-to-market
- Reduced downtime and business losses prevented by high availability and resilience
- Operational efficiency gains and cost savings from automated scaling processes
By quantifying these factors and comparing them to the initial investment and ongoing costs of implementing elastic scaling, organizations can determine the ROI. The specific ROI will vary based on the organization's infrastructure setup, application requirements, and business objectives. However, the benefits of elastic scaling, such as cost optimization, agility, and improved user experience, often outweigh the initial investment, leading to a positive ROI over time.
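The ROI arithmetic itself is straightforward once the four factors above are quantified. The figures in this sketch are placeholders chosen purely to show the calculation, not benchmarks:

```python
def elastic_scaling_roi(annual_gains: float, annual_costs: float) -> float:
    """ROI as a percentage: (gains - costs) / costs * 100."""
    return (annual_gains - annual_costs) / annual_costs * 100

# Illustrative placeholder figures only:
gains = 120_000 + 50_000 + 30_000   # cost savings + revenue uplift + downtime avoided
costs = 80_000                      # implementation plus ongoing tooling/operations
print(f"ROI: {elastic_scaling_roi(gains, costs):.1f}%")  # ROI: 150.0%
```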
Challenges and Considerations
While elastic scaling offers numerous benefits, it also presents certain challenges and considerations that organizations need to address:
- Application Compatibility: Not all applications are designed to be easily scalable. Monolithic architectures or tightly coupled components may require significant refactoring or redesign to support elastic scaling. Organizations need to assess their existing applications and determine the effort required to make them scalable. This may involve breaking down monoliths into microservices, implementing stateless designs, and ensuring loose coupling between components.
- Data Consistency and Synchronization: Elastic scaling often involves running multiple instances of an application simultaneously. Ensuring data consistency and synchronization across these instances can be challenging. Organizations need to implement appropriate data management strategies, such as distributed databases, caching mechanisms, or data replication techniques, to maintain data integrity and consistency during scaling events.
- Network and Security Considerations: Elastic scaling introduces additional complexity in terms of network configuration and security. As instances are dynamically provisioned and deprovisioned, organizations need to ensure that network policies, firewall rules, and security groups are properly configured and updated. Securing communication between instances, implementing proper authentication and authorization mechanisms, and protecting against potential vulnerabilities become critical considerations.
- Monitoring and Logging: With a dynamically scaling infrastructure, monitoring and logging become more complex. Organizations need to implement robust monitoring solutions that can track the performance and health of individual instances as well as the overall system. Centralized logging and log aggregation mechanisms are essential to gain visibility into the behavior of the application across multiple instances.
- Cost Management: While elastic scaling helps optimize costs by aligning resource allocation with demand, it also introduces the risk of unexpected costs if not properly managed. Organizations need to establish cost monitoring and budgeting mechanisms to ensure that the scaling policies do not lead to excessive resource provisioning or uncontrolled costs. Setting appropriate scaling thresholds, defining cost alerts, and regularly reviewing the scaling behavior are essential for effective cost management.
- Vendor Lock-in: Elastic scaling often relies on cloud platforms or specific technologies provided by vendors. Organizations need to consider the potential risk of vendor lock-in when adopting elastic scaling solutions. Dependence on a particular vendor's autoscaling capabilities or proprietary technologies can limit flexibility and portability in the future. It is important to evaluate the vendor's roadmap, standards compliance, and the availability of alternative solutions to mitigate the risk of vendor lock-in.
- Skill Set and Expertise: Implementing and managing elastic scaling requires a specific skill set and expertise. Organizations need to ensure that their development and operations teams have the necessary knowledge and experience to design, configure, and maintain elastic scaling solutions. This may involve training existing staff, hiring new talent, or partnering with external experts to bridge the skill gap.
- Compliance and Regulatory Requirements: Elastic scaling introduces additional considerations for compliance and regulatory requirements. As instances are dynamically provisioned and deprovisioned, organizations need to ensure that they adhere to relevant regulations and standards, such as data privacy laws, security guidelines, and industry-specific compliance frameworks. Proper auditing, logging, and access controls must be in place to maintain compliance in a dynamically scaling environment.
- Testing and Quality Assurance: Testing and quality assurance become more complex with elastic scaling. Organizations need to develop comprehensive testing strategies that cover various scaling scenarios, including sudden spikes in traffic, instance failures, and data consistency. Load testing, chaos engineering, and automated testing frameworks are essential to ensure the reliability and performance of the application under different scaling conditions.
Addressing these challenges and considerations requires careful planning, architecture design, and ongoing management. Organizations need to invest in the right tools, processes, and expertise to successfully implement and operate elastic scaling solutions. By proactively addressing these challenges and incorporating best practices, organizations can maximize the benefits of elastic scaling while minimizing risks and ensuring the stability and performance of their applications.
Future Outlook
The future of elastic scaling looks promising as organizations continue to embrace cloud computing and the need for dynamic resource allocation grows. Let's explore some of the trends and advancements that are shaping the future of elastic scaling:
- Serverless Computing: Serverless computing takes elastic scaling to the next level by abstracting away the underlying infrastructure entirely. With serverless architectures, organizations can focus solely on writing and deploying code, while the cloud provider automatically scales the resources based on the incoming requests. This model eliminates the need for explicit scaling policies and offers granular, event-driven scaling capabilities.
- Kubernetes and Container Orchestration: Kubernetes has emerged as the de facto standard for container orchestration, providing a powerful platform for elastic scaling of containerized applications. As Kubernetes continues to mature and gain widespread adoption, organizations can leverage its autoscaling capabilities, such as the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA), to dynamically scale their applications based on metrics and resource utilization.
- AI-Driven Scaling: Artificial Intelligence (AI) and Machine Learning (ML) are increasingly being applied to optimize elastic scaling. By leveraging AI algorithms, scaling systems can learn from historical data, predict future workload patterns, and make intelligent scaling decisions. AI-driven scaling can help organizations proactively scale resources, anticipate demand spikes, and optimize resource allocation for improved efficiency and cost-effectiveness.
- Multi-Cloud and Hybrid Cloud Scaling: As organizations adopt multi-cloud and hybrid cloud strategies, elastic scaling solutions will need to evolve to support seamless scaling across different cloud environments. This requires the development of standardized interfaces, APIs, and management tools that can orchestrate scaling actions across multiple cloud platforms. Hybrid cloud scaling will enable organizations to leverage the best of both on-premises and cloud resources while ensuring consistent scaling behavior.
- Edge Computing and IoT Scaling: The proliferation of Internet of Things (IoT) devices and the rise of edge computing are driving the need for elastic scaling at the edge. As data processing and analysis move closer to the source, elastic scaling solutions will need to adapt to handle the unique requirements of edge environments, such as limited resources, intermittent connectivity, and real-time processing. Edge scaling will enable organizations to efficiently handle the massive influx of data from IoT devices while ensuring low latency and responsiveness.
- Serverless Containers: The convergence of serverless computing and container technologies is leading to the emergence of serverless containers. Serverless containers combine the benefits of serverless computing, such as automatic scaling and pay-per-use pricing, with the portability and flexibility of containers. This approach allows organizations to package their applications as containers and run them in a serverless environment, enabling fine-grained scaling and efficient resource utilization.
- Scaling for Stateful Applications: Traditionally, elastic scaling has been more straightforward for stateless applications. However, the increasing demand for stateful applications, such as databases and message queues, requires advanced scaling techniques. Innovations in distributed databases, in-memory caching, and state management frameworks are enabling elastic scaling for stateful applications, ensuring data consistency and high availability.
- Scaling for Machine Learning and Big Data: Machine Learning (ML) and Big Data workloads present unique scaling challenges due to their resource-intensive nature and complex data dependencies. The future of elastic scaling will see the development of specialized scaling solutions tailored for ML and Big Data applications. This includes auto-scaling for ML training jobs, dynamic resource allocation for data processing pipelines, and intelligent scaling based on data volume and complexity.
- Scaling for Serverless Functions: Serverless functions, also known as Function-as-a-Service (FaaS), are becoming increasingly popular for building event-driven and scalable applications. The future of elastic scaling will see advancements in scaling serverless functions based on the incoming event rate, function execution time, and resource requirements. This will enable organizations to build highly responsive and cost-effective serverless applications.
- Scaling for 5G and Edge Computing: The advent of 5G networks and the growth of edge computing will drive the need for elastic scaling solutions that can handle the massive scale and low latency requirements of these environments. Elastic scaling will need to adapt to the distributed nature of edge computing, enabling efficient resource allocation and scaling across edge nodes and central cloud infrastructure.
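The AI-driven scaling trend above can be illustrated with a toy example: forecast the next metric value with simple exponential smoothing and size the fleet for the forecast rather than the latest observation. Production predictive scalers use far richer models; this only shows the shape of the approach, and all numbers are illustrative:

```python
import math

def exponential_smoothing_forecast(history, alpha=0.5):
    """One-step-ahead forecast via simple exponential smoothing."""
    level = history[0]
    for value in history[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

def proactive_instances(history, capacity_per_instance, min_instances=1):
    """Provision for the forecast load, not just the latest observation."""
    forecast = exponential_smoothing_forecast(history)
    return max(min_instances, math.ceil(forecast / capacity_per_instance))

# Requests/sec observed over recent intervals, trending upward:
history = [100, 140, 180, 240]
print(proactive_instances(history, capacity_per_instance=50))  # 4
```

Because the forecast anticipates the upward trend, capacity is added before the spike arrives rather than after response times have already degraded.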
As the technology landscape evolves and new challenges emerge, elastic scaling solutions will continue to advance and adapt. Organizations will need to stay informed about the latest trends, best practices, and innovations in elastic scaling to effectively meet their scalability requirements and stay competitive in the digital age.
Conclusion
Elastic scaling has revolutionized the way organizations approach resource management and application scalability. By dynamically adjusting computing resources based on real-time demand, elastic scaling enables businesses to optimize performance, reduce costs, and enhance user experiences.
Throughout this article, we have explored the concept of elastic scaling, its benefits, use cases, and real-world case studies. We have discussed the key metrics and the roadmap for implementing elastic scaling, highlighting the importance of proper planning, architecture design, and monitoring. We have also examined the return on investment (ROI) of elastic scaling, demonstrating how it can lead to cost savings, improved resource utilization, and increased agility.
However, we have also acknowledged the challenges and considerations that come with elastic scaling, such as application compatibility, data consistency, security, and vendor lock-in. Organizations must carefully address these challenges and implement best practices to ensure a successful and sustainable elastic scaling implementation.
Looking ahead, the future of elastic scaling is promising, with advancements in serverless computing, container orchestration, AI-driven scaling, and edge computing. As technology continues to evolve, elastic scaling solutions will adapt to meet the changing requirements and enable organizations to scale their applications effectively in the face of increasing complexity and demand.
In conclusion, elastic scaling is a powerful tool that empowers organizations to dynamically manage their computing resources, optimize costs, and deliver exceptional user experiences. By embracing elastic scaling and staying informed about the latest trends and best practices, organizations can unlock the full potential of their applications and thrive in the ever-changing digital landscape.
References
- Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., ... & Zaharia, M. (2010). A view of cloud computing. Communications of the ACM, 53(4), 50-58.
- AWS. (n.d.). AWS Auto Scaling. Retrieved from https://aws.amazon.com/autoscaling/
- Google Cloud. (n.d.). Autoscaling groups of instances. Retrieved from https://cloud.google.com/compute/docs/autoscaler/
- Kubernetes. (n.d.). Horizontal Pod Autoscaler. Retrieved from https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
- Microsoft Azure. (n.d.). Autoscale. Retrieved from https://docs.microsoft.com/en-us/azure/azure-monitor/autoscale/autoscale-overview
- Vaquero, L. M., Rodero-Merino, L., Caceres, J., & Lindner, M. (2009). A break in the clouds: towards a cloud definition. ACM SIGCOMM Computer Communication Review, 39(1), 50-55.
- Wang, L., Bai, X., Li, X., & Chen, H. (2018). A survey on auto-scaling web applications in cloud environment. Journal of Network and Computer Applications, 117, 102-115.
- IDC. (2022). Worldwide Public Cloud Services Spending Forecast to Reach $494.7 Billion in 2022. Retrieved from https://www.idc.com/getdoc.jsp?containerId=prUS48881022
- Youssef, A. E. (2021). Exploring Kubernetes-Based Autoscaling Techniques for Cloud-Native Applications: A Review. IEEE Access, 9, 141433-141450.
- Zhang, Q., Cheng, L., & Boutaba, R. (2010). Cloud computing: state-of-the-art and research challenges. Journal of Internet Services and Applications, 1(1), 7-18.