On the sixth day of my journey through Data Structures and Algorithms (DSA), I learned about time and space complexity. I also tackled several problems on arrays, which helped me get better at problem solving. In this article, I'll share a summary of what I've learned so far.
- Initialize three pointers: low, mid, and high.
- Use a while loop that runs as long as mid is less than or equal to high.
- Check the value at arr[mid]:
  - If it's 0, swap arr[mid] with arr[low], then increment both mid and low.
  - If it's 1, just increment mid.
  - If it's 2, swap arr[mid] with arr[high], then decrement high.
- Repeat these steps until mid crosses high.
- The array is sorted in-place based on 0s, 1s, and 2s.
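The steps above describe the Dutch National Flag algorithm. Here is a minimal C++ sketch of them; the function name sortZeroOneTwo is my own, since the article doesn't name one:

```cpp
#include <algorithm>
#include <vector>

// Dutch National Flag: sort an array of 0s, 1s and 2s in place, in one pass.
void sortZeroOneTwo(std::vector<int>& arr) {
    int low = 0, mid = 0, high = static_cast<int>(arr.size()) - 1;
    while (mid <= high) {
        if (arr[mid] == 0) {
            std::swap(arr[low], arr[mid]);  // 0 belongs in the front region
            ++low;
            ++mid;
        } else if (arr[mid] == 1) {
            ++mid;                          // 1 is already in the middle region
        } else {                            // arr[mid] == 2
            std::swap(arr[mid], arr[high]); // 2 belongs in the back region
            --high; // don't advance mid: the swapped-in value is unexamined
        }
    }
}
```

Note that mid is not incremented in the `2` case, because the element swapped in from the back has not been looked at yet.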
- The code defines a function pairSum that takes a vector arr and an integer s as parameters.
- It uses two nested loops to iterate through pairs of elements in the array.
- Inside the loops, it checks if the sum of the current pair equals the target sum s.
- If a pair with the desired sum is found, it creates a temporary vector temp containing the pair in ascending order and adds it to the result vector ans.
- After completing the loops, the result vector ans is sorted in ascending order.
- The function returns the sorted vector of pairs that sum up to the given target.
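A sketch of pairSum reconstructed from the description above (the exact signature in the original code may differ):

```cpp
#include <algorithm>
#include <vector>

// Brute-force pair sum: collect every pair of elements that adds up to s,
// each pair stored in ascending order, and return the pairs sorted.
std::vector<std::vector<int>> pairSum(const std::vector<int>& arr, int s) {
    std::vector<std::vector<int>> ans;
    for (std::size_t i = 0; i < arr.size(); ++i) {
        for (std::size_t j = i + 1; j < arr.size(); ++j) {
            if (arr[i] + arr[j] == s) {
                std::vector<int> temp{std::min(arr[i], arr[j]),
                                      std::max(arr[i], arr[j])};
                ans.push_back(temp);
            }
        }
    }
    std::sort(ans.begin(), ans.end()); // pairs themselves in ascending order
    return ans;
}
```

The nested loops make this O(n^2) in time; a hash-set approach could bring it down to O(n) on average, at the cost of extra space.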
- The code defines a function printArray to print the elements of an array and initializes it with zeros and ones.
- The sortOne function sorts the array so that all 0s appear before 1s:
  - It uses two pointers (left and right), initialized at the start and end of the array.
  - While the left pointer points to a 0, it moves to the right.
  - While the right pointer points to a 1, it moves to the left.
  - If the left pointer is still less than the right pointer, it swaps the elements at these pointers, then increments left and decrements right.
- The main function initializes an array with a mix of 0s and 1s.
- It then calls the sortOne function to sort the array.
- Finally, it prints the sorted array using the printArray function.
- The output will be a sorted array with all 0s appearing before 1s.
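A sketch of the two-pointer sortOne function described above (details of the original signature are assumed):

```cpp
#include <algorithm>
#include <vector>

// Two-pointer partition: rearrange a 0/1 array in place so that
// every 0 comes before every 1.
void sortOne(std::vector<int>& arr) {
    int left = 0, right = static_cast<int>(arr.size()) - 1;
    while (left < right) {
        while (left < right && arr[left] == 0) ++left;   // already in place
        while (left < right && arr[right] == 1) --right; // already in place
        if (left < right) {
            std::swap(arr[left], arr[right]); // 1 on the left, 0 on the right
            ++left;
            --right;
        }
    }
}
```

Each element is inspected at most once, so the partition runs in O(n) time with O(1) extra space.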
Time Complexity: Time complexity is a measure of the amount of time an algorithm takes to complete, relative to the size of its input.
- Big O Notation: Time complexity is often expressed in Big O notation (O notation), which describes the upper bound of an algorithm's growth rate in terms of the input size.
- Example (Linear Search): For a linear search in an array, the time complexity is O(n), where n is the size of the array. The time taken grows linearly with the size of the input.
- Importance of Time Complexity: It helps in evaluating and comparing algorithms based on their efficiency, and lets us make informed decisions about algorithm selection for different problem sizes.
- Common Time Complexities:
  - O(1): Constant time (e.g., accessing an element in an array).
  - O(log n): Logarithmic time (e.g., binary search).
  - O(n): Linear time (e.g., linear search).
  - O(n^2): Quadratic time (e.g., bubble sort).
- Analyzing Algorithms: Time complexity analysis counts the number of basic operations an algorithm performs relative to the input size. It abstracts away constant factors and lower-order terms, focusing on the dominant growth rate.
- Algorithm Efficiency: Lower time complexity generally indicates a more efficient algorithm. This becomes crucial when dealing with large datasets, where even small improvements in efficiency matter.
- Trade-offs: Achieving lower time complexity may involve trade-offs, such as increased space complexity or more complex code.
- Conclusion: Time complexity is a fundamental concept in algorithm analysis, guiding the selection of algorithms based on their efficiency and performance characteristics.
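To make the O(n) vs. O(log n) contrast concrete, here is a small sketch comparing linear and binary search (function names are my own; binary search assumes a sorted array):

```cpp
#include <vector>

// Linear search: scans every element in the worst case -> O(n) time.
int linearSearch(const std::vector<int>& a, int key) {
    for (std::size_t i = 0; i < a.size(); ++i)
        if (a[i] == key) return static_cast<int>(i);
    return -1; // not found
}

// Binary search on a sorted array: halves the range each step -> O(log n).
int binarySearch(const std::vector<int>& a, int key) {
    int lo = 0, hi = static_cast<int>(a.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
        if (a[mid] == key) return mid;
        if (a[mid] < key)
            lo = mid + 1; // discard the left half
        else
            hi = mid - 1; // discard the right half
    }
    return -1; // not found
}
```

For an array of a million elements, linear search may need up to a million comparisons, while binary search needs at most about twenty.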
Space Complexity: Space complexity is a measure of the amount of memory or space an algorithm uses in relation to the size of the input. It evaluates how efficiently an algorithm utilizes memory resources during its execution. Here's a breakdown of space complexity and its significance:
- Definition: Like time complexity, space complexity is expressed in Big O notation (O notation). It describes the upper bound of the amount of memory an algorithm uses relative to the input size.
- Memory Usage: Algorithms may require memory for variables, data structures, recursive call stacks, and other auxiliary space.
- Importance of Space Complexity: Efficient use of memory is crucial, especially in resource-constrained environments (e.g., embedded systems, mobile devices, or large-scale server applications). It also helps in understanding how an algorithm's memory requirements scale with the input size.
- Common Space Complexities:
  - O(1): Constant space. Memory usage remains constant regardless of the input size (e.g., a fixed set of variables).
  - O(n): Linear space. Memory usage scales linearly with the input size (e.g., arrays or linked lists).
  - O(n^2): Quadratic space. Memory usage grows with the square of the input size (e.g., nested data structures).
- Analyzing Space Complexity: Consider the variables, data structures, and recursive calls that consume memory. Exclude constants and focus on the dominant factors affecting space usage.
- Trade-offs: Optimizing for lower space complexity may involve trade-offs, such as increased time complexity or code complexity. Striking a balance between time and space complexity is crucial, based on the application's requirements.
- Conclusion: Space complexity is a key aspect of algorithm analysis, helping developers make informed decisions about memory usage and choose algorithms that fit the available resources. It complements time complexity in providing a comprehensive understanding of an algorithm's performance characteristics.
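A quick illustration of O(1) versus O(n) auxiliary space (both functions are my own examples, not from the problems above):

```cpp
#include <vector>

// O(1) auxiliary space: a running total uses one extra variable,
// no matter how large the input is.
long long sumConstantSpace(const std::vector<int>& a) {
    long long total = 0;
    for (int x : a) total += x;
    return total;
}

// O(n) auxiliary space: prefix sums allocate an extra array
// whose size grows with the input.
std::vector<long long> prefixSums(const std::vector<int>& a) {
    std::vector<long long> pre(a.size());
    long long running = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        running += a[i];
        pre[i] = running;
    }
    return pre;
}
```

The prefix-sum array trades extra memory for the ability to answer any range-sum query in O(1) time afterwards, a typical time-for-space trade-off.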
In our adventure, we faced tricky problems, but each one taught us something new. From struggles, we learned, and in the process, we upgraded our skills. Every challenge brought us a bit closer to becoming better at this coding game!
Thank you so much for giving your time! Keep coding!