Algorithms
Big O Notation
Big O notation is a mathematical concept used in computer science to describe the performance or complexity of an algorithm. Specifically, it expresses an upper bound on an algorithm's time or space complexity as a function of the size of the input data (denoted \( n \)). Understanding Big O notation is crucial for evaluating which algorithms are most efficient under different circumstances.
Big O notation helps in understanding the worst-case scenario of an algorithm's running time or space requirement, which is critical for determining the scalability of an algorithm.
The notation is generally written \( O(f(n)) \), where \( f(n) \) is a function that approximates the number of steps (for time complexity) or the amount of memory (for space complexity) in terms of the input size \( n \).
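As a minimal sketch of how a function's step count maps to a Big O bound, consider summing a list: the loop body runs once per element, so the running time grows in direct proportion to \( n \), i.e. \( O(n) \). (The function name here is illustrative, not from the text.)

```python
def sum_list(values):
    """Sum a list by visiting each element exactly once.

    The loop body executes n times for an input of size n,
    so the time complexity is O(n); only one accumulator is
    kept, so the extra space is O(1).
    """
    total = 0
    for v in values:  # runs n times
        total += v
    return total
```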
Common Time Complexities
- Constant Time \( O(1) \) : The execution time or space remains constant regardless of the input size.
- Logarithmic Time \( O(\log n) \) : The execution time increases logarithmically with the input size.
- Linear Time \( O(n) \) : The execution time increases linearly with the input size.
- Linearithmic Time \( O(n \log n) \) : The execution time grows in proportion to \( n \log n \), i.e., linearly with the input size times a logarithmic factor.
- Quadratic Time \( O(n^2) \) : The execution time increases quadratically with the input size.
- Exponential Time \( O(2^n) \) : The execution time grows exponentially with the input size.
- Factorial Time \( O(n!) \) : The execution time grows factorially with the input size.
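A few of these classes can be illustrated with short, self-contained functions; the examples below are hypothetical sketches, not taken from the text, but each is a standard instance of its complexity class.

```python
def first_element(items):
    """O(1): a single operation, regardless of how large items is."""
    return items[0]

def contains(items, target):
    """O(n): a linear scan may have to inspect every element
    (the worst case is when target is absent)."""
    for item in items:
        if item == target:
            return True
    return False

def has_duplicate(items):
    """O(n^2): compares every pair of elements with nested loops."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False
```

Note how the bound reflects the worst case: `contains` may return after one comparison, but Big O describes the scan that checks all \( n \) elements.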
Fundamental Algorithms
- Sorting Algorithms
- Search Algorithms
- Recursive Algorithms
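As one concrete example from the search-algorithm family, binary search achieves the logarithmic \( O(\log n) \) bound listed above by halving the search interval at each step. This is a minimal sketch assuming the input list is already sorted in ascending order.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the remaining interval, so at most
    about log2(n) iterations are needed: O(log n) time.
    Assumes sorted_items is sorted in ascending order.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1  # target can only be in the upper half
        else:
            hi = mid - 1  # target can only be in the lower half
    return -1
```

Compare this with the linear `in` operator on a Python list, which must scan element by element: for a million sorted items, binary search needs about 20 comparisons rather than up to a million.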