
Lecture 9: Algorithms and Complexity
In this lecture, we explore algorithms—the backbone of computer science—and the study of their complexity, which allows us to measure efficiency in terms of time and space. Understanding algorithms and complexity is crucial for solving computational problems effectively and optimizing resources.
1. Introduction to Algorithms
An algorithm is a finite, well-defined sequence of steps or instructions designed to solve a problem or perform a task. Algorithms can be expressed in natural language, pseudocode, flowcharts, or programming languages.
- Input: Data provided to the algorithm.
- Process: Steps performed on the input.
- Output: Desired result after execution.
2. Properties of Good Algorithms
- Correctness: Produces the expected output for all valid inputs.
- Efficiency: Uses minimal resources (time and memory).
- Finiteness: Terminates after a finite number of steps.
- Clarity: Easy to understand and implement.
- Generality: Solves a broad range of problems, not just specific cases.
3. Categories of Algorithms
- Brute Force: Tries all possible solutions until one works.
- Divide and Conquer: Breaks the problem into smaller subproblems (e.g., Merge Sort, Quick Sort).
- Greedy Algorithms: Makes locally optimal choices at each step (e.g., Huffman Coding).
- Dynamic Programming: Solves overlapping subproblems by storing results (e.g., Fibonacci, Knapsack).
- Backtracking: Explores all possible solutions systematically and abandons invalid paths.
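To make the dynamic programming idea concrete, here is a minimal sketch of the Fibonacci example from the list above, using memoization to store the results of overlapping subproblems (a naive recursive version would recompute them exponentially many times):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Return the n-th Fibonacci number; cached results avoid recomputing subproblems."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

With caching, each value fib(0)..fib(n) is computed once, so the running time is linear in n instead of exponential.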
4. Algorithm Analysis
To compare algorithms, we analyze their complexity:
- Time Complexity: How the execution time grows as input size increases.
- Space Complexity: How much memory the algorithm uses.
4.1 Big-O Notation
Big-O notation gives an upper bound on an algorithm's growth rate and is most often used to describe worst-case behavior:
- O(1): Constant time – execution does not depend on input size.
- O(log n): Logarithmic time – grows slowly with input size (e.g., Binary Search).
- O(n): Linear time – grows proportionally with input size (e.g., Linear Search).
- O(n log n): Efficient sorting algorithms (e.g., Merge Sort, Quick Sort).
- O(n²): Quadratic time – less efficient (e.g., Bubble Sort, Selection Sort).
- O(2ⁿ): Exponential time – impractical for large inputs (e.g., brute-force enumeration of all subsets, as in a naive Knapsack solution).
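As an illustration of O(log n) behavior, here is a minimal sketch of Binary Search, mentioned above. Each comparison halves the remaining search range, so a sorted array of n elements needs at most about log₂ n steps:

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # middle of the current range
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```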
4.2 Example of Time Complexity
for i in range(n):
    print(i)
This loop executes n times, so its running time grows linearly with the input size: O(n).
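For contrast, a sketch of the quadratic case: nesting a second loop of the same length inside the first performs n × n iterations, giving O(n²) time (the counting function below is illustrative):

```python
def count_pairs(n):
    """Count all ordered pairs (i, j) with i, j in range(n): n * n iterations -> O(n^2)."""
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

print(count_pairs(10))  # 100
```

Doubling n quadruples the work, which is why quadratic algorithms such as Bubble Sort become slow on large inputs.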
5. Common Algorithms
- Sorting: Bubble Sort, Merge Sort, Quick Sort, Heap Sort.
- Searching: Linear Search, Binary Search.
- Graph Algorithms: Dijkstra’s Algorithm, Bellman-Ford, BFS, DFS.
- Optimization: Knapsack Problem, Traveling Salesman Problem.
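As a sample from the graph-algorithm family, here is a minimal sketch of Breadth-First Search (BFS) on a small graph given as an adjacency list (the graph itself is illustrative). BFS visits vertices level by level using a queue:

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal; returns the vertices in the order they are visited."""
    visited = {start}            # vertices already discovered
    order = [start]              # visit order to return
    queue = deque([start])       # FIFO queue drives the level-by-level traversal
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                order.append(neighbor)
                queue.append(neighbor)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```

Replacing the queue with a stack (and popping from the same end) would turn this into an iterative Depth-First Search.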
6. P vs NP Problem
One of the greatest unsolved problems in computer science:
- P (Polynomial time): Problems that can be solved in polynomial time, i.e., efficiently.
- NP (Nondeterministic Polynomial time): Problems where solutions can be verified quickly, but not necessarily solved quickly.
- The big question: Is P = NP?
7. Practical Applications
- Data compression (Huffman coding, LZW).
- Routing and navigation systems (Dijkstra’s algorithm).
- Cryptography (the hardness of prime factorization, modular arithmetic).
- Artificial intelligence (search strategies, dynamic programming).
8. Summary
- Algorithms are step-by-step procedures to solve problems.
- Complexity analysis helps measure efficiency.
- Big-O notation describes algorithm growth rates.
- Different algorithm strategies exist (brute force, greedy, divide and conquer, etc.).
- Applications span AI, cryptography, optimization, and networking.
Next Lecture (10): Data Structures