# 25 Big-O Notation Interview Questions

## Introduction

Big-O notation is a way to describe the efficiency or complexity of an algorithm. It is commonly used in computer science to analyze and compare algorithms. Big-O notation represents how the runtime of an algorithm grows as the input size increases. It allows us to understand how well an algorithm scales with larger inputs. In interviews, you may encounter Big-O notation questions to assess your understanding of algorithmic efficiency. It’s important to be familiar with the notation and its various forms (e.g., O(1), O(n), O(n^2)) and understand how to determine the time and space complexity of algorithms.

## Questions

### 1. What is Big-O notation?

Big-O notation is a mathematical notation used to describe the performance or time complexity of an algorithm as the input size increases. It provides an upper bound on how the algorithm’s running time grows relative to the size of the input. In simpler terms, Big-O notation helps us analyze and compare the efficiency of algorithms, allowing us to identify which algorithms are more scalable and perform better as the input grows larger.

Big-O notation is expressed as O(f(n)), where “f(n)” represents the function that describes the algorithm’s time complexity in terms of the input size “n.” The “O” stands for “order of,” and it indicates the maximum growth rate of the algorithm’s running time relative to “n.” It is important to note that Big-O notation deals with the worst-case scenario, considering the upper limit of the algorithm’s performance.

### 2. What does O(1) represent in Big-O notation?

O(1) in Big-O notation represents constant time complexity. It means that the algorithm’s running time remains constant, regardless of the input size. The time taken to execute the algorithm does not increase as the input grows larger. Constant time complexity is considered the most efficient time complexity.

Example:

```python
def constant_time_operation(arr):
    return arr[0]

# No matter how large the input array is, the function always returns the first element in constant time.
```

### 3. What does O(n) represent in Big-O notation?

O(n) in Big-O notation represents linear time complexity. It means that the algorithm’s running time increases linearly with the input size “n.” As the input grows larger, the execution time of the algorithm also increases proportionally.

Example:

```python
def linear_time_search(arr, target):
    for item in arr:
        if item == target:
            return True
    return False

# The time taken by this function grows linearly with the size of the input array 'arr'.
```

### 4. What does O(log n) represent in Big-O notation?

O(log n) in Big-O notation represents logarithmic time complexity. It is commonly seen in algorithms that divide the input into smaller parts and work on only one part at each step. The running time of such algorithms increases very slowly with the input size. Logarithmic time complexity is more efficient than linear time complexity but less efficient than constant time complexity.

Example:

```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return True
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return False

# The time taken by this binary search function grows logarithmically with the size of the input array 'arr'.
```

### 5. What does O(n^2) represent in Big-O notation?

O(n^2) in Big-O notation represents quadratic time complexity. It means that the algorithm’s running time increases quadratically with the input size “n.” For each element in the input, the algorithm may perform nested iterations or comparisons, leading to a significant increase in execution time as the input grows larger.

Example:

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

# Bubble sort has a time complexity of O(n^2) because of the two nested loops.
```

### 6. What is the difference between O(1) and O(n) time complexity?

The key difference between O(1) and O(n) time complexity is how the running time of an algorithm scales with the input size “n.”

• O(1) (constant time complexity): The running time of the algorithm remains constant regardless of the input size. It means that the algorithm takes the same amount of time to execute, regardless of whether the input is small or large.
• O(n) (linear time complexity): The running time of the algorithm increases linearly with the input size “n.” It means that as the input size grows, the execution time also increases proportionally.
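
The contrast can be made concrete with two small functions (a minimal sketch; the function names are illustrative):

```python
def get_first(arr):
    # O(1): a single index access, independent of len(arr)
    return arr[0]

def sum_all(arr):
    # O(n): the loop body runs once per element
    total = 0
    for item in arr:
        total += item
    return total
```

Doubling the input size leaves `get_first`'s work unchanged but doubles the work done by `sum_all`.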

### 7. What is the worst-case time complexity of the linear search algorithm?

The worst-case time complexity of the linear search algorithm is O(n). In the worst-case scenario, the element being searched for may be at the end of the list or array, requiring the algorithm to traverse through all “n” elements to find it. As a result, the time taken to perform the search increases linearly with the size of the input list or array.

Example:

```python
def linear_search(arr, target):
    for item in arr:
        if item == target:
            return True
    return False

# The worst-case time complexity of this linear_search function is O(n).
```

### 8. What is the worst-case time complexity of the binary search algorithm?

The worst-case time complexity of the binary search algorithm is O(log n). Binary search operates on a sorted list or array and continually divides the search space in half, efficiently reducing the number of elements to check. This logarithmic behavior leads to a significantly faster search compared to linear search, especially for large inputs.

Example:

```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return True
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return False

# The worst-case time complexity of this binary_search function is O(log n).
```

### 9. What is the time complexity of adding an element to the end of an array?

The time complexity of adding an element to the end of an array depends on the implementation of the array.

If the array has empty slots at the end (dynamic array):

• Average Case: O(1) – Adding an element to the end takes constant time, assuming there is space available in the underlying array.
• Worst Case: O(n) – In some cases, the array may need to be resized, leading to copying all elements to a new location, which takes linear time.

If the array does not have empty slots at the end (static array):

• Adding an element to the end is not possible or requires creating a new array with a larger size and copying elements, which takes O(n) time.

Example of dynamic array (Python’s list):

```python
# Appending to the end of a list has an average time complexity of O(1).
my_list = [1, 2, 3]
my_list.append(4)  # O(1) on average
```

### 10. What is the time complexity of removing an element from the beginning of a linked list?

The time complexity of removing an element from the beginning of a linked list is O(1). Since a linked list consists of nodes connected through pointers, removing the first node only requires updating the head pointer to point to the next node. This operation takes constant time and is independent of the size of the linked list.

Example:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def remove_first(self):
        if self.head is not None:
            self.head = self.head.next

# The remove_first method has a time complexity of O(1) as it only involves updating the head pointer.
```

### 11. What is the time complexity of a bubble sort algorithm?

The time complexity of a bubble sort algorithm is O(n^2) in the worst and average cases. Bubble sort repeatedly steps through the list to be sorted, compares adjacent elements, and swaps them if they are in the wrong order. The algorithm continues this process until the list is fully sorted. Because of the nested loops, where each pass requires iterating through the entire list, the time complexity is quadratic.

Example:

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

# The time complexity of this bubble_sort function is O(n^2).
```

### 12. What is the time complexity of a merge sort algorithm?

The time complexity of a merge sort algorithm is O(n log n) in all cases. Merge sort is a divide-and-conquer algorithm that recursively divides the input list into halves until each sublist contains only one element, then merges these sublists back together in sorted order. The list is halved O(log n) times, and merging all the sublists at each level of recursion takes linear time, leading to a total time complexity of O(n log n).

Example:

```python
def merge_sort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2
        left_half = arr[:mid]
        right_half = arr[mid:]

        merge_sort(left_half)
        merge_sort(right_half)

        i = j = k = 0

        while i < len(left_half) and j < len(right_half):
            if left_half[i] < right_half[j]:
                arr[k] = left_half[i]
                i += 1
            else:
                arr[k] = right_half[j]
                j += 1
            k += 1

        while i < len(left_half):
            arr[k] = left_half[i]
            i += 1
            k += 1

        while j < len(right_half):
            arr[k] = right_half[j]
            j += 1
            k += 1

# The time complexity of this merge_sort function is O(n log n).
```

### 13. What is the time complexity of a binary heap insertion operation?

The time complexity of a binary heap insertion operation is O(log n). Binary heaps are data structures that maintain a partially ordered binary tree. When inserting an element into a binary heap, it is placed at the next available position (usually the bottom-rightmost position). Then, it is “bubbled up” through the tree to maintain the heap property. The height of a binary heap is logarithmic with respect to the number of elements (n), so the insertion operation takes logarithmic time complexity.

Example:

```python
import heapq

def heap_insert(heap, item):
    heapq.heappush(heap, item)

# The heap_insert function has a time complexity of O(log n) for the binary heap insertion.
```

### 14. What is the time complexity of a hash table lookup operation?

The time complexity of a hash table lookup operation is O(1) on average. Hash tables use a hash function to map keys to an index in an underlying array, where the actual values are stored. Retrieving an element from a hash table involves computing the hash value for the given key and accessing the corresponding index in the array. In the average case, the hash function distributes the keys evenly, resulting in constant time complexity for lookups.

However, in the worst case, when there are hash collisions (multiple keys map to the same index), the lookup time complexity can degrade to O(n) due to linear search in the collided bucket.

Example (Python’s dictionary):

```python
# Dictionary lookups have an average time complexity of O(1).
my_dict = {'apple': 1, 'banana': 2, 'orange': 3}
print(my_dict['banana'])  # O(1) on average
```
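
To illustrate where the O(n) worst case comes from, here is a toy hash table that resolves collisions by chaining (a sketch for illustration only; the class name is invented and this is not how CPython's dict is actually implemented):

```python
class ChainedHashTable:
    def __init__(self, num_buckets=8):
        # Each bucket is a list of (key, value) pairs.
        self.buckets = [[] for _ in range(num_buckets)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))

    def get(self, key):
        # O(1) on average; O(n) if every key collides into the same bucket.
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)
```

If the hash function distributes keys evenly, each bucket stays short and lookups are constant time; if all keys land in one bucket, `get` degenerates into a linear scan.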

### 15. What is the time complexity of a breadth-first search (BFS) algorithm on a graph?

The time complexity of a breadth-first search (BFS) algorithm on a graph is O(V + E), where V is the number of vertices (nodes) and E is the number of edges in the graph. In BFS, starting from a source node, the algorithm explores all nodes at the current depth level before moving on to nodes at the next level. The BFS traversal visits all vertices and edges once, leading to a linear time complexity with respect to the size of the graph.

Example (using an adjacency list representation):

```python
from collections import deque

def bfs(graph, start_node):
    visited = set()
    queue = deque([start_node])

    while queue:
        current_node = queue.popleft()
        if current_node not in visited:
            visited.add(current_node)
            for neighbor in graph[current_node]:
                queue.append(neighbor)

# The time complexity of this BFS function is O(V + E).
```

### 16. What is the time complexity of a depth-first search (DFS) algorithm on a graph?

The time complexity of a depth-first search (DFS) algorithm on a graph is also O(V + E), where V is the number of vertices (nodes) and E is the number of edges in the graph. In DFS, starting from a source node, the algorithm explores as far as possible along each branch before backtracking. Like BFS, the DFS traversal visits all vertices and edges once, leading to the same linear time complexity.

Example (using an adjacency list representation and recursion):

```python
def dfs(graph, current_node, visited):
    if current_node not in visited:
        visited.add(current_node)
        for neighbor in graph[current_node]:
            dfs(graph, neighbor, visited)

# Usage:
# visited = set()
# dfs(graph, start_node, visited)

# The time complexity of this DFS function is O(V + E).
```

### 17. What is the time complexity of a binary search tree (BST) insertion operation?

The time complexity of a binary search tree (BST) insertion operation depends on the tree’s balance.

1. Balanced BST (e.g., AVL tree, Red-Black tree):
• Average Case: O(log n) – The tree remains balanced after the insertion, ensuring that the height of the tree is logarithmic with respect to the number of nodes “n.”
• Worst Case: O(log n) – Same as the average case for balanced BSTs.
2. Unbalanced BST (e.g., a skewed tree):
• Average Case: O(log n) – For randomly inserted elements, the tree can still be relatively balanced on average.
• Worst Case: O(n) – In the worst-case scenario, the BST can degenerate into a linked list, and the height becomes linear, leading to linear time complexity for insertion.

Example (binary search tree implementation with balanced AVL tree):

```python
# Sketch: an AVLTree class whose insert method rebalances after each insertion.
class AVLTree:
    def __init__(self):
        self.root = None

    def insert(self, data):
        # Insert the node, then apply rotations to restore balance (details omitted).
        # Time complexity: O(log n) in both the average and worst case for AVL trees.
        ...

# Usage:
# avl_tree = AVLTree()
# avl_tree.insert(5)
# avl_tree.insert(3)
# avl_tree.insert(7)
```

### 18. What is the time complexity of a stack’s push and pop operations?

The time complexity of a stack’s push and pop operations is O(1). Stacks are designed to allow constant-time operations at the top of the stack, irrespective of the stack’s size.

Example (using a list as a stack):

```python
def push(stack, item):
    stack.append(item)

def pop(stack):
    if stack:  # guard against popping from an empty stack
        return stack.pop()

# The push and pop operations for this stack implementation have a time complexity of O(1).
```

### 19. What is the time complexity of a queue’s enqueue and dequeue operations?

The time complexity of a queue’s enqueue and dequeue operations is O(1). Queues, like stacks, are designed to allow constant-time operations at the rear (enqueue) and front (dequeue) of the queue, regardless of the queue’s size.

Example (using collections.deque as a queue):

```python
from collections import deque

def enqueue(queue, item):
    queue.append(item)

def dequeue(queue):
    if queue:  # guard against dequeuing from an empty queue
        return queue.popleft()

# The enqueue and dequeue operations for this queue implementation have a time complexity of O(1).
```

### 20. What is the time complexity of a dynamic array’s append operation?

The time complexity of a dynamic array’s append operation is as follows:

1. Average Case: O(1) – Appending an element to the end of the dynamic array takes constant time, assuming there is available space in the underlying array. The average case considers that the dynamic array occasionally needs to resize, but the resizing happens infrequently.
2. Amortized Time Complexity: O(1) – Dynamic arrays use an amortized time complexity analysis to consider the cost of resizing over a sequence of append operations. Although resizing the array takes O(n) in the worst case, it occurs infrequently and can be “amortized” over multiple append operations, resulting in a constant average cost per append operation.

Example:

```python
# Using Python's list as a dynamic array
my_list = []
my_list.append(1)  # O(1) average and amortized time complexity
my_list.append(2)  # O(1) average and amortized time complexity
```
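
To see where the amortized O(1) comes from, here is a toy dynamic array that doubles its capacity when full (a simplified sketch; Python's list actually uses a more nuanced over-allocation strategy):

```python
class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity

    def append(self, item):
        if self.size == self.capacity:
            # O(n) resize, but doubling means it happens only after
            # ~n appends, so the cost averages out to O(1) per append.
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
            self.data = new_data
        self.data[self.size] = item
        self.size += 1
```

Across n appends, the total copying work is at most 1 + 2 + 4 + … + n < 2n elements, which is O(n) overall and therefore O(1) amortized per append.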

### 21. What is the time complexity of a selection sort algorithm?

The time complexity of a selection sort algorithm is O(n^2) in all cases. Selection sort repeatedly selects the minimum (or maximum) element from the unsorted part of the list and places it at the beginning (or end) of the sorted part. The algorithm requires two nested loops, and for each element, it may perform a swap operation. As a result, the time complexity is quadratic.

Example:

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n):
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        arr[i], arr[min_index] = arr[min_index], arr[i]

# The time complexity of this selection_sort function is O(n^2).
```

### 22. What is the time complexity of a quicksort algorithm?

The time complexity of a quicksort algorithm depends on the choice of the pivot and the input data distribution.

1. Average Case: O(n log n) – On average, quicksort performs very well, dividing the input into approximately equal halves at each step. This results in a balanced partitioning, leading to a time complexity of O(n log n).
2. Worst Case: O(n^2) – The worst-case scenario occurs when the pivot selection always results in unbalanced partitions. This happens, for example, when the input is already sorted or nearly sorted. In this case, the partitioning does not divide the input efficiently, leading to quadratic time complexity.

Example:

```python
def quicksort(arr):
    if len(arr) <= 1:
        return arr

    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]

    return quicksort(left) + middle + quicksort(right)

# The time complexity of this quicksort function is O(n log n) on average and O(n^2) in the worst case.
```

### 23. What is the time complexity of a radix sort algorithm?

The time complexity of a radix sort algorithm is O(d * (n + k)), where “d” is the number of digits in the largest number, “n” is the number of elements to be sorted, and “k” is the range of digit values (the base, e.g., 10 for decimal digits).

Radix sort is a non-comparative sorting algorithm that sorts elements by processing individual digits. It sorts the elements from the least significant digit to the most significant digit, repeatedly applying a stable sort (e.g., counting sort or bucket sort) at each digit position.

Since the number of digits (“d”) determines the number of iterations, the radix sort’s time complexity is dependent on the number of digits in the largest element.

Example:

```python
def counting_sort(arr, exp):
    # Stable counting sort on the digit selected by 'exp' (1, 10, 100, ...).
    n = len(arr)
    output = [0] * n
    count = [0] * 10

    for i in range(n):
        index = (arr[i] // exp) % 10
        count[index] += 1

    for i in range(1, 10):
        count[i] += count[i - 1]

    i = n - 1
    while i >= 0:
        index = (arr[i] // exp) % 10
        output[count[index] - 1] = arr[i]
        count[index] -= 1
        i -= 1

    for i in range(n):
        arr[i] = output[i]

def radix_sort(arr):
    max_num = max(arr)
    exp = 1
    while max_num // exp > 0:
        counting_sort(arr, exp)
        exp *= 10

# The time complexity of this radix_sort function is O(d * (n + k)), where d is the number of digits.
```

### 24. What is the time complexity of Dijkstra’s algorithm for finding the shortest path in a graph?

The time complexity of Dijkstra’s algorithm for finding the shortest path in a graph with non-negative edge weights is O((V + E) log V), where V is the number of vertices (nodes) and E is the number of edges in the graph.

Dijkstra’s algorithm is a single-source shortest path algorithm that maintains a priority queue (e.g., binary heap or Fibonacci heap) to select the next node with the shortest distance from the source. The priority queue operations add an additional log V factor to the overall time complexity.

The algorithm visits every vertex and edge once, and each priority queue operation costs O(log V), giving an overall time complexity of O((V + E) log V).

Example (using Python’s heapq and adjacency list representation):

```python
import heapq

def dijkstra(graph, start):
    distances = {node: float('inf') for node in graph}
    distances[start] = 0

    priority_queue = [(0, start)]

    while priority_queue:
        current_distance, current_node = heapq.heappop(priority_queue)

        # Skip stale queue entries for nodes already finalized with a shorter distance.
        if current_distance > distances[current_node]:
            continue

        for neighbor, weight in graph[current_node].items():
            distance = current_distance + weight
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                heapq.heappush(priority_queue, (distance, neighbor))

    return distances

# Usage:
# graph = {
#     'A': {'B': 1, 'C': 4},
#     'B': {'A': 1, 'C': 2, 'D': 5},
#     'C': {'A': 4, 'B': 2, 'D': 1},
#     'D': {'B': 5, 'C': 1}
# }
# dijkstra(graph, 'A')

# The time complexity of this Dijkstra's algorithm implementation is O((V + E) log V).
```

### 25. What is the time complexity of the Boyer-Moore string searching algorithm?

The Boyer-Moore string searching algorithm has a best-case time complexity of O(n/m), where n is the length of the input text and m is the length of the pattern being searched for: when its skip heuristics apply, the algorithm can jump over large stretches of the text without examining every character. In the worst case, the basic algorithm can take O(n*m) time (a variant using the Galil rule brings this down to O(n + m)). In practice, Boyer-Moore performs very efficiently, especially for longer patterns, because its bad-character and good-suffix rules allow it to skip many characters during the search, making it one of the most efficient string searching algorithms for practical use.
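
As an illustration of the skipping idea, here is a simplified variant that uses only the bad-character rule (essentially the Boyer-Moore-Horspool algorithm; full Boyer-Moore adds the good-suffix rule on top of this):

```python
def horspool_search(text, pattern):
    # Returns the index of the first occurrence of pattern in text, or -1.
    n, m = len(text), len(pattern)
    if m > n:
        return -1
    if m == 0:
        return 0

    # Bad-character shift table: for each character in the pattern (except
    # the last), how far the pattern can slide when a mismatch occurs.
    shift = {ch: m - i - 1 for i, ch in enumerate(pattern[:-1])}

    i = m - 1  # index into text, aligned with the last pattern character
    while i < n:
        k = 0
        # Compare right to left from the current alignment.
        while k < m and text[i - k] == pattern[m - 1 - k]:
            k += 1
        if k == m:
            return i - m + 1
        # Shift by the table entry for the text character under the
        # pattern's last position (a full m if it never occurs in pattern).
        i += shift.get(text[i], m)
    return -1
```

On a mismatch, the pattern slides right by up to its full length in one step, which is how sublinear behavior arises for long patterns over large alphabets.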

## MCQ Questions

### 1. What is Big-O notation in computer science?

• a. It is a notation used to measure the size of computer programs.
• b. It is a notation used to analyze the efficiency of algorithms.
• c. It is a notation used to denote the number of lines of code in a program.
• d. It is a notation used to represent the memory usage of a program.

Answer: b. It is a notation used to analyze the efficiency of algorithms.

### 2. What does Big-O notation represent?

• a. The exact running time of an algorithm.
• b. The best-case running time of an algorithm.
• c. The average running time of an algorithm.
• d. The worst-case running time of an algorithm.

Answer: d. The worst-case running time of an algorithm.

### 3. What is the significance of Big-O notation?

• a. It helps in determining the number of operations an algorithm requires.
• b. It helps in determining the memory usage of an algorithm.
• c. It helps in determining the speed of an algorithm.
• d. It helps in determining the accuracy of an algorithm.

Answer: a. It helps in determining the number of operations an algorithm requires.

### 4. What does O(1) represent in Big-O notation?

• a. Constant time complexity.
• b. Linear time complexity.
• c. Logarithmic time complexity.
• d. Exponential time complexity.

Answer: a. Constant time complexity.

### 5. What does O(n) represent in Big-O notation?

• a. Constant time complexity.
• b. Linear time complexity.
• c. Logarithmic time complexity.
• d. Exponential time complexity.

Answer: b. Linear time complexity.

### 6. What does O(n^2) represent in Big-O notation?

• a. Constant time complexity.
• b. Linear time complexity.
• c. Quadratic time complexity.
• d. Exponential time complexity.

Answer: c. Quadratic time complexity.

### 7. What does O(log n) represent in Big-O notation?

• a. Constant time complexity.
• b. Linear time complexity.
• c. Logarithmic time complexity.
• d. Exponential time complexity.

Answer: c. Logarithmic time complexity.

### 8. What does O(2^n) represent in Big-O notation?

• a. Constant time complexity.
• b. Linear time complexity.
• c. Logarithmic time complexity.
• d. Exponential time complexity.

Answer: d. Exponential time complexity.

### 9. What is the time complexity of an algorithm with O(n log n)?

• a. Linear time complexity.
• b. Quadratic time complexity.
• c. Log-linear time complexity.
• d. Exponential time complexity.

Answer: c. Log-linear time complexity.

### 10. What is the time complexity of an algorithm with O(n^3)?

• a. Linear time complexity.
• b. Quadratic time complexity.
• c. Cubic time complexity.
• d. Exponential time complexity.

Answer: c. Cubic time complexity.

• a. O(1)
• b. O(log n)
• c. O(n)
• d. O(n^2)

### 12. What is the space complexity of an algorithm?

• a. The amount of disk space required by the algorithm.
• b. The amount of memory required by the algorithm.
• c. The number of lines of code in the algorithm.
• d. The running time of the algorithm.

Answer: b. The amount of memory required by the algorithm.

### 13. What is the space complexity of an algorithm with O(n)?

• a. Constant space complexity.
• b. Linear space complexity.
• c. Logarithmic space complexity.
• d. Exponential space complexity.

Answer: b. Linear space complexity.

### 14. What is the space complexity of an algorithm with O(log n)?

• a. Constant space complexity.
• b. Linear space complexity.
• c. Logarithmic space complexity.
• d. Exponential space complexity.

Answer: c. Logarithmic space complexity.

### 15. What is the space complexity of an algorithm with O(n^2)?

• a. Constant space complexity.
• b. Linear space complexity.
• c. Quadratic space complexity.
• d. Exponential space complexity.

Answer: c. Quadratic space complexity.

### 16. What is the space complexity of an algorithm with O(1)?

• a. Constant space complexity.
• b. Linear space complexity.
• c. Logarithmic space complexity.
• d. Exponential space complexity.

Answer: a. Constant space complexity.

### 17. Which of the following is not a valid Big-O notation?

• a. O(n!)
• b. O(2^n)
• c. O(n^2)
• d. O(n log n)

• a. O(1)
• b. O(log n)
• c. O(n)
• d. O(n^2)

• a. O(n + m)
• b. O(n * m)
• c. O(n^2)
• d. O(log n)

• a. O(1)
• b. O(log n)
• c. O(n)
• d. O(2^n)

### 21. Which of the following represents better algorithm efficiency?

• a. O(n^2)
• b. O(2^n)
• c. O(n!)
• d. O(n log n)

Answer: d. O(n log n)

### 22. What is the time complexity of the best sorting algorithm in most cases?

• a. O(1)
• b. O(log n)
• c. O(n)
• d. O(n log n)

Answer: d. O(n log n)

### 23. Which of the following is the most efficient sorting algorithm in terms of time complexity?

• a. Bubble Sort (O(n^2))
• b. Merge Sort (O(n log n))
• c. Insertion Sort (O(n^2))
• d. Selection Sort (O(n^2))

Answer: b. Merge Sort (O(n log n))

• a. O(1)
• b. O(log n)
• c. O(n)
• d. O(n log n)

• a. O(n)
• b. O(log n)
• c. O(n^2)
• d. O(1)

• a. O(1)
• b. O(log n)
• c. O(n)
• d. O(2^n)

### 27. Which of the following is not a common time complexity?

• a. O(n^3)
• b. O(n!)
• c. O(n log n)
• d. O(2^n)

• a. O(1)
• b. O(log n)
• c. O(n)
• d. O(n^2)

### 35. What is the time complexity of the Sieve of Eratosthenes algorithm for finding prime numbers?

• a. O(1)
• b. O(log n)
• c. O(n)
• d. O(n log n)
