CSE1305 -- Exam Pattern Analysis
Based on 16 past exams (2018--2025): 5 Midterms, 6 Endterms, 5 Resits
MID Midterm
END Endterm
RES Resit
1. Asymptotic Notation Concepts (Big-O / Ω / Θ) Very High
MID 2018 Q1-Q4 • 2019 Q1 • 2022 Q1-Q2 • 2024 Q1 • 2025 Q1 |
END 2020 Q1-Q2 • 2022 Q1 • 2023 Q1 • 2024 Q1 • 2025 Q1,Q3 |
RES 2019 Q1 • 2022 Q1 • 2023 Q1 • 2024 Q1 • 2025 Q1
Identify true/false statements about Big-O, Big-Ω, and Big-Θ relationships. Compare asymptotic growth rates, determine tightest bounds for piecewise or composite functions, and understand the formal definitions.
Methods to Solve
- Definitions: O(g(n)) means f(n) ≤ c·g(n) for all n ≥ n0; Ω is the lower bound (≥); Θ means both O and Ω
- Growth rate hierarchy: 1 < log n < √n < n < n log n < n² < n³ < 2ⁿ < n!
- Dominant term: For sums, the fastest-growing term determines the Big-O (e.g., n² + n log n is O(n²))
- Big-O is NOT about best/worst case: Big-O, Ω, Θ are bounds on a function, not case indicators. Any case can use any notation
- Piecewise functions: Big-O concerns behavior as n → ∞, so only the branch for large n matters
- Constant factors and lower-order terms can be ignored: 5n² + 3n is Θ(n²)
- Log base does not matter: log₂ n and log₁₀ n differ by a constant factor
- Time vs space relationship: If time is Θ(f(n)) and space is Θ(g(n)), then g(n) is O(f(n)) because you need at least as much time as space used
Example (MID 2018 Q1): What is the asymptotic relationship between c · n log n and nᶜ for c=1? Since n¹ = n and n log n grows faster than n, n log n is Ω(nᶜ). Answer: A.
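The formal definition can be spot-checked numerically once you have picked witnesses. A minimal sketch (class name and the range checked are mine) for the claim that 5n² + 3n is O(n²), using c = 6 and n0 = 3:

```java
// Numeric spot-check of a Big-O witness: 5n^2 + 3n <= 6*n^2 must hold
// for all n >= n0 = 3 (algebraically: 3n <= n^2 iff n >= 3).
public class BigOCheck {
    static long f(long n) { return 5 * n * n + 3 * n; }
    static long g(long n) { return n * n; }

    public static void main(String[] args) {
        long c = 6, n0 = 3;
        for (long n = n0; n <= 1_000; n++) {
            if (f(n) > c * g(n)) throw new AssertionError("witness fails at n=" + n);
        }
        System.out.println("5n^2 + 3n <= 6n^2 for 3 <= n <= 1000");
    }
}
```

A finite check is not a proof, but it is a fast way to catch a bad choice of c or n0 before writing the algebra.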
2. Sorting Algorithm Properties & Comparisons Very High
MID 2018 Q13-Q14 • 2019 Q17-Q18 • 2022 Q11-Q13 • 2024 Q11-Q14 • 2025 Q12-Q14 |
END 2020 Q13-Q16 • 2022 Q10-Q12 • 2023 Q7-Q9 • 2024 Q7-Q9 • 2025 Q7-Q8 |
RES 2019 Q9,Q11 • 2022 Q9-Q11 • 2023 Q7-Q9 • 2024 Q6-Q8 • 2025 Q7-Q9
Identify true/false statements about sorting algorithms: stability, in-place property, time/space complexity, comparison-based nature. Trace execution of sorting algorithms on given arrays.
Methods to Solve
- Know the complexity table:
Insertion sort: O(n²) worst, O(n) best, stable, in-place, O(1) space
Selection sort: O(n²) always, NOT stable (with swaps), in-place, O(n) swaps worst-case
Merge sort: O(n log n) always, stable, NOT in-place, O(n) space
Heap sort: O(n log n) always, NOT stable, in-place, O(1) space
Quicksort: O(n²) worst / O(n log n) expected, NOT stable, in-place, O(log n) space
- Stability: A sort is stable if equal elements maintain their relative order. Merge sort and insertion sort are stable; selection sort (with swaps) and heap sort are NOT
- In-place: Uses O(1) extra space. Merge sort cannot be implemented in-place with its optimal O(n log n) time
- PQ connection: Selection sort = unsorted list PQ; Insertion sort = sorted list PQ; Heap sort = heap PQ
- Comparison-based lower bound: Any comparison-based sort requires Ω(n log n) comparisons in worst case
- For tracing: Step through the algorithm iteration by iteration, tracking swaps and comparisons
Example (MID 2019 Q18): Selection sort CANNOT be implemented with a sorted list PQ (that would be insertion sort). Answer: A is false.
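The stability argument in the bullets is visible directly in code: insertion sort with a strict > comparison never moves an element past an equal one. A minimal sketch (the shift counter is mine, added for tracing; it is not part of any exam):

```java
import java.util.Arrays;

// Insertion sort: stable (strict ">" never moves an element past an equal one),
// in-place, O(n^2) worst case but O(n) on already-sorted input.
public class InsertionSortDemo {
    static int sort(int[] a) {
        int shifts = 0;                      // counts element moves, for tracing
        for (int i = 1; i < a.length; i++) {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) {   // strict > keeps equal keys in order
                a[j + 1] = a[j];
                j--;
                shifts++;
            }
            a[j + 1] = key;
        }
        return shifts;
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 2, 1};
        int shifts = sort(a);
        System.out.println(Arrays.toString(a) + " shifts=" + shifts);
    }
}
```

Counting shifts per iteration is exactly the kind of bookkeeping the tracing questions ask for.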
3. Closed Form Derivation by Repeated Unfolding Very High
MID 2018 Q22c • 2019 Q24c • 2022 Q16 • 2024 Q17 • 2025 Q17 |
END 2020 Q24b • 2022 Q21 • 2023 Q17 • 2024 Q17 • 2025 Q17 |
RES 2019 Q24b • 2022 Q22 • 2023 Q17 • 2024 Q17 • 2025 Q17
Given a recurrence equation (e.g., T(n) = c1 + 2·T(n/2)), derive the closed-form solution by repeatedly substituting the recurrence into itself, identifying a pattern, then expressing in terms of n.
Methods to Solve
- Step 1 -- Unfold 2-3 times: Substitute T(n) into itself: T(n) = c1 + 2·T(n/2) = c1 + 2(c1 + 2·T(n/4)) = 3c1 + 4·T(n/4)
- Step 2 -- Identify the pattern: After k steps, express coefficients in terms of k (e.g., (2ᵏ - 1)c1 + 2ᵏ·T(n/2ᵏ))
- Step 3 -- Substitute base case: Set n/2ᵏ = base case size to solve for k (e.g., k = log₂ n), then replace T(base) with c0
- Step 4 -- Simplify: Use identities like 2^(log₂ n) = n, geometric series ∑2ⁱ = 2ᵏ - 1, arithmetic series ∑i = k(k+1)/2
- Common results: T(n) = a·T(n/2) + c ⇒ O(n) if a=2, O(log n) if a=1; T(n) = T(n-1) + cn ⇒ O(n²); T(n) = 2T(n/2) + cn ⇒ O(n log n)
- Alternative: Guess the closed form and prove by induction (base case + inductive step)
Example (MID 2018 Q22c): T(1)=c0, T(n)=c1+2T(n/2). Unfolding k times: T(n) = (2ᵏ-1)c1 + 2ᵏT(n/2ᵏ). Let k=log₂ n: T(n) = (n-1)c1 + n·c0 = O(n).
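A derived closed form can be cross-checked by evaluating the recurrence directly. A small sketch (class name is mine), taking c0 = c1 = 1 so the closed form simplifies to (n-1)·1 + n·1 = 2n - 1 for n a power of two:

```java
// Cross-check the unfolded closed form against the recurrence itself,
// with c0 = c1 = 1 so T(n) = (n-1) + n = 2n - 1 for n a power of two.
public class RecurrenceCheck {
    static long T(long n) {                 // direct recurrence T(n) = 1 + 2*T(n/2)
        return (n == 1) ? 1 : 1 + 2 * T(n / 2);
    }

    public static void main(String[] args) {
        for (long n = 1; n <= 1 << 20; n *= 2) {
            long closed = 2 * n - 1;        // (n-1)*c1 + n*c0 with c0 = c1 = 1
            if (T(n) != closed) throw new AssertionError("mismatch at n=" + n);
        }
        System.out.println("closed form matches recurrence up to n = 2^20");
    }
}
```

Plugging in small concrete constants like this catches off-by-one errors in the unfolding pattern before you commit to the general formula.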
4. Recurrence Relation Setup (Base & Recursive Case) Very High
MID 2018 Q22b • 2019 Q24b • 2022 Q15 • 2024 Q16 • 2025 Q16 |
END 2020 Q24a • 2023 Q16 • 2024 Q16 • 2025 Q16 |
RES 2019 Q24a • 2022 Q21 • 2023 Q16 • 2024 Q16 • 2025 Q16
Given a recursive Java method, derive the base case T(base) = c0 and the recurrence equation T(n) = ... + T(...) expressing worst-case time or space complexity. Define all variables and constants, and justify each term by referencing code lines.
Methods to Solve
- Identify the input size n: Usually the length of the array/list, or a parameter like (a.length - i)
- Base case: Find the termination condition (e.g., n == 1) and assign T(base) = c0 for all constant-time operations
- Recursive case: For each recursive call, determine (1) how many calls, (2) the size of input to each call
- Non-recursive work: Count loops (c·n), constant operations (c1), and data structure operations (e.g., sort = O(n log n))
- Justify each term: Map c0 to specific lines (method call, conditional, return), c1·n to loop iterations, T(n/2) to recursive call
- Common patterns: Linear recursion: T(n) = c + T(n-1); Binary split: T(n) = c + 2T(n/2); With loop: T(n) = c·n + T(n-1)
Example (MID 2019 Q24b): methodY has a for loop (c2·n) and one recursive call on (n-1): T(0) = c0, T(n) = c1 + n·c2 + T(n-1).
5. Heap Operations & Heap Sort Very High
MID 2019 Q9-Q11 • 2022 Q9,Q14 • 2024 Q9-Q10,Q13 • 2025 Q11 |
END 2020 Q13 • 2022 Q8 • 2023 Q6 • 2024 Q5-Q6 • 2025 Q5 |
RES 2019 Q5 • 2022 Q8,Q10 • 2024 Q5 • 2025 Q4
Trace heapify (bottom-up construction), insert with up-heap bubbling, removeMin/removeMax with down-heap bubbling. Determine array state after k operations of in-place heap sort.
Methods to Solve
- Array indexing: For node at index i: left child = 2i+1, right child = 2i+2, parent = ⌊(i-1)/2⌋
- Insert: Add at last position (index n), then up-heap bubble: swap with parent while smaller (min-heap) or larger (max-heap)
- RemoveMin: Replace root with last element, decrease size, then down-heap bubble: swap with smallest child while violating heap order
- Heapify (bottom-up): Start from last internal node (index ⌊n/2⌋-1), perform down-heap on each node moving toward root. Time: O(n)
- In-place heap sort (decreasing order): Build min-heap with heapify, then repeatedly removeMin and place at end of array
- In-place heap sort (increasing order): Build max-heap with heapify, then repeatedly swap root with last unsorted element and down-heap
- Min swaps for add: 0 (element larger than parent). Min swaps for remove: 1 (must swap with one child at minimum)
- Heapify time: O(n), NOT O(n log n). Individual insert/remove: O(log n)
Example (MID 2019 Q10): Heapify [12,10,9,5,6,1] into min-heap: down-heap from index 2 (swap 9↔1), then index 1 (swap 10↔5), then index 0 (swap 12↔1, then 12↔9) ⇒ [1,5,9,10,6,12].
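The bottom-up construction in the bullets can be sketched in a few lines; on the MID 2019 Q10 input it reproduces the traced result. A minimal min-heap version (class and method names are mine):

```java
import java.util.Arrays;

// Bottom-up heapify (min-heap): down-heap each internal node from
// index n/2 - 1 back to the root. O(n) total.
public class HeapifyDemo {
    static void downHeap(int[] a, int i, int n) {
        while (2 * i + 1 < n) {
            int child = 2 * i + 1;                                 // left child
            if (child + 1 < n && a[child + 1] < a[child]) child++; // smaller child
            if (a[i] <= a[child]) break;                           // heap order holds
            int tmp = a[i]; a[i] = a[child]; a[child] = tmp;
            i = child;
        }
    }

    static void heapify(int[] a) {
        for (int i = a.length / 2 - 1; i >= 0; i--) downHeap(a, i, a.length);
    }

    public static void main(String[] args) {
        int[] a = {12, 10, 9, 5, 6, 1};
        heapify(a);
        System.out.println(Arrays.toString(a)); // matches the MID 2019 Q10 trace
    }
}
```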
6. Array vs Linked List Complexity Comparison Very High
MID 2018 Q9 • 2019 Q7-Q8 • 2022 Q4 • 2024 Q6 • 2025 Q4 |
END 2020 Q4 • 2022 Q4 • 2024 Q3-Q4 |
RES 2022 Q3-Q4 • 2024 Q3 • 2025 Q3,Q5
Compare the worst-case time complexity of the same operation (remove middle, reverse, find element, extract subsequence) when implemented with an array vs a singly-linked list or doubly-linked list.
Methods to Solve
- Array strengths: O(1) random access by index, O(1) add/remove at end (amortized with dynamic array)
- Array weaknesses: O(n) insert/remove at beginning or middle (shifting required)
- SLL strengths: O(1) add/remove at head. O(1) add at tail (if tail reference maintained)
- SLL weaknesses: O(n) access by index, O(n) remove at tail (must traverse to second-to-last), no backward traversal
- DLL strengths: O(1) add/remove at both head and tail, O(1) remove given a reference to the node
- Reverse sequence: Array O(n) with two-pointer swap; SLL O(n) with removeFirst+addFirst on second list; SLL with node access O(n) by flipping pointers
- Remove middle element repeatedly: Array O(n²) total (n shifts each time); SLL O(n²) total (n-step traversal to find the middle each time)
Example (MID 2019 Q7): Reverse a sequence using ADT operations only. Array: O(n) swap pairs. SLL (ADT only, no node access): O(n²) since next() is O(1) but finding elements requires traversal. Answer: B.
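The "SLL with node access O(n) by flipping pointers" case from the bullets looks like this in code. A minimal sketch (Node class and names are mine):

```java
// In-place reversal of a singly-linked list by flipping next pointers:
// O(n) time, O(1) extra space (direct node access assumed, not just the ADT).
public class SLLReverse {
    static final class Node {
        int value; Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    static Node reverse(Node head) {
        Node prev = null;
        while (head != null) {
            Node next = head.next;  // save the rest of the list
            head.next = prev;       // flip the pointer
            prev = head;
            head = next;
        }
        return prev;                // old tail is the new head
    }

    public static void main(String[] args) {
        Node head = new Node(1, new Node(2, new Node(3, null)));
        for (Node n = reverse(head); n != null; n = n.next) System.out.print(n.value + " ");
    }
}
```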
7. Dynamic Array Resizing & Amortized Analysis Very High
MID 2018 Q7-Q8 • 2019 Q6 • 2024 Q5 • 2025 Q6 |
END 2022 Q4 • 2023 Q4 • 2025 Q3 |
RES 2019 Q3 • 2022 Q5 • 2024 Q2,Q4
Determine amortized time complexity of operations on dynamic arrays with different resizing strategies (doubling, adding constant, geometric factor). Also applies to stack push/pop backed by dynamic arrays.
Methods to Solve
- Geometric growth (multiply by factor ≥ 2): Amortized O(1) per operation. Total cost of n operations is O(n)
- Arithmetic growth (add constant k): Amortized O(n) per operation. Total cost of n operations is O(n²)
- Why geometric works: Copy cost for n elements is at most n + n/2 + n/4 + ... = O(n) total (geometric series)
- Why arithmetic fails: Copy cost is n + (n-k) + (n-2k) + ... which sums to O(n²/k) = O(n²)
- Shrinking strategy: To maintain O(1) amortized for both add and remove, shrink to C/2 when n < C/4 (NOT when n < C/2, which causes thrashing)
- Amortized meaning: Average over a sequence of n operations is O(1), not each individual operation. Some single operations may take O(n)
Example (MID 2018 Q7-Q8): methodZ resizes by f*s+1 (geometric, factor 2): calling it n times costs O(n) total, i.e. O(1) amortized. If changed to s+3 (arithmetic): n calls cost O(n²) total.
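The geometric-series bound is easy to confirm empirically by counting how many elements the resizes actually copy. A doubling-array sketch (class name and the copy counter are mine):

```java
// Doubling dynamic array: count element copies over n pushes and confirm
// the geometric-series bound (total copies < 2n, hence O(1) amortized).
public class DynArray {
    private int[] data = new int[1];
    private int size = 0;
    long copies = 0;                       // total elements moved by resizes

    void push(int x) {
        if (size == data.length) {         // full: double the capacity
            int[] bigger = new int[2 * data.length];
            System.arraycopy(data, 0, bigger, 0, size);
            copies += size;
            data = bigger;
        }
        data[size++] = x;
    }

    public static void main(String[] args) {
        DynArray a = new DynArray();
        int n = 1_000_000;
        for (int i = 0; i < n; i++) a.push(i);
        System.out.println("copies = " + a.copies + " < 2n = " + 2L * n);
    }
}
```

With an arithmetic strategy the same counter grows quadratically, which is exactly the contrast the exam questions probe.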
8. Tree Traversals (Preorder, Inorder, Postorder, BFS) Very High
MID 2019 Q20-Q22 • 2022 Q11 • 2024 Q8 • 2025 Q8-Q9 |
END 2020 Q12 • 2022 Q7,Q9 • 2023 Q5 • 2024 Q5 • 2025 Q6,Q13 |
RES 2023 Q5
Determine traversal order for a given tree (from array representation or linked nodes). Identify which traversal a code snippet implements. Reconstruct tree from traversal output. Compare traversal orders for special tree shapes.
Methods to Solve
- Preorder: Visit node FIRST, then left subtree, then right subtree (root, L, R)
- Inorder: Visit left subtree, then node, then right subtree (L, root, R). For BST, gives sorted order
- Postorder: Visit left subtree, then right subtree, then node LAST (L, R, root). Root is always last
- BFS (level-order): Visit level by level, left to right. Uses a queue
- Last visited node: Preorder = rightmost leaf on deepest right path; Inorder = rightmost node; Postorder = root; BFS = last node at deepest level (rightmost)
- Complete binary tree: Postorder visits all leaves before any internal node only for trees of height 1
- Line tree: Preorder and BFS give the same order (only one path to traverse)
- Reconstruct from postorder: Last element is root; for complete tree, use the structure constraint to assign values level by level
Example (MID 2025 Q9): Code pushes root to stack, then recurses right, then left. Popping from stack reverses this: left subtree first, then right, then root = postorder.
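The three recursive orders in the bullets map one-to-one onto where the "visit" line sits relative to the two recursive calls. A minimal sketch (class and names are mine):

```java
import java.util.ArrayList;
import java.util.List;

// The three recursive traversal orders, written exactly as the
// (root, L, R) / (L, root, R) / (L, R, root) rules describe.
public class Traversals {
    static final class Node {
        int v; Node l, r;
        Node(int v, Node l, Node r) { this.v = v; this.l = l; this.r = r; }
    }

    static void pre(Node n, List<Integer> out)  { if (n == null) return; out.add(n.v); pre(n.l, out); pre(n.r, out); }
    static void in(Node n, List<Integer> out)   { if (n == null) return; in(n.l, out); out.add(n.v); in(n.r, out); }
    static void post(Node n, List<Integer> out) { if (n == null) return; post(n.l, out); post(n.r, out); out.add(n.v); }

    public static void main(String[] args) {
        // BST:   2
        //       / \
        //      1   3
        Node root = new Node(2, new Node(1, null, null), new Node(3, null, null));
        List<Integer> p = new ArrayList<>(), i = new ArrayList<>(), q = new ArrayList<>();
        pre(root, p); in(root, i); post(root, q);
        System.out.println("pre=" + p + " in=" + i + " post=" + q);
    }
}
```

On the BST above, inorder yields the sorted order [1, 2, 3], as the bullets state.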
9. Hash Table Operations (Linear Probing / Separate Chaining) Very High
END 2020 Q17-Q18 • 2022 Q13 • 2023 Q10-Q11 • 2024 Q10-Q11 • 2025 Q10 |
RES 2019 Q12 • 2022 Q13 • 2023 Q10-Q11 • 2024 Q11 • 2025 Q10
Trace insert/delete/search operations on hash tables using linear probing or separate chaining. Handle DEFUNCT markers, determine bucket contents after a sequence of operations, and analyze expected time complexity with different load factors.
Methods to Solve
- Hash function: h(k) = k mod N gives the home bucket. If occupied, probe linearly: (h(k)+1) mod N, (h(k)+2) mod N, ...
- Insert (linear probing): Find first empty or DEFUNCT cell starting from h(k)
- Delete (linear probing): Mark cell as DEFUNCT (do NOT leave empty, or search will break)
- Search: Start at h(k), probe until finding the key, an empty cell (not found), or wrapping around
- Separate chaining: Each bucket stores a list. Insert at end of list. Expected time O(1+λ) where λ = n/N
- Load factor λ: λ < 1 required for linear probing. For O(1) expected time, keep λ < 0.5-0.75. Resize when threshold exceeded
- hashCode/equals contract: If a.equals(b), then a.hashCode() == b.hashCode(). Converse is NOT required
Example (END 2020 Q18): Delete key 16 (mark DEFUNCT), insert 27 at h(27)=27 mod 11, insert 6 which probes past DEFUNCT. Track each step carefully.
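The DEFUNCT handling in the bullets can be sketched compactly. This is my own minimal version (the Integer.MIN_VALUE sentinel and all names are my choices, not from the course); it assumes the load factor stays below 1 so probing terminates:

```java
// Linear probing with a DEFUNCT marker, h(k) = k mod N.
// An EMPTY (null) slot stops a search; a DEFUNCT slot does not,
// since it may sit in the middle of another key's probe sequence.
public class LinearProbing {
    static final Integer DEFUNCT = Integer.MIN_VALUE; // sentinel (my choice)
    private final Integer[] table;

    LinearProbing(int capacity) { table = new Integer[capacity]; }

    void insert(int key) {
        int i = key % table.length;
        while (table[i] != null && !table[i].equals(DEFUNCT)) i = (i + 1) % table.length;
        table[i] = key;                    // first empty or DEFUNCT slot
    }

    boolean remove(int key) {
        int i = key % table.length;
        for (int steps = 0; table[i] != null && steps < table.length; steps++) {
            if (table[i] == key) { table[i] = DEFUNCT; return true; }
            i = (i + 1) % table.length;    // probe past collisions and DEFUNCTs
        }
        return false;
    }

    boolean contains(int key) {
        int i = key % table.length;
        for (int steps = 0; table[i] != null && steps < table.length; steps++) {
            if (table[i] == key) return true;
            i = (i + 1) % table.length;
        }
        return false;                      // hit EMPTY or wrapped around
    }

    public static void main(String[] args) {
        LinearProbing t = new LinearProbing(11);
        t.insert(5); t.insert(16);   // both hash to bucket 5; 16 probes to 6
        t.remove(5);                 // bucket 5 becomes DEFUNCT
        System.out.println(t.contains(16)); // search probes past the DEFUNCT slot
    }
}
```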
10. Space Complexity Analysis (Iterative & Recursive) Very High
MID 2018 Q4,Q21d • 2019 Q3 • 2025 Q2-Q3 |
END 2020 Q24 • 2022 Q2,Q5 • 2023 Q2 • 2024 Q16 • 2025 Q2 |
RES 2019 Q2,Q24d • 2023 Q2 • 2024 Q16 • 2025 Q2
Determine the worst-case space complexity of a method, accounting for stack frames in recursion, array allocations, and whether arrays are passed by reference or value.
Methods to Solve
- Iterative methods: Usually O(1) extra space (variables only), unless new arrays/structures are allocated
- Linear recursion (1 call, depth d): O(d) stack frames. If each frame allocates O(k) space, total is O(d · k)
- Binary recursion (2 calls, depth d): Space is O(d), NOT O(2ᵈ), because only one branch is active at a time on the call stack
- Arrays passed by reference: Passing an array to a method does NOT copy it — no extra space for the array
- Tail recursion optimization: Reuses the current stack frame, reducing space from O(n) to O(1). The recursive call must be the LAST operation
- New array allocation in each recursive call: If each call allocates an array of size n, n-1, n-2, ..., total space is O(n²)
- Local variables: Deallocated when method returns, so only count the maximum depth of active frames
Example (MID 2025 Q2-Q3): methodW creates a new array of size n-1 at each recursive call (depth n). Without tail recursion: all frames active = O(n²). With tail recursion: only one frame at a time = O(n).
11. Recursion Analysis (Type, Tail Calls, Optimization) Very High
MID 2018 Q5 • 2019 Q3-Q4 • 2024 Q2 • 2025 Q2-Q3 |
END 2020 Q5-Q6 • 2022 Q2 • 2024 Q2 • 2025 Q2 |
RES 2019 Q2
Identify whether a recursive method uses linear or binary recursion, whether it contains tail calls, and what changes would enable tail recursion optimization. Compare recursive vs iterative implementations.
Methods to Solve
- Linear recursion: At most ONE recursive call per execution path (may have multiple branches, but each branch makes at most one call)
- Binary recursion: TWO recursive calls in a single execution path (e.g., return f(n-1) + f(n-2))
- Tail call: The recursive call is the LAST operation before returning. "return f(n-1)" is tail; "return 1 + f(n-1)" is NOT tail (addition happens after)
- Tail recursion optimization: Compiler reuses the stack frame, reducing space from O(n) to O(1). Java does NOT perform this optimization automatically
- Making a method tail-recursive: Move any work after the recursive call to before it (e.g., swap lines so recursive call is last)
- Recursive vs iterative: Same time complexity, but iterative uses O(1) space vs O(n) for non-tail recursion
Example (MID 2018 Q5): Method f has "return f(n/2)" (tail call) and "return 1 + f(n-1)" (NOT tail). It uses linear recursion (one call per path) and contains at least one tail call. Answer: A.
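The tail vs non-tail distinction comes down to whether any work remains after the recursive call returns. A minimal sketch (names are mine):

```java
// Tail vs non-tail recursion: sumTo does work AFTER the call returns
// (the addition), so it is NOT a tail call; sumAcc passes the partial
// result down, so the recursive call is the last operation. Note Java
// itself performs no tail-call optimization, but the accumulator form
// is the shape a loop rewrite (or an optimizing compiler) can run in O(1) space.
public class TailDemo {
    static long sumTo(long n) {                 // NOT tail: "+" happens after return
        return (n == 0) ? 0 : n + sumTo(n - 1);
    }

    static long sumAcc(long n, long acc) {      // tail call: nothing left to do
        return (n == 0) ? acc : sumAcc(n - 1, acc + n);
    }

    public static void main(String[] args) {
        System.out.println(sumTo(100) + " " + sumAcc(100, 0)); // both 5050
    }
}
```

Converting sumTo to sumAcc is exactly the "move the work before the call" transformation the bullets describe.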
12. (2,4) Tree Operations (Insert / Delete / Overflow / Underflow) High
END 2020 Q20 • 2023 Q14 • 2024 Q14 • 2025 Q12 |
RES 2019 Q16 • 2022 Q15 • 2023 Q12 • 2024 Q14 • 2025 Q12
Trace insertions and deletions in (2,4) trees. Count overflow splits (on insert) and underflow fusions/transfers (on delete). Determine the number of nodes of each type (2-node, 3-node, 4-node) after operations.
Methods to Solve
- Node types: 2-node (1 key, 2 children), 3-node (2 keys, 3 children), 4-node (3 keys, 4 children)
- Insert: Find the correct leaf, add key. If node becomes a 5-node (overflow): split into two 2-nodes and push middle key up to parent. This may cascade
- Delete: If key is in internal node, replace with inorder successor (from leaf), then delete from leaf. If node becomes a 1-node (underflow): try transfer from sibling, else fusion with sibling (may cascade)
- Transfer (rotation): Borrow a key from a sibling through the parent. Sibling must have ≥ 2 keys
- Fusion: Merge underflow node with a sibling and pull down a key from parent. Parent may underflow too
- All leaves at same depth: (2,4) trees are always perfectly balanced
Example (END 2023 Q14): Deleting 10 from a (2,4) tree: replace with successor, then the leaf underflows, fusion cascades upward. Count remaining 2-nodes.
13. Red-Black Tree Operations & (2,4) Tree Correspondence High
END 2020 Q25 • 2022 Q16 • 2023 Q15 • 2024 Q15 • 2025 Q12 |
RES 2019 Q16 • 2022 Q15 • 2023 Q13 • 2024 Q15
Count red/black nodes after insertions or deletions in red-black trees. Convert between (2,4) trees and red-black trees. Understand the correspondence between overflow/underflow and recoloring/restructuring.
Methods to Solve
- Properties: Root is black. Red nodes cannot have red children. Every path from root to null has the same number of black nodes
- (2,4) ↔ Red-Black correspondence: 2-node = single black node; 3-node = black node with one red child; 4-node = black node with two red children
- Insert: New node is red. If parent is red (double-red): check uncle. Red uncle ⇒ recolor (push black down). Black uncle ⇒ restructure (rotate)
- Delete: More complex. May require recoloring and/or restructuring to fix double-black
- Height: A red-black tree with n nodes has height ≤ 2·log2(n+1). Minimum leaf depth in RB tree of height h is ⌈h/2⌉
- Multiple RB trees from one (2,4) tree: A 3-node can have the red child on left OR right, creating multiple valid RB trees
14. Dijkstra's Algorithm Execution High
END 2022 Q17 • 2023 Q13 • 2024 Q12 • 2025 Q14 |
RES 2019 Q18 • 2022 Q19 • 2023 Q14 • 2024 Q9 • 2025 Q14
Trace Dijkstra's algorithm on a weighted graph: track distance label updates, count relaxations, determine when vertices are finalized, and identify the order of extraction from the priority queue.
Methods to Solve
- Algorithm: Initialize all distances to ∞ except source (0). Use a min-priority queue. Extract minimum, relax all outgoing edges
- Relaxation: For edge (u,v) with weight w: if d[u] + w < d[v], update d[v] = d[u] + w
- Count distance updates: A vertex's distance label changes each time a shorter path is found. Track each relaxation step
- Complexity: O((n + m) log n) with binary heap PQ; O(n²) with unsorted list PQ (each removeMin scans the list); with a sorted list PQ the m key updates cost O(n) each, giving O(n·m)
- Requirement: All edge weights must be non-negative. Does NOT work with negative edges
- Greedy property: Once a vertex is extracted from PQ, its distance is final (correct)
- Worst-case PQ operations: O(n) insertions, O(n) extractions, O(m) decrease-key operations
Example (RES 2019 Q18): Running Dijkstra from A, vertex F gets distinct non-infinite labels as shorter paths are discovered through different intermediate vertices.
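For checking a trace by hand, a compact runnable version helps. This sketch uses java.util.PriorityQueue with lazy deletion (stale entries are skipped on extraction) because the JDK queue has no decrease-key; the course's adaptable-PQ version differs only in that bookkeeping. Graph encoding and names are mine:

```java
import java.util.*;

// Dijkstra on an adjacency list; lazy deletion stands in for decrease-key.
public class DijkstraDemo {
    static int[] dijkstra(List<int[]>[] adj, int src) {
        int n = adj.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[src] = 0;
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[1]));
        pq.add(new int[]{src, 0});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int u = top[0];
            if (top[1] > dist[u]) continue;        // stale entry: u already finalized
            for (int[] e : adj[u]) {               // e = {neighbor, weight}
                int v = e[0], w = e[1];
                if (dist[u] + w < dist[v]) {       // relaxation
                    dist[v] = dist[u] + w;
                    pq.add(new int[]{v, dist[v]});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        @SuppressWarnings("unchecked")
        List<int[]>[] adj = new List[4];
        for (int i = 0; i < 4; i++) adj[i] = new ArrayList<>();
        adj[0].add(new int[]{1, 4}); adj[0].add(new int[]{2, 1});
        adj[2].add(new int[]{1, 2}); adj[1].add(new int[]{3, 1});
        System.out.println(Arrays.toString(dijkstra(adj, 0))); // [0, 3, 1, 4]
    }
}
```

In the tiny graph above, vertex 1 gets label 4 first and is then relaxed down to 3 via vertex 2, the same label-update pattern the RES 2019 Q18 question counts.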
15. Linked List Operations (SLL / DLL / CLL) High
MID 2018 Q15-Q16 • 2019 Q14-Q15 • 2022 Q5 • 2025 Q5 |
END 2022 Q3,Q6 |
RES 2022 Q3 • 2024 Q3
Trace through linked list code to determine what operation it performs (addFirst, removeLast, etc.). Understand circularly-linked list rotation, identify which data structures can/cannot be built with a given list type.
Methods to Solve
- addFirst (SLL): new Node(e, head); head = newNode; O(1)
- removeLast (DLL): Access tail.getPrevious(), set its next to null, update tail; O(1). For SLL: O(n) since must traverse to second-to-last
- CLL rotate: tail = tail.getNext(). The old head becomes the new tail, and the implicit head (tail's next) shifts forward by one position
- CLL removeFirst: head = tail.getNext().getNext(); tail.setNext(head). Removes the node after tail
- Method identification: Check if new nodes are created (add) or elements are returned (remove). Check which end is affected (head/tail reference updates)
- SLL-based CLL limitation: a circularly-linked list built on singly-linked nodes has no getPrevious (no backward links), so DLL operations like O(1) removeLast are not available
- Inserting at head of CLL: Need to update 2 next references (new node's next and tail's next)
Example (MID 2018 Q16): CLL = {Birch, Cherry, Oak, Pine}, tail → Oak. After rotate: tail → Pine. After removeFirst: remove Birch. Result: {Cherry, Oak, Pine} with head pointing to Cherry.
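The rotate/removeFirst mechanics only need a tail reference. A minimal sketch (class and names are mine; the list below is built in a different order than the exam's, so the traced values differ):

```java
// Circularly-linked list: only a tail reference is kept; the implicit
// head is tail.next. rotate() and removeFirst() are both O(1).
public class CircularList {
    static final class Node {
        String value; Node next;
        Node(String value) { this.value = value; }
    }

    Node tail;                                   // null when empty

    void addLast(String v) {
        Node n = new Node(v);
        if (tail == null) { n.next = n; }        // single node points to itself
        else { n.next = tail.next; tail.next = n; }
        tail = n;
    }

    void rotate() { tail = tail.next; }          // old head becomes the new tail

    String removeFirst() {
        Node head = tail.next;
        if (head == tail) tail = null;           // list had one element
        else tail.next = head.next;              // unlink the node after tail
        return head.value;
    }

    public static void main(String[] args) {
        CircularList l = new CircularList();
        for (String s : new String[]{"Birch", "Cherry", "Oak", "Pine"}) l.addLast(s);
        l.rotate();                              // tail moves from Pine to Birch
        System.out.println(l.removeFirst());     // removes the node after tail
    }
}
```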
16. AVL Tree Operations (Insert / Delete / Rotations) High
END 2020 Q19 • 2025 Q11 |
RES 2019 Q13-Q15,Q17 • 2022 Q14 • 2024 Q13
Trace AVL tree insertions and deletions. Count the number of tri-node restructurings (rotations) needed. Determine the height or maximum number of nodes for a given AVL tree depth.
Methods to Solve
- AVL property: For every node, |height(left) - height(right)| ≤ 1
- Insert: Standard BST insert, then walk up to find first unbalanced node. Perform ONE tri-node restructuring (single or double rotation)
- Delete: Standard BST delete, then walk up. May require MULTIPLE restructurings (up to O(log n))
- Single rotation: When imbalance is on same side (left-left or right-right)
- Double rotation: When imbalance is on opposite side (left-right or right-left)
- Max height with n nodes: h ≤ 1.44 · log₂(n+2). With 7 nodes, max height = 3
- Min nodes for height h: N(h) = N(h-1) + N(h-2) + 1 (Fibonacci-like). N(0)=1, N(1)=2, N(2)=4, N(3)=7, N(4)=12
- Max nodes at depth d: 2ᵈ⁺¹ - 1 (full tree)
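The N(h) table from the bullets is quick to regenerate from the recurrence, which is handy if an exam asks for a height not memorized. A small sketch (names are mine):

```java
// Minimum node counts N(h) for AVL trees of height h, from the
// Fibonacci-like recurrence N(h) = N(h-1) + N(h-2) + 1, N(0)=1, N(1)=2.
public class AvlMinNodes {
    static long[] minNodes(int maxH) {
        long[] N = new long[maxH + 1];
        N[0] = 1; N[1] = 2;
        for (int h = 2; h <= maxH; h++) N[h] = N[h - 1] + N[h - 2] + 1;
        return N;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(minNodes(4))); // [1, 2, 4, 7, 12]
    }
}
```

Reading the table backwards answers "max height with n nodes": N(3) = 7, so 7 nodes can reach height 3 but not 4.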
17. Minimum Spanning Trees (Kruskal's / Prim's) High
END 2020 Q21 • 2022 Q18 • 2024 Q13 • 2025 Q15 |
RES 2019 Q19 • 2022 Q20 • 2023 Q15 • 2024 Q10
Trace Kruskal's or Prim's algorithm on a weighted graph. Count the number of distinct MSTs, identify which edges are included, and understand when the algorithm terminates.
Methods to Solve
- Kruskal's: Sort edges by weight. For each edge, if it connects two different components (clusters), add it. Use Union-Find
- Termination: Stop when n-1 edges are added (n = number of vertices)
- Skip edge: If both endpoints are in the same cluster (would create a cycle)
- Distinct edge weights: If all edge weights are distinct, the MST is unique
- Multiple MSTs: Occur when there are edges with equal weights that can be interchanged
- MST property: In a forest (tree), m = n - 1 (edges = vertices - 1)
- Prim's: Grow MST from a single vertex, always adding the cheapest edge connecting MST to non-MST vertex
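Kruskal's cycle test is just a union-find membership check, and termination is the n-1 counter from the bullets. A compact sketch (names and the path-compression detail are mine):

```java
import java.util.*;

// Kruskal: sort edges by weight, use union-find to skip cycle edges,
// stop after n-1 accepted edges.
public class KruskalDemo {
    static int[] parent;

    static int find(int x) {                       // path-compressing find
        return parent[x] == x ? x : (parent[x] = find(parent[x]));
    }

    static long mstWeight(int n, int[][] edges) {  // edge = {u, v, weight}
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        Arrays.sort(edges, Comparator.comparingInt((int[] e) -> e[2]));
        long total = 0;
        int accepted = 0;
        for (int[] e : edges) {
            int ru = find(e[0]), rv = find(e[1]);
            if (ru == rv) continue;                // same cluster: would form a cycle
            parent[ru] = rv;                       // merge the two clusters
            total += e[2];
            if (++accepted == n - 1) break;        // MST complete
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] edges = {{0,1,4},{0,2,1},{1,2,2},{1,3,5},{2,3,8}};
        System.out.println(mstWeight(4, edges)); // accepts weights 1, 2, 5
    }
}
```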
18. Data Structure Choice for Application High
MID 2024 Q18 • 2025 Q18 |
END 2023 Q18 • 2024 Q18 • 2025 Q18 |
RES 2022 Q24 • 2024 Q18 • 2025 Q18
Given a specific application with its required operations, compare data structures (array, linked list, hash map, BST/AVL, heap) and determine which gives the best time complexity for the given use case.
Methods to Solve
- List all required operations and their frequencies (e.g., n inserts, n removes, k lookups)
- For each candidate data structure, compute the time complexity of each operation:
Hash map: O(1) expected insert/delete/lookup, but NO ordering (no findMin, findNext)
AVL/BST: O(log n) for everything, supports ordering (findMin, findNext, range queries)
Heap: O(log n) insert/removeMin, but O(n) for arbitrary delete or search
Sorted array: O(log n) search (binary search), O(n) insert/delete
Unsorted list: O(1) insert, O(n) search/delete
- Multiply each operation cost by its frequency, sum up for total time
- Key insight: If you need ordering/range queries, use AVL tree. If you only need key lookup, use hash map. If you need repeated min/max extraction, use heap
Example (MID 2024 Q18): Sorted list with O(1) add-highest and O(n) remove-by-value. Array: O(1) add at end, O(n) remove (shift). Linked list: O(1) add at tail, O(n) remove (traverse). Both are O(n) for remove, so equivalent.
19. Priority Queue Implementations Comparison High
MID 2022 Q8 • 2024 Q9 • 2025 Q10,Q18 |
END 2020 Q9 • 2023 Q7 |
RES 2022 Q9 • 2023 Q18
Compare the time complexity of PQ operations (insert, removeMin, peek) across three implementations: unsorted list, sorted list, and binary heap. Determine which is best for specific operation patterns.
Methods to Solve
- Unsorted list PQ: Insert O(1), removeMin O(n), peek O(n) — corresponds to selection sort
- Sorted list PQ: Insert O(n), removeMin O(1), peek O(1) — corresponds to insertion sort
- Heap PQ: Insert O(log n), removeMin O(log n), peek O(1) — corresponds to heap sort
- n inserts + n removes: Unsorted = O(n²), Sorted = O(n²), Heap = O(n log n). Heap wins
- For heapify + n removes: O(n) + O(n log n) = O(n log n) with heap. Alternative: unsorted list to sorted list in O(n log n) + O(n) removes = O(n log n). Same asymptotically
- Merging two PQs: Heap: create combined array + heapify = O(n). Sorted list: merge two sorted lists = O(n). Unsorted list: concatenate = O(1) with linked list
20. Non-Comparison Sorting (Bucket / Radix / Counting Sort) High
END 2020 Q14-Q15 • 2024 Q9 |
RES 2022 Q12 • 2023 Q9 • 2024 Q8 • 2025 Q9
Identify when non-comparison sorts are applicable and their time complexity. Understand bucket sort, counting sort (csort), and radix sort (LSD/MSD) and when each outperforms comparison-based sorts.
Methods to Solve
- Counting sort: O(n + k) time and space, where k is the range of values. Best when k = O(n)
- Bucket sort: Distribute into buckets, sort each bucket. O(n + k) expected time if elements are uniformly distributed
- Radix sort (LSD): Sort by each digit from least significant to most significant using a stable sort (e.g., counting sort). O(d · (n + b)) where d = digits, b = base
- When to use: When keys are integers in a bounded range, or strings of bounded length. NOT suitable for arbitrary comparison-based data
- Stability requirement: Radix sort requires the per-digit sort to be stable (counting sort is stable)
- Not comparison-based: These sorts break the Ω(n log n) lower bound for comparison sorts
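Counting sort's O(n + k) bound and its stability (needed as the per-digit pass of radix sort) both fall out of the prefix-sum structure. A minimal sketch for keys in [0, k] (names are mine):

```java
import java.util.Arrays;

// Counting sort for integer keys in [0, k]: O(n + k) time and space.
// Stable because the final pass walks the input left to right and
// placement positions come from prefix sums of the histogram.
public class CountingSort {
    static int[] sort(int[] a, int k) {
        int[] count = new int[k + 1];
        for (int x : a) count[x + 1 <= k ? x + 1 : k]++;   // histogram, shifted by one
        for (int i = 1; i <= k; i++) count[i] += count[i - 1]; // prefix sums
        int[] out = new int[a.length];
        for (int x : a) out[count[x]++] = x;               // stable placement
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{3, 1, 4, 1, 5, 0}, 6)));
    }
}
```

Swapping the keys for (digit, payload) pairs and keeping the same placement pass is exactly what each LSD radix-sort round does.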
21. Formal Big-O / Big-Ω Proof High
MID 2018 Q23 • 2019 Q25 • 2022 Q17 |
END 2019 Q25 • 2022 Q19 |
RES 2019 Q25 • 2022 Q23
Formally prove that f(n) is O(g(n)) or Ω(g(n)) by stating the mathematical conditions (∃ c, n0) and finding concrete values of c and n0 that satisfy the inequality.
Methods to Solve
- For O(g(n)): Prove ∃ c > 0, n0 ≥ 1 such that f(n) ≤ c · g(n) for all n ≥ n0
- For Ω(g(n)): Prove ∃ c > 0, n0 ≥ 1 such that f(n) ≥ c · g(n) for all n ≥ n0
- Step 1: State the formal condition (the ∃c, ∃n0, ∀n ≥ n0 inequality)
- Step 2: Choose specific values of c and n0 (e.g., c = 1, n0 = 1)
- Step 3: Show the inequality holds for all n ≥ n0. Use algebraic simplification
- Common trick for Ω: Drop positive terms from the larger side; for O, bound each term above
- Be careful with O vs Ω: The inequality direction flips! O uses ≤, Ω uses ≥
Example (MID 2018 Q23): Prove 3(n+1)³ is Ω(n²). Choose c=1, n0=1. For any n ≥ 1: 3(n+1)³ ≥ 3n³ ≥ n² = c·n². QED.
22. Identify Data Structure from Code High
MID 2018 Q6 • 2019 Q5 • 2022 Q3,Q10 • 2024 Q3 |
END 2020 Q3 • 2023 Q3
Given a partial Java implementation (instance variables, method bodies), determine which ADT or data structure the code represents (stack, queue, deque, BST, general tree with parent references, etc.).
Methods to Solve
- Look at instance variables: Array + size variable = stack/queue; Node with 3 references = binary tree with parent; Node with children array = general tree
- Look at method behavior: LIFO (add/remove at same end) = stack; FIFO (add at one end, remove at other) = queue; add at arbitrary position = list
- Resizing strategy: If growth factor is multiplicative (f*s+1), compatible with stack (amortized O(1)). Not suitable for queue (can't efficiently remove from front)
- Node references: 1 reference (next) = SLL; 2 references (prev, next) = DLL; 3 references (left, right, parent) = binary tree with parent; element + children[] + parent = general tree
23. Stack / Queue Behavior Tracing High
MID 2018 Q18 • 2019 Q12 • 2022 Q6 |
END 2020 Q8 • 2025 Q4 |
RES 2019 Q4
Trace through code that uses stacks and/or queues. Determine the order in which elements are retrieved after a series of push/pop/enqueue/dequeue operations.
Methods to Solve
- Stack (LIFO): Last In, First Out. Push adds to top, Pop removes from top. Inserting 1,2,3 and popping gives 3,2,1
- Queue (FIFO): First In, First Out. Enqueue adds to back, Dequeue removes from front. Inserting 1,2,3 and dequeuing gives 1,2,3
- Reversing with stack: Moving all elements from a queue to a stack and back reverses the order
- Even/odd index split: When inserting elements at even indices into a stack and odd indices into a queue, carefully track which elements go where
- Draw the state: At each step, draw the current contents of the stack (top marked) and queue (front marked)
Example (MID 2018 Q18): S={1,2,3} pushed to stack, Q={4,5,6} enqueued. Pop all from S: 3,2,1. Dequeue all from Q: 4,5,6. Answer: B.
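The MID 2018 Q18 trace maps directly onto ArrayDeque, which serves as both stack and queue in Java. A small sketch (names are mine):

```java
import java.util.ArrayDeque;

// LIFO vs FIFO retrieval order: the stack reverses insertion order,
// the queue preserves it.
public class StackQueueTrace {
    static String trace() {
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        for (int x : new int[]{1, 2, 3}) stack.push(x);    // push to top
        for (int x : new int[]{4, 5, 6}) queue.addLast(x); // enqueue at back
        StringBuilder out = new StringBuilder();
        while (!stack.isEmpty()) out.append(stack.pop()).append(' ');       // LIFO
        while (!queue.isEmpty()) out.append(queue.pollFirst()).append(' '); // FIFO
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(trace()); // 3 2 1 4 5 6
    }
}
```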
24. Topological Ordering High
END 2020 Q22 • 2022 Q20a • 2023 Q12 |
RES 2019 Q20 • 2024 Q9 • 2025 Q15
Count the number of valid topological orderings for a DAG, determine if a given sequence is a valid topological ordering, or identify which ordering is NOT valid.
Methods to Solve
- Definition: A topological ordering places all vertices in a linear order such that for every directed edge (u,v), u appears before v
- Only for DAGs: A topological ordering exists iff the graph has no directed cycles
- Algorithm: Repeatedly remove a vertex with in-degree 0 (and its outgoing edges). The removal order is a valid topological ordering
- Counting orderings: At each step, any vertex with in-degree 0 may come next. Count by branching: sum, over each available vertex, the number of orderings of the remaining graph. (Simply multiplying the number of in-degree-0 vertices seen along one removal sequence can give the wrong total, since different choices unlock different vertices)
- Checking validity: For each edge (u,v), verify u appears before v in the given sequence
- If a appears before b in a topological sort: It does NOT necessarily mean there's an edge from a to b, only that there's no path from b to a
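An exact count of topological orderings can be done by branching over every in-degree-0 vertex at each step. This brute force is exponential in general but fine at exam scale; the sketch below (names are mine) counts 3 orderings for the DAG with edge 0→1 and a free vertex 2:

```java
import java.util.*;

// Exact count of topological orderings by backtracking over every
// in-degree-0 vertex at each step.
public class TopoCount {
    static int n;
    static List<Integer>[] adj;

    static long count(int[] indeg, int placed) {
        if (placed == n) return 1;
        long total = 0;
        for (int v = 0; v < n; v++) {
            if (indeg[v] != 0) continue;           // unavailable or already placed
            indeg[v] = -1;                         // mark v as placed
            for (int w : adj[v]) indeg[w]--;
            total += count(indeg, placed + 1);
            for (int w : adj[v]) indeg[w]++;       // backtrack
            indeg[v] = 0;
        }
        return total;
    }

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        n = 3;
        adj = new List[n];
        for (int i = 0; i < n; i++) adj[i] = new ArrayList<>();
        adj[0].add(1);                             // edge 0 -> 1; vertex 2 is free
        int[] indeg = new int[n];
        indeg[1] = 1;
        System.out.println(count(indeg, 0)); // 3: vertex 2 can go in any slot
    }
}
```

Note the naive product rule would report 2·2·1 = 4 for this graph, which is why branching (or careful case analysis) is needed.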
25. Graph Representation Comparison High
END 2020 Q23c • 2022 Q20b-c • 2023 Q18 • 2025 Q18 |
RES 2019 Q22 • 2025 Q11
Compare edge list, adjacency list, adjacency matrix, and adjacency map representations for specific graph operations (iterate vertices, check edge existence, iterate neighbors).
Methods to Solve
- Edge list: Space O(n+m). Check edge: O(m). Iterate neighbors of v: O(m). Good for nothing specific
- Adjacency list: Space O(n+m). Check edge: O(deg(v)). Iterate neighbors: O(deg(v)). Best for sparse graphs
- Adjacency matrix: Space O(n²). Check edge: O(1). Iterate neighbors: O(n). Best for dense graphs with frequent edge queries
- Adjacency map: Space O(n+m). Check edge: O(1) expected. Iterate neighbors: O(deg(v)). Best of both worlds for most operations
- Dense graph: m ≈ n². Adjacency matrix is efficient. Sparse graph: m ≈ n. Adjacency list/map is efficient
26. Iterator Behavior (Snapshot vs Lazy) High
MID 2018 Q19-Q20 • 2019 Q19 • 2024 Q7 • 2025 Q7 |
RES 2022 Q6
Predict the output of code using snapshot vs lazy iterators. Understand that snapshot iterators copy the collection at creation time while lazy iterators reflect ongoing changes.
Methods to Solve
- Snapshot iterator: Makes a copy of the collection at creation. Subsequent modifications to the original collection do NOT affect iteration. Construction: O(n) time, O(n) space
- Lazy iterator: References the live collection. Modifications to the collection ARE reflected during iteration. Construction: O(1) time, O(1) space
- Creating new iterator each loop: elements.iterator() creates a NEW iterator starting from the beginning each time. The loop will always print the first element repeatedly
- Single iterator reused: iterator.next() advances the SAME iterator. Successive calls return consecutive elements
- Java for-each: Syntactic sugar for creating an iterator and calling hasNext()/next(). Uses a lazy iterator
Example (MID 2018 Q19-Q20): printElements1 creates a new iterator each loop iteration ⇒ always prints first element: {10,10,10}. printElements2 reuses one iterator ⇒ prints consecutive: {10,50,100}.
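The two pitfalls from MID 2018 Q19-Q20 reduce to where the iterator is created. A minimal sketch (method and class names are mine):

```java
import java.util.Iterator;
import java.util.List;

// Fresh-iterator-per-pass restarts from the beginning every time;
// a single reused iterator advances through the elements.
public class IteratorDemo {
    static String freshEachPass(List<Integer> elements) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < elements.size(); i++) {
            Iterator<Integer> it = elements.iterator(); // NEW iterator each pass
            out.append(it.next()).append(' ');          // always the first element
        }
        return out.toString().trim();
    }

    static String reused(List<Integer> elements) {
        StringBuilder out = new StringBuilder();
        Iterator<Integer> it = elements.iterator();     // ONE iterator, advanced
        while (it.hasNext()) out.append(it.next()).append(' ');
        return out.toString().trim();
    }

    public static void main(String[] args) {
        List<Integer> e = List.of(10, 50, 100);
        System.out.println(freshEachPass(e) + " | " + reused(e)); // 10 10 10 | 10 50 100
    }
}
```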
27. Explain What Algorithm Does High
MID 2018 Q21a,Q22a • 2019 Q23c,Q24a • 2025 Q16 |
END 2022 Q20a
Given a Java method, describe in plain English what it computes, for which inputs it returns true, and sometimes suggest a more efficient alternative algorithm.
Methods to Solve
- Trace with small inputs: Run the code mentally with arrays of size 3-5 to understand the pattern
- Identify the invariant: What property is being checked/maintained? (e.g., "all elements are positive", "no duplicates", "sorted in strictly increasing order")
- Look at return conditions: When does it return true vs false? This often reveals the purpose
- For "faster solution": Think about sorting (O(n log n)), hash sets for duplicates (O(n)), or mathematical properties (min/max comparison)
Example (MID 2019 Q23c): methodX returns true iff the difference between every pair of elements is at most 1. Faster: find min and max in O(n) and check if max - min ≤ 1.
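The faster O(n) solution mentioned in the example can be sketched like this (the class and method names are hypothetical, not from the exam):

```java
public class RangeCheck {
    // Returns true iff every pair of elements differs by at most 1,
    // i.e. max - min <= 1. One O(n) pass instead of comparing all
    // O(n^2) pairs.
    public static boolean allWithinOne(int[] a) {
        if (a.length == 0) return true;
        int min = a[0], max = a[0];
        for (int x : a) {
            if (x < min) min = x;
            if (x > max) max = x;
        }
        return max - min <= 1;
    }
}
```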
28. Merge Sort Execution & Recursion Tree High
MID 2019 Q16 • 2024 Q12,Q15 • 2025 Q15 |
END 2025 Q9
Trace the merge sort recursion tree: determine subsequences at each level, array state after merging at a specific level, or after k iterations of bottom-up merge sort.
Methods to Solve
- Top-down: Recursively split array in half until size 1, then merge. Level 0 = full array, level k = subarrays of size n/2^k
- Bottom-up: Start with runs of size 1, merge adjacent pairs into runs of size 2, then 4, etc. After iteration k, runs have size 2^k
- Merging at level k: At level k (from leaves), subarrays of size 2^(k-1) are merged into sorted subarrays of size 2^k
- Recurrence: T(n) = 2T(n/2) + c1n + c2. The c1n comes from the merge step (linear scan). Closed form: O(n log n)
- Space: O(n) for the auxiliary array used during merging. Standard merge sort is not in-place
Example (MID 2025 Q15): Bottom-up merge sort on [2,13,6,24,43,1,51,9,10]. After iteration 1 (pairs): [(2,13),(6,24),(1,43),(9,51),(10)]. After iteration 2 (quads): [(2,6,13,24),(1,9,43,51),(10)].
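The bottom-up trace above can be reproduced with a sketch that performs a given number of merge passes. This is an illustrative implementation, not the course's reference code; `passes(a, k)` runs k bottom-up iterations from scratch, so after pass k the sorted runs have size 2^k.

```java
public class BottomUpMergeSort {
    // Perform k bottom-up merge passes; after pass k, sorted runs have
    // size 2^k. Running ceil(log2 n) passes sorts the whole array.
    public static void passes(int[] a, int k) {
        int n = a.length;
        int[] aux = new int[n];
        int width = 1;
        for (int pass = 0; pass < k && width < n; pass++, width *= 2) {
            for (int lo = 0; lo + width < n; lo += 2 * width) {
                merge(a, aux, lo, lo + width, Math.min(lo + 2 * width, n));
            }
        }
    }

    // Merge the sorted runs a[lo..mid) and a[mid..hi) via the aux array.
    private static void merge(int[] a, int[] aux, int lo, int mid, int hi) {
        System.arraycopy(a, lo, aux, lo, hi - lo);
        int i = lo, j = mid;
        for (int k = lo; k < hi; k++) {
            if (i >= mid) a[k] = aux[j++];
            else if (j >= hi) a[k] = aux[i++];
            else if (aux[j] < aux[i]) a[k] = aux[j++];
            else a[k] = aux[i++];
        }
    }
}
```

Running it on the example input confirms the traced states: one pass gives the sorted pairs, two passes the sorted quads.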
29. Quicksort / Quickselect High
END 2022 Q11 • 2023 Q8 • 2024 Q8 |
RES 2019 Q10 • 2023 Q8
Trace in-place quicksort partition steps, count swaps and comparisons, understand worst-case recurrence. Compare quicksort and quickselect (1 vs 2 recursive calls).
Methods to Solve
- Partition: Choose pivot (often last element). Rearrange so elements ≤ pivot are left, > pivot are right. Pivot ends in final position
- In-place partition comparisons: n-1 comparisons for array of size n (compare each element to pivot)
- Worst case: Pivot is always min or max ⇒ T(n) = T(n-1) + cn = O(n^2)
- Expected case: Balanced partitions ⇒ T(n) = 2T(n/2) + cn = O(n log n)
- Quickselect: Like quicksort but only recurses into ONE partition (the one containing the k-th element) ⇒ O(n) expected time
- Dual-pivot quicksort: Two pivots, three partitions. Partition step: 2(n-2) comparisons worst case
- NOT stable: Quicksort does not preserve relative order of equal elements
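The quicksort/quickselect contrast above can be sketched as follows. This is an assumed implementation (Lomuto partition with the last element as pivot, names invented for illustration): quickselect recurses into only the side containing the k-th element, which is why its expected time drops to O(n).

```java
public class QuickSelect {
    // Returns the k-th smallest element (k is 0-based) in expected O(n)
    // time: after partitioning, continue in ONE side only.
    public static int select(int[] a, int k) {
        int lo = 0, hi = a.length - 1;
        while (lo < hi) {
            int p = partition(a, lo, hi);
            if (p == k) return a[p];
            if (k < p) hi = p - 1;   // k-th element is left of the pivot
            else lo = p + 1;         // k-th element is right of the pivot
        }
        return a[lo];
    }

    // Lomuto partition with the last element as pivot: elements <= pivot
    // end up left of it, > pivot right of it. Uses n-1 comparisons on a
    // subarray of size n; the pivot lands in its final sorted position.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] <= pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }
}
```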
30. Positional List Operations (DLL with Sentinels) High
MID 2018 Q17 • 2022 Q7 • 2024 Q4 • 2025 Q5 |
RES 2022 Q4
Count the number of prev/next reference accesses needed for positional list operations (addBefore, removeFirst, replace). Understand the Position abstraction and sentinel (header/trailer) nodes.
Methods to Solve
- Sentinel nodes: Header and trailer are dummy nodes that simplify boundary conditions. Actual elements are between them
- addBefore(p, e): (1) Get predecessor: p.prev, (2) Create new node, set new.prev = predecessor, (3) Set new.next = p, (4) Set predecessor.next = new, (5) Set p.prev = new. Total: 3 prev references
- removeFirst: (1) first = header.next, (2) second = first.next, (3) header.next = second, (4) second.prev = header. Total: 3 next + 1 prev = 4 references
- replace(p, e): Simply set p.element = e. 0 prev references needed (just access the node directly)
- Position abstraction: Encapsulates the node, only exposes getElement(). Prevents unauthorized modifications
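The reference counts above can be checked against a minimal sentinel-based doubly linked list. This sketch exposes nodes directly instead of a full Position abstraction, and the comments mark each prev/next access so the totals from the bullets can be verified.

```java
public class SentinelDList<E> {
    private static class Node<E> {
        E element;
        Node<E> prev, next;
        Node(E e) { element = e; }
    }

    private final Node<E> header = new Node<>(null);   // sentinel before first
    private final Node<E> trailer = new Node<>(null);  // sentinel after last

    public SentinelDList() {
        header.next = trailer;
        trailer.prev = header;
    }

    // Insert e just before node p.
    // prev references: read p.prev, write node.prev, write p.prev => 3.
    public Node<E> addBefore(Node<E> p, E e) {
        Node<E> pred = p.prev;   // 1: prev read
        Node<E> node = new Node<>(e);
        node.prev = pred;        // 2: prev write
        node.next = p;
        pred.next = node;
        p.prev = node;           // 3: prev write
        return node;
    }

    public Node<E> addFirst(E e) { return addBefore(header.next, e); }

    // Remove and return the first element.
    // References: header.next, first.next, header.next=, second.prev=
    // => 3 next + 1 prev = 4 total.
    public E removeFirst() {
        Node<E> first = header.next;   // next read
        Node<E> second = first.next;   // next read
        header.next = second;          // next write
        second.prev = header;          // prev write
        return first.element;
    }

    public E first() { return header.next.element; }
}
```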
31. Time Complexity Polynomial for Iterative Method Low
MID 2018 Q21b • 2019 Q23a |
END 2020 Q23a |
RES 2019 Q23a
Write the exact polynomial T(n) = c0 + c1n + c2n^2 for the worst-case time complexity of an iterative method. Define all constants and map each term to specific lines of code.
Methods to Solve
- c0 (constant term): Method call overhead, variable initialization, return statement
- c1n (linear term): Single loop executing n times, with constant work per iteration
- c2n^2 (quadratic term): Nested loops (e.g., insertion sort), or a call to an O(n^2) subroutine
- Include subroutine costs: If methodX calls inPlaceInsertionSort (O(n^2)), include that as c3 + c4n + c5n^2
- Justify each constant: Name specific lines of code each ci accounts for
- Simplify to Big-O: The highest-degree term dominates, so T(n) = c0 + c1n + c2n^2 is O(n^2)
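As a concrete illustration of where each term comes from, the sketch below (names invented) counts how often the inner-loop body of a doubly nested loop runs: exactly n·n times, which is the source of the c2n^2 term.

```java
public class StepCount {
    // A doubly nested loop executes its body exactly n * n times, which
    // produces the c2 * n^2 term. The outer-loop bookkeeping (increment,
    // comparison) contributes c1 * n, and the setup/return contribute
    // the constant c0.
    public static long innerSteps(int n) {
        long count = 0;                      // c0: initialization
        for (int i = 0; i < n; i++) {        // c1 * n: outer bookkeeping
            for (int j = 0; j < n; j++) {
                count++;                     // c2 * n^2: body runs n*n times
            }
        }
        return count;                        // c0: return
    }
}
```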
32. BST Properties & Operations Low
END 2023 Q5 • 2024 Q12 • 2025 Q6 |
RES 2024 Q12
Identify valid BST inorder traversals, determine properties that can be derived from a BST (inorder = sorted), and understand BST structure constraints.
Methods to Solve
- BST property: For every node, all keys in left subtree < node key < all keys in right subtree
- Inorder traversal of BST: Always gives keys in sorted (non-decreasing) order
- Valid inorder check: The sequence must be sorted. If not sorted, it cannot be a BST inorder traversal
- Preorder from BST: First element is root, then all elements smaller than root (left subtree), then all larger (right subtree)
- From a set S: Inorder traversal is fully determined (sorted order). Preorder is NOT determined (depends on tree shape)
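The "valid inorder check" bullet reduces to a sortedness test, which can be sketched directly (class and method names are hypothetical):

```java
public class InorderCheck {
    // A sequence can be the inorder traversal of some BST iff it is
    // sorted in non-decreasing order (strictly increasing when keys
    // are distinct).
    public static boolean isValidBstInorder(int[] keys) {
        for (int i = 1; i < keys.length; i++) {
            if (keys[i] < keys[i - 1]) return false;
        }
        return true;
    }
}
```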
33. Array-Based Tree Representation Low
MID 2019 Q20-Q21 • 2025 Q8 |
END 2020 Q11
Navigate array-based binary trees using index formulas. Determine which tree shapes use the most/least space, and perform traversals on array-represented trees.
Methods to Solve
- Standard formulas (0-indexed): Left child = 2i+1, Right child = 2i+2, Parent = ⌊(i-1)/2⌋
- Alternative (with l = 2*p+1, r = 2*p+2): Same as above, used in some exam questions
- Space usage: Complete trees use minimal space (no gaps). Degenerate (line) trees waste exponential space: a right-leaning line tree with h nodes can need up to 2^h - 1 array positions
- Smallest tree that doesn't fit in array of size C: A right-leaning line tree (each node has only a right child) maximizes the index needed per node, since the 2i+2 formula grows fastest
- Not suitable for general trees: Array-based representation assumes binary trees with specific index formulas
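The index formulas and the exponential blow-up for line trees can be verified with a small sketch (names invented for illustration):

```java
public class ArrayTreeIndex {
    // Standard 0-indexed formulas for an array-based binary tree.
    public static int left(int i)   { return 2 * i + 1; }
    public static int right(int i)  { return 2 * i + 2; }
    public static int parent(int i) { return (i - 1) / 2; }

    // Index of the node reached by following k right-child steps from
    // the root: 0, 2, 6, 14, ... = 2^(k+1) - 2. So a right-leaning line
    // tree needs array space exponential in its number of nodes.
    public static int rightSpineIndex(int k) {
        int i = 0;
        for (int step = 0; step < k; step++) i = right(i);
        return i;
    }
}
```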
34. Deep Copy / Shallow Copy / Clone Behavior Low
MID 2018 Q10-Q12 |
END 2020 Q10
Trace through code that copies arrays of objects using different clone methods. Determine whether modifications to the original affect the copy (shallow vs deep), and count the number of new objects created.
Methods to Solve
- Shallow copy: Copies the array/structure but shares the same element references. Modifying an element in the original affects the copy
- Deep copy: Copies the array AND creates new copies of each element. Modifications are independent
- Copy by reference: Both variables point to the same object. No new objects created
- clone1() = new Score(this.value): Creates a new independent object (deep copy of that element)
- clone2() = return this: Returns the same object reference (shallow/alias)
- Count new objects: Track how many times "new" is called during execution. clone1 at even indices creates ⌈n/2⌉ new objects
Example (MID 2018 Q10): copy() uses clone1() for even indices (deep copy) and clone2() for odd indices (reference copy). After changing rankA[0].value=40, rankB[0] still shows 25 (deep copied). rankA[1].value=15, rankB[1] shows 15 (shared reference).
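The example's behavior can be reconstructed in a sketch. The exact class shape from the exam is not reproduced here; this is an assumed Score class whose clone1()/clone2() match the descriptions in the bullets (deep copy vs alias), with a copy() that alternates between them by index parity.

```java
public class Score {
    public int value;
    public Score(int value) { this.value = value; }

    // Deep copy: a NEW, independent Score with the same value.
    public Score clone1() { return new Score(this.value); }

    // Alias: the SAME object reference; no new object is created.
    public Score clone2() { return this; }

    // Mixed copy as in the exam example: deep copy at even indices,
    // reference copy at odd indices => ceil(n/2) new objects.
    public static Score[] copy(Score[] src) {
        Score[] dst = new Score[src.length];
        for (int i = 0; i < src.length; i++) {
            dst[i] = (i % 2 == 0) ? src[i].clone1() : src[i].clone2();
        }
        return dst;
    }
}
```

Mutating rankA[0] then leaves rankB[0] untouched (deep copy), while mutating rankA[1] is visible through rankB[1] (shared reference).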
Generated from 16 exams: Midterms 2018, 2019, 2022, 2024, 2025 | Endterms 2019, 2020, 2022, 2023, 2024, 2025 | Resits 2019, 2022, 2023, 2024, 2025
CSE1305 Algorithms & Data Structures, TU Delft