In this project, I developed a framework in C++ to stress-test an optimized algorithm against a brute-force solution. The idea is to generate random test cases and compare the outputs of both solutions: the brute-force solution acts as a correctness benchmark, while the optimized solution is evaluated for accuracy and performance across a wide range of inputs.
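As a minimal sketch of the idea, the driver below pits a slow but obviously correct solution against a faster one on thousands of small random inputs and stops at the first mismatch. The problem shown (maximum subarray sum), the function names, and the input ranges are placeholders, not the actual problem from this project.

```cpp
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Brute force: try every subarray in O(n^2); slow but easy to trust.
long long bruteForceMaxSubarray(const std::vector<int>& a) {
    long long best = a[0];
    for (std::size_t i = 0; i < a.size(); ++i) {
        long long sum = 0;
        for (std::size_t j = i; j < a.size(); ++j) {
            sum += a[j];
            best = std::max(best, sum);
        }
    }
    return best;
}

// Optimized: Kadane's algorithm in O(n); this is the solution under test.
long long optimizedMaxSubarray(const std::vector<int>& a) {
    long long best = a[0], cur = 0;
    for (int x : a) {
        cur = std::max<long long>(x, cur + x);
        best = std::max(best, cur);
    }
    return best;
}

int main() {
    std::mt19937 rng(12345);  // fixed seed so any failure is reproducible
    std::uniform_int_distribution<int> lenDist(1, 20);
    std::uniform_int_distribution<int> valDist(-50, 50);

    for (int iter = 1; iter <= 100000; ++iter) {
        // Generate a small random test case.
        int n = lenDist(rng);
        std::vector<int> a(n);
        for (int& x : a) x = valDist(rng);

        long long expected = bruteForceMaxSubarray(a);
        long long actual   = optimizedMaxSubarray(a);

        if (expected != actual) {
            // Print the counterexample so it can be debugged by hand.
            std::cout << "Mismatch on iteration " << iter << "\nInput:";
            for (int x : a) std::cout << ' ' << x;
            std::cout << "\nExpected " << expected << ", got " << actual << '\n';
            return 1;
        }
    }
    std::cout << "All iterations passed\n";
    return 0;
}
```

Keeping the generated inputs small is a deliberate choice: a small counterexample is far easier to trace by hand, and the fixed seed means the same failing case can be regenerated on every run.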
This approach is particularly useful when a test is failing but the exact failing input isn't visible or obvious: it surfaces edge cases that break the optimized solution even when you can't construct such cases by hand.