Blind Spot Navigation in Large Language Model Reasoning with Thought Space Explorer

Jan 1, 2025 · Jinghan Zhang, Fengran Mo, Tharindu Cyril Weerasooriya, Xinyue Ye, Dongjie Wang, Yanjie Fu, Kunpeng Liu · 1 min read
Abstract
Large language models (LLMs) have shown strong reasoning capabilities through chain-structured methods such as Chain-of-Thought. Recent studies optimize thought structures by generating parallel or tree-like structures, switching between long and short reasoning modes, or aligning reasoning steps with task performance. However, these approaches mainly follow the logical directions of previously generated chains and ignore unexplored regions of the solution space. We define this phenomenon as blind spots, which limit the diversity and effectiveness of the reasoning process. To this end, we propose the "Thought Space Explorer" (TSE), a framework for navigating and expanding thought structures to overcome blind spots in LLM reasoning. TSE first identifies key nodes with high impact, then generates new nodes by integrating information from multiple chains, and finally extends new branches through connection strategies. We conduct a series of experiments on math and QA benchmarks. Compared with existing baselines, TSE improves the accuracy of both final answers and intermediate reasoning steps, while maintaining a favorable effectiveness-efficiency trade-off for practical deployment.
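To make the three stages in the abstract concrete, here is a minimal Python sketch of the exploration loop. It is not the paper's implementation: the names (`ThoughtNode`, `key_nodes`, `merge_nodes`, `extend_branch`), the impact scores, and the threshold are hypothetical, and a real system would call an LLM to score and generate thoughts rather than concatenate strings.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three TSE stages named in the abstract:
# (1) identify high-impact key nodes, (2) generate a new node by
# integrating information from multiple chains, (3) extend a new
# branch from a key node. All scores and text are toy placeholders.

@dataclass
class ThoughtNode:
    text: str
    impact: float  # assumed impact score in [0, 1]
    children: list = field(default_factory=list)

def key_nodes(chains, threshold=0.8):
    """Stage 1: collect nodes whose impact exceeds a threshold."""
    return [n for chain in chains for n in chain if n.impact >= threshold]

def merge_nodes(nodes):
    """Stage 2: fuse information from several chains into one new thought.
    A real system would prompt an LLM here; we just concatenate."""
    combined = " + ".join(n.text for n in nodes)
    return ThoughtNode(f"merged({combined})", max(n.impact for n in nodes))

def extend_branch(anchor, new_node):
    """Stage 3: attach the new node as a fresh branch under a key node."""
    anchor.children.append(new_node)
    return new_node

if __name__ == "__main__":
    # Two toy reasoning chains over the same problem.
    chain_a = [ThoughtNode("decompose problem", 0.9), ThoughtNode("try algebra", 0.4)]
    chain_b = [ThoughtNode("draw diagram", 0.85), ThoughtNode("estimate answer", 0.3)]

    pivots = key_nodes([chain_a, chain_b])
    fresh = merge_nodes(pivots)
    extend_branch(pivots[0], fresh)
    print(fresh.text)  # merged(decompose problem + draw diagram)
```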
Type
Note

Presenting at NeurIPS (Math-AI Workshop)
Date: December 6, 2025
Time: Sat 3:30 p.m. - 4:15 p.m.
Location: NeurIPS 2025, Workshop Upper Level Ballroom 6A
Session Type: Poster Session