Why isn’t LLM reasoning done in vector space instead of natural language? [D]
Why don’t LLMs use explicit vector-based reasoning instead of language-based chain-of-thought? What would happen if they did?
Most of the LLM reasoning we see is expressed in language: step-by-step text, explanations, chain-of-thought-style outputs, and so on. Internally, though, models already operate on high-dimensional vectors (hidden states and embeddings).
So my question is:
Why don’t we have models that reason more explicitly in latent/vector space instead of producing intermediate reasoning in natural language?
Would vector-based reasoning be faster, more compact, and better suited to intuition-like tasks? Or would it make reasoning too opaque, too hard to verify, and unreliable for math, programming, and legal logic?
In other words:
Could an LLM “think” in vectors and only translate the final reasoning into language at the end?
Curious how researchers/engineers think about this.
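To make the question concrete, here's a rough sketch of the kind of mechanism I'm imagining, loosely in the spirit of "continuous chain-of-thought" work like Coconut: instead of decoding intermediate reasoning into tokens, the model's last hidden state is fed back in as the next input embedding for a few "latent steps", and only afterwards is anything translated into text. Everything here is illustrative, not a recipe: the model choice, the number of latent steps, and the feedback scheme are assumptions, and an off-the-shelf GPT-2 isn't trained to make use of these continuous slots, so this only shows the plumbing, not a working reasoner.

```python
# Hypothetical sketch: "think" in vectors for a few steps, decode text only at the end.
# Assumes a Hugging Face causal LM; "gpt2" and NUM_LATENT_STEPS are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Q: If I have 3 apples and eat one, how many are left?\nA:"
ids = tok(prompt, return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(ids)  # (1, T, d_model)

NUM_LATENT_STEPS = 4  # "thoughts" taken in vector space, never decoded to words

with torch.no_grad():
    for _ in range(NUM_LATENT_STEPS):
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        # Final-layer hidden state at the last position becomes the next
        # "input token": a continuous thought instead of a discrete word.
        thought = out.hidden_states[-1][:, -1:, :]
        embeds = torch.cat([embeds, thought], dim=1)

    # Only now translate back into language: greedy-decode a few tokens
    # from the latent-augmented context.
    for _ in range(10):
        logits = model(inputs_embeds=embeds).logits[:, -1, :]
        next_id = logits.argmax(dim=-1, keepdim=True)
        embeds = torch.cat(
            [embeds, model.get_input_embeddings()(next_id)], dim=1
        )
        print(tok.decode(next_id[0]), end="", flush=True)
    print()
```

The trade-off I'm asking about shows up directly here: each latent step can pass along a full hidden vector rather than a single discrete token, but there is nothing human-readable to inspect or verify until the final decode.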