QuietSTaR method improves AI's thinking process, responsiveness and strategic cognition
Generative AI
Touchapon Kraisingkorn
3 min read
March 26, 2024

QuietSTaR: Training AI to Think Better Before Responding

What if AI could think internally, like humans, before speaking or acting? This could transform how AI operates, enabling more thoughtful and intuitive responses and enhancing its effectiveness and reliability in diverse contexts.

Contemporary AI systems, particularly Large Language Models (LLMs), still face significant limitations, especially in high-level planning. These systems operate by repeatedly predicting the next word or token from the preceding context, which constrains their ability to engage in step-by-step analysis or strategic thinking.

What is QuietSTaR?

QuietSTaR is a training technique that teaches artificial intelligence (AI) systems to engage in an internal dialogue, improving their ability to think and reason before responding to queries or commands.

How the QuietSTaR Technique Works for AI Training

Researchers have devised a new AI training method called "Quiet-STaR," aimed at teaching AI systems to think before responding to questions or commands. The idea is to give AI an inner monologue, similar to how humans think before speaking or acting. Traditional methods used to train chatbots like ChatGPT do not prioritize this kind of pre-response thinking or anticipate the different directions a dialogue might take. Quiet-STaR, by contrast, has the AI generate multiple internal rationales before responding and produce its answer from a blend of those rationales. Because rationales that lead to incorrect predictions are discarded, the model gradually learns which kinds of internal reasoning help it anticipate what comes next in the dialogue (a toy sketch of this loop follows below).
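To make the mechanism more concrete, here is a minimal toy sketch in Python. Everything in it is an illustrative assumption rather than the authors' implementation: the tiny vocabulary, the `next_token_logprobs` and `sample_rationale` stand-ins, and the fixed `mix_weight` are all hypothetical. The real method uses a transformer LM with learned start-of-thought and end-of-thought tokens, a learned mixing head instead of a constant weight, and a REINFORCE-style gradient update rather than a print statement.

```python
# Toy sketch of the Quiet-STaR training loop (illustration only, not the paper's code).

import math
import random

random.seed(0)

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token_logprobs(context):
    """Toy stand-in for an LM head: returns log-probs over VOCAB."""
    # A real model would condition on `context`; here we return random
    # (normalized) scores so the sketch runs end to end.
    scores = [random.random() for _ in VOCAB]
    z = math.log(sum(math.exp(s) for s in scores))
    return {tok: s - z for tok, s in zip(VOCAB, scores)}

def sample_rationale(context, length=4):
    """Toy stand-in for sampling an internal 'thought' after `context`."""
    return [random.choice(VOCAB) for _ in range(length)]

def mixed_logprob(context, rationale, next_token, mix_weight):
    """Blend the prediction made without and with the rationale."""
    base = next_token_logprobs(context)[next_token]
    with_thought = next_token_logprobs(context + rationale)[next_token]
    p = (1 - mix_weight) * math.exp(base) + mix_weight * math.exp(with_thought)
    return math.log(p)

def quiet_star_step(tokens, n_rationales=4, mix_weight=0.5):
    """For each position: sample several rationales, score how much each one
    helps predict the *actual* next token, and favour rationales that score
    above average (a REINFORCE-style update in the real method)."""
    for i in range(1, len(tokens) - 1):
        context, target = tokens[:i], tokens[i]
        rationales = [sample_rationale(context) for _ in range(n_rationales)]
        rewards = [mixed_logprob(context, r, target, mix_weight)
                   - next_token_logprobs(context)[target]
                   for r in rationales]
        baseline = sum(rewards) / len(rewards)
        for r, reward in zip(rationales, rewards):
            advantage = reward - baseline  # > 0 means this thought helped
            # Real Quiet-STaR scales the gradient of the rationale's
            # log-likelihood by this advantage; here we just report it.
            print(f"pos {i}: thought {r} advantage {advantage:+.3f}")

quiet_star_step(["the", "cat", "sat", "on", "the", "mat"])
```

The point the sketch tries to capture is that each rationale is judged only by how much it improves the model's prediction of the text that actually follows, so the model can teach itself to think without any hand-labelled reasoning data.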

Researchers applied the Quiet-STaR algorithm to Mistral 7B, an open-source LLM, and published their experimental results on arXiv on March 14, 2024 (the paper has not yet been peer-reviewed).

QuietStar: How to train AI to think better

Test Results Show AI's Enhanced Reasoning & Math Skills

Experimental results show significant improvements in AI's thinking and reasoning abilities after training.

In the tests, the version of Mistral 7B trained with Quiet-STaR achieved a reasoning score of 47.2% on the CommonsenseQA benchmark, up from 36.3% for the base model. Although the model still struggled with grade-school mathematics (the GSM8K benchmark), scoring only 10.9%, that is nearly double the 5.9% achieved by the untrained version.

These findings demonstrate that training AI to work through problems step by step via internal dialogue significantly enhances its reasoning and mathematical abilities.

Advancing AI to Human-like Thinking

In conclusion, developing AI systems' ability to think before responding through methods like Quiet-STaR represents a significant step forward, elevating AI's potential to analyze, reason, predict, and learn from data. It addresses current limitations and moves AI toward capabilities that more closely approach human-like cognition.

Feel free to seek advice from our team of AI specialists here.