NousResearch has recently released its reasoning model, DeepHermes-3-Llama-3-8B-Preview, with an interesting twist: you can toggle between standard LLM behavior and an enhanced reasoning mode simply by using a specific system prompt.
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
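As a minimal sketch of how the toggle works in practice (assuming LMStudio's OpenAI-compatible local server and a hypothetical model identifier), switching modes is just a matter of including or omitting the system message:

```python
import json

# The DeepHermes reasoning system prompt; omitting it (or using any other
# system message) leaves the model in standard mode.
REASONING_PROMPT = (
    "You are a deep thinking AI, you may use extremely long chains of thought "
    "to deeply consider the problem and deliberate with yourself via systematic "
    "reasoning processes to help come to a correct solution prior to answering. "
    "You should enclose your thoughts and internal monologue inside <think> "
    "</think> tags, and then provide your solution or response to the problem."
)

def build_request(user_prompt: str, reasoning: bool = True) -> dict:
    """Build an OpenAI-style chat payload; the system prompt toggles reasoning."""
    messages = []
    if reasoning:
        messages.append({"role": "system", "content": REASONING_PROMPT})
    messages.append({"role": "user", "content": user_prompt})
    # Model name here is illustrative; use whatever model is loaded locally.
    return {"model": "deephermes-3-llama-3-8b-preview", "messages": messages}

payload = build_request("Can you teach me a cool math proof?")
print(json.dumps(payload, indent=2))
```

The same payload, POSTed to LMStudio's local chat-completions endpoint, is how the experiments below were swapped across different models.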
I was curious to see what would happen if I used this reasoning system prompt with other models, and the results turned out to be quite interesting. Some models followed the system instructions and started mimicking reasoning: they engaged in an internal dialogue within the <think> tag, conversed with themselves, and only output meaningful information for the user outside of the tag.
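Since the mimicking models wrap their monologue in <think> tags, the user-facing answer can be separated from the thinking with a small parser. This is only a sketch; as the examples below show, tag handling in the wild is often messier:

```python
import re

def split_think(response: str) -> tuple[str, str]:
    """Return (thinking, answer): text inside <think> tags vs. everything else."""
    thinking = "\n".join(re.findall(r"<think>(.*?)</think>", response, re.DOTALL))
    answer = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()
    return thinking.strip(), answer

# Illustrative model reply, not an actual transcript.
reply = "<think>The user wants a proof. Induction fits well.</think>Let's try induction."
thinking, answer = split_think(reply)
print(answer)  # → Let's try induction.
```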
Here are a few examples. I used one of the default LMStudio prompts and a couple of other open-ended questions for the test.
Can you teach me a cool math proof? Explain it to me step by step and make it engaging. Ask me questions; don't just output the proof. Use LaTeX for the math symbols.
-----------------------------------------------------------------------
Think about a novel new way of using LLM models
-----------------------------------------------------------------------
solve:
x−7y=−11
5x+2y=−18
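For reference, the system has the unique solution x = −4, y = 1, which a few lines of Python confirm by substitution (handy for checking each model's final answer):

```python
# From the first equation, x = 7y - 11; substitute into 5x + 2y = -18:
# 5(7y - 11) + 2y = -18  =>  37y = 37
y = (-18 + 5 * 11) / (5 * 7 + 2)
x = 7 * y - 11
print(x, y)  # → -4.0 1.0

# Verify against both original equations.
assert x - 7 * y == -11 and 5 * x + 2 * y == -18
```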
qwen2.5-7b-instruct-1m
The internal monologue makes sense, and the output to the user makes minimal reference to the thinking.

For the other prompt, the model generated a summary-like result.

meta-llama-3.1-8b-instruct
The model doesn't use the tag correctly but does get into self-questioning.

For the novel LLM use prompt, it split the discussion into collecting details and finalizing a decision. It didn't use the tag correctly and lacked the self-questioning.

llama-3.1-supernova-lite@q6_k
For the math proof prompt the model didn't start mimicking reasoning, but for the novel LLM use case it used a tag and provided a summary.

llama-3.1-nemotron-70b-reward-hf
The math proof prompt produced a regular response, while the novel LLM idea prompt triggered correct tag use.

gemma-2-9b-it
Gemma kept the calculations within its internal monologue (spread across multiple monologue blocks) and, in the end, provided the correct final answer.

For the math proof it also did a good job of talking to itself.
