Hello!
As a handsome local AI enjoyer™, you’ve probably noticed one of the big flaws with LLMs:
They lie. Confidently. ALL THE TIME.
(Technically, they “bullshit”: https://link.springer.com/article/10.1007/s10676-024-09775-5)
I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.
The thing: llama-conductor
llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).
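To make “router” concrete, here’s the shape of the idea in ~20 lines of Python. This is NOT llama-conductor’s actual code, and the backend URL is just a guess at a local llama.cpp server; it only shows what “sits between your frontend and your backend, speaking OpenAI on both sides” means:

```python
# Sketch only: an OpenAI-compatible endpoint your frontend talks to,
# which forwards to whatever backend you point it at (llama.cpp, llama-swap, etc).
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
import httpx

BACKEND = "http://localhost:8080/v1"  # hypothetical llama.cpp server address

app = FastAPI()

@app.post("/v1/chat/completions")
async def chat(request: Request):
    payload = await request.json()
    # A real router does its work right here: KB grounding, memory injection,
    # context capping -- all before the request ever reaches the model.
    async with httpx.AsyncClient(timeout=None) as client:
        resp = await client.post(f"{BACKEND}/chat/completions", json=payload)
    return JSONResponse(resp.json(), status_code=resp.status_code)
```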
Not a model, not a UI, not magic voodoo.
A glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.
TL;DR: “In God we trust. All others must bring data.”
Three examples:
1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)
You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:
- `>>attach <kb>` - attaches a KB folder
- `>>summ new` - generates `SUMM_*.md` files with SHA-256 provenance baked in
- `>>` moves the original to a sub-folder
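The provenance bit is 1990s-simple. A sketch of what “SHA-256 provenance baked in” boils down to (the header format here is my guess, not the conductor’s actual schema):

```python
# Sketch only: stamp a summary with the exact bytes it was derived from,
# so you can later detect when a source doc changed out from under its SUMM.
import hashlib
from pathlib import Path

def summ_header(doc_path: str) -> str:
    digest = hashlib.sha256(Path(doc_path).read_bytes()).hexdigest()
    return f"---\nsource: {doc_path}\nsha256: {digest}\n---\n"
```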
Now, when you ask something like:
“yo, what did the Commodore C64 retail for in 1982?”
…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. Eg:
The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.
Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.
Confidence: medium | Source: Mixed
No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don’t GIGO yourself into stupid.
And when you’re happy with your summaries, you can:
`>>move to vault` - promote those SUMMs into Qdrant for the heavy mode.
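Under the hood, “promote” is basically a Qdrant upsert with the checksum riding along as payload. Hedged sketch: the collection name, chunking, and `embed()` are placeholders, not conductor internals:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

client = QdrantClient(url="http://localhost:6333")  # local Qdrant

def promote(chunks: list[str], digest: str, embed) -> None:
    # embed() is whatever embedding model you run locally (placeholder).
    points = [
        PointStruct(
            id=i,
            vector=embed(text),
            payload={"text": text, "sha256": digest},  # provenance travels with the chunk
        )
        for i, text in enumerate(chunks)
    ]
    client.upsert(collection_name="vault", points=points)
```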
2) Mentats: proof-or-refusal mode (Vault-only)
Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:
- no chat history
- no filesystem KBs
- no Vodka
- Vault-only grounding (Qdrant)
It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:
FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.
Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]
Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.
The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”
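For the curious, the triple-pass has roughly this control flow. `ask()` stands in for one model call; the real prompts and refusal wording live in the repo, so treat this as a shape, not the implementation:

```python
# Rough shape of Mentats' thinker -> critic -> thinker loop.
def mentats(question: str, facts: list[str], ask) -> str:
    if not facts:
        # Nothing relevant in the Vault -> refuse, don't improvise.
        return "FINAL_ANSWER:\nThe provided facts do not contain this information.\nFACTS_USED: NONE"
    ctx = "\n".join(facts)
    draft = ask(f"Answer ONLY from these facts:\n{ctx}\n\nQ: {question}")            # pass 1: thinker
    critique = ask(f"Flag any claim not supported by these facts:\n{ctx}\n\n{draft}")  # pass 2: critic
    return ask(f"Rewrite the answer, dropping flagged claims:\n{draft}\n\n{critique}")  # pass 3: thinker
```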
3) Vodka: deterministic memory on a potato budget
Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.
Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).
- `!!` stores facts verbatim (JSON on disk)
- `??` recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
- CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages (sketch below)
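CTC is deterministic list surgery, nothing clever. A minimal sketch, assuming OpenAI-style message dicts (the defaults are made up; the real caps are configurable):

```python
def cut_the_crap(messages: list[dict], max_msgs: int = 20, max_chars: int = 12000) -> list[dict]:
    """Hard-cap context: keep the system prompt + last N messages, under a char budget."""
    system = [m for m in messages if m["role"] == "system"]
    tail = [m for m in messages if m["role"] != "system"][-max_msgs:]
    # Drop the oldest survivors until the character budget holds.
    while tail and sum(len(m["content"]) for m in system + tail) > max_chars:
        tail.pop(0)
    return system + tail
```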
So instead of:
“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”
you get:
`!! my server is 203.0.113.42`
`?? server ip` → 203.0.113.42 (with TTL/touch metadata)
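Mechanically, `!!`/`??` can be as dumb as a JSON dict with expiry bookkeeping. Sketch only; the field names are my guesses, not Vodka’s actual schema:

```python
import json, time
from pathlib import Path

FACTS = Path("facts.json")

def store(key: str, value: str, ttl_days: int = 30) -> None:
    facts = json.loads(FACTS.read_text()) if FACTS.exists() else {}
    facts[key] = {"value": value, "expires": time.time() + ttl_days * 86400, "touches": 0}
    FACTS.write_text(json.dumps(facts, indent=2))

def recall(key: str) -> str | None:
    facts = json.loads(FACTS.read_text()) if FACTS.exists() else {}
    entry = facts.get(key)
    if entry is None or entry["expires"] < time.time():
        return None  # expired or never stored: admit it, don't guess
    entry["touches"] += 1  # recall itself signals the fact still matters
    FACTS.write_text(json.dumps(facts, indent=2))
    return entry["value"]  # verbatim, no paraphrase
```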
And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.
TL;DR:
If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:
- Primary (Codeberg): https://codeberg.org/BobbyLLM/llama-conductor
- Mirror (GitHub): https://github.com/BobbyLLM/llama-conductor
PS: Sorry about the AI slop image. I can’t draw for shit.
PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.


Oh, it can try…but you can see its brain. That’s the glass-box part of this. You can LITERALLY see why it says what it says, when it says it. And, because it provides references, you can go and check them manually if you wish.
Additionally (and this is the neat part): the router operates entirely outside the jurisdiction of your LLM. The LLM can only ask it questions; it can’t affect the router’s (deterministic) operation. The router gives no shits about your LLM.
Sometimes, the LLM might like to give you some vibes about things. E.g.: IF YOU SHOUT AT IT LIKE THIS, the router’s memory module activates and stores that as a memory (because I figured, if you’re shouting at the LLM, it’s probably important enough in the short term. That, or you’re super pissed).
The LLM may “vibe” a bit (depending on the temp, seed, top_k, etc.), but 100/100, ALL CAPS + >8 WORDS = store that shit into facts.json.
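The trigger itself is a deterministic string check, something like this (my reconstruction, not the exact code):

```python
def is_shouting(msg: str) -> bool:
    # ALL CAPS and more than 8 words -> store it. temp/seed/top_k never touch this path.
    words = msg.split()
    return len(words) > 8 and msg == msg.upper() and any(c.isalpha() for c in msg)

assert is_shouting("MY DENTIST APPOINTMENT IS 2:30PM ON SATURDAY THE 18TH.")
assert not is_shouting("my dentist appointment is at 2:30pm on saturday")
```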
Example:
User: MY DENTIST APPOINTMENT IS 2:30PM ON SATURDAY THE 18TH.
LLM: Gosh, I love dentists! They’re soooo dreamy! <---- PS: there’s no fucking way your LLM is saying this, ever, especially with the settings I cooked into the router. But anywayz
[later]
User: ?? When is my dentist appointment again
LLM: The user’s dentist appointment is at 2:30 PM on Saturday, the 18th. The stored notes confirm this time and date, with TTL 4 and one touch count. No additional details (e.g., clinic, procedure) are provided in the notes.
Confidence: high | Source: Stored notes
Yes, I made your LLM autistic. You’re welcome.