I am running local offline small models in the old-fashioned REPL style, without any agentic features. One prompt at a time.
Instead of asking for answers, I ask for specific files to read or specific command-line tools with specific options. I pipe the results to a file and then load it into the CLI session. Then I turn these commands into my own scripts and documentation (in a Makefile).
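That loop might look something like the sketch below. The file names, the `grep` query, and the `llama-cli` mention are all illustrative, not the commenter's actual setup:

```shell
#!/bin/sh
set -e
# Sketch of the workflow: run a specific command the model suggested,
# capture its output to a file, feed that file back into the session,
# and record the command as a Makefile target once it proves useful.

mkdir -p /tmp/repl-demo && cd /tmp/repl-demo
printf 'fn parse_config() {}\nfn main() {}\n' > src.rs   # stand-in source file

# 1. Run the specific command yourself and pipe the result to a file:
grep -n "parse_config" src.rs > context.txt

# 2. Load context.txt into the CLI session
#    (e.g. llama.cpp's llama-cli accepts a prompt file via -f).

# 3. Turn the command into documentation -- a Makefile target:
cat > Makefile <<'EOF'
context.txt: src.rs
	grep -n "parse_config" src.rs > context.txt
EOF

cat context.txt
```

The point is that the human stays the executor: the model only names the command, and the Makefile accumulates the ones worth keeping.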
I forbid the model from wandering off into tons of irrelevant markdown text or generated scripts.
I ask straight questions and look for straight answers.
One line at a time, one file at a time.
This gives me plenty of room to think about what I want and how to get it.
Learning what we want, and what we need to do to achieve it, is precisely the learning experience we don't want to offload to the machine.
> I ask straight questions and look for straight answers. One line at a time, one file at a time.
I've also taken to using the Socratic Method when interrogating an LLM. No loaded questions, squeaky clean session/context, no language that is easy to misinterpret. This has worked well for me. The information I need is in there, I just need to coax it back out.
I did exactly this for an exercise a while back. I wanted to learn Rust while coding a project, and AI was invaluable for accelerating my learning. I needed to know completely off-the-wall things that involved translating idioms and practices from other languages. I also needed to know more about Rust idioms to solve specific problems and coding patterns. So I carefully asked these things, one at a time, rather than have it write the solution for me. I saved weeks if not months on that activity, and I'm at least dangerous at Rust now (still learning).
This. I'm also using an LLM very similarly and treat it like a knowledgeable co-worker I can ask for advice or to check something. I want to be the one applying changes to my codebase and then running the tests. OK, agents may improve efficiency, but it's a slippery slope. I don't want to sit here all day watching the agents modify and re-modify my codebase. I want to do this myself, because it's still fun, though not as much fun as it was pre-AI.
I found out the other day that llama.cpp can work with PDF images offline, without an Internet connection. Combined with local Rustdoc and TeXLive, playing with plain TeX becomes fun again.
LLMs only produce markdown [1]; math is usually wrapped in KaTeX delimiters [2]. The rendering happens in the Web UI. If some math fails to render, you can copy the code, paste it into latex-sandbox [3], and fix it yourself.
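For instance, the model's raw output is just markdown text with inline LaTeX, which only becomes typeset math once the Web UI's renderer picks up the delimiters (the delimiters shown are the common inline/display pair; exact conventions vary by UI):

```latex
The roots are \( x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \), since

\[
  ax^2 + bx + c = 0 .
\]
```

If the renderer chokes on a fragment like this, pasting exactly this text into a LaTeX sandbox reproduces the problem outside the chat UI.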
Is it possible to build a full OS emulator on top of MMIX?
> The above tools could theoretically be used to compile, build, and bootstrap an entire FreeBSD, Linux, or other similar operating system kernel onto MMIX hardware, were such hardware to exist.
https://news.ycombinator.com/item?id=47280645
It is more about LLMs helping me understand the problem than giving me over-engineered, cookie-cutter solutions.