Hacker News | ontouchstart's comments

I made a comment in another thread about my acceptance criteria

https://news.ycombinator.com/item?id=47280645

It is more about LLMs helping me understand the problem than giving me over-engineered, cookie-cutter solutions.


I am running small local models offline in the old-fashioned REPL style, without any agentic features. One prompt at a time.

Instead of asking for answers, I ask for specific files to read or specific command-line tools with specific options. I pipe the results to a file and then load it into the CLI session. Then I turn these commands into my own scripts and documentation (in a Makefile).
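As a sketch, one round of that loop looks something like this (the file names and the grep query are purely illustrative, not from any real session):

```shell
# Stand-in source file for the demo.
printf 'fn main() {\n    // TODO: parse args\n}\n' > /tmp/demo.rs

# Run exactly the specific command the model suggested, nothing more,
# and pipe the result to a file instead of letting the model guess.
grep -n 'TODO' /tmp/demo.rs > /tmp/context.txt

# /tmp/context.txt is what gets loaded into the next CLI prompt;
# once a command proves useful, it graduates into a Makefile target.
cat /tmp/context.txt
```

The point is that the human runs every command and curates what goes back into the context, one file at a time.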

I forbid the model from wandering off and dumping piles of irrelevant markdown text or generated scripts on me.

I ask straight questions and look for straight answers. One line at a time, one file at a time.

This gives me plenty of room to think about what I want and how to get it.

Learning what we want, and what we need to do to achieve it, is a precious learning experience that we don't want to offload to the machine.


> I ask straight questions and look for straight answers. One line at a time, one file at a time.

I've also taken to using the Socratic Method when interrogating an LLM. No loaded questions, squeaky clean session/context, no language that is easy to misinterpret. This has worked well for me. The information I need is in there, I just need to coax it back out.

I did exactly this for an exercise a while back. I wanted to learn Rust while coding a project, and AI was invaluable for accelerating my learning. I needed to know completely off-the-wall things that involved translating idioms and practices from other languages. I also needed to know more about Rust idioms for solving specific problems and coding patterns. So I carefully asked about these things, one at a time, rather than have it write the solution for me. I saved weeks if not months on that activity, and I'm at least dangerous at Rust now (still learning).


This. I'm also using an LLM very similarly and treat it like a knowledgeable co-worker I can ask for advice or to check something. I want to be the one applying changes to my codebase and then running the tests. OK, agents may improve efficiency, but it's a slippery slope. I don't want to sit here all day watching agents modify and re-modify my codebase; I want to do this myself, because it's still fun, though not as much fun as it was pre-AI.

And you don't know what might trigger AI into overthinking. ;-)

https://gist.github.com/ontouchstart/bc301a60067f687b65dad64...

(This is an ongoing experiment, it doesn't matter what model I use.)


I found out the other day that llama.cpp can work with PDF images offline, without an Internet connection. Combined with local Rustdoc and TeX Live, playing with plain TeX becomes fun again.

https://ontouchstart.github.io/rabbit-holes/tex_rabbit_hole_...


It doesn’t matter, you can replace “Rust” with any trendy language at the moment.

FORTRAN, COBOL, PASCAL, C, Java, C++, Perl, PHP, Python, Ruby, JS, TS, and now Rust…

The only thing that matters is that people pay attention to them and then have to use them because their jobs require it.

AI won't change that.

We are slaves of the trend.


Hoarding is becoming an epidemic mental illness in a society of abundance. I don't know what the solution would be.

https://simonwillison.net/guides/agentic-engineering-pattern...


Personally my plan is to hoard more.

It seems that GitHub Gist can render some \LaTeX, but not perfectly.

https://gist.github.com/ontouchstart/bcffb186a753c5b75522fc8...


LLMs only produce markdown [1]; usually the math is wrapped in KaTeX delimiters [2]. The rendering happens in the Web UI. If some math fails to render, you can copy the code, paste it into latex-sandbox [3], and fix it yourself.

[1] https://docs.github.com/en/get-started/writing-on-github/wor...

[2] https://katex.org

[3] https://latex-sandbox.vercel.app
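For example, a model's raw output for a formula is typically just markdown text with KaTeX-style delimiters, which the Web UI then renders (the formula here is only an illustration):

```latex
Inline math is usually wrapped as $e^{i\pi} + 1 = 0$ or \( e^{i\pi} + 1 = 0 \),
and display math as:

$$
\int_0^1 x^2 \, dx = \frac{1}{3}
$$
```

If a renderer chokes on a snippet like this, the fix is usually just adjusting the delimiters by hand in a sandbox.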


What is NUS?

Fascinating report by DEK himself.

Time to sit down, read, digest, and understand it without the help of an LLM.


I don't have time to do that myself yet, so I just dug a quick TL;DR rabbit hole for fun:

https://ontouchstart.github.io/rabbit-holes/llm_rabbit_hole_...


Lol, it's longer than the original article.

Is it possible to build a full OS emulator on top of MMIX?

> The above tools could theoretically be used to compile, build, and bootstrap an entire FreeBSD, Linux, or other similar operating system kernel onto MMIX hardware, were such hardware to exist.

https://en.wikipedia.org/wiki/MMIX

