I have been thinking about this, and don’t have a proper answer for myself.
I like LLMs, or in other words, I like that we are getting better at something.
However, I just want to ask: what was the initial problem LLMs were trying to solve, and what problems have they solved so far?
Do you have any examples from your life or work where you can clearly say "we were not able to do this before LLMs, but now we can", or "we were able to do it, but not well enough, and it was causing us some issues; now it is a lot better"?
If the answer is yes, my second question would be: does the total cost of those problems at least equal or exceed the amount invested in these models?
Thanks in advance
Now, before you jump on me saying that AI can be wrong: that is true. But at the same time, I can no longer be 100% sure that whatever SEO-optimized website I land on provides accurate information. If I need solid facts, I usually double-check AI against various other sources. For queries like "best keyboard for software engineers", I'd rather get a table with pros/cons from AI than land on whatever affiliate-driven website is promoted on Google. The LLM gives me a good starting point to either dig deeper into particular products or query further for more suggestions.
Same for coding. I used to Google "how to split a string in ruby" and land on a flame war or a 19-year-old Stack Overflow question. Now I can get an updated answer from whatever LLM you prefer, with a reference to the official documentation. It works for simple queries as well as code snippets.
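For what it's worth, the answer that query boils down to is Ruby's built-in `String#split`; a minimal sketch of what a good response covers (the example strings here are my own, not from any particular LLM answer):

```ruby
# Split on an explicit separator.
parts = "a,b,c".split(",")            # => ["a", "b", "c"]

# With no argument, split divides on runs of whitespace.
words = "hello  world".split          # => ["hello", "world"]

# An optional limit caps the number of resulting fields.
pair = "key=value=extra".split("=", 2)  # => ["key", "value=extra"]
```

That's the kind of concise, documentation-backed snippet that used to take wading through several search results to assemble.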
Lastly, I use LLMs to plan trips or come up with gift ideas. I just throw in my preferences and let the LLM build a rough plan, from which I can iterate further or start doing my own research.