Hacker News | past | comments | ask | show | jobs | submit | crabmusket's comments

Not just a decent API, but fully open-source and self-hostable.

This. Zulip's topics map exactly to AI chats - you can have the whole team and the bot focused on one thing.

The Zulip team has been admirably cautious with their own approach to AI in the product - which I am so thankful for! - but I am sure someone out there has built the integration to get bots deeply into a Zulip org. And if not, building that integration is so much more achievable than rebuilding the whole of Slack.



It's all about licensing, sadly...

It is somehow less funny today, but in the '90s we would say "is there something wrong with your hands?"

A truly funny story: I wrote an RSS aggregator, and one day I discovered some feeds had died without my noticing. I looked at a feed and it was gone; I looked at my aggregator and the headlines were all there?!?!

Since I gather a lot of feeds I couldn't help but notice that a very large number aren't well-formed. For example, in XML attributes the & (in URLs) is supposed to be escaped as &amp;; if you do that correctly, however, many aggregators won't be able to parse it.
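(To illustrate the escaping rule with a quick Python snippet; the URL here is made up:)

```python
from xml.sax.saxutils import escape

# A raw URL containing an ampersand, as it would appear in a link.
raw = "https://example.com/feed?page=2&sort=new"

# In well-formed XML, a bare & must be written as &amp;
print(escape(raw))  # https://example.com/feed?page=2&amp;sort=new
```

Plenty of feed generators skip this step, and plenty of strict parsers then refuse to touch the result.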

Every other month I wrote little bits of code to address the most annoying issues: 1) if I can't find a <link> or <guid> etc., I eventually just gather <a>'s and take the href. 2) If I really can't find a title for the item, I have it fall back on whatever is in the <a>, since I was gathering those anyway. 3) If I can't even find an <item>, I just look for the things that are supposed to go in the <item>. 4) If I can't find a proper timestamp, I'll try to parse one out of the URL. 5) If the URLs are relative paths, complete them.

What was actually going on: the feed was gone, and it redirected to the home page. In an attempt to parse the "xml", my aggregator eventually resorted to gathering the URL and title from the <a>'s and building valid timestamps from the URLs.
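For the curious, the last-resort end of that fallback chain (heuristics 1, 2, 4 and 5 above) can be sketched in a few lines of Python with the stdlib html.parser; the class and field names are my own invention, not the actual aggregator's code:

```python
import re
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkFallbackParser(HTMLParser):
    """Last-resort 'feed' parser: when no <item>/<link>/<guid> can be
    found, just collect every <a href> and use its text as the title."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.items = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                # Heuristic 5: complete relative paths against the base URL.
                self._href = urljoin(self.base_url, href)
                self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            # Heuristic 2: the link text becomes the item title.
            title = "".join(self._text).strip()
            # Heuristic 4: try to pull a yyyy-mm-dd date out of the URL.
            m = re.search(r"(\d{4})[/-](\d{2})[/-](\d{2})", self._href)
            date = "-".join(m.groups()) if m else None
            self.items.append({"link": self._href, "title": title, "date": date})
            self._href = None

p = LinkFallbackParser("https://example.com/news/")
p.feed('<a href="/news/2026-03-05/matildas">Quick hits</a>')
print(p.items)
```

Which is exactly why a feed that 301s to its HTML home page still "works": the home page is full of <a>'s with dated URLs, so the parser cheerfully keeps emitting items.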


Uh, they lie about everything?

https://www.abc.net.au/news/feed/51120/rss.xml

I haven't fully examined it, but looking at the XML I see its last build date is in 2026 and a headline about the Women's Asian Cup 2026.

abc.net.au/news/2026-03-05/matildas-iran-asian-cup-quick-hits-hayley-raso-mary-fowler/106413886


"smart answering machine" seems like a very apt use case for LLMs, provided the rest of the system works - that a human actually received and acts on the feedback.

This is the thing that drives me crazy. Most of these phone calls should just be emails; I can usually stand to wait a week or two for the company to get back to me. General support funnels like support@example.com have been dead for most consumer-facing technologies for close to a decade at this point. I’m not installing an app for every company I’m forced to interact with when there are already existing, universal technologies available that they could implement if they just priced their products appropriately.

It would be nice if more businesses embraced email instead of requiring phone calls for basic tasks. Imagine how much more productive we could be if we could just send off a quick email with the information and questions.

Instead, what we're likely going to get are "voice agents" calling each other when we could have just used email instead...


Businesses likely don't know a better way because the person selling them software doesn't want them to use an open and federated technology. They want the business to use Slack with a Salesforce CRM, and then add a Jira workflow to top it off.

Most of the time it's simply a matter of not being aware of what's out there, or of nobody showing them a different workflow.


Yeah. I recently had to deal with Amazon's robot. Definitely bird-brained, but close enough that the right objective was accomplished, even though I don't think it ever understood what happened (but woe to the non-native speaker!). The problem is not chatbot customer support; the problem is bird-brained managers who think a system that solves 99% of issues doesn't need a fallback for the other 1%.

Couldn't you Ender's Game a model? Models will play video games like Pokemon, why not Call of Duty? Sorry if this is a naive question, but a model can only know what you feed it as input... how would it know if it were killing someone?

EDIT: didn't see sibling comment. Also, I guess directly operating weaponry is different to producing code for weaponry.

I guess we'll find out the exciting answers to these questions and more, very soon!


No, but you can abliterate one locally:

https://grokipedia.com/page/Abliteration


> feels a bit tragedy-of-the-commons ... I can't quite get the analogy straight in my head

I have a personal theory that "tragedy of the commons" has a very specific meaning, and beyond this meaning it just adds confusion. This isn't your fault - it's an overused phrase.

I'd try to examine the root of your discomfort. Why does it make you feel bad? Avoid thinking about "big ideas" like the commons or the public good.


> Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

Yes. Absolutely.


And what? Get nationalized? Get labelled as terrorists?

The US system doesn't empower a company to say no. It should though.


Yes. Force them to do it the hard way and fight through it. Don’t abdicate in advance

Literally Rule 1 On Fighting Tyranny:

> 1. Do not obey in advance.

> Most of the power of authoritarianism is freely given. In times like these, individuals think ahead about what a more repressive government will want, and then offer themselves without being asked. A citizen who adapts in this way is teaching power what it can do.

https://scholars.org/contribution/twenty-lessons-fighting-ty...


You, I, or a company don't need the system's empowerment to say "no", though. Just say it. I would certainly choose being called a "terrorist" in front of the class over helping to deploy weapons, let alone autonomous ones.

You own nothing but your opinion. (No offense to personal property aficionados)


I don't understand this. For example, what would you have done if you were Ukrainian right now? (Before 2014, arguably the start of the conflict, and after the invasion.)

That is an interesting question, very far from my daily concerns, and it brings up dilemmas when I think about it. My response would probably be "I don't know".

However, Anthropic's situation is very different: there's no ongoing invasion of the USA, and the USA traditionally attacks other countries once in a while (no judgment), so the weapons upgrade will be "useful" in the field.


It is of course possible to argue that the reason there is no ongoing invasion of the USA is because of our continued investment in technology for killing people

That's the same type of thinking conspiracy theorists have: the type you can never disprove.

I am 100% against militarism and wish we didn't need any of this, but the power balance between Russia and Ukraine, or even Israel and the Palestinians, seems to corroborate the thesis. There likely would be no Ukraine war today if Ukraine hadn't voluntarily given up its nukes three decades ago (unproven thesis); there was one because Russia thought it could win. The ongoing (after the "ceasefire") Israeli occupation and attacks on the remnants of Palestinian territory show the same. If you are the weaker party and there is a stronger party that wants what you have (or plain wants to eradicate you), then they'll do so.

> I don't understand this. For example, what would you have done if you were Ukrainian right now? (Before 2014, arguably the start of the conflict, and after the invasion.)

There are a lot of well-meaning people who are very anti-weapon or anti-violence under any circumstances. The problem is that when those people actually need those weapons and that violence, they are so inadequate at it that they become a liability to themselves and others.

I'm not saying I have or know of a solution, but I remember the old saying (paraphrasing) that it's better to be a warrior working a farm than a farmer working a war.


Sure, if that's what it takes to do the right thing.

"We have a million pieces of content to show you, but are not allowed to editorialize" sounds like a constraint that might just spark some interesting UI innovations.

Not being allowed to use the "feed" pattern to shovel content into users' willing gullets based on maximum predicted engagement is the kind of friction that might result in healthier patterns of engagement.


It reminds me of that Apple ad where a guy just rocks up to a meeting completely unprepared and spits out an AI summary to all his coworkers. Great job Apple, thanks for proving Graeber right all along.


Hrm. Mitchell has been very level-headed about AI tools, but this seems like a rare overstep into hype territory.

"This new thing that hasn't been shipped, tested, proven, in a public capacity on real projects should be the default experience going forwards" is a bit much.

I for one wouldn't prefer a pre-chewed machine analysis. That sounds like an interesting feature to explore, but why does it need to be forced into the spotlight?

