Hacker News | andy99's comments

I tried at some point to sign up for whatever IBM's AI cloud was called. None of the documentation was up to date; when you clicked on things you ended up in circular loops that took you back where you started. Somehow there were several kinds of API keys you could make, most seemingly decoys and only one correct one. The whole experience was like one of those Mario castle levels where if you don't follow the exact right pattern you just loop back to where you started.

It makes sense for IBM; seems like Google is just reaching that stage?


I use OpenAI and Anthropic APIs every day for work. I have never used Google Gemini, precisely because there seems to be a whole different set of friction involved in getting an account. First, I don't want to tie anything to my Google account, especially any form of payment (no idea if I actually need to do this). Second, I don't want AI Studio or whatever; I just want an API similar to the others that I can hit.

I admit I'm completely ignorant about what's really involved; I have never tried and am just going on vague things I've heard, but stories like this definitely reinforce my perception. I even have a Mistral account, Grok, etc., but Google feels like a whole other level of complication.


I feel you on not wanting to tie anything additional to your Google account. Will I somehow do something “naughty” (say spam an emoji during a livestream) that gets me permanently banned for life from all services?

Google really needs to consider separating service bans. I cannot be the only one who would rather go to a competitor than risk angering the black box and destroying my digital life.


This post has a score of -311 and nothing but negative comments. It's pretty clear how hated this is by the community; I'd like to understand SO's thinking here. Is there some silent majority that wishes for more AI here? Certainly hard to imagine.

I haven't looked at the output yet, but came here to say: LLM grading is crap. They miss things, ignore instructions, bring in their own views, have no calibration, and in general are extremely poorly suited to this task. "Good" LLM-as-a-judge products (and none are great) use LLMs to make binary decisions ("do these atomic facts match, yes/no" type stuff) and aggregate them to get a score.

I understand this is just a fun exercise so it’s basically what LLMs are good at - generating plausible sounding stuff without regard for correctness. I would not extrapolate this to their utility on real evaluation tasks.
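The binary-then-aggregate pattern described above could be sketched roughly like this. This is a hypothetical illustration: `judge()` stands in for a single yes/no LLM call and is stubbed with canned answers here, and the check questions are invented for the example.

```python
# Hypothetical sketch: grade an answer by asking many binary questions
# and aggregating, instead of asking an LLM for one holistic score.

def judge(question: str) -> bool:
    # In a real system this would call an LLM with a yes/no prompt
    # and parse the reply. Stubbed with canned verdicts for the sketch.
    canned = {
        "Does the answer state the capital is Paris?": True,
        "Does the answer give the founding year as 1863?": True,
        "Does the answer avoid unsupported claims?": False,
    }
    return canned[question]

def grade(atomic_checks: list[str]) -> float:
    # Each check is an independent binary decision; the score is just
    # the fraction that pass, so calibration lives in the aggregation,
    # not in the model's sense of what a "7/10" means.
    verdicts = [judge(q) for q in atomic_checks]
    return sum(verdicts) / len(verdicts)

checks = [
    "Does the answer state the capital is Paris?",
    "Does the answer give the founding year as 1863?",
    "Does the answer avoid unsupported claims?",
]
print(grade(checks))  # 2 of 3 checks pass -> 0.666...
```

The point of the design is that each LLM call is a narrow, verifiable question, which is the one regime where judge models behave tolerably.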


Your hiring platform uses cloudflare to introduce huge friction in trying to see the job posting.

This is an unpopular opinion here (I won't give my theory as to why), but people seem to imagine that drugs being illegal is the main barrier preventing the proletariat from using them, and that we'd all be rushing to the heroin store the second it became available.

> even though heroin harms and often kills those who consume it

I'm going to stop you right there. Basically the whole opioid epidemic is because heroin is illegal. We'd have way fewer deaths if we'd provided safe and legal access to it. And also American companies would have the profits instead of terrorists and organized criminals.


You have it backwards. The opioid epidemic promotes heroin usage, because some people find it difficult to get access to prescription opioids, especially after the addiction ruins their access to normal jobs.

The path is prescription opioids > addiction > any source of opioid. At least amongst the addicts I've met.

A streetwalker once told me that her dream job was selling cosmetics in a mall. She fantasized about that life. Another was a former RN, until a car accident got her addicted to opioids; she owned a mattress and a change of clothes and a crack pipe.


Your comment has very little to do with your chosen quote.

You're arguing that the scale of the opioid problem is a direct result of the associated laws. The quote just states that heroin is harmful to humans.


Did the decades of Oxy that caused this epidemic not count as safe and legal access? I suspect we've tried allowing opiates more than once or twice in the millennia we've been here.

The key is to regulate it and offer support to people afflicted by it, not let it roam free.

I am as pro-drug-legalisation as they come, but the US opioid epidemic can't be blamed solely on heroin being illegal.

Heroin is illegal in Europe as much as in the US, yet we do not have a horde of zombies high on fentanyl on our city street corners. What's the difference?

I honestly do not have the answer, but there is a brilliant TV show called "The Wire" that shows how the drug problem cannot be traced to a single cause; it is systemic, and you can place the blame at any echelon of society, which means it starts at the top. It's the result of corruption, collusion, lobbying, overpolicing the addicts while underpolicing the doctors and private insurance companies that hand out opioid prescriptions like candy. It's the politicians indirectly pocketing the proceeds of this trade. It's the narcos being propped up by US three-letter agencies because they play a certain role in toppling whoever is today's bad dictator. It's the massive inequality that leaves some minorities with little choice but to start dealing, or start using, to cope with the stress of increasingly expensive food and rent.

Good luck untangling this knot. You'd unravel the entire structure of modern USA.


Big corporate AI products are all currently stupid bolt-ons that some committee decided solved a problem.

When the internet came out, did many legacy companies lead the way with online experiences, figuring out what the real killer apps were now that everyone was connected? I don't know for sure, but I doubt it; I think it gave rise to some of the present crop of big tech, and others reinvented themselves after the use cases were discovered.

All that to say, I expect the same here. In 10 years there will be AI uses we take for granted, built by companies we haven’t heard of yet (plus the coding apps) and nobody will talk about stupid “rephrase with AI” and other mindless crap that legacy companies tried to push.


> Big corporate AI products are all currently stupid bolt-ons that some committee decided solved a problem.

Or not even... maybe someone said all products need to be AI-enabled, so now they are. Just append "AI" to the product name, add a bolt-on that calls an LLM to do something, and declare mission accomplished.


Big corpos have reached the stage where they can hire ex-politburo apparatchiks from the Soviet Union or China, straight into C-suite roles, and nothing will materially change.

I take the point to be that if an LLM has a coherent world model it's basing its output on, this jointly improves its general capabilities, like usefully resolving ambiguity, and its ability to stick to whatever alignment is imparted as part of its world model.

"Sticks to whatever alignment is imparted" assumes what gets imparted is alignment rather than alignment-performance on the training distribution.

A coherent world model could make a system more consistently aligned. It could also make it more consistently aligned-seeming. Coherence is a multiplier, not a direction.


Reminds me that a number of planes accidentally landed on the runway of an Air Force base close to Heathrow, apparently because it shared some similar landmarks (some kind of gas tanks the pilots were using as waypoints):

https://simpleflying.com/pan-am-707-raf-northolt/

