archon1410's comments | Hacker News

And of course it's available even in Icelandic, spoken by ~300k people, but not a single Indian language, spoken by hundreds of millions.

The wretched state of India is unbearable to behold...


Please don't take HN threads into nationalistic flamewar. It leads nowhere interesting or good.

We detached this subthread from https://news.ycombinator.com/item?id=44615783.


[flagged]


Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

https://news.ycombinator.com/newsguidelines.html


Yep, noted!


Presumably almost all competitors from India would be fluent in English (given it is the second most spoken language there)? I guess the same is true for Iceland, though.


Yes, and there are also the languages of ex-USSR countries, whose competitors presumably all understand Russian, and so on.

The real reason might be that there's an enormous class of self-loathing elites in India who actively despise the possibility of any Indian language being represented in higher education. This obviously stunts the possibility of them being used in international competitions.


Discussions about Indian politics or the Indian psyche—especially when laced with Indic supremacist undertones—are off-topic and an annoyance here. Please consider sharing these views in a forum focused on Indian affairs, where they’re more likely to find the traction they deserve.


It is not "supremacist" to believe that depriving hundreds of millions of people of higher education in their native language is deeply unjust. This reflection was prompted by a comment on why Indian languages are not represented in international competitions, which was itself prompted by a comment on the competition being available in many languages.

Discussions online have a tendency to go off into tangents like this. It's regrettable that this is such a contentious topic.


> self-loathing elites in India

Your disdain for English-speaking Indian elites (pejoratively referred to as ‘Macaulayites’ by Modi’s supporters) is quite telling. That said, as I mentioned earlier, this kind of discourse doesn’t belong here.


My disdain is for the fact that hundreds of millions of Indians cannot access higher education in their native language, and instead of simply learning a foreign language as a subject like the rest of the world, they have to bear the burden[1] of learning things in a foreign language which they must simultaneously learn. I have disdain for the people responsible for this mess. I do not have any disdain for any language-speaking class, especially not one which I might be part of.

[1]: https://www.mdpi.com/2071-1050/14/4/2168


Much more efficient for us to all speak the same language. Trying to create fragmentation is inefficient.


You should take that up with the IMO then, or with the European Union; they provide services in ~two dozen languages.


Sure, but why worsen the situation by using more languages?


Human culture should not be particularly concerned with efficiency.


Seems like all the p(doom) is coming from Elon himself, and not the AI. An "empowered" but unintelligent "stochastic parrot", which seems to be his view of LLMs, is more likely to hinder than help with one's plan for world domination and annihilation.


> If DeepSeek said May

It is pretty strange: DeepSeek didn't say May anywhere. That was also a Reuters report, based on "three people familiar with the company".[1] DeepSeek itself did not respond, and never made any claims about the timeline.

[1]: https://www.reuters.com/technology/artificial-intelligence/d...


As it is written, it could be three anonymous, random guys from Reddit who heard about DeepSeek online.


The phrasing for quoting sources is extremely codified; it means the journalists have verified who the sources are (either insiders or people with access to insider information).


How does this matter?

If the journalists aren’t fully trusted in the first place… trusting them to strictly adhere to even the best codified rules seems even less likely.


Sure, if you don't trust anything, what's the point? A lot of information relies on anonymous sources, and we usually use a third party to vet them (otherwise, how would they stay anonymous?). Without this system we'd be missing out on a lot of things; if only named sources were used, a lot of things would never come out.

(A lot of things in society break down without trust; maybe that's already how the US is? Where I live it is thankfully still somewhat OK.)


https://www.ndtv.com/world-news/donald-trumps-big-warning-to...

The Washington Post, The New York Times, The New Republic, The Intercept, Rolling Stone, CBS News, CNN, Newsweek, USA Today, NBC News, Der Spiegel (Germany), The Sunday Times (UK), Daily Mail (UK), Al Jazeera (Qatar), RT (Russia), Xinhua (China), Press TV (Iran), Haaretz (Israel), Le Monde (France), El País (Spain) all have been caught using fake anonymous sources.


Do you not understand what “fully trust” means?

No one I’ve ever heard of on HN fully trusts journalists.


Welcome to most China news. Many "well-documented" China "facts" are in fact cases like this: the media taking rumors or straight up fabricating things for clicks, and then self-referencing (or different media referencing each other in a circle) to put up the guise of reliable news.

This is why we need to be critical of journalists nowadays. No longer are they the Fourth Estate, protecting society and democracy by providing accurate information.


Not just "China news", unfortunately.


The second (be critical of journalism as a field's accuracy) doesn't follow from the first (there are bad journalists).

Especially since the alternative is to live in a world without facts.

Which some people would probably love, but I prefer my reality to be constructed from objectivity rather than authority.


That sounds to me like you are excusing a bad reality based on a nonexistent ideal. Saying "there are bad journalists" is a huge understatement; there are many, perhaps even the majority. Ask yourself why society at large has stopped trusting mainstream media: it's not just because there are a "few" bad apples, but because the bad apples are widespread and systemic.

The tendency to compare to a nonexistent ideal is also something I find very weird. This tendency does not exist for many other concepts. For example, when people talk about communism and someone says "hey, $COUNTRY is just one bad apple, it doesn't mean real communism is bad", others are quick to respond with "but all countries doing communism have devolved into tyranny/dictatorship/etc., so real communism doesn't exist and what we've seen is the real deal". I am not criticizing that (common) point of view, but people ought to take responsibility and apply this principle equally to all concepts, including "journalism".

It also doesn't follow that my critique of journalists/journalism means tearing down journalism altogether. It can also mean:

- that people need to stop trusting mainstream journalists blindly on topics they're not adept in. Right now many people have stopped trusting mainstream journalists only for topics they're adept in, but as soon as those journalists write nonsense about something else (e.g. $ENEMY_STATE) then they swallow that uncritically. No. The response should be "they lied about X, what else are they lying about?" instead of letting themselves be manipulated in other areas.

- that society as a whole needs to hold journalism accountable, and demand that it return to the role of the Fourth Estate.


> Ask yourself why society at large has stopped trusting mainstream media

Because certain political interests take the existence of a fact-based, independent power center as a threat to their own power?

And so engineered a multi-decade campaign to indoctrinate people against the news/media, thus removing a roadblock to imposing their own often contrary-to-fact narratives?

Pretending this happened in a vacuum or was grassroots ignores mountains of money deployed with specific intent over spans of time.

> It can also mean that society as a whole needs to hold journalism accountable, and demand that they return to the role of the Fourth Estate.

I absolutely agree with this.

If I had my druthers, the US would reinstate the fairness doctrine (abolished in 1987) and specifically the components requiring large media corporations to subsidize non-profit newsrooms as a public good.

The US would be a better place if we banned 24/7 for-profit news.


Actually, I think one of the researchers at DeepSeek did say so on Twitter, but that tweet has since been deleted.


The original Vending-Bench paper from Andon Labs might be of interest: https://arxiv.org/abs/2502.15840


I read this paper when it came out. It’s HILARIOUS. Everyone should read it and then print copies for their managers.


The blog itself reads as if it were written by an LLM (e.g. "This isn't about X, it's about Y.", "... is timely ...", "X isn't Y").

Weird.

And it has been discussed to death already:

Beware General Claims about “Generalizable Reasoning Capabilities” (of Modern AI Systems) [https://www.lesswrong.com/posts/5uw26uDdFbFQgKzih/beware-gen...]

Seven replies to the viral Apple reasoning paper and why they fall short [https://news.ycombinator.com/item?id=44278403]


Claude Plays Pokemon was the original concept and inspiration behind "Gemini Plays Pokemon". Gemini arguably only did better because it had access to a much better agent harness and was being actively developed during the run.

See: https://www.lesswrong.com/posts/7mqp8uRnnPdbBzJZE/is-gemini-...


Not sure "original concept" is quite right, given it had been tried earlier. For example, here's a 2023 attempt to get gpt-4-vision to play Pokemon (it didn't really work, but it's clearly "the concept"):

https://x.com/sidradcliffe/status/1722355983643525427


I see, I wasn't aware of that. The earliest attempt I knew of was from May 2024,[1] while this gpt-4-vision attempt is from November 2023. I guess Claude Plays Pokemon was the first attempt that had any real success (won a badge), and got a lot of attention over its entertaining "chain-of-thought".

[1] https://community.aws/content/2gbBSofaMK7IDUev2wcUbqQXTK6/ca...


I disagree - this is all an homage to Twitch Plays Pokemon, which was a noteworthy moment in internet culture/history.

https://en.wikipedia.org/wiki/Twitch_Plays_Pok%C3%A9mon


The naming scheme used to be "Claude [number] [size]", but now it is "Claude [size] [number]". The new models should have been named Claude 4 Opus and Claude 4 Sonnet, but they changed it, and even retconned Claude 3.7 Sonnet into Claude Sonnet 3.7.

Annoying.
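
If you ever script against model names, here's a quick defensive sketch that accepts both orderings (the function and the size list are my own, purely illustrative, not Anthropic's API):

    import re

    SIZES = {"haiku", "sonnet", "opus"}

    def normalize(name: str) -> tuple[str, str]:
        # Accept both "Claude 3.7 Sonnet" and "Claude Sonnet 3.7",
        # returning (size, version) regardless of word order.
        parts = name.lower().replace("claude", "").split()
        size = next(p for p in parts if p in SIZES)
        version = next(p for p in parts if re.fullmatch(r"\d+(\.\d+)?", p))
        return size, version

    assert normalize("Claude 3.7 Sonnet") == normalize("Claude Sonnet 3.7")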


It seems like investors have bought into the idea that LLMs have to improve no matter what. I see it at the company I'm currently at: no matter what, we have to work with whatever bullshit these models output. I am, however, looking at more responsible companies for new employment.


I'd argue a lot of the current AI hype is fuelled by hopium that models will improve significantly and hallucinations will be solved.

I'm a (minor) investor, and I see this a lot: People integrate LLMs for some use case, lately increasingly agentic (i.e. in a loop), and then when I scrutinise the results, the excuse is that models will improve, and _then_ they'll have a viable product.

I currently don't bet on that. Show me you're using LLMs smartly and have solid solutions for _today's_ limitations, and it's a different story.
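
To make "agentic (i.e. in a loop)" concrete, this is the basic shape; call_llm and run_tool are hypothetical stubs standing in for a real model client and real tools, not any vendor's API:

    # Minimal sketch of an LLM agent loop.
    def call_llm(history):
        # In practice: send `history` to your model API and parse the reply.
        return {"tool": None, "content": "done"}

    def run_tool(name, args):
        # In practice: dispatch to a real tool (search, code exec, ...).
        return f"ran {name} with {args}"

    def agent(task, max_steps=10):
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = call_llm(history)
            if reply.get("tool") is None:  # model says it's finished
                return reply["content"]
            history.append({"role": "tool",
                            "content": run_tool(reply["tool"], reply["args"])})
        return None  # one of today's limitations: the loop can stall or wander

Every imperfect step compounds across iterations, which is exactly why "the models will improve" ends up doing so much load-bearing work in those pitches.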


Our problem is that non-coding stakeholders produce garbage-tier frontend prototypes and expect us to include whatever garbage they created in our production pipeline! WTF is going on? That's why I'm polishing my resume and getting out of this mess. We're controlled by managers who don't know WTF they're doing.


Maybe a service mentality would help you make that bearable for as long as it still lasts? For my consulting clients, I make sure I inform them of risks, problems and tradeoffs the best way I can. But if they want to go ahead against my recommendation - so be it, their call. A lot of technical decisions are actually business decisions in disguise. All I can do is consult them otherwise and perhaps get them to put a proverbial canary in the coal mine: Some KPI to watch or something that otherwise alerts them that the thing I feared would happen did happen. And perhaps a rough mitigation strategy, so we agree ahead of time on how to handle that.

But I haven't dealt with anyone sending me vibe code to "just deploy", that must be frustrating. I'm not sure how I'd handle that. Perhaps I would try to isolate it and get them to own it completely, if feasible. They're only going to learn if they have a feedback loop, if stuff that goes wrong ends up back on their desk, instead of yours. The perceived benefit for them is that they don't have to deal with pesky developers getting in the way.


It's been refreshing to read these perspectives as a person who has given up on using LLMs. I think there's a lot of delusion going on right now. I can't tell you how many times I've read that LLMs are huge productivity boosters (specifically for developers) without a shred of data/evidence.

On the contrary, I started to rely on them despite them constantly providing incorrect, incoherent answers. Perhaps they can spit out a basic react app from scratch, but I'm working on large code bases, not TODO apps. And the thing is, for the year+ I used them, I got worse as a developer. Using them hampered me learning another language I needed for my job (my fault; but I relied on LLMs vs. reading docs and experimenting myself, which I assume a lot of people do, even experienced devs).


When you get outside the scope of a CRUD app, they fall apart. Trouble is that the business only sees CRUD until we as developers have to fill in the complex states, and that's when all hell breaks loose, because who thought of that? Certainly not your army of frontend and backend engineers who warned you about this for months on end...

The future will be one of broken UIs and incomplete emails saying "I don't know what to do here"...


The sad part is that there is a _lot_ of stuff we can now do with LLMs, that were practically impossible before. And with all the hype, it takes some effort, at least for me, to not get burned out on all that and stay curious about them.

My opinion is that you just need to be really deliberate in what you use them for. Any workflow that requires human review because precision and responsibility matter leads to the irony of automation: the human in the loop gets bored, especially if the success rate is high, and misses the flaws they were meant to react to. Like safety drivers for self-driving car testing: a job that is both incredibly intense and incredibly boring, and very difficult to do well.

Staying in that analogy, driver-assist systems that keep the driver at the wheel, engaged and entertained, are more effective. Designing software like that is difficult. Development tooling is just one use case, but we could build such _amazingly_ useful features powered by LLMs. Instead, what I see most people build, vibe coding and agentic tools, runs right into the ironies of automation.

But well, however it plays out, this too shall pass.


The "first MP3", without the background music, and just the voice, sounds a lot better to me than the original I listened to on YouTube. I liked the MP3 more.

Any way for me to find similar stuff? Just a good voice singing, without music? I know a cappella, and some of it is good, but I'm thinking of something more specific: just one person singing without music, I guess; something poetic.


The a cappella version of "Tom's Diner" is, in fact, the original. The dance version was first put out as an unauthorized remix, but Suzanne Vega liked it, and they negotiated an agreement.


Oh, I didn't know that. Thanks for this!



I had the same impression, but apparently not.

> Many data centers rely on evaporative cooling, or “swamp cooling,” where warm air is drawn through wet pads. Data centers typically evaporate about 80% of the water they draw, discharging 20% back to a wastewater treatment facility, according to Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside.
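
Back-of-the-envelope with those figures (the draw volume is a made-up example, only the 80/20 split comes from the quote):

    draw_liters = 1_000_000           # hypothetical daily draw
    evaporated = 0.80 * draw_liters   # lost to the atmosphere (~80%)
    discharged = 0.20 * draw_liters   # returned to wastewater treatment (~20%)
    print(evaporated, discharged)     # 800000.0 200000.0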


I've also noticed this. Google Search is vastly superior to any LLM (including their own LLM Gemini) for any "tip of my tongue" questions, even the ones that don't contain any exact-match phrase and require natural language understanding. This is surprising. What technology are they using to make Search so amazing at finding obscure stuff from descriptions, while LLMs that were supposed to be good at this badly fail?


Probably some super-fuzzy thesaurus that takes your words, creates a weighted list of similar words, and then does some search matching going down the weighted lists.

Maybe they also take the queries that needed lots of fuzziness to get to an answer, and track what people click, to relate the fuzzy searches to actual results. Keep in mind: what you might think is a super-unique "tip of the tongue" question might not be that unique across billions of searches.

Building a search system to find things can be much more optimized than making an AI to return an answer, especially when you have humans in the loop that can tweak things based on analytics data.
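
A toy sketch of that weighted-expansion idea (the thesaurus and weights are invented for illustration, not Google's actual system):

    # Expand a query with weighted near-synonyms, then score documents
    # against the expanded term list.
    THESAURUS = {
        "film": [("movie", 0.9), ("picture", 0.5)],
        "sad": [("melancholy", 0.8), ("tragic", 0.6)],
    }

    def expand(query):
        terms = {w: 1.0 for w in query.lower().split()}
        for w in list(terms):
            for syn, weight in THESAURUS.get(w, []):
                terms[syn] = max(terms.get(syn, 0.0), weight)
        return terms

    def score(doc, terms):
        words = set(doc.lower().split())
        return sum(wt for t, wt in terms.items() if t in words)

    docs = ["a melancholy picture about loss", "a happy movie"]
    print(max(docs, key=lambda d: score(d, expand("sad film"))))
    # -> "a melancholy picture about loss"

Layer click feedback on top (bump the weight of expansions that led to clicked results) and you get the human-in-the-loop analytics tuning described above.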

