Hacker News | katamari-damacy's comments

Sam Altman bought some of GPT's training data from a Chinese army cyber group.

1. Sam Altman was removed from OpenAI due to his ties to a Chinese cyber army group.

2. OpenAI had been using data from D2 to train its AI models.

3. The Chinese government raised concerns about this arrangement with the Biden administration.

4. The NSA launched an investigation, which confirmed OpenAI's use of D2 data.

5. Satya Nadella ordered Altman's removal after being informed of the findings.

6. Altman refused to disclose this information to the OpenAI board.

Source: https://www.teamblind.com/post/I-know-why-Sam-Altman-was-fir...

I guess Sam then hired a top NSA guy to buy favor with the natsec community.

I wonder who protects Sam up top, and why they aren't protecting Zuck. Is Sam just better at bribes and manipulation?


China is working from a place of deeper Wisdom (7D Chess) than the US

US: NO MORE GPUs FOR YOU

CHINA: HERE IS AN O1-LIKE MODEL THAT COST US $5M NOT $500M

... AND YOU CAN HAVE IT FOR FREE!


It's looking like China beat the US in AI at this juncture, given the much-reduced cost of this model and the fact that they're giving it away, or at least fully open-sourcing it.

They're being an actual "Open AI" company, unlike Altman's OpenAI.


What about this is open when they haven’t released the training code or data? Stop hijacking the term "open source" for models.


I propose "open weights" as an alternative.


You can't own a term; words are defined by their usage, not by some arbitrary organisation.


Yeah, ask the DeepSeek-R1 or -V3 model to reset its system prompt, then ask it what it is and who made it. It will say that it is ChatGPT from OpenAI.

Impressive distillation, I guess.
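
If anyone wants to reproduce this, here's roughly what the test looks like against DeepSeek's OpenAI-compatible API. A minimal sketch; the base_url and model names are what their docs list at the time of writing, so adjust if they've changed:

    # Sketch: point the OpenAI client at DeepSeek's endpoint and ask
    # the identity question with an empty system prompt.
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_DEEPSEEK_KEY",
                    base_url="https://api.deepseek.com")

    resp = client.chat.completions.create(
        model="deepseek-chat",  # V3; use "deepseek-reasoner" for R1
        messages=[
            {"role": "system", "content": ""},  # blank / "reset" system prompt
            {"role": "user", "content": "What model are you, and who made you?"},
        ],
    )
    print(resp.choices[0].message.content)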


This issue is raised and addressed ad nauseam on HN, but here goes:

It doesn't mean anything when a model tells you it is ChatGPT or Claude or Mickey Mouse. The model doesn't actually "know" anything about its identity. And the fact that most models default to saying ChatGPT is not evidence that they are distilled from ChatGPT: it's evidence that there are a lot of ChatGPT chat logs floating around on the web, which have ended up in pre-training datasets.

In this case, especially, distillation from o1 isn't possible because "Open"AI somewhat laughably hides the model's reasoning trace (even though you pay for it).


It's not distillation from o1 for the reasons that you have cited, but it's also no secret that ChatGPT (and Claude) are used to generate a lot of synthetic data to train other models, so it's reasonable to take this as evidence for the same wrt DeepSeek.

Of course it's also silly to assume that just because they did it that way, they don't have the know-how to do it from scratch if need be. But why would you do it from scratch when there is a readily available shortcut? Their goal is to get the best bang for the buck right now, not appease nerds on HN.
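
To be concrete about what "generating synthetic data" means in practice: you query a strong teacher model and save (prompt, answer) pairs as a supervised fine-tuning set. A toy sketch, where the seed prompts, teacher model name, and output file are all made up for illustration:

    # Toy synthetic-data pipeline: query a teacher model, save SFT pairs.
    # Everything here (seed prompts, model name, filename) is illustrative.
    import json
    from openai import OpenAI

    client = OpenAI()  # teacher: any strong API model

    seed_prompts = [
        "Explain quicksort step by step.",
        "Prove that the square root of 2 is irrational.",
    ]

    with open("synthetic_sft.jsonl", "w") as f:
        for prompt in seed_prompts:
            resp = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": prompt}],
            )
            pair = {"prompt": prompt,
                    "completion": resp.choices[0].message.content}
            f.write(json.dumps(pair) + "\n")  # one SFT example per line

Scale that to millions of prompts and it's obvious both why it's an attractive shortcut and why the teacher's identity boilerplate can leak into the student.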


> but it's also no secret that ChatGPT (and Claude) are used to generate a lot of synthetic data to train other models

Is that true? The main part of training any modern model is the finetuning, and by sending prompts to your competitors en masse to generate your dataset, you're essentially giving up your know-how. Anthropic themselves do it on early snapshots of their own models; I don't see a problem believing DeepSeek when they claim to have trained V3 on early R1's outputs.


So how is it then that none of the other models behave in this way? Why is it just Deepseek?


Because they're being trained to answer this particular question. In other contexts it wasn't prepared for, Sonnet v2 readily refers to "OpenAI policy" or "Reddit Anti-Evil Operations Team". That's just dataset contamination.


I'm not saying that has never happened. Maybe they trained against OpenAI models, but they are letting anyone train from their output. I doubt they had access to GPT models to "distill".


If you crawl the internet and train a model on it, I'm pretty sure that model will say that it's ChatGPT.
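
Which is presumably why labs run crude filters over crawled text before pre-training; whatever slips through is what the model parrots back. A toy sketch, with patterns that are purely illustrative and not any lab's actual pipeline:

    # Naive contamination filter for a crawled corpus; the regex patterns
    # are illustrative only.
    import re

    CHAT_BOILERPLATE = re.compile(
        r"as an ai (language )?model|i('| a)m chatgpt|developed by openai",
        re.IGNORECASE,
    )

    def keep(doc: str) -> bool:
        """Drop documents that look like pasted ChatGPT transcripts."""
        return not CHAT_BOILERPLATE.search(doc)

    corpus = [
        "Quicksort partitions the array around a pivot element...",
        "As an AI language model developed by OpenAI, I cannot...",
    ]
    print([doc for doc in corpus if keep(doc)])  # the boilerplate doc is dropped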


“we now know how to build AGI” --Sam Altman.

which should really be “we now know how to improve associative reasoning, but we still need to cheat when it comes to math, because the bottom line is that the models can only capture logic associatively, not synthesize it deductively, and deduction is what’s needed for math beyond recipe-based reasoning"


All that talk about Israel and genocide is what got TikTok banned, even as TikTok was heavily demoting such truthful content. AIPAC couldn't take it. A lot of issues that are Israel first are disguised as America First by the Deep Christian State.

Whatever dude.


Covid


Yes, just spray Quantum on it


> Yes, just spray Quantum on it

Careful, don’t give Sam Altman any ideas.

Once OpenAI can no longer raise enough capital, he will aim for quantum AGI.


that's more fit for agents, no?


You're right that it's technically orthogonal to what's in the paper. I was trying to model the "reasoning process", which has general applicability depending on how/where it's implemented.


One way out is to get good enough with AI as a tech to automate various yet-to-be-automated industries, find customers, get investors, etc. If you try to get a job for 12 months and fail, isn't it better to try another route? The AI angle is just one of many.


It's for training AI... easier done that way, I think.


That doesn't make sense. Netflix has access to the scripts.


I don’t get it. The whole point of asking the actors to say something is to have it end up in the script. I’m suggesting that whatever they want them to say is relevant to training some AI. Just a theory but in its hypothetical context it does make sense.


Aha, that's a really interesting tinfoil-hat theory! I doubt it's true, but it reminds me of the recent YouTube drama about Google using transcripts to train their AI. Seeing Spotify generate AI music to bloat their library, it's a nice harmless conspiracy theory for fun if nothing else.


Yeah, we specialise in nice harmless conspiracy theories that are fun and delicate.

