Hacker News | admiralrohan's comments

Working on an original algorithm to explain human behavior from a third-person perspective.

The whole research is divided into 6 stages. In the 2nd stage, I want to use it to mathematically establish the best course of action as an individual.

In the 3rd stage, I will explain common psychological phenomena through the theory: things like narcissism, anxiety, self-doubt, how to forgive others, etc.

In the 4th stage, I will explain how the theory is the fastest way to learn across multiple domains and become a generalist and critical thinker.

In the 5th stage, I will explain how society will unfold if everyone can become a generalist and critical thinker through the theory.

In the 6th and last stage, I will think about how to use this theory to make India the next superpower, as it can give us a demographic advantage.

Shared more about the algorithm here https://x.com/admiralrohan/status/1973312855114998185


I always wondered why people care so much about data. Now I can understand why. Thanks for sharing.

Inevitable.


Most likely the role of "programmer" as we conventionally know it will go away. We have already evolved a lot, from assembly language to npm packages, and we are going to see the next evolution.

On the inability to solve hard problems: I think we are going to tackle even harder problems in the future with AI on our side, just as corporations manage to handle more complex problems than an individual can.


I found it more useful to read many books than to reread one book again and again. This helps me reinforce the same concept from different angles. Our brain is a pattern-matching machine, and it automatically picks up related concepts.


That's true, and it's also the reason why it's so important to ensure your information diet is of high quality. Any concept (especially harmful or radical ones) can be reinforced.

I had to learn this lesson a long while ago when I realized many sites I casually browsed were injecting and repeating many dark thoughts that weren't truly reflective of reality. I've been way more careful of my daily intake and the groups I associate with ever since.


Can relate. My information diet has also changed over time, as what counts as "high quality" is subjective and depends on where I am.

In 2016 I used to browse free webinars; in 2021, YouTube self-help videos. Nowadays I focus only on history books, as I've already learned everything I need for self-help.

And most often we should focus on what we don't know. In my experience, I wasted most of my time rereading stuff I already knew.


Everyone is so negative here, but we have reached the limit of AI scaling with conventional methods. Who knows, Mistral might find the next big breakthrough like DeepSeek did. We should be optimistic.


> but we have reached the limit of AI scaling with conventional methods

We've only just started RL-training LLMs. So far, RL has not used more than 10-20% of the existing pre-training compute budget. There's a lot of scaling left in RL training yet.


Isn't this factually wrong? Grok 4 used as much compute on RL as on pre-training. I'm sure GPT-5 was the same (or used even more).


It was true for models up to o3, but there isn't enough public info to say much about GPT-5. Grok 4 seems to be the first major model that scaled RL compute 10x to near pre-training effort.


Even with pretraining, there's no hard limit or wall in raw performance, just diminishing returns for current applications, plus a business rationale to serve lighter models given current infrastructure and pricing. Algorithmic efficiency of inference at a given performance level has also advanced a couple of OOMs since 2022 (a major part of that is surely model architecture and training methods).

And it seems research is bottlenecked by computation.


> We've just only started RL training LLMs

That's just factually wrong. Even the original ChatGPT model (based on GPT-3.5, released in 2022) was trained with RL (specifically RLHF).


RLHF is not the "RL" the parent is posting about. RLHF is specifically human-driven reward (subjective, doesn't scale, doesn't improve the model's "intelligence", just tweaks behavior), which is why the labs have started calling it post-training rather than RLHF.

True RL is where you set up an environment in which an agent can "discover" solutions to problems by iterating against some kind of verifiable reward, AND the entire space of outcomes is theoretically largely explorable by the agent. Math and coding have proven amenable to this type of RL so far (toy sketch below).
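A minimal toy sketch of what "verifiable reward" means in practice. The names and the dummy policy are illustrative, not any lab's actual setup; the key property is that the reward comes from checking the answer programmatically, not from human judgment:

    # Toy sketch of RL against a verifiable reward (illustrative only).
    import random
    import re

    class DummyPolicy:
        # Stand-in for an LLM; a real setup samples from the model.
        def generate(self, problem: str) -> str:
            return f"I think the answer is {random.randint(0, 10)}"

    def verifiable_reward(completion: str, ground_truth: int) -> float:
        # Reward 1.0 iff the last integer in the completion matches the answer.
        matches = re.findall(r"-?\d+", completion)
        return 1.0 if matches and int(matches[-1]) == ground_truth else 0.0

    def rl_step(policy, problem: str, ground_truth: int, num_samples: int = 8):
        # Sample several completions, score each with the verifier, and
        # return (completion, reward) pairs for the optimizer to learn from.
        completions = [policy.generate(problem) for _ in range(num_samples)]
        return [(c, verifiable_reward(c, ground_truth)) for c in completions]

    print(rl_step(DummyPolicy(), "What is 3 + 4?", ground_truth=7))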


a) 2022 is not that long ago; b) this was an important first step toward usable AI, but not a scalable one. I'd say "RL training" is not the same as RLHF.


The original ChatGPT was like 3 years after the first usable transformer models.


It is still an open question whether RL will (at least easily) scale the same way as pretraining, or whether it is more effective at elicitation.


This move is mostly about expected EU subsidies


Especially with Euclyd entering the space (efficiency for AI workloads), and with founders who have tight ties to ASML, this is the move Europe needs.


Thanks for the hint! I missed the news [1].

[1] https://euclyd.ai/#news


I would make a wild guess that this is a political investment. It's hard to believe Mistral is the right choice to throw €1.7B at for economic reasons.


> It’s hard to believe that Mistral isn’t the right choice to invest €1.7B in for economic reasons.

Why? Cursor, essentially a VSCode fork, is valued at $10B. Perplexity AI, which, as far as I know, doesn't have its own foundational models, boasts a $20B valuation according to recent news. Yet Mistral sits at just $14B.

Meanwhile, Mistral was at the forefront of the LLM take-off, developing foundational models (very lean, performant, and innovative at the time) from scratch and releasing them openly. They set up an API service, integrated with businesses, built custom models and fine-tunes, and secured partnership agreements. They launched a user-facing interface and mobile app that are on par with those of the leading companies, kept pace with "reasoning" and "research" advancements, and, in short, built a solid, commercially viable portfolio. So why on earth should Mistral AI be valued lower, let alone have a mere €1.7B investment in it questioned?

Edit: Apologies, I misread your quote and missed the "isn't" part.


Since 2024, it's hard to make an investment that has no political nature.


i recall them being one of the first to release a mixture-of-experts (MoE) model [1], which was quite novel at the time (minimal gating sketch below). since then, it has looked like a catch-up game for them in mainstream utility; just a week ago they announced support for custom MCP connectors in their chat offering [2].

more competition is always nice, but i wonder what these two companies, separated by several steps in the supply chain, can really achieve together.

[1] https://mistral.ai/news/mixtral-of-experts [2] https://mistral.ai/news/le-chat-mcp-connectors-memories
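for anyone unfamiliar, a minimal sketch of top-k MoE gating. this illustrates the general idea behind models like Mixtral, not their exact implementation:

    # minimal mixture-of-experts routing sketch (numpy); general idea only.
    import numpy as np

    def moe_layer(x, gate_w, experts, k=2):
        # x: (d,) token vector; gate_w: (num_experts, d) router weights;
        # experts: list of callables, each mapping (d,) -> (d,).
        logits = gate_w @ x                    # router score per expert
        top = np.argsort(logits)[-k:]          # keep the k best experts
        weights = np.exp(logits[top])
        weights /= weights.sum()               # softmax over chosen experts
        # only the k selected experts run, which is what keeps MoE cheap
        return sum(w * experts[i](x) for w, i in zip(weights, top))

    # toy usage: 4 experts, each a random linear map over an 8-dim token
    rng = np.random.default_rng(0)
    d, n_exp = 8, 4
    experts = [lambda x, W=rng.normal(size=(d, d)): W @ x for _ in range(n_exp)]
    gate_w = rng.normal(size=(n_exp, d))
    print(moe_layer(rng.normal(size=d), gate_w, experts))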


what next big breakthrough are you claiming deepseek found? MLA? GRPO? these are all small tweaks
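for reference, a toy sketch of the GRPO tweak (group-relative advantages instead of a learned critic), simplified from my reading of the DeepSeek papers:

    # GRPO's group-relative advantage: no learned value network; each
    # completion is scored against its own sampling group. simplified.
    import numpy as np

    def group_relative_advantages(rewards):
        # rewards: scores for G completions sampled from the SAME prompt
        r = np.asarray(rewards, dtype=float)
        std = r.std()
        if std == 0:                  # all completions equally good/bad
            return np.zeros_like(r)
        return (r - r.mean()) / std   # baseline = group mean, not a critic

    # completions better than their siblings get positive advantage and are
    # reinforced; PPO would need a separate value model for this baseline.
    print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))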


I am not an ML person, but my broad understanding is that the innovation was an efficient training method: they trained their model much more cheaply than the US models, and it was dubbed a "Sputnik moment".


yeah that’s basically the media making things up.


Sorry, I don't understand anything; I'm just clicking randomly. I need more context on why you made this and how it works. And how much agency do I have?


I think that's the point. You don't have any agency. There's no way to win.


Is this about nihilism?


About as much agency as in real life. That's the point of the game.


Can you talk about the timeline algorithm? Which posts are getting boosted?


If it runs local LLMs, what are the hardware requirements for my machine? I don't see any mention of that.


Gemma 3n (the model used by this app) would run on any Apple Silicon device (even with 8GB RAM).
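A rough rule of thumb if you want to sanity-check that: weights dominate, so RAM ≈ params × bytes per weight plus runtime overhead. A sketch below; the ~4B parameter count and the 20% overhead are approximations:

    # back-of-envelope RAM estimate for a local LLM; a rough rule of thumb,
    # not exact (real usage also depends on context length and KV cache).
    def model_ram_gb(params_billions, bits_per_weight, overhead=1.2):
        # params * bytes-per-weight, plus ~20% runtime overhead (assumed)
        return params_billions * 1e9 * (bits_per_weight / 8) * overhead / 2**30

    # e.g. a ~4B-parameter model at common quantization levels:
    for bits in (16, 8, 4):
        print(f"{bits:>2}-bit weights: ~{model_ram_gb(4.0, bits):.1f} GB RAM")

That comes out around 9 GB at 16-bit but only ~2 GB at 4-bit quantization, which is consistent with the 8GB claim above.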


Yup, but you're automatically giving up a ton of RAM that could be better used for Slack.


Why is that? Is there any legal risk for Elon if Grok says something "wrong"?

