Hugsun's comments

I was under the impression that this is not the case. Aspartame has been studied a lot and not found to be harmful.

Anecdotal, but I have experienced body aches from drinking diet soda with aspartame. I drank regular soda when younger but switched over to watch my weight as I aged. A year on, during a more sleep-deprived week, I went heavy on the caffeinated diet soda and ended up with all my muscles feeling like I had done some major exercising. Thinking back, I had been experiencing regular aches. I stopped for a week, felt better. Tested again by going heavy for a week and the aches returned. Tried regular soda and no aches. I just stopped soda altogether at that point. I check labels now and avoid anything with aspartame in it.

Anecdotal but I drink diet sodas all the time and have never felt any such thing.

Aspartame is linked to anxiety.

Direction of causation would be very relevant here.

Sugar is still the cultural default; artificial sweeteners are something you explicitly choose due to health concerns (worried about being or becoming diabetic, or overweight, or worried about sugar being unhealthy in general, or about the mood/motivation angle, etc.).

I imagine becoming overweight itself is linked to anxiety both ways, as eating or snacking is a common reaction to stress, a way to relieve it in the moment.


>Sugar is still the cultural default; artificial sweeteners are something you explicitly choose due to health concerns

If only. Check the ingredients (usually near the bottom) of some energy drinks sometime. Monster, NOS, AMP, half of the Rockstar flavors, Bang, and so on will add sucralose to the normal versions with sugar or HFCS in them. It's hard to find one without artificial sweeteners. This is especially crazy as Monster already has their sugar-free (Ultra?) line. They're forcing normal people to consume sucralose, and it's awful. Luckily Red Bull seems fine for now (the Blueberry flavor is really good). Guru, in its original flavor only, also has no sucralose, but I think all the other flavors have it. I first noticed this trend during one of the times they brought back Mtn Dew Game Fuel and it tasted disgusting. Now I'm scared of any new drink, or that they'll ruin one I like.

Also, I think water is fine and caffeine pills are fine. I know some people are against energy drinks, but I don't think that's a reason to ruin them (preempting replies saying not to drink them at all). I've been drinking black coffee all week, but I still have some Blueberry Red Bull in my fridge for when I feel like it.


I discovered that this is very common when posting a long article about LLM reasoning. Half the comments spoke of the exact things in the article as if they were original ideas.

I was very pleased to discover that Mistral's Le Chat has built-in support for Python code execution and that sympy is importable.

It will use it regularly, and reliably when asked to.


Consensus seems to be forming around the fact that Ollama is not genuinely trying to be part of the open-source community.


Ollama are known for their lack of transparency, poor attribution and anti-user decisions.

I was surprised to see the amount of attribution in this post. They've been catching quite a bit of flak for this, so they might be adjusting.


I've experienced this a lot as well. Just yesterday I also had an interesting argument with Claude.

It put an expensive API call inside a useEffect hook. I wanted the call elsewhere and it fought me on it pretty aggressively. Instead of removing the call, it started changing comments and function names to say that the call was just loading already-fetched data from a cache (which was not true). I could not find a way to tell it to remove that API call from the useEffect hook; it just wrote more and more motivated excuses in the surrounding comments. It would have been very funny if it weren't so expensive.
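
For concreteness, here's roughly the change I wanted, as a minimal sketch. The names (Report, fetchExpensiveReport, the components) are made up for illustration, not the actual project code:

  import { useEffect, useState } from "react";

  // Hypothetical stand-ins for the real code.
  type Report = { total: number };
  declare function fetchExpensiveReport(accountId: string): Promise<Report>;

  // What Claude kept producing: the expensive call fires on mount and whenever
  // accountId changes (its comments claimed this was "loading cached data"; it wasn't).
  function ReportPanelBefore({ accountId }: { accountId: string }) {
    const [report, setReport] = useState<Report | null>(null);

    useEffect(() => {
      fetchExpensiveReport(accountId).then(setReport);
    }, [accountId]);

    return <pre>{JSON.stringify(report)}</pre>;
  }

  // What I wanted: the call only runs when the user explicitly asks for it.
  function ReportPanelAfter({ accountId }: { accountId: string }) {
    const [report, setReport] = useState<Report | null>(null);

    const loadReport = async () => {
      setReport(await fetchExpensiveReport(accountId));
    };

    return (
      <div>
        <button onClick={loadReport}>Load report</button>
        <pre>{JSON.stringify(report)}</pre>
      </div>
    );
  }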


Geez, I'm not one of the people who think AI is going to wake up and wipe us out, but experiences like yours do give me pause. Right now the AI isn't in the driver's seat and can only assert itself through verbal expression, but I know it's only a matter of time. We already saw Cursor themselves get a taste of this. To be clear, I'm not suggesting the AI is sentient and malicious - I don't believe that at all. I think it's been trained/programmed/tuned to do this, though not intentionally, but the nature of these tools is they will surprise us.


> but the nature of these tools is they will surprise us

Models used to do this much, much more than they do now, so what it did doesn't surprise us.

The nature of these tools is to copy what we have already written. The model has seen many threads where developers argue and dig in. The labs try to train the AI not to do that, but sometimes it still happens, and then it just roleplays as the developer who refuses to listen to anything you say.


I almost fear more that we'll create Bender from Futurama than some superintelligent, enlightened AGI. It'll probably happen after someone sneaks some beer into Grok's core cluster or something absurd.


> We already saw Cursor themselves get a taste of this.

Sorry what do you mean by this?


Earlier this week a Cursor AI support agent told a user they could only use Cursor on one machine at a time, causing the user to cancel their subscription.


I have not yet found any compelling evidence that suggests that there are limits to the maximum intelligence of a next token predictor.

Models can be trained to generate tokens with many different meanings, including visual, auditory, textual, and locomotive. Those alone seem sufficient to emulate a human to me.

It would certainly be cool to integrate some subsystems like a symbolic reasoner or calculator or something, but the bitter lesson tells us that we'd be better off just waiting for advancements in computing power.


How are you using them? Who is enforcing the daily quota?


I use them through chat.qwenlm.ai. What's nice is that you can run your prompt through 3 different modes in parallel to see which suits that case best.

The daily quota I spoke about is for ChatGPT and Claude; those are very limited on usage (for free users at least, understandably), while on Qwen I have felt like I am abusing it with how much I use it. It's very versatile in the sense that it has image generation, video generation, a massive context window, and both visual and textual reasoning, all in one place.

Alibaba is really doing something amazing here.


How long is its thinking time compared to o1?

The naming would suggest that o1-pro is just o1 with more time to reason. The API pricing makes that less obvious. Are they charging for the thinking tokens? If so, why is it so much more expensive if it's just more thinking tokens anyway?


I think o1 pro runs multiple instances of o1 in parallel and selects the best answer, or something of the sort. And you do actually always pay for the thinking tokens with all providers, OpenAI included. It's especially interesting when you remember that OpenAI hides the CoT from you, so you're in fact getting billed for "thinking" that you can't even read yourself.
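
Purely to illustrate what "multiple instances in parallel, pick the best" could look like, here's a speculative sketch; this is not OpenAI's actual implementation, and generateAnswer/scoreAnswer are hypothetical stand-ins:

  // Speculative best-of-n sketch; not OpenAI's actual implementation.
  declare function generateAnswer(prompt: string): Promise<string>;              // one o1-style run
  declare function scoreAnswer(prompt: string, answer: string): Promise<number>; // some verifier/reward model

  async function bestOfN(prompt: string, n: number): Promise<string> {
    // Run n independent samples in parallel. Each one burns its own (hidden)
    // thinking tokens, which would explain a much higher price than a single o1 call.
    const answers = await Promise.all(
      Array.from({ length: n }, () => generateAnswer(prompt))
    );

    // Score each candidate and keep only the best; all the other work stays invisible to the user.
    const scored = await Promise.all(
      answers.map(async (answer) => ({ answer, score: await scoreAnswer(prompt, answer) }))
    );
    return scored.reduce((best, cur) => (cur.score > best.score ? cur : best)).answer;
  }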


I don't have the answers for you; I just know that if they charged $400 a month I would pay it. It seems like a different model to me. I never use o3-mini or o3-mini-high, just GPT-4o or o1 pro.


Really!? Do you have a source for this? This would be really interesting if true.




