Hacker News | maeln's comments

~Why use MP3 instead of Opus, Vorbis, or AAC? All of them have (most of the time) a better compression ratio (and better quality) than MP3. Is it for compatibility reasons?~

edit: Ah, I missed the iPod Nano part


Just compatibility and "high enough" quality. It works in my car, on my iPod, on my phone, and on my kitchen radio, and it is the most common format in general.

iPod Nano 7th gen. does support AAC (AIFF & WAV too).

All iPods except for the very first and second one have supported AAC out of the box, and I believe there was a firmware update even for the two that didn't. Apple didn't invent the format, but was definitely its biggest proponent from early on.

Although the format might be superior in many ways, it has not replaced mp3 as the default format, because you can make mp3 sound pretty good and the space savings are not that important on modern devices.

Interestingly, the iPod Nano 6th generation was, I think, the first device to support fast-forward and rewind via the headphone remote. That's the main reason I don't use my 512GB iFlash iPod classic 2009 mod as often as I want to, although the support for m4b (m4a+aac) audio books is pretty good.

I've not tested ALAC, which is a proprietary lossless format, on older iPods, but it might work on the newer, more powerful ones (iPod Touch).


The study : https://journals.physiology.org/doi/epdf/10.1152/japplphysio...

From a quick read: it was tested in a cell culture, so not in a human or an animal. That does change a lot of things.

For the dosage:

> Thereafter, hCMECs were treated with regular media or media containing 6 mM erythritol (Sigma Aldrich, Cat #E7500; St. Louis MO), a dose equivalent to a typical amount of erythritol [30g] in a single can of commercially available artificially sweetened beverage, for 24 hours (N=5 experimental units)


> It was tested in a cell culture, so not in a human or animal. That does change a lot of things.

The article points out that similar observations have already been made in human subjects:

> Positive associations between circulating erythritol and incidence of heart attack and stroke have been observed in U.S. and European cohorts


One of the cited studies (Khafagy et al., 2024) directly contradicts such claims. The study explicitly said "we did not find supportive evidence from MR that erythritol increases cardiometabolic disease".

The primary human study they reference (Witkowski et al., 2023) has a few issues:

- All subjects had a "high prevalence of CVD and risk factor burden" and represented the sickest patients in the healthcare system

- Erythritol was measured only once at baseline, despite data which shows that levels fluctuate dramatically with consumption

- It did not differentiate between dietary intake and erythritol produced by the body

- Seeing as they were already sick, the subjects may have been consuming more artificial sweeteners than the general population

There are two more human studies referenced but I didn't read them.


It's tiring to see these quick dismissals of scientific studies at the top of the comment section. They are more often than not based on technicalities or fallacies. Pitting a two-minute read against months of work by a team of scientists is not a great move.

In this case, the nature of the study is clearly acknowledged, it does not “change a lot of things”:

> We recognize given the in vitro, isolated single cell nature of this study we cannot make definitive translational conclusions or assertions regarding erythritol and clinical risk. However, the markers and mediators of brain microvascular endothelial cell function studied herein have been shown to have strong causative links with the development of cerebrovascular dysfunction, neuronal damage and injury, thrombosis and acute ischemic stroke

These findings are a starting point for further understanding, not something to be immediately ranked as true/false.


I think what commenters are looking for is a reason why this study is relevant to them (us) as humans, and they assess whether it is definitive or not. As HN is more of a general-curiosity and engineering-related site, these starting points for further understanding are unlikely to get more nuanced discussion than that.

Thus, rather than submitting articles like the current one, wait until something more is available. We are tired of clickbait as well.


> Thus, rather than submitting articles like the current one, wait until something more is available.

How long more to wait?

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Are any of them decisive?

We shouldn't need to wait for a _decisive_ study showing that a novel compound is dangerous to consider avoiding the novel compound.

There's rarely profit in demonstrating that novel compounds are dangerous so it's extremely unlikely for a given dangerous novel compound that there will be any decisive studies showing the danger.

IMHO a novel compound should have to be decisively shown to be safe before being sold in food, but since they are not, I recommend everyone avoid novel compounds as much as is practicable.


I initially reacted to the root comment with why these kinds of articles often receive dismissive comments. HN is mostly not a medical forum, so a typical reader isn't going to want, nor be able, to discuss the technicalities - they just want to know whether to avoid a substance or not. As is often the case, the results are inconclusive, hence the dismissals. (And these dismissals as top comments are also useful for the typical reader, as they pretty much want a yes/no verdict and move on.)

But to your points, if there aren't any studies which can show that a compound is dangerous in any meaningful way, why would you want to avoid it? (Given there is a need or purpose, e.g. a low-calorie sweetener.)

Also, decisively showing something to be safe is impossible, in a similar way to how software tests can only show that you haven't found any bugs yet; it doesn't mean there are none. (Off-topic: that's a quote from Edsger Dijkstra, to which the following can be added: he is right, but only for unit tests - using types, property testing, or running through the entire argument space of a pure function, you actually can show that there are no bugs.)
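For what it's worth, a minimal sketch (in Python, with a hypothetical function) of what "running through the entire argument space" looks like for a small pure function:

    from itertools import product

    def saturating_add_u8(a: int, b: int) -> int:
        # Hypothetical pure function: add two bytes, clamping the result at 255.
        return min(a + b, 255)

    # The argument space is tiny (256 * 256 = 65,536 inputs), so the property
    # can be checked for every possible input rather than a sample of them.
    for a, b in product(range(256), repeat=2):
        out = saturating_add_u8(a, b)
        if a + b <= 255:
            assert out == a + b
        else:
            assert out == 255

Of course this only scales to small input domains, which is exactly where property testing and types take over.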


"why would you want to avoid it?"

Here's an example. Company A invents compound B and pays company C to do safety studies that monitor the subjects for a few weeks or months. The study shows no significant danger. They start selling compound B in food or as medicine. Then 10 years later after millions of people have ingested varying amounts of compound B, it's found to cause some harm that wasn't found in the initial study. Company A pays a fine of less than the profit they made selling compound B, and compound B is pulled from the market.

There are many stories like that. Should I have avoided compound B on the precautionary principle? Or, because the only science done so far in those first 10 years showed it was safe, should I have considered it safe?

In the case of food additives it's even worse. Company A makes compound B, it's in food, no studies are done, millions of people eat it, are harmed, and no one knows for years or decades.

Personally I think introducing novel compounds to the body is just a bad idea period and I avoid them as much as is practicable. Too many have been found to be dangerous only decades later, and we have a population with rapidly rising rates of chronic disease and cancer etc that could be related to toxic stuff. Why risk it. And especially why trust science that is paid for by the companies that will profit if the science shows their thing to be safe?!

---

There _could_ be ways to show the safety of something to a point, like doing a 20 (or 50) year study with large cohorts where one population uses compound B and the other doesn't, and monitoring overall outcomes. But that's far too expensive and time-consuming and not required, so basically no one does it. Companies want to profit from their novel compounds fast.


Artificial sweeteners replace sugar (as you likely know), so there's a specific reason. But yes, I wouldn't recommend eating any random newly invented molecule for no reason either.

"tired of clickbait", also tired of declines in the trustworthyness of manufacturers of food/health and regulators thereoff, so personaly I choose the no failure mode solution, ie:I eat mostly whole food items and cook meals at home, and just avoid all of it.....which is the simplest response to the whole dilema for anyone concerned with possible health consequences of the latest "finding" so I simpathise with both sides in the debate, but vote with my kitchen

Typically single one-off studies should be dismissed and shouldn't be cause for concern. Anybody can study anything and it's very, very easy to do wrong.

For most everyday laypeople, you should be looking at meta-analyses. We just don't have the context to home in on one study and examine how correct it is or what it actually means for our everyday lives.


I tend to read these comments as a quick dismissal of the title more so than the research. The title implies that a fairly conclusive finding has been made.

> Pitting a two minute reading vs months of work by a team of scientists is not a great move.

> In this case, the nature of the study is clearly acknowledged, it does not “change a lot of things”:

> These findings are a starting point for further understanding, not something to be immediately ranked as true/false.

Yes, I know this and do not dismiss their research at all. I have been in the same boat, having to write at the end of a paper "We have proven a certain link between Y and X in this very limited experiment A; wider, deeper research would be needed to prove whether any such link exists under the much larger conditions B". This is normal, and it is how most scientific advancement is made.

But look, I don't think the average HN user comes to this article and comment section wondering what happens when you put erythritol on a cell culture outside of a living organism. They care about the consequences of consuming erythritol for them. So a small clarification comment stating the two important conditions of the experiment (cell culture + dosage) is usually useful if you don't have time to read the whole study and came here just to find out whether you should stop consuming your favorite sweetened drink right now.


This is science, not religion. Nothing is owed to any researcher beyond the truth of the matter as supported by the best available evidence to us. Your pastor can request you go easy on him, your research team may not. (Please don't use this as an excuse to be rude.)

This contradicts several reasonably large, high-quality studies, using a low-grade substitute for human testing. The burden of proof is on the researchers making a surprising claim that contrasts with existing evidence.


Right, and likewise the way science works is by publishing studies. Here we have a published, peer-reviewed study, "versus" a one-paragraph anonymous dude trying to discredit the study.

Wake me up when this dude gets a paper accepted in a reputable peer reviewed journal. Then I will read what he has to say and add it to my list of "worthwhile" sources to form my conclusion on Erythritol.

Other than that, online forum comments are just mental candy floss to read while taking my morning caffeine fix.


That's a textbook ad hominem. Either the criticism is valid or it isn't; it doesn't matter who it's from.

You say "a one paragraph anonymous dude trying to discredit the study"; I say "pointing out that this study isn't definitive proof that diet sodas are bad for you without a lot more study."

Potato, potahtoe.

One is intentionally misspelled.


Also, most diet sodas are not even sweetened with sugar alcohols; that's more of a stick-gum thing.

Science isn't prestige-ism. The parish makes the priest, but the same isn't true for the study.

Erythritol has had a lot of top-class human studies on it. This is an extremely weak study with a shocking conclusion.


> Not that webflux is easy to use at all and the dx is garbage

My experience with pretty much any Java framework... It's sad, because I do think (especially since Java 8) that Java is a great language for many things. But the community has this insane tendency to create incredibly convoluted pattern-on-top-of-pattern tooling.


Yes. I think micronaut is kind of the sweet spot right now.

But isn't that the whole point of the article? Big scrapers can hardly tell if the JS running in their runtimes is a crypto miner or an anti-scraping system, and so they will have to give up "useful" scraping, so PoW might just work.


No, the point is there are really advanced PoW challenges out there to prove you're not a bot (those websites that take >3s to fingerprint you are doing this!).

The idea is to abuse the abusers: if you suspect it's a bot, change the PoW from a GPU/machine/die fingerprint computation to something like a few ticks of Monero or whatever the crypto of choice is this week.

Sounds useless, but multiply 0.5s of that across their farm of 1e4 scraping nodes and you're onto something.

The catch is not getting caught out by impacting the 0.1% of Tor-running, anti-ad "users" out there who will try to decompile your code when their personal Chrome build fails to work. I say "users" because they will be visiting a non-free site espousing their perceived right to be there, no different from a bot for someone paying the bills.
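For anyone who hasn't seen one, here is a minimal sketch (in Python) of the hash-prefix style of PoW challenge being discussed; the scheme is illustrative, not any particular vendor's implementation:

    import hashlib
    import itertools
    import os

    def make_challenge(difficulty_bits: int = 20):
        # Server side: hand the client a random challenge and a difficulty.
        return os.urandom(16).hex(), difficulty_bits

    def solve(challenge: str, difficulty_bits: int) -> int:
        # Client side: find a counter whose SHA-256 with the challenge has
        # `difficulty_bits` leading zero bits. Expected cost ~ 2^difficulty_bits hashes.
        target = 1 << (256 - difficulty_bits)
        for counter in itertools.count():
            digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return counter

    def verify(challenge: str, difficulty_bits: int, counter: int) -> bool:
        # Server side: verification is a single hash, so it's cheap for the site
        # while the solve step burns real CPU on every scraping node.
        digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    challenge, bits = make_challenge()
    assert verify(challenge, bits, solve(challenge, bits))

Bumping difficulty_bits for suspected bots is the "abuse the abusers" knob: each extra bit roughly doubles the client's work while verification stays a single hash.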


While I have a similar experience with, hurm, "legacy" codebases, I gotta say, LLMs (in my experience) have made the "legacification" of a codebase way, way faster.

One thing especially is the loss of knowledge about the codebase. While there was always some Stack Overflow coding, when seeing a weird/complicated piece of code I used to be able to ask the author why it was like that. Now, I sometimes get the answer "idk, its what chatgpt gave me".


At least LLMs write a huge amount of comments /s


It really depends on the product. Especially if 0-downtime deployment is not possible.


Depends on whether they cashed out and how they did it. There was a big trend for a while to go live in Portugal, long enough to be considered a tax resident there, and then cash out there, because (at the time, idk if it's still true) they had no (or little) tax on crypto cash-outs.


Yeah, I know two French people who did it (one of them avoided UK taxes as he was paid in crypto while working in the UK; for the other it's muddier). I know three people in the space, and only those two were on the financial side, so to me, while blockchain is still a legit tech, anybody using cryptocurrency I peg as a tax evader.


Good thing we have courts, lawyers and judges for that. It's funny how everyone here hates on Trump, but as soon as something aligns with their views, they want a de facto no-due-process application.


Sorry if I implied anything; I must have missed part of the conversation. I was just confirming that this did happen (taking Portuguese residency to avoid crypto tax) a few years ago. In my opinion, the police should protect even violent criminals from violence when possible, so of course I'm not advocating for anything to happen to tax "avoiders", and they should be protected. I was just stating that I know people in the crypto space, and if you are in it, I immediately peg you as a small-time sociopath, from my past experience.

Also, I don't care about them getting judged for tax evasion; I know they won't be and, honestly, good for them. I also don't care about nonviolent thieves and think the same thing about them. Profiteering was not how I was raised, but I understand different people have different standards (and parents; luckily mine are great, which is not the case for everybody). People do what they need to do. I find some behaviour sociopathic, but as long as it is nonviolent, I'm not mad.


More than CPU speed, I think the increase in storage and RAM is to blame for the slow creep in latency. When you have only a few KB/MB of RAM and storage, you can't really afford to add much more to the software than the core feature. Your binary needs to be small, which leads to faster loading into RAM, and to do less, which means fewer things to run before the actual program.

When size is not an issue, it's harder to say no when the business demands a telemetry system, an auto-update system, a crash handler with automatic reporting, and a bunch of features, a lot of which need to be initialized at the start of the program, introducing significant latency at startup.
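As a toy illustration (in Python, with hypothetical subsystem names), the difference between initializing everything eagerly at startup and deferring it until first use:

    import functools
    import time

    def init_telemetry():
        time.sleep(0.3)   # stand-in for a network handshake, config parsing, etc.

    def init_auto_update():
        time.sleep(0.2)

    # Eager: every subsystem runs before the user sees anything.
    def eager_start():
        init_telemetry()
        init_auto_update()
        print("ready")    # ~0.5s after launch

    # Lazy: the core feature comes up first; the rest initializes on first use.
    @functools.cache
    def telemetry():
        init_telemetry()
        return "telemetry handle"

    def lazy_start():
        print("ready")    # effectively immediate
        telemetry()       # cost is paid later, off the critical startup path

The total work is the same; what changes is how much of it sits between launch and the first useful interaction.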


It's also complexity - more added than necessary, and at a faster pace than hardware can keep up with.

Take font rendering: in early machines, fonts were small bitmaps (often 8x8 pixels, 1 bit/pixel), hardcoded in ROM. As screen resolutions grew (and varied between devices), OSes stored fonts in different sizes. Later: scalable fonts, chosen from a selection of styles / font families, rendered to sub-pixel accuracy, sub-pixel configuration adjustable to match hw construction of the display panel.

Yeah, this is very flexible & can produce good-looking fonts (if set up correctly), which scale nicely when zooming in or out.

But it also makes rendering each single character a lot more complex, and thus eats a lot more CPU, RAM & storage than an 8x8 fixed-size, 1 bpp font.

Or the must-insert-network-requests-everywhere BS. No, I don't need a search engine to start searching & provide suggestions after I've typed 1 character & haven't hit "search" yet.

There are many examples like the above; I won't elaborate.

Some of that complexity is necessary. Some of it isn't, but is lightweight & very useful. But much of it is just a pile of unneeded crap of dubious usefulness (if any).

Imho, software development really should return to first principles. Start with a minimum viable product that only has the functionality absolutely necessary for end-users. Don't even bother to include anything other than the absolute minimum. Optimise the heck out of that, and presto: v1.0 is done. Go from there.


> But it also makes rendering each single character a lot more complex.

Not millions of times more complex.

Except for some outliers that mess up everything (like anything from Microsoft), almost all of the increased latency between keypress and character rendering we see on modern computers comes from optimizing for modularity and generalization instead of specialized code for handling the keyboard.

Not even our hardware reacts fast enough to give you the latency computers had in the 90s.


I'm not really sure what you are talking about ;) HW has become much, much faster. Mostly in computing speed, but latency has also dropped nicely. The latency bloat you see is 99% software (OS). I still run Win2003 on a modern desktop, and it flies! Really, booting/shutdown is quick. I'm on spinning rust, so the first start of a web browser is a bit slow, but once cached, it's like 200ms-500ms depending on the version (more modern = slower).


Take a look at the latency from a keypress on your modern keyboard to when your CPU first has the chance to process the data; you'll be surprised. Depending on how your hardware is configured, it can reach 100ms there alone.

Your mouse has an equivalent issue, except that it's not usually optimized to the same level, so the worst case is way more common. Audio has a much worse issue, and can lag by a large fraction of a second on hardware alone. And there's the network, which has optional features that are emulated by adding lag proportional to the bandwidth.

All of our hardware has become more capable with time, but that doesn't mean latency has decreased. Some kinds of latency have gone down, others have gone way up.


Okay, yes... it's a mix of HW/OS issues indeed. I remember my old gaming rig I assembled more than 10 years ago: Asus motherboard, i5-760 CPU, ATI HD 6850, Win2003 as the OS. It was all tuned and it was really great. Basic DPC latency was around 30-40us. Under load it increased slightly, but it was always <100us.

Then a catastrophic event occurred (which I didn't know about at the time). After 10 years, the internal NIC burned out. I was like, okay... I have dozens of PCI NICs, let's plug one in and voilà. And I did. But there were problems: after a while (an hour or so) I noticed audio glitches, especially when there was network activity. After more investigation and reading the mobo manual, I noticed that IRQs were nicely spread across all the internal mobo components + the PCIe x16 bus, while the other PCI ports always shared one or another. I could do nothing to fix it.

That PC now catches dust. I bought a used HP 8200 PC which works nicely, but it's not a gaming rig; standard DPC latency is around 2000us, which is quite large... Still, for normal use that latency is fine. I'm very sensitive to lag and latency, so if I had issues here, I would be mad.

Finally, some pics from my DPC stall fight:

http://ds-1.ovh.uu3.net/~borg/pics/DPClat.png

http://ds-1.ovh.uu3.net/~borg/pics/DPC_stall.png


I'm not sure your font rendering is a very good example here. Windows has used vector fonts since the 90s and ClearType since Windows XP. That is nearly 25 years ago. And it wasn't really much of a performance issue even back then.


Correct. Modern font rendering likely falls into that "more complex, but lightweight / useful" category.

My point was it's much more complex even though it does essentially the same thing (output character to screen).

>> There are many examples like the above

Death by a thousand cuts! There are probably much worse offenders out there that deserve to be attacked.


It is also useful if your data already lives in GPU memory. For example, when you need to z-sort a bunch of particles in a 3D renderer's particle system.
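A minimal sketch of that case using CuPy (assuming a CUDA GPU is available; the particle layout is made up for illustration):

    import cupy as cp

    # Particle positions already live in GPU memory as an (N, 3) array.
    positions = cp.random.random((100_000, 3)).astype(cp.float32)

    # Sort back-to-front by depth (here simply the z column) entirely on the
    # GPU, without copying the particle data back to the host.
    order = cp.argsort(positions[:, 2])[::-1]
    positions_sorted = positions[order]

The sort and the gather both run on the device, so a renderer can consume positions_sorted without a round trip through host memory.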


And their mailbox offering is also overpriced: 5.99€/month (and it used to be pricier! They recently decreased the price) for 10 GB and very basic email features. Fastmail is 5€ for 60 GB, masked email, and a few other things (not in the EU tho).


I used to pay $5/yr for Fastmail, but that plan is now $15/yr, and I only get 0.5GB of storage.

So it's like $1.25/month!


I pay $5 a month ($50 a year, technically) for Fastmail and get a 50 GB mailbox and 10 GB of cloud storage. How old is your account?


At least 10 years or so, I think. I'm grandfathered into the really cheap plan, even though it "tripled" in price in 2020 or so. I can only have one domain, but I get all the other features.

