> Rust’s print function locks by default (because of safety), C doesn’t.
Huh? Traditionally, stdio implementations have placed locks around all I/O[1] when introducing threads (hence functions such as fputc_unlocked, to claw back at least some of the performance when the stock bulk functions don't suffice), and the current ISO C standard even requires it (N3096 7.23.2p8):
> All functions that read, write, position, or query the position of a stream lock the stream before accessing it. They release the lock associated with the stream when the access is complete.
The Microsoft C runtime used to have a statically linked non-threaded version with no locks, but it no longer does. (I’ve always assumed that linking -lpthread as required on some Unices was also intended to override some of the -lc code with thread-safe versions, but I’m not sure; in any case this doesn’t play well with dynamic linking, and Glibc doesn’t do it that way.)
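(For what it's worth, the Rust analogue of the flockfile()/putc_unlocked() pattern is to take the stdout lock once and write through the guard, so you pay for the lock once rather than on every print call. A rough sketch, using only std::io; the strings are just placeholders:)

    use std::io::{self, Write};

    fn main() -> io::Result<()> {
        // Take the stream lock once; subsequent writes go through the guard
        // without re-acquiring it (roughly flockfile + *_unlocked in C).
        let stdout = io::stdout();
        let mut locked = stdout.lock();
        for word in ["one", "two", "three"].iter() {
            writeln!(locked, "{}", word)?;
        }
        Ok(())
    } // the lock is released when `locked` is dropped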
Thanks for the correction; I wasn't aware that the C11 standard made these functions thread-safe in the spec. (And as you've said, implementations like glibc already have these locks.)
Neat tricks. Beyond BufWriter (which I'm already using) and multithreading, I'm guessing there's not much to be done to improve the performance of my "frece" tool (a simple CLI frecency-indexed database) without making it overly complicated. https://github.com/YodaEmbedding/frece/blob/master/src/main....
C and Python have adaptive buffering for stdout: if the output is a terminal they flush on newlines, otherwise they only flush when their internal buffer is full.
Here's a C program counting, with a 1ms delay between lines. The second column is a duration since the previous read():
Rust lacks this adaptive behaviour for output, and will always produce the second result, terminal or not.
Technically it unconditionally wraps stdout in a LineWriter (https://doc.rust-lang.org/std/io/struct.LineWriter.html), which always flushes if it sees a write containing a newline. To maximise throughput you therefore want to batch writes of multiple lines together, for example by wrapping it in a BufWriter.
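A minimal sketch of that approach (the names and the counting loop are arbitrary stand-ins for whatever output your program produces):

    use std::io::{self, BufWriter, Write};

    fn main() -> io::Result<()> {
        // BufWriter batches many small writes into large ones, so lines are
        // no longer flushed at every newline the way the default LineWriter does.
        let stdout = io::stdout();
        let mut out = BufWriter::new(stdout.lock());
        for i in 0..1_000_000 {
            writeln!(out, "{}", i)?;
        }
        out.flush()?; // BufWriter also flushes on drop, but an explicit flush surfaces errors
        Ok(())
    }

On newer Rust (1.70+) there is also std::io::IsTerminal, so you could check io::stdout().is_terminal() and only buffer when the output is not a terminal, which recovers the adaptive behaviour described upthread.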
One possible outcome of this is that humans stop producing freely accessible digital artifacts like text and code, lest they be replaced by machines that can mimic them and that are controlled by tech moguls.
> I’m wondering if someone can comment on the cultural aspect here. It seems like adding the smiles was intended to discredit the protesters, by somehow suggesting the protest was insincere.
This was to paint them as insincere and to imply that the protestors are on the payroll of <insert boogeyman here>.
I think the part about "unrealistic goals" is applicable to a lot of spheres of life and is exacerbated by social media. Social media, after all, inflicts high expectations on everyone, and pressure to turn your life into a performance, irrespective of profession, culture, age, or country.
And of course the titans of social media (Zuckerberg, Chamath, TikTok, and thousands of engineers and PMs) will not pay for the harm their engineered products have caused, because the waters have been muddied enough that no true accounting of the harm can happen.
It is kind of like inventing a new, highly addictive synthetic drug that is technically legal but very harmful, and being able to sell it to everyone.
Blaming social media for this is shooting the messenger.
You don't need Facebook or the Gram to be keenly aware that all your university buddies who went into tech are going on vacations to Maui and buying their second homes, while you're still grinding your life away for nothing in a university lab. In fact, all you need to do to be aware of this is show up to your friends' bi-weekly happy hour at the local.
> Blaming social media for this is shooting the messenger.
No, it's not. Social media created a worldwide bullying environment. Most people all over the world are in trouble because they have to compete with an unrealistic image of themselves or their profession created by social media. Whatever you are or do, you are never good enough. Education, medicine, and science are especially vulnerable in this regard.
The tech sector isn't really an exception, but the insane amount of money people in some parts of the world get as compensation makes it more tolerable.
I don't disagree, but in person, I think, people do tend to avoid discussing money-related things in detail with people who can't afford the same. They're not going to avoid it entirely, but a lot of points might be glossed over, the conversation may be changed quickly, etc. Whereas on social media there is more detail out in the open regardless of social context. That context-ignoring part is kind of new.
If you care primarily about the financial impact of jail, then that would be an argument for the opposite of what you suggest: a rich person is likely to be less affected by the loss of a day's income than a poor person, and so if the financial value of the time spent in jail were all that mattered, then rich people ought to be imprisoned longer for the same crime, not shorter.
Since the financial impact is not the main consideration of jail time, it shouldn't be a major factor.
With respect to children: children whose parents are imprisoned are more at risk of future problems, and so irrespective of other considerations it may well be preferable for society to take that into account.
I can't tell if you're being sarcastic or if these are genuine questions. I suspect the former, in which case you need to unpack your arguments a bit more.
Incarceration by itself is poorly correlated with the goal of reducing overall crime and reducing recidivism. Putting people in prison should be a last resort, instead of the first resort that the US and some other countries treat it as.
Regarding jail time for the wealthy, you're still thinking in simple numbers instead of means-tested comparisons. A more-wealthy person and a less-wealthy person who go to jail for 30 days are both physically restricted for 8% of a year. If you pretend that wealthy people earn the same way that less-wealthy people do (they don't), then both naively lose 8% of their income-earning time for that year.
It's still not fair, mind you: the wealthy person likely makes much or most of their money from diversified income streams that are less susceptible to temporary incarceration than the more-likely single salary-based income of a less-wealthy person.
Regarding the question of using children as leverage, this is addressed in more depth than I'll summarize here in contemporary cases, e.g. Elizabeth Holmes and the allegation that she, her partner, and her legal team have tried to lean on her having children as a way to avoid incarceration.
> Should rich people get less jail time because they lose more money because of time spent in jail (per day income is higher) than a poor person.
No. The point is that fines should be based on per-day income or some kind of similar measure of wealth. Which actually makes it consistent with the hypothetical economic punishments of being thrown in jail.
Shouldn't childless people get less time in prison, so they have more opportunities to find partners and have children?
I was thinking of this in the context of wars, where people with children should be the first to be sent to the front lines, with those with the most offspring having the highest priority and those with none the lowest.
FWIW, both the country where I was born and the one where I live have chosen a simple approximation: Fines are set at "x typical daily earnings" and the court will estimate the equivalent in legal tender, jail time is set in days.
I disagree that it's a "fine example of whataboutism", I think it's more 'apples and oranges'. How would you define whataboutism, or why do you think that it applies here?
Well, yes, that orange exists, but what about this apple? Typically with a clear and relevant difference between the two, and without mentioning that difference.
Well, for starters, Musk? As an investor (rather, lead capital provider) in a non-profit, he was blindsided by the decision entirely.
And before anyone criticizes me for defending his viewpoint, let's assume this: if Musk's $100 million investment were to be replaced by 100k HN users investing $1k each into a non-profit focused on safe and "open AI", do you think they would be uniformly happy about this decision?
You can perfectly well have a powerful AI if you have the investment backing (like Musk provided). You can develop things out in the open like most software non-profits do (case in point: the Linux Foundation), instead of playing Microsoft's slave.
An open AI is safe inherently, since it means that it can be easily ripped apart and thoroughly studied for exploitable points, unlike some closed black box system. Having Open AI be some closed system does nothing to reduce the number of bad actors - they will all choose to exploit Open AI's system once given the opportunity.
>An open AI is safe inherently, since it means that it can be easily ripped apart and thoroughly studied for exploitable points, unlike some closed black box system. Having Open AI be some closed system does nothing to reduce the number of bad actors - they will all choose to exploit Open AI's system once given the opportunity.
Replace the word "AI" with "ultra-deadly and contagious bioweapon" and it becomes clearer why being "open" is itself a danger, for those who aren't able to zero-shot understand it.
That's precisely the point of it being open: it makes it easier to understand its points of failure rather than treating them as features of a black box. An open-source bioweapon (if one existed) would not be as dangerous as a secretive one, simply because once out in the open, its points of failure would have already been studied.
> but to me it seems like a good idea to at least consider the potential negative effects self-expression may have on others, and weight that against the benefits of self-expression.
I think you are making a very major assumption that LLMs are deterministic; in fact they are exactly the opposite. They are probabilistic systems.
- They do not transpile Rough English to Deterministic English. In fact, they do not do any transpiling at all.
- LLMs learn the probabilities of words in the dataset for all contexts from that dataset (a context is an ordered set of words). This is called training.
- Once training is done, LLMs can generate text given a prompt and the probabilities that it has learned. The analogy of LLMs being auto-complete on steroids is very apt.
- Whether the text generated by LLMs is factual or not is purely coincidental.
A lot of things about LLMs which seem like magic will get demystified.
Now one can argue that LLMs are showing human like reasoning/intelligence/sentience as an emergent behavior. This is hard to argue against because all these terms are extremely hard to define.
IMO, the only emergent behavior that LLMs are showing is that the output they generate looks like it might have been generated by a human, which should not be surprising given that LLMs like ChatGPT have been trained on a large amount of human-written text available on the internet.
On your first point, you couldn't be more wrong. LLMs are deterministic. They run on a deterministic machine, and that forces them to be so. Also, probabilistic does not mean the absence of determinism. For example, for every non-deterministic/probabilistic finite automaton there is an equivalent deterministic one.
LLMs are fully deterministic in that sense: same input, same outputs.
Because full determinism is not always desirable, the researchers have implemented an explicit "temperature" parameter that you can use to inject randomness into the outputs. If you set that to 0.0 you will always receive the same output for the same input and model version.
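As a toy illustration of what the temperature does (not any particular model's code; `logits` and `uniform` are made-up stand-ins for the model's output scores and a random draw): the logits are rescaled before the softmax, and at temperature 0 sampling degenerates into always picking the highest-scoring token, which is why the output becomes repeatable.

    // Toy sketch of temperature sampling, not any real model's API.
    // `uniform` stands in for a random draw in [0, 1) so the example
    // needs no external crates.
    fn sample(logits: &[f64], temperature: f64, uniform: f64) -> usize {
        if temperature == 0.0 {
            // Temperature 0: always pick the most likely token (deterministic).
            return logits
                .iter()
                .enumerate()
                .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
                .map(|(i, _)| i)
                .unwrap();
        }
        // Otherwise: softmax over temperature-scaled logits, then sample.
        let scaled: Vec<f64> = logits.iter().map(|l| (l / temperature).exp()).collect();
        let total: f64 = scaled.iter().sum();
        let mut cumulative = 0.0;
        for (i, s) in scaled.iter().enumerate() {
            cumulative += s / total;
            if uniform < cumulative {
                return i;
            }
        }
        logits.len() - 1
    }

    fn main() {
        let logits = [2.0, 1.0, 0.1];
        println!("t=0.0 -> token {}", sample(&logits, 0.0, 0.8)); // always token 0
        println!("t=1.0 -> token {}", sample(&logits, 1.0, 0.8)); // depends on the draw
    }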
LLMs can be implemented to be formally deterministic but if you ask them to solve a specific problem instance you have not seen before, you cannot generally guarantee they will do so reliably. So you're correct in a pedantic sense but I think GP's perspective is more useful if you are problem solving.
It's not pedantry. The parent commenter is simply stating something incorrect, and it's very misleading at a conceptual level for people who won't think about it too much. And you can guarantee they will do so reliably unless you parameterize the input with some "truly" (yeah, I know) random input.
> - Whether the text generated by LLMs is factual or not is purely coincidental.
No, because the probability of a word on the internet being factual is not coincidental. Factuality compresses the corpus; the truth is generally the simplest explanation for a set of observations. (The collected text of the internet is a set of observations about reality.)
> IMO, the only emergent behavior that LLMs are showing is the output they generate looks like it might have been generated by a human
The whole point of the Turing Test is to stop people from asking "yes, it acts indistinguishably from a human, but is it human?" "Generating output that looks human" is in fact the entirety of AGI.
> > it wouldn't be so incredibly difficult to determine.
> Never said it was easy.
My point is that this process of determination happens over time and in a non-linear fashion, so the corpus contains tons of noise around any given truth statement.
> > The corpus of written language is full of ambiguity and contradictory statements.
> Right, but the truth is the one set of information that logically cannot be contradictory.
Rust -> 23.2MiB/s
Python3 -> 28.6MiB/s
C -> 238MiB/s
Does anyone know why Rust's performance is in the same ballpark as Python3's?
I thought it would be much closer to C.