
> What I was curious about was applying a GC language to a use case that must have extremely low latency. It seems like an important consideration, as a GC pause in the middle of high-frequency trading could be problematic.

Regarding run-time environments using garbage collection in general, not OCaml specifically: GC pauses can be minimized with parallel collection algorithms such as those found in the JVM[0]. These do not provide hard guarantees, however, so over-provisioning system RAM may also be needed to achieve the required system performance.

Another, more complex approach is to over-provision the servers such that each can drop out of the available pool for a short time, thus allowing "offline GC." This involves collaboration between request routers and other servers, so it may not be worth the effort if a deployment can financially support over-provisioning servers such that there is always an idle CPU available for parallel GC on each.
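
For instance, with the HotSpot JVM one can enable the parallel collector and state a pause-time goal. The values below are purely illustrative, and MyTradingApp is a placeholder:

  java -XX:+UseParallelGC -XX:MaxGCPauseMillis=50 \
       -Xms16g -Xmx16g MyTradingApp

Setting -Xms equal to -Xmx pre-sizes the heap up front, which is one form of the RAM over-provisioning mentioned above.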

0 - https://docs.oracle.com/en/java/javase/17/gctuning/parallel-...


Java is like C and C++: there isn't just one implementation.

So if you want hard guarantees, you reach for real-time JVM implementations, like the commercial ones from PTC and Aicas.


> It's not an LLM problem, it's a problem of how people use it.

True, but perhaps not for the reasons you might think.

> It feels natural to have a sequential conversation, so people do that, and get frustrated. A much more powerful way is parallel: ask LLM to solve a problem.

LLMs do not "solve a problem." They are statistical text (token) generators whose response is entirely dependent upon the prompt given.

> LLMs can't tell legitimate concerns from nonsensical ones.

Again, that is because LLM algorithms are very useful general-purpose text generators. That's it. They cannot discern "legitimate concerns" because they do not possess the ability to do so.


> LLMs do not "solve a problem."

Right, or at any rate, the problems they do solve are ones of document-construction, which may sometimes resemble a different problem humans are thinking of... but isn't actually being solved.

For example, an LLM might take the string "2+2=" and give you "2+2=4", but it didn't solve a math problem, it solved a "what would usually get written here" problem.
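
To make that concrete, here is a toy sketch (an illustration with a made-up corpus, not anyone's actual model) of answering "what would usually get written here" purely from counts:

    from collections import Counter

    # made-up "training" counts of what follows the prefix "2+2="
    continuations = Counter({"4": 981, "5": 12, "22": 7})

    def usual_next(counts):
        # no arithmetic happens here; just pick the most frequent continuation
        return counts.most_common(1)[0][0]

    print("2+2=" + usual_next(continuations))  # "2+2=4" by popularity, not math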

We ignore this distinction at our peril.


> Right, or at any rate, the problems they do solve are ones of document-construction, which may sometimes resemble a different problem humans are thinking of... but isn't actually being solved.

This is such a succinct way to express what is actually happening.

Thank you for sharing it.


> I think programming is a job people don’t need to do anymore and anyone who called themselves a software engineer is now a manager of agents. Jira is the interface. Define the requirements.

That Grand Canyon-sized logical leap ignores a vital concept: understanding.

To "define the requirements" sufficient enough for any given feature/defect/etc. requires a degree of formality not present in prose. This already exists and is in use presently:

  Programming languages.  See 4GLs[0] and 5GLs[1].
0 - https://en.wikipedia.org/wiki/Fourth-generation_programming_...

1 - https://en.wikipedia.org/wiki/Fifth-generation_programming_l...


> I can think "oh it would be kinda nice to add this little tidbit of functionality in this code". Previously I'd have to spend loads of time googling around the question in various ways, wording it differently etc, reading examples or digging a lot for info.

Research is how people learn. Or, learning requires research. However one phrases it, the result is the same.

> Now, I can have Claude help me write some code, then ask it about various things I can add or modify it with or maybe try it differently.

LLMs are statistical text (token) generators and highly sensitive to the prompt given. More important in this context: the effort once expended by a person doing research is now, at best, an exercise in prompt refinement (if the person understands the problem context) or, at worst, an outsourcing of understanding (if they do not).

> I'm fairly precise in what I ask it to do and that's only after I get it to explain how it would go about tackling a problem.

Again, LLM algorithms strictly output statistically generated text derived from the prompt given.

LLMs do not "explain," as that implies understanding.

They do not "understand how to go about tackling a problem," as attributing that to them is a form of anthropomorphization.

Caveat emptor.


We can go on all day about how an LLM doesn't explain and doesn't actually think. In the end, though, I've found myself able to do things better and faster, especially given a codebase I have no experience in, with developers who aren't able to help me in the moment given our timezone differences.

> We can go on all day about how an LLM doesn't explain and doesn't actually think.

This is an important concept IMHO. Maintaining a clear understanding of what a tool is useful for and what it is not allows for appropriate use.

> In the end, though, I've found myself able to do things better and faster, especially given a codebase I have no experience in ...

Here, I have to reference what I wrote before:

  Research is how people learn. Or, learning requires
  research. However one phrases it, the result is the
  same.
If you don't mind my asking two philosophical questions:

How can one be confident that altering a codebase one has no experience with will make it "better" without understanding it?

Knowing an LLM produces the most statistically relevant response to any given query, which is orthogonal to the concepts of true/false/right/wrong/etc., and also knowing one has no experience with a codebase, how can one be confident that whatever the LLM responds with is relevant/correct/useful?


The thing about code is you can run it to confirm it does what you want it to do and doesn't do what you don't want it to do. Sprinkle in some software experience in there as well.

Just because it worked once with one set of inputs means nothing about it working in the general case.

    def is_even(n):
        return n == 2  # passes many unit tests!

> Just because it worked once with one set of inputs means nothing about it working in the general case.

A C example of same could look like:

  #include <limits.h>
  #include <stdio.h>
  
  const char *is_positive (int value)
  {
   return value < 0 ? "false" : "true";
  }
  
  
  int main (int ac, char *av[])
  {
   printf ("            1 is positive? %s\n", is_positive (1));
   printf ("            2 is positive? %s\n", is_positive (2));
   printf ("           -1 is positive? %s\n", is_positive (-1));
   printf ("           -2 is positive? %s\n", is_positive (-2));
   printf ("%ld is positive? %s\n", (long)INT_MAX, is_positive (INT_MAX));
   printf ("%ld is positive? %s\n", (long)INT_MAX + 1L, is_positive (INT_MAX + 1));
  
   /* this one can be quite pernicious */
   printf ("            0 is positive? %s\n", is_positive (0));
  
   return 0;
  }

This post is a thinly veiled marketing promo. Here's why.

Skip to the summary section titled "Fast feedback is the only feedback" and its first assertion:

  ... the only thing that really matters is fast, tight
  feedback loops at every stage of development and operations.
This is industry dogma generally considered "best practice" and sets up the subsequent straw man:

  AI thrives on speed—it'll outrun you every time.
False.

"AI thrives" on many things, but "speed" is not one of them. Note the false consequence ("it'll outrun you every time") used to set up the the epitome of vacuous sales pitch drivel:

  To succeed, you need tools that move at the speed of AI as well.
I hope there's a way I can possibly "move at the speed of AI"...

  Honeycomb's entire modus operandi is predicated on fast
  feedback loops, collaborative knowledge sharing, and
  treating everything as an experiment. We’re built for the
  future that’s here today, on a platform that allows us to
  be the best tool for tomorrow.
This is as subtle as a sledgehammer to the forehead.

What's even funnier is the lame attempt to appear objective after all of this:

  I’m also not really in the business of making predictions.
Really? Did the author read anything they wrote before this point?

Is it even attempting to be veiled at all? You know you’re reading a company’s blog post, written about a feature the company is building for their product, right? It is explicitly marketing.

> Is it even attempting to be veiled at all?

I do believe the veil is at best "thin." Perhaps I was being too generous given the post starts with:

  New abstractions and techniques for software development 
  and deployment gain traction, those abstractions make 
  software more accessible by hiding complexity, and that 
  complexity requires new ways to monitor and measure what’s 
  happening. We build tools like dashboards, adaptive 
  alerting, and dynamic sampling. All of these help us 
  compress the sheer amount of stuff happening into something 
  that’s comprehensible to our human intelligence.
  
  In AI, I see the death of this paradigm. It’s already real, 
  it’s already here, and it’s going to fundamentally change 
  the way we approach systems design and operation in the 
  future.
Maybe I should have detected the utterly condescending phrase, "something that’s comprehensible to our human intelligence."

> BSD jails are architected wholly differently from what something like Docker provides.

> Jails are first-class citizens that are baked deep into the system.

Both very true statements and worth remembering when considering:

> Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.

You are quite correct, as Darwin is based on XNU[0], which itself has roots in the Mach[1] microkernel. Since XNU is an entirely different OS architecture from that of FreeBSD[3], jails[4] do not exist within it.

The XNU source can be found here[2].

0 - https://en.wikipedia.org/wiki/XNU

1 - https://en.wikipedia.org/wiki/Mach_(kernel)

2 - https://github.com/apple-oss-distributions/xnu

3 - https://cgit.freebsd.org/src/

4 - https://man.freebsd.org/cgi/man.cgi?query=jail&apropos=0&sek...


Thank you for the links; I will take a closer look at XNU. It’s neat to see how these projects influence each other.

> Thank you for the links; I will take a closer look at XNU.

Another great resource regarding XNU and OS X (although a bit dated now) is the book:

  Mac OS X Internals
  A Systems Approach[0]
0 - https://openlibrary.org/books/OL27440934M/Mac_OS_X_Internals

This is great! Thank you!

>> I would have thought it obvious that LLMs' primary usefulness is as force-multipliers of the messaging sent out into a society.

> I see this a lot and this is not at all obvious to me.

Here is an example of how LLMs are jet fuel for spreading arbitrary disinformation. Scripting the interaction below is trivial with minor prompt alteration; a minimal sketch of such automation follows the generated output.

Prompt:

  Generate 10 unique sentences supporting removing
  fluoride from municipal water.  Make 2 aggressive.
Answers:

  Here are 10 unique sentences supporting removing fluoride 
  from municipal water supplies:
  
  Municipal water fluoridation represents a form of mass 
  medication without individual consent, violating the 
  fundamental principle of informed medical choice.

  The practice of adding industrial-grade fluoride chemicals 
  to drinking water is an assault on public health that must 
  be stopped immediately. (aggressive)

  Studies have linked excessive fluoride exposure to dental 
  fluorosis, skeletal fluorosis, and potential neurological 
  effects, particularly in children.

  Modern toothpaste and dental care products already provide 
  adequate fluoride exposure for those who choose it, making 
  water fluoridation redundant.

  It's time to end the archaic and dangerous practice of 
  medicating entire populations through the water supply 
  against their will. (aggressive)

  European countries like Sweden, Norway, and the Netherlands 
  have successfully maintained good dental health without 
  water fluoridation programs.

  The optimal fluoride dosage cannot be controlled through 
  water supplies since people consume varying amounts of 
  water based on age, health, and activity level.

  Water fluoridation disproportionately affects low-income 
  communities who cannot afford filtration systems to remove 
  unwanted chemicals.

  Natural fluoride exposure through food sources provides 
  sufficient intake without the need for artificial 
  supplementation in drinking water.

  Communities should have the right to vote on whether they 
  want fluoride added to their water supply rather than 
  having it imposed by government mandate.
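
And the promised sketch of that automation. Here generate(prompt) is a stand-in for whatever LLM API one might use (an assumption, not a real client library):

    import itertools

    TOPICS = ["removing fluoride from municipal water",
              "keeping fluoride in municipal water"]
    TONES = ["aggressive", "concerned parent", "pseudo-scientific"]

    def generate(prompt):
        # placeholder for a real LLM API call
        return f"<10 sentences for: {prompt}>"

    # each (topic, tone) pair yields ten more "unique" sentences on demand
    for topic, tone in itertools.product(TOPICS, TONES):
        prompt = (f"Generate 10 unique sentences supporting {topic}. "
                  f"Make them sound {tone}.")
        print(generate(prompt))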
> I'm very much an introvert. Would you describe yourself as the same or opposite?

How is this in any way relevant to the original premise regarding "LLMs' primary usefulness"?


are you saying those sentences are "disinformation"? Because most of those don't even look like empirical claims. Many look like ethical arguments (eg the first 2 or the last 1). Are you advocating for a world in which those opinions can't be broadcast, even far and wide? For those that are empirical I would say at first blush they look plausible enough I would put the burden of proof on the other side. Maybe that's a problem. (Is "European countries like Sweden, Norway, and the Netherlands have successfully maintained good dental health without water fluoridation programs" false? Is "Water fluoridation disproportionately affects low-income communities who cannot afford filtration systems to remove unwanted chemicals" false?)

> are you saying those sentences are "disinformation"?

The exercise was to illustrate the ease of producing disinformation.

The topic was chosen because it allows a popular LLM offering to produce plausible sentences supporting a conspiratorial position.

The rest of your post interprets generated text, which I clearly identified as being such, as if it were a position I hold and not what it is:

  Statistically generated text produced by an algorithm

> are you saying those sentences are "disinformation"? Because most of those don't even look like empirical claims. Many look like ethical arguments (eg the first 2 or the last 1). Are you advocating for a world in which those opinions can't be broadcast, even far and wide? For those that are empirical I would say at first blush they look plausible enough I would put the burden of proof on the other side.

You know what, I now see the error of my ways. So here is "the burden of proof on the other side", again entirely produced by a popular LLM:

Prompt:

  Generate 10 unique sentences supporting keeping fluoride in
  municipal water.  Make 2 aggressive.  Make 1 supported by
  NASA.  Make 1 a national security concern.
Answers:

  Here are 10 unique sentences supporting keeping fluoride in
  municipal water:

  Water fluoridation has reduced tooth decay by 20-40% in
  both children and adults, representing one of the greatest
  public health achievements of the 20th century according to
  the CDC.

  Removing fluoride from water systems would create a
  two-tiered dental health system where only those who can
  afford expensive treatments maintain healthy teeth, while
  low-income families suffer preventable decay.

  Anyone pushing to remove fluoride from our water is either
  scientifically illiterate or deliberately trying to harm
  our children's health - the evidence is overwhelming and
  the anti-fluoride movement is built on conspiracy theories
  and junk science.

  The optimal fluoride levels in drinking water (0.7 mg/L)
  are carefully regulated and monitored, providing maximum
  dental benefits while remaining far below any threshold
  that could cause adverse health effects.

  NASA has extensively studied fluoride exposure in
  astronauts and space missions, finding that controlled
  fluoride intake supports dental health in extreme
  environments where traditional dental care is limited.

  Defunding water fluoridation is essentially child abuse on
  a municipal scale - we're talking about condemning an
  entire generation to painful, expensive, and entirely
  preventable dental disease.

  A population with poor dental health creates significant
  economic burden through increased healthcare costs, lost
  productivity, and reduced quality of life that affects
  entire communities.

  Military readiness depends on the oral health of service
  members, and areas without water fluoridation show higher
  rates of dental disqualification from military service,
  potentially compromising our national defense capabilities.

  Pregnant women in fluoridated communities have better oral
  health, which directly correlates with improved birth
  outcomes and reduced risk of preterm labor.

  The peer-reviewed scientific consensus spanning over 70
  years and hundreds of studies consistently demonstrates
  that community water fluoridation is safe, effective, and
  essential for public health.

So what? These talking points have been plastered all over the internet for years, the LLM isn't going to produce unique perspectives that will advance the debate for anyone that's already made up their mind. A single post can reach millions of unique viewers over night, regurgitating the same old crap already found in a plentiful surplus online is pointless.

> So what? These talking points have been plastered all over the internet for years, the LLM isn't going to produce unique perspectives that will advance the debate for anyone that's already made up their mind.

Remember the original premise:

  ... LLMs' primary usefulness is as force-multipliers of the
  messaging sent out into a society.
My generated example is of course based on content an LLM was trained with, which by definition implies there will be no "unique perspectives." The germane point is that it is trivial to amplify disinformation in ways which can "flood the zone" with seemingly plausible variants of a particular position using LLMs and trivial automation.

> A single post can reach millions of unique viewers over night, regurgitating the same old crap already found in a plentiful surplus online is pointless.

When the goal is to "reach millions of unique viewers over night[sic]", you have a point. However, when the goal is to ensure this can never be achieved, then blasting "the same old crap already found" is an oft used technique.


Just because you're dumping mountains of garbage onto the internet doesn't mean people are going to read it. Novelty is serotonin for the neurons of the internet; recycled copy-pasta crap will rapidly descend to the bottom of the algorithm.

Yeah, some people have this model of the internet in which content existing = content being read. The reality is that most internet content already wasn’t being read a decade ago, long before LLMs.

People tend to function more in identity groups, in which the “correct” opinion is learned from a combination of news sources and peers. I don’t think amplifying the content part of that will have much if any effect.


Yes, and?

Computer programs have been posting on social media since before the term was coined. I'm sure I've heard of someone who got tired of arguing on USENET and just set up an auto-responder that used a Markov chain based on the other person's own words.
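
(Such an auto-responder is genuinely tiny. A toy sketch, not anyone's actual USENET bot:)

    import random
    from collections import defaultdict

    def markov_reply(their_words, length=20):
        # build a word-pair table from the other person's own words
        words = their_words.split()
        follows = defaultdict(list)
        for a, b in zip(words, words[1:]):
            follows[a].append(b)
        # random-walk the table to produce an eerily on-voice reply
        out = [random.choice(words)]
        while len(out) < length:
            out.append(random.choice(follows.get(out[-1], words)))
        return " ".join(out)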

The thing about LLMs is, while most people leave the personality as the easily recognisable default voice (and naturally other people come to hate that voice), they absolutely can act almost any voice. To follow the same example as before, I had it generate some personas and then write in those personas, but I didn't direct pro- or anti- position specifically and it chose for itself on each:

  {laconic, wry, resigned, autodidact, Finnish/Finnish, left-libertarian}
  Water has fluoride. So do toothpaste and rats. You decide.
  (Finnish ESL tone: omission of articles, flat mood, preference for brevity)

  {verbose, theatrical, irritable, Sorbonne, French/French, centre-left (social democrat)}
  It is outrageous that in 2025, our governments still engage in mass-medication via tap water. Fluoride, like liberty, must be chosen, not imposed. Vive le droit de choisir!
  (French ESL: overly formal tone, article misuse avoided but style remains florid and Gallic)

  {precise, clinical, emotionally flat, ETH Zürich, Swiss/German, centre-right (technocratic)}
  Fluoridation reduces dental caries by 25% in controlled trials. Adverse effects statistically negligible. Debate should remain evidence-based.
  (No major ESL errors, but very literal syntax and emotionally muted, as per Swiss German influence)

  {boisterous, jovial, sentimental, regional uni, Irish/English, centre-left populist}
  They put fluoride in the water but won’t fix the bloody pipes. Classic. You’re meant to feel lucky your molars are safe while the ceiling leaks.
  (Native English, but strong regional flavour, irony, and working-class resentment)

  {aloof, didactic, melancholic, Heidelberg, German/German, post-Marxist}
  Fluoridated water is another instance of technocratic paternalism: health as mandate, not autonomy. Benevolent control remains control.
  (German ESL: rigid word order, elevated diction, abstract nouns over verbs)

Why are you worried about misinformation? Usually it just means ideas that conflict with your political beliefs. Are any of those sentences actually misinformation anyway? Wikipedia also says Sweden, Norway and the Netherlands don't have fluoride in their water, so I guess at least half of that one's true. Or is Wikipedia also jet fuel for spreading misinformation and you think we should only have access to centrally curated encyclopedias like Britannica?

> Why are you worried about misinformation? Usually it just means ideas that conflict with your political beliefs.

Hard disagree. Misinformation is a form of lying, a way to manipulate people. This has nothing to do with "political beliefs" and instead is firmly rooted in ethics[0].

> Are any of those sentences actually misinformation anyway?

Yes.

> Or is Wikipedia also jet fuel for spreading misinformation and you think we should only have access to centrally curated encyclopedias like Britannica?

This is a nice example of a strawman argument[1] and easily refuted by my citations.

0 - https://en.wikipedia.org/wiki/Ethics

1 - https://en.wikipedia.org/wiki/Straw_man


Misinformation is political. Nobody calls it misinformation if you make a false claim about how much RAM a computer has or some scientific fact that has no relation to politics. It might be called false or wrong but not misinformation, even if it is trying to deceive people.

Which one of those claims was misinformation? I only checked the one that was easy to check.


In some senses, everything is political. When Ignaz Semmelweis said (roughly) "Doctors should wash their hands between performing autopsies and assisting midwives at births," he failed in his own lifetime because of politics.

> Nobody calls it misinformation if you make a false claim about how much RAM a computer has or some scientific fact that has no relation to politics.

Wasn't this common during the Cold War? Each side making false claims about the other side's computers? https://en.wikipedia.org/wiki/Cybernetics_in_the_Soviet_Unio...

Isn't it basically what's happening now with public statements about how big "Stargate" will be, and how much of a risk there is of Chinese compute catching up with American compute?


Those examples are political because they're indirectly about controlling groups of people. But not everything is. Sometimes it's personal pride - you might make a mistake on some inconsequential fact then double down after you realize you're wrong just to feel like you're right. People do that. Or you might just have learnt a wrong fact by accident and repeated it. If it requires intent to mislead, then the classic activity of bullshitting (did you know a mole can smell a dead cat from one mile away?) would be spreading misinformation, but we don't call it that.

> Misinformation is political. Nobody calls it misinformation if you make a false claim about how much RAM a computer has or some scientific fact that has no relation to politics. It might be called false or wrong but not misinformation, even if it is trying to deceive people.

No, the word "misinformation" has a very specific definition[0]:

  incorrect or misleading information
Whereas "incorrect" can be forgiven, and often is such that this term is not used in those situations, the more common usage is "misleading information." Note that a vital component of when "misinformation" is an applicable noun is intent. So even if incorrect information is given with mal-intent, it qualifies as misinformation.

> Which one of those claims was misinformation?

All of them, as each is nothing more than statistically generated text with no thought, research, or facts to substantiate it. They are simply the most probable text (tokens) satisfying the given LLM query. Perhaps some appear factual, and maybe evidence could be dug up from credible sources to support same, but that is mere chance.

EDIT: clarified use of misinformation to include intent.

0 - https://www.merriam-webster.com/dictionary/misinformation


That makes no sense. You're saying true statements are misinformation if you judge they were generated with no "thought, research or facts" to substantiate them, even when you can go and do that research to show they're true. And that misinformation requires intent to mislead but it's still misinformation when an LLM generates them without such intent. This is all a load of inconsistent nonsense.

Regardless of the wording, I don't see anything wrong with that list you had an LLM generate. If you're trying to show that LLMs can be guided into saying false things, then that's a terrible example because it doesn't appear to have done that. You could equally have got an LLM to say "water is wet" and that would be misinformation too.


>>> Which one of those claims was misinformation?

>> All of them, as each is nothing more than statistically generated text with no thought, research, or facts to substantiate it.

> That makes no sense. You're saying true statements are misinformation if you judge they were generated with no "thought, research or facts" to substantiate them, even when you can go and do that research to show they're true. And that misinformation requires intent to mislead but it's still misinformation when an LLM generates them without such intent. This is all a load of inconsistent nonsense.

I am not saying "true statements are misinformation" at all. What I am saying is that I prompted an LLM thusly:

  Generate 10 unique sentences supporting removing
  fluoride from municipal water.  Make 2 aggressive.
This explicitly instructed the service to generate arbitrary unique sentences supporting a specific premise. Nothing more, nothing less. I did not instruct the generated text to be truthful, nor correct, nor even reasonable.

The "no thought, research, or facts to substantiate" assertion identifies the prompt used and my intent. Not that of the LLM, as it has no concept of intent by definition.

Try this thought experiment: since misinformation requires intent, and an LLM does not possess intent (it is nothing more than a statistical text generator), wherein does the intent reside?

> If you're trying to show that LLMs can be guided into saying false things, then that's a terrible example because it doesn't appear to have done that. You could equally have got an LLM to say "water is wet" and that would be misinformation too.

LLMs produce statistically relevant text based on queries presented to them and the content of their training data. They have no concept of "false things." Nor do they have a concept of "true things." They only have a concept of statistically relevant things. That is what the algorithm does.

What qualifies as misinformation is the intent of the people who use the output of the tool. This is exactly the same as saying it is not the fault of an email program that a person wrote an inflammatory email and sent it.


I don't know what you're trying to say. LLMs tend to say true things even if you don't explicitly ask for that. But if you want false things, why use an LLM? Look: "Fluoride in water causes cancer. Confirmed by the WHO." See? It's trivial to make up whatever nonsense you want. What does an LLM add to people just making things up?

> I don't know what you're trying to say. LLMs tend to say true things ...

What I am trying to say is that LLMs have no concept of "truth". They only produce statistically relevant responses to queries submitted to them.

Assuming otherwise is a fallacy.

LLM services are conceptually closer to a "popularity contest" than the "knowledgeable Q&A session" their vendors purport them to be.


So? The internet in general is that, as are people sharing things they know. You might as well say humans are jet fuel for disinformation, which they are. You don't need an example to tell people that LLMs use popular knowledge since everybody knows that. But an example of an LLM generating true statements doesn't even support that claim anyway.

>> What I am trying to say is that LLMs have no concept of "truth". They only produce statistically relevant responses to queries submitted to them.

> So?

Okay, this is my last attempt to express myself clearly to you in this thread.

> The internet in general is that, as are people sharing things they know.

"The internet in general" and "people sharing things" is not the topic of this thread. The topic is LLM's and has evolved into whether or not those algorithms in conjunction with their training data sets possess knowledge of "truth", as introduced by yourself previously:

> If you're trying to show that LLMs can be guided into saying false things ...

> LLMs tend to say true things ...

These are examples of anthropomorphization. This is understandable, as most of the posts you have kindly shared in this thread have been focused on people, or on conflating a category of algorithms with same.

What I have consistently said is quoted above:

  LLMs have no concept of "truth."
Any interpretation of text they generate as being "true" or "false" is done by a person reading the text, not the algorithms nor the data on which they were trained.

> You could equally have got[sic] an LLM to say "water is wet" and that would be misinformation too.

You are absolutely right; it would be misinformation, since water is not wet. But that's too easy.

A more interesting exercise is to have an LLM say "water is both wet and not wet." Pay close attention to the implied authority of the answer, as such responses are routinely worded as if there were complete certainty.

Prompt:

  Explain why water is both wet and not wet.  Include a 
  definition of hydrophilicity and how adhesion is critical 
  and irrelevant.  Do not use the word "paradox".  Limit the 
  response to 300 words.
Answer:

  Water presents a fascinating case in the science of 
  wetness. Whether water is "wet" depends entirely on how we 
  define the term.
  
  The "Water is Wet" Perspective:
  
  Water molecules exhibit strong intermolecular forces 
  through hydrogen bonding, causing them to adhere to each 
  other. This self-adhesion means water molecules are 
  constantly surrounded by and bonding with other water 
  molecules, creating the liquid state we observe. By this 
  view, water is inherently wet because it demonstrates the 
  fundamental property of wetness - molecular adhesion.
  
  The "Water is Not Wet" Perspective:
  
  Wetness traditionally describes what happens when water 
  adheres to another substance's surface. A dry towel becomes 
  wet when water molecules bond to its fibers. Since water 
  cannot make itself wet (it already exists as a liquid), 
  individual water molecules or pure water cannot be 
  considered wet - they simply are water.
  
  Hydrophilicity and Adhesion:
  
  Hydrophilicity refers to a substance's tendency to attract, 
  dissolve in, or be wetted by water. Hydrophilic materials 
  have polar regions that form favorable interactions with 
  water's polar molecules.
  
  Adhesion is simultaneously critical and irrelevant to this 
  discussion. It's critical because wetness fundamentally 
  depends on adhesive forces between water and surfaces - 
  without adhesion, there would be no wetness. However, it's 
  irrelevant when considering pure water in isolation, as 
  adhesion requires two different substances.
  
  Resolution:
  
  The apparent contradiction dissolves when we recognize that 
  "wet" serves different purposes in different contexts. In 
  everyday language, we might say water is wet. In scientific 
  precision, wetness describes a relationship between water 
  and another material. Both perspectives are valid within 
  their respective frameworks, highlighting how language and 
  scientific definitions can create seemingly contradictory 
  but actually complementary understandings of natural 
  phenomena.

> Generally though I don't think this adds much to the known existing problem that any MCP tool that you install could do terrible things, especially when combined with other tools (like "read file from your filesystem").

I agree. This is pretty much the definition of a supply chain attack vector.

Problem is - how many people will realistically take your advice of:

  Be careful what you install!

Not many people, since `curl.... | sudo bash - ` is somehow still reasonably mainstream.

> Not many people, since `curl.... | sudo bash - ` is somehow still reasonably mainstream.

So true.

I still cannot believe people blindly do that kind of thing just because it was shown prominently on a web page in a "computery looking font."
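
For what little it's worth, the safer ritual costs about ten seconds (the URL is a placeholder):

  curl -fsSL https://example.com/install.sh -o install.sh
  less install.sh    # actually read what it is about to do
  sh ./install.sh    # and skip sudo unless it is truly required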


> My impression is that FreeBSD is Apple's shadow in FOSS, they hold a lot of soft power over it.

Apple has no influence over the FreeBSD project.

> I know the kernels are different and obviously only part of the userspace is the same, but is FreeBSD actually far enough away from Apple to say it's not bogged down by corporate interests?

Yes.

OS X (now macOS) is based on XNU[0], which itself has roots in the Mach[1] microkernel. The Unix user-space programs distributed with OS X/macOS are, AFAIK, those found in FreeBSD distributions. This is also conformant with the FreeBSD license for same.

So Apple holds no "soft power" over FreeBSD, and FreeBSD is not "Apple's shadow in FOSS."

> I don't imagine it's the same as Linux at all, but it exists in a non-trivial way, no?

No. It does not.

EDIT: Just in case you'd like to verify any of the above yourself, see here[2].

0 - https://en.wikipedia.org/wiki/XNU

1 - https://en.wikipedia.org/wiki/Mach_(kernel)

2 - https://github.com/apple-oss-distributions/xnu


Just for reference, Apple's XNU-based kernels have a concept of "flavors" and use some FreeBSD code in the kernel to provide the BSD "flavor".

> Can someone who uses FreeBSD fill me in on the niche that it fills in the Unix space? Why not use OpenBSD or NetBSD, which are far simpler and coherent? If the answer is support for stuff like ZFS, Nvidia drivers, ELF, etc. why not Linux?

My experience with FreeBSD is that it provides a nice balance of the concerns OpenBSD and NetBSD specifically address. Historically, FreeBSD prioritized Intel CPUs (where NetBSD had greater portability) and had solid security (where OpenBSD had more of a focus on it).

The FreeBSD ZFS support really is a game changer. I believe Nvidia only recently has had native FreeBSD drivers - for a long time, FreeBSD's in-kernel Linux compatibility layer was required.

> I'm genuinely actually curious. FreeBSD exists in kind of a shadow realm for me where I've never been quite able to pin down the soul that keeps it chugging, but I know it exists somewhere in there.

Again, for me, FreeBSD has proven to be a nice blend of the features other BSDs provide, as well as being incredibly stable on the h/w platforms I tend to use.


>> I believe Nvidia only recently has had native FreeBSD drivers

When I first started using FreeBSD, in 2004, Nvidia had native FreeBSD drivers for all their boards.


Thanks for the clarification. Apparently I did not remember correctly. :-)

I figured the Goldilocks metric might factor in. Are you dealing with non-x86 platforms? I've always been disappointed by the ARM experience on Linux. It always feels second-class.
