Hacker News | ajxs's comments

My given name is Anthony, and I have this problem. I grew up in Australia, and visit some family in Poland every other year. The Australian pronunciation of my name is particularly incompatible with Polish, so I change it from 'ænθəni' to (something like) 'ɑːntɒni'. The 'æ' sound at the start of my name is totally foreign, and if I start introducing myself in Polish, then say my name the way an Australian would, the entire sentence just sounds too weird. Obviously Polish people are familiar with Western culture, and have probably seen the name before, but it just sounds too strange when used in spoken Polish.

The sound [æ] is not entirely foreign to Polish and some other Slavic languages. It often shows up as an allophone of /a/ when it's between two "soft" (palatalized) consonants (e.g. the first /a/ in the word "niania"). The problem for us learning it is that it's not a separate phoneme, and worse yet, the environment in which it occurs in Slavic languages doesn't correspond to anything in English - and, conversely, in English it appears in environments where it could only be [a] or [ɛ] in Slavic languages.

The name itself is, of course, originally Roman, and it's also the name of many Christian saints, so basically every Christian country (not even necessarily a Western one) will be aware of it and have some version of it; for Polish that'd be "Anton", I think, same as in Russian.


This doesn't happen in my speech; I certainly pronounce both <a>'s as [a] in "niania". [ɛ] is different: it certainly becomes [e] after palatalized consonants. I agree that people cannot tell the difference intuitively, though.

It happens more in fast speech. If you draw out the vowel - as people tend to when they are trying to get a better feel for it - it will end up at [a] even if it doesn't start there. I suspect you'd need to actually record it and then look at the formants to tell for sure.

> Regardless of the language, you cannot write some special program that guarantees an input will be valid if it comes from an external source or a human. It is simply impossible to prove this at compile time.

> ...but from my perspective it looks worse in Ada...

This isn't really true. SPARK obviously can't prove that the input will be valid, but it can formally prove that the validity of the user input is verified before it's used.

I wrote about this here: https://ajxs.me/blog/How_Does_Adas_Memory_Safety_Compare_Aga...


I am no expert here; what I remember is mostly from CS courses. But isn't the entire point of a formal program proof that you can reason about the combinatorics of all data and validate hypotheses about them?

It's one thing to say: "objects of this type never have value X for field Y" or "this function only works on types U and V", but it's a lot more impressive to say "in this program, states X and Y are never achieved simultaneously" or "in this program, state X is always followed by state Y".

This is super useful for safety systems, because you can express safety as these kinds of properties, which are falsifiable by the proof system. E.g.: "the ejection seat is only engaged after the cockpit window is ejected", or "if the water level exceeds X, the pump is always active within 1 minute".
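A hedged sketch of how the first of those properties might be phrased as a falsifiable SPARK contract (the type and names below are invented purely for illustration):

    --  Illustrative state record; a real system would model much more.
    type Cockpit_State is record
       Window_Ejected : Boolean := False;
       Seat_Engaged   : Boolean := False;
    end record;

    --  "The ejection seat is only engaged after the cockpit window is
    --  ejected" becomes a precondition that the prover has to discharge
    --  at every call site, at analysis time rather than at runtime.
    procedure Engage_Ejection_Seat (Cockpit : in out Cockpit_State)
       with SPARK_Mode => On,
            Pre  => Cockpit.Window_Ejected,
            Post => Cockpit.Seat_Engaged and Cockpit.Window_Ejected
    is
    begin
       Cockpit.Seat_Engaged := True;  --  Prover checks this satisfies the Post.
    end Engage_Ejection_Seat;

If any code path could reach Engage_Ejection_Seat without first establishing Window_Ejected, the proof fails when the analysis is run, instead of the property only being checked (or missed) at runtime.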


> isn't the entire point of a formal program proof that you can reason about the combinatorics of all data and validate hypotheses about them?

You're right. Validating input is a part of this process. The compiler can trivially disprove that the array can be safely indexed by an arbitrary integer the user might input. Adding a runtime check to ensure we have a valid index isn't a very impressive use of formal proofs, I admit. It's just a simple example that clearly demonstrates how SPARK can prove 'memory-safety' properties.


But isn't the entire point of Rust's verbose type system to declare valid states at compile time? I don't exactly see how this can't be proved in Rust.

> But isn't the entire point of Rust's verbose type system to declare valid states at compile time?

Different type systems are capable of expressing different valid states with different levels of expressivity at compile time. Rust could originally express constraints that SPARK couldn't and vice-versa, and the two continue to gain new capabilities.

I think in this specific example it's possible to write a Rust program that can be (almost) verified at compile time, but doing so would be rather awkward in comparison (e.g., custom array type that implements Index for a custom bounded integer type, etc.). The main hole I can think of is the runtime check that the index is in bounds since that's not a first-class concept in the Rust type system. Easy to get right in this specific instance, but I could imagine potentially tripping up in more complex programs.


I am not a Rust expert either, but just a general remark: using a programming language like Rust as it's intended to be used, i.e. functional/imperative programming of the problem domain, does not lend itself well to proving/disproving the kind of statements I showed above.

Yes, you could theoretically generate a Rust program that does not compile if some theorem does not hold, but this is oftentimes (unless the theorem is about types) not a straightforward Rust program for the problem at hand.

I also think that, although Rust is blurring the lines a bit, equating formal verification and type-checking is not a valid stance. A type checker is a specific kind of formal verification that can operate on a program, but it will only ever verify a subset of all hypotheses: those about object types.


This is the smoking gun for me:

    procedure Overflow_This_Buffer
       with SPARK_Mode => On
    is
       type Integer_Array is array (Positive range <>) of Integer;
       Int_Array : Integer_Array (1 .. 10) := [others => 1];
       Index_To_Clear : Integer;
    begin
       Ada.Text_IO.Put ("What array index should be cleared? ");
       --  Read the new array size from stdin.
       Ada.Integer_Text_IO.Get (Index_To_Clear);

       Int_Array (Index_To_Clear) := 0;
    end Overflow_This_Buffer;

What I am asking is: what exactly is being formally proven from first principles? Because to me this looks no different from a simple assertion or runtime bounds check, which can be performed in any language with if statements and exceptions.

How did you prove that no one could just put in an arbitrary integer into that function? Does it fail to compile? Then you're just making a custom type which enforces valid state which, again can be done in any language with a rich type system and if statements.

If I might be more blunt: formal proofs in the field of programming appear to be nothing more than mathematicians trying to strong-arm their way into the industry where they aren't necessarily needed. Formal proofs make sense when you're trying to come up with new formulas or studying the cosmos, but they make no sense when you're trying to predict how bits will behave on a computer, because for the most part that is a solved problem by languages themselves.


> How did you prove that no one could just put in an arbitrary integer into that function? Does it fail to compile?

The answer quite literally follows immediately after the code sample you quoted:

> Attempting to prove the absence of runtime errors gives us the following warnings:

  buffer_overflow.adb:162:26: medium: unexpected exception might be raised
    162 |      Ada.Integer_Text_IO.Get (Index_To_Clear);
        |      ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~

  buffer_overflow.adb:164:18: medium: array index check might fail
    164 |      Int_Array (Index_To_Clear) := 0;
        |                 ^~~~~~~~~~~~~~
    reason for check: value must be a valid index into the array
    possible fix: postcondition of call at line 162 should mention Item 
    (for argument Index_To_Clear)
    162 |      Ada.Integer_Text_IO.Get (Index_To_Clear);
        |                         ^ here

> The SPARK prover correctly notices that there's nothing stopping us from entering a value outside the array bounds. It also points out that the Get call we're using to read the integer from stdin can raise an unexpected Constraint_Error at runtime if you type in anything that can't be parsed as an integer.

This is followed by a version of the program which SPARK accepts.

> Then you're just making a custom type which enforces valid state which, again can be done in any language with a rich type system and if statements.

Key word here is can. You can write correct software using any language, from raw binary to theorem proving languages, but some languages check more properties at compile time than others. What using something like SPARK/Lean/Agda/Rocq buys you is the "more" part, up to allowing you to eliminate runtime checks entirely because you proved at compile time that the corresponding error conditions can not happen (e.g., proving that indices are always in bounds allows you to omit bounds checks). That's not something the more popular languages are capable of because their type system is not expressive enough.
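To make the "eliminate runtime checks" point concrete, here is a minimal SPARK-style sketch (the procedure and its names are my own invention, reusing the unconstrained array type from the blog post):

    type Integer_Array is array (Positive range <>) of Integer;

    --  The precondition carries the bounds information, so the prover can
    --  discharge the array index check statically; the usual runtime check
    --  is then redundant and could be suppressed without losing safety.
    procedure Clear_Element (Arr : in out Integer_Array; Index : Positive)
       with SPARK_Mode => On,
            Pre => Index in Arr'Range
    is
    begin
       Arr (Index) := 0;  --  Index check proved at analysis time.
    end Clear_Element;

Every caller is in turn obliged (again at analysis time) to show that the index it passes is inside Arr'Range, which is exactly where a defensive runtime check would otherwise have to go.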

> Formal proofs make sense when you're trying to come up with new formulas or studying the cosmos, but they make no sense when you're trying to predict how bits will behave on a computer, because for the most part that is a solved problem by languages themselves.

This seems contradictory, especially when considering that type checking is creating a formal proof?


> The SPARK prover correctly notices that there's nothing stopping us from entering a value outside the array bounds. It also points out that the Get call we're using to read the integer from stdin can raise an unexpected Constraint_Error at runtime if you type in anything that can't be parsed as an integer.

To me, this doesn't sound like something unique to SPARK. Let's return to the solution example:

    procedure Overflow_This_Buffer
       with SPARK_Mode => On
    is
       type Integer_Array is array (Positive range <>) of Integer;
       Int_Array : Integer_Array (1 .. 10) := [others => 1];
       Index_To_Clear : Integer := Int_Array'First - 1;
    begin
       while Index_To_Clear not in Int_Array'Range loop
          Ada.Text_IO.Put ("What array index should be cleared? ");
          --  Read the new array size from stdin.
          Ada.Integer_Text_IO.Get (Index_To_Clear);
       end loop;

       Int_Array (Index_To_Clear) := 0;
    end Overflow_This_Buffer;

All you have done here is prove that, within the vacuum of this function, the index will be a valid one. Indeed, this is even confirmed by the article author just prior:

> If we wrap the Get call in a loop, and poll the user continuously until we have a value within the array bounds, SPARK can actually prove that a buffer overflow can't occur. (Remember to initialise the Index_To_Clear variable to something outside this range!)

You have to poll the user continuously until a valid input is given. Again, I don't see how this is any different or special compared to any other programming language that would handle the error by asking the user to retry the function.

> Key word here is can. You can write correct software using any language, from raw binary to theorem proving languages, but some languages check more properties at compile time than others. What using something like SPARK/Lean/Agda/Rocq buys you is the "more" part, up to allowing you to eliminate runtime checks entirely because you proved at compile time that the corresponding error conditions can not happen (e.g., proving that indices are always in bounds allows you to omit bounds checks). That's not something the more popular languages are capable of because their type system is not expressive enough.

But I think the discussion here is about Rust, Ada, C/C++, no? The type systems in Rust, Ada, and C++ are all robust enough at least to accomplish this.

> This seems contradictory, especially when considering that type checking is creating a formal proof?

Exactly! This is the root of my issue: you can accomplish this magical "proof" you speak of without using SPARK or Ada or any special "contract" language.


> Again, I don't see how this is any different or special compared to any other programming language that would handle the error by asking the user to retry the function.

The blog's emphasis is that SPARK catches the possible error at compile time, so you can't forget/neglect to handle the error. Notice that neither rustc nor clippy complains at compile time about the potential OOB access in the Rust program that precedes the SPARK demo, while SPARK catches the potential issue in the naive translation.

> But I think the discussion here is about Rust, Ada, C/C++, no?

Sure, but my point is that type systems are not all equally capable. I don't think it's controversial at all that Rust's type system is capable of proving things at compile time that C++'s type system cannot, the most obvious examples being memory safety and data race safety. Likewise, SPARK (and C++, for that matter) is capable of things that Rust is not.

> The type systems in Rust, Ada, and C++ are all robust enough at least to accomplish this.

I'm not sure about that? You might be able to approach what SPARK could do, but at least off the top of my head you'll need to make a check the compiler can't verify somewhere.

And that's just for this instance; I don't think Rust or C++ is able to match SPARK's more general capabilities. For example, I believe neither Rust nor C++ is able to express "this will not panic/throw" in their type systems. There are tricks for the former that approximate the functionality (e.g., dtolnay/no_panic [0]), but those are not part of the type system and have limitations (e.g., it does not work with panic=abort). The latter has `noexcept`, but that's semantically just a try { } catch (...) { std::terminate(); } around the corresponding function, and it certainly won't stop you from throwing or calling a throwing function anyway.

> Exactly! This is the root of my issue: you can accomplish this magical "proof" you speak of without using SPARK or Ada or any special "contract" language.

Perhaps I wasn't clear enough. I was trying to say that it's contradictory to complain about formal proofs while also claiming you can accomplish the same using other programming languages' type systems, because those type systems are themselves formal proof systems! It's like saying "You don't need formal proofs; just use this other kind of formal proof".

I don't think that assertion is correct either. The reason separate languages are used for this kind of proof is that they restrict operations that can't be proven, add constructs that provide additional information needed for the proof, or both. Perhaps as an extreme example, consider trying to make compile-time proofs for a "traditional" dynamically-typed language - Smalltalk, pre-type-hints Python, Lua, etc.

[0]: https://github.com/dtolnay/no-panic


Maybe you'll find this example to be a bit more useful: https://blog.adacore.com/i-cant-believe-that-i-can-prove-tha...

The idea is that you define a number of pre- and post-condition predicates for a function that you want proved (in what's effectively the header file of your Ada program). As with tests, these checks showing that the output is correct are often shorter than the function body, as in this sorting example.

Then you implement your function body and the prover attempts to verify that your post-conditions hold given the pre-conditions. Along the way it tries to check other things like overflows, whether the pre- and post-conditions of the routines called inside are satisfied, etc. So you can use the prover to try to ensure at compile time that any properties you care about in your program are satisfied, instead of only catching violations at run time via assertions.
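A minimal sketch of that spec/body split, with a much simpler contract than the sorting example in the post (the package and names here are made up for illustration):

    package Counters
       with SPARK_Mode => On
    is
       --  The contract lives in the spec; the prover holds the body to it.
       procedure Increment_Saturating (Value : in out Natural)
          with Post => (if Value'Old < Natural'Last
                        then Value = Value'Old + 1
                        else Value = Value'Old);
    end Counters;

    package body Counters
       with SPARK_Mode => On
    is
       procedure Increment_Saturating (Value : in out Natural) is
       begin
          if Value < Natural'Last then
             Value := Value + 1;  --  Overflow check discharged by the guard.
          end if;
       end Increment_Saturating;
    end Counters;

Drop the guard and the prover flags both the possible overflow and the violated postcondition, rather than leaving them to runtime assertions.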


I've never worked in aerospace, however I'm interested in safety-critical software engineering, and I've written a lot of Ada (and written about my experiences with it too). My understanding is that yes you can write safety-critical code in just about any language, but it's much easier to prove conformance to safety standards (like DO-178C, etc.) in Ada than in C++.

I regularly do hobby bare-metal programming in both Ada and C. I find that Ada helps prevent a lot of the common footguns in C. It's not a silver bullet for software safety, but it definitely helps. The point of any programming language is to make doing the right thing easy, and doing the wrong thing hard. All things considered, I think Ada does a good job of that.
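To give one concrete (and entirely hypothetical) flavour of what I mean by footguns: Ada's constrained scalar types turn the kind of out-of-range value that a C int or uint8_t would accept silently into something the toolchain complains about.

    --  Hypothetical bare-metal-ish example. The ranges are part of the
    --  types, so an out-of-range duty cycle is diagnosed at compile time
    --  when it's static, and raises Constraint_Error when it's computed,
    --  instead of silently wrapping as it would in C.
    type Duty_Cycle_Percent is range 0 .. 100;
    type Pwm_Channel        is range 0 .. 7;

    --  Null body just to keep the sketch self-contained.
    procedure Set_Pwm (Channel : Pwm_Channel;
                       Duty    : Duty_Cycle_Percent) is null;

    --  Set_Pwm (Channel => 3, Duty => 150);  --  flagged: 150 not in 0 .. 100
    --  Mixing up the two arguments also fails to compile, because they
    --  are distinct integer types rather than interchangeable ints.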


This brings to mind FOAF: https://en.wikipedia.org/wiki/FOAF


The early Web 2.0 era was still heavily riding the idea of the Semantic Web, which is why it saw the rise of a lot of metadata standards and "microformats". Most of these rapidly died when it became clear that there are stronger commercial incentives to abuse this data and falsify data (e.g. for "SEO", i.e. spam and false advertising) than there are social incentives to disclose this information and use it responsibly. It was a major clash between the idealistic vision of building global interconnected communities and the financial reality of companies wanting to optimize everything for their own profits.

In other words, the reason we can't have the Semantic Web is the same reason social media has become largely unusable (cf. Dead Internet Theory), and the reason Facebook first decided to push features that they knew made their users measurably unhappier. The introduction of the "social stream" home screen, instead of having to visit each connection's "wall" to see what they'd been up to, massively drove up engagement and created advertisement opportunities, but it resulted in users experiencing an expectation to "perform" for their "audience" instead of "sharing" with "friends" like they had done before. Downstream, this has contributed to mental health crises, especially amongst teenagers using image-heavy, influencer-dominated social media like Instagram.

I'd even go so far as to say that "social media" in its original sense died with the hopes of the Semantic Web and the end of the "blogosphere".

At the time it was called "social" media because there was a perception of building actual 1-to-1 connections between individuals (even prior to "real name policies") and people would share unadulterated personal experiences with the understanding that they were sharing them in a globally distributed but ultimately highly specific niche (and often using several such niches for different topics which is why tag clouds and such became popular). The shift away from this to "walled gardens" and "social feeds" (rather than curated RSS feeds and blogrolls) allowed for "brand accounts" (the most famous example likely being Wendy's on Twitter) and eventually the widespread use of Mechanical Turks (i.e. people in low-wage countries being paid to drive up "engagement" and steer perception via fake comments) and eventual full automation or "bots".

I guess FOAF stands out among others like XFN in that it's still seeing some use as a metadata standard because it defines vocabulary that remains useful outside its original intended purpose, surviving it.


The specs for things like microformats were mostly fine. The issue is that search engines mostly obfuscate their behavior, and microformats fizzled out in the places that could have benefited from them. This was mainly due to relatively low adoption, not necessarily abuse. This was the kind of thing that techies obsessed about but most other people ignored.

So, search engines couldn't rely on most sites properly using microformats and had to resort to other means to pull out structured information. Browsers removed what little support they had for microformats. And then blogging became more of a niche thing as people moved to social networks and things like Substack, Tumblr, and similar sites.

Mostly, you can still use microformats. It doesn't hurt anything, and there might be some things out there that benefit from having a little bit of extra structure. But otherwise there's just very little point to it (beyond some niche uses), and there never really was. It's the classic chicken-and-egg problem: it would be great if everybody used it, but because most people didn't, it wasn't all that great. You only get network effects if you get a critical mass of adoption. That just never happened with this stuff. Most developers treated it as a low-value, optional thing that was maybe nice to have but not worth obsessing over.

These days there are related things that are still widely used (e.g. Open Graph and similar meta attributes). There are also new semantic HTML tags, now fairly widely used, that help tell apart main articles from navigation, footers, etc. That helps things like screen readers, rendering previews of shared links, etc. And it also helps with SEO. And SEO is a great incentive for people to adopt stuff.

XFN and FOAF were indeed mostly relevant in the pre-social-network era and quickly became irrelevant. The point with social networks is that they are walled gardens. Standardization was not something the owners of those actively pursued, because they are simply not interested in interoperability. Exclusive ownership of the relationship with the user is the whole point for exploiters of social networks like Meta, TikTok, X/Twitter, LinkedIn, etc.


> It was a major clash between the idealistic vision of building global interconnected communities and the financial reality of companies wanting to optimize everything for their own profits.

I think it's even simpler than that. HTML et al. was a nice idea, but it's largely ignored in the pursuit of making web pages seem like apps.

Berners-Lee had this idea of documents like pages in a book, nothing like the full-fledged applications in the browser we get today, yet that legacy Berners-Lee cruft still remains whilst people find ever more inventive ways to overcome it.


It's fortunate that the history of the 555 timer is so well documented. Its inventor, Hans Camenzind, wrote several books, and even had a YouTube channel in his later years[1]. It's a shame that so many iconic chips that have changed the world aren't so well documented. I went down a real rabbit hole a while ago trying to find in-depth information about the Hitachi HD44780. I couldn't even decisively pin down exactly what year it was first manufactured. It's interesting to think of microchip designs as a kind of artistic legacy: chips like the 555 have had an enormous impact on modern history.

1: https://www.youtube.com/@hcamen


> I couldn't even decisively pin down exactly what year it was first manufactured

Likely 1985.

https://www.crystalfontz.com/blog/look-back-tech-history-hd4...


I did see this article when I was researching, but it's incorrect. You can find references to the HD44780 in earlier catalogs. The earliest reference I can find is 1981: https://archive.org/details/Hitachi-DotMarixLiquidCrystalDis...

It's also referenced in this catalog from 1982: https://bitsavers.org/components/hitachi/_dataBooks/1982_Hit...

Likely the first year of manufacture was 1981/82.


That 1981 document says "preliminary", which suggests very limited trial production. Even the 1985 reference I found is "advance". First tape-out may have been earlier than that. Have you tried asking Hitachi (apparently now Renesas) about it? It's more likely that this information is available on the Japanese part of the internet.


Try 1972; taped out in 1971.

https://en.wikipedia.org/wiki/555_timer_IC


One disadvantage that I don't think I've seen anyone mention so far: You need to use an in-band value to represent the absence of a node. That may or may not be a problem for your particular implementation.


A null index works fine. Obviously, that location in the array must remain unused, but that does not necessarily waste memory, as the base of the array can point one word before its actual start, provided null indices are always checked for before being used.
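For what it's worth, here's a small sketch of that trick in Ada-ish terms (with invented names): reserve index 0 as the null value and start the node pool at 1, so no usable slot is actually lost.

    type Node_Index is range 0 .. 1_000;
    subtype Valid_Node_Index is Node_Index range 1 .. Node_Index'Last;

    No_Node : constant Node_Index := 0;  --  The in-band "null" value.

    type Node is record
       Value : Integer    := 0;
       Next  : Node_Index := No_Node;
    end record;

    --  The pool only covers the valid indices, so slot 0 never exists;
    --  a link must be compared against No_Node before it is followed.
    Pool : array (Valid_Node_Index) of Node;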


When Instagram introduced Reels, I started to get exposed to these weird and horrible clips of people 'rescuing' sick and abused animals, and begging for donations. I don't know for certain, but there are lots of clues that these accounts are engineering these encounters. Seeing these clips is genuinely distressing, and it's hard to make Instagram stop showing them. Just another way social media is causing real harm to add to the list, I guess.


Thank you for sharing this! I'd love to know more about what led them to develop their own CPU, and what the instruction set looks like. It looks like AdaCore actually merged their support for VISIUMCore into upstream GCC. The slides state it features SEU detection/correction, which is pretty interesting.


The core concept behind XSLT is evergreen: being able to programmatically transform the results of an HTTP request into a document with native tools is still useful. I don't foresee any equivalent native framework for styling JSON ever coming into being, though.


If I'm not mistaken, XSLT 3 works with JSON too. It'd be nice to see that shipped in browsers instead of seeing v1 removed.


I could easily imagine a functional-programming JSON transformation language, or perhaps even a JSLT based on the latest XSLT spec. The key in these things is to constrain what it can do.


We wouldn't even need anything as complex as XSLT, or a functional language for transforming JSON. Other markup-based template processing systems already exist in higher-level ecosystems, like Pug and Mustache for Node.js. You could achieve a lot with a template engine in the browser!


JSX!


That's not... really the same.


> I don't foresee any equivalent native framework for styling JSON ever coming into being though.

Well yeah I hope not! That's what a programming language is for, to turn data into documents.


XSLT 2.0 is Turing complete.


So are many esoteric languages, but I doubt people would appreciate me comparing XSLT to Brainfuck or FORTRAN 66.


Having something declarative is good here, and XSL fills that role.


Let’s rewrite W3C into XML and XSLT.


This was already the case in Australia as early as 2003. I distinctly remember being shocked that I had to provide ID when I bought a phone from a store for the first time.

