
The closest I've come to a religious experience was a C program crashing after I removed an unused variable. I reproduced it multiple times by adding and removing the variable and rerunning the program.

Now I obviously have a believable rational hypothesis for that behaviour, but at the time it was mystical. I left the variable in and got a passing grade (it was a student program).



Computers are extremely haunted. It's often remarkable that they were working at all before you made the change; then you prod them and all the goblins come out.

Similar experiences I've had:

* Putting a big array on the stack (early in my C programming career) - compiles fine, instant crash at runtime.

* A colleague discovered that a function needed padding with NOPs to achieve precise timing. The number of NOPs needed varied with code changes (even NOPs added or removed) in functions higher up the same source file.

* Occasional crashes in low level routines on ARM64 after changing completely unrelated code. The stack appears to contain a struct from elsewhere in memory instead of ... a stack.

(The first one was probably just that the array was too big for sensible stack management/growth, and a friendlier compiler would have told me. The second was to do with the size of memory pages in the flash - there was a delay if you ran over a boundary. The third was, IIRC, the variable containing the base of a temporary stack occasionally getting splattered to point into other data structures! The actual stack was fine, but you couldn't see it any more.)


The 2nd may have been a race. That would be the first thing I'd look for, anyway.


That one confounded us for a while. There was no true parallelism in the system, but the code that needed NOP padding was the interrupt handler.

That it needed padding wasn't unreasonable - we had tight timing constraints. It would have absolutely made sense for it to race with another part of the system, but that didn't seem to be the nature of the interaction; the interrupt handler could always run when it wanted to.

The eventual deduction: the flash memory had 256-byte pages at the hardware level. I inferred that there was some initial access cost to open a page (and probably cache it into SRAM or something).

The build was placing later functions in the file at later addresses, so what you put in other functions changed the alignment of the interrupt handler. If you requested 256-byte alignment for that function, you still needed NOP padding, but it was completely consistent.


This is usually the result of undefined behavior somewhere else. Undefined behavior is so insidious because it creates exactly these inexplicable situations. The compiler was probably applying some reordering or optimization that is proven safe for code with defined behavior but, when run on UB code, causes the program to segfault on changes that would normally be no-ops.


It sounds like an array bounds problem:

  int a[50];
  int unused;
  a[50] = 55;  // UB: one past the end

The compiler probably placed `unused` right after a[49], so the stray write landed in it and masked the crash.


Deleting a dead function won me a segfault in the last year or so. That turned out to be an oddity related to shared libraries.



