
Complicated parallelization? That’s what partitions and consumers/consumer-groups are for!

Of course they are, but I’m not controlling the producer.

Producer doesn’t care how many partitions there are, it doesn’t even know about them, unless it wants to use its own partitioning algorithm. You can change the number of partitions on the topic after the fact.
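
For instance, with the stock Kafka CLI (a sketch; exact flags vary by Kafka version, and the topic name is a placeholder):

    kafka-topics.sh --bootstrap-server localhost:9092 \
        --alter --topic my-topic --partitions 12

One caveat: adding partitions changes which partition a given key hashes to, which matters if consumers depend on per-key ordering.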

In this case it would need to use its own partitioning algorithm because of some specific ordering guarantees we care about.

Then rewrite them to another topic. Never mind, complex multithreading sounds like the better solution.

There’s more to it than that. We don’t care about total order even within partitions. Every so often we get a message that must not be sent downstream until some subset of messages have been sent.

So most of the time we’re fine sending 100-200 parallel message batches, but sometimes we need to stop and wait for some batches to complete before sending any more.

We also want to control how hard we hammer specific resources downstream, which don’t correlate with the partitions we’d need. Additionally we want to scale up and scale down the parallelism per each of the previously mentioned resources depending on how fast they are coming in to maximize batch size (while keeping latency low).

There’s of course ways to do this with multiple partitions by having the consumers communicate with each other. But now we have added an additional consumer and topic to the pipeline, and an inter-consumer control system.

It was overall easier to have one consumer read from the existing topic and spawn goroutines, so that we can have more dynamic control, the ability to scale up and down immediately without worrying about rebalancing, and easy communication between threads.
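
A rough sketch of that consumer shape in Go, with the Kafka client abstracted to a channel. The Msg fields are invented, and the barrier here waits for everything in flight, a simplification of the "some subset of messages" rule described above:

    package main

    import (
        "fmt"
        "sync"
    )

    type Msg struct {
        Resource string // which downstream resource this batch targets
        Barrier  bool   // must not go out until prior sends complete
        Payload  string
    }

    func consume(msgs <-chan Msg) {
        sems := map[string]chan struct{}{} // per-resource parallelism caps
        var inflight sync.WaitGroup

        for m := range msgs {
            if m.Barrier {
                inflight.Wait() // stop and drain before sending any more
            }
            sem, ok := sems[m.Resource]
            if !ok {
                sem = make(chan struct{}, 8) // tune per resource, can be dynamic
                sems[m.Resource] = sem
            }
            sem <- struct{}{} // acquire a slot for this resource
            inflight.Add(1)
            go func(m Msg) {
                defer func() { <-sem; inflight.Done() }()
                fmt.Println("send", m.Payload) // stand-in for the downstream call
            }(m)
        }
        inflight.Wait()
    }

    func main() {
        msgs := make(chan Msg, 16)
        go func() {
            msgs <- Msg{Resource: "a", Payload: "1"}
            msgs <- Msg{Resource: "a", Payload: "2"}
            msgs <- Msg{Resource: "b", Barrier: true, Payload: "3"}
            close(msgs)
        }()
        consume(msgs)
    }

Since the semaphores and WaitGroup live in one process, scaling a resource's parallelism up or down is just resizing a channel, with no rebalancing involved.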


IME, it has always turned out to be the correct decision to eliminate any n^2 operation in anything I’ve written.

I don’t write exotic algorithms, but it’s always astounding how small n needs to be to become observably problematic.


Bruce Dawson says: I like to call this Dawson’s first law of computing: O(n^2) is the sweet spot of badly scaling algorithms: fast enough to make it into production, but slow enough to make things fall down once it gets there.

https://bsky.app/profile/randomascii.bsky.social/post/3lk4c6...


The second law is that O(n * log n) is for practical intents and purposes O(n).

Skiena has a great table in his algorithms book mapping time complexity to hypothetical times for different input sizes.

For n of 10^9, where lg n takes 0.03 us and n takes 1 s, n lg n takes 29.9 s and n^2 takes 31.7 years.
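
Spelling out that arithmetic, at one operation per nanosecond:

    lg n   = lg(10^9) ~ 29.9    ->  29.9 ns      ~ 0.03 us
    n      = 10^9 ops           ->  10^9 ns      ~ 1 s
    n lg n = 10^9 * 29.9 ops    ->  ~3*10^10 ns  ~ 29.9 s
    n^2    = 10^18 ops          ->  10^18 ns     ~ 10^9 s ~ 31.7 years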


more from table please?

I would rather have the table and related content. Name of the book?

It's probably The Algorithm Design Manual 2ed by Steven S. Skiena, figure 2.4

The second table on this [1] page is pretty similar, though not the same.

[1] https://a1120.cs.aalto.fi/notes/round-efficiency--bigoh.html


To be clear though, that isn't his second law, at least as of two months ago, according to https://bsky.app/profile/randomascii.bsky.social/post/3lk4c6...

Fair, but `n log n` definitely is the historical "good enough to actually sleep at night" in my head, every time I see it I think of the prof who taught my first CSC course and our data structures course due to how often it came up.

Also, the wise statement that 'memory is fairly cheap compared to CPU for scaling'. It's insane to see how often folks would rather manually open and scan a 'static-on-deploy' 20-100MB JSON file for each request vs just parsing it into structures in memory (where, in most cases, the in-memory usage is a fraction of the JSON itself) and caching the parsed structure for the lifetime of the application.
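
A minimal sketch of the parse-once pattern in Go (the file name and struct are invented for illustration):

    package main

    import (
        "encoding/json"
        "os"
        "sync"
    )

    type Catalog struct {
        Items map[string]string `json:"items"`
    }

    var (
        catalog     *Catalog
        catalogOnce sync.Once
    )

    // getCatalog parses the static file once and serves the cached structure
    // to every request afterward, instead of rescanning 20-100MB per request.
    func getCatalog() *Catalog {
        catalogOnce.Do(func() {
            data, err := os.ReadFile("catalog.json") // static on deploy
            if err != nil {
                panic(err)
            }
            c := &Catalog{}
            if err := json.Unmarshal(data, c); err != nil {
                panic(err)
            }
            catalog = c
        })
        return catalog
    }

    func main() {
        _ = getCatalog() // every later call is a pointer read
    }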


Not often, but occasionally I will choose the n log n algorithm which obviously has no bugs over the O(n) algorithm with no obvious bugs.

Less brittleness is worth paying a few percent. Especially if it unmuddies the waters enough for someone to spot other accidental (time) complexity.


Considerably more than a few percent, IMHO. :)

But I also don't dabble in this area nearly enough to know whether there's years of tears and toil finding out repeatedly that O(n) is ~impossible to implement and verify :)

  | n   | n log n  |
  | 5   | 8.0472   |
  | 10  | 23.0259  |
  | 25  | 80.4719  |
  | 50  | 195.6012 |
  | 100 | 460.5170 |

Depends on the constants and on the value of n. If the constant factor for the O(n) algorithm is five times that of the O(n log n) algorithm, the O(n log n) algorithm is faster until ln n exceeds 5, i.e. for n up to roughly 150.

If you expect that n < 100 will always hold, it may be better to implement only the O(n log n) algorithm and add a logging warning if n > 250 or so (and, maybe, a fatal error if n > 1000 or so), instead of spending time writing both versions of the algorithm and finding the cut-off value for choosing between the two.
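
A sketch of that guard in Go, with illustrative thresholds and a sort-based dedupe standing in for the O(n log n) algorithm:

    package main

    import (
        "fmt"
        "log"
        "sort"
    )

    // dedupe uses the simple O(n log n) version, and warns (then refuses)
    // if n ever grows past what we expected in production.
    func dedupe(xs []int) ([]int, error) {
        n := len(xs)
        if n > 1000 {
            return nil, fmt.Errorf("dedupe: n=%d exceeds design limit", n)
        }
        if n > 250 {
            log.Printf("dedupe: n=%d is larger than expected; revisit algorithm choice", n)
        }
        sort.Ints(xs)
        out := xs[:0]
        for i, x := range xs {
            if i == 0 || x != xs[i-1] {
                out = append(out, x)
            }
        }
        return out, nil
    }

    func main() {
        fmt.Println(dedupe([]int{3, 1, 3, 2, 1})) // [1 2 3] <nil>
    }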


Fatal errors tend to blow up in production rather than test.

One of the simplest solutions for detecting cyclic graphs is instead of collecting a lookup table or doing something non-concurrent like marking the nodes, is to count nodes and panic if the encountered set is more than an order of magnitude more than you expected.
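
A minimal sketch of that tripwire in Go (shape and names are invented, not the project's actual code):

    package main

    type Node struct {
        Children []*Node
    }

    // walk panics if it visits an order of magnitude more nodes than expected:
    // a cheap cycle check that needs no lookup table and no node marking.
    func walk(root *Node, expected int) {
        limit := expected * 10
        seen := 0
        var visit func(n *Node)
        visit = func(n *Node) {
            seen++
            if seen > limit {
                panic("graph larger than expected: possible cycle")
            }
            for _, c := range n.Children {
                visit(c)
            }
        }
        visit(root)
    }

    func main() {
        a := &Node{}
        b := &Node{Children: []*Node{a}}
        a.Children = []*Node{b} // a cycle: the walk would never end on its own
        walk(a, 2)              // panics once seen > 20
    }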

I came onto a project that had done that before, and it blew up during my tenure. The worst case graph size was several times the expected case, and long-term customers were growing their data sets vertically rather than horizontally (e.g., ever notice how much friction there is to making new web pages versus cramming more data into the existing ones?), and now instead of 10x never happening, it was happening every Tuesday.

I was watching the same thing play out on another project recently but it got cancelled before we hit that threshold for anything other than incorrect queries.


Just wanted to say you're one of my favorite posters. Can't put an exact reason on why, but at some point over the last 15 years I learned to recognize your name simply from consistent high quality contributions. Cheers.

This is magical thinking about how C, memory hierarchies, networking, and system calls work.

It's also often in the range where constant factors can make a big difference over a wide range of n


Yes, that isn't actually Dawson's second law.

But sometimes a big enough C can flip which solution helps you hit your margins.

In my mind, that's always been the point in dropping log factors. The algorithms are comparable enough that the actual implementation starts to matter, which is all we're really looking for in a Big-O analysis.

I made the “mistake” in an interview of equating two super-quadratic solutions. What I meant was what Dawson meant. It doesn’t matter because they’re both too ridiculous to even discuss.

They’re too ridiculous… unless a more optimal solution does not exist

Absolutely not.

If the cost of doing something goes above quadratic, you shouldn't do it at all. Because essentially every customer interaction costs you more than the one before. You will never be able to come up with ways to cover that cost faster than it ramps. You are digging a hole, filling it with cash and lighting it on fire.

If you can't do something well you should consider not doing it at all. If you can only do it badly with no hope of ever correcting it, you should outsource it.


Chess engines faced worse than quadratic scaling and came out the other side…

Software operates in a crazy number of different domains with wildly different constraints.


I believe hinkley was commenting on things that are quadratic in the number of users. It doesn't sound like a chess engine would have that property.

They did make it sound like almost anything would necessarily have n scale with new users. That assumption is already questionable.

There's a bit of a "What Computational Complexity Taught Me About B2B SaaS" bias going.


All of modern Neural Network AI is based on GEMM which are O(n^2) algorithms. There are sub-cubic alternatives, but it's my understanding that the cache behavior of those variants mean they aren't practically faster when memory bound.

n is only rarely related to "customers". As long as n doesn't grow, the asymptotic complexity doesn't actually matter.


The GEMM is O(n^3) actually. Transformers are quadratic in the size of their context window.

I read that as a typo given the next sentence.

I’m on the fence about cubic time. I was mostly thinking of exponential and factorial problems. I think some very clever people can make cubic work despite my warnings. But most of us shouldn’t. General advice is to be ignored by masters when appropriate. That’s also the story arc of about half of kung fu movies.

Did chess solvers really progress much before there was a cubic approximation?


> I read that as a typo given the next sentence.

Thank you for the courtesy.

> I think some very clever people can make cubic work despite my warnings.

I think you're selling yourself short. You don't need to be that clever to make these algorithms work, you have all the tools necessary. Asymptotic analysis is helpful not just because it tells us a growth, but also because it limits that growth to being in _n_. If you're doing matmul and n is proportional to the size of the input matrix, then you know that if your matrix is constant then the matmul will always take the same time. It does not matter to you what the asymptotic complexity is, because you have a fixed n. In your program, it's O(1). As long as the runtime is sufficient, you know it will never change for the lifetime of the program.

There's absolutely no reason to be scared of that kind of work, it's not hard.


Right but back up at the top of the chain the assertion was that if n grows as your company does then IME you’re default dead. Because when the VC money runs out you can’t charge your customers enough to keep the lights on and also keep the customers.

That only matters when the constants are nontrivial and N has a potential to get big.

Not every app is a B2C product intending to grow to billions of users. If the costs start out as near-zero and are going to grow to still be negligible at 100% market share, who cares that it's _technically_ suboptimal? Sure, you could spend expensive developer-hours trying to find a better way of doing it, but YAGNI.


I just exited a B2B that discovered they had invested in luxury features while the market tightened its belt and went with cheaper, simpler competitors. Their n wasn’t really that high, but they sure tried their damnedest to make it cubic complexity. “Power” and “flexibility” outnumbered “straightforward” and even “robust” by at least three to one in conversations. A lot of my favorite people saw there was no winning that conversation and noped out long before I did.

The devs voted with their feet and the customers with their wallets.


There are too many obvious exceptions to even start taking this seriously. If we all followed this advice, we would never even multiply matrices.

> no hope of ever correcting it

That's a pretty bold assumption.

Almost every startup that has succeeded was utterly unscalable at first in tons of technical and business ways. Then they fixed it as they scaled. Over-optimizing early has probably killed far more projects and companies than the opposite.


> That’s a pretty bold assumption.

That’s not a bold assumption; it’s the predicate for this entire sidebar. The commenter at the top said some things can’t be done in quadratic time and have to be done anyway, and I took exception.

>> unless a more optimal solution does not exist

Dropping into the middle of a conversation and ignoring the context so you can treat the participants like they are confused or stupid is very bad manners. I’m not grumpy at you I’m grumpy that this is the eleventeenth time this has happened.

> Almost every startup

Almost every startup fails. Do you model your behavior on people who fail >90% of the time? Maybe you, and perhaps by extension we, need to reflect on that.

> Then we fixed it as we scaled

Yes, because you picked a problem that can be architected to run in reasonable time. You elected to do it later. You trusted that you could delay it and turned out to be right.

>> unless a more optimal solution does not exist

When the devs discover the entire premise is unsustainable or nobody knows how to make it sustainable after banging their heads against it, they quickly find someplace else to be and everyone wonders what went wrong. There was a table of ex employees who knew exactly what went wrong but it was impolitic to say. Don’t want the VCs to wake up.


Not all n's grow unbounded with the number of customers. If anything, having a reasonable upper bound for how high a n you have to support is the more common case - and you're going to need that with O(n) as well.

I feel this is too hardline and e.g. eliminates the useful things people do with SAT solvers.

The first SAT solver case that comes to mind is circuit layout, and then you have a k vs n problem. Because you don’t SAT solve per chip, you SAT solve per model and then amortize that cost across the first couple years’ sales. And they’re also “cheating” by copy pasting cores, which means the SAT problem is growing much more slowly than the number of gates per chip. Probably more like n^1/2 these days.

If SAT solvers suddenly got inordinately more expensive you’d use a human because they used to do this but the solver was better/cheaper.

Edit: checking my math, looks like in a 15 year period from around 2005 to 2020, AMD increased the number of cores by about 30x and the transistors per core by about 10x.


That's quite a contortion to avoid losing the argument!

"Oh well my algorithm isn't really O(N^2) because I'm going to print N copies of the answer!"

Absurd!


What I’m saying is that the gate count problem that is profitable is in m³ not n³. And as long as m < n^2/3 then you are n² despite applying a cubic time solution to m.
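
Checking that arithmetic:

    m <= n^(2/3)  =>  m^3 <= (n^(2/3))^3 = n^2

so a cubic-time solve in the SAT problem size m stays within quadratic in the gate count n.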

I would argue that this is essentially part of why Intel is flagging now. They had a model of ever increasing design costs that was offset by a steady inflation of sales quarter after quarter offsetting those costs. They introduced the “tick tock” model of biting off a major design every second cycle and small refinements in between, to keep the slope of the cost line below the slope of the sales line. Then they stumbled on that and now it’s tick tick tock and clearly TSM, AMD and possibly Apple (with TSM’s help) can now produce a better product for a lower cost per gate.

Doesn’t TSM’s library of existing circuit layouts constitute a substantial decrease in the complexity of laying out an entire chip? As the library grows you introduce more precalculated components that are dropped in, bringing the slope of the line down.

Meanwhile NVIDIA has an even better model where they spam gpu units like mad. What’s the doubling interval for gpu units?


Gaussian elimination (for square matrices) is O(n^3) arithmetic operations and it's one of the most important algorithms in any scientific domain.

I’ll allow that perhaps I should have said “cubic” instead of “quadratic” - there are much worse orders in the menagerie than n^3. But it’s a constraint we bang into over and over again. We use these systems because they’re cheaper than humans, yes? People are still trying to shave off hundredths of the exponent in matrix multiplication for instance. It makes the front page of HN every time someone makes a “breakthrough”.

So, how would you write a solver for tower of Hanoi then? Are you saying you wouldn't?

As a business? Would you try to sell a product that behaved like tower of Hanoi or walk away?

Good call. O(N^2) is the worst time complexity because it's fast enough to be instantaneous in all your testing, but slow enough to explode in prod.

I've seen it several times before, and it's exactly what happened here.


We just had this exact problem. Tests ran great, production slowed to a crawl.

I was just helping out with the network at an event. Worked great in testing, but it failed in production due to unicast flooding the network core. Turns out that some of the PoE Ethernet switches had an insufficiently sized CAM for the deployment combined with STP topology changes reducing the effective size of the CAM by a factor of 10 on the larger switches. Gotta love when packet forwarding goes from O(1) to O(n) and O(n^2)! Debugging that in production is non-trivial as the needle is in such a large haystack of packets so as to be nearly impossible to find in the output of tcpdump and wireshark. The horror... The horror...

First big project I worked on a couple of us sped up the db initialization scripts so we could use a less trivial set of test data to stop this sort of shenanigans.

Things like inserting the test data first and turning on constraints and possibly indexes afterward.


Modern computers are pretty great at scanning small blocks of memory repeatedly, so n^2 can be faster than the alternative using a map for small n.

I spent a lot of time fixing n^2 in blink, but there were some fun surprises:

https://source.chromium.org/chromium/chromium/src/+/main:thi...

For large N without a cache, :nth-child matching would be very slow, doing n^2 scans of the siblings to compute the index. On the other hand, for small sibling counts it turned out the cache overhead was noticeably worse than just doing the n^2 scan. (See the end of the linked comment.)

This turns out to be true in a lot of surprising places, both where linear search beats constant time maps, and where n^2 is better than fancy algorithms to compensate.

Memory latency and instruction timing is the gotcha of many fun algorithms in the real world.


This is true. But unless you are sure that current and future inputs will always be small I find it is better to start with the algorithm that scales better. Then you can add a special case for small sizes if it turns up in a hot path.

This is because performance is typically less important for the fast/small case, and it is generally acceptable for processing twice as much to be twice (or slightly more than twice) as slow; but users are far more likely to hit, and be really burned by, n^2 algorithms in things you thought would almost always be small, where you never tested large enough sizes to notice.

I wrote more on this topic here https://kevincox.ca/2023/05/09/less-than-quadratic/
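
A sketch of that shape in Go; the cut-off of 16 is invented, and is exactly the kind of number you would only add after profiling a hot path:

    package main

    import "fmt"

    // lookupAll starts from the structure that scales (a set), with a linear
    // special case for tiny inputs, where scanning beats hashing.
    func lookupAll(xs []string, targets []string) []bool {
        out := make([]bool, len(targets))
        if len(xs) <= 16 {
            for i, t := range targets {
                for _, x := range xs {
                    if x == t {
                        out[i] = true
                        break
                    }
                }
            }
            return out
        }
        set := make(map[string]struct{}, len(xs))
        for _, x := range xs {
            set[x] = struct{}{}
        }
        for i, t := range targets {
            _, out[i] = set[t]
        }
        return out
    }

    func main() {
        fmt.Println(lookupAll([]string{"a", "b"}, []string{"b", "z"})) // [true false]
    }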


A lot of computations are really higher-complexity-order functions with stair steps at certain intervals, based on hardware trying to pretend they are constant time: all operations cost the same amount until n doubles again, and then it’s slower. If you zoom out toward infinity, the stair steps smooth out into a curve. Even dividing two numbers or doing a memory address lookup stops being constant: the former becomes logarithmic and the latter square root, which is part of why prime factoring worked for RSA for so long.

If anyone had made clockless logic work you would see that adding 1 + 1 is in fact faster than adding 2^63 + 1.

If you put enough data into a hash table, the key length has to increase logarithmically with the table size in order to have distinct keys per record. Even Knuth points out that hash tables are really n log n - something I’m pretty sure my CS professors left out. In multiple classes. Man, did I get tired of hash tables, but I see now why they harped on them. Case in point, this article.
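
The argument, roughly: to give each of n records a distinct key you need keys at least log2(n) bits long, and hashing a key takes time linear in its length, so

    n operations * O(log n) per hash = O(n log n) total

even though each individual lookup is "O(1) hashes".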


    > where linear search beats constant time maps
Can you give an example? You said lots of good things in your post, but I'm struggling to believe this one. Also, it would help to see some wall clock times or real world impact.

Pick any compiled language and test it. Pick an algorithm making heavy use of a small (<10, maybe up to a hundred elements) hashset, and try using a linear structure instead. The difference will be most apparent with complicated keys, but even strings of more than a few characters should work.

Some example workloads include:

1. Tokenization (checking if a word is a keyword)

2. Interpretation (mapping an instruction name to its action)

3. ML (encoding low-cardinality string features in something like catboost)

4. JSON parsers (usually key count is low, so parse into a linear-scan hashmap rather than a general-purpose hashmap)

Details vary in the exact workload, the hardware you're using, what other instructions you're mixing in, etc. It's a well-known phenomenon though, and when you're doing a microoptimization pass it's definitely something to consider. 2x speedups are common. 10x or more happen from time to time.

It's similar to (but not _quite_ the same as) the reason real-world binary search uses linear scans for small element counts.

When you go to really optimize the system, you'll also find that the linear scan solution is often more amenable to performance improvements from batching.

As to how much it matters for your composite program? Even at a microoptimization level I think it's much more important to pay attention to memory access patterns. When we wrote our protobuf parser that's all we really had to care about to improve performance (33% less execution time for the entire workload, proto parsing being much better than that). You're much more likely to be able to write sane code that way (contrasted with focusing on instructions and whatnot first), and it's easier to layer CPU improvements on top of a good memory access pattern than to go the other way around.
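
If you want to see it on your own hardware, a minimal Go benchmark sketch (the keyword list is invented; run with `go test -bench .`):

    package main

    import "testing"

    var keywords = []string{"if", "else", "for", "func", "return", "var", "const", "type"}

    var keywordSet = func() map[string]bool {
        m := map[string]bool{}
        for _, k := range keywords {
            m[k] = true
        }
        return m
    }()

    var sink bool // prevents the compiler from discarding the lookups

    func isKeywordLinear(s string) bool {
        for _, k := range keywords {
            if k == s {
                return true
            }
        }
        return false
    }

    func BenchmarkLinear(b *testing.B) {
        for i := 0; i < b.N; i++ {
            sink = isKeywordLinear("return")
        }
    }

    func BenchmarkMap(b *testing.B) {
        for i := 0; i < b.N; i++ {
            sink = keywordSet["return"]
        }
    }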


You've got to keep in mind that computers aren't the 1-instruction-at-a-time purely sequential machines anymore.

Let's say you've got a linear array of bytes, and you want to see if it contains a specific value. What would a modern CPU need to execute? Well, we can actually compare 64 values at a time with _mm512_mask_cmp_epu8_mask! You still need a little bit of setup and a final "did any of them match" check, of course. Want to compare 512 values? You can probably do that in less than 10 clock cycles on a modern machine.

Doing the same with a hash set? Better make sure that hash algorithm is fast. Sure it's O(1), but if calculating the hash takes 20 cycles it doesn't matter.


This is a good point.

A string search algorithm that uses SIMD to quickly discard a majority of 16, 32 or 64 attempts in parallel, and then verify the surviving ones quadratically (again 16, 32 or 64 bytes at a time), can go a very long way against a sublinear algorithm that understands needle structure but necessarily needs to process the haystack one byte at a time.


My rule of thumb for 80%-90% of problems is: if you need a complicated algorithm, it means your data model isn't right. Sure, you do need complicated algorithms for compilers, db internals, route planning et al., but all things considered, those are a minority of the use cases.

This is not a complicated algorithm. A hash map (dictionary) or a hash set is how you would always do deduplication in Python, because it is easiest to write / least keystrokes anyway. That is not the case in C though, as it is much easier to use arrays and nested loops instead of hash maps.

    > That is not the case in C though, as it is much easier to use arrays and nested loops instead of hash maps.
I am confused. There are plenty of open source, fast hash map impls in C.

Yes, the problem is getting them into your project.

That's only a problem if you have never done any C development.

This isn't knapsack. This is a dict lookup.

I wrote a funny algorithm to group together words that end the same way, so I could write them once in my binary wordlist file, since there is an array that points to the start character and a \0 to end the word. My initial solution was O(n²) but it was too slow on a real wordlist, so I had to come up with something better. In the end I just sort the list with quicksort, but on the reversed words, and then the groupable ones end up next to each other.
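
Something like this, sketched in Go (sort.Slice standing in for the hand-rolled quicksort):

    package main

    import (
        "fmt"
        "sort"
    )

    func reverse(s string) string {
        r := []rune(s)
        for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
            r[i], r[j] = r[j], r[i]
        }
        return string(r)
    }

    func main() {
        words := []string{"walking", "cat", "talking", "hat"}
        // Sort by reversed spelling so words sharing a suffix become neighbors,
        // replacing the O(n^2) all-pairs comparison with one O(n log n) sort.
        sort.Slice(words, func(i, j int) bool {
            return reverse(words[i]) < reverse(words[j])
        })
        fmt.Println(words) // [talking walking cat hat]
    }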

I'd say the exception is when `n` is under about 10, and is counting some sort of hardware-constrained thing (e.g. some operation over all CAN interfaces present on an OBDII connector can be O(n^2), since n will always be between 1 and 4). If you wouldn't have to physically replace hardware for `n` to increase, you really need to avoid n^2 operations. And even then consider them carefully, perhaps explicitly failing if `n` gets too big to allow for noticing rework is needed before new hardware hits the field.

> perhaps explicitly failing if `n` gets too big

That's the problem. A lot of these quadratic time algorithms don't set limits.

Even 'n!' is fine for small 'n'. Real production use cases don't have small 'n'.


> Real production use cases don't have small 'n'.

Real production use cases absolutely have small n. You don't hear about them, because it's very hard for them to cause issues. Unless the use case changes and now the n is not small anymore and nobody noticed the trap.


Or, phrased differently, if n has an upper limit, the algorithm is O(1).

As long as you have tests that regularly exercise your algorithm at n=max where you would notice if they were exceptionally slow

I have an app that's been running an O(n^2) algorithm in "production" (a free open source app used by various communities) for about half a year now.

It's been fine because "n" is "number of aircraft flying in this flight simulator" - and the simulator's engine starts to fail above around 2000 anyway. So even in the worst case it's still going to run within milliseconds.


For those of us without a formal comp sci background, is there a way for the IDE to detect and warn about these automatically? Or any other easy telltale signs to look for?

As a self taught dev, when I encounter nested loops, I have to mentally go through them and try to see if they iterate through each item more than once. But that's not a very foolproof method.


Too much domain knowledge for an IDE to catch. I'm self taught as well, and it comes down to spending more time thinking about the code than writing the code.

It's a fairly simple thought experiment to ask yourself: what if there were 10x the items in this array? 100x? That is essentially what the O(n) notation is trying to quantify. You just don't need to do it that formally.
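
One informal check that needs no formal background: time the code at n and at 10n and look at the ratio. A rough sketch:

    package main

    import (
        "fmt"
        "time"
    )

    // work is whatever you are suspicious of; this one is deliberately O(n^2).
    func work(n int) int {
        total := 0
        for i := 0; i < n; i++ {
            for j := 0; j < n; j++ {
                total += i ^ j
            }
        }
        return total
    }

    func timeIt(n int) time.Duration {
        start := time.Now()
        work(n)
        return time.Since(start)
    }

    func main() {
        small, big := timeIt(1000), timeIt(10000)
        // ~10x slower suggests O(n); ~100x suggests O(n^2).
        fmt.Printf("10x the input took %.0fx the time\n", float64(big)/float64(small))
    }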


I have an n^3 operation that's currently a huge bottleneck at only 10k elements. Not sure how to fix it.

I once looked into tree diffing algorithms (the literature is all about diffing XML even though it's really trees). The obvious dumb algorithm (which seems to be what everyone uses) is n^4.

2 dots at 5 possibilities each gives 25 (5^2)

2 dots at 2 possibilities each gives 4 (2^2)

They only diverge from there. Or am I doing my math wrong?


Information is ~log(possible states) according to Shannon.

Log(25)/log(4) is 2.3. Among other things this definition has the nice property that two disks/pages/drives/bits together contain the sum of the capacities instead of their product.
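
In those units:

    2 dots, 5 states each: log2(5^2) = log2(25) ~ 4.64 bits
    2 dots, 2 states each: log2(2^2) = log2(4)  = 2 bits
    ratio: 4.64 / 2 ~ 2.3

so the five-state dots carry about 2.3x the information, not 6x.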


To expand if it helps intuition: as data density grows, the largest representable number grows exponentially.

In what way does/would/could it “fuck up your taxes?”


It's so fantastic that it can't even open Outlook .msg files. It boggles the mind


What makes a computer blabbing all the time different from an ATC blabbing all the time? Just because a computer can speak some things doesn't mean it has to speak all the things.


It doesn't know what's important and what not at a moment's notice.


FYI, tungsten obviously, not titanium.


Oh! Yes. Remarkably dense is definitely not titanium.


Within reason, which is why soldiers train with blank-firing adapters and blanks, and not live ordnance when simulating combat.

Turning ADS-B on/off likely has zero effect on the training/fighting relationship.


The article says the reason is a bit different - that the route they were practicing is (in theory) sensitive information.

> But the Black Hawk did not operate with the technology because of the confidentiality of the mission for which the crew was practicing. That is because ADS-B Out positions can be obtained by anyone with an internet connection, making the system a potential risk to national security.

Seems like leaving it in listen-only mode would be wise, though.


The route is a public/known helicopter flight path. There's nothing secret about it.

Here's a map of the helicopter routes in the area. In this case, they were flying on route 4... https://www.loc.gov/resource/g3851p.ct004873/?r=0.67,0.258,0...

Yes, this group transports VIPs and sometimes does so in secret. This training flight was a "simple" check-ride for the pilot (simple in scare quotes because part of the ride was using the NVGs, which strikes me as fairly ridiculous in the DCA air space).


The route itself, sure.

When this specific helicopter/mission joins the route, how fast it goes, what callsign it uses, when it leaves the route, etc. may not be so public. Or at least be treated as "try not to make it unnecessarily public".

Overclassification is absolutely a thing, too. I recall when the Snowden NSA leaks came out, government employees were still forbidden from reading the documents, even if they were published in the newspapers. Pointless? Yes. But those were the rules.


> Overclassification is absolutely a thing, too. I recall when the Snowden NSA leaks came out, government employees were still forbidden from reading the documents, even if they were published in the newspapers. Pointless? Yes. But those were the rules.

Not just government employees. I was at a defense contractor at the time, and we were also instructed to not read any of the documents online, even for people who were technically cleared to read them through proper channels.

Edit: misremembering, wasn't the Snowden leaks, it was some earlier set of leaks on WikiLeaks


Surely either you are training, or you are on a mission, but in that case you should be a competent pilot.

training on a confidential mission is just inviting disaster


Training for a mission tends to mean pretending it's the real mission, as closely as possible. People fire off $100k missiles (https://www.youtube.com/watch?v=wrhybKEzb-0) so they know what it'll be like to do it in combat for real.

Competent people still make mistakes. I wouldn't want to be anywhere near DCA airspace, personally.


The ADS-B is a simple switch. All it does is broadcast the position. It would have had zero impact on operational readiness. It’s not like they were actually flying “dark” - lights were on, they were in contact with ATC, etc.


Listen-only mode would be ADS-B In. Black Hawks support ADS-B Out.

1. C-17 Globemaster III (transport)

2. C-130 Hercules (transport)

3. KC-135 Stratotanker (tanker)

4. KC-10 Extender (tanker)

5. P-8 Poseidon (maritime patrol/reconnaissance)

6. E-3 Sentry (AWACS)

7. E-8 Joint STARS (reconnaissance)

^ above have ADS-B In capability

This answer on Aviation Stack Exchange did some research into ADS-B statistics for military aircraft: https://aviation.stackexchange.com/questions/107851/military...

TCAS (collision avoidance) can use Mode A/C/S however it depends on if the aircraft has the earlier or later model TCAS: https://aviation.stackexchange.com/questions/90356/does-tcas...


They'll all have both in and out capability. (It's typically the same device.)

Military aircraft have permission from the FAA to turn off one, or both, for fairly obvious reasons. https://nbaa.org/aircraft-operations/communications-navigati...


I don't think the Black Hawk can support ADS-B In; usually it's the surveillance-type aircraft that carry it. I updated my post above. There is limited cockpit space in Black Hawks anyway. There might be a specific modernization occurring for a variant of the UH-60 that has ADS-B In, but the vast majority do not.


Every aircraft in controlled airspace is required to have ADS-B transponders, and any aircraft with Out has In as well (In is the easy one; it just listens; you can even build your own with a Raspberry Pi - https://www.flightaware.com/adsb/piaware/build/ and a $36 receiver https://flightaware.store/products/pro-stick). You can buy a portable ADS-B In receiver the size of a wallet for $400 and get traffic alerts on an iPad. https://flywithsentry.com/buy

My dad's little four seat hobby plane has both In/Out. You can track him on FlightAware as a result, because it's continually broadcasting its location; it's certainly not rare or sophisticated equipment.

Here's a military Blackhawk toodling around as we speak: https://globe.adsbexchange.com/?icao=ae27fc

https://www.avweb.com/aviation-news/blackhawk-ads-b-was-off-...

> The Army Black Hawk helicopter crew involved in the midair collision with an American Eagle CRJ700 last January at Reagan National Airport had turned off ADS-B because they were practicing a classified flight profile, according to a New York Times investigation.


We are both in agreement that ADS-B OUT is required. But, I am referring to ADS-B IN which most military aircraft do not have as a matter of practice. If ADS-B IN was running in addition to ADS-B OUT on both aircraft then it might have provided additional situational awareness assuming the Black Hawk pilot was operating the helicopter properly. The original comment was about putting the receiver in listening mode and that's simply not possible with the Black Hawk.

I have been running an ADS-B receiver at home for 6 years via PiAware along with an AIS receiver. So yes, low cost :)


It's really not that rare. Especially with stuff like ForeFlight.

https://download.aopa.org/advocacy/2019/dhowell_jking_DASC20...

> A majority of respondents had used ADS-B In, with 56% of respondents reported having experience with either an installed or portable system. Of the group who had experience with ADS-B In, 85% used portable systems and 30% used installed systems.

And that's in 2019.


No disagreement. ADS-B In is just a feature that most military aircraft, including UH-60s (Black Hawks), do not have yet.


Sounds like they should sort it out before placing civilians in danger


In retrospect, it was a bad plan to let a young Captain who mostly served as a liaison in DC, and was not primarily a helicopter pilot, train on that route. A simpler one that she could progressively train up to would have been wiser. She also should have listened to her more seasoned Warrant Officer copilot. ADS-B In wouldn’t have addressed any of those problems.


The route they were training for was to evac government personnel during an emergency (terrorism, incoming attack, etc.). ADS-B is live location whereas the transponder is delayed. In a real scenario, you wouldn’t want to be transmitting live location, since whatever the emergency is likely involves targeting of VIP government personnel. But in training, that would not affect your training, since the ADS-B is for others’ benefit, and doesn’t change your situational awareness or capability.

edit: To add and make clear, the route will be known for a training or real situation, but it will be delayed. So for training, turning off the ADS-B does not protect the route information and that is why there is no reason to fly with it off for training.


If you train to turn the ADS-B on, there's a decent chance you'll turn it on during the real thing. That's the point of training.


You are insisting that this was a training thing. But realistically, military just doesn’t like to be tracked and would rather put everyone else at risk.


I’m not insisting - it’s stated in the article.


We shouldn’t take the article at face value. Reporters are lazy and don’t do in-depth reporting. It’s to drive clicks.


They were coming back from Langley. I'm told it was just to "refuel."


On one hand, I've got a reputable news organization publishing an article with specific information from experts, pilots, etc.

On the other hand, I've got an internet rando who once told me to Google up MGTOW saying "I'm told".

Which one would you find credible?


It wasn't coming back from Langley. That's misinformation from people who don't know the subtleties of what's displayed by flight tracking sites. For more info see https://x.com/aeroscouting/status/1884983390392488306


It’s similar with the TSA facial recognition photos. “We delete your photo immediately” but what they don’t say is that they don’t delete the biometrics from that photo.


It's a crime that we're compelled to concede our 4th Amendment rights in order to travel.


Literally not compelled in this case, the TSA signage says that the image capture is completely optional.

More generally, having your stuff screened for security to get on a commercial plane isn't a 4th amendment violation, the word "unreasonable" is right there in the amendment for a reason. You're in public in an enclosed flying object bringing your goods onto someone else's plane with 100+ strangers aboard, it is completely reasonable and necessary for the freedoms of everyone involved for the TSA to ensure that your stuff doesn't have dangerous objects aboard.

Don't forget that freedom also involves the freedom of other people to not be negatively impacted by you exercising your "freedom."


Image capture is optional, your other option is something possibly unpleasant and may make you miss your flight


That is not the other option at all. The other option is essentially just the traditional screening process.

> Standard ID credential verification is in place – Travelers who decide not to participate in the use of facial recognition technology will receive an alternative ID credential check by the TSO at the podium. The traveler will not experience any negative consequences for choosing not to participate. There is no issue and no delay with a traveler exercising their rights to not participate in the automated biometrics matching technology.

My goodness this thread is just the most annoying tinfoil hat thread I've seen all day. Y'all are spending too much time online.


> The other option is essentially just the traditional screening process.

I know that, and you know that, but you have to convince the average traveler that nothing bad will happen if they say no. In the mind of the average traveler, it’s safer to just say “okay” to whatever the TSA wants. There needs to be some kind of neutral ombudsman to placate travelers’ fears of reprisal for opting to preserve their rights.


No, the TSA actively threatens you with unspecified additional hassle/delay if you express a desire to opt out.

They are also running facial recognition on all of those round just-above-eye-level camera pods all up and down the concourse.


It's not; I flew every week for months, and across ALL airports, I got an indifferent "OK" from the TSA agent, and was waved along.


Depends on the type of travel right? I took Amtrak weekly for several years and never even had to show ID.


Did this change? Last time I tried to take them (ten+ years ago, because my license expired) they refused my ticket purchase because my id was expired.


Less than 6 months ago I was able to buy a ticket online and board without showing any ID and have done that for 10+ years with no problem.


I don’t remember ever having to show ID with Amtrak.


Same with drivers licenses and passports having a photo requirement too


The TSA photos are worse. They use a stereoscopic camera to take a 3d image of your head, which makes facial recognition up to 10x more accurate.

You can opt out, just say you do (and preferably cover the camera with your hat or bag)


>You can opt out, just say you do

And then be flagged and 10x more targeted because of that


Not how it works


Oh, sweet summer child


WiFi 7 Sensing is bringing similar functionality to consumer routers and many laptops, with the bonus of passing through walls.


>drivers licenses and passports having a photo requirement too

You're free to take the bus, or hire a chauffeur. A private pilot's license doesn't have any pictures either.


For better or worse, we didn’t have to make such hard choices for the first 80 years of aviation. And Greyhound etc require photo ID these days as well


A US pilot certificate itself does not include a photo, but you must have a photo ID to use it. https://www.ecfr.gov/current/title-14/part-61/section-61.3#p...


That’s not a freedom. That’s a restriction that reduces the choices you have, potentially to worse ones.


It literally says right on the facial recognition sign that you're free to opt out, just let the TSA employee know


The TSA is - objectively, by their own audits - complete security theater. Why bother to defend them, exactly?

Also, the spirit of the 4th Amendment is most certainly not "here, this is the easy way!" (yes, we are conducting mass surveillance but you can sort of opt out of one piece of it by going through a manual process over here that we will make you feel like you are burdening us by requesting)


correcting disinformation isn't defending something. do you want to live in a world where we dislike someone and so we just make up random terrible things about them that aren't true, and it's fine and encouraged because they're someone we dislike, and people aren't allowed to say "hey that's not actually true, at all"


but we do live in such a world tho?


Yup,people are really good about it in my experience too. I just stand off to the side of the camera, and say "no biometrics please". They take a minute to check my documents and it's done. Try it.

I trust the TSA agents brain to not get hacked in the next 24 hours, a database run by them, not so much.


The purpose is to gather biometric data on people that will be used for future surveillance in our incipient fascist state with the implicit statement that opting out is suspicious and will lead to greater scrutiny.


Amtrak and Greyhound do not require those biometrics, nor does renting a car and driving (or driving your own).


Some of us want to be able to cross the country in an afternoon, and not have to spend days on a slow, uncomfortable train to make the same trip. I don't think that's unreasonable.


Certainly not unreasonable. But it does require you to commission your own transport subject to the rules that that private entity seeks to impose. Public entities which indiscriminately service residents and visitors of a given territory would obviate this requirement. But if you're in the US, good luck convincing taxpayers to agree to pay for that.


> subject to the rules that that private entity seeks to impose.

It's not the private entity taking a 3D face scan, nor are they necessarily wanting for that scan to be taken. It's federal laws and regulations being done by federal agents in spaces controlled by the federal government.


TSA is not a government organization. Neither is Boeing nor any of the airline carriers.


TSA is absolutely a government organization, it's a part of the Department of Homeland Security. It was created by an act of Congress, the Aviation and Transportation Security Act. You might as well argue the IRS or FBI or the US Marshalls aren't a government organization. What about absolutely absurd thing to suggest.

> The Transportation Security Administration (TSA) is an agency of the United States Department of Homeland Security (DHS) that has authority over the security of transportation systems within and connecting to the United States.

https://en.m.wikipedia.org/wiki/Transportation_Security_Admi...


TSA is not a government organization in the same way the Pentagon is not :)


Private and charter aviation exists and is free from those constraints.


Some of us are not billionaires.


Freedom has never been free.


That’s not what that means


You don't have to be to fly charter or private.


Ok, fine, centimillionaire. Maybe even some decamillionaires. Happy?


You can also walk. Lovers of freedom can walk from Manhattan to LA in 40-50 days. Of course if you look “wrong”, you’ll probably get rounded up in some flyover town.


I wonder what the chances of surviving that trip are, based on pedestrian fatality rates on highways.


Depends on where you walk; the US is amazingly poorly suited to long walks outside of major cities. Sidewalks disappear first, then lighting; then one is liable to run into major stretches with no safe affordance for walking whatsoever, where one is either inches from cars or in a ditch.


Ollantaytambo. A beautiful ancient town

