FL410's comments

You're right, but "slightly older" is on the order of seconds.


But only for those areas with receiver coverage that feeds them.

But your own receiver will always cover your area.


In my experience, it's usually a lack of awareness of modern security risks and a lack of familiarity with modern infrastructure paradigms. The latter really isn't a problem since these systems are usually standalone, but the former does become a problem - they often date from a time when this just wasn't something to consider. As a result, these legacy systems are often using default passwords, have tons of crazy stuff exposed to the network, and are made up of custom code written specifically for the business purpose (so the documentation is only as good as whatever they wrote).

On the other hand, these guys generally write pretty neat, lean code that is quick, reliable, and directly responsive to the business. The really fun thing is watching the users fly through the keyboard-only screens, sometimes with muscle memory that is faster than the terminal emulator can update - they're literally working ahead of the screens.


Oh yes, I remember that from when we swapped out a bunch of terminals at an airline. The users complained it was all way too slow on the new Windows machines with MS SNA Server in between... I was wondering what it was all about, as a young and very naive dropout from uni on his first IT job. When I came down, this dude was banging on his keyboard; after some time he stopped, pointed at the screen, and you could see it slowly catching up, screen by screen. He showed me the directly connected version next. I learned something that day.


That's awesome. I set up Arch Linux a while ago, and despite working in Linux shops for more than a decade, let's just say I was very out of my element...


Reminds me of some TUI banking software that ran on Sun Solaris. It could keep up as fast as you could navigate - a few months in and you could fly through the screens. Then it was "upgraded" to a web-based version and all of us were up in arms; it was like being downgraded to a tractor after experiencing a racecar.


Reminds me of the DOS order management software I used in the '90s.

ASCII tables, text only, with F key shortcuts. Hard to learn but blazing fast once you did.

Nothing modern approaches it.


As a support engineer at IBM, I used a mainframe system called NRCPMA, IIRC... I think NR stood for Northern Region. Accessed via a terminal emulator, fully customizable with macros - the fastest tool I ever worked with, once you climbed the initial learning curve.


Reminds me of modern IDEs -- developers, both old and new, are too lazy to learn a complex IDE to speed up their work, even though it's their main tool for making money.


I don't think efficiency of navigating your IDE is a major factor in your productivity. If you like your setup, it's probably not going to matter a whole lot. The tool you make money with is your brain.


Hard disagree.

A few points I can easily remember:

1. Navigating the code - e.g. easily seeing all the callers, or walking up/down the call tree - requires static code analysis. Super handy while reading someone else's code, which is like 90% of the work on large projects.

2. Quick refactorings. Oftentimes I see people discuss at length what would/could be instead of just going and trying it out quickly and seeing all the pros and cons. Many times I've proven myself wrong by trying something out and hitting pitfalls I didn't see earlier.

3. Warnings: so many real bugs could've been prevented if developers had seen (or cared about) the IDE showing a warning. Many PR review suggestions are detectable by a proper IDE without wasting a reviewer's time.

4. Hotkeys (what the parent comment was talking about) -- they speed up all of that, especially refactorings, freeing the dev's brain to think about architecture and other problems.

I can go on and on. Sometimes it feels like 50%+ of AI usage for coding is just to free up fingers, without knowing that they were already mostly free thanks to static analysis features and hotkeys.


It pays to help one's brain stay in the flow, though, and fast and reliable muscle-memory-based navigation of one's IDE does exactly that.


In my experience mainframes at financial institutions are hidden behind IBM middleboxes that are specifically designed to obviate the infrastructure risks. It's a classic example of a company selling you both the problem and solution.


That's just an example of incremental improvement. Mainframes and midranges adapt with the times without losing what works. Modern midranges, for example, can run C, Python, bash, and web servers.


Is there any API for the US grids?

I remember reading an article about this being used in some forensic capacity to determine the date/time a video was taken by comparing the frequency noise.
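The technique is usually called ENF (electric network frequency) matching: you extract the 50/60 Hz mains hum from the recording, turn it into a frequency-over-time trace, and slide it along a logged reference of the grid frequency until it lines up. A rough sketch of just the matching step (the hum extraction and the reference data source are left out, and the function below is purely illustrative):

    import numpy as np

    def enf_match(reference_hz, clip_hz):
        """Slide a clip's hum-frequency trace along a grid-frequency
        reference and return the offset (in samples) with the best
        normalized correlation. Both inputs: frequency in Hz, 1 sample/sec."""
        clip = np.asarray(clip_hz) - np.mean(clip_hz)
        ref = np.asarray(reference_hz)
        n = len(clip)
        best_offset, best_score = 0, -np.inf
        for offset in range(len(ref) - n + 1):
            window = ref[offset:offset + n]
            window = window - np.mean(window)
            denom = np.linalg.norm(window) * np.linalg.norm(clip) + 1e-12
            score = float(np.dot(window, clip)) / denom
            if score > best_score:
                best_offset, best_score = offset, score
        return best_offset  # seconds into the reference where the clip starts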



It’s a bit more nuanced than that. CFIT is intended to classify accidents where the aircraft itself was not causal. In any other case, it is assumed that there were mechanical or other aircraft-related factors that were contributory or causal.


Why would a cell phone even work in a SCIF?


It's not a Faraday cage, just built to spec.


I would argue those days are coming back. Thanks to LLMs, I have probably 10x more "utility" scripts/programs than I had 2 years ago. Rather than bang my head against the wall for a couple hours to figure out how to (just barely) do something in Python to scratch an itch, I can get a nice, well documented, reusable and versatile tool in seconds. I'm less inclined than ever to go find some library or product that kinda does what I need it to do, and instead create a simple tool of my own that does exactly what I need it to.


Just please, if you ever give that tool to someone else to use, understand, maintain, or fix, mention that it was created using an LLM. Maybe ask your LLM to mention itself in a comment near the top of the file.


The 'as is' nature of open source applies regardless of whether a human or LLM wrote the code.


Who said it was open source?


You could (probably) pull the ADS-B data for a "representative" flight on a given route and use that to at least get close - it would probably still be useful for things like the radiation exposure mentioned elsewhere (rough sketch below).

Otherwise, maybe you can get Claude to vibe-code you a mobile app that runs in the background and collects all the interesting data (GPS, cabin altitude, etc.).
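For the first idea, here's a rough sketch pulling live state vectors from the OpenSky Network's public REST API (endpoint and field positions are from their docs as I remember them, so double-check; the callsign prefix and the feet conversion are just for illustration):

    import requests

    OPENSKY_STATES = "https://opensky-network.org/api/states/all"

    def airborne_altitudes(callsign_prefix):
        """Return (callsign, barometric altitude in feet) for airborne
        aircraft whose callsign starts with the given prefix."""
        states = requests.get(OPENSKY_STATES, timeout=30).json().get("states") or []
        results = []
        for s in states:
            callsign = (s[1] or "").strip()   # index 1: callsign
            baro_alt_m = s[7]                 # index 7: barometric altitude (m)
            on_ground = s[8]                  # index 8: on-ground flag
            if callsign.startswith(callsign_prefix) and not on_ground and baro_alt_m:
                results.append((callsign, baro_alt_m * 3.28084))
        return results

    # e.g. airborne_altitudes("UAL") for a crude snapshot of where an
    # airline's flights are cruising right now; a full historical track for
    # a specific route would need the flights/tracks endpoints instead.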


I appreciate the effort here, but I’m still confused. Is my $250/mo “ultra” plan considered personal and still something you train on?


To be honest this is by far the most frustrating part of the Gemini ecosystem, to me. I think 2.5 pro is probably the best model out there right now, and I'd love to use it for real work, but their privacy policies are so fucking confusing and disjointed that I just assume there is no privacy whatsoever. And that's with the expensive Pro Plus Ultra MegaMax Extreme Gold plan I'm on.

I hope this is something they're working on making clearer.


In my own experience, 2.5 Pro 03-26 was by far the best LLM at the time.

The newer models are quantized and distilled (I confirmed this with someone who works on the team), and are a significantly worse experience. I prefer OpenAI's o3 and o4-mini models to Gemini 2.5 Pro for general knowledge tasks, and Sonnet 4 for coding.


Gah, enforced enshittification with model deprecation is so annoying.


For coding, in my experience Claude Sonnet/Opus 4.0 is hands down better than Gemini 2.5 Pro. I just end up fighting with Claude a lot less than I do with Gemini. I had Gemini start a project that involved creating a recursive descent parser for a language in C. It was full of segfaults. I'd ask Gemini to fix them, and it would end up breaking something else, and then we'd get into a loop. Finally I had Claude Sonnet 4.0 take a look at the code that Gemini had created. It fixed the segfaults in short order and was off adding new features - even anticipating features that I'd be asking for.


Did you try Gemini with a fresh prompt too when comparing against Claude? Sometimes you just get better results starting over with any leading model, even if it gets access to the old broken code to fix.

I haven't tried Gemini since the latest updates, but earlier ones seemed on par with Opus.


If I'm being cynical: it's easy to say either "we use it" or "we don't touch it," but they'd lose everyone who cares about this question if they just said "we use it" - the most beneficial position is to keep it as murky as possible.

If I were you I'd assume they're using all of it for everything forever and act accordingly.


Yes, agreed, never go there! The schnitzel is small and horrible! :)

