It's probably the wrong place in the stack to implement this: the firmware runs on very low-cost commodity microcontrollers, and flight controller software is designed around timing guarantees and reliability.
With the exception of low-cost consumer drones, most larger drones have at least a "Flight Controller" (embedded MCU handling guidance, navigation, and control) and a "Flight Computer" (higher-level *nix-based computer running autonomy software), and the flight computer is IMO a more appropriate place to put this.
You could encrypt MAVLink or any proprietary protocol at the application layer if you're using an IP link, or just rely on the telemetry radio to perform encryption between the drone and your ground station.
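For the application-layer option, here's a minimal sketch, assuming a pre-shared key, Python's cryptography library, and a plain UDP link; the address, port, and framing are placeholders, not any particular GCS convention:

    # Sketch: wrap telemetry payloads (e.g. raw MAVLink bytes) in AES-GCM before
    # sending them over an IP link. Key distribution is assumed to happen out of band.
    import socket, struct
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    KEY = AESGCM.generate_key(bit_length=256)   # provision this out of band in practice
    aead = AESGCM(KEY)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_telemetry(payload: bytes, seq: int, addr=("10.0.0.2", 14550)):
        nonce = struct.pack(">IQ", 0, seq)       # 96-bit nonce; never reuse with the same key
        ct = aead.encrypt(nonce, payload, None)  # ciphertext + 16-byte auth tag
        sock.sendto(struct.pack(">Q", seq) + ct, addr)

The ground station side does the mirror-image decrypt and should reject any sequence number it has already seen.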
BetaFlight doesn't deal with over-the-air bits; it just receives PWM/PPM/S-Bus/whatever signals your receiver provides. There is no point in having encryption in the firmware, because the connection between the RX and the FC is hardwired and can be trusted.
The lack of OTA frame encryption, as far as I can tell, is mostly due to legacy reasons. In DIY FPV there are only a couple of transmission standards, most of them using 2.4GHz FHSS or some CC2500 clone, so you can mix-and-match transmitters and receivers as you wish. If you use custom TX/RX devices, you are pretty much locked in to that specific vendor. Also, designing a transmitter that's nice UX-wise requires quite a different skillset than designing one that's nice RF-wise, so manufacturers tend to choose off-the-shelf RF modules.
The threat model for most FPV pilots (either hobbyists or people in Ukraine) doesn't really include hijacking of the air link. It's trivial to just shoot something down with interference, sometimes inadvertently.
Pretty much everyone in FPV is now using ExpressLRS, which is an open protocol. If you want an encrypted air link, then the best option I'm aware of is the proprietary TBS Crossfire protocol.
Betaflight doesn't really care what radio receiver you're using - as long as it can talk to it over UART (/SPI) via one of its supported protocols like CRSF, iBUS, SBUS, etc.
If you really want encryption, you can simply use a Pi Zero that talks CRSF to Betaflight and has an encrypted channel to your ground station over 4G LTE/WiFi/wfb-ng/whatnot (rough sketch below).
But if you're dealing with 4G and a Pi Zero, you might as well use ArduPilot + MAVLink. Those tools already support this use case much better.
Betaflight is more of a proximity/racing drone kind of use case. Only recently did its GPS return-to-home functionality get some improvements.
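To make the Pi-relay idea concrete, a rough sketch, assuming pyserial, the Pi's /dev/serial0 UART, and an arbitrary UDP port for the ground link (none of this is a standard Betaflight or ELRS interface):

    # Receive CRSF frames from the ground station over the network and forward
    # them to the FC's UART. Decryption/authentication of the network link is
    # elided here; reuse whatever AEAD you already run over the 4G/WiFi hop.
    import socket
    import serial  # pyserial

    uart = serial.Serial("/dev/serial0", 420000)   # CRSF commonly runs at 420 kbaud
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5760))                   # placeholder port

    while True:
        frame, _ = sock.recvfrom(256)              # CRSF frames are small
        # ...verify/decrypt frame here before trusting it...
        uart.write(frame)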
Crossfire supports encryption.
Mainline ELRS can't add encryption support because the whole idea of ELRS was to reduce the LoRa packet size to the bare minimum needed for 4 full-res channels + a bit extra for arming and time-multiplexed aux channels. There's some discussion on protocol security and scope here [0].
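To get a feel for how tight that budget is, here's a back-of-the-envelope sketch; the channel widths are assumptions for illustration, not the exact ELRS frame layout:

    # Pack 4 stick channels plus a few switch bits and compare the result to the
    # size of an AEAD tag. Widths are illustrative, not the real ELRS format.
    def pack_channels(ch, arm, aux, width=10):
        val, bits = 0, 0
        for c in ch:                          # 4 x 10-bit stick channels
            val = (val << width) | (c & ((1 << width) - 1))
            bits += width
        val = (val << 1) | (arm & 1)          # 1-bit arm flag
        val = (val << 3) | (aux & 0x7)        # 3-bit time-multiplexed aux slot
        bits += 4
        return val.to_bytes((bits + 7) // 8, "big")

    payload = pack_channels([512, 512, 0, 1023], arm=1, aux=2)
    print(len(payload), "bytes of RC data vs. a 16-byte GCM/Poly1305 tag")

The RC data comes out to only a handful of bytes, so an authentication tag alone would more than double the on-air payload.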
I'm sure these days there are multiple LoRa based links (independent and ELRS forks) that support authenticated encryption.
I think encryption is not always allowed on remote-control hobbyist bands. Some jurisdictions allow stronger radio output in exchange for such restrictions.
That and lack of demand. Most people are nice, key management is a PITA, and losing an expensive toy to a crypto library bug is going to be frustrating.
WPA2 should still be strong enough for most purposes too (threat_model != CIA).
Do any of these attacks matter for single-tenant computers where all network packets are sent on a hardware timer (say, 10 kHz), independently of the crypto computation?
Doesn't that mitigate any side-channel timing attacks from the start?
Oh cool, I get to celebrate his birthday twice per year now. That said, strangely enough, if the calendar changed after my death and people were still celebrating my birthday, I'd expect people to celebrate the day on the calendar I used rather than the accurate day.
What's the status of Rust usage inside NASA? I am currently writing software that, hopefully, will be sent to the Moon one day, and I am considering options as to software technologies.
I do not really know. I heard that it's generally perceived as a good option, but I don't know if any teams are actively using it for any missions.
My suspicion is that once it's generally accepted as part of the kernel, things may change.
If the Rust community wants Rust to be used in safety-critical systems, my personal view is that they need to prioritize robustness and stability of the Rust ecosystem as a whole over frequent changes to the language or libraries that save 1-2 lines of code.
Breaking changes to APIs and tool changes are a big issue in general, so they are best avoided (almost every time someone introduces a breaking change that we are forced to adopt, we have to spend thousands of dollars (in time) to adapt). It's better to take longer to release a tool, but when you do release it, make sure it'll work for a long time.
We recently had this case with a tool: our project was delayed by several weeks because someone replaced a version of a tool in Homebrew with one that introduced a breaking change. We hit multiple bugs during the upgrade.
A mission that flies won't depend on homebrew, but it would be very plausible for a bug to be fixed in a version that pulls a dependency with a higher version number, and for it to be impossible or impractical to upgrade only one or two packages. In particular (and I don't "speak" rust), if your compiler comes with core libraries that necessarily need a version of the compiler to work, you want those to be de-coupled and the core API not to change.
Please be aware that this is just my personal opinion. I don't speak on behalf of any agencies.
> Breaking changes to APIs and tool changes are a big issue in general, so they are best avoided
There seem to be two different camps in the Rust world. Any crates that have to do with the web (orthogonal to, but not distinct from, async) or some GUI-related aspects seem to be constantly breaking in major ways between releases, way too often to actually keep up with. Anything else in the Rust world, the kind of thing someone coming from a traditional C/C++ background would be interacting with, has a much more mature ecosystem around it with saner breakage (i.e. only when/where necessary, and typically only changing a name or restructuring a type, not a "let's wholly re-architect both our own code and the code of anyone using this in the wild" kind of thing).
Thanks Ivan, makes sense. With the ongoing work of integrating Rust into Linux, I hope we'll see some of that sought-after stability in its tooling. I appreciate the insight - I also wish more code was written with long term in mind, and by long term, I mean decades, not years.
I've been betting on creating good software that survives for decades, but most people just want quick fixes and new features, and few are willing to put in the time to "do the homework" (that is, clean up the code, debug it, benchmark it, reduce it, modularize it, etc.). Over time, with more and more fixes and features, code rots and the maintenance burden increases.
People seem to be creating open source like it's free, with no regard for the time and effort that came before. Every solution we create adds to the global maintenance burden of the community. We need to put processes in place that make open source code better over time, not bigger.
Not sure if you're aware, but in Rust there's no dependency hell: component A can depend on version X of a library, component B can depend on incompatible version Y, and you can still link components A & B into your program without any hassle/correctness/safety issues. That doesn't solve "but I want to upgrade to the latest version & for it to be compatible", but that's typically an untenable position in any environment when relying on OSS - perhaps try to work out arrangements with those projects to backport fixes if it's that mission critical, or live with developing processes to stay on top of updates like the rest of us?
One day I'll learn rust and maybe then I'll understand.
> live with developing processes to stay on top of updates like the rest of us?
NASA follows very robust software engineering processes (even for research projects like, e.g., Copilot and, to a lesser extent, Ogma). It would not be able to do what it does if it didn't.
This is a topic for a longer discussion and definitely not one to be had here, but I will say that it's not conducive to a constructive discussion to frame it as a problem with our processes, or with us ("developing processes to stay on top of updates", "like the rest of us").
The people who work on these things are smart. This is a topic we've had long discussions on. If it was obvious or viable to fix internally, we would have done it already.
I have been programming in particular in Haskell for 20y. I've worked for all kinds of companies and organizations, big and small, for the last 18y. I am like the rest of us. The problem is not exclusive to NASA, and NASA's processes are not to blame here.
It's a problem with how to build languages and ecosystems.
My comment was not meant to disparage the work that NASA does, so apologies since that's the way it landed. The engineers working at NASA are really good. I was just trying to convey that the requirements you have are very different from the general ecosystem, and thus you will always have a greater cost to do engineering. Where possible, it's always cheaper to relax constraints at the program level, not at the individual software component level (e.g. auxiliary components that have a recovery path in the case of SW faults). My impression is that NASA generally strives for highly reliable systems, although I think they're getting better with the Mars copter experiment. SpaceX is also doing good work trying to drive the cost down by making launches less expensive (that way SW faults aren't as critical in most systems, and payloads themselves don't need high reliability because they can just retry).
On the dependency front, Rust solves this about as well as you can hope for at the language level since dependencies between components don’t imply anything else about the dependency chain. I was just trying to convey that at that point there’s no way I can think of to reduce the cost of upgrading unless you make agreements with your exact SW dependencies about what versioning and changes look like for them (for general OSS that’s not generally tenable as NASA is likely to be a very small use case compared with the number of environments a popular package might get deployed to). That works in some cases but there’s no way to enforce that and nothing any language can do about it.
Generally I’ve found that organizations ossify their dependency chain on the assumption of “if it ain’t broke don’t fix it”. I’m not sure I buy that because that’s just tech debt that starts accruing and it’s better to just always pay a little bit of money along the way. Of course I don’t have any experience running teams on the kinds of problem domains NASA focuses on so I can’t speak to which development process is better for that use case. All I can note is that using off the shelf software and reducing the reliability requirements on as many components as possible generally results in a cheaper outcome (eg the Mars drone). When you’re in that domain you’re out of the high reliability domain of expensive space rocket launches and into more of the traditional SW development processes. Generally I’ve seen Rust libraries do semver better than most since that’s culturally the expectation. Even with Semver though you’re stuck if the library authors decide to go to the next major version.
"A review published last year in the journal Bioelectromagnetics found no evidence that hypersensitive individuals had an improved ability to detect EMFs, and the study found evidence of the nocebo effect in those same people."
Every time I see someone claiming to have extremely clear symptoms of EMF sensitivity, I wonder why they don't do a double-blind test to prove to the whole world that they can actually detect radio waves. It should be a trivial test to perform properly, and it would clearly help the case of hypersensitive people, so why hasn't it happened yet?
The published article already shows such studies have been performed and quite conclusively found these people to be liars. No point in wasting further money or time on a self-suppressing group.
All my recent searches for old movies have been in vain. Last Sunday, I wanted to see 'Little Miss Sunshine (2006)'; result: not available (in Canada anyway). Unsubscribing has started to cross my mind.
When I subscribed to Netflix, the motivation was a large catalog of old movies, cool stuff from other countries (I remember watching TV series from Iceland that were great), and no ads. But the greatest selling point: it was more convenient than pirating.
Now, it's all about pushing Netflix produced content down my throat using whatever trick possible, and pirating is again more convenient (large catalog, no ads) even if less user-friendly.
So yeah, "outsmarted" is really pushing it. I'm not the only one getting bored by the current Netflix direction.
> I'm not the only one getting bored by the current Netflix direction
The current Netflix direction is the only one possible; one big catalog only worked before they had proven the viability of mass streaming—once that happened competitors bidding for exclusives (and content owners reserving material for their own services) was inevitable.
Full quote: "Mason's team was able to identify specific plastics over 100 microns (0.10 mm) in size but not smaller particles. According to experts contacted by CBC News, there is a chance the Nile Red dye is adhering to another unknown substance other than plastic."
Key point: they're unsure about the smaller particles, but they're sure about the bigger ones. There are microplastics in the samples.
I don't think that is quite true. I believe with UDP there is no promise of packets being received in order. I think this article is saying that you still get the benefits of processing the packets in the order received, but you don't have to worry about the latency of waiting for the re-transmission of any packets.
It's so trivial to add ordering to UDP that it's really the right protocol to use here (quick sketch below).
Subverting TCP leads to so many problems around congestion (which you can no longer filter on, because which TCP streams are being non-compliant?) that it just should not be done.
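To show what "adding ordering" amounts to, here's a minimal receiver-side sketch; the port and payload handling are placeholders:

    # Ordering on top of UDP: prefix each datagram with a sequence number and
    # drop anything at or below what we've already delivered. No retransmission,
    # so a stale packet never stalls a fresh one.
    import socket, struct

    def handle(payload: bytes):
        print(len(payload), "bytes")           # stand-in for real processing

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9000))               # example port

    last_seq = -1
    while True:
        datagram, _ = sock.recvfrom(2048)
        seq = struct.unpack(">Q", datagram[:8])[0]
        if seq <= last_seq:
            continue                           # late or duplicate: discard
        last_seq = seq
        handle(datagram[8:])

The sender just increments a counter and prepends it with struct.pack(">Q", seq).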
I guess I don't really see the big issue with that. It's not like windowing and congestion control is some kind of black magic. It's spelled out pretty cleanly in the TCP RFC and pretty straightforward to reimplement.
Generally if you're hitting cases where TCP is causing you grief and you need to reach for UDP you've already got enough context to understand your congestion problems/etc.
We've been doing this in game dev for decades, ditto the VoIP space, so it's not like you don't have a wealth of knowledge to draw from if you're really stumped.
If you just use TCP again you haven't done anything. The whole point is to avoid latency.
Most folks use some UDP-based protocol package instead of reinventing the wheel. It's not rocket science, but it isn't trivial either. Defining your own packets to do all the flow stuff is just work, like any other programming task.
I don't think I was suggesting using TCP; I was suggesting implementing the features you like from TCP into your stack if you really need them. You can do congestion control without retry, etc. (rough sketch below).
I've built variations of UDP based protocols 4 or 5 different times over my career. I'm literally in the middle of this right now with the radio framing protocol I've been developing. I really think you're making it out to be much harder than it is.
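As an example of "congestion control without retry", a bare-bones AIMD pacer, with arbitrary illustrative numbers and a receiver that only reports how many sequence numbers it missed (nothing here is retransmitted):

    # Additive-increase / multiplicative-decrease pacing driven purely by loss
    # reports. Lost data is never resent; the sender just slows down.
    class AimdPacer:
        def __init__(self, rate_pps=100, min_pps=10, max_pps=10000):
            self.rate = rate_pps
            self.min, self.max = min_pps, max_pps

        def on_feedback(self, packets_lost: int):
            if packets_lost > 0:
                self.rate = max(self.min, self.rate / 2)   # back off on loss
            else:
                self.rate = min(self.max, self.rate + 10)  # probe for more bandwidth

        def send_interval(self) -> float:
            return 1.0 / self.rate                         # seconds between sends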
It focuses narrowly on a congestion control protocol, and is intended to be combined with whichever datagram-based protocol you have lying around that might be suffering from congestion issues.
I'm not sure I understand the distinction. How could TCP guarantee that packets will be received in order without re-transmissions? Re-transmission is a mechanism for making this a guarantee. If the receiver just ACKs everything it gets, then isn't that effectively making no guarantees about the order?
The TCP layer of your IP stack does the reordering and presents the data to the client in order. The UDP layer doesn't. So by ACKing every packet, the TCP layer will still present what it DOES receive in order.
Right, but UDP also presents what it receives in order - so what's the advantage of forcing TCP to behave this way? I struggle to think of a practical use case where either a) UDP or another protocol wouldn't have been selected in the design phase (e.g. in a VoIP system), or b) using TCP in this non-standards-compliant way would be anything more than a short-term bandaid because of other constraints (e.g. can't change Layer 4 to something non-TCP).
Don't underestimate how often packets are received out of order. There's even a consumer DSL modem that swaps every odd UDP packet with every even one - I had to compensate for this in a VoIP product. Using TCP in this bastardized way would cure that. That said, I tend to agree it's a poor idea to use TCP in this way. The famous book on IP used to list 8 protocol possibilities (only 2 commonly survive today, UDP and TCP), of which streaming with packet reordering (without acking/retransmitting) was a valid combination. Don't know what it was called, but that's what's being attempted here.
I think we’re operating on different definitions of in-order and as received. TCP delivers packets in order, but perhaps not as received, if it had to request retransmission of a dropped packet. UDP delivers packets in order that they were received. Doing what the article suggests would make TCP also deliver packets in the order that they were received. No?
No problem, thanks for clarifying! So basically the benefit to the TCP NACK approach is that the TCP layer will also do a sort on the packets received?
Yeah, which is imperfect, but there you are. It means sometimes out-of-order packets will be reordered, and sometimes dropped (since TCP already ACKed them, the existing TCP code will discard the out-of-order packet as a 'duplicate'). Which turns out to work pretty well in practice, since out-of-order packets almost always arrive in a burst (no inter-packet delay).
To clarify, what I meant is that with TCP, I can set up a two-way communication channel, even if I'm behind a NAT/firewall I don't control. As far as I understand, with UDP this is harder (i.e., does not work with all NAT types), because UDP does not establish a connection, and it does not provide a two-way communication channel. However, I am not up to date on NAT traversal techniques, so I might be wrong.
I suspect that what andreasvc meant is that the default "NAT" configuration of most consumer-grade gear is such that it will block UDP (unless some other mechanism such as UPnP is used)...
... in which case blocking is not an issue. Consumer-grade NAT hardware will no more block client-initiated UDP than it'll block client-initiated TCP, at least not without extra configuration.
I've been using normal consumer routers for NATed home connections since 2004 or so and I've never had an issue with outgoing UDP. It's required for basically every video game, after all.
I think most routers set themselves as the DNS server, so NAT is not in effect (the computer only sends the request to a local address) unless you define a custom DNS server, which isn't common for home users.
That said, I've never seen a router that didn't allow UDP packets to flow back to the origin client.
> I think most routers set themselves as the DNS server
DNS forwarders like dnsmasq are a relatively recent inclusion in home routers. Sure, they've been there for 10 years or so, but they weren't there for the 5+ years before that. Before Linux took over as the embedded OS on home routers, the DHCP servers just passed along the DNS configuration that the WAN port got from the ISP, and you can still do that now if you want. That's why nslookup.exe and dig still work on your workstation when you specify an external DNS server instead of the one the DHCP server on your home router gives you.
> That said, I've never seen a router that didn't allow UDP packets to flow back to the origin client.
A typical RTP stream will have SMPTE FEC on top, allowing a burst loss of, say, 20 packets, or random loss of theoretically up to 5%.
In the last 10 years of streaming RTP over the Internet, the vast majority of failures are bursty. Reordering is rarer than you'd expect.
Looking at one sample from India to Europe over a period of 6 weeks, my 30 Mbit RTP stream was fine 99.998% of the time with FEC. For the rest of the time, that's why I dual-stream: either timeshift (send the same packet again 100ms later - as I said, most outages are time-based), or dual path (although if the packets traverse the same router en route there are issues), or even both.
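The timeshift half of that is simple enough to sketch; the delay, address, and de-duplication policy here are assumptions, not any particular product's implementation:

    # Send every RTP datagram twice, the second copy ~100 ms later. The receiver
    # de-duplicates on the RTP sequence number, so the delayed copy only matters
    # when the first one was lost in a burst.
    import socket, threading

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    DEST = ("198.51.100.7", 5004)   # placeholder receiver

    def send_with_timeshift(packet: bytes, delay: float = 0.100):
        sock.sendto(packet, DEST)                                    # original
        threading.Timer(delay, sock.sendto, (packet, DEST)).start()  # delayed copy

Dual-path is the same idea with a second destination on a different route instead of a delay.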