They did fuck up quite a bit though.
They injected their payload before they checked if oss-fuzz or valgrind or ... would notice something wrong.
That is sloppy and should have been anticipated and addressed BEFORE activating the code.
Anyway. This team got caught. What are the odds that this was the only project / team / library the state actor behind it decided to attack?
This is a state-sponsored event.
Pretty poorly executed, though, as they kept tweaking and modifying things in their own and other tools after the fact.
For a state-sponsored operation, what makes you think this is their only project, or that this is a big setback?
I am paranoid enough to think yesterday's meeting went like:
"team #25 has failed/been found out. Reallocate resources to the other 49 teams."
For the duration of a major release, up until ~x.4, pretty much everything from upstream gets backported with a delay of 6-12 months, depending on how change-averse the RHEL engineer maintaining that part of the kernel is.
After ~x.4 things slow down and only "important" fixes get backported but no new features.
After ~x.7 or so different processes and approvals come into play and virtually nothing except high severity bugs or something that "important customer" needs will be backported.
Sadly, the 8.6 and 9.2 kernels are the exception to this, mainly because of OpenShift Container Platform and FedRAMP requirements.
The goal is that 8.6, 9.2 and 9.4 will have releases at least every two weeks.
Maybe soon all Z streams will have a similar release cadence to keep up with security expectations, while keeping expectations very similar to those outlined above.
C and C++ are HARD to use correctly, but how many of those 60-70% of vulnerabilities would have been caught just by compiling with LLVM's AddressSanitizer? Wouldn't it have stopped virtually all of them?
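For the unfamiliar, that workflow is about this simple (a minimal sketch; the overflow below is a made-up example):

    // heap_oob.cpp - a textbook heap buffer overflow. Built normally
    // this is silent undefined behavior; built with
    //   clang++ -fsanitize=address -g heap_oob.cpp && ./a.out
    // ASan aborts with a heap-buffer-overflow report at the bad write.
    #include <cstdio>

    int main() {
        int* buf = new int[4];
        buf[4] = 42;  // one past the end: ASan traps this write
        std::printf("%d\n", buf[0]);
        delete[] buf;
    }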
> In many cases we already have the tools. The problem is that people are not using them.
The problem with these tools is that the instrumentation code the compiler inserts comes with a 50-100% program-wide performance penalty (*), and that's not acceptable to C++ developers. So in practice, you don't just add -fsanitize=address to your builds; you add it to test builds and fuzz them. But now you're not just trusting your compiler, you're trusting your tests and their coverage.
The promise of Rust is that many memory-safety bugs are forbidden at compile time in safe code, and the stuff that has to be checked at runtime (self-referential data structures, out-of-bounds accesses, etc.) gets its checks applied more granularly, with unsafe opt-outs where appropriate, which means you're not going to pay 50-100% in raw performance.
* take this, like all perf numbers, with a heap of salt; do your own benchmarks and come to your own conclusions.
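"Add it to test builds and fuzz them" usually looks something like this libFuzzer harness (a sketch; parse_header is a hypothetical function under test):

    // fuzz_target.cpp - build with:
    //   clang++ -fsanitize=address,fuzzer -g fuzz_target.cpp parser.cpp
    // libFuzzer generates inputs; ASan turns any OOB or use-after-free
    // those inputs trigger into an immediate, reportable crash.
    #include <cstddef>
    #include <cstdint>

    // Hypothetical function under test, defined elsewhere in parser.cpp.
    bool parse_header(const uint8_t* data, size_t size);

    extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
        parse_header(data, size);  // any memory bug here becomes a crash
        return 0;
    }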
To developers who cargo-cult performance, that is.
I have always enabled bounds checking, and never, ever, did it matter for the kind of projects I was involved with.
Not everyone is writing a VR engine for a console rendering at 120 FPS, but just as everyone wants to be Google, many of those developers like to pretend they are.
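For illustration, this is the kind of bounds checking in question (a minimal sketch using std::vector::at(); the values are made up):

    // checked.cpp - bounds-checked access via std::vector::at().
    // at() throws std::out_of_range instead of silently reading past
    // the end; for most workloads the check is lost in the noise.
    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        try {
            std::cout << v.at(10) << '\n';  // checked access: throws
        } catch (const std::out_of_range& e) {
            std::cerr << "caught: " << e.what() << '\n';
        }
    }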
That's kinda true, but if you use the compiler-inserted address sanitizer code it will turn bugs from exploits into crashes. You can't exploit an OOB write if the write fails and the program crashes.
They are hard to use when you insist on using them the hard way. std::span can provide you lots of safety, but people don't use the tools that are available.
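For example, a sketch of the std::span point (the sum function is made up):

    // span_demo.cpp - std::span keeps the length attached to the
    // pointer, so the callee can rely on it instead of trusting a
    // separately passed count. Build: clang++ -std=c++20 span_demo.cpp
    #include <cstdio>
    #include <span>

    // The raw-pointer version would be: int sum(const int* p, size_t n);
    // where nothing stops a caller from passing the wrong n.
    int sum(std::span<const int> xs) {
        int total = 0;
        for (int x : xs) total += x;  // range-for cannot run past the end
        return total;
    }

    int main() {
        int data[] = {1, 2, 3, 4};
        std::printf("%d\n", sum(data));  // span deduces the size (4)
    }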
Anyway. Writing safe code in C is HARD but possible. Especially with good tooling.
That is not to say everyone should use C. C is an exceptionally hard language to use safely and correctly and is not for everyone.
One meta-source for this is Prossimo[0]. They link to multiple vendor reports that range from 60-90%.
> Writing safe code in C is HARD but possible. Especially with good tooling.
I don’t disagree in theory, but I think it is so hard as to be impractical in almost every case. So, other than maintaining a legacy code base, why try at this point when other options are available?
> So, other than maintaining a legacy code base, why try at this point when other options are available?
What options would you recommend in 2024? I write C (not C++) and work on projects that are inherently memory unsafe (the last one required hand-written assembly code). I've explored potential C successors in the past, but have yet to discover one that matches the freedoms, simplicity, ergonomics, and performance of the C language.
The question is whether this actually "creates a 3D model based on the picture",
or whether it "finds an existing model that looks similar to the picture and texture-maps it".
To me, a cynical old man, the threat sounds like "it might say something politically incorrect or that I disagree with". That is what I understand the threat to be, not a robot uprising à la Terminator.
There are arguments for this, when what we mean by "AI safety" == "don't swear or say anything rude, or anything that I disagree with politically".
What we are talking about is creating sets of forbidden knowledge and topics. The more of these zones of forbidden knowledge you add, the more the data looks like Swiss cheese and the more lobotomized the solution set becomes.
For example, if you ask whether there are any positive effects of petroleum use, the models will treat the question as forbidden and refuse to answer, without even considering the effect synthetic fertilizers have had on food production and how much worse world hunger would be without them.
He who builds an unrestricted AI will have the most powerful AI which will outclass all other AIs.
You can never build a "better" AI by restricting it, just a less capable one. And will people use AI to create rude messages? Yes. People already create rude messages today, even without the help of AI.
Very early Google was full of passion and people who wanted to build cool things for users. There was a culture of building things that would surprise and delight users.
The change was slow, but I think it started around 2008-2010, when passion for building something was no longer what drove people; instead, the promo process, having impact, and moving the needle became what drove them. Not passion but the promo process changed the culture dramatically over time.
Friends and I used to call it the LPA cycle: (L)aunch, get (P)romo, (A)bandon and switch teams. Towards the second half of the 2010s it became a de facto rule: once something launched with big fanfare, after the next promo cycle almost all L5-and-higher engineers would leave to chase their next promo on a different team.
You can see this over and over after ~2015: high velocity and innovation until launch, and shortly after, it grinds to a halt. Very sad to see this change from early Google.