Hacker News | delusional's comments

I don't understand the graphs presented here. On the first graph showing "New Memory Unsafe Code" and "Memory safety Vulns" we don't have any steady state. The amount of both "unsafe code" and "memory safety vulns" had apparently already been dropping before 2019. No matter though, we see a great big drop at 2022 in both.

Then in the next graph, showing "Rust" and "C++", we see that the amount of C++ code written in 2022 actually increased, with rust not really having gained any significant momentum.

How can one possibly square those two pieces of data to point at rust somehow fixing the "memory safety vulns"? Somehow an increase in C++ code led to a decrease in the amount of both "New Memory Unsafe Code" and "Memory safety Vulns".

Also "this approach isn’t just fixing things, but helping us move faster." is an AI red flag.


> How can one possibly square those two pieces of data to point at rust somehow fixing the "memory safety vulns"? Somehow an increase in C++ code led to a decrease in the amount of both "New Memory Unsafe Code" and "Memory safety Vulns".

The first graph considers <memory unsafe> vs <memory safe> languages, while the second graph considers C++ vs Rust. There are more languages than just those two in the first graph.

Moreover the first graph is in percentage terms, while the second graph is in absolute terms.

In 2022 it appears a bunch of memory safe non-rust code was added. Java/python/...
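
To make the percentage-vs-absolute point concrete, here's a tiny sketch with completely made-up numbers (not Google's actual figures, just an illustration of the arithmetic): C++ can grow in absolute terms while the memory-unsafe share of new code still drops, simply because far more safe code was added alongside it.

    # Made-up numbers purely to show how both graphs can be true at once.
    new_code_2021 = {"cpp": 100, "rust": 5, "other_memory_safe": 50}
    new_code_2022 = {"cpp": 110, "rust": 10, "other_memory_safe": 200}

    def unsafe_share(year):
        # Here only C++ counts as memory-unsafe; everything else is safe.
        return year["cpp"] / sum(year.values())

    print(f"2021 unsafe share: {unsafe_share(new_code_2021):.0%}")  # ~65%
    print(f"2022 unsafe share: {unsafe_share(new_code_2022):.0%}")  # ~34%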

> Also "this approach isn’t just fixing things, but helping us move faster." is an AI red flag.

That's a perfectly human phrasing lol.


I’m a little perplexed why every time something in rust compiles, there’s a blog post about it. I was under the impression Ada, especially when using provers, has been around much longer and is more robust. I just can’t decide if the massive Rust evangelism budget is a red flag or just a curious sociological case study, but I wish I knew the truth.

> How can one possibly square those two pieces of data to point at rust somehow fixing the "memory safety vulns"?

The code base contains Kotlin and Java as well


I don't think the greyscale camera is mainly a cost concern. I imagine the greyscale camera has better low light and noise performance, which is quite important for tracking.

The big difference seems to be that this headset doesn't have AR cameras at all, but reuses the mapping cameras for some light passthrough duty.


The headsets that have AR cameras don't use them for tracking AFAIK. They all have monochrome cameras for that. The AR cameras are an additional cost that is only used for AR.

the "growing industry" can pay for itself.

I'm more confused that it's running SteamOS, which is supposedly Arch based, but arch doesn't officially support ARM. You have to use the ArchLinuxARM distro for that, which is less maintained. They've got to be doing something off-label for that.

> arch doesn't officially support ARM

Doesn't really mean much to Valve as SteamOS vendor:

- the Linux kernel supports aarch64 just fine

- user space supports aarch64 just as well

- Valve provides runtime for games (be it via proton or native linux), so providing aarch64 builds is up to them anyway

The main point of ArchLinuxARM is providing compatible binaries, which isn't something hard to do in-house.


Even if they are, Valve has a long track record of contributing back to open source projects.

Proton was a community led effort years back. The guy who started that is now an employee at Valve (IIRC) working on Proton, but also getting paid :)

Arch doesn't support ARM at all. ARM is somebody else's hobby project.

Arch has been working with Valve on various build system improvements for some time [0], which as I understand it are targeted at making it more feasible for them to eventually support more architectures [1]. This doesn't release for several months; I wonder if there'll be an official Arch Linux ARM by then?

[0]: https://lists.archlinux.org/archives/list/arch-dev-public@li...

[1]: https://news.ycombinator.com/item?id=41696041


You mean valve's?

isn't Steam Deck arm based?

No, it's AMD based

No. It's an AMD x64 CPU married to an onboard GPU.

LOL. We're on hackernews. This sort of stuff is basically the bread and butter of this venture capital funded website.

Not far off

https://www.ycombinator.com/companies/tovala

A SMART-OVEN-PAIRED SUBSCRIPTION MEAL SERVICE.


Sounds like the right place to voice this suggestion.

If I only did it where everyone agrees with me it'd be a waste of time, of which none of us have very much.


Yea, one thing I've learned--if you ever even suggest that software engineers should be held responsible for writing harmful software or working on unethical projects, you're going to get downvoted so fast your head will spin. HN commenters will happily build the Torment Nexus as long as the JIRA ticket is filled out correctly and they have clear requirements.

Just ten more tickets and I can retire safely, it’s supposed to be “the next guy’s problem.” So long, suckers!

How much is "technically the best" worth compared to "convenient, secure, and good"?

To most, all they care about is picture quality and being able to watch the content they want.

Also, I assume you can still do what I did for my current LG TV: skip the wifi setup, plug in an Apple TV, and use it purely as a dumb TV.


> all they care about is picture quality

I get that, but at some point you're installing it in your house with non-color corrected lighting and viewing it during the daytime with your terrible human eyes. I get why 200-300 lumens of peak brightness can make a difference, but does 2-3% of color correctness really matter to people as they watch their low bitrate netflix stream?

Maybe we'd all be better off if we calmed down a bit on chasing the specs, and focused on something else for a while.


Are there any streaming services that offer video quality on par with a high-end blu-ray?

Sony Pictures Core and Kaleidescape are as close as it gets, and both require expensive proprietary hardware.

That's really disappointing; I have zero interest in allowing a device like that on my network, or in spending that much on hardware for a single proprietary service that could go away or change its terms, or in having a service that only works with one device rather than many services that all work on the same device (e.g. Android TV).

Sigh. Where's the video equivalent of music stores for "just let me buy a high-quality DRM-free download I actually own" already?


I imagine it won't be long before the TVs come with eSIMs to connect directly to T-Mobile/Verizon/AT&T, and maybe add some cameras in the bezels to track eye movement.

Then the advertisers could buy more accurate information to improve product placement in movies/tv shows.

The sci-fi version could be a TV that can recognize what kind of things are in the room or clues for the viewer's socioeconomic status and emotional state to bring up content (or even change it in real time) to maximize resonance with the viewer.


The cost is probably still too prohibitive to do so.

I worked with IoT devices; generally the cost of data is around a dollar per GB. I doubt you would make that back in advertisements.

Also, there is a cost per SIM, so you wouldn't want a situation where the SIM is active when you don't need it, which is why a lot of IoT devices have you set them up with a phone: they only turn the SIM on when you sign up for their plan. If the consumer never puts the TV on a Wi-Fi network or cooperates with the phone, then you would have to keep each SIM active and only turn it off once it checks in via Wi-Fi. My guess is the cost is not worth it if you get 98% cooperation. Write off 2% and call it a day.


The cheaper play (which could be implemented today, likely with few HW changes) would be to just use BLE or another 2.4GHz proprietary protocol to broadcast your usage data (maybe encrypted with a vendor key - let's be generous) to another TV or refrigerator in your area that is already internet-connected.

Is that not pretty similar to what Amazon already do with Ring devices?

People on HN have been saying this for years and TVs still aren't shipping SIMs.

He did heil a couple of times. And created mechahitler. And lied about "white genocide" in South Africa. And called a white supremacist talking point "the actual truth"

We should be careful of labeling people Nazis, but Elon does seem to be playing on the wrong side of that fence.


> He did heil a couple of times.

Also a Democratic senator, Cory Booker, did the same salute recently [0]. But when a Democrat does it nobody really cares. Or rather people seem to understand that it's also just a natural wave towards a crowd that, in a certain snapshot, could be interpreted as a Nazi salute.

[0]: https://www.foxnews.com/politics/conservatives-drag-booker-n...

> And created mechahitler.

Just like Google Gemini was too woke before, and generated images where half or more of the people were people of color, even when it was asked to draw German soldiers in the Second World War.

And both companies have apologised and corrected the issues. And many AI companies have their controversies.

It is indeed concerning, but I basically would never trust any proprietary LLM, because you never know how it's trained. But I don't boycott any particular one because of its bias; rather, I combine and compare them. Then you can see which ones filter what kind of information.

> And lied about "white genocide" in South Africa.

What exactly do you mean by "lied"? I agree that the word genocide is probably exaggerated. But I think there is certainly nasty stuff going on against white people in South Africa. Not to say that it doesn't happen the other way around either, but then again you should be fair and give both sides a voice.

> And called a white supremacist talking point "the actual truth"

I dived into this story at the time. And what I found was that this one "truth" tweet was a response to a response from someone in a discussion about a certain Jewish organization (the ADL). Elon was criticizing this organization for pushing an anti-white racism message, basically because they were saying that Elon taking over Twitter would make it more antisemitic. So they were just blaming each other and it classically escalated with an Elon tweet that he later regretted. But again, I don't think this necessarily suggests that he supports racist ideologies. Rather that he opposes them. But standing up against anti-white racism is often seen as being a white supremacist and antisemitic.


> I can't listen to my own music that I bought on the Music app

That doesn't change even if you buy the subscription. I moved to YT Music only because the Apple Music app asked me to subscribe every time I used it. I was already subscribed.


> Denmark's constitution does have a privacy paragraph, but it explicitly mentions telephone and telegraph

That's very much not how Danish law works. The specific paragraph says "hvor ingen lov hjemler en særegen undtagelse, alene ske efter en retskendelse", translated as "where no other law grants a special exemption, only happen with a warrant". That is, you can open people's private mail and enter their private residence, but you have to ask a judge first.


People continue to believe that the "Grundlov" works like the US constitution, and it's really nothing like that. If anything it's more of a transfer of legislative power from the king to parliament. Most of its provisions just leave the details to be determined by parliament.

Censorship really is one of the few places where it's pretty unambiguous; that one really is just "No, never again". Not that this stops politicians, but that's a separate debate.


And yet they wanted to push a proposal where the government would have free access to all digital communication, no judge required. So if it happens through a telephone conversation, you need a judge, while with a digital message you wouldn't, since the government would have already collected that information through Chat Control.

I don't know where you get your information, but that was not in the chat control proposal I read.

Patrick Breyer has some good thoughts on this.[1]

The relevant points I believe to be:

> All citizens are placed under suspicion, without cause, of possibly having committed a crime. Text and photo filters monitor all messages, without exception. No judge is required to order to such monitoring – contrary to the analog world which guarantees the privacy of correspondence and the confidentiality of written communications.

And:

> The confidentiality of private electronic correspondence is being sacrificed. Users of messenger, chat and e-mail services risk having their private messages read and analyzed. Sensitive photos and text content could be forwarded to unknown entities worldwide and can fall into the wrong hands.

[1] https://www.patrick-breyer.de/en/posts/chat-control/


> All citizens are placed under suspicion

> No judge is required to order to such monitoring

That sounds quite extreme; I just can't square it with what I can actually read in the proposal.

> the power to request the competent judicial authority of the Member State that designated it or another independent administrative authority of that Member State

It explicitly states otherwise. A judge (or other independent authority) has to be involved. It just sounds like baseless fear mongering (or worse, libertarianism) to me.


Didn't the proposal involve automated scanning of all instant messages? How isn't that equivalent of having an automated system opening every letter and listening to every phone call looking for crimes?

Not from what I can tell. From what I can read, it only establishes a new authority, under the supervision and at the discretion of the Member State, that can, with judicial approval, mandate "the least intrusive in terms of the impact on the users’ rights to private and family life" detection activities on platforms where "there is evidence [...] it is likely, [...] that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material".

That all sounds extremely boring and political, but the essence is that it lets a local authority mandate scanning of messages on platforms that are likely to contain child pornography. That's not a blanket scan of all messages everywhere.


> platforms that are likely to contain child pornography

So every platform, everywhere? Facebook and Twitter/X still have problems keeping up with this, Matrix constantly has to block rooms from the public directory, Mastodon mods have plenty of horror stories. Any platform with UGC will face this issue, but it’s not a good reason to compromise E2EE or mandate intrusive scanning of private messages.

I would not be so opposed to mandated scans of public posts on large platforms, as image floods are still a somewhat common form of harassment (though not as common as it once was).


The proposal is about deploying automated scanning of every message and every image on all messaging providers and email clients. That is indisputable.

It therefore breaks E2EE, as it intercepts the messages on your device and sends them off to whatever third party they are planning to use, before they are encrypted and sent to the recipient.

> It explicitly states otherwise. A judge (or other independent authority) has to be involved. It just sounds like baseless fear mongering (or worse, libertarianism) to me.

How can a judge be involved when we are talking about scanning hundreds of millions if not billions of messages each day? That does not make any sense.

I suggest you re-read the Chat control proposal because I believe you are mistaken if you think that a judge is involved in this process.


> That is indisputable.

I dispute that. The proposal explicitly states it has to be true that "it is likely, despite any mitigation measures that the provider may have taken or will take, that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material;"

> How can a judge be involved

Because the proposal does not itself require any scanning. It requires Member states to construct an authority that can then mandate the scanning, in collaboration with a judge.

I suggest YOU read the proposal, at least once.


You must be trolling.

> it is likely, despite any mitigation measures that the provider may have taken or will take, that the service is used, to an appreciable extent for the dissemination of known child sexual abuse material

That is an absolutely vague definition that basically encompasses all services available today, including messaging providers, email providers and so on. Anything can be used to send pictures these days. So therefore anything can be targeted, ergo it is a complete breach of privacy.

> Because the proposal does not itself require any scanning. It requires Member states to construct an authority that can then mandate the scanning, in collaboration with a judge.

Your assertion makes no sense. The only way to know if a message contains something inappropriate is to scan it before it is encrypted. Therefore all messages have to be scanned to know if something inappropriate is in it.

A judge, if necessary, would only be participating in this whole charade at the end of the process, not when the scanning happens.

This is taken verbatim from the proposal that you can find here: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A20...

> [...] By introducing an obligation for providers to detect, report, block and remove child sexual abuse material from their services, .....

It is an obligation to scan, not a choice based on someone's judgment like a judge's, ergo no one is involved at all in the scanning process. There is no due process, and everyone is under surveillance.

> [...] The EU Centre should work closely with Europol. It will receive the reports from providers, check them to avoid reporting obvious false positives and forward them to Europol as well as to national law enforcement authorities.

Again, no judge involved here. The scanning is automated and happens automatically for everyone. Reports will be forwarded automatically.

> [...] only take steps to identify any user in case potential online child sexual abuse is detected

To identify a user who may or may not have shared something inappropriate, that means that they know who the sender is, who the recipient was, what the message contained and when it happened. Therefore it is a complete bypass of E2EE.

This is the exact same thing that we are seeing now with the age requirements for social media. If you want to ban kids who are 16 years old and under, then you need to scan everyone's ID in order to know how old everyone is, so that you can stop them from using the service.

With scanning, it is exactly the same. If you want to prevent the dissemination of CSAM on a platform, then you have to know what is in each and every message so that you can detect it and report it as described in my quotes above.

Therefore it means that everyone's messages will be scanned, either by the services themselves or because this task will be outsourced to a 3rd party business that will be in charge of scanning, cataloging and reporting its findings to the authorities. Either way the scanning will happen.

I am not sure how you can argue that this is not the case. Hundreds of security researchers have spent the better part of the last 3 years warning against such a proposal; are you so sure of yourself that you think they are all wrong?


> This is taken verbatim from the proposal that you can find here

You're taking quotes from the preamble, which is not legislation. If you scroll down a little you'll find the actual text of the proposal, which reads:

> The Coordinating Authority of establishment shall have the power to request the competent judicial authority of the Member State that designated it or another independent administrative authority of that Member State to issue a detection order

You see, a judge, required for a detection order to be issued. That's how the judge will be involved BEFORE detection. The authority cannot demand detection without the judge approving it.

I really dislike your way of arguing. I thought it was important to correct your misconceptions, but I do not believe you to be arguing in good faith.


Let me address your points here and to make it more explicit, let me use Meta/Facebook Messenger as an example.

> You see, a judge, required for a detection order to be issued. That's how the judge will be involved BEFORE detection. The authority cannot demand detection without the judge approving it.

Your interpretation of the judge's role is incorrect. The issue is not if a judge is involved, but what that judge is authorizing.

You are describing a targeted warrant. This proposal creates a general mandate.

Here is the reality of the detection orders outlined by this proposal:

1. A judicial authority, based on a risk assessment, does not issue a warrant for a specific user John Doe who may be committing a crime.

2. Instead, it issues a detection order to Meta mandating that the service Messenger must be scanned for illegal content.

3. This order legally forces Meta to scan the data from all users on Messenger to find CSAM. It is a blanket mandate, not a targeted one.

This forces Facebook to implement a system to scan every single piece of data that goes through it, even if that means scanning messages before they are encrypted. Meta now has a mandate to scan everyone, all the time, forever.

Your flawed understanding is based on a traditional wiretap.

Traditional Warrant (Your View): Cops suspect Tony Soprano. They get a judge's approval for a single, time-limited wiretap on Tony's specific phone line in his house based on probable cause.

Detection Order: Cops suspect Tony “might” use his house for criminal activity. They get a judge to designate the entire house a "high-risk location." The judge then issues an order compelling the homebuilder to install 24/7 microphones in every room to record and scan all conversations from everyone (Tony, his family, his guests, his kids and so on) indefinitely.

That is the difference that I think you are not grasping here.

With E2E, Meta cannot know if CSAM is being exchanged in a message unless it can see the plain text.

To comply with this proposal, Meta will be forced to build a system that bypasses their own encryption. There is no other way.

This view is shared by security experts, privacy organizations, and legal experts.

You can read this opinion letter from a former ECJ judge who completely disagrees with your view here:

https://www.patrick-breyer.de/wp-content/uploads/2023/11/Vaj...

I am sorry if you think that I am arguing in bad faith. I am not.

While there is nothing I can do to make you like my arguing style, just know that I am simply trying to make you understand your misconceptions about this law.


The ombudsman will say some strong words and everything will continue as is.

The EU is working on a system for age verification that won't identify you to the platform. The details are of course complicated, but you can imagine an OpenID-like system run by the government that only exposes whether you're old enough for Y.

The platform asks your government if you're old enough. You identify yourself to your government. Your government responds to the question with a single Boolean.
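
To illustrate the shape of that exchange, here's a minimal sketch (hypothetical names and flow only, no real protocol or crypto): the platform receives an opaque token and, when it redeems it, a single yes/no answer; it never learns who you are.

    from dataclasses import dataclass, field
    import secrets

    # Hypothetical sketch of the flow described above, not any real EU scheme.
    @dataclass
    class Government:
        birth_years: dict[str, int]                             # who its citizens are
        issued: dict[str, bool] = field(default_factory=dict)   # token -> answer

        def issue_token(self, citizen_id: str, min_age: int, year: int = 2025) -> str:
            # The citizen identifies themselves to their own government only.
            old_enough = year - self.birth_years[citizen_id] >= min_age
            token = secrets.token_urlsafe(16)
            self.issued[token] = old_enough
            return token

        def check_token(self, token: str) -> bool:
            # The platform redeems the token and learns one Boolean, nothing else.
            return self.issued.pop(token, False)

    gov = Government(birth_years={"alice": 1990, "bob": 2012})
    print(gov.check_token(gov.issue_token("alice", min_age=18)))  # True
    print(gov.check_token(gov.issue_token("bob", min_age=18)))    # False

In a real system the token would presumably be a signed or zero-knowledge credential rather than a server-side lookup, so the government also can't see when or where it gets redeemed, which is exactly the concern raised further down the thread.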


Our German national ID supports just verifying that you are over age X, with no other info given.

But why would you give your id?

You don't need to, that's the thing. The site requests "are you over 18" and you use your ID to prove it without them getting any other information from it. Requires a phone with NFC, but the app is open source

And the reference implementation requires Google Play Integrity attestation, so you are forced to use a Google-approved device with Google-approved firmware, and a Google account to download the application, in order to participate. Once this is implemented, you are no longer a citizen of the EU but a citizen of Google or Apple and a customer of the EU.

A quick google (on my phone, so not certain) says it works with microG as of August.

Yeah, sorry, I mixed up the old German AusweisApp and the euID reference app.

How does the site verify that the ID being used for verification is the ID of the person that is actually using the account? How does the site verify that a valid ID was used at all?

If the app is open source, what stops someone from modifying it to always claim the user is over 18 without an ID?


Not that I understand it, but AFAIK that's cryptography doing its thing.

And using someone else's ID and password is the same problem as with every other method of auth.


Hopefully the protocol is open source too. I'd hate to find that it just works on iOS and Google-certified Android.

I think that ends up being a more difficult problem than just open source. There will have to be some cryptography at play to make sure the age verification information is actually attested by your government.

It would be possible for them to provide an open-source app, but design the cryptography in such a way that you couldn't deploy it anyway. That would make it rather pointless.

I too hope they design that into the system, which the Danish authorities unfortunately don't have a good track record of doing.
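
For the attestation part, here's a bare-bones sketch of the idea, using Ed25519 from the third-party "cryptography" package purely as a stand-in (the real eID schemes use different, more privacy-preserving primitives): the government signs a statement like "over 18", anyone holding the government's public key can verify it, and nobody without the private key can forge it.

    # Sketch only: Ed25519 as a stand-in for whatever the real scheme uses.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    government_key = Ed25519PrivateKey.generate()   # held by the government
    government_pub = government_key.public_key()    # shipped to verifiers

    # The only claim the platform ever sees, signed by the government.
    claim = b"over_18=true;nonce=8f3a"  # a nonce prevents replaying old claims
    signature = government_key.sign(claim)

    try:
        government_pub.verify(signature, claim)
        print("attested: user is over 18")
    except InvalidSignature:
        print("attestation invalid")

    # Tampering with the claim makes verification fail.
    try:
        government_pub.verify(signature, b"over_18=true;nonce=FORGED")
    except InvalidSignature:
        print("forged claim rejected")

Which is also why "the app is open source" doesn't settle it on its own: whoever controls the signing keys and any device-attestation requirements still decides who gets to deploy and use a working verifier.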


Should all be open, but I don't know for sure. Works with ungoogled android unless something changed.

https://github.com/Governikus/AusweisApp


That's very cool and good to hear. Thanks for sharing!

It needs to be scaled to the EU level.

*Only for Google Android and Apple iOS users. Everyone else who doesn't want to be a customer of these two, including GrapheneOS and LineageOS users, will have to upload scans of identity papers to each service, like the UK clusterfuck.

Source: I wrote to Digitaliseringsstyrelsen in Denmark, where this solution will be implemented next year as a pilot, and they confirm that the truly anonymous solution will not be offered on other platforms.

Digitaliseringsstyrelsen and the EU are truly, utterly fucking us all over by locking us in to the trusted computing platforms offered by the current American duopoly on the smartphone market.


This sounds like a temporary issue.

> This sounds like a temporary issue.

There is nothing more permanent than a temporary solution.


There is nothing less permanent than software. Permanent solutions in software last 5 years.

Why? It's not because a hardware-token-based solution that would work on desktops is technically impossible; they literally wrote to me that they have no plans to investigate the possibility of offering one. This is officially the plan for the permanent solution.

Permanence in software is measured in half decades.

This is an acceptable solution only if the government doesn't know which platform you are trying to access either.
