Hacker News | Silhouette's comments

I too have used Caddy on multiple production systems. It's a great bit of software.

I try to avoid engaging in online flame wars, but I will say that the developers - including Francis - have been nothing but helpful and courteous to me personally, and I've also learned a lot from their numerous positive contributions to Caddy-related forums.


Unfortunately, MS knows it’s a captive audience and enterprises aren’t rushing to exit Microsoft anytime soon so they continue to get away with it.

That is the traditional spin, but if they pull many stunts like this, how long will it continue? Corporate IT teams are infamous for being slow and unwieldy, but that's not entirely fair because they are usually also the ones on the hook if anything goes wrong. Screw up some sort of data protection or regulatory compliance issue and that can have serious and lasting implications for the entire business. Move fast and break things is not what you want when x% of your global turnover is at stake if a regulator decides to make an example of you. Letting anyone else play with your toys is definitely not what you want in that scenario - why else do Windows Enterprise and Education editions not try to force the same hostile measures onto their customers as all the lower tiers?

This looks like a huge misstep to me - the kind of mess that could actually be big enough to move the needle. And for Microsoft the greatest danger is probably the needle moving enough for everyone to see it. Once "no-one ever got fired for buying Microsoft" becomes "except for that guy who just did", it really could be the beginning of the end for them.

Microsoft also seems to have just abandoned one of the most powerful brands ever - Office - in a move that I can only assume was intended to show that Musk wasn't actually the craziest PR guy in the world when he renamed Twitter. There seems to be an outbreak of delusional thinking in Redmond, and if this stuff doesn't get backtracked quickly I don't see a happy ending for MS here.


Who are these companies going to go to? Moving away from MS products is notoriously hard for larger enterprises.


Apple for hardware. Maybe a few orgs even try PCs running Linux as we've seen in some governments looking to save $$$.

Google or countless smaller but established services for all the online stuff. The days when all businesses used Windows and Office for everything and the competitors were half-baked also-rans are long gone.

There is no lack of IT companies that will take your money if you're willing to move out of the Microsoft bubble. This is a momentum problem not a lack of competition problem.


Apple hardware can work, so long as the business isn't dependent on any legacy desktop Windows stuff (but there are always VMs or RDP in that case).

But it is still somewhat a competition problem. Google Workspace has better collaboration/live editing IMO but still lacks in some areas. Microsoft's compliance/DLP stuff is better, Intune, while not the best, has actually shaped up to be a pretty good MDM and works on all OSes, both desktop and mobile (including Linux), and you get endpoint EDR bundled as well as zero-trust VPN (Global Secure Access).

Google has no answer to a big chunk of what locks big enterprise into M365, but Workspace is fine for medium and smaller companies that don't need all of what MS has.

But I think those smaller companies are where the disruption will start.


Ah yes, killed by Google...

Google is a pile of its own risks. For all the problems MS has, they keep backwards compatibility running for years, if not decades.


MS used to be legendary for its efforts to maintain backward compatibility. I think it has lost a lot of credibility after the way it has managed Windows forward compatibility and support recently though.

It is apparently scrambling to offer extended support schemes right now rather than deal with the inevitable fallout of forcibly retiring Windows 10 on so many still-functional devices with no upgrade path to 11 even available.

It hasn't been very long since it forcibly obsoleted the Windows 7/8 generations, which were also still in use on a significant number of devices at the time but did not give Microsoft the level of control over (or intrusion into, if you prefer) its users' devices that it evidently wanted.

The difference this time is that it's showing contempt for even large customers' control. Corporate IT departments, with all those externally imposed constraints, can resist compromising and surrendering control much more strongly than most home users or small organisations.


Voice recognition might be the only UI worse for safety and usability than a touchscreen for normal driving operations.

Not that you're wrong about the privacy angle either.


Why? Talking won't prevent you from focusing on the road, will it?


Yes: attempting to hold a conversation has been found to diminish focus on driving to a large extent. I remember seeing a study on this, and can vouch for it from personal experience.

You can still do most of the driving, but "at the edges", when a quicker reaction time is needed, it becomes noticeable. Similar to, ahem, drunk driving - though obviously not as bad, and you can stop a conversation whenever needed.

Obviously, talking to a computer in your car would be less taxing than talking to a person, but when it misrecognizes the input, it might be the opposite.


Because voice recognition is horribly imprecise. If you're controlling essential functions for driving then you need controls that are efficient, predictable, and reliable.


Sounds like an implementation problem, not a problem with voice control.

We have a 20-year-old navi with voice control. You can't just say free-form things, but it's very deterministic. Most commands you want to say aren't free-form, so this doesn't really matter. It also confirms everything, so it will never do something without you knowing. It also has the best voice I've come across. Natural, precise, short AND friendly; no clue why all these modern voices with way more compute all sound like garbage.


Talking with your phone because driving is boring as shit is fairly harmless

It really isn't. Driving while distracted by a phone conversation is comparable in safety to driving while heavily intoxicated or while barely awake. This is not a hypothetical. There is actual experimental research behind it and the evidence is consistent and overwhelming. There is no justification and no defence. It's not a debate. It kills people.

Everyone is right in saying that those drivers should take public transport instead if they want to work or call or play a game while they are on the move. And yes - maybe in time there will be self-driving options that will make the whole issue obsolete. But right now anyone driving but not paying full attention is a danger to themselves and others and this is not something we should condone or attempt to justify. It kills people.


Note that in all cases, Signal could/should still fully encrypt this data before pushing it to the provider; the backup wouldn't be expected to be "legible" to the user.

That seems like an unhelpful limitation for a lot of people. For me - and as far as I know literally everyone I communicate with using Signal - the reason to use it is the E2EE for the messages. Once we have the messages or media on our own devices we're fine with having control over them ourselves. By all means also provide an option to create a secured archive for those who want it. But as long as the data can only be read using a specific app on a specific device then whatever you're creating isn't really a backup for a lot of practical purposes.


Agree with the sentiment, but I can understand why they don't offer this. Rational or not, people will feel less safe if all their messages can just be easily exported to plaintext. There are a few scenarios where this might matter, like the 'evil maid' attack where someone briefly has access to your unlocked phone.

But I just use this project to export my signal messages to plaintext: https://github.com/tbvdm/sigtop

I have it run automatically on a schedule and it's great. It makes for easy full-text searching of my message history.
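
In case it's useful to anyone, here's a minimal sketch of the kind of scheduled export I mean. Treat it as a sketch: the destination path is just an example, and "export-messages" is the sigtop subcommand as I remember it, so check the project's README before relying on it.

    #!/usr/bin/env python3
    # Periodic plaintext export of Signal Desktop messages via sigtop.
    # Assumes sigtop is installed and on PATH; "export-messages" is the
    # subcommand name as I recall it - verify against sigtop's README.
    import subprocess
    from datetime import datetime
    from pathlib import Path

    # Write each run into its own dated subdirectory so old exports are kept.
    export_root = Path.home() / "signal-export"          # example destination
    run_dir = export_root / datetime.now().strftime("%Y-%m-%d")
    run_dir.mkdir(parents=True, exist_ok=True)

    # Ask sigtop to dump all conversations as plain text into run_dir.
    subprocess.run(["sigtop", "export-messages", str(run_dir)], check=True)
    print(f"Exported Signal messages to {run_dir}")

Run it from cron or a systemd timer and point whatever full-text search tool you like (even plain grep) at the export directory.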


Rational or not, people will feel less safe if all their messages can just be easily exported to plaintext.

IMHO the point is that it's not rational. Signal is as vulnerable to the analogue hole as any other messaging platform that displays the messages on a phone screen. There was never any credible way to prevent someone who has received your message from keeping or passing on the information it contained. The idea is as unrealistic as the "disappearing message/photo" applications when confronted with any cheap phone or camera separate to the one showing that message/photo. Ultimately if you don't trust the recipient of your information to treat it as you would wish then your only choice is not to send them the information in the first place.


People aren't rational/perfect and Signal wants to keep them feeling safe? ¯\_(ツ)_/¯

(and IMHO there are edge-case scenarios where the additional friction in exporting messages provides some protection, particularly when your threat model involves imperfect actors)

edit: here's an example. Let's say I use 4-week disappearing messages with everyone I chat with. That's imperfect of course, but let's say right now only about 5% of the people I chat with are proactively backing up/screenshotting my disappearing messages and the rest let messages expire. If Signal rolled out an "export all messages to plaintext" feature, then suddenly that 5% might become 50%. And now a lot more of my messages which used to disappear are being preserved.

If everyone I chat with is a perfect 'threat actor' that always backs up every message they ever receive, then there's no difference at all. But most people aren't, so practically there's a big difference because now exporting to plaintext (and bypassing time restrictions) is trivial for the masses.


I appreciate Signal and that the Signal developers provide a very useful service to the community for free.

I also know a large number of people who won't use it because it locks your messages up in its own walled garden. People use apps like this precisely because they want to have control over their own communications without any third parties interfering! I have never understood what kind of threat model they think they're protecting against by not letting people take their own backups and store them according to their own preferences. Whatever the reasons it is clearly a deterrent to wider adoption.

This announcement might seem like progress but I doubt it will convince any of the people I know who won't use it because at the end of the day it's still a walled garden. If and when the promise of the comments near the end of the announcement is realised and we can back up our own messages and media freely from our own devices to our own (presumably also secure) backup facilities then it will be much more interesting.


Yes - there's been a very obvious shift in the "official" React positions over the last 2-3 years. It's regrettable that they have moved so sharply away from the simplicity and "doing one thing well" philosophy that made React so successful in the first place. I've used React since those early days and built successful, long-lived projects with it so I'm genuinely sad to see it fall so hard.

Objectively that sadness does not change reality however. At least within my own professional network no-one seems comfortable starting a new project using React today. Almost 100% of the paid front end work I've been involved with myself or discussed with others recently is now using alternatives - most often Vue though I've seen other choices at least seriously considered. I've even had a couple of recruiters I haven't worked with for years suddenly reappear desperately looking for someone to take on React work and openly admit it's because they are struggling to find anyone good who wants to go near it. All of this is a sharp contrast with the market of the early 2020s when React was clearly the preferred front end choice. And all of this is surely a direct response to the push to make React a full stack framework, the added complexity that has introduced, and the apparent capture of official React development by Vercel.


You could just not watch Netflix.

The digital hermit argument is not going to resonate with 99.9% of users. People buy devices because they want to do stuff. Telling them they shouldn't do what they want to do is never going to convince anyone.

The real question is where are the representatives who are supposed to be acting in the interests of their people while all this is happening? We seem to have regulatory capture on a global scale now where there isn't really anyone in government even making the case that all these consumer-hostile practices should be disrupted. They apparently recognize the economic argument that big business makes big bucks but completely ignore the eroding value of technology to our quality of life.


This is ultimately a form of collusion and anti-competitive behaviour - practices that we prohibit in other scenarios because we consider them harmful to our society. It's obvious why some large organisations would like more control over our lives. It's not obvious why we should let them have it.

Unfortunately for now it seems our representatives are letting them have it so personally I'm rooting for a snake-eating-its-tail moment as a result of Windows 10 losing support. There will inevitably be erosion of security and support for applications on Windows 10 once Microsoft declares it yesterday's OS - as we've seen with past versions of Windows. This time there is the added complication that a lot of perfectly good hardware can't run Windows 11 - largely because of the TPM/verification issue we're discussing.

So probably a lot of people who haven't moved to 11 yet aren't going to unless their current computer breaks and they get 11 by default when they buy a replacement. If the charts are correct then 11 only recently overtook 10 in user numbers. After all this time and despite all the pressure from Microsoft and the imminent EOL of Windows 10 over 40% of Windows users are still running that version. (https://gs.statcounter.com/os-version-market-share/windows/d...) So how exactly do the big organisations that want to control the client plan to deal with that over the next few years?

Unfortunately unless there is also some sort of intervention to deal with the collusion and market manipulation by vested interests I doubt enough Windows 10 refugees will jump to open platforms when their current devices fail for those open platforms to reach a critical mass of users. If five years from now Windows 10 user levels are negligible and almost all of the former users are now on Windows 11+ by default then the controlled client side probably wins effectively forever. I think it would take something dramatic happening that increased the desktop market share of open alternatives like Linux to say 10+% to avoid this fate. The only likely source of that drama I can see is if Valve's support for gaming on Linux encourages significant numbers of home users to switch and then general public awareness that you don't have to run Windows or macOS increases.


Am I the only one who finds the branding and privacy policies around these AI services (possibly deliberately) confusing?

For example Anthropic have an Anthropic Console that they appear to consider quite distinct from Claude.ai. Do these share a privacy policy and related settings? How do either of these fit in with the named plans like Pro and Max? What are you actually paying for when you give them money for the various different things they charge for? Is all API use under their Commercial Terms even if it's a personal account that is otherwise under the Consumer Terms? Why isn't all of this obvious and transparent to users?

OpenAI don't seem to be any better. I only just learned from this HN discussion that they train on personal account conversations. As someone privacy-conscious who has used ChatGPT - even if only a few times for experiments - I find the fact that this wasn't very clearly stated up front to be extremely disturbing. If I'd known about it I would certainly have switched off the relevant setting immediately.

I get that these organisations have form for training on whatever they can get their hands on whether dubiously legal or not. But training on users' personal conversations or code feels like something that should require a very clear and explicit opt-in. In fact I don't see how they can legally not have that first in places like the EU and UK that have significant data protection legislation.


Generally, if you are not paying full price for something (in this case, paying API rates), you are covering the additional cost with your data. This is true for pretty much all modern services.


That's a very convenient way to rationalize behaviour that is intrusive and potentially dangerous, and that the people at risk almost certainly didn't ask for or meaningfully agree to.

