headmelted's comments

It's 1AM in San Francisco right now. I don't envy the person having to call Matthew Prince and wake him up for this one. And I feel really bad for the person that forgot a closing brace in whatever config file did this.

Agreed, I feel bad for them. But mostly because Cloudflare's workflows are so bad that you're seemingly set up, repeatedly, for really public failures. How does this keep happening without leadership's heads rolling? The culture clearly is not fit for their level of criticality.

> The culture clearly is not fit for their level of criticality

I don't think anyone's is.


How often do you hear of Akamai going down? And they host a LOT more enterprise/high-value sites than Cloudflare.

There's a reason Cloudflare has been really struggling to get into the traditional enterprise space and it isn't price.


A quick Google search turned up an Akamai outage in July that took Linode down, and two more in 2021. At that scale nobody's going to come up smelling like roses. I mostly dealt with Amazon crap at megacorp, but nobody who had to deal with our Akamai stuff had anything kind to say about them as a vendor.

At first blush it's getting harder to "defend" use of Cloudflare, but I'll wait until we get some idea of what actually broke. For the time being I'll save my outrage for the AI scrapers that drove everyone into Cloudflare's arms.


Was it a CDN or Linode failure?

The last place I heard of someone deploying anything to Akamai was 15 years ago in FedGov.

Akamai was historically only serving enterprise customers. Cloudflare opened up tons of free plans, new services, and basically swallowed much of that market during that time period.


> I don't envy the person having to call Matthew Prince

They shouldn't need to do that unless they're really disorganised. CEOs are not there for day to day operations.


> And I feel really bad for the person that forgot a closing brace in whatever config file did this.

If a missing closing brace can take your whole infra down, my guess is that we'll see more of this.
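(If it really was a malformed config, the boring fix is a parse check in the deploy pipeline before anything ships. A minimal sketch, assuming a JSON config and jq on the box; the file name is hypothetical:

    # Pre-deploy gate: refuse to ship a config that doesn't parse.
    # 'jq empty' parses the file and outputs nothing; a missing brace fails it.
    jq empty config.json || { echo "config.json does not parse; aborting deploy"; exit 1; }

The same idea applies to whatever format the config actually uses.)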


"In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary. Dec 05, 2025 - 07:00 UTC"

No need. Yikes.


Claude offline too. 500 errors on the web and the mobile app has been knocked out.

I had to switch to Gemini for it to help me form a thought so I could type this reply. It's dire.

Seems like it. Claude just went offline and is throwing Cloudflare 500 errors on the web interface.

I was under the impression (admittedly from an article I read a couple of years ago) that the consensus within the company was pretty much always that robo-taxis were one man’s pipe dream.

Weren’t there also disclosure documents a couple of years ago when they were trying to license autopilot that said they believed internally they were at level 2 as opposed to 4/5? (I might be remembering this part wrong)


> I was under the impression (admittedly from an article I read a couple of years ago) that the consensus within the company was pretty much always that robo-taxis were one man’s pipe dream.

If robo-taxis were ready with the kind of economics outlined by Musk it would be financially irresponsible to actually sell the cars to others instead of just building a massive Tesla fleet and pivoting towards transportation services.

Tesla's still selling their cars? If so, then they're not robo-taxis.

Edit: The other option for Tesla would be selling the cars for a high enough premium to offset the lost taxi revenue. The fact that Tesla seems to be in a price war with other EV makers is not a promising sign for robo-taxis.


Everyone knows they’re at level 2. Level 4/5 is completely hands off, no supervision.

Not even their Supervised Full Self Driving does that


Robo-taxis certainly aren't just one man's dream. Whether or not they are possible in the next 50 years is another matter, but plenty of people want them and are willing to invest in developing them.


Right but when do I get my cheap Tesla?


> because FC runs on any hardware even without dedicated GPUs

Twenty years of memes disagrees wholeheartedly


You are mixing it up with Crysis?


I absolutely am! Doh!


It's fair, though - it was the original Crysis

There was a time when it took fairly impressive hardware. I think this was one of the first popular games to get a 64-bit upgrade.


> I think this was one of the first popular 64bit games, upgrading into it

I don't think so. I remember struggles and patches necessary to get it to run when I moved to a 64-bit machine a few years after it came out and wanted to replay it.


A trip down memory lane :) The patch for Far Cry to become 64bit:

https://www.anandtech.com/show/1677

They were technically beaten by Chronicles of Riddick, which shipped something 64-bit on the disc.

Looking back, this did little for performance. I suspect the memory limitations and the introduction of SMP around that time account for a lot of the warts we recall.


I think I remember seeing someone run Crysis in software rendering on a 128-core AMD Epyc and get a decent frame rate.


It’s great that this isn’t hurting them but it leaves out a lot that makes me a bit nervous about this being taken as advice.

They’re advocating deploying a binary as preferable to using docker, fair enough, but what about the host running the binary? One of the reasons for using containers is to wrap your security hardening into your deployment so that anytime you do need to scale out you have confidence your security settings are identical across nodes.

On that, the monolith talked about here can be hosted on a single VPS, again that’s great (and cheap!), but if it crashes or the hardware fails for any reason that’s potentially substantial downtime.

The other worry I’d have is that tying everything into the monolith means losing any defence in depth in the application stack - if someone does breach your app through the frontend then they’ll be able to get right through to the backend data-store. This is one of the main reasons people put their data store behind an internal web service (so that you can security group it off in a private network away from the front-end to limit the attack surface to actions they would only have been able to perform through a web browser anyway).
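(For what it's worth, even the container route gives you a cheap version of that segmentation. A rough sketch with Docker's internal networks; the image and service names are made up:

    # The data store sits on an internal-only network; nothing outside the
    # host can route to it. Only the API container joins both networks.
    docker network create --internal backend_net
    docker network create frontend_net
    docker run -d --name db  --network backend_net postgres:16
    docker run -d --name api --network frontend_net -p 443:8443 myorg/api:latest
    docker network connect backend_net api

You get the same effect as a security-grouped private subnet, just scoped to one host.)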


>They’re advocating deploying a binary as preferable to using docker, fair enough, but what about the host running the binary? One of the reasons for using containers is to wrap your security hardening into your deployment so that anytime you do need to scale out you have confidence your security settings are identical across nodes.

There is no universe in which _increasing your attack surface_ increases your security.


Considering the vast majority of exploits are at the application level (SQLi, XSS, etc), putting barriers between your various applications is a good thing to do. Sure, you could run 10 apps on 10+ VMs, but it's not cost efficient, and then you just have more servers to manage. If the choice is between run 10 "bare metal" apps on 1 VM or run 10 containers on 1 VM, I'll pick containers every time.

At that point, why are we making a distinction when we do run 1 app on one VM? Sure, containers have some overhead, but not enough for it to be a major concern for most apps, especially if you need more than 1 VM for the app anyway (horizontal scaling). The major attack vector added by containers is the possibility of container breakout, which is very real. But if you run that 1 app outside the container on that host, they don't have to break out of the container when they get RCE.


The VM/container distinction is less relevant to this discussion than you might think; both Amazon ECS and fly.io run customer workloads in VMs (“microVMs” in their lingo).


I agree in principle but not in practice here.

If you’re using a typical docker host, say CoreOS, following a standard production setup, then running your app as a container on top of that (using an already hardened container that’s been audited), that whole stack has gone through a lot more review than your own custom-configured VPS. It also has several layers between the application and the host that would confine the application.

Docker would increase the attack surface, but a self-configured VPS would likely open a whole lot more windows and backdoors just by not being audited/reviewed.
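To make that concrete, the per-container hardening I mean is mostly a handful of runtime flags. A sketch; the image name and limits are placeholders:

    # Drop all capabilities, forbid privilege escalation, run as a non-root
    # UID, and mount the container's root filesystem read-only.
    docker run -d \
      --read-only \
      --cap-drop=ALL \
      --security-opt no-new-privileges \
      --user 10001:10001 \
      --memory 512m \
      myorg/app:1.2.3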


You'd have to be utterly incompetent to make a self-configured VPS have more attack surface.

I have a FreeBSD server, three open ports: SSH with cert-login only, and http/https that go to nginx. No extra ports or pages for potentially vulnerable config tools.
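The sshd side of that is a few lines of config. Roughly (exact option names depend on your OpenSSH version):

    # /etc/ssh/sshd_config -- key/cert auth only, no root logins
    PermitRootLogin no
    PasswordAuthentication no
    KbdInteractiveAuthentication no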


Given the huge number of wide open production Mongo/ES/etc. instances dumped over the years, I wager having heard of ufw puts you among the top 50% of people deploying shit.


This whole thread is incomprehensible to me.

I guess no one knows how to harden an OS anymore so we just put everything in a container someone else made and hope for the best.


I don’t think we need to be calling people incompetent over a disagreement.

Are you suggesting that not opening the ports to any other services means they’re no longer a vulnerability concern?

That would be... concerning.


On the other hand. If by using containers it has become more feasible for your employees to use something like AppArmor, the end result may be more secure than the situation where the binary just runs on the system without any protection.
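e.g. loading a custom profile and attaching it to a container, something like this (the profile name is hypothetical):

    # Load (or reload) the profile, then run the container confined by it.
    apparmor_parser -r /etc/apparmor.d/my-app-profile
    docker run --security-opt apparmor=my-app-profile myorg/app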


Containers don't really increase attack surface, it's all stuff provided by the OS anyway. Docker just ties it all together and makes things convenient.


> One of the reasons for using containers is to wrap your security hardening into your deployment so that anytime you do need to scale out you have confidence your security settings are identical across nodes.

This is false. Or do you think your host is secured just by installing Docker? And when you scale, how do you get additional hosts configured?

The truth is, when you use Docker you need to ensure not only that your containers are secure, but also your host (the system running your containers). And when you scale up and need to deploy additional hosts, they need to be just as secure.

And if you're using infrastructure as code and configuration as code, it does not matter if you are deploying a binary after configuring your system, or Docker.


Complexity is the culprit in any scenario. However, if we focus on a vanilla installation of Docker, then the namespace isolation alone can be viewed as a step up from running directly on the OS. Of course, complexity means a vulnerability in the Docker stack exposes you to additional risk, whereas a systemd service running as a service account is likely to contain any 0day better.
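And to be fair to the non-container side, systemd gives that service account a lot of confinement for free these days. A sketch of the relevant unit options; the service and user names are made up:

    # /etc/systemd/system/myapp.service -- excerpt
    [Service]
    User=appsvc
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    CapabilityBoundingSet=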


> They’re advocating deploying a binary as preferable to using docker, fair enough, but what about the host running the binary? One of the reasons for using containers is to wrap your security hardening into your deployment so that anytime you do need to scale out you have confidence your security settings are identical across nodes.

There are tools that make "bare metal" configuration reproducible (to varying degrees), e.g. NixOS, Ansible, building Amazon AMI images.
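e.g. a minimal Ansible sketch that makes the firewall posture of every new host identical (the ufw module lives in the community.general collection; host group and ports are illustrative):

    # harden.yml -- baseline applied to every app server
    - hosts: app_servers
      become: true
      tasks:
        - name: Allow SSH and HTTPS only
          community.general.ufw:
            rule: allow
            port: "{{ item }}"
          loop: ["22", "443"]
        - name: Default-deny everything else
          community.general.ufw:
            state: enabled
            policy: deny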


All of which would be better than what the post is advocating and I totally agree with this.


I never understood how one “breaches an app through the frontend”. SQLi messes with your data store, natively (no RCE). XSS messes with other users, laterally. But how does one reach from the frontend all the way through, liberally? Are people running JavaScript interpreters with shell access inside of their Go API services and call eval on user input? It’s just so far fetched, on a technical level.
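Agreed that SQLi by itself is data-level, not RCE. For completeness, the standard mitigation in a Go service is just bound parameters; a toy sketch (the schema is made up, and the placeholder syntax depends on your driver):

    package app

    import "database/sql"

    // Vulnerable: building SQL by concatenation lets user input become SQL text:
    //   q := "SELECT id FROM users WHERE name = '" + name + "'"
    // Safe: the driver sends name as a bound parameter, never as SQL.
    func findUser(db *sql.DB, name string) (int64, error) {
        var id int64
        err := db.QueryRow("SELECT id FROM users WHERE name = $1", name).Scan(&id)
        return id, err
    }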


Ahh yes, security through obscurity - if we make it so complex we can’t understand it then no one else can either, right?

The important thing is making walls indestructible, not making more walls. Interfaces decrease performance and increase complexity


Literally the entire guiding principle for security architecture for the past decade or even more has been that "there is no such thing as an indestructible wall".


I agree, perfection isn’t a realistic expectation. I also think effort spent building better defenses leads to fewer exploits over time than adding more of the same defenses. The marginal cost of bypassing a given defense is far lower than the initial cost to bypass a new defense


Literally no-one said that.

(Some of) the reasons why you would do this are explained (I thought clearly) above. None of this is security through obscurity.


That seems like the worst option. Everything up to the free tier would stay there forever with no way for you to ever request it to be deleted.


Turn on Advanced Data Protection before you rip up the key. Then it's all as good as deleted.


That’s a rather generous assumption.

Do Apple definitely not retain a key? If they don't, is the encryption quantum-secure?


> Do Apple definitely not retain a key?

If this is the threat vector you’re worried about, you shouldn’t have had anything in iCloud (or any cloud for that matter) to begin with, rendering this debate completely moot.


> Were Linux and git made with Scandinavian longing?

He’ll never tell you. He’ll just stare sullenly at you on a crisp November evening through the frost-coated glass of your remote log cabin until slowly he’ll raise one hand bearing his middle finger, without breaking eye contact or changing his expression.

“Pass that along to Jensen Huang” he’ll whisper. Then with a surge of the creeping blizzard outside your window, he’ll be gone forever.


Who was that, Huang would inquire.

Oh... Just my arty ex.

