headmelted's comments

I was under the impression (admittedly from an article I read a couple of years ago) that the consensus within the company was pretty much always that robo-taxis were one man’s pipe dream.

Weren’t there also disclosure documents a couple of years ago when they were trying to license autopilot that said they believed internally they were at level 2 as opposed to 4/5? (I might be remembering this part wrong)


> I was under the impression (admittedly from an article I read a couple of years ago) that the consensus within the company was pretty much always that robo-taxis were one man’s pipe dream.

If robo-taxis were ready with the kind of economics outlined by Musk it would be financially irresponsible to actually sell the cars to others instead of just building a massive Tesla fleet and pivoting towards transportation services.

Tesla's still selling their cars? If so, then they're not robo-taxis.

Edit: The other option for Tesla would be selling the cars for a high enough premium to offset the lost taxi revenue. The fact that Tesla seems to be in a price war with other EV makers is not a promising sign for robo-taxis.


Everyone knows they’re at level 2. Level 4/5 is completely hands off, no supervision.

Not even their Supervised Full Self-Driving does that.


Robo-taxis certainly aren’t just one man’s dream. Whether or not they are possible in the next 50 years is another matter, but plenty of people want them and are willing to invest in developing them.


Right but when do I get my cheap Tesla?


> because FC runs on any hardware even without dedicated GPUs

Twenty years of memes disagrees wholeheartedly


You are mixing it up with Crysis?


I absolutely am! Doh!


It's fair, though - it was the original Crysis

There was a time when it took fairly impressive hardware. I think this was one of the first popular 64-bit games, via a patch that upgraded it.


> I think this was one of the first popular 64-bit games, via a patch that upgraded it

I don't think so. I remember struggles and patches necessary to get it to run when I moved to a 64-bit machine a few years after it came out and I wanted to replay it.


A trip down memory lane :) The patch for Far Cry to become 64-bit:

https://www.anandtech.com/show/1677

They were technically beaten by Chronicles of Riddick, which shipped something on disc

Looking back, this did little for performance. I suspect the memory limitations and the introduction of SMP around that time are behind a lot of the warts we recall


I think I remember seeing someone run Crysis in software on a 128-core AMD Epyc and get a decent frame rate.


It’s great that this isn’t hurting them, but it leaves out a lot that makes me a bit nervous about this being taken as advice.

They’re advocating deploying a binary as preferable to using docker, fair enough, but what about the host running the binary? One of the reasons for using containers is to wrap your security hardening into your deployment so that anytime you do need to scale out you have confidence your security settings are identical across nodes.

On that note, the monolith discussed here can be hosted on a single VPS; again, that’s great (and cheap!), but if it crashes or the hardware fails for any reason, that’s potentially substantial downtime.

The other worry I’d have is that tying everything into the monolith means losing any defence in depth in the application stack: if someone does breach your app through the frontend, they’ll be able to get right through to the backend data store. This is one of the main reasons people put their data store behind an internal web service, so you can security-group it off in a private network away from the front-end and limit the attack surface to actions an attacker could have performed through a web browser anyway.
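
A minimal Go sketch of that layering (the private address, port, and route here are all made up for illustration): the data-store service binds only to a private interface, so even a fully compromised front-end can only invoke the same few actions a browser could have reached anyway.

    package main

    // Hypothetical internal data service. It listens only on a private
    // address (10.0.0.5 is invented for this sketch), so the public
    // front-end is the only thing that can reach it, and only via the
    // whitelisted routes below.

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()

        // Expose only the operations a browser user could trigger anyway.
        mux.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
            if r.Method != http.MethodGet {
                http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
                return
            }
            json.NewEncoder(w).Encode([]string{"order-1", "order-2"})
        })

        // Bind to the private interface only; a security group restricting
        // 10.0.0.0/24 to the front-end hosts does the rest.
        log.Fatal(http.ListenAndServe("10.0.0.5:9000", mux))
    }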


>They’re advocating deploying a binary as preferable to using docker, fair enough, but what about the host running the binary? One of the reasons for using containers is to wrap your security hardening into your deployment so that anytime you do need to scale out you have confidence your security settings are identical across nodes.

There is no universe in which _increasing your attack surface_ increases your security.


Considering the vast majority of exploits are at the application level (SQLi, XSS, etc.), putting barriers between your various applications is a good thing to do. Sure, you could run 10 apps on 10+ VMs, but that's not cost-efficient, and then you just have more servers to manage. If the choice is between running 10 "bare metal" apps on 1 VM or running 10 containers on 1 VM, I'll pick containers every time.

At that point, why are we making a distinction when we run one app on one VM? Sure, containers have some overhead, but not enough to be a major concern for most apps, especially if you need more than one VM for the app anyway (horizontal scaling). The major attack vector added by containers is the possibility of container breakout, which is very real. But if you run that one app outside a container on the host, an attacker doesn't have to break out of a container once they get RCE.


The VM/container distinction is less relevant to this discussion than you might think; both Amazon ECS and fly.io run customer workloads in VMs (“microVMs” in their lingo).


I agree in principle but not in practice here.

If you’re using a typical Docker host, say CoreOS, following a standard production setup, and running your app as a container on top of it (using an already-hardened container that’s been audited), then that whole stack has gone through a lot more review than your own custom-configured VPS. It also has several layers between the application and the host that would confine the application.

Docker would increase the attack surface, but a self-configured VPS would likely open a whole lot more windows and backdoors just by not being audited/reviewed.


You'd have to be utterly incompetent to make a self-configured VPS have more attack surface.

I have a FreeBSD server, three open ports: SSH with cert-login only, and http/https that go to nginx. No extra ports or pages for potentially vulnerable config tools.


Given the huge number of wide open production Mongo/ES/etc. instances dumped over the years, I wager having heard of ufw puts you among the top 50% of people deploying shit.


This whole thread is incomprehensible to me.

I guess no one knows how to harden an OS anymore so we just put everything in a container someone else made and hope for the best.


I don’t think we need to be calling people incompetent over a disagreement.

Are you suggesting that not opening the ports to any other services means they’re no longer a vulnerability concern?

That would be... concerning.


On the other hand, if using containers makes it more feasible for your employees to use something like AppArmor, the end result may be more secure than the binary just running on the system without any protection.


Containers don't really increase attack surface, it's all stuff provided by the OS anyway. Docker just ties it all together and makes things convenient.
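To illustrate (a rough sketch; Linux-only, needs root or user namespaces, and assumes /bin/sh exists): the namespaces Docker wires together are a syscall flag away in plain Go.

    package main

    // Start a shell in fresh PID, mount, and UTS namespaces; these are the
    // same kernel primitives container runtimes build on, just without the
    // packaging and convenience Docker adds.

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWPID | syscall.CLONE_NEWNS | syscall.CLONE_NEWUTS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }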


> One of the reasons for using containers is to wrap your security hardening into your deployment so that anytime you do need to scale out you have confidence your security settings are identical across nodes.

This is false. Or do you think your host is secured just by installing Docker? And when you scale, how do you get additional hosts configured?

The truth is, when you use Docker you need to ensure not only that your containers are secure, but also the host (the system running your containers). And when you scale up and need to deploy additional hosts, they need to be just as secure.

And if you're using infrastructure as code and configuration as code, it doesn't matter whether you're deploying a binary after configuring your system, or Docker.
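
To make that concrete, here's a minimal sketch using Pulumi's Go SDK (just one example of such tooling; the resource name and SDK versions are assumptions): the same audited firewall rules get stamped onto everything the program manages, whether the hosts run a bare binary or Docker.

    package main

    import (
        "github.com/pulumi/pulumi-aws/sdk/v6/go/aws/ec2"
        "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )

    func main() {
        pulumi.Run(func(ctx *pulumi.Context) error {
            // Every host attached to this group gets identical, reviewed
            // rules: only 443 in, nothing else.
            _, err := ec2.NewSecurityGroup(ctx, "app-sg", &ec2.SecurityGroupArgs{
                Ingress: ec2.SecurityGroupIngressArray{
                    &ec2.SecurityGroupIngressArgs{
                        Protocol:   pulumi.String("tcp"),
                        FromPort:   pulumi.Int(443),
                        ToPort:     pulumi.Int(443),
                        CidrBlocks: pulumi.StringArray{pulumi.String("0.0.0.0/0")},
                    },
                },
            })
            return err
        })
    }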


Complexity is the culprit in any scenario. However, if we simply focus on a vanilla installation of Docker, then the namespace isolation alone can be viewed as a step up from running directly on the OS. Of course, complexity means a vulnerability in the Docker stack exposes you to additional risk, whereas a systemd service running under a dedicated service account is likely to contain any 0-day better.


> They’re advocating deploying a binary as preferable to using docker, fair enough, but what about the host running the binary? One of the reasons for using containers is to wrap your security hardening into your deployment so that anytime you do need to scale out you have confidence your security settings are identical across nodes.

There are tools that make "bare metal" configuration reproducible (to varying degrees), e.g. NixOS, Ansible, building Amazon AMI images.


All of which would be better than what the post is advocating and I totally agree with this.


I never understood how one “breaches an app through the frontend”. SQLi messes with your data store, natively (no RCE). XSS messes with other users, laterally. But how does one reach from the frontend all the way through, liberally? Are people running JavaScript interpreters with shell access inside of their Go API services and calling eval on user input? It’s just so far-fetched, on a technical level.
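
For reference, a minimal Go sketch of the SQLi class in question (schema and driver choice are made up for illustration): concatenation lets input rewrite the SQL, while a placeholder binds it as data; either way, the blast radius is the data store, not the host.

    package main

    // The commented-out query lets input like "x' OR '1'='1" become query
    // structure; the placeholder version sends the value out-of-band, so
    // it stays a literal string.

    import (
        "database/sql"
        "fmt"

        _ "github.com/mattn/go-sqlite3"
    )

    func findUser(db *sql.DB, name string) (*sql.Rows, error) {
        // Vulnerable: db.Query("SELECT id FROM users WHERE name = '" + name + "'")
        return db.Query("SELECT id FROM users WHERE name = ?", name)
    }

    func main() {
        db, err := sql.Open("sqlite3", ":memory:")
        if err != nil {
            panic(err)
        }
        defer db.Close()
        if _, err := db.Exec("CREATE TABLE users (id INTEGER, name TEXT)"); err != nil {
            panic(err)
        }
        rows, err := findUser(db, "x' OR '1'='1")
        if err != nil {
            panic(err)
        }
        defer rows.Close()
        fmt.Println("injection attempt was treated as a literal name")
    }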


Ahh yes, security through obscurity - if we make it so complex we can’t understand it then no one else can either, right?

The important thing is making walls indestructible, not making more walls. Interfaces decrease performance and increase complexity.


Literally the entire guiding principle for security architecture for the past decade or even more has been that "there is no such thing as an indestructible wall".


I agree, perfection isn’t a realistic expectation. I also think effort spent building better defenses leads to fewer exploits over time than adding more of the same defenses. The marginal cost of bypassing a given defense is far lower than the initial cost of bypassing a new one.


Literally no-one said that.

(Some of) the reasons why you would do this are explained (I thought clearly) above. None of this is security through obscurity.


That seems like the worst option. Everything up to the free tier would stay there forever with no way for you to ever request it to be deleted.


Turn on Advanced Data Protection before you rip up the key. Then it's all as good as deleted.


That’s a rather generous assumption.

Do Apple definitely not retain a key? And if they don’t, is the encryption quantum-secure?


> Do Apple definitely not retain a key?

If this is the threat vector you’re worried about, you shouldn’t have had anything in iCloud (or any cloud for that matter) to begin with, rendering this debate completely moot.


> Were Linux and git made with Scandinavian longing?

He’ll never tell you. He’ll just stare sullenly at you on a crisp November evening through the frost-coated glass of your remote log cabin until slowly he’ll raise one hand bearing his middle finger, without breaking eye contact or changing his expression.

“Pass that along to Jensen Huang” he’ll whisper. Then with a surge of the creeping blizzard outside your window, he’ll be gone forever.


Who was that, Huang would inquire.

Oh... Just my arty ex.


I do think you can build products and services that people will capital-L Love, even forgiving some warts too, but the bar for that is very high. I’m very skeptical of tech businesses not led by engineers for exactly this reason (the incentives don’t align with a level of quality people will care deeply about).


I mean, OP started right off the bat with the idea that they would love to live in a communist society.

By all means OP can say that they agree with some Marxist ideas, but when the real human costs of that model are by now so well proven, it should be a red flag if someone doesn’t acknowledge those costs from the jump. I’m not surprised this quickly went to suggesting Stalin wasn’t so bad.


I like the approach taken here. Nextcloud is becoming the de facto open-personal-cloud standard, so it makes sense to integrate photos into it. If Nextcloud were to get up to shenanigans in the future, I'm confident the project would be forked, and in the meantime I don't expect it would be hard to plug in an alternative backend.

I think for an open-source and/or self-hosted solution to come close to approximating Google Cloud/iCloud/whatever, we need projects like this to pick a niche and hyper-focus on it, which leaning on Nextcloud achieves here, I feel.


I’d love to hear more about this switch, as a family member is considering something similar at 42.

If you don’t mind, was it long ago that you made this switch? Was there anything that you’d do differently in retrospect?

The big one I guess: do you feel that not having gone the “traditional” route made it more difficult to find roles in the early days?

Feel free not to answer if that’s too invasive, but I’d find any info really helpful.


It was in 2017. I wrote a lot about it at the time: https://rodrigohgpontes.github.io/

It was a different moment; I'm not sure if it was better or worse for junior developers getting a first job. It was before companies were “desperate” to hire software developers, as they were until a couple of years ago, which made them hire more junior devs. But it was also before the current hiring-market contraction (as a proxy, see the posts noting the low number of listings in the Who is Hiring threads). And it was before remote work became common (it was effectively impossible to be hired remotely as a junior back then; now it is just hard). Not sure how all of this balances out.

I wouldn’t do anything differently. I still vouch for not paying anything to learn to code. I used freeCodeCamp, and it has only gotten better since then. To see how I did it, the blog is a good source.

About not having a traditional background: it both hurt and helped me.

I just reread this passage on my blog that I had forgotten:

“People will undervalue you. Chances are not all interviewers will be nice. On a promising application for a cool job, I got a call from the founder. He said something along the lines of "You know, you have to understand that you are competing with a lot of young guys who have been coding since they were twelve. You have a lot of catching up to do. You have to expect an intern salary and even so work harder to show you can become a good developer. Because I'm not sure you can." Maybe he was just using some shitty negotiation technique to hire me on a low salary, maybe it was ageism, maybe he thought I was delusional in my aspirations and decided to give me a lecture to be more down to earth. Whether he was stingy, mean or patronizing, it was definitely a place I wanted distance from.”

So it hurt in this case. I also read some discouraging comments here on HN in a thread where I said I wanted to go from scratch to hired in 4 months. But it also helped me get my first job. I was hired to work in a small team with one senior developer who was only 20 years old at the time. He was technically worthy of being considered a senior, but had to improve in other areas. They saw me, a 37-year-old junior developer with a lot of professional experience and good communication skills, as a good match for him. They also valued the diligence and dedication I showed in changing careers, and saw that as evidence that I would keep learning continuously. So my advice is to be able to demonstrate in an interview that your previous professional experience will be useful in the new technical job. How to do that depends on each person's background and strengths.

I do think my blog still has useful advice in general.

I do think it is a good career change and possible at 42, and I would encourage them. The only small caveat is that they need to realize early whether they “enjoy” coding. It is important. Few people are capable of committing to the continuous learning demanded by a good career in software development “only” for the money.

Good luck to them!

