talles's comments | Hacker News

Who's Fefe?

https://en.wikipedia.org/wiki/Felix_von_Leitner who runs https://de.wikipedia.org/wiki/Fefes_Blog

The blog often shares interesting things in the realm of technology, but with a heavy dose of skepticism - some of it effectively amounting to incitement against minorities, with people being harassed by parts of his follower crowd.

He had a stroke not too long ago, so this seems to be a sign he's recovered a bit.


German internet nerd.

Professionally he runs a successful code/security consultancy [2]. That pays his bills, so on the nerd side he runs his own web server and content management system where everything is self-written, including his own libc implementation focused on bare minimum requirements. [1] He's been around the German IT community for decades and was involved early on with the Chaos Computer Club (CCC); he still attended their annual congress, which is kind of the "meet and greet" of the German IT community.

His self-hosted blog was/is very popular and controversial. He is pretty opinionated on nearly everything, and he takes no prisoners when he criticizes someone. So the "woke" people don't like him, the nazis don't like him, the corpo guys don't like him. And he pretty much doesn't care.

Earlier this year he apparently suffered some critical health condition and went quiet without notice for more than 6 months, I believe.

[1] https://en.wikipedia.org/wiki/Dietlibc [2] https://www.codeblau.de


Is it just me, or does no one have a clue what "life" is?


What separator would be better?


> Since reading code is harder than writing it,

Reading bad code is harder than writing bad code. Reading good code is easier than writing good code.


I beg to differ.


No need to beg. Everyone’s got their opinion. I just wish, this being Hacker News, that more people would articulate their different opinions instead of just stopping with “I disagree.”


Well, my first comment said "reading code is harder than writing code", your comment said "reading good code is easier than writing good code". I believe the two points are about equally articulated.


Neither comment is mine. I'm here on the outside wanting to understand the arguments you have in your heads. Sure, the two comments you mention are equally under-articulated. Either continue the discussion for the benefit of others on the site, or leave it as it stands. Stating "I beg to differ" is pointless.



This is a sign of seniority IMO. First you learn to write code. Then you learn to write code that can be read. Then you learn to modify code. Then you learn to read other people's code. Then you learn to modify other people's code. Then you learn to own code regardless of who reads or writes it.

At this point in my career, 35 years in, I find it makes little difference whether I'm reading or writing code, or whether I or others wrote it. Bad or good code, it's all the same. By far the most effective work I do involves reading a lot of complex code written by many people over many years and seeing the exact one line to change or improve.

I find LLM-assisted coding very similar, frankly. I've finished maybe 20 or more projects in the last seven months on my own time that I never would have been able to do in my lifetime, for want of free time to learn minutiae in stuff I'm not familiar with. The parts it gets hung up on, I can recognize with a quick inspection and unwedge it, just like with any junior engineer. Junior engineers are also often much better versed in XYZ library than I am.


This is the thing.

LLM assisted coding ("vibe coding") is just project management.

You ask it to do things, then you check the work to a sufficient degree.

The better the specifications and documentation you give it, the better the result will be. Keeping tasks short and verifiable also helps a lot.

I've written SO many small tools for myself during the last year it's not even funny. Upgraded some shitty late night Python scripts to proper Go applications with unit tests and all, while catching up on my TV shows.
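
For a sense of scale, here's a minimal sketch of the kind of tool I mean, a tiny Go CLI plus a unit test. The dedupe example and names are invented placeholders, not one of the actual scripts:

    // dedupe.go - hypothetical example tool: print unique stdin lines, preserving order.
    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func Dedupe(lines []string) []string {
        seen := make(map[string]bool)
        var out []string
        for _, l := range lines {
            if !seen[l] {
                seen[l] = true
                out = append(out, l)
            }
        }
        return out
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        var lines []string
        for sc.Scan() {
            lines = append(lines, sc.Text())
        }
        for _, l := range Dedupe(lines) {
            fmt.Println(l)
        }
    }

    // dedupe_test.go - the unit test lives in a second file in the same package.
    package main

    import (
        "reflect"
        "testing"
    )

    func TestDedupe(t *testing.T) {
        got := Dedupe([]string{"a", "b", "a", "c", "b"})
        want := []string{"a", "b", "c"}
        if !reflect.DeepEqual(got, want) {
            t.Errorf("Dedupe() = %v, want %v", got, want)
        }
    }

Small, verifiable units like that are exactly what the LLM handles well and what I can review quickly.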

Converted my whole rat's nest of Docker Compose files to a single OpenTofu declarative setup.

None of this would've gotten done without an LLM assistant.


Funny, I end up working on 5-6 things at once that are fairly varied. My favorite rat's nest is rebuilding my DIY NAS as a NixOS declaration so I can rebuild the filer root from GitHub.


I’m at the same point as well. Doing more reading than writing.

Just want to add one more point: code is not fiction or even nonfiction. "Good or bad" style can be subjective, but correct versus incorrect is not, regardless of the reviewer's mental model.

The difficulty of reading code lies in understanding its logic and logical consequences. The more complex the codebase (not just the line we are reading), the riskier it is to modify.

That is why I use an LLM to write a lot of tests and let it review the logs to help me understand the logic. Even the tests can be disposable.


Why is reading code harder than writing it?


I think it has to do with the mental model. If you already know what to write and it is reasonably complex, you'll have a mental model ready and can quickly write it down (now even faster as LLMs autocomplete 3-4 lines at a time). While reading someone else's code, you constantly have to map the written code onto a model in your mind, and then also weigh quality, security, and other issues.


Yeah, it's exactly this. Having to create a mental model from the code is much harder than having one and just writing it out.


I just tend to find LLM code output extremely easy to read, I guess. It tends to be verbose and do a lot of unnecessary stuff, but I can always get the point easily and edit accordingly.


I'd say just reading your own code from a few years back will be as hard as reading someone else's.


Don't forget the cost of managing your one big server and the risk of having such a single point of failure.


My experience after 20 years in the hosting industry is that customers in general have more downtime from self-inflicted, over-engineered replication or split-brain errors than from actual hardware failures. One server is the simplest and most reliable setup, and if you have backups and automated provisioning you can re-deploy your entire environment in less time than it takes to debug a complex multi-server setup.

I'm not saying everybody should do this. There are of course a lot of services that can't afford even a minute of downtime. But there are also a lot of companies that would benefit from a simpler setup.


Yep. I know people will say, "it's just a homelab," but hear me out: I've run positively ancient Dell R620s in a Proxmox cluster for years. At least five. Other than moving them from TX to NC, the cluster has had 100% uptime. When I've needed to do maintenance, I drop one at a time, and it maintains quorum, as expected. I'll reiterate that this is on circa-2012 hardware.

In all those years, I’ve had precisely one actual hardware failure: a PSU went out. They’re redundant, so nothing happened, and I replaced it.

Servers are remarkably resilient.

EDIT: 100% uptime modulo power failure. I have a rack UPS and a generator, but I once discovered the hard way that the UPS batteries couldn't hold a charge long enough to keep the rack up while I brought the generator online.


Seeing as I love minor disaster anecdotes where doing all the "right things" seems to make no difference :).

We had a rack in data center, and we wanted to put local UPS on critical machines in the rack.

But the data center went on and on about their awesome power grid (shared with a fire station, so no administrative power loss), on site generators, etc., and wouldn't let us.

Sure enough, one day the entire rack went dark.

It was the power strip on the data center's rack that failed. All the backup grids in the world can't get through a dead power strip.

(FYI, a family member lost their home due to a power strip, so, again anecdotally: if you have any older power strips (5-7+ years) sitting under your desk at home, you may want to consider swapping them out for new ones.)


For sure, things can and will go wrong. For critical services, I’d want to split them up into separate racks for precisely that reason.

Re: power strips, thanks for the reminder. I’m usually diligent about that, but forgot about one my wife uses. Replacement coming today.


My single on-premise Exchange server is drastically more reliable than Microsoft's massive globally resilient whatever Exchange Online, and it costs me a couple hours of work on occasion. I probably have half their downtime, and most of mine is scheduled when nobody needs the server anyhow.

I'm not a better engineer, I just have drastically fewer failure modes.


Do you develop and manage the server alone? It's quite a different reality when you have a big team.


Mostly myself but I am able to grab a few additional resources when needed. (Server migration is still, in fact, not fun!)


A lot of this attitude comes from the bad old days of 90s and early 2000s spinning disk. Those things failed a lot. It made everyone think you are going to have constant outages if you don’t cluster everything.

Today’s systems don’t fail nearly as often if you use high quality stuff and don’t beat the absolute hell out of SSD. Another trick is to overprovision SSD to allow wear leveling to work better and reduce overall write load.

Do that and a typical box will run years and years with no issues.


> My experience after 20 years in the hosting industry is that customers in general have more downtime due to self-inflicted over-engineered replication, or split brain errors than actual hardware failures.

I think you misread OP. "Single point of failure" doesn't mean the only failure modes are hardware failures. It means that if anything happens to your node, whether it's a hardware failure, a power outage, someone stumbling over your power/network cable, or even a single service crashing, you have a major outage on your hands.

These types of outages are trivially avoided with a basic understanding of well-architected frameworks, which explicitly address the risk represented by single points of failure.


don't you think it's highly unlikely that someone will stumble over the power cable in a hosted datacenter like hetzner? and even if, you could just run a provisioned secondary server that jumps in if the first becomes unavailable and still be much cheaper.


> don't you think it's highly unlikely that someone will stumble over the power cable in a hosted datacenter like hetzner?

You're not getting the point. The point is that if you use a single node to host your whole web app, you are creating a system where many failure modes, which otherwise would not even be an issue, can easily trigger high-severity outages.

> and even if, you could just run a provisioned secondary server (...)

Congratulations, you are no longer using "one big server", thus defeating the whole purpose behind this approach and learning the lesson that everyone doing cloud engineering work is already well aware of.


Do you actually think dead simple failover is comparable to elastic kubernetes whatever?


> Do you actually think dead simple failover is comparable to elastic kubernetes whatever?

The reference to "elastic Kubernetes whatever" is a red herring. You can have a dead simple load balancer spreading traffic across multiple bare metal nodes.
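
To be concrete, here's a minimal sketch of what "dead simple" can look like, a round-robin reverse proxy in Go. The backend addresses are placeholders, and a real setup would add health checks:

    // lb.go - minimal round-robin load balancer sketch; backend addresses are placeholders.
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "sync/atomic"
    )

    func main() {
        // Placeholder addresses for your bare metal nodes.
        backends := []string{"http://10.0.0.1:8080", "http://10.0.0.2:8080"}

        var proxies []*httputil.ReverseProxy
        for _, b := range backends {
            u, err := url.Parse(b)
            if err != nil {
                log.Fatal(err)
            }
            proxies = append(proxies, httputil.NewSingleHostReverseProxy(u))
        }

        var next uint64
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Rotate through the backends on each request.
            i := atomic.AddUint64(&next, 1) % uint64(len(proxies))
            proxies[i].ServeHTTP(w, r)
        })

        log.Fatal(http.ListenAndServe(":80", handler))
    }

Nothing elastic about it, but it removes the single node as a hard dependency.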


Thanks for switching sides to oppose yourself, I guess?


> Thanks for switching sides to oppose yourself, I guess?

I'm baffled by your comment. Are you sure you read what I wrote?


I don't know about Hetzner, but the failure case isn't usually tripping over power plugs. It's putting a longer server in the rack above/below yours and pushing the power plug out of the back of your server.

Either way, stuff happens. Figuring out your actual requirements around uptime, time to response, and time to resolution is important before you build a nine-nines solution when eight eights is sufficient. :p


> It's putting a longer server in the rack above/below yours and pushing the power plug out of the back of your server

Are you serious? Have you ever built/operated/wired rack scale equipment? You think the power cables for your "short" server (vs the longer one being put in) are just hanging out in the back of the rack?

Rack wiring has been done and done correctly for ages. Power cables on one side (if possible), data and other cables on the other side. These are all routed vertically and horizontally, so they land only on YOUR server.

You could put a Mercedes Maybach above/below your server and nothing would happen.


Yes I'm serious. My managed host took several of our machines offline when racking machines under/over ours. And they said it was because the new machines were longer and knocked out the power cables on ours.

We were their largest customer and they seemed honest even when they made mistakes that seemed silly, so we rolled our eyes and moved on with life.

Managed hosting means accepting that you can't inspect the racks and chide people for not cabling to your satisfaction. And mistakes by the managed host will impact your availability.


I hope that "managed host" got fired in a heartbeat and you moved elsewhere. Because they don't know WTF they're doing. As simple as that.


We did eventually move elsewhere because of acquisition. Of course those guys didn't even bother to run LACP and so our systems would regularly go offline for a bit whenever someone wanted to update a switch. I was a lot happier at the host that sometimes bumped the power cables.

Firing a host where you've got thousands of servers is easier said than done. We did do a quote exercise with another provider that could have supported us, and it didn't end up very competitive ... and it wouldn't have been worth the transition. Overall, there were some derpy moments, but I don't think we would have been happier anywhere else, and we didn't want to rent cages and run our own servers.


It's unlikely, but it happens. In the mid 2000's I had some servers at a colo. They were doing electrical work and took out power to a bunch of racks, including ours. Those environments are not static.


In my experience, my personal services have gone down exactly zero times. Actually not entirely true, but every time they stopped working the servers had simply run out of disk space.

The number of production incidents on our corporate mishmash of lambda, ecs, rds, fargate, ec2, eks etc? It’s a good week when something doesn’t go wrong. Somehow the logging setup is better on the personal stuff too.


I have also seen the opposite somewhat frequently: some team screws up the server, and unrelated stable services that have been running since forever (on the same server) are now affected because the environment got messed up.


Not to mention the other leading cause of outages: UPSes.

Sigh.


UPSes always seem to have strange failure modes. I've had a couple fail after a power failure. The batteries died and they wouldn't come back up automatically when the power came back. They didn't warn me about the dead battery until after...


That’s why they have self-tests. Learned that one the hard way myself.


My UPS was supposedly "self testing" itself periodically and it still happened!


Oof, sorry.


The last 4-5 years have taught me that the single point of failure I most often can't do a thing about is Cloudflare, not my on-premise servers.



> Don't forget the cost of managing your one big server

Is that more than, less than, or about the same as having an AWS/Azure/GCP consultant?

What's the difference in labour per hour?

> the risk of having such single point of failure.

At the prices they charge I can have two hot failovers in two other datacenters and still come out ahead.


Don't forget to read the article.


I'll take a (lone) single point of failure over (multiple) single points of failure.


The predictable cost, you mean, making business planning way easier? And you usually have two, because sometimes kernels do panic or whatever.


AWS has also been a single point of failure multiple times in history, and there's no reason to believe this will never happen again.


OK, not GitHub "because Microsoft". But is there any particular reason why Forgejo and not GitLab, Gitea, or Gogs?

I'm not throwing shade at Forgejo or anything like that, I'm genuinely curious if there's anything about Forgejo that made it a better alternative than the other options.


Forgejo is/was a soft fork of Gitea due to some licensing / trademark brouhaha that I think got blown somewhat out of proportion. (And Gitea was famous for using GitHub instead of dogfooding their own software, which I've always thought was a pretty strange choice.) I'm not familiar enough with the development roadmap of both teams to make a good call on whether following the fork is a good idea or not, but I know a lot of projects are just bandwagoning on the fork due to generally frustrating Gitea governance. GitLab is open core, and I know a lot of people are frustrated with its UX and with its high resource consumption for self-hosting (although I would expect YJIT probably made a big improvement here). I've only seen maybe one project use Gogs seriously; I don't get the sense that it has the same level of adoption as the other three.


Yes, the issue with GitLab is their "Enterprise" maximalist feature set. It seems like they want to be the solution for the entire SDLC for every conceivable team.

I remember thinking a decade ago, "wow, these guys are biting off a lot to chew on, maybe in a decade they'll be able to tackle all these things in a comprehensive way," and my opinion now is that they are still probably a decade out. I appreciate their ambition and wish them luck, but it's not for me.

If a project requires more maintenance than I could potentially do by myself in a pinch, because of complexity or a massive supply chain of dependencies that keeps it on a treadmill, I will hesitate to depend on it.


A decade out? They do code, artifacts, and CI, and basically it all works.

What missing SDLC features are going to take them a decade to write?


In fairness, I've not used GL in a couple of years, but before that I used it a lot, and it all worked, but it never worked very well. Issue organisation was painful, and there was always some new trick that made it slightly easier but never enough (boards, nested workspaces, sprint tools, etc). CI had about a hundred different ways of doing the same thing, because every so often the GL devs would realise their current system wasn't quite general or powerful enough, and add a new way of defining DAGs, or a new way of sharing jobs, or a new way of managing environments. You didn't need to switch, but trying to figure out how all these different approaches interacted by reading the docs was a nightmare.

In general, the documentation and UI were painful, and trying to figure out how to do something usually took me to a GL issue that would describe my problem but either be closed (with little indication of whether the feature was added or what form it had taken in the end), or open with no discussion apart from a bunch of comments from a community manager saying "a bronze supporter said that this is a blocker for them". Trying to figure out where features or configuration lived in the UI was also like pulling teeth, especially with GL's love of icons to explain what everything is.

So it's not that the features are missing, it's that they're all half-baked, and it would take Gitlab another ten years to polish them off and round them out.


Just about every week I find "new" GitLab bugs which, after a quick search, turn out to be 5+ years old, with lots of community engagement, but seemingly zero movement from GitLab itself. I wonder what GitLab devs actually work on, because none of the new features in the last couple of years seem as impactful as fixing one of those bugs would be. (I still prefer it to GitHub, especially the CI model.)


It's not about features or whether they work; it's about the conceptual load presented to the user: how well those features are integrated, how much configuration they require to do only what you want them to, and whether they ask of you only what you want them to, and no more.

When I'm interacting with a maximalist system designed to be everything to everyone, I still only want to have to worry about the things I care about.

They do seem to hold this as a value, but it's secondary to the maximalism.


There is a plan for hosting gitea on gitea: https://github.com/go-gitea/gitea/issues/1029


Yes, but the project is almost a decade old. If they can't use it themselves by now, why would anyone else?


[flagged]


The bit about the name seems to be a complete hallucination / tokenization error. The project's docs say: "Forgejo (pronounced /forˈd͡ʒe.jo/ (hear an audio sample)) is inspired by forĝejo, the Esperanto word for forge." I would expect the rest of the AI summary to be similarly unreliable / hallucinated—I compared the test directories for both projects and they both seemed to have about the same amount of activity.


Thank you for pointing this out. Although I am not entirely sure why my comment was downvoted into oblivion.


For all sorts of reasons, this will probably happen to the vast majority of LLM output copy-pastes here.

It's a bit like copy pasting a search result link (anyone can do the search for themselves). LLM outputs don't provide the insight people are looking for on HN. They are unreliable. They sound boring. They may sound like ads for the LLM providers. There's probably someone out there actually knowing the thing and able to answer. There's usually a better way of finding out stuff. For instance, if you want to learn about Forgejo, going to its website or its Wikipedia page will be far more straightforward and reliable.

You also stated that something that contains hallucinations is surprisingly decent, which can mislead people (who should strive to stay alert though).

Many people also just plain dislike generative AI.

You are not the first to whom it happens, and you'll probably not be the last.

At that point, I believe HN's guidelines should be updated to discourage posting LLM outputs in most contexts.


> OK, not GitHub "because Microsoft".

All well and good to host your own code, but from a contributor's point of view, it's a choice between managing dedicated accounts for every project you want to participate in... or signing in with GitHub [1].

OpenID exists, and is arguably older, but odds are most people would not be using it to begin with.

[1] https://code.ffmpeg.org/user/sign_up


"Literally stop existing"? Having broken links and notifying everyone when a migration happens is for sure a hassle, but migrating a git repo is the easiest thing in the world.

That's kinda the whole point of a distributed VCS.
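
As a rough sketch of how little that migration involves (the hostnames here are made up), a mirror clone plus a mirror push moves every branch and tag:

    // migrate.go - sketch: mirror a repo from one host to another; URLs are placeholders.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func run(name string, args ...string) {
        cmd := exec.Command(name, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }

    func main() {
        // A bare mirror clone carries all branches and tags.
        run("git", "clone", "--mirror", "https://github.com/example/repo.git", "repo.git")
        // Push the whole mirror to its new home.
        run("git", "-C", "repo.git", "push", "--mirror", "https://git.example.org/example/repo.git")
    }

Of course, that only covers the git data itself.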


Migrating the repo is easy, migrating issues and MRs and whatever other ancillary features you’re using is not.


I invite you to migrate all your GH workflows to GitLab pipelines as the "easiest thing in the world" as an exercise for the reader.


There are considerable privacy concerns regarding Pix: some Brazilian government officials are able to obtain transaction information without a court order, which is required for the traditional payment methods that came before Pix.


Yes, as commented elsewhere, Brazilians in general are very accepting of government surveillance, with the omnipresent CPF and now the complete disclosure of almost all consumer transactions to the State. It has always surprised me, TBH, given the very recent history of dictatorship and the unbounded potential for abuse.


Most people don't think about that. Once they realise, things change. Brazil also uses Bitcoin a lot because of a lack of trust. Pix would be even more widely used if the government had taken longer to start using it as a weapon (as it has already done).


Source?


Why the "written in Rust" in the title?


It's +5 hp, like those two white stripes on a car hood.


Technology feels like magic when you don't understand it.


Even more so when even the creators of the technology don't understand it.


The creators understand it well. The math is a lot, but you can literally do it with pen and paper. There are plenty of blog posts [1] showing the process.

Anyone claiming AI is a black box no one understands is a marketing-level drone trying to sell something that THEY don't understand.

[1] https://explainextended.com/2023/12/31/happy-new-year-15/


No, they only understand it on a superficial level. The behavior of these systems emerges from simpler stuff, yes, but the end result is difficult to reason about. Just have a look at Claude's prompt [1] that leaked some time ago, which is an almost desperate attempt by the creators to nudge the system in a certain direction and keep it from saying the wrong things.

We probably need a New Kind of Soft Science™ to fill this gap.

[1] https://simonwillison.net/2025/May/25/claude-4-system-prompt...


Where did you master humor from?

