Dairy in the UK also tastes far better than in the US. British people often comment on how hard it is to deal with US dairy, which tastes like water in comparison.
Experienced that too. Coming from a country where cattle are 100% free-range and grass-fed, the beef in the US was pretty dire. Presumably if I'd paid a large amount of money I'd have got something decent, but the generic restaurant stuff I had was what I'd expect from a ten-dollar-Tuesday meal here.
Rails 8.1 and Ruby 3 are also surprisingly fast, and coming back to an “omakase” framework is honestly a breath of fresh air, especially now that AI tools let you implement a lot of stuff from scratch instead of pulling in dependencies.
I heard that the environment there is 996 with high turnover. So you might be paid double a FAANG salary, but you work double as well. (This was about dev positions, not researchers.)
Anyone know if that’s true? I only heard it second-hand.
Employees also have no moat, and that is not unique to OAI. You either work as much as the other people on your team or you get replaced by one of the many people eager to work harder than you for that money.
This assumes a constant stream of available workers. Meanwhile, in the US, where OpenAI is based, scrutiny and pressure from the current administration are making it harder to hire at their largest locations.
One really useful use case of Garage for me has been data engineering scripts. I can just use the S3 integration that every tool has to dump to Garage, and then I can more easily scale up to the cloud later.
The parallel-agent model is better when you know the high-level task you want to accomplish but the coding might take a long time. You can split it up in your head (“we need to add this API to the API spec,” “we need to add this thing to the controller layer,” etc.) and then use parallel agents to edit just the specific files each one is working on.
So instead of interactively making one agent do a large task you make small agents do the coding while you focus on the design.
Not to be overly negative but I’m kinda disappointed with this and I have been a JetBrains shill for many years.
I already use this workflow myself: just multiple terminals running Claude in different directories. There are like 100 of these “Claude with worktrees in parallel” UIs now. I would have expected some of the usual JetBrains value-adds, like deep debugger integration or a fancy test-runner view. The only one called out is Local History; I don’t see any deep diff or find-in-files integration for diffing or searching between the agent worktrees, and I don’t see the JetBrains commit, shelf, etc. git integration that we like.
I do like the Cursor-like highlight-and-add-to-context feature and the kanban-board view of agent statuses, but this is nothing new. At the least I would have expected JetBrains to provide some fancier UI for selecting which directories or scopes are auto-approved for edits, or other fine-grained auto-approve permissions for the agent.
In summary, it looks like just another parallel-Claude UI rather than a JetBrains take on one. It also seems to be a separate IDE rather than built on the IntelliJ Platform, so they probably won’t turn it into a plugin in the future either.
I feel like I've tried many similar combos and there always ends up being some tiny, silly, trivial thing that bothers me in the end. For example, I remember fighting with one of them that forced trailing slashes, and another that didn't allow apex domains (i.e. a non-www address) for static sites.
I absolutely refuse to actually ship valuable things, though, so thanks for the suggestion; I'll probably spend some time trying it out.
I agree. My current weekend project is figuring out a dirt-cheap, high-performance self-hosted cloud for hosting stuff.
So I’m still sticking with Route53 because it’s the least annoying registrar and DNS API; for CDN I’m going with Bunny, and for dirt-cheap object storage, B2.
Then the fun part is the actual self-hosting: I’m going with Garage for my general self-hosted S3 API (B2 is for backups etc.), Scylla for DDB, and Spin for super-fast Wasm FaaS…
Then this weekend I got deep into building my CloudWatch alternative. I think I’m going with Vector to dump logs into B2, then Quickwit for searching them.
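A minimal sketch of that pipeline as a Vector config (bucket, paths, and the B2 endpoint/region are made-up placeholders, and Vector's exact sink keys shift between versions; Quickwit would then index the objects from the bucket separately):

```toml
# vector.toml — hypothetical sketch: tail local logs and ship them to B2's S3 API.
[sources.app_logs]
type = "file"
include = ["/var/log/app/*.log"]

[sinks.b2]
type = "aws_s3"
inputs = ["app_logs"]
bucket = "my-log-bucket"                              # placeholder
endpoint = "https://s3.us-west-004.backblazeb2.com"   # B2's S3-compatible endpoint (region made up)
region = "us-west-004"
compression = "gzip"
encoding.codec = "json"
framing.method = "newline_delimited"
```

The nice part of this split is that the log store is just dumb object storage, so the search layer (Quickwit here) can be rebuilt or swapped without touching ingestion.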
I think AI coding is another reason this is seeing a resurgence. It’s a lot quicker to build quick-and-dirty scripts or debug the random issues that come up when self-hosting.
This isn’t true anymore; we’re way beyond 2014-era Hadoop (what the blog post is about) at this point.
Go try doing an aggregation over 650 GB of JSON data with normal CLI tools vs. DuckDB or ClickHouse. These tools pipeline and parallelize in a way that isn’t easy to replicate with just GNU Parallel (trust me, I’ve tried).
I had to do something like this for a few TB of JSON recently. The unusual thing about this workload was that it was a ton of small 10–20 MB files.
I found that ClickHouse was the fastest, but DuckDB was the simplest to work with: it usually just works, and it got close enough to ClickHouse’s peak performance.
I tried Flink and PySpark, but they were way slower than ClickHouse (like 3–5x) and the code was kind of annoying. Dask and Ray were also way too slow; Dask’s parallelism was easy to code, but it was just too slow. I also tried DataFusion and Polars, but ClickHouse ended up being faster.
These days I’d recommend starting with DuckDB or ClickHouse for most workloads, just because they’re the easiest to work with AND have good performance. Personally, I’ve switched to DuckDB instead of Polars for most things where pandas is too slow.