agrippanux's comments | Hacker News

Their multiple rounds of VC funding are predicated on their vision of collaboration so they gotta make a go at it.


Management and product needing vision and foresight is an excellent call out. I can't help but think a lot of these self-proclaimed 9-9-6 startups are in reality 11-3-6 startups with a bunch of wasted time padding to 9-9-6.


Oh, I remember a time before CDNs, when a big part of your startup fundraise was to build out your own setup inside a data center.


It's not the specialization around hosting that's the problem, but that entities running CDNs realized they're in a privileged position in the network, and decided to capitalize on it.


That's not what CDNs are for. They exist primarily for two purposes: a) speed up video loading for end-users, and b) anonymize IP addresses and routes for businesses.

Cloudflare built a business around b). This doesn't save on hosting costs, only lowers some operational and legal risks.


Their agent burn mode is pretty badass, but it's super costly to run.

I'm a big fan of Zed but tbf I'm just using Claude Code + Nvim nowadays. Zed's problem with their Claude integration is that it will never be as good as just using the latest from Claude Code.


Same except with Helix.

The integration in Zed is limited by what the Claude Code SDK exposes. Since about half of the /commands are missing from the SDK, they don’t show up in Zed.

I think ACP was a good strategic move by Zed, but all I personally really need is Claude Code in a terminal pane with diffs of proposed edits in the absolutely wonderful multibuffer view


These extreme rounded corners are super triggering my desktop OCD

Text on frosted glass over other text is really hard to read

We need an option to turn these “improvements” off

FWIW my system does feel more snappy and the improvements to Spotlight are nice


I love Zed but this has all the hallmarks of something being totally rushed out the door.

It works off the Claude Code SDK, which means it doesn't support many of the built-in slash commands. For example, it doesn't support /compact, which is 100% necessary: use this implementation enough and you'll eventually get a "Prompt too long" error message with no ability to do anything about it. Since you also can't see how far into the context window you are, that's a deal breaker - you have to start a fresh chat, and you might run out of room before you can even ask it to create a summary prompt for continuing.

There is no way to switch models as far as I can tell - I think it just picks up your default model - and there is no way to switch to Plan mode, which has become absolutely crucial to my workflow.

I didn't see Zed picking up on problems reported in the IDE; it was defaulting to running 'tsc -b' in my directories.

At this point it's better to run a terminal inside Zed and work from there. The official response in the Zed Discord has been "talk to your local Anthropic rep" to get them to support Zed's Agent Client Protocol (ACP).


This agent mode came out very recently; I've been following the GitHub issue over the past few days and you can see it was rushed out. But I don't see anything wrong with that - lots of AI features are being rushed out right now, and slash commands and other small things are easy to add once the foundation is there.


Tbf I never use /compact - I /clear instead and load the relevant context in anew. I just haven't found compacted context to be very useful so far.


The model is usually so confused after a /compact I also prefer a /clear.

I set up my directives to maintain a work log for all work that I do. I instruct Claude Code to maintain a full log of the conversation, all commands executed including results, all failures as well as successes, all learnings and discoveries, as well as a plan/task list including details of what's next. When context is getting full, I do a /clear and start the new session by re-reading the work log and it is able to jump right back into action without confusion.

Work logs are great because the context becomes portable - you can share it between different tools or engineers and can persist the context for reuse later if needed.
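
If it helps, the directive in my CLAUDE.md is roughly along these lines (paraphrased sketch, not my exact wording; the docs/worklog.md file name is just an example):

    # Work log directive (illustrative, paraphrased from my CLAUDE.md)
    - Maintain docs/worklog.md for every session.
    - After each command, append the command, its output, and whether it
      succeeded or failed.
    - Record all learnings and discoveries, and keep a running plan/task
      list with what's next.
    - When starting a new session, re-read docs/worklog.md before doing
      anything else.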


The trick is to parametrize the /compact. Something like "/compact focus on the XZY, the next steps will be FOOBAR, and keep a high level summary of BARFOO"

That makes the compaction summary a lot more focused and useful.

edit: But a work log/PRD is essential regardless!


I’ve been using PRD specs to kick things off, but I’m curious about how to do a “work log”. Are there examples of how to do this with CC?


"Implement phase 1 of the PRD, when done update the PRD and move on to phase 2."


yep, exactly, using it like this myself

I think both /compact and /clear are valuable / have their own use cases.

My small mental model:

- really quick fix where I don't want to go overboard with context -> just /compact and keep pushing

- next phase -> ask for a handover document or update the work log, then start a fresh session for the new phase.


Thank you for this. I didn't know this was an option.


When I notice I'm getting close, I tell it to document the current state into an .md file. Then I hit /clear and @ the new file.

This is probably very similar to /compact except I have a lot of control over the resulting context and can edit it and /clear again and retry if I run into an issue.
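
Something along these lines works for me (wording and the handoff.md file name are just illustrative):

    # 1. before the context fills up
    Document the current state of this task in handoff.md: what changed,
    what's still open, and any gotchas you hit along the way.

    # 2. reset and reload
    /clear
    @handoff.md continue from where the handoff leaves off.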


Seems like those issues are largely limitations of the SDK, so urging Anthropic to adopt ACP is the only realistic move


Yeah I was initially excited here, but it feels more like a demonstration of what's possible rather than a working tool.

I found the interface very nice but quickly ran up against limitations on prompt length (it wasn't that long) for example. I am used to being able to give detailed instructions, or even paste in errors/tracebacks.

I'll check back in in a few months.


Bun has been awesome for me and my team fwiw


From the article:

> What I found interesting is how it forced me to think differently about the development process itself. Instead of jumping straight into code, I found myself spending more time articulating what I actually wanted to build and high level software architectural choices.

This is what I already do with Claude Code. Case in point, I spent 2.5 hours yesterday planning a new feature - first working with an agent to build out the plan, then 4 cycles of having that agent spit out a prompt for another agent to critique the plan and integrate the feedback.

In the end, once I got a clean bill of health on the plan from the “crusty-senior-architect” agent, I had Claude build it - took 12 minutes.

Two passes of the senior-architect and crusty-senior-architect debating how good the code quality was / fixing a few minor issues and the exercise was complete. The new feature worked flawlessly. It took a shade over 3 hours to implement what would have taken me 2 days by myself.

I have been doing this workflow for a while, but Claude Code released Agents yesterday (/agents) and I highly recommend them. You can define an agent on the basis of another agent, so crusty-architect is a clone of my senior-architect, except it's never happy unless the code is super simple, maintainable, and uses well-established patterns. The debates between the two remind me of sitting in conf rooms hashing an issue out with a good team.
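
If you haven't poked at the feature yet: as I understand it, an agent ends up as a Markdown file with YAML frontmatter under .claude/agents/, and /agents manages those files for you. A rough sketch of what something like .claude/agents/crusty-senior-architect.md could look like (the fields and prompt here are illustrative, not my exact setup):

    ---
    # illustrative example, not my exact agent definition
    name: crusty-senior-architect
    description: Skeptical review of plans and diffs; use for code-quality passes.
    tools: Read, Grep, Glob
    ---
    You are a crusty senior architect. You are never happy unless the code
    is super simple, maintainable, and uses well-established patterns.
    Review whatever you are given and raise concrete objections.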


I love how I am learning about a new claude code feature in a comment on HN - nowhere to be found on their release notes https://docs.anthropic.com/en/release-notes/claude-code

Thanks for the tip!

I've been attempting to do this kind of thing manually w/ mcp - took a look at "claude swarm" https://github.com/parruda/claude-swarm - but in the short time I spent on it I wasn't having much success - admittedly I probably went a little too far into the "build an entire org chart of agents" territory

[EDIT]: looks like I should be paying attention to the changelog on the gh repo instead of the release notes

https://github.com/anthropics/claude-code/blob/main/CHANGELO...

[EDIT 2]: so far this seems to suffer from the same problem I had in my own attempts which is that I need to specifically tell it to use an agent when I would really like it to just figure that out on its own

like if I created an agent called "code-reviewer" and then I say - "review this code" ... use the agent!


Roo Code has had Orchestrator mode doing this for a while with your models of choice. And you can tweak the modes or add new ones.

What I have noticed is the forcing function of needing to think through the technical and business considerations of one's work up front, which can be tedious if you are the type that likes to jump in and hack at it.

For many types of coding needs, that is likely the smarter and ultimately more efficient approach. Measure twice, cut once.

What I have not yet figured out is how to reduce the friction in the UX of that process to make it more enjoyable. Perhaps sprinkling some dopamine-triggering gamification into answering the questions.


You planned and wrote a feature yesterday that would have taken yourself 2 whole days? And you already got it reviewed and deployed it and know that 'it works flawlessly'?

....

That reminds me of when my manager (a very smart, very AI-bullish ex-IC) told us about how he used AI to implement a feature over the weekend and all it took him was 20 mins. It sounds absolutely magical to me and I make a note to use AI more. I then go to review the PR, and of course there are multiple bugs and unintended side-effects in the code. Oh and there are like 8 commits spread over a 60 hour window... I manually spin up a PR which accomplishes the same thing properly... takes me 30mins.


This sounds like a positive outcome? A manager built a proof-of-concept of a feature that clearly laid out and fulfilled the basic requirements, and an engineer took 30 mins to rewrite it once it had been specified.

How long does it typically take to spec something out? I'd say more than 20 mins, and typical artifacts to define requirements are much lossier than actual code - even if that code is buggy and sloppy.


Not at all.

What was claimed was that a complete feature was built in record time with AI. What was actually built was a useless and buggy piece of junk that wasted reviewer time and was ultimately thrown out, and it took far longer than claimed.

There were no useful insights or speed up coming out of this code. I implemented the feature from scratch in 30 mins - because it was actually quite easy to do manually (<100 loc).


This seems more like a process problem than a tooling problem. Without specs on what the feature was, I would be inclined to say your manager had a lapse in his "smartness", there was a lot of miscommunication about what was happening, or you are being overly critical of something that "wasted 30 minutes of your time". Additionally, this seems like a crapshoot work environment... there seems to be resentment toward the manager for using AI to build a feature that had bugs / didn't work... whereas ideally you two would sit down, talk it out, and see how it could be managed better next time?


Not at all, there is no resentment - that's your imagination. There is nothing about what I described that indicates that it's a bad work environment - I quite like it.

You're bringing up various completely unrelated factors seemingly as a way of avoiding the obvious point of the anecdotal story - that AI for coding just isn't that great (yet).


Would you mind sharing the prompts you use for your subagents? It sounds very interesting!


How exactly do you "create an agent" with the personalities you are talking about?


/agents


The parent commenter had agents with personalities before the release of the agents feature in Claude Code, that's why I was asking.


Prompt it to behave in a certain way?


How does this compare to Google Diffusion? Diffusion writes out at seemingly the speed of thought.


we're quite a bit faster and specifically training for merging code edits.

Google diffusion is a swing at a generalist model. Super cool work nonetheless


I use AI to help my high-school-age son with his AP Lang class. Crucially, I cleared all of this with his teacher beforehand. The deal was that he would do all his own work, but he'd be able to use AI to help him edit it.

What we do is he first completes an essay by himself, then we put it into a Claude chat window, along with the grading rubric and supporting documents. We instruct Claude not to change his structure or tone, but to edit for repetitive sentences, word count, grammar, and spelling, and to make sure his thesis is sound and carried throughout the piece. He then compares that output against his original essay paragraph by paragraph, looking at what changes were made and why, and crucially, whether he thinks it's better than what he originally had.

This process is repeated until he arrives at an essay that he's happy with. He spends more time doing things this way than he did when he just rattled off essays and tried to edit on his own. As a result, he's become a much better writer, and it's helped him in his other classes as well. He took the AP test a few weeks ago and I think he's going to pass.
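
For anyone who wants to try something similar, the instruction we give Claude is roughly this (paraphrased, not the exact wording we use):

    Attached are my son's essay, the grading rubric, and the supporting
    documents. Do not change his structure or tone. Edit only for
    repetitive sentences, word count, grammar, and spelling, and make
    sure the thesis is sound and carried through the whole essay. List
    each change you made and why.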

