Hacker News | bredren's comments

  Location: Portland, Oregon
  Remote: Yes
  Willing to relocate: No
  Technologies: Python (Django, FastAPI), PostgreSQL, Redis/Celery,
  AWS, Docker, Terraform, LLM integration (GPT, Claude), data pipelines/ETL
  Resume: https://banagale.com/cv/
  Email: [email protected]
Hello. I am a senior backend engineer with 10+ years building Django-based systems and data pipelines.

Most recently, I designed and implemented an LLM-assisted data pipeline that converted security bulletins into actionable intelligence for an enterprise cyber security product.

I enjoy working with Django. I previously migrated live auth systems with zero downtime and took SaaS products from prototype to production.

I founded a startup and grew the business from zero to a profitable exit.

I'm seeking a senior backend, data engineering, or founding engineer role at a stable, product-focused company. I'm strong in API design, data modeling, and production AI integrations.

Please reach out if you would like to chat. I look forward to meeting with you.


OpenAI is picking up where Apple Intelligence continues to severely lag.

I'd prefer these features were bundled into macOS.

Where possible, it could process requests using FoundationLLM, with Apple reaching for its own privately hosted instance of a frontier model when needed.

It seems obvious to me that the company must transform macOS's capabilities here, as quality AI assistance is enmeshed in the operating system's UX as a whole.

I think Apple Intelligence probably has good bones to begin with, but it is vastly underpowered in its local model and needs to hide frontier-model usage completely within its tech stack.


The whole integration thing is weird. Siri sucks. ChatGPT can be triggered in a similar way. Siri can use ChatGPT. Apple Intelligence is garbage. I think Apple is in a weird crisis spot where it can't quite figure out how to integrate it all, and is scared of ditching Siri entirely. Or maybe the ChatGPT integrations have just been stopgaps.

Or they go way deeper into integrations. They let ChatGPT in deeper, and even give up that coveted 'default search' spot that Google pays them ~$20B a year for. Atlas seems like it would compete with Safari?

It is interesting that OpenAI seems to be taking an Apple-first approach with some of its projects (Sora 2, Atlas).


> Siri sucks.

I’m surely in a niche group here, but I’m appreciating Siri more and more.

It’s a mostly competent tool for basic operations and simple questions. For something I interact with over audio, I’ll choose that over a bullshit machine any day of the week.


That's interesting. It has literally gotten worse over time for me. The same queries that were fine earlier now fail or go wrong. Too often it fails with, essentially, "I can't do that," without explaining why.

A tool like that shouldn't be just "mostly competent"; the failures mean it's not worth the time to try (I might as well use another, guaranteed tool rather than accept the coin flip and time sink of asking Siri).


I only use Siri for setting reminders and thankfully it works just fine for that.


Siri does nothing except set timers. It is completely useless.


We sometimes use it to play music in the car on Spotify. While it is really bad at this, we found it mishears quite reliably, so there are now some crazy things I would never have played myself that my family has heard in the car. This has brought a lot of joy to some mundane drives.

But when I am alone on a run, I really wish it would just work, because without someone else to laugh about it with, it really sucks.


It's funny you mention that; very similar experience here. My partner and I often get a laugh out of the strange occasional errors, like responding with "…huh?" and then completing the task.

I'd rather see a robot fail than eat the world and fill it with trash. But the running use case does sound very annoying!


> I'd prefer these features were bundled into macOS.

I’d prefer they wouldn’t.

> It seems obvious to me the company must transform macOS's capabilities here as quality AI assistance is enmeshed in the operating system's UX as a whole.

A hundred thousand times no! Today’s Apple is highly incompetent¹ on the software and UX design fronts. They’re making macOS more broken by the release and you want them to screw it up even more with invasive features that they’ve proven they’re not good at? Might as well switch to <insert OS you don’t like> already.

¹ I believe they could do better if they had more time between releases or smaller scopes. But they don’t do that, so the result is the same.


I think my expectations for Apple Intelligence were just too high. All I've really seen it do on my MacBook is suggest "Sounds great!" or "See you there!" as responses in Messages, and it's like, really, it took the best engineers on Earth working round the clock to come up with this?


Not touching os integration for this shit with a 20 ft pole.


  Location: Portland, Oregon
  Remote: Yes (hybrid in Portland area preferred)
  Willing to relocate: No
  Technologies: Python (Django, FastAPI), PostgreSQL, Redis/Celery,
  AWS, Docker, Terraform, LLM integration (OpenAI, Claude), data pipelines/ETL
  Resume: https://banagale.com/cv/
  Email: [email protected]
Hello. I am a Senior Backend Engineer with 10+ years building Django-based systems and data pipelines.

At Eclypsium, I designed medallion architectures for normalizing unstructured advisory data at scale, integrated LLMs for extraction workflows, built tooling for AI agent distribution across engineering teams, and automated vendor data ingestion pipelines.

I previously migrated live auth systems with zero downtime and took SaaS products from prototype to production.

I'm seeking a senior backend or data engineering role at a stable, product-focused company. Strong in API design, data modeling, and production AI integration.


Maybe consuming the resources internally.


Don’t rely entirely on CC. Once a milestone has been reached, copy the full patch to the clipboard along with the technical spec covering it. Provide the original files, the patch, and the spec to Gemini and ask, roughly: a colleague did this work; does it fulfill the aims of the spec to best practices?

Pick among the best feedback to polish the work done by CC; it will miss things that Gemini will catch.

Then do it again. Sometimes CC just won’t follow feedback well and you gotta make the changes yourself.

If you do this you’ll move more gradually, but by the nature of the pattern you’ll look at the changes more closely.

You’ll be able to realign CC with the spec afterward with a fresh context and the existing commits showing the way.

FWIW, this kind of technique can be done entirely without CC and can lead to excellent results faster, as Gemini can look at the full picture all at once, versus having to force CC to hunt and peck through slices of files.
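The hand-off above can be sketched in shell. Everything here is illustrative: the throwaway repo, branch names, file contents, and the spec path are made up for the demo, and the final bundle is just a text file you’d paste into Gemini (or pipe to whatever interface you use).

```shell
set -e
# Throwaway repo so the sketch runs end to end (all names are hypothetical).
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo "Spec: the endpoint must return JSON" > spec.md
echo "print('v1')" > app.py
git add .; git commit -qm baseline

# Milestone work happens on a branch.
git checkout -qb milestone
echo "print('v2')" > app.py
git commit -qam "milestone work"

# The actual pattern: capture the full patch for the milestone,
# then bundle spec + patch into one review request for the second model.
git diff main...HEAD > milestone.patch
{
  echo "A colleague did this work. Does it fulfill the spec to best practices?"
  echo "--- SPEC ---";  cat spec.md
  echo "--- PATCH ---"; cat milestone.patch
} > review-request.txt
```

From there, `review-request.txt` is what goes to Gemini; on macOS you could pipe it through `pbcopy` instead of writing a file.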


I've done evaluations of GitHub Copilot, Sourcegraph Cody, and GitLab Duo, and Copilot is not garbage; it is by far the leader among these options.


Did you compare to Cursor? We gave up on Copilot a while back after Cursor blew us away. In the context of this article though, Cursor is very obviously tuned better towards Claude than OpenAI in my experience.


Cursor's agent is better, but the in-editor suggestions by Copilot when you're actually the one coding are very useful. Claude's agent is better than Cursor's, so I'm not sure where Cursor fits into this ecosystem.


"Best option among loser tools" isn't the high praise you think it is, though.


Did you compare it against any actual leading tools? Cody and Duo, really? Did you try Cursor and Claude Code?


Cursor and Claude Code are different modalities. The org provides licenses and API access to both these tools.

However, many engineers still use VS Code, JetBrains IDEs, and Vim. I maintain Copilot is the best for cross-IDE modality.


It is a pretty serious problem. New model with no product to effectively demo it.


If I have it right, there is a brief discussion of semantic linting related to AI-assisted CLI behavior in this recent interview with Boris Cherny and Catherine Wu on the Latent Space podcast: https://www.youtube.com/watch?v=zDmW5hJPsvQ&t=1760s

I've not explored this use of CC yet. Is anyone actively using AI-assisted CLI tools in CI/CD? Not automated PR review, but either semantically passing/failing an MR, or some other use of a terminal-capable, multi-context mashup during CI/CD?
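For what it's worth, a minimal semantic gate might look like the sketch below. It assumes Claude Code's headless print mode (`claude -p`); the prompt wording and the PASS/FAIL contract are my own invention, not anything documented, and the stub function just lets the sketch run where the CLI isn't installed.

```shell
#!/bin/sh
# Hypothetical CI step: semantically pass/fail an MR with a headless agent.
if ! command -v claude >/dev/null 2>&1; then
  claude() { echo PASS; }   # stand-in so the sketch runs without the real CLI
fi

# In a real pipeline this would be the MR's diff against the target branch.
patch=$(git diff origin/main...HEAD 2>/dev/null || echo "demo patch")

# Ask for a single-token verdict so the reply is machine-checkable.
verdict=$(claude -p "Reply with exactly PASS or FAIL: does this patch follow \
the team's error-handling conventions? Patch follows. $patch")

echo "verdict=$verdict"
[ "$verdict" = "PASS" ]   # any other reply gives a nonzero exit and fails the job
```

The single-token contract is the fragile part; in practice you'd likely want retries and a fallback for a model reply that is neither PASS nor FAIL.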


Undoubtedly. It would otherwise reduce the perceived value of their current product offering.

The question is how much better the new model(s) will need to be on the metrics given here to feel comfortable making these available.

Despite the loss of face for lack of open model releases, I do not think that was a big enough problem to undercut commercial offerings.


I have experimented with instructing CC to doubt itself greatly and presume it is not validating anything properly.

It caused it to throw out good ideas for validation and working code.

I want to believe there is some sweet spot.

The constant “Aha!”-type responses, followed by self-validating prose that the answer is at hand or within reach, can be intoxicating and cannot be trusted.

The product also seems to be in constant tuning flux: some sessions result in great progress, while in others the AI seems as if it is deliberately trying to steer you into traffic.

Anthropic has alluded to this being the result of load. In their memo about new limits for Max users, they mentioned that abuse of the subscription tiers resulted in subpar product experiences. It’s possible they meant response times, the overloaded 500 responses, or lower-than-normal TPS, but there are many anecdotal accounts of CC suddenly having a bad day from “longtime” users, including myself.

I don’t understand how load would impact the actual model’s performance.

It seems like only load-based impacts on individual session context would result in degraded outputs. But I know nothing of serving LLMs at scale.

Can anyone explain how high load might result in an unchanged product performing objectively worse?

