destel's comments | Hacker News

Some examples are mind-blowing. I wonder whether it can generate web/app designs.


I just tried it for an app I'm working on... very bad results.


I had similarly bad results with GPT-4o/5 when they got these image generation capabilities.

I don't know the reason: my bad prompting, or these models being tuned to work only with photos/illustrations.


UPD.

I've just made a small but important clarification to the article. While in many cases it's easier, and even preferable, to compute all results, accumulate them somewhere, and then sort, this article focuses on memory-bound algorithms that support infinite streams and backpressure.


Thanks, but I'd still use a queue over this solution.

Real-time Log Enrichment: perfect for my example [0], you're firing off endless tasks. RT logs have a timestamp.

Finding the First Match in a File List: Files tend to be static. I'd use a queue to first build an index and then a queue to process the index.

Time Series Data Processing: Break the data into chunks; you mention 600MB, which isn't that big at all given that Cloud Run memory maxes out at 32GB.

[0] https://news.ycombinator.com/item?id=45094387




Wow, that’s some seriously sophisticated stuff - it’s not that often you see a heap used in typical production code (outside of libraries)!

Your first example definitely gives me merge-sort vibes - a really clean way to keep things ordered across multiple sources. The second and third scenarios are a bit beyond what I’ve tackled so far, but super interesting to read about.

This also reminded me of a WIP PR I drafted for rill (probably too niche, so I’m not sure I’ll ever merge it). It implements a channel buffer that behaves like a heap - basically a fixed-size priority queue where re-prioritization only happens for items that pile up due to backpressure. Maybe some of that code could be useful for your future use cases: https://github.com/destel/rill/pull/50


Hah, not sure about “production” - I'm currently between jobs and taking advantage of that to work on a docker/k8s/file TUI log viewer.

I am using those techniques, respectively, for loading backups (I store each container's log in a separate file inside a big zip file, which allows concurrent reading without unpacking) and for servicing the various log-producing goroutines (which use the docker/k8s APIs as well as fsnotify for files), since I allow creating “views” of containers that consequently need to aggregate in order. The TUI itself, built with tview, runs in a separate goroutine at a configurable FPS, reading from these buffers.

I have things mostly working; the latest significant refactoring was introducing the btree-based reading after noticing the “fix the order” stalls were too bad, and I am planning to do a Show HN when I'm finished. It has been a lot of fun going back to solo-dev greenfield work after many years of architecture-focused work.

I definitely love Go, but despite being careful and having access to great tools like rr and dlv in GoLand, it can sometimes get difficult to debug deadlocks, especially when mixing channels and locks. I have found this library quite useful for chasing down deadlocks in some scenarios: https://github.com/sasha-s/go-deadlock


Hi everyone, I’m the author of the article. Happy to answer any questions or discuss concurrency patterns in Go. Curious how others tackle such problems.


I might be too late for you to see this, but I'm curious why your final example requires the function "f" to receive the canWrite channel. I must be missing something. Couldn't the OrderedLoop signature be:

  func OrderedLoop[A, B any](in <-chan A, done chan<- B, n int, f func(a A))
Instead of:

  func OrderedLoop[A, B any](in <-chan A, done chan<- B, n int, f func(a A, canWrite <-chan struct{}))
Or is there a reason that "f" can't be wrapped in an anonymous function to handle the blocking logic?


You're not late. Just yesterday I configured new-reply notifications via RSS.

Typically, function "f" does two things: 1) performs calculations (this can be parallelized); 2) writes results somewhere (this must happen sequentially and in the correct order).

Here's the typical OrderedLoop usage example from the article:

  OrderedLoop(in, out, n, func(a A, canWrite <-chan struct{}) {
   // [Do processing here]
   
   // Everything above this line is executed concurrently,
   // everything below it is executed sequentially and in order
   <-canWrite
   
   // [Write results somewhere]
  })

Without "canWrite", the entire body of "f" would be executed either fully concurrently or fully sequentially. With it, we get this dual behavior, similar to critical sections protected by mutexes.

It's worth mentioning that OrderedLoop is a low-level primitive. User-level functions like OrderedMap, OrderedFilter and others do not need the "canWrite" channel to be exposed.


chan chan Foo seems like a cool trick; looking forward to using it in my code. Thanks for the idea.

PS. I realize you present an even better solution; still, the first version seems like a nice thing to have in the toolbox.


Thanks. This replyTo pattern is very similar to promises in other languages.


> Curious how others tackle such problems.

What do you think about the order-preserving simplicity of Java?

  List<Input> inputs = ...;

  List<Output> results = inputs.parallelStream()
                             .map(this::processTask)
                             .collect(toList());
If you want more control or have more complex use cases, you can use an ExecutorService of your choice, handle the futures yourself, or get creative with Java's new structured concurrency.


Their planned semantics don't allow for that - there's no backpressure in that system, so it might race ahead and process up to e.g. item 100 while still working on item 1.

If everything fits in memory, that's completely fine. And then yeah, this is wildly overcomplicated: just use a WaitGroup and a slice, write each result into its slice index, and wait for everything to finish - that matches your Java example.

But when it doesn't fit in memory, that means you have unbounded buffer growth that might OOM.


I haven’t used Java for about a decade, so I’m not very familiar with the Streams API.

Your snippet looks good and concise.

One thing I haven’t emphasized enough in the article is that all algorithms there are designed to work with potentially infinite streams.


Often in Go I’ll create some data structure, like a map, to hold the new value keyed by the original index (basically a for loop with goroutines inside that close over the index value) - then I just reorder them after waiting for all of them to complete.

Is this basically what Java is doing?

I think that maybe the techniques in this article are a little more complex, allowing you to optimize further (basically continue working as soon as possible instead of just waiting for everything to complete and reordering after the fact) but I’d be curious to know if I’ve missed something.


It's a reasonable solution. The problem with it is mentioned in the article: you necessarily get worst-case memory usage, because you have to store everything in the map first. If you don't have too much to store, it will work.


Hi everyone. I built a tiny web game where the goal is to pick the lowest number that nobody else picked. It runs once per day — if someone picks the same number as you, you’re both eliminated.

Since the mechanics are simple, I focused on feel: vintage-poster styling, small animations, and a snappy number-picking interaction.

By day I’m a Go backend developer, so this pushed me into much more frontend/design than usual.

I’d love feedback on both the gameplay and UX. I’m also genuinely curious: how large can the smallest winning number get as the player count grows?

Stack: Nuxt, Supabase, and Cloudflare Workers


Author here. While the article uses database updates as the example, the same pattern works great for any high-frequency API calls or operations that could benefit from batching. The goal was to make the solution completely transparent - developers just call a normal-looking function, and the batching happens automatically under the hood. I've used several variations of this pattern in production, and it's been particularly effective at reducing costs and latency.


Author here. Recently started a personal dev blog - happy to answer any questions.


Thank you. I "stole" the name from Scala, which has a similar value+error type. Maybe in the context of rill a better name would have been "Item".


I have to be honest - I hadn't ever heard of it. I just checked and found it's very mature and popular, though it seems to have had no commits in the last 2 years.

After a quick scan, I'd highlight these two differences:

- Rill is type-safe and uses generics

- Rill does not try to hide channels; it just provides primitives to transform them

