Hacker News | listeria's comments

presumably the derivation would involve a cryptographically secure, non-reversible function so as to not compromise the secret should one of them be leaked.


It's actually seconds since epoch, not milliseconds. Here's a small program to verify it:

  date -u --date='2038-01-19T03:14:08' +%s | perl -ne 'printf "%#x\n", $_'
It is also mentioned in perldoc Time::Piece [1], as a warning for people whose build of perl uses 32-bit integers for time.

[1]: https://perldoc.perl.org/Time::Piece#Use-of-epoch-seconds


something like this would do it:

  package main
  
  import (
    "sync"
    "time"
  )
  
  type WaitGroup struct {
    sync.WaitGroup
  }
  
  func (wg *WaitGroup) Go(fn func()) {
    wg.Add(1)
    go func() {
      defer wg.Done()
      fn()
    }()
  }
  
  func main() {
    var wg WaitGroup
    wg.Go(func() { time.Sleep(1 * time.Second) })
    wg.Wait()
  }


This is really amazing, thank you so much!


It's called struct embedding; there's an article about it on Go by Example if you're interested: https://gobyexample.com/struct-embedding


Thank you @listeria, today I learned about struct embedding lol!


I rediscover this about once a year and am always so happy when I do.


They link to an old article [1] that was featured in HN [2] somewhat recently, in which there's a workaround for older standards with regards to typeof.

[1]: https://danielchasehooper.com/posts/typechecked-generic-c-da...

[2]: https://news.ycombinator.com/item?id=44425461


They mention using this as the backing array for a power-of-two-sized hash table, but I don't think it would be very useful considering that the hash table won't provide stable pointers, given that you would need to rehash every element as the table grows. Even if you just wanted to reuse the memory, rehashing in-place would be a PITA.


I think they mentioned it's for an arena, where stability is necessary. You might happen to use said arena for a hash table.


From TFA

> This page is a riff on Patrick Collison's list of /fast projects.

Maybe as a HN post, but the blog is in response to https://patrickcollison.com/fast


This is the first I've heard of using an open pipe to poll for subprocess termination. Don't get me wrong, I don't hate it, but you could just as easily have a SIGCHLD handler write to your pipe (or do nothing, since poll(2) will fail with EINTR), and you don't have to worry about the subprocess closing the pipe or considering it some weird stddata fd like tree does here.


`SIGCHLD` is extremely unreliable in a lot of ways; `pidfd` is better (but Linux-specific), though it doesn't handle the case of wanting to be notified of all grandchildren's terminations after the direct child dies early.


In what ways is it unreliable? You're notified when a child is stopped, terminated, or continued, and you can make it so that you're only notified of termination using SA_NOCLDSTOP.

How is `pidfd` better, why would you use it instead of a SIGCHLD handler writing to a pipe?


Imagine you are a library that needs to fork (and exec, of course) some child processes to do its work. Since you're a library, you can't just set a SIGCHLD handler (for instance, the main application could have set SA_NOCLDWAIT for itself, and let's not even get into the question of which thread will actually receive the signals).

Polling waitid(2) for each of them would require a separate thread per child process, unless you can put all of your children, current and future, into a single separate process group — which you normally can't. I guess you could try to use a single thread to do waitid(2) with WNOWAIT for all child processes and filter out only those you're interested in... but you'll probably keep getting the ones you're not interested in over and over again, until the main thread reaps them.

There are some improvements to this I can think of, but honestly, all the traditional waitXXX(2) functions simply are not properly composable.


This is actually a well-known technique, often called the self-pipe trick. There's a good overview in the following SO answer from Rich Felker, musl's author:

https://stackoverflow.com/questions/3703013/what-can-cause-e...


The technique described in the SO answer doesn't really apply here, since the write end of the pipe would be closed on exec in that case. Whereas in this case they're waiting for it to be closed after the child dies.


Sure, it may not apply in this specific case. But you said “This is the first I've heard of using an open pipe to poll for subprocess termination” and I was just pointing out that this is a well known technique.


have you heard of script/scriptreplay?

  script --log-timing file.tm --log-out script.out
  # do something in a terminal session ...

  scriptreplay --log-timing file.tm --log-out script.out
  # replay it, possibly pausing and increasing/decreasing playback speed


weirdly enough, the provided implementation of rand_float() actually generates numbers in the interval [0, 1).

I haven't read the implementation of _generate_canonical_generic(), but the following special case in generate_canonical() applies here:

  if constexpr (prng_is_bit_uniform && sizeof(T) == 4 && sizeof(generated_type) == 8) {
    return (static_cast<std::uint32_t>(gen()) >> exponent_bits_32) * mantissa_hex_32;
  }
which boils down to:

  return (static_cast<std::uint32_t>(gen()) >> 8) * 0x1.p-24;
that is, a number in the interval [0, 2^24) divided by 2^24.


Thank you for noticing! Turns out [0, 1] is an artifact of the old documentation, carried over from the time when the generic GCC / clang approach used to produce occasional 1's due to some issues in the N4958 specification. This is fixed in a new commit.

For floats there are essentially 3 approaches that are selected based on the provided range & PRNG:

  1) Shift + multiply (like `(rng >> 11) * 0x1.0p-53`)
  2) "Low-high" from the paper by J. Doornik
  https://www.doornik.com/research/randomdouble.pdf
  3) Canonical GCC implementation
All of these generate values in the [0, 1) range, which is now properly reflected in the docs. In most actual cases the 1st method is the one selected.


In this case I would suggest using the high bits of the RNG output when generating a float, since some generators have better entropy around the high bits.

So when you're generating floats with a 64-bit generator, instead of masking out the high bits with the static_cast, you may want to use the following:

  return (gen() >> 40) * 0x1.0p-24;


he's talking about how in C++ you write the access specifier once, and all subsequent declarations have the same access, i.e.:

  class A {
  public:
    int a;
    float b;
  private:
    ...
  };


Ok, thanks - but surely this is the most sensible and readable way of doing things? Otherwise, why not make every member begin with "class A"? There probably is some language that requires this, I guess :-) Isn't language design wonderful.

