
After 10+ years in the industry, I can safely say that almost nobody knows or cares about HTTP caching. It’s sad.

I thought it was something like Liquid Haskell...


If we embraced REST as Roy Fielding envisioned it, we wouldn't be having this conversation, or any similar one. REST doesn't expose identifiers, it only exposes relationships. Identifiers are an implementation detail.

I'm thinking of using C++ for a personal project specifically for the lambdas and RAII.

I have a case where I need to create a static templated lambda to be passed to C as a pointer. Such a thing is impossible in Rust, which I considered at first.


Yeah, Rust closures that capture data are fat pointers { fn*, data* }, so you need an awkward dance to make them thin pointers for C.

    let mut state = 1;
    let mut fat_closure = || state += 1;
    let (fnptr, userdata) = make_trampoline(&mut &mut fat_closure);

    unsafe {
        fnptr(userdata);
    }

    assert_eq!(state, 2);

    use std::ffi::c_void;
    fn make_trampoline<C: FnMut()>(closure: &mut &mut C) -> (unsafe fn(*mut c_void), *mut c_void) {
        let fnptr = |userdata: *mut c_void| {
            let closure: *mut &mut C = userdata.cast();
            (unsafe { &mut *closure })()
        };
        (fnptr, closure as *mut _ as *mut c_void)
    }
    
It requires a userdata arg for the C function, since there's no allocation or executable-stack magic to give a unique function pointer to each data instance. OTOH it's zero-cost. The generic make_trampoline inlines code of the closure, so there's no extra indirection.


> Rust closures that capture data are fat pointers { fn, data }

This isn’t fully accurate. In your example, `&mut C` actually has the same layout as usize. It’s not a fat pointer. `C` is a concrete type and essentially just an anonymous struct with FnMut implemented for it.

You’re probably thinking of `&mut dyn FnMut`, which is a fat pointer pairing a pointer to the data with a pointer to a vtable.
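This thin-vs-fat distinction can be checked directly; a small sketch (names are illustrative) comparing the size of a reference to a concrete closure type against a reference to the trait object:

```rust
use std::mem::{size_of, size_of_val};

// A reference to a concrete (anonymous) closure type is thin: one word.
// A reference to `dyn FnMut` is fat: data pointer + vtable pointer.
fn pointer_sizes() -> (usize, usize) {
    let mut x = 0u32;
    let mut f = || x += 1;
    let thin = size_of_val(&(&mut f));                    // &mut ConcreteClosure
    let fat = size_of_val(&(&mut f as &mut dyn FnMut())); // &mut dyn FnMut
    (thin, fat)
}

fn main() {
    let (thin, fat) = pointer_sizes();
    assert_eq!(thin, size_of::<usize>());
    assert_eq!(fat, 2 * size_of::<usize>());
}
```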

So in your specific example, the double indirection is unnecessary.

The following passes miri: https://play.rust-lang.org/?version=nightly&mode=debug&editi...

(did this on mobile, so please excuse any messiness).
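For concreteness, a single-indirection version (not the linked playground code, just a sketch along the same lines) passes `&mut C` itself as the userdata, since it's already a thin pointer when `C` is concrete:

```rust
use std::ffi::c_void;

// Single level of indirection: `&mut C` is thin for a concrete closure
// type `C`, so it can be cast straight to `*mut c_void` without the
// extra `&mut &mut C` wrapper.
fn make_trampoline<C: FnMut()>(closure: &mut C) -> (unsafe fn(*mut c_void), *mut c_void) {
    let fnptr = |userdata: *mut c_void| {
        // Recover the concrete closure type and call it.
        let closure: *mut C = userdata.cast();
        (unsafe { &mut *closure })()
    };
    (fnptr, closure as *mut C as *mut c_void)
}

fn main() {
    let mut state = 1;
    let mut closure = || state += 1;
    let (fnptr, userdata) = make_trampoline(&mut closure);
    unsafe { fnptr(userdata) };
    assert_eq!(state, 2);
}
```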


This is a problem for all capturing closures, though, not just Rust's. A pure fn-ptr arg can't carry state, and if there's no user-data arg, there's no way to make a trampoline. If C++ were calling a C API with the same constraint, it would have the same problem.

Well, capturing closures that are implemented like C++ lambdas or Rust closures, anyway. The executable-stack crimes do produce a thin fn-ptr with state.


If Rust had a stable ABI specifying where the data* goes in the function arguments (presumably first?), you wouldn't need to do anything, provided it matched the C code's expected function signature, including the user-context arg.

Unfortunately a lot of existing C APIs won't have the user arg in the place you need it, it's a mix of first, last, and sometimes even middle.


I know about this technique but it uses too much unsafe for my taste. Not that it's bad or anything, just a personal preference.


It can be done in 100% safe code as far as Rust is concerned (if you use `dyn Fn` type instead of c_void).

The only unsafe here is to demonstrate it works with C/C++ FFI (where void* userdata is actually not type safe)
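A sketch of that fully safe variant (assuming the callback signature is under our control, i.e. not a real C ABI with `void*`):

```rust
// Safe sketch: the "userdata" is a fat `&mut dyn FnMut()` instead of a
// raw `*mut c_void`, so no unsafe is needed anywhere.
fn trampoline(userdata: &mut dyn FnMut()) {
    userdata()
}

fn main() {
    let mut state = 1;
    let mut closure = || state += 1;
    // The (callback, userdata) pair a C-style API would store:
    let callback: fn(&mut dyn FnMut()) = trampoline;
    callback(&mut closure);
    assert_eq!(state, 2);
}
```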


Yes, but my problem wasn’t with the user data pointer; it was that I needed a STATIC generic lambda. Static because the C library then forks and continues to call the lambda in the new process, but I also do type-based conversions in it.


In Rust, could you instead use a generic struct wrapping a function pointer, along with #[repr(C)]?


I’ve yet to see a spreadsheet workflow successfully replaced by something else.


Programming in a spreadsheet is an anti-pattern. Does anyone review your workflow? Write tests for it? Use a real programming language, or at least a notebook.


Streamlit apps or similar are doing a great job at this where I'm at.

As simple to build and deploy as Excel, but with the right data types, the right UI, the right access and version control, the right programming language that LLMs understand, the right SW ecosystem and packages, etc.


Are they actually as simple to deploy as Excel? My guess would be that most Streamlit apps never make it further than the computer they're written on.


If you have the right tooling (e.g. Streamlit.io) then yes, literally.

To 'deploy' an Excel file, I go to Office365, create my Excel file, and hit share.

To 'deploy' a Streamlit app, I go to Streamlit, write my single-file Python code (it can be a one-liner), and hit share.


Maybe the strategy in those cases is to augment the core spreadsheet with tools that can unit test it, broadcast changes, etc.


Well, take into account the corporate and user interest that Linux sees. FreeBSD is a niche desktop OS. We can’t expect everything to work. The easiest way forward is for me and you to start contributing, and things might change for the better.


I don’t disagree with that, but I don’t see why this matters to the end user.


Wait until the author discovers FreeBSD’s Capsicum. I believe it’s superior to most of the APIs provided by other major OSs.


Like most things in web development, this is backwards. Applications should be generated from the spec, not the other way around.


Care to explain the advantage of starting off with the spec rather than with code?


Because you can iterate on the spec with all the stakeholders without ever writing a line of business logic. There are tools that can create a dummy web server from the specification, and you can build clients without implementing business logic. I thought the advantages of spec-first development were obvious, but I hope I helped.
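For illustration (all names here are hypothetical), even a tiny OpenAPI fragment is enough for mock-server and client-generator tooling to work from before any business logic exists:

```yaml
openapi: 3.0.3
info:
  title: Orders API        # hypothetical service
  version: 0.1.0
paths:
  /orders/{id}:
    get:
      summary: Fetch one order
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string }
      responses:
        "200":
          description: The requested order
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: string }
                  status: { type: string }
```

Tools like Prism or openapi-generator can serve mock responses or emit client stubs from a file like this, so stakeholders can poke at the API shape before anyone writes a handler.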


Thanks. The advantages of having a spec are obvious to me as well. I'm just not sure why building business logic that generates the spec (which generates the client) is a bad idea. That way you still have a single source of truth, and a spec.


What's wrong with using any BSD? Can't people use whatever suits their needs?


Of course, I'm genuinely curious why BSDs are more popular as firewalls.


Because of pf[1]. It's just a very capable firewall with a pleasurable configuration language.

[1] https://www.openbsd.org/faq/pf/


Agreed, `pf` is a delight to use.

Borrowing a demonstration from https://srobb.net/pf.html

    tcp_pass = "{ 22 25 80 110 123 }"
    udp_pass = "{ 110 631 }"
    block all
    pass out on fxp0 proto tcp to any port $tcp_pass keep state
    pass out on fxp0 proto udp to any port $udp_pass keep state

Note that the last matching rule wins, so you put your catch-all, "block all", at the top. Then fxp0 is the network interface. So they're defining where traffic can go from the machine in question: to any destination, as long as the destination port is 22, 25, 80, 110, or 123 for TCP, or 110 or 631 for UDP.

    <action> <direction> on <interface> proto <protocol> to <destination> port <port> <state instructions>


One can further parametrize things with, e.g.,

    int_if = "fxp0"

The BSDs still tend to use device-specific names versus the generic ethX or location-specific ensNN, so if you have multiple interfaces, distinguishing internal from external may help the next person who reads your config grok it.
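Putting the macro to use, the earlier ruleset might become (a sketch, with fxp0 assumed to be the internal interface):

    int_if = "fxp0"
    tcp_pass = "{ 22 25 80 110 123 }"
    block all
    pass out on $int_if proto tcp to any port $tcp_pass keep state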


doing the same thing with nftables is not really complicated either


The documentation on the BSDs, and OpenBSD in particular, is generally high quality.


How does rewriting something internally make Racket not mature? It sounds like refactoring to me, and with an extensive test suite there's nothing to be hysterical about.


Maybe I just have a different working definition of these words. To me, "mature" means "fully developed" and "polished" means "achieved a high level of refinement". Rewriting it all to introduce a major feature that fills a longstanding hole in the language doesn't say "mature and polished" to me, because many bugs are often introduced into a codebase during a major rewrite, despite extensive test suites, especially at the interfaces between features. People typically prefer a mature codebase to one that's just been rewritten, precisely because the rewrite hasn't been vetted over years. "Mature rewrite" sounds like an oxymoron to me, and I guess no one else agrees, but I find it strange. That is all.


I think they don't always communicate well with industry practitioners, and your reactions are great evidence of that.

Racket is led by professors, and (as is sometimes the case in systems research) some of them are very highly skilled software developers, well above the HN average. But they have not been working in industry, and some have never worked in industry, so they don't always know what notes to hit, and they don't always know the current subtleties of practice.

My best example of this is when someone kept saying the platform was "batteries included". My reaction was, my god, no, please don't say that: the first time the wrong person sees that, invests time with that expectation, and finds all the ways that is absolutely not true by industry convention, they will rip the ecosystem a new one.

Set expectations properly, and you attract the right people, who will love it, and they will also disproportionately be great programmers.

That said, the software engineering quality situation is much better than the impression you seem to have. They've done a very solid job of rehosting Racket internals, and of generally maintaining backward-compatibility over the years. Much better than Python, for example. (Also, Racket docs are usually much better than most ecosystems I have used in recent years.)


The rewrite started in 2017.

Fears about refactoring introducing bugs are fine and valid - but after eight years, haven't really happened. Seems the extensive test suite did its job.

This isn't a case of Python 2 v 3. Packages weren't broken en masse. The API remained stable.

If anything, the rewrite has proved that it is mature, because they could perform a refactor without breaking everyone's day-to-day.


I agree. I remember very few bugs caused by the rewrite, and I don't remember any recent ones.

For example, I found a bug running the tests of the r7rs package; it was simplified to a bug in "plain" Racket and fixed 3 days after the initial report, in June 2019: https://github.com/racket/racket/issues/2675 Note that at that time, the default version of Racket was the old one (before the rewrite).


Nobody would consider Chrome or Firefox immature or lacking polish because they have replaced entire compilers several times over the years. I don't have an exact count, but they probably do this every 3-5 years, which puts them way ahead of Racket.

I'd also note that Chez Scheme was a commercial implementation bought and open-sourced by Cisco. It wasn't something they threw together. And it's a complete R6RS Scheme implementation they're building on, instead of rolling their own in C. Coding against a stable Scheme API has to be easier and less buggy than what they had before (not to mention Chez being much faster at a lot of things).


I think you should read the old posts on why Racket transitioned its guts from primarily C to Chez Scheme. It would save a lot of time in this discussion if you became familiar with that transition.

The short story is the same as anything written in C: it's an unwieldy language.

