Hacker News | pornel's comments

UK regulations forced banks to create APIs for money transfers, and UK banks now have instant free transfers: https://en.wikipedia.org/wiki/Faster_Payments

The USA is just bad at governing. It tries not to tell corporations what to do, so it ends up with toothless, half-assed laws that do nothing except serve as a tool for regulatory capture.


That's just an overcomplicated way of doing pre-authorization.

Talk about decentralisation and anti-deplatforming makes no sense here. Concerts are a physical thing happening in the real world, organized by specific "centralized" entities. Venues can refuse to host an artist; an artist can "rug pull" by refusing to perform. Imaginary tokens can't do anything about that, and we already have laws, contracts, and currencies that have dealt with this for as long as concerts have existed.


• Card Network Rules: Payment processors and card networks have rules about the use of pre-authorizations. Excessive or inappropriate use can be flagged, potentially leading to penalties, holds on your account, or even termination of your merchant account.

• Customer Experience: Imagine a customer who participates in several auction bids and has a pre-authorization placed for each bid. This can lead to:

  • Blocked Funds: A large amount of their credit limit could be temporarily blocked, making it difficult for them to use the card for other transactions.

  • Confusion: Customers might be confused about multiple holds on their account, leading to inquiries and chargebacks.

  • Negative Experience: A poor customer experience can hurt your reputation.

• Risk of Expiration & Release: If pre-authorizations expire and the auction is not completed, you might have to re-authorize, which can be disruptive and annoying to the customer.

• False Availability of Funds: Since not all bidders will win, placing holds on all bidders' accounts gives a misleading view of how much funding you might actually have available to you.

• Chargebacks & Disputes: Confused customers with multiple pre-authorizations are more likely to dispute charges, which can hurt your merchant standing and reputation.

• Processor Scrutiny: A merchant running a high volume of pre-authorizations relative to actual sales could be perceived as risky. Processors will scrutinize businesses with higher dispute rates and high pre-authorization-to-capture ratios.

All of that can be custom for the industry, the same way air travel has had custom rules in credit card processing since forever.

"Can be" … oh, we may negotiate with the middlemen not to deplatform us. How nice. Blockchain doesn't solve any problems here, in the same way that giving people universal single-payer health insurance didn't solve any problems, since you can always find a good employer who will just treat you well.

I find they're useful for the first 100 lines of a program (toy problems, boilerplate).

As the project becomes non-trivial (>1000 lines), they get increasingly likely to get confused. They can still seem helpful, but they may be confidently incorrect, which makes checking their outputs harder. Eventually silly bugs slip through and cost me more time than all the time LLMs saved previously.


You're making a mistake assuming that the push for HTTPS-only Web is about protecting the content of your site.

The problem is that the mere existence of HTTP is a vulnerability. A user following any insecure link to anywhere allows MITM attackers to inject arbitrary content and redirect to any URL.

These can be targeted attacks against vulnerabilities in the browser. These can be turning browsers into a botnet like the Great Cannon. These can be redirects, popunders, or other sneaky tab manipulation for opening phishing pages for other domains (unrelated to yours) that do have important content.

Your server probably won't even be contacted during such an attack. Insecure URLs to your site are the vulnerability, so don't spread URLs that disable network-level security.


The numbers are sent in this peculiar format because that's how they are stored in certificates (DER encoding in X.509 uses big-endian binary), and that's the number format the OpenSSL API uses too.

It looks silly for a small value like 65537, but the protocol also needs to handle numbers that are thousands of bits long. It makes sense to consistently use the same format for all numbers instead of special-casing small values.
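To make the "big-endian binary" part concrete, here's a minimal sketch of how a positive integer like 65537 ends up as bytes in a DER INTEGER. This is an illustration, not a full DER implementation, and `der_integer` is a made-up helper name (real code would use an ASN.1 library):

```rust
/// Encode a positive integer as a DER INTEGER (sketch: short-form
/// length only, so it covers values up to a handful of bytes).
fn der_integer(n: u64) -> Vec<u8> {
    // Minimal big-endian magnitude: strip leading zero bytes.
    let mut bytes: Vec<u8> = n
        .to_be_bytes()
        .iter()
        .copied()
        .skip_while(|&b| b == 0)
        .collect();
    if bytes.is_empty() {
        bytes.push(0); // zero is encoded as a single 0x00 byte
    }
    // DER INTEGERs are signed: a set top bit would read as negative,
    // so positive values get a 0x00 prefix in that case.
    if bytes[0] & 0x80 != 0 {
        bytes.insert(0, 0);
    }
    let mut out = vec![0x02, bytes.len() as u8]; // tag INTEGER, length
    out.extend(bytes);
    out
}

fn main() {
    // 65537 = 0x01_00_01 → tag 0x02, length 3, then the 3 bytes.
    assert_eq!(der_integer(65537), [0x02, 0x03, 0x01, 0x00, 0x01]);
}
```

The same minimal big-endian byte string is what the protocol ships, just wrapped differently.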


For very big numbers (which can appear in these fields), generating and parsing a base-10 decimal representation is far more cumbersome than using their binary representation.

The DER encoding used in TLS certificates uses big-endian binary, and the OpenSSL API wants big-endian binary too.

The format used by this protocol is a simple one.

It's almost exactly the format that is needed to use these numbers, except JSON can't store binary data directly. Converting binary to base 64 is a simple operation (just bit twiddling, no division), and it's easier than converting arbitrarily large numbers between base 2 and base 10. The 17-bit value happens to be an easy one, but other values may need thousands of bits.

It would be silly for the sender and recipient to need to use a BigNum library when the sender has the bytes and the recipient wants the bytes, and neither has use for a decimal number.
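To back up the "just bit twiddling, no division" claim, here's a minimal base64url encoder sketch (unpadded, as JWK uses). `b64url` is an illustrative helper, not a real library API; production code would use a base64 crate:

```rust
const ALPHABET: &[u8] =
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

/// Unpadded base64url: pure shifts and masks, no arithmetic on the
/// value itself — which is why it scales to thousand-bit numbers.
fn b64url(bytes: &[u8]) -> String {
    let mut out = String::new();
    for chunk in bytes.chunks(3) {
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = (u32::from(b[0]) << 16) | (u32::from(b[1]) << 8) | u32::from(b[2]);
        // 3 input bytes make 4 output chars; a short tail makes fewer.
        for i in 0..chunk.len() + 1 {
            let idx = (n >> (18 - 6 * i)) & 63;
            out.push(ALPHABET[idx as usize] as char);
        }
    }
    out
}

fn main() {
    // 65537 = 0x01_00_01: minimal big-endian form is 3 bytes.
    let e: u32 = 65537;
    let minimal: Vec<u8> = e.to_be_bytes().iter().copied().skip_while(|&b| b == 0).collect();
    assert_eq!(minimal, [0x01, 0x00, 0x01]);
    assert_eq!(b64url(&minimal), "AQAB"); // the familiar "e" in JWKs
}
```

Note there's no bignum math anywhere: the thousand-bit modulus goes through exactly the same byte-chunking loop as the 17-bit exponent.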


OpenStreetMap often has building outlines, but not building height. This would be a nice way to augment that data for visualisations (remember: OSM doesn't take auto-generated bot updates, so don't submit that to the primary source).


It does have building height. That's why Flight Simulator 2020 had those weird spikes all over the place: people putting "999" (or similar) as the height on OSM.


It varies. New public APIs or language features may take a long time, but changes to internals and missed optimizations can be fixed in days or weeks, in both LLVM and Rust.


A couple of things that are commonly misunderstood or unappreciated about this:

• Uninitialized bytes are not just some garbage random values, they're a safety risk. Heartbleed merely exposed uninitialized buffers. Uninit buffers can contain secrets, keys, and pointers that help defeat ASLR and other mitigations. As usual, Rust sets the bar higher than "just be careful not to have this bug", and therefore the safe Rust subset requires making uninit impossible to read.

• Rust-the-language can already use uninitialized buffers efficiently. The main issue here is that the Rust standard library doesn't have APIs for I/O using custom uninitialized buffers (only for the built-in Vec, in a limited way). These are just musings how to design APIs for custom buffers to make them the most useful, ergonomic, and interoperable. It's a debate, because it could be done in several ways, with or without additions to the language.


> Uninitialized bytes are not just some garbage random values, they're a safety risk.

Only when read. Writing to "uninitialized" memory[1] and reading it back is provably secure[2], but doesn't work in safe Rust as it stands. The linked article is a proposal to address that via some extra complexity that I guess sounds worth it.

[1] e.g. using it as the target of a read() syscall

[2] Because it's obviously isomorphic to "initialization"


Obviously, initialized memory isn't uninitialized memory any more.

There are fun edge cases here. Writing to memory through `&mut T` makes it initialized for T, but its padding bytes become de-initialized (that's because the write can be a memcpy that also copies the padding bytes from a source that never initialized them).


Note that if you have a `&mut T` then the memory must already be initialized for T, so writing to that pointer doesn't initialize anything new (although as you say it can deinitialize bytes, but that only matters if you use transmute or pointer casting to get access to those padding bytes somehow).


ADHD meds contain controlled substances, and there's an annual production quota for them set by the DEA. The quota is intentionally set very tightly, so it's easy to hit it when the demand increases even slightly above projections.

Most international pharmaceutical companies have some presence in the US, so the US quota has a world-wide effect.

Additionally, prescriptions are for very specific doses of specific variants of the meds. Because it's a controlled substance, pharmacies aren't allowed to use any substitutes (not even something common-sense like dispensing 2x30mg for a 60mg prescription). This makes shortages happen even before all of the quota runs out, because some commonly used doses run out sooner.


Okay so the DEA is causing people with legitimate prescriptions to not have access to medication.

Are they doing anything about that? Seems like a very tractable problem.


Why would they do anything about that? It’s their job to set and enforce quotas, not to ensure access. From their perspective, I’d imagine that tight quotas make them feel reassured that they’ve got a lid on diversion concerns.

It does sound like the quota-setting system was designed for an era where the “legitimate” growth wasn’t on the order of “10% a year for 15 years”:

https://www.additudemag.com/adderall-shortage-dea-stimulants...


You're right that the DEA's quota system prioritizes diversion control over access, and it's clearly stuck in a bygone era, unfit for today's demand growth. But it's baffling that Big Pharma, with its lobbying muscle, hasn't pushed Congress to modernize this bottleneck. Surely they'd profit from looser quotas.

Instead of hoping for a Trump EO to nuke the DEA (literally or figuratively), why not redistribute Controlled Substance Act enforcement? Agencies like the FBI or HHS already handle overlapping domains. The DEA's rigid gatekeeping, especially on research and quotas, stifles innovation more than it curbs abuse.


Or if the court overturned Wickard v. Filburn. The federal power to regulate substances like this at all rests on a butterfly-effect reading of the Commerce Clause.


There are alternatives that are not on the EU list of controlled substances, for example:

https://en.wikipedia.org/wiki/Lisdexamfetamine


Lisdexamfetamine is still a C2 in the US; as somebody with a script for it, the headache of pharmacies running out is real.


Vyvanse never hit the same the one month my doc wanted me to try it out :(

