How long until it comes with a DRM AI and then my anti-DRM AI will have to fight it in a virtual arena (with neon lights and killer soundtrack, of course)?

Completely independent of bandwidth, higher frequencies also fall off faster. That's bad if you are trying to cover max space but good if you are trying to avoid noisy neighbors.
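
For a rough sense of scale, here's a back-of-the-envelope sketch using the standard free-space path loss formula (the 100 m distance and the frequencies are just illustrative, not from the parent comment):

    import math

    def fspl_db(distance_m, freq_hz):
        # Free-space path loss in dB: 20*log10(4*pi*d*f/c)
        c = 3e8  # speed of light, m/s
        return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

    # The same 100 m link at three WiFi-ish frequencies
    for ghz in (2.4, 5.0, 6.0):
        print(f"{ghz} GHz: {fspl_db(100, ghz * 1e9):.1f} dB")
    # 2.4 GHz: ~80 dB, 5 GHz: ~86 dB, 6 GHz: ~88 dB

So 6 GHz eats roughly 8 dB more than 2.4 GHz over the same distance, before you even account for walls, which attenuate higher frequencies more.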

The most efficient way to extract money from people is to sell off the spectrum to the highest bidding rent seeker, I agree.

As for most efficient use of the resource, well, consulting my spectrum analyzer, ISM bands are winning by a mile and we should want more of them.


Sure, obviously giving it to WiFi and then installing town-wide free WiFi would be the absolute most efficient option, but I'm trying to stay realistic.

Wi-Fi is not a very efficient way to cover a whole town, due to its inherently low range (at least when involving consumer devices on one end). You'd be spending a lot of resources on base stations that never see any usage.

WiFi literally covers basically all of the urban US already; I'm not understanding this point.

It's true that there's no single service one can sign up for and you have to bounce around cafe and Xfinity and whatever "Free WiFi!" networks are being offered. Which is definitely annoying and it's nice to have a single company sell you service in a neatly packaged "phone" product.

But again, trying to phrase that as a technical point is ridiculous. Free bands are just plain better, technically. You get more data to more people for less money using open spread spectrum protocols than you do with dedicated bands. Period.


I never said anything about free or government-run WiFi, just about auctioning off the spectrum. Companies that build out the infrastructure should be able to charge for access, but they shouldn't be able to prevent others from competing by paying the government for exclusivity. That's a scam.

It's a technical/commercial necessity to have exclusive use of the spectrum in a given area. If you don't believe me, why doesn't every city in the world have a paid WiFi network? With 5 GHz it should be faster than typical 4G/5G speeds, and it only needs lamppost-level APs, pretty similar to the microcells that carriers deploy but an order of magnitude cheaper. Instead, mobile carriers would rather buy 3 or 6 GHz spectrum that only ever gets used in cities anyway, so why not WiFi in the cities?

ISM is a tragedy of the commons: make it free, let anyone do anything, and it becomes junk. Carriers need something they have exclusive use of.


ISM is thriving. The only tragedy is that carriers haven't figured out how to charge rents on it, and that's a tragedy for them; it's a spectacular success for everyone using it for free.

Carriers don't need 6GHz for backhaul. They have fiber and cable and (other) microwave. Not to mention the ability to shape their own links with antennas and beam forming and do a good job of it rather than a "default job." What they don't have -- and shouldn't be given under any circumstances -- is the excuse to build a moat in the bustling public park.


At the very least, I don't see a need to grant exclusivity across an entire country. E.g., from my home, I can see 5 WiFi networks including mine. Of those, only 1 other than mine has a 5GHz signal that reaches me, and everything other than mine is in the -80 to -95 dBm range. There's simply no need to reserve short-range signals in the suburbs in the way that there is for blocks of giant apartment buildings, each with 100s of networks on top of each other.

On top of that, mobile data is quite expensive in the US, so the only time I have data when out and about is... when I'm on free public wifi networks (which is most of the time). So I don't see much reason to give more of a monopoly to mobile providers. I honestly don't even see a use-case for cell service outside of super rural areas; the only reason I even have it is because it's necessary for MFA. Cell providers are legacy tech as far as cities are concerned IMO.

It'd make way more sense to me to let WiFi have more bands with stricter limits on power levels, and any exclusivity should go to municipalities, who can contract with companies to build and manage their infrastructure.


> On top of that, mobile data is quite expensive in the US

It's not 2015 - that narrative is long dead. There are countless options for unlimited mobile data (5G, with hotspot) for $15-$20/mo.


I certainly agree about regional licensing. I think the best scenario would actually be to allocate some for WiFi and some for carriers, especially since selling licenses is a two-way door in a way that ISM isn't.

Is that because defense doesn't like them or is it because (non-wartime) defense moves on geological timescales and these are "new"?


As a compact-ish explanation: a "standard" wideband RF system in an EW or RF reconnaissance platform covers 0-18 GHz (DC up to the Ku band), or at least as much of it as possible (with Ka/mmW becoming common on new systems), and it has challenging requirements compared to a communication system. Communication systems are simpler to design, since both sides of the link cooperate and filter out a wide swath of potentially interfering signals, but a military system wants to see as many signals as possible so they can be collected or jammed.

It has not been advantageous to use an integrated RFSoC in the past given those requirements. If a company were spending millions of dollars designing a complicated front end, they might as well pick a separate ADC/DAC that maximizes the performance they cared about, rather than go with the "easy" integrated RFSoC option that might not have the absolute best performance.

Now the industry is just getting to the point that something like a direct-sampling ADC/DAC integrated into a Versal might be able to process massive bandwidths at high enough bit rates to do useful things for military applications. It may actually be worth it now, because you can push to really high data rates and the additional processing might make up for a small loss in ADC/DAC performance. Give it a couple of years for these to make it into new designs and get fielded.

So I guess the tl;dr is that it's not because defense doesn't like integrated packages; they just haven't been worth it considering the design goals. Defense does move slowly, but this is more about being able to field "military-grade" solutions that work well in challenging RF environments, and once that's possible the government will start to pay for it.


Which comparably-priced ADC/DAC ICs are pushing 6 GSPS on 8x8 channels like the $500 (actual price, not fuck-you DigiKey price) RFSoCs?


> The prime-counting function approximation tells us there are Li(x) primes less than x, which works out[5] to one prime every 354 odd integers of 1024 bits.

Rule of thumb: Want a 1024-bit prime? Try 1024 1024-bit candidates and you'll probably find one. Want a 4096-bit prime? Try 4096 4096-bit candidates and you'll probably find one.

The approximate spacing of primes around p is ln(p), so ln(2^1024) = 1024*ln(2), and ln(2)=0.693, so if you are willing to absorb the 0.693 into your rule of thumb as a safety margin you get the delightfully simple rule above. Of course, you'll still want to use a sieve to quickly reject numbers divisible by 2, 3, 5, 7, etc., which easily rejects 90% of numbers, then do a Fermat primality test on the remaining candidates (which, if you squint, is sort of like "try RSA, see if it works"), and then do a Miller-Rabin test to really smash down the probability that your candidate isn't prime. The probabilities can be made absurdly small, but it still feels a bit scandalous that the whole thing is probabilistic.
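
For the curious, here's a minimal sketch of that pipeline: random odd candidates, a small-prime sieve, then Miller-Rabin. The parameters are illustrative; a real crypto library would use a CSPRNG and more careful candidate handling.

    import random

    SMALL_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

    def miller_rabin(n, rounds=40):
        # Probabilistic primality test; error probability < 4^-rounds
        r, d = 0, n - 1
        while d % 2 == 0:        # write n-1 = d * 2^r with d odd
            r += 1
            d //= 2
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False     # found a witness: n is composite
        return True              # probably prime

    def random_prime(bits=1024):
        while True:
            # NOTE: use the secrets module, not random, for real keys
            c = random.getrandbits(bits) | (1 << (bits - 1)) | 1
            if any(c % p == 0 for p in SMALL_PRIMES):
                continue         # cheap sieve rejects most composites early
            if miller_rabin(c):
                return c

    print(random_prime(1024))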

EDIT: updated rule of thumb to reflect random candidate choice rather than sequential candidate choice.


It's been a while since I've looked at the literature on RSA prime generation, but I seem to remember that picking a random starting point and iterating until you find a prime is discouraged because primes aren't evenly distributed, so key generation timing could reveal some information about your starting point and eventual prime choice.

I'm not sure how realistic of an issue this is given the size of the primes involved. Even if an attacker can extract sensitive enough timing information to figure out exactly how many iterations were required to find a 1024-bit prime from a 1024-bit random starting point, I'm not aware of a good way to actually find either value. You do also introduce a bias, since you're more likely to select prime numbers without a close neighbor in the direction you are iterating from, but again I'm not sure how practical an attack on this bias would be.

Still, to avoid any potential risk there I seem to remember best practice being to just randomly generate numbers of the right size until you find a prime one. With the speed of modern RNGs, generating a fresh number each time vs iterating doesn't seem like a significant penalty.


Yes, excellent point! I originally omitted this detail for simplicity, but on reflection I don't think it actually achieved much in the way of simplifying the rule so I changed it to reflect reality. Thanks for pointing that out.

EDIT: the rush of people offering up sieve optimizations is pushing me back towards formulating the rule of thumb on a consecutive block of numbers, since it makes it very clear that these are not included, rather than implicitly or explicitly including some subset of them (implicit is bad because opacity, explicit is bad because complexity).


I've seen many implementations that relied on iterating (or rather, they used a prime sieve, but it amounts to the same thing in terms of the skewed distribution). While maybe not a problem in practice, I always hated it, even for embedded systems etc. I always used pure rejection sampling, with a fresh random candidate in each iteration.


it might be this https://facthacks.cr.yp.to/fermat.html

If N = p*q and p - q < sqrt(p), then it's easy to factor.
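
A minimal sketch of the idea (not the facthacks code itself): search for a and b with N = a^2 - b^2 = (a-b)(a+b), starting from a = ceil(sqrt(N)). When p and q are close, b is tiny and this terminates almost immediately.

    import math

    def fermat_factor(n):
        # Fermat's method: fast only when the two factors are close together
        a = math.isqrt(n)
        if a * a < n:
            a += 1                   # start at ceil(sqrt(n))
        while True:
            b2 = a * a - n
            b = math.isqrt(b2)
            if b * b == b2:
                return a - b, a + b  # n = (a-b)(a+b)
            a += 1

    # Toy example with deliberately close primes (far too small for real RSA)
    print(fermat_factor(1000003 * 1000033))  # (1000003, 1000033)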


Encountering this by chance is of course exceedingly unlikely if p and q are randomly generated. In probability terms it amounts to the first half of the bits of p (or q) all being zero (apart from a leading 1), so roughly 2^(-n/4) where n is the bit size of N. So for RSA-2048 the probability of this happening is on the order of 2^-512, or in base-10 terms 0.0000000...0000001, with roughly 150 zeros before the one!


> Rule of thumb: Want a 1024-bit prime? Try 1024 1024-bit candidates and you'll probably find one.

Where "probably" is 76% [1], which is not that high depending on what you are doing. For example, you wouldn't be OK with GenerateKey failing 24% of the time.

To get a better than even chance, 491 [2] 1024-bit candidates are enough.

[1]: https://www.wolframalpha.com/input?i=1+-+%281+-+li%282%5E102... (using li(x) as a slightly better approximation of π(x) than x/ln(x), see [3])

[2]: https://www.wolframalpha.com/input?i=1+-+%281+-+li%282%5E102...

[3]: https://en.wikipedia.org/wiki/Prime-counting_function
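
As a sanity check, roughly the same numbers fall out of the simpler 1/ln(x) density (a sketch; the li(x) form above is slightly more accurate):

    import math

    p = 1 / (1024 * math.log(2))    # prime density near 2^1024, ~1/710

    for k in (491, 1024):
        print(k, 1 - (1 - p) ** k)  # ~0.50 and ~0.76 respectively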


Iterating over some huge search space in an essentially sequential manner is generally not going to be nearly as performant as simply selecting an odd number at random. You could try using a generating polynomial instead, such as f(x) = x^2 + x + 41, but even that isn't going to help much in the long run. (There are Diophantine equations which may one day prove useful for generating random primes; however, AFAICT finding efficient solutions is still considered a hard problem.)
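
(That's Euler's famous prime-generating polynomial, for what it's worth: it yields primes for x = 0..39 and first fails at x = 40. A quick check, assuming sympy is available:)

    from sympy import isprime

    f = lambda x: x * x + x + 41
    print(all(isprime(f(x)) for x in range(40)))  # True: primes for x = 0..39
    print(isprime(f(40)))                         # False: f(40) = 41^2 = 1681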


Yes, but the more we mix sieve rejection into candidate selection the more we complicate the rule of thumb. "Reject even numbers as prime candidates" is probably OK to leave as an exercise for the reader, as is the equivalent "round every candidate to odd" optimization. The point about random vs sequential is well taken, though, and it doesn't complicate the rule of thumb, so I changed it.


Incrementing a bignum is faster than another PRNG cycle.


Neither is a significant fraction of the time required to reject a candidate. The cheapest rejection test is "Bignum Division by 3", and something like 2/3 of candidates will need more expensive further tests.

https://news.ycombinator.com/item?id=40093136


Oh, he only busted the Great Depression, won WWII, built half of the infrastructure that we keep kicking the expiration date on, and negotiated 80% of the beneficial fine print in your employment contract. Don't you think he could have done a bit more?

My list would be: 1. FDR, 2. Carter, 3. Teddy. Carter because he sacrificed his career to fix inflation (Republican attempts to rewrite history notwithstanding), and Teddy because he wasn't merely an excellent man with excellent politics, but also because whenever present-day Republicans try to claim the man without claiming his politics I can turn it into a teachable moment, and putting him on a list with the other two is the perfect bait.


He didn't end the depression. It clearly continued right up to WWII. You can debate how things might have been if he had been allowed all his ideas (some of which were as undemocratic as what Trump wants).


He steered us to join the war, which did end the depression.

Whether it was the New Deal or non-isolationist policy, his direction led us out of the Great Depression, which started before his presidency and ended before he died.


> won WWII

or so Hollywood would have us believe


Sure, the parts Stalin didn’t win.


Historical note: Stalin and Hitler agreed to start WW2 by invading Poland in September 1939.

So Stalin may have played a part in ending WW2, but don't forget his part in starting it.


FDR, Stalin, and Churchill all won that war. History is super messy!


Team America to the rescue! But it does seem like without FDR it would have been won for the Allies anyway.


I think this kind of counterfactual is pretty impossible to do.

Do you mean that it would have been won without any US involvement? Or do you mean the US involvement would have proceeded similarly with a different president?


From my cursory sense of things, it seems like both are probably true. The U.S. with FDR obviously contributed to it ending when it did, but so did other countries who were there earlier and sacrificed more. The U.S. seems like more of a winner in the sense that they sacrificed relatively little while getting the most out of it, but I don't know if that's a good use of the term "won" in the context of a world war.


So did Hitler. I mean, he did end up killing Hitler, that's gotta count for something!


*yet


> Stalin didn’t win

...even backed by crucial US supplies


Infrastructure was mostly built in Eisenhower's era, not FDR's. Helping Soviets during WWII was a major mistake and it can be personally attributed to FDR - a radical leftist - himself. Many people around him advised him of the dangers of helping Commies.

U.S. should have ignored Soviet-German war. Then finish Commies with nukes.


> Then finish Commies with nukes.

If they'd done that they'd go down in history as worse than the worst of communism. It was bad enough that they dropped 2 on the Japanese, which scores American civilisation a questionable footnote in the history books: "only people to use a weapon this terrible".

The problem with unprincipled aggression is that, sooner or later, other people match it. The US ended up doing much better by defeating the communists without directly fighting them - one of the few wars the US unambiguously won - and why people don't want to learn that lesson is one of the great mysteries. Victories through overwhelming prosperity are both decisive and comfortable.


But that is the point! Get rid of everyone who isn't friendly/under control and could match it. Thus achieving worldwide democracy immediately for all nations that could support it, and unlimited time to get everyone who can't prepared (with potentially unlimited violence applied to force them to). Achieve a sustainable hegemony.


I love cheap and reliable TP-Link routers as much as the next guy, but it's definitely also a security issue. The CCP almost certainly has a backdoor. Maybe a respectable one in the form of an undisclosed bug or the ability to lean on an update provider, but the point stands: it's absolutely a security issue and denying this is cope.

Routers are going to be a bit more expensive and a bit less reliable for a while. We'll live.


Probably a better approach than the futile attempt to excise all routers with backdoors or bugs would be to continue the ongoing efforts to make network security router-agnostic.


I thought Shor's algorithm could attack ECC too and the lattice crypto with the sci-fi crystal names (Kyber and Dilithium) was the response?

If I go to https://www.google.com using Chrome and Inspect > Security, I see it is using X25519Kyber768Draft00 for key exchange. X25519 is definitely ECC, and Kyber is being used for key encapsulation (per a quick google). I don't know to what extent it can be used independently, versus it being new so they are layering it up until it has earned the right to stand on its own.


It's new so they are layering it up. At https://pq.cloudflareresearch.com/ you can also see if your browser supports X25519MLKEM768, the X25519Kyber512Draft00 and X25519Kyber768Draft00 variants are deprecated ('obsolete'?)


The 50 000 000th prime is 982451653, but fun fact: you may have already memorized a prime much larger than this without even realizing it!

2^255-19

This is where Curve25519 and the associated cryptography (ed25519, x25519) get their name. Written out, 2^255-19=57896044618658097711785492504343953926634992332820282019728792003956564819949.


You could have memorized an even larger one if you are familiar with the full name of the Mersenne Twister PRNG: MT19937 is so named because its period is 2^19937-1, which is a prime number (in fact, the largest known one at the time of Zigler's writing). To my knowledge, no larger prime number has been used for a practical purpose.


Cool, I hadn't run into it before so thanks for introducing me!

I was going to include the digits for comparison, but yes, on second thought 6002 digits is probably too much for polite inclusion in a HN post.
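
(For reference, a one-liner gets that digit count:)

    print(len(str(2**19937 - 1)))  # 6002 decimal digits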


Yeah, although that's better than 19937 ones in a row.


https://oeis.org/A004023 "Indices of prime repunits: numbers k such that 11...111 (with k 1's) = (10^k - 1)/9 is prime."

OEIS says "19937 ones in a row" isn't prime, but "1031 ones in a row" is.

And "8177207 ones in a row" is at least a probable prime. (Which you can maybe remember as a seven-digit phone number, or as either BITTZOT or LOZLLIB depending on how you prefer to hold your calculator. But those mnemonics are wasted if (10^{81777207}-1)/9 turns out to be merely pseudoprime.)


On a Linux command line:

    echo '2^255-19' | bc


> There's already a panoply of CUDA alternatives

Is there?

10 years ago, I burned about 6 months of project time slogging through AMD / OpenCL bugs before realizing that I was being an absolute idiot and that the green tax was far cheaper than the time I was wasting. If you asked AMD, they would tell you that OpenCL was ready for new applications and support was right around the corner for old applications. This was incorrect on both counts. Disastrously so, if you trusted them. I learned not to trust them. Over the years, they kept making the same false promises and failing to deliver, year after year, generation after generation of grad students and HPC experts, filling the industry with once-burned-twice-shy received wisdom.

When NVDA pumped and AMD didn't, presumably AMD could no longer deny the inadequacy of their offerings and launched an effort to fix their shit. Eventually I am sure it will bear fruit. But is their shit actually fixed? Keeping in mind that they have proven time and time and time and time again that they cannot be trusted to answer this question themselves?

80% margins won't last forever, but the trust deficit that needs to be crossed first shouldn't be understated.


This is absolutely it. You pay the premium not to have to deal with the BS.

