I haven't watched it recently, but here are the main takeaways I remember:
Peltier coolers are neat because they're very small and quiet - as opposed to vapor-compression systems. However, they are an order of magnitude less energy efficient.
Also Peltier coolers still have to obey the laws of thermodynamics, which means that to cool one side of the mechanism, you must heat the other side. In order to do any substantial cooling, you need a way to dispose of that heat on the other side. This usually involves the use of radiators and fans, which negate much of the size and noise benefits.
As a result, Peltier coolers are pretty niche. Your use case would have to require only a little bit of cooling. You'd have to need a form factor that cannot accommodate a vapor-compression solution. And you'd have to be willing to make the system very energy inefficient.
AFAIK, no one has tried to build a Peltier cell paired with a heat pump. I am not an expert, but I would imagine that it's a path that could bring higher efficiencies. Thoughts?
Also not an expert, but I’m struggling to find a combination where one of those couldn’t be replaced with a passive thermoconductive element. It’s hard to beat the efficiency of “free”.
I think there are some applications though. I remember PWMing a Peltier element to cool something to more or less exactly 35°C. The cooling didn't need to be efficient, it just needed to be reliable under space constraints.
I'm no hardware guy, but I remember there was still a giant heatsink despite the constraints. It was some kind of photosensor + lamp if I remember correctly.
I think there was also some software logic to avoid water condensing by not letting things get too cool compared to the ambient temperature - roughly the kind of guard in the sketch below.
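For concreteness, here's a hypothetical sketch of that kind of loop in Python: PWM the element toward 35°C, but clamp the setpoint so it never sits far below ambient, which keeps condensation down. The I/O helpers are made-up placeholders, not any real driver API.

    import time

    # Made-up placeholder I/O for illustration; a real build would talk to the
    # temperature sensor and PWM peripheral instead.
    def read_ambient_c() -> float: return 25.0
    def read_object_c() -> float: return 38.0
    def set_pwm_duty(duty: float) -> None: print(f"duty={duty:.2f}")

    TARGET_C = 35.0
    MAX_BELOW_AMBIENT = 5.0   # never chase a setpoint more than 5°C under ambient
    KP = 0.15                 # proportional gain; would need tuning on real hardware

    def control_step() -> None:
        ambient = read_ambient_c()
        setpoint = max(TARGET_C, ambient - MAX_BELOW_AMBIENT)  # condensation guard
        error = read_object_c() - setpoint       # positive when too warm
        duty = min(max(KP * error, 0.0), 1.0)    # clamp to a 0..1 duty cycle
        set_pwm_duty(duty)

    for _ in range(10):       # runs forever on the device; bounded here so it terminates
        control_step()
        time.sleep(0.5)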
• Fundamental Design and Coefficient of Performance (COP): Peltier elements are technically heat pumps. However, their efficiency, measured by the Coefficient of Performance (COP), is inherently low - "somewhere between zero and very bad" in practical applications. In theory, with perfect conditions and very low current, a COP between 1 and 2 might be achieved, but this generates hardly any cooling. In contrast, vapor-compression heat pumps used in traditional refrigerators can achieve a COP of 3 or more, meaning they can move significantly more heat energy than they consume in power.
• Impact of Temperature Difference: The efficiency of a Peltier element varies wildly depending on the temperature difference it is working against. The materials used to construct a Peltier element conduct heat even when not running. The greater the temperature difference between the hot and cold sides, the more heat energy leaks back through the element itself, dramatically reducing its efficiency.
• Internal Heat Generation from Current: The more current you attempt to push through a Peltier element, the more heat it generates within itself, which further diminishes its cooling performance.
• Comparison to Traditional Refrigeration: A small personal refrigerator using a Peltier element was found to consume about 55 watts of power constantly. This is significantly more energy than a standard mini-fridge, which might average only 21.7 watts to keep items cold over three hours, because its compressor doesn't run all the time thanks to a thermostat. Even a much larger refrigerator, holding over 50 times more items, uses slightly less energy annually than the Peltier-based toy fridge. (A quick back-of-the-envelope comparison of those numbers is sketched below.)
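For scale, a quick back-of-the-envelope in Python using the wattages quoted above, assuming those averages held year-round (the 30 W of heat moved in the COP line is a made-up figure purely to illustrate the formula):

    HOURS_PER_YEAR = 24 * 365

    def annual_kwh(avg_watts: float) -> float:
        return avg_watts * HOURS_PER_YEAR / 1000

    print(annual_kwh(55.0))    # Peltier toy fridge run constantly: ~482 kWh/year
    print(annual_kwh(21.7))    # compressor mini-fridge average:    ~190 kWh/year

    # COP is just heat moved divided by electrical power drawn. If the Peltier rig
    # moved, say, 30 W of heat for its 55 W draw (illustrative number only):
    print(30 / 55)             # ~0.55, i.e. "somewhere between zero and very bad"
    # versus a COP of 3+ for a decent vapor-compression system.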
I know some people who are experimenting with using shorter certificates, i.e. shorter certificate chains, to reduce traffic. If you're a large enough site, then you can save a ton of traffic every day.
With half of the web using Let's Encrypt certificates, I think it's pretty safe to assume the intermediates are in most users' caches. If you get charged out the ass for network bandwidth (e.g. you use Amazon/GCP/Azure) then you may be able to get away with shortened chains as long as you use a common CA setup. It's a hell of a footgun and will be a massive pain to debug, but it's possible as a traffic shaving measure if you don't care about serving clients that have just installed a new copy of their OS.
There are other ways you can try to optimise the certificate chain, though. For instance, you can pick a CA that uses ECC rather than RSA to make use of the much shorter key sizes. Entrust has one, I believe. Even if the root CA has an RSA key, they may still have ECC intermediates you can use.
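If you want to see what a given site's leaf certificate actually uses, here's a rough sketch with Python's ssl module plus the third-party cryptography package (it only inspects the leaf, not the intermediates; example.com is just a stand-in host):

    import ssl
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import ec, rsa

    # Fetch the server's leaf certificate as PEM and parse it.
    pem = ssl.get_server_certificate(("example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    key = cert.public_key()
    if isinstance(key, ec.EllipticCurvePublicKey):
        print("ECC", key.curve.name, key.key_size, "bits")   # e.g. secp256r1, 256 bits
    elif isinstance(key, rsa.RSAPublicKey):
        print("RSA", key.key_size, "bits")                   # typically 2048 or 4096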
The issue with the lack of intermediates in the cert isn't browsers (they'll just deal with it). Sure, if they aren't already in the cache then there's a small hit the first time. The problem is that if your SSL endpoint is accessed by any programming language (for example, you offer an image URL to a B2B system to download so they can perform image resizing for you, or somesuch) then there's a chance the underlying platform doesn't automatically do AIA chasing. Python is one such system I'm aware of, but there are others that will be forced to work around this for no net benefit. (A quick way to test a server for this from Python is sketched below.)
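Something like this is enough to check whether a server's chain resolves for a client that does no AIA fetching (which is Python's default behaviour); a sketch, assuming the usual system trust store and using example.com as a placeholder host:

    import socket, ssl

    def chain_complete(host: str, port: int = 443) -> bool:
        ctx = ssl.create_default_context()   # system roots, no AIA chasing
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    return True              # handshake verified; chain was resolvable
        except ssl.SSLCertVerificationError:
            # Typically "unable to get local issuer certificate" when the server
            # omits an intermediate and the client won't go fetch it itself.
            return False

    print(chain_complete("example.com"))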
That is a really good point. Google's certificate service can issue a certificate signed directly by Google, but not even Google themselves are using it. They use the one that's cross-signed by GlobalSign (I think).
But yes, ensure that you're serving the entire chain, but keep the chain as short as possible.
This hasn’t been the case since TLS 1.3 (over 5 years ago), which reduced the handshake to 1-RTT - or 0-RTT when keys are known (cached or preshared). Same with QUIC.
Good to know, however "when the keys are known" refers to a second visit (or request) of the site, right? That isn’t helpful for the first data packets - at least that’s what I understand from the site.
Without cached data from a previous visit, 1-RTT mode works even if you've never visited the site before (https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/#1-rtt-mode). It can fall back to 2-RTT if something funky happens, but that shouldn't happen in most cases.
0-RTT works after the first handshake, but enabling it allows for some forms of replay attacks so that may not be something you want to use for anything hosting an API unless you've designed your API around it.
I have been developing Lua-heavy embedded products as a freelancer for about 20 years now, including VoIP devices, home automation controllers, industrial routers, digital video recorders, and more. These systems typically consist of a Linux kernel, some libc implementation, the Lua interpreter, and a few third-party support libs to help build the app. The Lua apps range from 30k to 100k lines of code, depending on the application. Some of these devices can be considered 'small' in 2025 terms: 8MB of flash, 64MB of RAM. Lua is doing great here.
All of these products are still alive today, actively supported and making my customers good money.
Some things come very naturally to Lua: Lua <=> C interfacing is a breeze, and while some modern languages are still struggling to figure out how to do proper async, Lua has been able to do this for decades. The language itself is minimal and simple but surprisingly powerful - a few smart constructs like coroutines, closures and metatables allow for a lot of different paradigms.
For new projects at this scale, I would still choose Lua + C/C++ as my stack. Over the last few years I have been visiting other ecosystems to see what I'm missing out on (Elixir, Rust, Nim), and while I learned to love all of those, I found none of them as powerful, low-friction and flexible as Lua.
I am currently working on an embedded system with 264KB of RAM and 4MB of flash. Do you think Lua could be used in such limited settings? I am also considering the Berry scripting language [0].
Assuming your flash allows XIP (execute in place), so all of that RAM is available for your Lua interpreter's data, you should at least be able to run some code, but don't expect to run any heavy full applications on that. I don't know Berry, but it sounds like a better fit for the scale of your device.
But sure, why not give it a try: Lua is usually easy to port to whatever platform, so just spin it up and see how it works for you!
I haven't worked on a system that limited (not even OpenWRT routers) since a dev board in college.
The experience I had there might be your best bet for something productive. That board came with a 'limited C-like compiler' (took a mostly complete subset of C syntax and transcribed it to ASM).
You'll probably be doing a lot of things like executing in place from ROM, and strictly managing stack and scratch pad use.
The 64MB of RAM and 8MB (I assume that's 64Mbit) of ROM allow for highly liberating things like compressed executable code copied to faster RAM, modify in place code, and enough spare RAM otherwise to use scripting languages and large buffers for work as desired.
It's more than generous. You can run it with much less resource utilisation than this. It only needs a few tens of kilobytes of flash (and you can cut it right back if you drop bits you don't need in the library code). 32 KiB is in the ballpark of what you need. As for RAM, the amount you need depends upon what your application requires, but it can be as little as 4-8 KiB, with needs growing as you add more library code and application logic and data.
If you compare this with what MicroPython uses, MicroPython's requirements are well over an order of magnitude larger.
This article is partly wrong, and partly nonsense. Apples and oranges. But still:
> Work with only 3 files [...] Boot in under 5 seconds [...] Use commands without spaces [...] Run CPU opcodes natively [...] Be real
I have a little PCB here on my desk that runs linux with 2 files: vmlinux and busybox. It boots in about two seconds and yes, it runs CPU opcodes natively.
I'm not sure how being able to use commands without spaces or running in real mode is considered better or worse than the alternatives.
On the other hand, when Linux was new (I switched to Linux in early 1992), you could install it all on a floppy and boot in a few seconds. Not much difference really. Just way more flexible with a Linux floppy than a DOS one. I kept a Linux floppy around for doing various stuff with problematic PCs.
It's way more important (or was, at the time) that MS-DOS could run on 8088/8086 and '286, unlike regular Linux.
Not sure what you mean by practically usable Linux kernel. It was perfectly possible to use the < 1.0 kernels on a floppy, with enough tools to do useful work.
I'm afraid it would: in 2002 I was involved with the development of a very early wifi AP implementation at Freehosting; this was running uclinux on an ARM7 with a pretty bare kernel and the whole OS fitting in under a megabyte. Booting was already pretty much instantaneous then.
But this is an artifact of a distant past, a small glimpse of the Geocities era. We would write about whatever interested us, enjoying the whole process. Look at the Lisp "article":
just a few notes with the conclusion "Well, that's enough LISPing for now. I may add more to this page if I actually ever learn anything else about LISP."
For me this submission isn't so much about the content, rather about how different the world was back then.
Well, the whole point is that it isn't a [censored] snake on a [censored] plane, and I think the title is already a pretty clear allusion to SoaP as it is.
So, what will be the proper technology to apply here? I have no problem with verification of my age (not the date of birth, just the boolean, >18yo), but I do have a problem with sending any party a picture of my face or my passport.
Discord got me to do this about 2 weeks ago (I'm Australian so they seem to be rolling this out here too). At least for the face scan, the privacy policy said it occurred on device, so if you believe that, you're not sending anyone images of your face.
we don't store your face [just the unique biometric metadata weights]. a computer doesn't need a picture to identify you, just store the numbers and you can legally claim you aren't "storing the picture".
Maybe someone like apple will make a "verify user looks over 18" neural net model they can run in the secure enclave of iphones, which sends some kind of "age verified by apple" token to websites without disclosing your identity outside your own device?
Having said that, I bet such a mechanism will prove easy to fake (if only by pointing the phone at grandad), and therefore be disallowed by governments in short order in favour of something that doesn't protect the user as much.
Apple lets you add IDs to your wallet in some jurisdictions. I wouldn't be surprised if they eventually introduce a system-wide age verification service and let developers piggyback on it with safe, privacy-preserving assertions.
This is a social problem and as such cannot be solved with technology. You would have to make social media so uncool that young people didn't use it. One of the easiest ways of doing this is associating it with old people. Therefore the fastest way to get young people off Discord is to get geriatrics onto Discord, en masse.
The issue isn't that social media is bad, the issue is that social media has no effective moderation. If an adult is hanging out at the park talking to minors, that's easy to spot and correct. There is a strong social pressure to not let that happen.
The problem is that when moving to chat, not only is a mobile private to the child, there are no safe mechanisms to allow parents to "spot the nonce". Moreover, the kid has no real way of knowing the people they're talking to are adults until it's too late.
It's a difficult problem: doing nothing is going to ruin a generation (or already has), and doing it half-arsed is going to undermine privacy and not solve the problem.
OIDC4VCI (OpenID for Verifiable Credential Issuance) [0] is what I think has the most promise.
My understanding is that an issuer can issue a Credential that asserts the claims you make (e.g., that you are over 18) to another entity/website, and that entity can verify the claims you present to them (Verifiable Credentials).
For example, if we can get banks - who already know our full identity - to become Credential Issuers, then we can use bank-provided Credentials (that assert we are over 18) to present to websites and services that require age verification WITHOUT having to give them all of our personal information. As long as the site or service trusts that Issuer.
It doesn't have to be your bank if you don't want; have the DMV be an issuer, or your car insurance, or health insurance, or cell phone service, etc.
You choose which one you want to have assert your claim. They already know you. It's a better option than giving every random website or service all of your info and biometric data so you can 'like' memes or bother random people with DMs or whatever people do on those types of social media platforms. (A toy sketch of the shape of that trust relationship is below.)
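To make the shape of it concrete, here's a deliberately simplified, hypothetical sketch in Python (using the third-party cryptography package). It is not OIDC4VCI's actual wire format - it just shows the idea that the issuer signs a minimal claim and the site only ever sees "over_18", not your identity:

    import base64, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The issuer (e.g. a bank) holds a signing key; verifiers trust its public key.
    issuer_key = Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()

    def issue_credential(over_18: bool) -> str:
        claim = json.dumps({"over_18": over_18}, sort_keys=True).encode()
        sig = issuer_key.sign(claim)
        return (base64.urlsafe_b64encode(claim).decode() + "." +
                base64.urlsafe_b64encode(sig).decode())

    def verify_credential(token: str) -> dict:
        claim_b64, sig_b64 = token.split(".")
        claim = base64.urlsafe_b64decode(claim_b64)
        issuer_pub.verify(base64.urlsafe_b64decode(sig_b64), claim)  # raises if forged
        return json.loads(claim)

    token = issue_credential(True)     # the holder gets this from the issuer, once
    print(verify_credential(token))    # the site learns only {'over_18': True}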
For Australia (which will need something like this this year, per current legislation), the only sensible location is the government's central my.gov.au service portal. None of the other services have an incentive or requirement to do it (Medicare, drivers license issuers, Centrelink). And given the scope of the rollout (all major social media, as nominated by the gov), it would need almost all of the banks or super funds to implement the same API for the project to not fail.
But I don't think anyone has told my.gov.au that this needs to happen, so we are either going to get some proprietary solution from the social media companies (tricky, since they will need to defend it in court as they are liable, but maybe Discord saying 'best we can do, sorry' or 'better than our competitors' will let them off), or just switching off the services for a few days until the politicians panic about the blowback and defer the rollout until some committee can come up with a workable solution (ideally in the next election cycle).
> It doesn't have to be your bank if you don't want,
"If I don't want"? I would get no choice at all about who it would be, because in practice the Web site (or whoever could put pressure on the Web site) would have all of the control over which issuers were or were not acceptable. Don't pretend that actual users would have any meaningful control over anything.
The sites, even as a (almost certainly captured and corrupt) consortium, wouldn't do the work to accept just any potentially trustworthy issuer. In fact they probably wouldn't even do the work to keep track of all the national governments that might issue such credentials. Nor would you get all national governments, all banks, all insurance companies, all cell phone carriers, all neighborhood busybodies, or all of any sufficiently large class of potentially "trustable" issuers to agree to become issuers. At least not without their attaching a whole bunch of unacceptable strings to the deal. What's in it for them, exactly?
Coordinating on certifying authorities is the fatal adoption problem for all systems like that. Even the X.509 CA infrastructure we have only exists because (a) it was set up when there were a lot fewer vested interests, and (b) it's very low effort, because it doesn't actually verify any facts at all about the certificate holder. The idea that you could get around that adoption problem while simultaneously preserving anything like privacy is just silly.
Furthermore, unless you use an attestation protocol that's zero-knowledge in the identity of the certifier - which OpenID is unlikely ever to specify, and which neither issuers nor relying parties are going to adopt this side of the heat death of the Universe - you as a user are still always giving up some information about your association with something.
Worse, even if you could in fact get such a system adopted, it would be a bad thing. Even if it worked. Even if it were totally zero-knowledge. Infrastructure built for "of adult age" verification will get applied to services that actively should not have such verification. Even more certainly, it will be extended and used to discriminate on plenty of other characteristics. That discrimination will be imposed on services by governments and other pressuring entities, regardless of their own views about who they want to exclude.
And some of it will be discrimination you will think is wrong.
It's not a good idea to go around building infrastructure like that even if you can get it adopted and even if it's done "right". Which again no non-zero-knowledge system can claim to be anyway.
Counterproposal: "those types of social media platforms" get zero information about me other than the username I use to log in, which may or may not resemble the username I use anywhere else. Same for every other user. The false "need" to do age verification gets thrown on the trash heap where it belongs.
> Don't pretend that actual users would have any meaningful control over anything.
You do have control, you just don't like the option of control you have, which is to forgo those social/porn sites altogether. You want to dictate to businesses and governments how to run the businesses and laws of the countries whose services you want to use. And you can, sometimes, if you get a large enough group to forgo their services over their policies, or to vote in the right people for your cause. You can also wail about it til the cows come home, or you can try to find working solutions that BOTH guard privacy and allow a business to keep providing services by complying with the laws that allow it to be in business in the first place. It's not black & white and it's not instant; it's incremental steps, it's slow, and it sometimes requires the minor compromise that comes with being an Adult and finding Adult solutions. I'm not interested in dreaming about some fantasy of a libertarian Seasteading world. Been there, done that, got the t-shirt. I prefer finding solutions in the real world now.
> The false "need" to do age verification gets thrown on the trash heap where it belongs.
This is something you should send to your government that makes those rules. The businesses (that want to stay in compliance) follow the government rules given to them. The ones that ask for more are not forcing you against your will to be a part of it.
I get you don't like it, I don't care for it either; but again, you can throw a fit and pout about it - or try to find workable solutions. This is what I choose to do, even though I made the choice long ago not to use social media (except for this site, and GitHub for work, if you want to count those), porn sites, gambling, or other nonsense. So all these things don't affect me, since I don't go around signing up for or caring about all the time-wasting brain rot (imo). But I am interested in solutions, because I care about data privacy.
Those businesses also have control. They just don't like the option of control they have, which is to stay out of those countries altogether.
> This is something you should send to your government that makes those rules.
My government hasn't made those rules, at least not yet. Last time they tried, I joined the crowd yelling at them about it. It's easier to do that if people aren't giving them technology they can pretend solves the fundamental problems with what they're doing.
> Those businesses also have control. They just don't like the option of control they have, which is to stay out of those countries altogether.
Yes. ?
Apparently they don't want to leave and are happy staying there and complying. If you don't like a business's practices, don't use them...
> Last time they tried, I joined the crowd yelling at them about it.
Good. I hope more people that feel as strongly about the subject as you will follow your lead.
> It's easier to do that if people aren't giving them technology they can pretend solves the fundamental problems with what they're doing.
No one is "giving" them technology that pretends anything. There is a community effort to come up with privacy focused, secure solutions. If you noticed the OIDC4VC protocols are still in the draft phase. If it's fubar no one will use it. Worse than that is, if nothing comes of any proposed solutions, the state won't just say oh well you tried.
Either we will continue to deal with the current solution of businesses collecting our IDs and biometrics, each one keeping a DB of this info to sell/have stolen, or some consultant that golfs with some gov official will tell them the tech industry can't figure it out but that they have a magic solution that's even better, and will build a system (using tax dollars) that uses government IDs with the added bonus of tracking - and then all of our internet usage can be tracked by the government.
Wantonly dismissing any effort to make things better in an acceptable way is not going to make it magically go away forever. That ship has sailed. You can resist efforts to find a privacy-focused solution and get stuck with an even worse one from the state, or get your crowd-yelling hat back on and help make sure data and privacy protections are solidly baked into the solutions the tech community is trying to build.
I might be mistaken, but I don't see how this is novel. As far as I know, this has been a proven DSP technique for ages, although it is usually only applied when a small number of distinct frequencies need to be detected - for example DTMF detection, classically done with the Goertzel algorithm (sketched below).
When the number of frequencies/bins grows, it is computationally much cheaper to use the well known FFT algorithm instead, at the price of needing to handle input data by blocks instead of "streaming".
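For reference, a minimal Goertzel sketch in Python: it computes the power at a single frequency bin with a small per-sample recurrence, which is why it's a good fit when you only care about a handful of tones. The 8 kHz rate and 205-sample block are the usual DTMF choices.

    import math

    def goertzel_power(samples, sample_rate, freq):
        # Squared magnitude of the DFT bin nearest `freq`, via the Goertzel recurrence.
        n = len(samples)
        k = round(n * freq / sample_rate)
        coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
        s1 = s2 = 0.0
        for x in samples:                      # tiny recurrence per sample, streamable
            s1, s2 = x + coeff * s1 - s2, s1
        return s1 * s1 + s2 * s2 - coeff * s1 * s2

    fs, n = 8000, 205                          # standard DTMF detection parameters
    tone = [math.sin(2 * math.pi * 770 * t / fs) + math.sin(2 * math.pi * 1336 * t / fs)
            for t in range(n)]                 # DTMF "5" = 770 Hz + 1336 Hz
    for f in (697, 770, 852, 941, 1209, 1336, 1477, 1633):
        print(f, round(goertzel_power(tone, fs, f)))   # 770 and 1336 stand out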
The difference from the FFT is that this is a multiresolution technique, like the constant-Q transform. And, unlike the CQT (which is noncausal), this provides a better match to the actual behavior of our ears (by being causal). It's also "fast" in the sense of the FFT (which the CQT is not).
There is the multiresolution FFT, and other forms of the FFT based around sliding windows / SFFT techniques. The CQT can also be implemented extremely quickly, utilising FFTs and kernels or other methods, like in the librosa library (dubbed pseudo-CQT).
I'm also not sure how this is causal? It has a weighted time window (biasing the more recent sound), which is fairly novel, but I wouldn't call that causal.
This is not to say I don't think this is cool; it certainly looks better than existing techniques like synchrosqueezing for pushing the limit of the Heisenberg uncertainty principle (technically, given ideal conditions, synchrosqueezing can outperform the principle, but only for a specific subset of signals).
“we’re back where we started four years ago, hard coding everything, except now in a much crappier language.”
Not sure if I agree with this. A properly designed DSL has the advantage of being much closer to the domain of the problem it is supposed to solve. Your code written in the DSL might end up as a 'hard-coded' part of the application, but it likely conveys much more meaning in much less code, because it is tailored to the core functionality of the application.
Design a DSL. But instead of implementing it, implement the same abstractions in the functions (or classes or whatever) of your code. Effectively, you are implementing the DSL without the parser and AST.
When you chain these functions together into business logic they will be just as readable as the DSL would have been. But you still get an IDE with code completion, debugging, etc. (A toy sketch of what that can look like is below.)
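As a toy illustration (Python, with an invented little pricing "domain" - none of these names come from the article): the abstractions a parsed DSL would have exposed become plain functions, and the "program" is just ordinary code the IDE and debugger understand.

    from dataclasses import dataclass

    @dataclass
    class Order:
        total: float
        country: str

    # Each "DSL word" is just a function returning an Order -> Order step.
    def discount(pct):
        return lambda o: Order(o.total * (1 - pct / 100), o.country)

    def add_vat(rate_by_country):
        return lambda o: Order(o.total * (1 + rate_by_country.get(o.country, 0.0)), o.country)

    def pipeline(*steps):
        def run(order):
            for step in steps:
                order = step(order)
            return order
        return run

    # Reads almost like the DSL would have, but it's plain code:
    checkout = pipeline(
        discount(10),
        add_vat({"NL": 0.21, "DE": 0.19}),
    )

    print(checkout(Order(total=100.0, country="NL")).total)   # 100 -> 90 -> 108.9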