I see a lot of people mentioning Pydantic here, but you should take a look at TypedDict. It provides a typed structure on top of a plain dictionary, sounds like exactly what you’d want, and is built in, so you don’t need an extra dependency for it.
Mypy, for example, can also see what the types of the dictionary’s keys and values are supposed to be, even though you use it just like a normal dictionary.
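A minimal sketch of the idea (the `Movie` type and its fields are just illustrative names, not anything from a real codebase):

```python
from typing import TypedDict


class Movie(TypedDict):
    """A plain dict at runtime, but mypy checks the keys and value types."""
    title: str
    year: int


def describe(movie: Movie) -> str:
    # Normal dict access; mypy knows movie["title"] is a str
    # and movie["year"] is an int.
    return f"{movie['title']} ({movie['year']})"


m: Movie = {"title": "Blade Runner", "year": 1982}
print(describe(m))      # Blade Runner (1982)
print(type(m) is dict)  # True: no wrapper class involved, just a dict
```

Passing `{"title": "Blade Runner", "year": "1982"}` would type-check as an error under mypy, while the runtime object stays an ordinary `dict`.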
I once had a VPN utility that HAD to be closed with a keyboard interrupt in order for it to shut off properly, so my systemd setup for it didn’t work. I ended up making bash aliases for tmux commands to run it and to send the interrupt signal into it to stop it. I’m sure there was a way to do this with systemd, but tmux was easy, if a bit janky.
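The aliases looked roughly like this (a sketch for a `.bashrc`; `myvpn` is a hypothetical stand-in for the actual VPN binary):

```shell
# Start the VPN in a detached tmux session named "vpn".
alias vpn-start='tmux new-session -d -s vpn "myvpn --connect"'

# Sending C-c into the session delivers SIGINT to the foreground
# process, exactly as if you pressed Ctrl-C interactively.
alias vpn-stop='tmux send-keys -t vpn C-c'

# Attach to the session to watch the output if something goes wrong.
alias vpn-log='tmux attach -t vpn'
```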
One problem I have with switching entirely to Yubikeys (or similar), with no other 2FA option, is the lack of support in embedded browsers.
I’m not entirely sure what the support for this is like on Windows or some Linux systems, but on macOS, for example, if an application authenticates with SSO or similar in an embedded browser window (Cisco AnyConnect is one example, but there are plenty of others; Zscaler did recently update their macOS client to authenticate inside a real, full browser, which is nice), almost every application I’ve come across uses a stripped-down version of WebKit that doesn’t support FIDO2 or security keys at all, so I’m forced to use some other option like an authenticator app.
This is perhaps less of a problem depending on what types of auth your IdP supports, but with Microsoft, for example, it’s either a phone call or SMS, their Authenticator app, or FIDO2.
This has improved leaps and bounds over the past year and a half.
I’ve been rocking FIDO2 with a Yubikey on macOS and iOS and it’s been solid. Support is there in web views. You can even use Yubikey-based PIV certificates now.
Not all of Microsoft’s apps have migrated to support this at the same pace, and anything still using ADAL instead of MSAL on iOS is probably going to push you down a different authentication path. Some of the older PowerShell modules don’t support FIDO2 or certificate authentication at all, but those are being rapidly deprecated.
Throughout my company’s pursuit of moving everything under the sun into AWS, I have done my best to keep everything migratable. Even so, we have some systems that would simply have to be completely rebuilt if we ever needed to move them off of AWS, because there is not a single component of those systems that doesn’t rely on some kind of vendor lock-in AWS provides.
I aim to keep everything I’m working on using the simplest services possible, essentially treating AWS like it’s Digital Ocean or Linode with a stupidly complex control panel. This way, if we need to migrate, as long as someone can hand me a Linux VM and maybe an S3-compatible interface, we can do it.
I really just have trouble believing that everyone using Kubernetes and a pile of infrastructure-as-code is truly benefiting from it. Linux sysadmin isn’t hard. Get a big server with an AMD Epyc or two and a bunch of RAM, put it in a datacenter colo, maybe do that twice for redundancy, and I almost guarantee it can take you at least close to nine figures of revenue.
If at that point it’s not enough, congratulations: you have the money to figure it out. If it’s not enough to get you to that point, perhaps you need to rethink your engineering philosophy (for example, stop putting 100 data constraints per endpoint in your Python API when you have zero Postgres utilization beyond basic tables and indexes).
If you still genuinely can’t make that setup work, then congratulations: you are in the (maybe) 10% of companies that actually need everything k8s or “cloud native” solutions offer.
I would like to note that, despite these opinions, I do realize there are problems that need the flexibility of a platform like AWS. One that comes to mind is video game servers, which need to be deployed close to a large number of geographic regions for latency reasons.
> I aim to keep everything I’m working on using the simplest services possible, essentially treating AWS like it’s Digital Ocean or Linode with a stupidly complex control panel.
What's the benefit of AWS then, if you're not using any of the managed services AWS offers, and are instead treating AWS as an (overly expensive) Digital Ocean or Linode?
Are smaller plugins like this potentially an easy place for first-time contributors to work?
My Rust skills are certainly very lacking, but if there is some big list of flake8 plugins that people want ported, that sounds like it could be a relatively easy place to get one’s feet wet.
I maintain a few open source libraries, and we recently switched from Flake8 to Ruff, exclusively because of Flake8’s lack of pyproject.toml support.
These are relatively small codebases, and the speed improvements don’t really make a difference for us, but as we tried to bring our own projects in line with current standards, flake8’s refusal to support it was the final nail in the coffin.
I think that position was reasonable when pyproject.toml first came about and parsing it was a bit more of a Wild West; I can certainly understand the unwillingness to try to support it then. But now, with Python itself having TOML support in the standard library, every other major tool standardizing on it, and TOML being the de facto config format for Rust projects, it just seems like a bizarre hill to die on.
It’s not as if the work hasn’t been done: there are forks of flake8 with pyproject.toml support, but as far as I’m aware, multiple PRs adding it have been rejected.
I used to be really involved in RuneScape private server development in the early-to-mid 2010s. It is to this day some of the most enjoyable programming I’ve done. The challenge of reverse engineering an obfuscated client and building a server framework on top of what you learn works almost like a puzzle game.
There were a lot of really talented developers in the space, but unfortunately the predatory donation systems and the toxic culture between different servers/projects generally drove the good developers away from the community (or got them outright banned from community forums for no sane reason). It’s really nice to see a project in this space building a strong community with good open source culture.
I didn’t see it mentioned anywhere on the website, but I’m curious whether Jagex has given them any kind of permission to distribute the copyrighted assets/client. Jagex may not care, since the project focuses on very old versions that they don’t even have themselves or intend to ever profit from, but it would be cool to see them actually endorse a community project like this.
I've always wondered how private servers were developed. How was the netcode for private servers developed without any documentation? Were there source code leaks that helped?
My favorite RSPS was 2speced. It was probably one of the most famous 317 servers. I still remember the forum names of the developers, Tyler and Blurr. I sometimes wonder what they do now. The RSPS scene was awesome.
I work on a Python game engine called Arcade[1] and other projects within its GitHub organization, such as pytiled-parser. We also help to drive continued development and improvement within Pyglet[2]. Recently, my efforts have been focused on creating a version that can run in web browsers using Pyodide and WebGL[3], though that is still at a fairly early stage.
Arcade's primary focus is on being an educational tool for beginner programmers, so my hope is that with browser compatibility we can lower the barrier to entry further and make it more accessible and easier to get started with. In a similar vein, we've recently enabled full compatibility with the Raspberry Pi through the use of OpenGL ES (and this was largely only possible thanks to the huge amount of work everyone involved in the Mesa project puts in).
I'm not the original author of Arcade, but I am a current maintainer and put a substantial amount of time into it and its community.
I used a Framework for a while, and I’d definitely give it my vote for the best Linux laptop I’ve used (also, Pop!_OS has the best out-of-the-box support for most laptops in my experience).
That being said, I traded my Framework for an M1 MacBook Pro, and I just use macOS now. The reality is that my macOS environment is functionally identical to Linux (and this includes very heavy use of Docker; I don’t know what anyone complaining about Docker on the M1 is on about), and from a hardware perspective the M1/M2 MacBooks just absolutely stomp every competitor for me. This is especially true if you care about battery life.
Disclaimer: I do not daily-drive a laptop, and I only use one when traveling or otherwise unable to use my desktop.
I'm a huge Linux fan and currently run it on a Dell Latitude.
The company where I consult gave me a MacBook, so my work is on that device.
The difference in hardware is just night and day: sound, touchpad, battery life, and so on. I don't get why no other hardware company can even come close to what Apple is offering.
Anyway, when I need to replace my own laptop, chances are very high that it will be a MacBook. Although I still like my Linux Mint way more than macOS, the difference between those two seems smaller than the hardware difference between a MacBook and anything else out there.
I’ve got a colleague who has been daily-driving one since he joined our company, and he’s been really enjoying it as a developer machine. It looks super sleek, and he most recently upgraded to the 12th-gen Intel mainboard for a significant speed bump, reusing the previous guts for a side project.
If you are a tinkerer, this laptop is for you, and the possibilities are endless.
My experience with Docker sucking on an M1 was based entirely on the images not being native to ARM, which completely destroyed the laptop's performance.
I have recently changed teams, though; my new setup doesn't require non-native images, and now it's fine again.
Sorry for the late reply. I generally use ARM-based images, and I have run into VERY few things where availability was a problem. I’ve never seen anything based on Debian or Alpine without ARM variants. The only base image I’ve really run into problems with was CentOS, and I only had one thing that used it, which I switched over to Debian; that hasn’t caused any problems for me.
I can definitely see mileage varying depending on the specific things you need to run, but I was never really concerned with it, because I usually have enough control over the images I use that I can switch something to a different base if need be.
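If you’re unsure whether a base image ships an ARM variant, a couple of commands give a quick answer (sketch only; the image name is just an example):

```shell
# List the architectures an image is published for; the output includes
# entries like linux/amd64 and linux/arm64 when both exist.
docker manifest inspect debian:bookworm-slim | grep architecture

# Pin the platform explicitly to confirm you're getting a native arm64
# image rather than a slow, emulated amd64 one on an M1.
docker run --rm --platform linux/arm64 debian:bookworm-slim uname -m
```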