You mean AMD's unified architecture. They were a founder of the HSA Foundation that drove innovation in this space complete with Linux kernel investments and unified compute SDKs, and they had the first shipping hardware support.
AMD's actual commitment to open innovation over the past ~20 years has been game-changing in a lot of segments. It is the aspect of AMD that makes it so much more appealing than Intel from a hacker/consumer perspective.
Sharing in case it's of value to any HNers: this little app helped get subscriptions online for a hobby project and has now been open sourced (MIT). There are bigger established alternatives, and as explained in the readme, this one is designed to be encapsulated, idiomatic, and end-user privacy-preserving.
Seconding this recommendation! I've never been a super advanced vim user, but Evil is the best and most complete vim emulator I've ever used, and I try them on every editor or IDE I ever run.
Seems some of these are for Vim too, but I haven’t tried them yet:
https://github.com/jkitching/awesome-vim-llm-plugins
Scanning the list quickly, dense-analysis/neural perhaps sticks out since it’s written by the author of ALE, which is a very high-quality plugin.
Another option that is perhaps more the Unix way, is to run an LLM client in a terminal split (there are lots of CLI clients), and then use vim-slime to send code and text to that split.
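For what it's worth, vim-slime's tmux target (as I understand it) just shells out to tmux, loading your selection into a paste buffer and pasting it into the target pane. A rough sketch of that mechanism in Python, where the pane id and file name are placeholders:

```python
import subprocess

def build_send_commands(pane: str):
    """Build the two tmux invocations used to paste text into a pane:
    fill the tmux paste buffer from stdin, then paste it into the target."""
    load_cmd = ["tmux", "load-buffer", "-"]
    paste_cmd = ["tmux", "paste-buffer", "-d", "-t", pane]
    return load_cmd, paste_cmd

def send_to_pane(pane: str, text: str) -> None:
    """Send `text` to the given tmux pane (e.g. one running a CLI LLM client)."""
    load_cmd, paste_cmd = build_send_commands(pane)
    subprocess.run(load_cmd, input=text.encode(), check=True)  # fill paste buffer
    subprocess.run(paste_cmd, check=True)  # paste into the LLM client's pane

# Example (requires a running tmux session; pane spec is hypothetical):
# send_to_pane("{right}", "Explain this function:\n" + open("foo.py").read())
```

The nice part of this approach is that the editor stays dumb: any REPL-like client in the split works, no plugin support needed on the LLM side.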
Personally I’m still using ChatGPT in the browser and mobile app. I would love to try something else, but the OpenAI API key costs extra, and something like llama would probably take time to set up right.
That Media Realm site is mine. Thanks for linking! RDS is one of those technologies that’s been around for decades and still amazes people. You can buy hardware encoders for about AU$500 these days, and I love introducing stations to it and getting their name and song data to show up on car radios.
She was aware, but the talk is a walkthrough of how easy hacking can be when you don't give up and stay interested enough to find out more. I really recommend the talk and her blog.
These are great. Having tooling to ship fast and as safely as possible lets you get to iterating in the open.
Here’s a bash script I posted a while back on a different thread that does a similar thing, if it’s of interest to anyone. It’s probably less polished than the OP’s — for example, it only works with DigitalOcean (which is great!) — but it’s simple, small, and mostly readable. It also assumes Docker, though all via Compose, with some samples like nginx with auto-SSL via Let's Encrypt.
IMHO there is hype and a bubble, yes, but not at the core: at the core is AGI in 5 years and significant change. The bubble/hype is coming from where it always seems to come from: the non-core camp.
I'd recommend trying it. It takes a few tries to get the correct input parameters, and I've noticed anything approaching 4× scale tends to add unwanted hallucinations.
For example, I had a picture of a bear I made with Midjourney. At a scale of 2×, it looked great. At a scale of 4×, it added bear faces into the fur. It also tends to turn human faces into completely different people if they start too small.
When it works, though, it really works. The detail it adds can be incredibly realistic.
That magnific.ai thingy is taking a lot of liberties with the images, and denaturing them.
Their example with the cake is the most obvious. To me, the original image shows a delicious cake, and the modified one shows a cake that I would rather not eat...
Every single one of their before & after photos looks worse in the after.
The cartoons & illustrations lose all of their gradations in feeling & tone with every outline a harsh edge. The landscapes lose any sense of lushness and atmosphere, instead taking a high-clarity HDR look. Faces have blemishes inserted the original actor never had. Fruit is replaced with wax imitation.
As an artist, I would never run any of my art through anything like this.
Look for super-resolution. These models will typically come as a GAN, Normalizing Flow (or Score, NODE), or more recently Diffusion (or SNODE) — or some combination. The one you want will depend on your computational resources, how lossy you are willing to be, and your image domain (if you're unwilling to tune). Real time (>60fps) is typically going to be a GAN or flow.
Make sure to test the models before you deploy. Nothing will be lossless doing super-resolution, but flows can get you lossless compression.
I haven't explored the current SOTA recently, but super-resolution has been pretty good for a lot of tasks for a few years at least. Probably just start with Hugging Face [0] and try a few out, especially diffusion-based models.
This is called super resolution (SR). 2x SR is pretty safe and easy (so every pixel in becomes 2x2 out; in your example, 800x600 -> 1600x1200). Higher scale factors are a lot harder and prone to hallucination, weird texturing, etc.
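To make the "every pixel in becomes 2x2 out" arithmetic concrete, here's a naive (non-ML) 2x nearest-neighbor upscale in plain Python — this duplication is the trivial baseline that learned SR models improve on by synthesizing plausible detail instead:

```python
def upscale_2x(image):
    """Nearest-neighbor 2x upscale: each input pixel becomes a 2x2 block.
    `image` is a list of rows, each row a list of pixel values."""
    out = []
    for row in image:
        doubled = [px for px in row for _ in range(2)]  # duplicate each column
        out.append(doubled)
        out.append(list(doubled))  # duplicate the whole row
    return out

img = [[1, 2],
       [3, 4]]
print(upscale_2x(img))
# 2x2 in -> 4x4 out; an 800x600 image becomes 1600x1200 the same way
```

Where a learned model differs is that it invents the new pixel values rather than copying them — which is exactly where the hallucination risk at higher scale factors comes from.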