At the speeds AI is moving, we've effectively used it all; the high-quality data you need to make smarter models is coming in at a trickle. We're not getting 10^5 Principia Mathematicas published every day. Maybe I just don't have the vision to understand it, but it seems like AI-generated synthetic data for training shouldn't be able to produce a smarter model than whatever generated that data. I can imagine synthetic data being useful for making models more efficient (that's what distilled models are, after all), but not for pushing the frontier.
To be fair to good JS backend devs, my view is biased by the fact that my team and I do Python and Go work, and the only time we interact with JS devs is when there's frontend to be written. Some frontend devs are good; most are poorly trained, and those are the ones who think they can write backend code and lobby managers to let them have a go at it, with disastrous results. I have never worked with JS backend devs who were any good. I am sure they exist, but I have yet to meet one.
It helps decipher JS error stack traces by applying source maps to them. At my previous company, we struggled to configure Sentry to work with source maps, and every error message was cryptic due to minification. I first built a little command-line utility and later turned it into a web app.
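For the curious, the heavy lifting in applying a source map is decoding its Base64 VLQ "mappings" field. Here's a minimal sketch of that decoding — the `decodeVLQ` helper is mine for illustration, not the tool's actual code; a real tool would lean on Mozilla's `source-map` library instead:

```javascript
// Minimal Base64 VLQ decoder — the encoding source maps use for the
// "mappings" field. Illustrative only; real tools use the source-map library.
const B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

function decodeVLQ(segment) {
  const values = [];
  let value = 0;
  let shift = 0;
  for (const ch of segment) {
    const digit = B64.indexOf(ch);
    value += (digit & 31) << shift; // low 5 bits carry data
    if (digit & 32) {
      shift += 5; // continuation bit set: more chars follow for this value
    } else {
      const negative = value & 1; // lowest bit of the result is the sign
      value >>= 1;
      values.push(negative ? -value : value);
      value = 0;
      shift = 0;
    }
  }
  return values;
}

// A segment like "AAAA" decodes to four relative offsets:
// [generatedColumn, sourceIndex, originalLine, originalColumn]
console.log(decodeVLQ("AAAA")); // → [0, 0, 0, 0]
```

Each decoded value is a delta relative to the previous mapping, which is what keeps the mappings string compact.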
An app where you can draw with emojis and then copy your art as text to send via, e.g., a messenger. I want to make it a full-featured drawing app with undo/redo, a line tool, circle/square shapes, etc.
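The "copy as text" part is the fun bit: the canvas is just a 2D grid of emoji cells, so exporting is joining rows into a plain string. A rough sketch of that idea (the grid and `gridToText` helper are hypothetical, not the app's code):

```javascript
// Hypothetical emoji canvas: a 2D array of cells, null/undefined = empty.
const grid = [
  ["🟦", "🟦", "🟦"],
  ["🟦", "⭐", "🟦"],
  ["🟦", "🟦", "🟦"],
];

// Join each row into a line, then join lines with newlines,
// substituting a filler emoji for empty cells so columns stay aligned.
function gridToText(grid, empty = "⬜") {
  return grid
    .map(row => row.map(cell => cell || empty).join(""))
    .join("\n");
}

console.log(gridToText(grid));
```

Using a full-width filler like ⬜ for empty cells is what keeps the art aligned when pasted into a chat.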
A tool to unminify JavaScript stack traces. You paste your stack trace and provide the source maps, and it deciphers the trace and lets you find the origin of an error.
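Step one in a tool like this is just parsing frames out of the pasted trace; the actual mapping back to original positions then goes through the source map (e.g. `SourceMapConsumer#originalPositionFor` in Mozilla's `source-map` library). A rough sketch of the parsing step, assuming V8-style frames — the regex is a simplification, not the tool's actual code:

```javascript
// Parse a V8-style stack frame such as:
//   "    at handleClick (app.min.js:1:2345)"
// Real traces also contain eval frames, async markers, etc.,
// which would need extra handling.
function parseFrame(line) {
  const m = line.match(/at\s+(?:(.+?)\s+\()?(.+?):(\d+):(\d+)\)?\s*$/);
  if (!m) return null;
  return {
    fn: m[1] || "<anonymous>",
    file: m[2],
    line: Number(m[3]),
    column: Number(m[4]),
  };
}

console.log(parseFrame("    at handleClick (app.min.js:1:2345)"));
// → { fn: "handleClick", file: "app.min.js", line: 1, column: 2345 }
```

The extracted `(line, column)` pair is exactly what a source-map lookup needs to recover the original file, line, and identifier.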