This display is definitely programmable to show any output you'd want. A similar display is the Turing Smart Screen (linked in the article), which is a small image-over-USB display.
I think it would have sounded more reasonable in French, which is my actual native tongue. (i.e. I subconsciously translate from French when I'm writing in English)
((this comment was also written without AI!!)) :-)
Oh, my honest apologies then, Greg! :) I am not a native speaker myself. And as far as I can tell, the phrasing is absolutely grammatically correct, but there's some quality to it that registers as LLM-speak to me.
I wonder how the causal graph looks here: do people (especially those working with LLMs a lot) lean towards LLM-speak over time, or did both LLMs and native speakers pick up this very particular sentence structure from a common source (e.g. a large corpus of French-English translations in the same style)?
No apologies needed, but thanks for your kind words! I think that we’re all understandably “on edge” considering that so much content is now LLM-generated, and it’s hard to know what’s real and what isn’t.
I’ve been removing hyphens and bullet points from my own writing just to appear even less LLM like! :)
Great stylistic chicken and egg question! French definitely tends to use certain (I’m struggling to not say “fancier”) words even in informal contexts.
I personally value using over-the-top ornate expressions in French: they both sound distinguished and a bit ridiculous, so I get to both ironically enjoy them and feel detached from them… but none of that really translates to casual English. :)
We got rid of all the Rails apps (the ones that needed a backend). We've moved our Postgres databases to Neon and run our Docker containers on Google Cloud Run (these containers don't need to run 24/7, so we pay just a few cents each month, and cold starts are much faster and more reliable than on Heroku).
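For anyone curious what that setup looks like in practice, here's a minimal sketch of a scale-to-zero Cloud Run deploy wired to a Neon connection string. The project ID, service name, region, and DATABASE_URL value are placeholders I'm assuming for illustration, not details from the comment above:

    # Build and push the container image (assumes a Dockerfile in the repo and an
    # existing Google Cloud project; "myapp" and MY_PROJECT are placeholders)
    gcloud builds submit --tag gcr.io/MY_PROJECT/myapp

    # Deploy with scale-to-zero and hand the app its Neon connection string
    # (DATABASE_URL is just a conventional env var name, not confirmed above)
    gcloud run deploy myapp \
      --image gcr.io/MY_PROJECT/myapp \
      --region us-central1 \
      --min-instances 0 \
      --set-env-vars "DATABASE_URL=postgres://USER:PASSWORD@YOUR-NEON-HOST/dbname"

With min-instances at 0 you're only billed while requests are being served, which is what makes the few-cents-a-month figure plausible.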
>> and what did you use to manage git push deployments, setting env vars to replicate the heroku features?
Yes, DigitalOcean did all of this; it was very feature-close to Heroku. Over time we've migrated everything stable/prod to AWS, just because AWS has more products and hence you have everything in one place inside a VPC (e.g. a vector DB).
As for Replit, I'd use it for anything I can in the early stages. It helps you prototype the ideas you're testing, and you can iterate rapidly. For prod we'd centralize onto AWS, given the ecosystem.
> and last q :-) re AWS - once you moved there, did you use something like elasticbean or app runner? or did you roll your own CI/CD/logging/scaling...?
We started with Lambdas because you can split work across people and keep dependencies to a minimum. Once your team gels and your product stabilizes, it's helpful to Dockerize it and move to ECS, which is what we did. Some teams in the past used EKS, but IMHO it required too much knowledge for the team to maintain, hence we've stuck with ECS.
All CI/CD goes via GitHub --> ECS. This is a very standard pipeline (roughly the steps sketched below) and also works well locally for development. ECS handles the scaling quite well, and provides a natural path to EKS when you need to scale big time.
For logging, if I could choose I'd go with Datadog, but often you go with whatever the budget solution is.
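To make the GitHub --> ECS pipeline above concrete, here's a rough sketch of the steps such a workflow typically runs. The account ID, region, repository, cluster, and service names are placeholders, and it assumes the ECS task definition points at the :latest tag; none of these specifics come from the comment itself:

    # Build the image (assumes a Dockerfile in the repo; "myapp" is a placeholder)
    docker build -t myapp .

    # Log Docker in to ECR and push the image
    aws ecr get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
    docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest

    # Roll the ECS service onto the freshly pushed image
    aws ecs update-service --cluster prod-cluster --service myapp --force-new-deployment

The same commands run fine from a laptop, which is what makes this kind of pipeline easy to debug locally before wiring it into a hosted CI runner.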
Apologies for the long video! I didn't want to rush too much as it's supposed to serve as a tutorial.
CLAVIER-36 is a musical instrument, so it will necessarily take some time to master.
You can jump to 7:14 in the video - https://youtu.be/rIpQmJVMjCA?feature=shared&t=434 - to hear and see how it works. It's a grid-based instrument, where you place "operators", or functions, on the grid.
That's the 10-second version. The longer version will unfortunately require a bit of a time investment. But it's quite interesting once/if you get into it and start making patches.
Also, if you click OP's link - https://clavier36.com/p/LtZDdcRP3haTWHErgvdM - you should be brought to an example patch. Is that working for you? Unfortunately, a mobile version is not available right now (it would be tricky to port without dramatically rethinking the UI).
((I really wanted the latter display to work on my Mac, but there's unfortunately some OS-level USB buffering (I think) that ends up creating a corrupted image - https://github.com/mathoudebine/turing-smart-screen-python/i... ))