It seems that the 140x lower storage cost comes from:
1. S3 (OO) vs EBS (ES): about 5x
2. No indexing: about 10x?
3. No data duplication (due to using S3 I assume) in HA deployment: 3x
Is my math right? Or do you use something different for compression?
Two orders of magnitude of storage savings is pretty impressive.
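For what it's worth, the three guessed factors above multiply out to 150x, which is in the right ballpark for the claimed ~140x. A trivial sanity check (the factor values are the assumptions from the list above, not confirmed numbers):

```go
package main

import "fmt"

// savings multiplies the three guessed contributions to the storage saving.
func savings() float64 {
	s3VsEBS := 5.0         // assumed: S3 vs EBS cost ratio
	noIndexing := 10.0     // assumed: saving from not building indexes
	noHADuplication := 3.0 // assumed: no data duplication across HA replicas
	return s3VsEBS * noIndexing * noHADuplication
}

func main() {
	fmt.Println(savings()) // 150, close to the claimed ~140x
}
```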
I don't know anyone who uses it for this; specifically, tests are really bad if they're subtly wrong.
Maybe to scaffold the test function, but the actual test is completely useless if you don't trust it.
So like... generate code and have robust tests, or write robust code... but, it's really really daft to generate tests that might hallucinate some random crap (and copilot really does sometimes).
Other people have said this, but copilot is auto complete.
You use it every time you press tab.
Do you accept an auto-complete suggestion 40% of the time? ...mmmm, yes, well, guess why 40% of code is generated by copilot.
It's not because people are generating tests with it.
In a recent personal C project (a custom archive file extractor), I used ChatGPT to generate tons of unit tests.
And honestly, I was extremely impressed. With a bit of context, it was able to generate almost correct test data for a fairly complex binary data format.
I've also leveraged it heavily to:
* generate doxygen comments
* get usage examples of libs I never used
* create a whole bunch of utility functions.
And honestly, for all these tedious tasks, it has done a far better job than I ever would have.
Comments were consistent in styling, the base examples were fairly good, and the utility functions were well made, especially the error handling (which I would probably have semi-consciously skipped, tbh).
In fairness, I modified most of this code slightly to make it fit my project structure, or tweaked it a bit where the AI didn't quite get it right.
(It never fully understood some offset fields in the file format, but got pretty close at times. And in fairness, my naming for these offset fields was a bit questionable.)
Heck, as a test, I even threw an RFC-like spec of the file format at it and asked it to generate a Python parser. The result was not 100% correct, but definitely a good start to iterate on.
In the end, this side project took me 2 weeks to implement, with ChatGPT probably saving around 1 week of dev. It also greatly helped improve the quality of the project (better docs, better tests).
that would be surprising - it does a fantastic job of producing e.g. dumb unit tests. for instance, Copilot + a Go table testing template means you can churn out the simple "make a request to this url with this data, ensure I get a 200 and the response contains 'id' and not 'error'" extremely quickly. the code is trivial but tedious, so you can quickly inspect for sanity and run to ensure they pass, then commit and have them checking future changes.
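The kind of "dumb" table test described above might look roughly like this. Everything here is invented for illustration (the handler, the payloads, the field names); in real code this would live in a `_test.go` file using `testing.T` and `t.Run`, but the shape of the table is the same:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// checkJSON reports whether a response body contains "id" and not "error".
func checkJSON(body string) bool {
	return strings.Contains(body, `"id"`) && !strings.Contains(body, "error")
}

func main() {
	// Stand-in handler; a real test would hit the actual service under test.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"id": 1}`)
	}))
	defer srv.Close()

	// The table: each row is one cheap request/response check.
	cases := []struct {
		name string
		body string
	}{
		{"create widget", `{"name":"a"}`},
		{"create gadget", `{"name":"b"}`},
	}
	for _, tc := range cases {
		resp, err := http.Post(srv.URL, "application/json", strings.NewReader(tc.body))
		if err != nil {
			panic(err)
		}
		b, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// Expect a 200 whose body contains "id" and no "error".
		if resp.StatusCode != 200 || !checkJSON(string(b)) {
			panic(tc.name + ": unexpected response")
		}
		fmt.Println(tc.name, "ok")
	}
}
```

Adding a new case is one more row in the table, which is exactly the kind of trivial-but-tedious code Copilot fills in quickly.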
> it's really really daft to generate tests that might hallucinate some random crap (and copilot really does sometimes).
variations on this comment are all over these threads, which is bizarre. hallucinating means you waste a few seconds reading the code it produced, not that you commit incorrect code.
this isn't like ChatGPT advising people to drink bleach, Copilot is a dumb tool offering an expert (you) suggested solutions for your expert consideration
For me the worst part about this is that the tedium of writing tests makes me reach for layers of abstraction in my test code. And then suddenly my test code is complicated and needs its own tests, and changing a test often runs into problems with the abstractions I foolishly employed.
Being able to churn out the boilerplate for tests, which can make DRY a non-concern, is great. I just hope that if I do this, I never get sloppy, and I always review the tests.
I agree! I try not to make test code too complicated, and prefer repetitive code over abstractions (else you have to test the tests!) so Copilot is useful there.
In my case about 50% of the test code is just boilerplate, so I usually type the test name and generate the rest. Most of the time I have to rewrite the actual test logic (although very trivial tests are sometimes correct), but I probably keep more than 40% if we're talking tests specifically.
A common occurrence is something like a pool of 10 actions where a bunch of tests each do 3 to 7 of them. This is very hard to abstract with a function call.
In the case I was referring to, mainly to keep the code consistent with what was already there and the PRs small, as the code base is primarily owned by a different team. It's also pretty innocuous short tests that read well as they are.
I guess this could be true in some languages and settings, for example when a notable portion of your tests verify correct behavior with null values, arguments of wrong type, etc. When talking about unit tests that verify the actual logic, you should naturally become more careful with your copilot suggestions. Especially if copilot also wrote the code it's testing.
I use caret mode, visual selection, and P quite often to search for the text in the visual selection in a new tab with my preferred search engine. I also like yf for yanking a link to the clipboard.
One problem with Tauri is that it doesn't bundle a Node runtime, meaning you can't take advantage of npm packages that work outside a browser environment. It does let you use Rust packages, but that's a different story from JavaScript.
Yes. As a user, I’m rooting for Tauri, but as a developer, I could probably build my app faster using Electron. Unfortunately, past experience tells me that companies usually optimize for ease of development instead of performance, most of the time :(
> After work I prep for dinner, keep working on my projects, and then drive 5 min west to the beach right before sunset and sit in the back of my car writing in my notebook. This is my favorite time of the whole day, a solid 30 min break where I don't look at my phone or any screen
Sounds like a wonderful way to unwind. What do you write during that time?
Ideas, haikus, sketches. A long time ago a manager gave me a high-end notebook as an end-of-year gift, and since then I've really enjoyed writing with a lead pencil (I'm a mechanical pencil nerd) and my hi-poly eraser to erase mistakes :)
I think it's the "process" of putting pencil to paper that I really enjoy, and seeing the letters appear.
Get good paper and a good pen, it's great. Rhodia pads are great all around paper if you're not aware. I just scribble lists and notes tbh, but they're fantastic.
Very interesting idea! I think I've heard something like this before on giving reward randomly yielding to addictive behaviours, like gambling. Do you have any recommended reference on this idea?
Stanford Neuroscience Lectures. Dr. Sapolsky. Free online on YouTube. Forgot which lecture covered it. The whole class is worth a watch, including the forbidden lecture if you can find it (let's just say it ruffles some feathers by noting some commonalities between OCD and spiritual ritualistic behavior).
> Stanford professor Robert Sapolsky gave the opening lecture of the course entitled Human Behavioral Biology and explains the basic premise of the course and how he aims to avoid categorical thinking.
The keywords you might be looking for are "partial/intermittent reinforcement" and "variable ratio schedule." The original descriptions are in a 1957 book by Ferster and Skinner, but any psychology textbook should have a description of them in the chapter on conditioning.
Karabiner is incredible for making keyboard shortcuts that do some pretty complicated tasks.
Obsidian, what a note-taking app! Makes Evernote look like a dinosaur.
Vim (and NeoVim), you learn it once and use it for everything.
Hope it helps someone!