I watched a video recently that dove deep into this as well[1]. It turns out there's not an easy way to figure out if it's borosilicate other than if it has "made in France" on it or if you know it was purchased in Europe. AFAICT, you can't really buy borosilicate Pyrex in the US.
The video also shows off a cool "mineral oil test" to tell the difference, but it's probably only effective if you have something to compare against.
My takeaway, though, was that I need to thrift some Corningware!
It's a non-destructive test. Quote from the video (with funny YouTube transcription spelling errors):
"Without getting too technical, the gist is if you put the mineral oil in a vessel made of boroilic and then dip another glass made of boroilicate into it, that glass will seem to disappear while others will not. So I filled a vintage what I think is made of borosilicate Pyrex vessel with mineral oil. Then it dipped in a vintage what I think is made of boro silicut loaf pan and it seemed to disappear right before my eyes. Eureka I thought the experiment works well until I dipped a new Pyrex piece lowercase that I know is not made of borocyic and it disappeared too. Once again, another spokesman at the Cording Museum of Glass that I reached out to said that even the mineral test isn't a sure thing. According to Brady Spalling, he says in order for glass to quote unquote disappear in oil, the glass being submerged must have a similar refractory index, which allows light to pass through both without significantly bending. Mineral oil and borosyic do have similar refractory indexes. So what you've heard is correct. This method is often used to quickly ascertain whether a glass object is borosyicate.
However, variability of glass recipes makes it difficult to rely solely upon this method. In short, it may work and it may not."
I'm surprised that there's enough difference between the refractive indices of borosilicate glass and soda-lime glass. They're also both slightly mismatched to mineral oil, but I guess close enough for a visual check. It appears it's not consistently reliable, though.
In the video he claims it does not work in his case — but I saw a very clear (ha ha) difference between the two. You could easily see the edges of the non-boro glassware.
This depends on the source of the raw materials; the green is typically from iron, and you can have higher-purity soda-lime glass that is much clearer. I didn't realize there was a visible blue tint in consumer borosilicate glass; I wonder what causes it.
I wanted to provide an anecdote because I hold opinions opposite to the author's in a variety of ways, but have still used Helix as my primary editor for years now.
I don't chase shiny new tools, nor do I aspire to replace my toolchain with things just because they're built in Rust. I've used vim/neovim for ~15 years. I don't use many TUIs (I actually can't think of any others besides my editor), but my development workflow is entirely terminal-based. I use native splits/tabs in my terminal emulator instead of screen/tmux/zellij. I spent years balancing a minimal vim configuration that included plugins (but not compiled ones, so that it stayed portable) without growing to hundreds or thousands of lines in my vimrc. I'm excited to see how neovim is making progress with native LSP, but for years getting it working meant continuously tweaking vimscript/lua code or adopting a massive plugin written in TypeScript.
When I first tried Helix, LSP just worked; it read what was on the $PATH and used it. That's perfect because it solves another source of annoyance: having different versions of tools for different projects. As the author notes, there are some LSP features that don't work with Helix, but whenever I dig into the issues, I almost always come to the conclusion that the issue lies in LSP being a VSCode monoculture rather than a deficiency in Helix itself. However, using the right version of a tool for a specific project and the time I spent configuring LSP servers were the top problems plaguing my usage of neovim.
If you're a vim user and you're concerned about muscle memory: by the first week I was proficient, and by two weeks Helix was the default in my brain.
I was a huge supporter of neovim -- I was actually submitting patches to the vim mailing list to fix vim on a beta version of macOS at the time tarruda posted his original async patches that kicked everything off. If you had told me, the day before I tried Helix, that you couldn't reimplement a vim-like editor from scratch well enough to abandon the original vim codebase, I would've agreed with you.
> I'm excited to see how neovim is making progress with native LSP, but for years getting it working meant continuously tweaking vimscript/lua code or adopting a plugin written in TypeScript.
Lua and native LSP support were introduced at the same time. I'm not getting how you would tweak Lua code to get LSPs to work before Neovim had native support.
> LSP just worked; it read what was on the $PATH and used it.
This is how Neovim loads LSPs as well. You don't need a plugin to download and manage LSPs. You can just install them externally yourself.
> the time I spent configuring LSP servers were the top problems plaguing my usage of neovim.
For years now, it's been just a few lines to enable LSPs with the config shipped in nvim-lspconfig, if you don't want to override any server-specific settings. And even then it's still pretty easy.
I will give you that Neovim should ship with nvim-lspconfig and just load a compatible LS if it's already in the PATH. Enabling each server one wants to use is annoying. But again, it's just a few lines (and I'm pretty sure a lot of people would wage war if they did either of those things, because "bloat").
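For concreteness, the "few lines" look roughly like this on a recent Neovim with nvim-lspconfig installed; the servers named here are only examples, and they still need to be on $PATH:

```lua
-- Minimal sketch: enable a couple of servers with nvim-lspconfig's shipped defaults.
local lspconfig = require("lspconfig")
lspconfig.rust_analyzer.setup({})
lspconfig.gopls.setup({})

-- Optional: buffer-local keymaps whenever any server attaches.
vim.api.nvim_create_autocmd("LspAttach", {
  callback = function(args)
    vim.keymap.set("n", "gd", vim.lsp.buf.definition, { buffer = args.buf })
    vim.keymap.set("n", "K", vim.lsp.buf.hover, { buffer = args.buf })
  end,
})
```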
I'd explicitly configure which servers were triggered by which filetypes (via autocmd, and when I first started doing this, the Lua binding for autocmd didn't even exist yet) and have to bind LSP functions to keybindings across languages. FWIW, I have no idea what I would've done in vimscript; Lua is a godsend with tables, loops, and lambdas. At that point in time, I was an early adopter of neovim's built-in LSP and everyone else was recommending coc.nvim.
But the juxtaposition at the time was that Helix ships a `languages.toml` that includes all of this out of the box. You can override it if you want, but actually all I wanted was cohesive keybindings for basic LSP functions.
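For a sense of what an override looks like when you do want one, a per-user languages.toml merges with the bundled defaults; this is a rough sketch, and the pyright entry is just an illustrative example:

```toml
# ~/.config/helix/languages.toml (entries merge with Helix's bundled defaults)
[language-server.pyright]
command = "pyright-langserver"
args = ["--stdio"]

[[language]]
name = "python"
language-servers = ["pyright"]
```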
What Helix gives you is opinionated LSP support. With vim you first have to add some configuration and choose key bindings. I have used LSPs with vim before, but I wasn't sure which configuration makes the most sense. In addition, Helix is optimized for discoverability: you get context menus for basic editing commands and LSP commands. This really helps with learning how to take advantage of the editor and the LSP features.
I'm currently discovering Helix because I'm creating an LSP. I had tried to do the work in Zed, but Zed requires compilation with each update. It's for the sake of sandboxing, but it slows everything down immensely.
I was never a big user of netrw or nerdtree, but Helix has <space>+f for a fuzzy-finding file picker, and more recently they added <space>+e / <space>+E for a hierarchical directory explorer.
I build HEAD from source using brew, so I'm not actually sure if the directory explorer is in a stable release.
How do you create a new file deep in a nested folder? In Helix, I think I used touch from a new terminal, but that's a pain with deep folder hierarchies when I'm already in the correct spot in my editor.
The % register contains the path of the current buffer; you can insert it into prompt commands with <C-r>%. <C-w> at the command prompt deletes the last word, which in this case will be the filename of the current buffer, leaving the directory path.
So:
:o <C-r>%<C-w>new-filename<ret>
Would open a new buffer at /path/to/the/previous/buffer/new-filename. The file isn’t created on disk until you explicitly write, so :w! to save the first time.
If you literally just wanted to create a new file instead of opening a buffer, you could do that from inside Helix with :run-shell-command (aliases sh or !) instead of another terminal:
:sh touch <C-r>%<C-w>new-filename<ret>
The :o method has the advantage of LSP integration. For example, when I create a new .clj file that way in a Clojure project, the new buffer is pre-populated with the appropriate (ns) form, preselected for easy deletion if I didn’t want it.
I've seen projects like this for years and I still have the genuinely honest question: what are people doing that managing their dotfiles is a significant problem for them?
I've managed my dotfiles (12 different configuration files, all compatible with Cygwin, WSL, Linux, and macOS) for the past decade in a git repo with a 50 LOC shell script that creates symlinks for me in an intelligent way. What am I missing?
You're missing the empathetic way to comment on someone's obviously unpaid, labor-of-love work. Instead the conversation is about you: your years of seeing "projects like these", your smart, minimal way of managing your config. Make a project and show it to us. Save your pathos for its documentation.
I use chezmoi, and I didn’t have to write a 50-line script - just install chezmoi on a new machine, run a command pointing it to a git repo, and up pop all the dot files and configs I need to have a consistent environment everywhere.
Chezmoi also handles variations in config files for personal vs. work machines, or even differences between the machines themselves.
I agree it’s not a tremendous lift to write a bespoke solution for this (and I did so in the past) but at some point it becomes lower-effort to use something off the shelf.
In addition to that, chezmoi templating can be used to fill in environment variables like secret keys; you just need to unlock rbw or whatever other password manager you use.
I have some that I export in my shell config, and this setup allows me to have the repo in a public place and not worry about who finds it.
+1, I also don’t understand these tools. Especially these days, many apps use ~/.config, so I barely need a for loop to link everything. I like being able to slap my dotfiles on any box with my only dependencies being `bash` and either `git` or `curl | tar xzf`. At Berkeley I spent a lot of time sshing into various machines where I wasn’t root and was only going to use the system for a few hours. Like sshing into each desktop in a computer lab looking for one nobody is running a build on.
It’s worked fine for me for 15 years, macOS, many Linux distros, FreeBSD.
You're missing Windows, GUI apps, and the dozens of other CLI apps beyond those 12? Also cases where symlinks break because apps delete configs before saving, and the ability to differentiate between systems more easily. Also, the final output config is cleaner/more readable if cross-OS compatibility is offloaded to a config manager. Then templates/vars can make configs cleaner/easier to update (e.g. when I moved some portable apps from C: to D: to save SSD space, I only had to update one variable).
They can also limit config diff noise by ignoring unimportant changes like the latest app window position.
During my college days, and a bit after that as well, I used these tools because they were there to be used. I then learned it was not a problem for me; it never was. I used them for the sake of using them. So I stopped using those tools, and tools like config backup, etc.
I think such tools will be useful for people who use hundreds of apps and often have to migrate/reset or replicate those setups.
Kind of like you, my .dotfiles folder is a private github repo now which has barely 10 files and I don't even use symlinks anymore.
So I think it's also kind of a hobby, if I may say so.
chezmoi provides a handy table of features it and other dotfile managers have. I just use a bare git repo because it's simple, but I have at times wanted easier diffs between machines and secrets management.
If that works for you, great. I split files up into multiple repos and manage them with VCSH. The modular approach lets me configure multiple machines differently. I have config on my work laptop that shouldn’t end up on my personal devices, and vice versa. I don’t really need my i3 config on my MacBook Pro. Ditto for XDG paths, just as I have macOS config that doesn’t have a natural fit on a Linux desktop.
I could use one repo for work and one for personal and live with the mess of useless files, but I like the cleanliness and having simple git histories. I also don’t have to have conditional statements all over the place.
I deploy my dotfiles somewhat regularly. At my day job I almost exclusively work on virtual machines, with sometimes different distros. Being able to install my usual setup (fish for the shell, helix as the editor, and a bunch of tools like eza and bat) in a few commands just saves me time.
Now to be fair, since I use rotz for this, I also install extra packages with it. So it's not purely dotfiles.
Depends on your requirements. For example, if you have any values you want to keep secret in your config files, then using a config manager can help you to not expose them in a Git repository. Also, if you work across multiple operating systems, you can use config managers to alter your config files based on the current OS.
I use home-manager for my dotfiles, and they manage quite a few things.
For example:
1. My editor config (neovim), including downloading and installing all my plugins and their dependencies (deno, rust-analyzer, clippy, etc.)
2. All the other tools I use, like ripgrep, python, and so on, including installing the same version on every machine I've got.
3. A bunch of misc programs and scripts I've written for myself, including all their dependencies
4. All the xdg configuration to open http links in the right browser profile, pdfs in evince, etc, as well as all the programs needed for that
5. systemd user units to run various daemons, like syncthing to sync some documents between computers, gpg-agent, etc etc.
And this works on all machines running any linux distro, from arch linux to nixos to ubuntu.
home-manager is just another dotfile manager, but since nix makes packages isolated, such that my user version of $pkg doesn't conflict with or depend on the host version, home-manager can also safely manage applications I run, like my editor, browser, developer tools, systemd units, etc.
I agree that most dotfile managers are weird overengineered projects that could have been a short shell script.
I think nix+home-manager is a weird overengineered project that could not have been a shell script.
There's even more. In games, we also use the term 'Producer' for someone who manages production, i.e. they're mostly a project manager. The design team covers the product design role. Even here, the producers can and will dabble in design discussions insofar as is needed to meet timelines.
The nice thing is no one gets inflated with a manager title they think makes them the boss of every department. You get engineering lead, production lead etc.
After reading about the recommendation system breakthrough[1], I'm more curious about just how much we're leaving on the table with classical algorithms. If you raised the amount of money being funneled into quantum computing and spent it purely funding classical algorithm research, would you be better off?
This is an incredibly useful one-liner. Thank you for sharing!
I'm a big fan of jq, having written my own jq wrapper that supports multiple formats (github.com/jzelinskie/faq), but these days I find myself more quickly reaching for Python when I get any amount of complexity. Being able to use uv scripts in Python has considerably lowered the bar for me to use it for scripting.
Hmm. I stick to jq for basically any JSON -> JSON transformation or summarization (field extraction, renaming, etc.). Perhaps I should switch to scripts more. uv is... such a game changer for Python, I don't think I've internalized it yet!
But as an example of about where I'd stop using jq/shell scripting and switch to an actual program... we have a service that has task queues. The number of queues for an endpoint is variable, but enumerable via `GET /queues` (I'm simplifying here of course), which returns e.g. `[0, 1, 2]`. There was a bug where certain tasks would get stuck in a non-terminal state, blocking one of those queues. So, I wanted a simple little snippet to find, for each queue, (1) which task is currently executing and (2) how many tasks are enqueued. It ended up vaguely looking like:
I think this is roughly where I'd start to consider "hmm, maybe a proper script would do this better". I bet the equivalent Python is much easier to read and probably not much longer.
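To make that comparison concrete, here's a rough sketch of what the Python version might look like; the endpoint paths, JSON shapes, and base URL are invented for illustration:

```python
#!/usr/bin/env python3
"""Rough sketch of the per-queue summary described above (hypothetical API)."""
import json
import urllib.request

BASE = "http://localhost:8080"  # hypothetical service address


def get(path):
    with urllib.request.urlopen(BASE + path) as resp:
        return json.load(resp)


def main():
    for queue_id in get("/queues"):               # e.g. [0, 1, 2]
        tasks = get(f"/queues/{queue_id}/tasks")  # hypothetical endpoint
        running = [t for t in tasks if t.get("state") == "running"]
        pending = [t for t in tasks if t.get("state") == "pending"]
        current = running[0]["id"] if running else "-"
        print(f"queue {queue_id}: running={current} enqueued={len(pending)}")


if __name__ == "__main__":
    main()
```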
Although, I think this example demonstrates how I typically use jq, which is like a little multitool. I don't usually write really complicated jq.
uv has a feature where you can put a magic comment at the top of a script and it will pull all the dependencies into its central store when you do “uv run …”. And then it makes a special venv too I think? That part’s cloudier.
I heavily recommend writing a known working version in there, e.g. `"httpx~=0.27.2"`, which in the best case allows fixes and security patches (e.g. when httpx 0.27.3 releases), and in the worst case lets you change `~=` to `==` if httpx manages to break backwards compatibility with a patch release.
And, of course, always use `if __name__ == "__main__":`, so that you can e.g. run an import check and doctests and stuff in it.
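Putting those pieces together, a tiny self-contained uv script with the inline metadata (PEP 723), the pinned httpx example from above, and the __main__ guard might look like this; `uv run` resolves and caches the dependencies the first time it runs:

```python
#!/usr/bin/env -S uv run
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "httpx~=0.27.2",
# ]
# ///
"""Tiny example: fetch a URL and print the status code and body size."""
import sys

import httpx


def main():
    url = sys.argv[1] if len(sys.argv) > 1 else "https://example.com"
    resp = httpx.get(url)
    print(resp.status_code, len(resp.text))


if __name__ == "__main__":
    main()
```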
You can run import checks and doctests before any script code anyway, and exit() if needed. The advantage of `if __name__ == "__main__":` is that you can import the script as a module in other code, since in that case __name__ will not be "__main__".
That is amazing! I might use this instead of bash for some scripts.
I could imagine a Python wrapper script that parses the Python for import statements, then prepends these comments and passes it off to uv. Basically automating the process.
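A naive sketch of that wrapper idea is below; the hard part in practice is mapping import names to PyPI distribution names (e.g. `import yaml` comes from `pyyaml`), which this sketch just assumes are the same. I believe uv can also write the metadata for you with `uv add --script`, which may already cover much of this.

```python
#!/usr/bin/env python3
"""Naive sketch: collect a script's third-party imports, prepend PEP 723
metadata, and hand the result to `uv run`. Assumes import name == PyPI name."""
import ast
import subprocess
import sys
import tempfile

STDLIB = set(sys.stdlib_module_names)  # Python 3.10+


def third_party_imports(source):
    """Top-level module names imported by the script that aren't stdlib."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            mods.add(node.module.split(".")[0])
    return {m for m in mods if m not in STDLIB}


def main():
    path = sys.argv[1]
    with open(path) as f:
        source = f.read()
    deps = sorted(third_party_imports(source))
    header = "# /// script\n# dependencies = [\n"
    header += "".join(f'#     "{dep}",\n' for dep in deps)
    header += "# ]\n# ///\n"
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.write(header + source)
    subprocess.run(["uv", "run", tmp.name, *sys.argv[2:]], check=False)


if __name__ == "__main__":
    main()
```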
Just wanted to say thanks for such a good write-up and the great work on Otter over the years. We've used Ristretto since the beginning of building SpiceDB and have been watching a lot of the progress in this space over time. We carved out an interface for our cache usage a while back so that we could experiment with Theine, but it just hasn't been a priority. Some of these new features are exciting enough that I could justify an evaluation of Otter v2.
Another major advantage of on-heap caches that wasn't mentioned is their portability: for us that matters because they can compile to WebAssembly.
I actually modified SpiceDB to inject a groupcache and Redis cache implementation. My PoC was trying to build a leopard index that could materialize tuples into Redis and then serve them via the dispatch API. I found it easier to just use the aforementioned cache interface and have it delegate to Redis.
I recommend folks check out the linked paper -- it's discussing more than just confidentiality tests as a benchmark for being ready for B2B AI usage.
But when it comes to confidentiality, having fine-grained authorization securing your RAG layer is the only valid solution that I've seen in use in industry. Injecting data into the context window and relying on prompting will never be secure.
Is that sufficient? I'm not very adept at modern AI, but it feels to me like the only reliable solution is to not have the data in the model at all. Is that what the approach you're describing accomplishes?
Why wouldn't the human mind have the same problem? Hell, it's ironic, because one thing ML is pretty damn good at is getting humans to violate their prompting and, frankly, basic rational thought.
Container runs OCI (Docker) compatible containers by creating lightweight VMs.
This repository houses the command-line interface, which is powered by Containerization[0], the Swift framework wrapping Virtualization.framework to implement an OCI runtime.
I'm going to show my ineptitude by admitting this: for the life of me, I couldn't get around to implementing the macOS-native way to run Linux VMs and used VMware Fusion instead. [0]
I'm glad this more accessible package is available vs. Docker Desktop on macOS or the aforementioned, likely-to-be-abandoned VMware non-enterprise license.
Lima makes this really straightforward and supports vz virtualization. I particularly like that you can run x86 containers through rosetta2 via those Linux VMs with nerdctl. If you want to implement it yourself of course you can, but I appreciate the work from this project so far and have used it for a couple of years.
VMware Fusion very much feels like a cheap one-time port of VMware Workstation to macOS. On modern macOS it stands out very clearly, with numerous elements reminiscent of the Aqua days: icon styles, the tabs-within-tabs structure, etc.
Fusion also has had some pretty horrific bugs related to guest networking causing indefinite hangs in the VM(s) at startup.
Parallels isn't always smooth sailing, but put it this way: I've had a paid license for both (and VBox installed) for many years to build Vagrant images, but when it comes to actually running a VM for purposes other than building an image, I almost exclusively turn to Parallels.
I can still run the latest ARM Fedora Workstation on Apple Silicon with Fusion, and similar distros straight from the ISO, without having to tweak things or run into problems with 3D acceleration, unlike UTM.
1: https://youtube.com/watch?v=2DKasz4xFC0