Just as a thought experiment, would it be viable to send up an array of traditional hard drives? Arrange them all for use as reaction wheels, then spin them up to persist/de-stage data while changing/maintaining targets.
Probably worse than sending up well-shielded flash, but I don't think the Seagate/WD warranty expressly forbids this usage.
Who needs a house fire? A bit of quartz glass, a blow torch, and an oxygen supply, and you can convert your unused diamonds into carbon dioxide without losing the house*
He also covered this in a more recent talk [1] which has some better audio and a direct feed of the slides. It also comes with an entirely different set of interesting stories for anyone inclined to listen :)
With regard to your call for interesting language support, I'd add a very low-priority suggestion for documentation and specification formats: plain text, Markdown, simple HTML paragraphs and sections. If a new paragraph or sentence is added to a spec, or a MUST becomes a MAY, it'd be neat to surface the context of the change instead of a word/line diff.
As the Semantic History is not yet available, how do you envision it being displayed at the moment? What sort of information are you currently collecting? Is this tracked across the project history? To that end, are you building Tree-Sitter grammars and queries yourself, or are you using the pre-existing grammars and building the language support into Asqi?
For context, I've got a back-burner project that maps reimplemented code (manually mapped, with annotations or structured comments) to identifiable items in an upstream codebase and git repository. When the content of an item is changed upstream, affected downstream code could be flagged for review. It's still early in the design/prototyping phase, but it feels like there's some interesting overlap with Asqi.
YES - this was very much on my mind, actually. I was thinking we really need section outlining for Markdown (and for diffs etc., so that once we have collision detection later we can say "two people want to edit this paragraph"). It didn't make it into the first cut, but it was and still is top of mind as I'm dogfooding it.
I have had some good ideas and some stupid ideas for semantic history, and a number of prototypes with very mixed results. The simplest option, and what I'll probably go with, is just a table of commits-modifying-this-thing. But that's not all the information; for example, a linear list of commits doesn't convey branch/merge topology well. I'm not sure how useful that information is, but I'd kinda like to see it.
I collect two things: first, we can do a semantic git-diff operation in the backend; and second, we search-index the diffs so you can ask "which commits modify entity X" and get a fast answer. That's what the UI will do when you focus on a function.
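Schematically, the index side is just an inverted map from entity to commits. A minimal Rust sketch (illustrative only; these names aren't Asqi's actual schema, and the real thing is a proper search index):

    use std::collections::HashMap;

    // Illustrative shape only: map each semantic entity to the commits whose
    // semantic diff touched it, so "which commits modify entity X" is one lookup.
    #[derive(Default)]
    struct EntityIndex {
        commits_by_entity: HashMap<String, Vec<String>>, // entity id -> commit ids
    }

    impl EntityIndex {
        // Called once per commit with the entities its semantic diff touched.
        fn record(&mut self, commit: &str, touched: &[&str]) {
            for entity in touched {
                self.commits_by_entity
                    .entry((*entity).to_owned())
                    .or_default()
                    .push(commit.to_owned());
            }
        }

        // The query behind "focus on a function, list its history".
        fn commits_modifying(&self, entity: &str) -> &[String] {
            self.commits_by_entity
                .get(entity)
                .map(Vec::as_slice)
                .unwrap_or(&[])
        }
    }

So focusing on a function amounts to one commits_modifying call, and the interesting work is all in producing good entity identities for the diff.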
I use off-the-shelf TS grammars and write custom queries for them. I've got a custom abstraction in the backend that lets you query for ranges of nodes at once, which is how we jump to the top of comments above a function rather than to the function definition itself.
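For flavor, here's roughly what the off-the-shelf route looks like, as a standalone sketch assuming the 0.20-era tree-sitter and tree-sitter-rust crates (not our actual abstraction; the comment-extension logic here is a simplified stand-in for the range queries):

    use tree_sitter::{Parser, Query, QueryCursor};

    fn main() {
        let language = tree_sitter_rust::language();
        let mut parser = Parser::new();
        parser.set_language(language).unwrap();

        let source = "/// Adds two numbers.\nfn add(a: i32, b: i32) -> i32 { a + b }";
        let tree = parser.parse(source, None).unwrap();

        // A custom query over the stock grammar.
        let query = Query::new(language, "(function_item) @func").unwrap();
        let mut cursor = QueryCursor::new();

        for m in cursor.matches(&query, tree.root_node(), source.as_bytes()) {
            let func = m.captures[0].node;

            // Extend the match upward over contiguous comments so the jump
            // target is the top of the doc block, not the `fn` keyword.
            let mut start = func.start_byte();
            let mut prev = func.prev_sibling();
            while let Some(node) = prev {
                if node.kind() != "line_comment" {
                    break;
                }
                start = node.start_byte();
                prev = node.prev_sibling();
            }
            println!("jump target at byte {start}");
        }
    }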
Your back-burner project is exactly where Asqi is headed -- and in fact, the backend for this is 100% done in Asqi, but the frontend doesn't show the data yet. The idea is to determine things like "two branches modify function X in two different ways", so that even if the diffs don't collide, you get to see that a potential semantic conflict is coming up. There's also an opportunity to use past change data to detect when a function is sensitive to multiple editors, versus when it's just a big list of calls/hooks or some such where overlap doesn't matter much. So in the long term, I think of it as scoring "how important is this potentially interesting conflict scenario" as some type of user-attention number, priority-sorted so the user sees the most important items first.
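The first-order version of that check is just set intersection over each branch's modified entities (an illustrative sketch, not the actual backend code):

    use std::collections::HashSet;

    // Illustrative only: each set holds the entities a branch has modified
    // since the merge base, as reported by the semantic diff. Any overlap is
    // a potential semantic conflict, even when the textual diffs never touch.
    fn potential_semantic_conflicts<'a>(
        modified_on_a: &'a HashSet<String>,
        modified_on_b: &'a HashSet<String>,
    ) -> Vec<&'a str> {
        modified_on_a
            .intersection(modified_on_b)
            .map(String::as_str)
            .collect()
    }

The attention score would then weight each hit by how conflict-prone that entity has historically been.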
(Btw, I would personally not like to hear that my pet project was being implemented by a non-free product; the silver lining here is that Asqi will always let you analyze private repos locally if you don't need to pull them automatically -- i.e. if you have local clones -- so it's probably free for what you're doing unless you're launching a SaaS product. I may also add some type of data export API later so you can use the Asqi backend to power other frontends.)
> a table of commits-modifying-this-thing [...] a linear list of commits doesn't convey branch/merge topology well.
Agreed. Presenting both the local diff and its location in the commit graph seems like a better bet for helping people glean a change's purpose and context. I'm also thinking of using a table of per-item changes tied to the commit graph for topologically sorted history and reachability information. This will probably be backed by a per-commit list of item identifiers with their hashed contents for easier comparison.
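Concretely, I'm picturing something like this shape (a sketch; the names are placeholders):

    use std::collections::BTreeMap;

    // Sketch: one snapshot per commit, mapping each tracked item's identifier
    // to a hash of its content so commits can be compared item-by-item.
    struct CommitSnapshot {
        commit_id: String,
        items: BTreeMap<String, [u8; 32]>, // item identifier -> content hash
    }

    // Items whose hash differs, or which were added or removed, between two
    // snapshots; these are the per-item changes for that edge of the graph.
    fn changed_items<'a>(old: &'a CommitSnapshot, new: &'a CommitSnapshot) -> Vec<&'a str> {
        let modified_or_added = new
            .items
            .iter()
            .filter(|(id, hash)| old.items.get(*id) != Some(*hash))
            .map(|(id, _)| id.as_str());
        let removed = old
            .items
            .keys()
            .filter(|id| !new.items.contains_key(*id))
            .map(|id| id.as_str());
        modified_or_added.chain(removed).collect()
    }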
It sounds like your abstraction is doing a great job of representing file structure. For the most part, I just want to tell users to review a set of related symbols and revisions after an identifier's body has changed. The user is then responsible for performing the review and updating the "last-approved" information.
As a more concrete example, I'm expecting users to maintain their own mappings to items in the upstream sources:
    #[rawr(
        codebase = "reality",
        kind = "constant",
        identifier = "f_pi",
        path = "src/constants.h",
        revision = "123abc456",
        notes = "This probably shouldn't change, but it would be good to know if \
                 the upstream team makes non-Euclidean alterations to the simulator."
    )]
    const PI: f64 = 3.14159;
If f_pi's contents have changed since revision 123abc456, the new value can be flagged for review (the check itself is sketched after the updated example below). In this case, upstream's f_pi was changed to a new value, so the user should be informed that f_pi was updated in Reality's src/constants.h@1897246. They can review the upstream change, reimplement it in the downstream codebase, and update the metadata to reflect the coordinates of the last reviewed change:
    #[rawr(
        codebase = "reality",
        kind = "constant",
        identifier = "f_pi",
        path = "src/constants.h",
        revision = "1897246",
        notes = "Required by Legal Counsel for compliance with bill #246."
    )]
    const PI: f64 = 3.2;
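In code terms, the flagging rule reduces to comparing the item's content at two revisions (a placeholder sketch; hash_of_item_at stands in for a real lookup into the upstream repository, not an actual API):

    // Flag an annotated item when its content hash at upstream HEAD no
    // longer matches its hash at the last-reviewed revision.
    fn needs_review(
        hash_of_item_at: impl Fn(&str, &str) -> Option<u64>, // (revision, identifier)
        last_reviewed_revision: &str,
        head_revision: &str,
        identifier: &str,
    ) -> bool {
        hash_of_item_at(last_reviewed_revision, identifier)
            != hash_of_item_at(head_revision, identifier)
    }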
I'm starting to think that the best way to present the changelist is to spit out deep links into an Asqi instance. By the sounds of it, you've also got all the necessary data in the self-hosted Asqi container's /db volume. If you don't mind, I'd like to see if I can directly consume that instead of building my own Tree-Sitter integration.
(Personally, I wouldn't want to hear that my non-free product was being implemented by someone's pet project :) Thankfully, I think we're heading in different directions, leveraging and presenting the same dataset in very different ways. In this case, I'm actually thrilled that someone else is implementing the machinery required by my pet project. Now I'm closer to exploring and following the fast-moving codebases that I wanted to reimplement in the first place!)
Perhaps too late, but Rails 7.1[1] introduced composite primary key support, and there's been a third-party gem[2] offering the functionality for earlier versions of ActiveRecord.
Nope, all Bells will be Taco Bells after the Franchise Wars. I wouldn't trust any other restaurant for my telecommunications and networking needs.
Case in point, their 7-layer Burrito [1] offering maps perfectly to the OSI Model [2]. It is fully self-encapsulating (though occasionally leaky), and operates bidirectionally with full duplex support for correctly-configured clients.
You should also be able to restore your data in a calm, controlled, and correct manner. Test your backups to be sure they work, and to be sure that you're still familiar with the process. You don't want to be stuck reverse-engineering your backup solution in the middle of an already-panicky data loss scenario.
Remain calm, follow the prescribed steps, and wait for your data to come back.