If it sounds impractical, it's because the whole industry got used to not learning databases beyond the most basic SQL, and to doing everything by hand in application code instead. But given how much of the code in most applications is just an ad-hoc reimplementation of database features, and how much of the business logic is tied to the data rather than to application-specific concerns, I can't help but wonder - maybe a better way would be to treat the RDBMS as an application framework and have the application itself be a thin UI layer on top?
On paper it definitely sounds like grouping concerns better.
While stored procedures, triggers, etc. can be powerful, it has been taught for decades now that putting business logic into the RDBMS is an antipattern (for more or less valid reasons). My main concerns would be vendor lock-in and the limits of the provided language.
In very simple systems that makes sense. But as soon as your validation requires talking to a third party, or you have side effects like sending emails, you suddenly have to move all that logic back out. You end up with a system that isn't very easy to iterate on.
You can model external system interactions with tables representing "mailboxes" - so for example if a DB stored procedure needs to call a third-party API to create a resource, it writes a row in the "outbox" table for that API, then application-level code picks that up, makes the API call, parses the response (extracts the required fields) and stores it in an "inbox" table so now the database has access to the response (and a trigger can run the remainder of the business process upon insertion of that row).
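The outbox/inbox flow described above can be sketched with an in-memory SQLite database standing in for the RDBMS. All table, column, and trigger names here are illustrative assumptions, and the third-party API call is stubbed out; a trigger on the inbox plays the role of "the remainder of the business process":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE outbox (
    id INTEGER PRIMARY KEY,
    api TEXT NOT NULL,          -- which third-party API to call
    payload TEXT NOT NULL,      -- request body, prepared by DB-side logic
    sent INTEGER DEFAULT 0      -- set by the application once the call is made
);
CREATE TABLE inbox (
    id INTEGER PRIMARY KEY,
    outbox_id INTEGER REFERENCES outbox(id),
    resource_id TEXT            -- field extracted from the API response
);
CREATE TABLE resources (outbox_id INTEGER, resource_id TEXT);
-- Fires when the application stores the parsed response, resuming
-- the business process inside the database.
CREATE TRIGGER on_inbox_insert AFTER INSERT ON inbox
BEGIN
    INSERT INTO resources VALUES (NEW.outbox_id, NEW.resource_id);
END;
""")

# 1. DB-side logic requests an external call by writing an outbox row.
conn.execute("INSERT INTO outbox (api, payload) VALUES ('create_widget', '{}')")

# 2. An application-level worker picks up unsent rows and calls the API
#    (the HTTP call itself is faked here).
for row_id, api, payload in conn.execute(
        "SELECT id, api, payload FROM outbox WHERE sent = 0").fetchall():
    fake_resource_id = "widget-123"  # pretend result of the API call
    conn.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))
    # 3. The parsed response lands in the inbox; the trigger takes over.
    conn.execute("INSERT INTO inbox (outbox_id, resource_id) VALUES (?, ?)",
                 (row_id, fake_resource_id))

print(conn.execute("SELECT resource_id FROM resources").fetchall())
```

In a real setup the worker would be the only piece of application code touching the network, and idempotency of the API call would matter (hence the `sent` flag, however crude).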
Surely some RDBMS has the ability to run REST queries, possibly via SQL by pretending it's a table or something.
I can imagine that working on a good day. I don't dare imagine error handling (though would love to look at examples).
Ultimately, it probably makes no sense to do everything in the database, but I still believe we're doing way too much in the application and too little in the DB. Some of the logic really belongs with the data (and has to be duplicated by any program using the same data, or else...; that's probably why people don't like to share databases between programs).
And, at a higher level, I wonder how far we could go if we pushed all data-specific logic into the DB, and the rest (like REST calls) into dedicated components, and used a generic orchestrator to glue the parts together? What of the "application code" would remain then, and where would it sit?
> treat RDBMS as an application framework and have application itself be a thin UI layer on top?
Stored procedures have been a thing. I've seen countless apps that had a thin VB UI and a MSSQL backend where most of the logic is implemented. Or, y'know, Access. Or spreadsheets even!
And before that, AS/400 et al.
But then ORMs came in, and the impedance mismatch became too great. Splitting data wrangling across two completely different points of view makes it extremely hard to reason about.
If you think of an existing database, like Postgres, sure. It’s not very convenient.
What I am saying is that, in a perfect world, the database and the server would be one and the same, running code _and_ holding data at the same time. There's really no good reason why they are separated, and it causes a lot of inconvenience right now.
Sure, in an ideal world we wouldn't need to worry about resources and everything would be easy. But there are very good reasons why they are separated now. Systems like 4th Dimension and K have combined them for decades. They're great for systems of a certain size, but they struggle once their workload is heavy enough, and they seem to struggle to scale out. Being able to update my application without updating the storage engine reduces risk. Having standardized backup solutions for my RDBMS is a whole level of effort I don't have to worry about. Data storage can even be optimized without my application having to be updated.
Something similar but in the opposite direction of lessening DB-responsibilities in favor of logic-layer ones: Driving everything from an event log. (Related to CQRS, Event-Sourcing.)
It means a bit less focus on "how do I ensure this data-situation never ever ever happens" logic, and a bit more "how shall I model escalation and intervention when weird stuff happens anyway."
This isn't as bad as it sounds, because any sufficiently old/large software tends to accrue a bunch of informal tinkering processes anyway. It's what drives the unfortunate popularity of DB rows with a soft-deleted mark (that often require manual tinkering to selectively restore) because somebody always wants a special undo which is never really just one-time-only.
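The shift from "prevent bad states" to "model intervention when they happen anyway" can be sketched with a minimal event log (assumed names throughout; this is nowhere near a full CQRS framework). State is never updated in place: current state is a fold over the history, and a bad state is repaired by appending a compensating event rather than by a constraint that forbids the row from ever existing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (seq INTEGER PRIMARY KEY, account TEXT, delta INTEGER)")

def append(account, delta):
    # Append-only: events are never updated or deleted.
    conn.execute("INSERT INTO events (account, delta) VALUES (?, ?)",
                 (account, delta))

def balance(account):
    # Current state = fold over the full event history.
    (total,) = conn.execute(
        "SELECT COALESCE(SUM(delta), 0) FROM events WHERE account = ?",
        (account,)).fetchone()
    return total

append("alice", 100)
append("alice", -150)       # "weird stuff": balance goes negative
if balance("alice") < 0:
    append("alice", 150)    # intervention: compensating event, history intact
print(balance("alice"))
```

The bad charge stays visible in the log, which is exactly what you want when someone later asks for the "special undo" - there's nothing to soft-delete, just more history to append.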
I think that's the main issue. It's not enough to have a database that can automatically sync between frontend and backend. It would also need to be sophisticated enough to keep some logic only on the backend (because you don't want to reveal it, or to trust the client to adhere to it) and to reject invalid changes made on the frontend. The database would become the app itself.
I don't think a stored procedure that operates only on the master copy of the database can reject an update coming from a second copy and nicely communicate that this happened, so that the other copy can inform the user through some UI.
It's the same issue that killed image-based programming in favor of the edit-compile-run cycle we're all doing: "How do I test? How do I do version control? How do I migrate?"
These are valid concerns, but $deity I wish we focused on finding solutions for them, because the current paradigm of edit/compile/run plus a plaintext single-source-of-truth codebase is already severely limiting our ability to build and maintain complex software.
While I don't like the idea of putting logic into the RDBMS (absent a really good reason), you can do unit tests and code reviews. In a serious project you should already have a way to handle migrations and versioning of the DB itself (for example using prisma, drizzle, etc.). Procedures would be just another entry in the migrations, and unit tests can create a temporary test DB, run the procedures, and compare the results. I agree the tooling is (AFAIK) not great and there would be much more work around that, but it is possible.
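The migrate-then-test workflow described above can be sketched as follows. The migration is just a string (as it would be in a prisma/drizzle-style migration file; the `0042` number and all schema names are made up for illustration), and each test gets its own throwaway in-memory database:

```python
import sqlite3
import unittest

# Illustrative migration: the trigger is versioned alongside the schema.
MIGRATION_0042 = """
CREATE TABLE orders (id INTEGER PRIMARY KEY, total INTEGER NOT NULL);
CREATE TABLE audit (order_id INTEGER, total INTEGER);
CREATE TRIGGER audit_orders AFTER INSERT ON orders
BEGIN
    INSERT INTO audit VALUES (NEW.id, NEW.total);
END;
"""

class TriggerTest(unittest.TestCase):
    def setUp(self):
        # Fresh temporary DB per test, so tests stay isolated.
        self.conn = sqlite3.connect(":memory:")
        self.conn.executescript(MIGRATION_0042)

    def test_insert_is_audited(self):
        self.conn.execute("INSERT INTO orders (total) VALUES (42)")
        rows = self.conn.execute(
            "SELECT order_id, total FROM audit").fetchall()
        self.assertEqual(rows, [(1, 42)])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TriggerTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same shape works against a real server (e.g. a Postgres instance spun up per test run), just with more ceremony around creating and dropping the scratch database.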
The other issue, from experience, is needing to reimplement logic: you end up with stored procedures that duplicate logic that must also run either on your server or on your client - e.g., given the state of the system, is this mutation valid?
Then those multiple implementations inevitably develop different bugs and drift apart, leading to really ugly failures.
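The drift problem can be shown in miniature: the same rule ("quantity must be positive", an invented example) exists once as a DB constraint and once as an app-side copy that has quietly diverged, so the client accepts a mutation the database then refuses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The database's version of the rule.
conn.execute("CREATE TABLE items (qty INTEGER CHECK (qty > 0))")

def app_side_is_valid(qty):
    # App-side reimplementation that has drifted: it allows zero.
    return qty >= 0

qty = 0
assert app_side_is_valid(qty)     # the client happily accepts it...
try:
    conn.execute("INSERT INTO items (qty) VALUES (?)", (qty,))
    rejected = False
except sqlite3.IntegrityError:
    rejected = True               # ...but the DB's copy of the rule refuses
print(rejected)
```

At least here the database catches it; the uglier case is the reverse drift, where the app-side copy is stricter and silently blocks mutations the data model would allow.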
Should I write this logic in the DB itself? Seems impractical.