If you're going to join data, that usually means the application needs a representation of the data that is not naturally stored in one table. To me that's a red flag which tells me this "logic" should be stored with the app, not with the database.
Joins are the thing that makes relational databases worth using. It's not about creating the correct representation of the data; it's about providing the correct data with the correct relationships. The power of the relational database is that it can accommodate data of almost any shape and allow for just about any kind of access. Embedding the 'shape' of your data in your application makes it more fragile and gives you less control, not more.
It's hairsplitting at that point, though; it somewhat reminds me of similar issues in biology with classifying organisms.
Most SQL database packages are designed to have a wide range of queries stored with the database (many queries will only ever exist in front ends or developer tools, of course), for several reasons. One reason is that it sheds light on the data structure and the business logic. Another is that it avoids needless duplication, since a particular view of the data that seems to be needed only today is often needed again later.
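As a minimal sketch of what "stored with the database" can mean (hypothetical table and column names, not taken from anything above), a recurring report query can live in the database as a view instead of being copy-pasted into every front end:

    -- Hypothetical schema: customers(id, name), orders(id, customer_id, total).
    -- Storing the query as a view keeps the join and aggregation logic in one place.
    CREATE VIEW customer_order_totals AS
    SELECT c.id   AS customer_id,
           c.name AS customer_name,
           SUM(o.total) AS lifetime_total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id, c.name;

    -- Any front end or report can then reuse it:
    SELECT * FROM customer_order_totals WHERE lifetime_total > 1000;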
Perhaps it would be better if there were a tidier separation of data and logic; it's hard to say until someone finds a way to do it that is a clear improvement for everyone. We have to work with the tools we've got right now, though, and straying too far from established practice just makes your work harder for the next person to decipher.
It's most certainly not hairsplitting. If the data isn't natural to one table, as the GP says, then that's a strong indicator of two things:
1. NoSQL of any kind is not the solution
2. Joining at the location of the data is always preferable to joining in application code due to the Principle of Locality[1]. The amount of data sent over a wire should always be minimized.
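As a rough illustration of point 2 (hypothetical customers/orders tables, not anything from the thread), compare shipping both tables to the application and matching them there with letting the database do the join and return only the rows and columns actually needed:

    -- Join in the application: both tables travel over the wire in full,
    -- then the app matches orders.customer_id to customers.id itself.
    SELECT id, name FROM customers;
    SELECT id, customer_id, total FROM orders;

    -- Join at the data: only the matching, already-combined rows travel.
    SELECT c.name, o.total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    WHERE o.total > 100;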
I'm not sure that point 2 is always clear-cut. Because of the cartesian product issue, fetching large amounts of data with multiple joins can produce a huge number of returned rows containing a lot of redundant data. I've fixed a few small optimization issues by breaking up one large query with several joins into two or three separate queries.
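A sketch of that effect with a made-up schema: if a user has m addresses and n phone numbers, joining both relations in one query returns m × n rows, each repeating the user's columns, whereas separate queries return only m + n rows:

    -- One query: a user with 3 addresses and 4 phone numbers -> 12 rows,
    -- with the user and address columns duplicated on every row.
    SELECT u.name, a.street, p.number
    FROM users u
    JOIN addresses a ON a.user_id = u.id
    JOIN phones    p ON p.user_id = u.id
    WHERE u.id = 42;

    -- Split up: 3 rows + 4 rows, no redundancy; recombine in the application.
    SELECT street FROM addresses WHERE user_id = 42;
    SELECT number FROM phones    WHERE user_id = 42;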
Here's something we discovered years ago about joins in all of the SQL DBMS systems we used (from the lowly MS-Access to ORACLE and SQL-Server): the order in which you write the joins in the SQL determines how bad the cartesian product cardinality is. The DBMSs seemed incapable of rearranging the joins internally to get the smallest number of records.
We had quite a few instances where a judicious change in the join structure of the SQL meant a many-orders-of-magnitude change in speed due to the decrease in cardinality.
Our process was to test different orders of joins in the problematic SQL and see what effect each had in the DBMS. It usually required that we knew how many tuples were in each of the tables and how they would be restricted. The upshot is that it should have been possible for the DBMS to do its own analysis and produce the best result; we found none of them capable of this task in any meaningful way.
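For what it's worth, the kind of rewrite being described looks roughly like this (hypothetical tables; whether a given optimizer reorders these itself depends on the engine and its statistics):

    -- Join order A: combine the two large tables first, restrict later.
    SELECT o.id, li.sku, r.code
    FROM big_orders o
    JOIN big_line_items li ON li.order_id = o.id
    JOIN small_regions  r  ON r.id = o.region_id
    WHERE r.code = 'NW';

    -- Join order B: start from the small, heavily restricted table so the
    -- intermediate result stays small before touching the large tables.
    SELECT o.id, li.sku, r.code
    FROM small_regions r
    JOIN big_orders     o  ON o.region_id = r.id
    JOIN big_line_items li ON li.order_id = o.id
    WHERE r.code = 'NW';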
That's strange. While the cartesian product is not commutative (because of the order in which items appear in the final product), the cardinality of the result set should be independent of the order in which records are joined, because the cardinality is obtained by multiplying the cardinalities of the constituent sets.
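A tiny worked example of that claim (made-up row counts): with |a| = 10, |b| = 20 and |c| = 5, the full cross product has 10 × 20 × 5 = 1,000 rows no matter which order the tables are listed in:

    -- Same count in every ordering of a pure cross join.
    SELECT COUNT(*) FROM a CROSS JOIN b CROSS JOIN c;  -- 1000
    SELECT COUNT(*) FROM c CROSS JOIN a CROSS JOIN b;  -- 1000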