I used to use Spark SQL for this, but I've switched. Now I use Spark when I want to transform data, and when I'm writing ad-hoc data exploration/investigation queries, PrestoDB is my jam; that's exactly what it's designed for. Parquet as the storage format makes both of these quite workable.
Plus you can mix and match R, Scala and SQL all together.
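Rough sketch of what I mean, assuming a spark-shell session (so `spark` is already defined) and made-up S3 paths and column names:

    import org.apache.spark.sql.functions._

    val events = spark.read.json("s3://my-bucket/raw/events/")

    // Do the heavy transformation work in Spark...
    val daily = events
      .withColumn("day", to_date(col("ts")))
      .groupBy("day", "user_id")
      .agg(count("*").as("event_count"))

    // ...and land the result as Parquet, partitioned by day.
    daily.write
      .mode("overwrite")
      .partitionBy("day")
      .parquet("s3://my-bucket/warehouse/daily_events/")

    // Point a Hive external table at that location and Presto can hit the
    // same files for ad-hoc SQL, e.g.:
    //   SELECT day, sum(event_count) FROM daily_events GROUP BY day;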