The problem with a text search is that you have to get your keywords exactly right. With LLMs you can ask inexact questions like 'has the topic of X ever been discussed?'[1] without needing an exact match on X. An LLM front end that could return references to the full text seems like the best of both.
[1] For example, your query might set X to 'crime' while the transcript refers to multiple specific types of crime such as 'muggings', 'vandalism', etc., which a full-text search isn't going to match. Further, with the LLM front end you could refine the query to ask about violent crime, etc.
For many cases, yes. With LLM-based embeddings you get "semantic search": if someone searches for "pets", they will most likely get results that include "dogs" and "cats". That is not the case for regular text search.
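The mechanics can be sketched in a few lines: embed the query and each document as vectors, then rank documents by cosine similarity. The vectors below are hand-picked toy values purely for illustration; a real system would get them from a learned embedding model (e.g. a sentence-transformer or an embeddings API).

```python
import math

# Toy embedding vectors, hand-picked for illustration only.
# In practice these come from a trained embedding model.
EMBEDDINGS = {
    "pets":     [0.90, 0.80, 0.10],
    "dogs":     [0.80, 0.90, 0.20],
    "cats":     [0.85, 0.70, 0.15],
    "invoices": [0.10, 0.05, 0.90],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_search(query, corpus):
    """Rank corpus terms by similarity to the query's embedding."""
    q = EMBEDDINGS[query]
    return sorted(corpus, key=lambda doc: cosine(q, EMBEDDINGS[doc]),
                  reverse=True)

results = semantic_search("pets", ["invoices", "dogs", "cats"])
print(results)  # "dogs" and "cats" outrank "invoices", even though
                # the literal string "pets" appears in none of them
```

A keyword search for "pets" would return nothing here; the vector comparison surfaces related terms because semantically similar words end up near each other in the embedding space.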