If only XML weren't so damn complicated, it might have had a chance. People love to point at closing tags, but there's a long list of more serious problems.
External entities, namespaces within namespaces, CDATA, namespaces that confusingly look exactly like URLs but aren't, and parser vulnerabilities.
The way XML does namespaces is one of the best things about it - you can host any XML dialect within another, with no name clashes, and without losing the schema identity of embedded parts. Infinite data composability, subject only to the constraints placed by schema authors. And namespaces are URIs, so they can be URLs.
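As a rough sketch of what that buys you (the "report" dialect and its namespace URI below are invented for illustration), Python's standard ElementTree keeps embedded vocabularies apart purely by namespace URI, so an XHTML fragment can sit inside another dialect with no element-name clashes:

    import xml.etree.ElementTree as ET

    # A made-up "report" dialect embedding an XHTML fragment. The prefixes
    # are arbitrary; only the namespace URIs carry identity.
    doc = """
    <r:report xmlns:r="http://example.com/report"
              xmlns:h="http://www.w3.org/1999/xhtml">
      <r:title>Quarterly summary</r:title>
      <r:body>
        <h:p>Revenue was <h:em>up</h:em> this quarter.</h:p>
      </r:body>
    </r:report>
    """

    root = ET.fromstring(doc)
    # ElementTree expands every name to {namespace-URI}localname, so the
    # embedded XHTML keeps its schema identity and cannot clash with the host.
    for p in root.iter("{http://www.w3.org/1999/xhtml}p"):
        print(ET.tostring(p, encoding="unicode"))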
The bad smell was mostly coming from the things inherited from SGML - CDATA, entities, etc. I would also add that the separation into elements and attributes is not a particularly convenient way to model many things; it conflates distinctions that should really be orthogonal (ordered vs. unordered, atomic vs. composite, etc.).
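A small sketch of that element/attribute ambiguity (the "book" markup is invented): the same fact can legitimately be carried either way, so every consumer ends up checking both places even though nothing about the data dictates the choice:

    import xml.etree.ElementTree as ET

    # The same fact ("this book's ISBN"), modeled two equally valid ways.
    as_attribute = ET.fromstring('<book isbn="978-0136091813"/>')
    as_element = ET.fromstring('<book><isbn>978-0136091813</isbn></book>')

    def read_isbn(book):
        # Neither form is canonical, so readers have to handle both.
        return book.get("isbn") or book.findtext("isbn")

    print(read_isbn(as_attribute), read_isbn(as_element))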
All of the things you mentioned serve a purpose that, at scale, would have to be replicated in some ad-hoc manner in JSON.
Yes, XML is more expensive to parse, but so is all the extra work you have to do to get around the various limitations of JSON, work that an optimized XML parser could have done cheaply.
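To make that concrete (the payload below is invented, but the pattern mirrors what conventions like JSON Schema's "$schema" and JSON-LD's "@context" do in practice): once two vocabularies share one JSON document, you end up reinventing xmlns with key prefixes that nothing but convention enforces:

    import json

    # Made-up payload mixing two vocabularies, disambiguated only by an
    # ad-hoc key-prefix convention plus a "$schema"-style pointer - the
    # informal equivalent of xmlns declarations.
    payload = {
        "$schema": "http://example.com/schemas/report.json",
        "report:title": "Quarterly summary",
        "report:body": {
            "html:p": ["Revenue was ", {"html:em": "up"}, " this quarter."],
        },
    }

    data = json.loads(json.dumps(payload))
    # Nothing resolves or validates the prefixes; every consumer has to
    # know the convention, where a namespace-aware XML parser hands you
    # {namespace-URI}localname already disambiguated.
    print([key for key in data if key.startswith("report:")])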