I can only assume so, given the nature of the experiment. But it seems counterintuitive to add yet more of it. Naively speaking, it feels a bit like the actual data is being "guided" towards the simulation.
However, I have zero subject matter expertise with physics, so my intuition is probably not worth much.
Since the EHT is an interferometer, the observations contain less information than would be available from a single telescope of equivalent size. Therefore, to reconstruct the 'full image' you need to find the model which best fits your interferometric observables. As far as I can tell, what they've done with PRIMO is basically a fancy version of this kind of modelling. The data isn't necessarily being guided towards the simulations; it's more that we have a better computational technique for precisely fitting the data.
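To make that concrete, here's a toy sketch of what "fitting interferometric observables" means (this is not the EHT pipeline; the ring model, numbers, and noise levels are all made up): an interferometer samples the Fourier transform of the sky at sparse (u,v) points, and reconstruction means searching for model parameters whose predicted visibilities match those samples.

    # Toy visibility fitting with a thin-ring sky model (illustrative only).
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.special import j0

    rng = np.random.default_rng(0)
    u = rng.uniform(-8e9, 8e9, 60)   # sparse (u,v) samples, in wavelengths
    v = rng.uniform(-8e9, 8e9, 60)

    def ring_vis(params, u, v):
        # A thin ring of total flux F and angular radius r (radians) has
        # visibility F * J0(2*pi*r*rho), where rho is the baseline length.
        flux, radius = params
        return flux * j0(2 * np.pi * radius * np.hypot(u, v))

    truth = [1.0, 20e-6 / 206265]    # 1 Jy, ~20 microarcsecond radius
    data = ring_vis(truth, u, v) + 0.01 * rng.standard_normal(u.size)

    # "Reconstruction" = find the parameters that best fit the sampled data.
    fit = least_squares(lambda p: ring_vis(p, u, v) - data, x0=[0.5, 1e-10])
    print(fit.x)   # recovers roughly [1.0, 9.7e-11]

PRIMO's model space is obviously far richer than a two-parameter ring, but the logic is the same: the image you show is whatever best reproduces the measured visibilities.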
As I had to explain to the execs at a lab-on-a-chip startup I was helping to put together: you're generating a proposition of what the data might be, which still needs to be qualified with physical testing; it is not actual data.
This concept seems to be worryingly lost on some amid the flurry of excitement about using ML/AI in academic research.
The data clearly supported the hole in the middle; only reconstructions with a hole fit the data. They also support one side being brighter, but not much more than that. A blurred image is thus honest: it doesn't make one believe that we know more than we do.
Yes, but those choices had a good scientific basis, which was the best we could do at the time.
They are just starting data collection in a higher frequency band that will allow higher resolution. Unfortunately, our baseline is currently limited by the diameter of the Earth, so the only way to get sharper data is to use shorter wavelengths.
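Back-of-envelope version of that, using the diffraction limit theta ~ lambda/D (my rough numbers; I believe the new band is around 345 GHz, versus the current 230 GHz):

    # Angular resolution ~ wavelength / baseline (rough estimate).
    arcsec_per_rad = 206265
    baseline = 1.27e7                          # m, roughly Earth's diameter
    for freq_ghz, label in [(230, "current"), (345, "higher band")]:
        wavelength = 3e8 / (freq_ghz * 1e9)    # m
        theta_uas = wavelength / baseline * arcsec_per_rad * 1e6
        print(f"{freq_ghz} GHz ({label}): ~{theta_uas:.0f} microarcseconds")

So going from 230 GHz to 345 GHz buys you roughly a factor of 1.5 in resolution with the same Earth-sized baseline.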
Interferometry relies on measuring the interference pattern between two points simultaneously receiving the incoming radio waves: both elements of a baseline must be measured at the same time.
If we were to let the Earth travel around the Sun and measure components of the same baseline at different times, we would violate this requirement.
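A toy version of why simultaneity matters (made-up numbers, and a single tone rather than real wideband noise): the correlator multiplies the two stations' voltage streams sample by sample, so both streams must exist on a common timebase.

    # Correlating two stations' simultaneous recordings (illustrative only).
    import numpy as np

    f = 1e6                                   # toy baseband frequency, Hz
    t = np.linspace(0, 1e-3, 200_000)         # shared timebase for one scan
    delay = 2e-7                              # geometric delay at station 2, s
    s1 = np.cos(2 * np.pi * f * t)            # station 1 voltage stream
    s2 = np.cos(2 * np.pi * f * (t - delay))  # station 2, same instants
    fringe = 2 * np.mean(s1 * s2)             # ~cos(2*pi*f*delay)
    print(fringe, np.cos(2 * np.pi * f * delay))

Take s1 from one epoch and s2 from months later and the product averages to nothing useful: the phase relationship that encodes the fringe only exists between samples taken at the same time.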
I don't know exactly what the tradeoffs are, but I suspect this approach would have lower sensitivity due to the size of the dishes, that it would be more difficult to get enough telescopes to form a good image, and that transmitting the data back would be a challenge (the black hole observations were shipped on hard drives rather than sent over the internet; even achieving broadband-speed transmission from a deep-space object is difficult).
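For a sense of scale on that last point: the 2017 EHT campaign recorded something like 5 petabytes (treat that figure as approximate; I'm going from memory), and moving that over a network takes a while even at terrestrial broadband rates:

    # Time to transmit ~5 PB at various link speeds (back-of-envelope).
    bits = 5e15 * 8                          # ~5 PB in bits
    for mbps in (100, 1_000, 10_000):
        seconds = bits / (mbps * 1e6)
        print(f"{mbps:>6} Mbit/s -> {seconds / 86400:.0f} days")

That's about 12 years at 100 Mbit/s and still weeks at 10 Gbit/s over fiber; deep-space links run orders of magnitude slower than that, so shipping physical media wins easily.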
The original image was an amalgamation of several different reconstruction techniques with varying dependence on prior information. They also had data-driven techniques during the original analysis, but I don't think those were used to generate the final image shown to the public.