Postgres is pretty bad at bulk insert. You need to use `COPY ... FROM` and create an in-memory CSV file, or mess around with `unnest()`, to get decent performance.
`COPY FROM`, yes. That's what bulk insert is. How can having the feature make Postgres bad at it?
"in memory csv" no. You don't need to create an in memory csv file in order to load in bulk. Your language binding should provide a way to stream data into a `copy to` process
I can't imagine how `unnest` is related to bulk loading. It's a query feature for unnesting arrays.
The relationship between bulk loading and unnest is as follows.
A single `INSERT` that unnests arrays can insert many rows. The performance is worse than `COPY`, but in the same ballpark. And there are use cases where you'd like to bulk load through a stored procedure for a variety of reasons; calling the procedure with arrays and using `unnest` internally is then a straight win. A sketch of the pattern is below.
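A minimal sketch of that pattern, again assuming psycopg 3 (the `users` table and the `load_users` procedure are hypothetical names):

```python
import psycopg

ids = [1, 2, 3]
names = ["alice", "bob", "carol"]

with psycopg.connect("dbname=test") as conn:
    with conn.cursor() as cur:
        # A procedure that takes parallel arrays and unnests them
        # internally into rows.
        cur.execute("""
            CREATE OR REPLACE PROCEDURE load_users(ids int[], names text[])
            LANGUAGE sql AS $$
                INSERT INTO users (id, name)
                SELECT * FROM unnest(ids, names);
            $$
        """)
        # One call, one round trip, many rows inserted; psycopg
        # adapts the Python lists to Postgres arrays.
        cur.execute("CALL load_users(%s, %s)", (ids, names))
```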