That’s not true in general. As long as a format stores images in blocks that can be individually accessed, and stores multi-resolution pyramids, it is just as fast. Some examples are N5, the Imaris file format (which is an HDF5 variant), and to some extent KLB (which lacks multi-resolution pyramids) and CATMAID (which stores 2D tiles, not 3D blocks).
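To illustrate why individually accessible blocks make random access fast: reading one pixel only requires locating and loading the single block that contains it, never the whole volume. A minimal sketch of that index arithmetic (the block size of 64 is an arbitrary assumption, not tied to any particular format):

```java
/**
 * Sketch of block-wise random access: given a pixel coordinate,
 * compute which block to load and the offset within that block.
 */
public class BlockAccessSketch {
	static final int BLOCK_SIZE = 64; // edge length of a cubic block (example value)

	public static void main(String[] args) {
		long[] pixel = { 1000, 2000, 150 }; // some 3D pixel coordinate
		long[] blockIndex = new long[3];    // which block holds the pixel
		long[] offset = new long[3];        // position inside that block
		for (int d = 0; d < 3; d++) {
			blockIndex[d] = pixel[d] / BLOCK_SIZE;
			offset[d] = pixel[d] % BLOCK_SIZE;
		}
		// Only the one block at blockIndex needs to be read from disk,
		// regardless of how large the full volume is.
		System.out.println(java.util.Arrays.toString(blockIndex));
		System.out.println(java.util.Arrays.toString(offset));
	}
}
```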
Regarding the use of HDF5 as the BigDataViewer format:
We chose it exactly because it provides the capabilities mentioned above: chunked datasets (blocked images) and many datasets in one file (a filesystem within a file, for storing multiple resolutions, timepoints, channels, …).
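To make the “filesystem in a file” point concrete, this is roughly how a BDV HDF5 file is laid out internally (a simplified sketch from memory; actual files contain more metadata):

```
dataset.h5
├── s00/
│   ├── resolutions    (downsampling factors per mipmap level)
│   └── subdivisions   (chunk/block sizes per mipmap level)
├── t00000/
│   └── s00/
│       ├── 0/cells    (full resolution, chunked 3D dataset)
│       ├── 1/cells    (downsampled)
│       └── 2/cells    (further downsampled)
└── t00001/
    └── ...
```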
It has some serious drawbacks that make me doubt whether we would choose it again. In particular, it doesn’t support multithreaded writing, and it has no journaling, i.e., if the computer crashes while writing to or modifying a huge HDF5 file, it is likely that the whole file is unrecoverably corrupted.
The default implementation of N5 stores image blocks in individual files, which conveniently delegates these capabilities (multithreaded writing, journaling) to the file system. From my point of view, the only drawback is that you end up with millions of small files, which makes it cumbersome to copy datasets around, etc. In this regard, I like HDF5 better, where you can split the data into a few files.
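For comparison, the default N5 filesystem backend maps the same kind of structure onto plain directories, with one small file per block (a sketch; the group name “volume” is hypothetical):

```
example.n5/
├── attributes.json            (N5 container version)
└── volume/
    ├── attributes.json        (dimensions, blockSize, dataType, compression)
    ├── 0/0/0                  (one file per 3D block, path = block grid position)
    ├── 0/0/1
    └── 1/0/0
```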
Also, I should add that the BDV perspective is a bit limited, because the BDV file format only uses a fraction of what HDF5 has to offer (structured data besides images, metadata, …).
Regarding the original question: one advantage of HDF5 over storing data in zip files etc. is certainly that it has a system-independent definition of datatypes and how they are stored in binary form. E.g., issues such as endianness across different architectures are solved, the metadata for a dataset in an HDF5 file tells you which datatype it is, etc. All of this you would have to think about, define, and implement yourself in a homebrewed zip-based format.
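As a small illustration of the endianness point: with raw binary in a homebrewed container, you have to pick and document the byte order yourself, whereas HDF5 records it in the dataset metadata. A sketch using only standard java.nio:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/**
 * Sketch of the endianness problem a homebrewed binary format must solve.
 * The same 16-bit pixel value serializes to different bytes depending on
 * byte order; a reader on another architecture must know which was used.
 */
public class EndiannessSketch {
	public static void main(String[] args) {
		short pixel = 0x1234;

		byte[] little = ByteBuffer.allocate(2)
				.order(ByteOrder.LITTLE_ENDIAN).putShort(pixel).array();
		byte[] big = ByteBuffer.allocate(2)
				.order(ByteOrder.BIG_ENDIAN).putShort(pixel).array();

		// little: 34 12, big: 12 34 -- without stored metadata (as HDF5
		// provides), a reader cannot tell which layout it has been given.
		System.out.printf("little: %02x %02x%n", little[0], little[1]);
		System.out.printf("big:    %02x %02x%n", big[0], big[1]);
	}
}
```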