Because the term "compression" means completely different things for lossless codecs versus lossy codecs, trying to compare their compression powers is like comparing apples to apple peels.
In principle, there's a fixed limit to how far data can be compressed with perfect fidelity. That limit is called the data's intrinsic entropy. The entropy of real-world imagery can't be measured exactly, but we can get a good idea of the limit by seeing how far we can compress it if we take all the time and memory we need. In the case of 24-bit RGB real-world images, our best statistical models yield a compression ratio of around 3, corresponding to an entropy of 8 bits per pixel. In other words, in the future, when computers are hundreds or thousands of times faster than they are today, we'll be able to improve SheerVideo's real-time compression power by a further 36%. For comparison, SheerVideo lossless video compression is already 120% more powerful than uncompressed video.
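The arithmetic behind those percentages can be checked in a few lines. This is just a back-of-the-envelope sketch: it assumes "compression power" means the compression ratio minus one, expressed as a percentage, and it assumes a real-time SheerVideo ratio of about 2.2 (which is what "120% more powerful than uncompressed" implies).

```python
# Hypothetical sanity check of the figures above. Assumes "compression
# power" = (compression ratio - 1) as a percentage, and a SheerVideo
# real-time ratio of ~2.2 (implied by "120% more powerful").

def power_percent(ratio):
    """Compression power relative to uncompressed, as a percentage."""
    return (ratio - 1) * 100

entropy_ratio = 24 / 8   # 24-bit RGB down to ~8 bits/pixel -> ratio 3
sheer_ratio = 2.2        # SheerVideo's real-time ratio (assumed)

print(power_percent(entropy_ratio))   # power at the entropy limit: 200%
print(round(power_percent(sheer_ratio)))   # SheerVideo today: 120%
# Remaining headroom if we could reach the entropy limit: ~36%
print(round((entropy_ratio / sheer_ratio - 1) * 100))
```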
In contrast, there's no limit to the compression power of an approximating compressor such as JPEG or MPEG. After all, any approximating compressor can discard all the information in a movie, yielding a zero-length movie file, which amounts to infinite compression power. Of course, the reconstructed movie would then have zero fidelity to the original, so it wouldn't have any practical value. In other words, when you decompress any zero-length movie, you can't count on the result bearing any resemblance whatsoever to the original. All the same, it might be fun to write a prank codec that compresses every movie down to zero bits and decompresses them all to a video of, say, Elvis.
In other words, approximating codecs can theoretically always achieve higher "compression" power than perfect-fidelity codecs, just by throwing more information away. So you'd think that even at their highest quality setting, lossy codecs would always have way higher compression power than lossless codecs, right? After all, they get to cheat by ignoring information that's hard to compress.
Surprisingly, no. Here's a chart showing how lossless image codec SheerVideo lines up in terms of error rate and power against high-quality lossy image codecs.
The error rates quoted here are the root-mean-square error per sample of the encoding+decoding cycle, defined as the square root of the mean of the squared differences between the reconstructed image and the original image, where the mean is taken over every sample of every pixel of every frame:

rms error = (mean_sample (reconstructed_sample - original_sample)²)^(1/2)

In case you don't speak algebra, what the RMS error measures is the difference between the original and reconstructed images, formulated in terms of distance. To express this in bits, simply take the base-two logarithm:
error = log2(rms error)
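The two formulas above can be sketched in a few lines of NumPy. The array names and shapes here are illustrative, not part of any codec's API:

```python
# A minimal sketch of the error measure described above.
# "original" and "reconstructed" stand for arrays of 8-bit samples;
# the names are illustrative assumptions, not a real codec interface.
import numpy as np

def rms_error(original, reconstructed):
    """Root-mean-square error per sample between two images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def error_bits(original, reconstructed):
    """The same error expressed in bits: log2 of the RMS error."""
    return np.log2(rms_error(original, reconstructed))

# Example: every sample off by exactly 2 -> RMS error 2 -> 1 bit of error.
a = np.full((4, 4), 100, dtype=np.uint8)
b = np.full((4, 4), 102, dtype=np.uint8)
print(rms_error(a, b))   # 2.0
print(error_bits(a, b))  # 1.0
```

A perfect-fidelity codec such as SheerVideo reconstructs every sample exactly, so its RMS error is zero.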
As you can see from this chart, in spite of the fact that lossless image codec SheerVideo has zero error, its compression power is comparable to that of lossy image codecs Photo JPEG and Motion JPEG at their Best quality settings. Although SheerVideo's compression power is somewhat lower for RGB images, where the JPEG algorithms throw away half the chroma information per pixel, SheerVideo actually has significantly higher compression power than Motion JPEG for 2vuy images. On top of that, SheerVideo compresses many times as fast as any high-quality approximating codec. So if you care about quality enough to use Photo JPEG or Motion JPEG at the Best quality setting, switching to nondestructive SheerVideo will save you a huge amount of time and eliminate all codec artifacts, at little cost in disk space, or even a savings.
And if you were under the impression that DV is a high-quality codec, take a look at its error rate: a whopping 3.75 bits per sample for 2vuy, and 2.48 bits per sample for RGB. Taking into account the fact that the worst error rate a codec could possibly spew out for 8-bit samples is 7 bits, this means that only half the bits in DV images have anything to do with the original images!
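Taking the numbers above at face value, the arithmetic behind that claim works out as follows (assuming, as the text does, that 7 bits of error per 8-bit sample is the worst case, so the meaningful fraction is whatever error leaves over):

```python
# Rough arithmetic behind the claim above: of a worst-possible 7 bits
# of error per 8-bit sample, DV's measured error eats up much of the
# budget. The "meaningful fraction" formula is our assumption.
worst_error_bits = 7.0   # worst possible error for 8-bit samples, per the text
dv_errors = {"2vuy": 3.75, "RGB": 2.48}   # DV error rates, bits per sample

for name, err in dv_errors.items():
    meaningful = (worst_error_bits - err) / worst_error_bits
    print(f"{name}: {meaningful:.0%} of the bits are meaningful")
# 2vuy: 46% of the bits are meaningful
# RGB: 65% of the bits are meaningful
```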