introduction to R'G'B' color space
The human eye sees color images using three types of cone-shaped light receptors lining the retina,
which contain photoreceptive pigments that are sensitive to three distinct but broadly overlapping regions of the light spectrum,
corresponding to red (R), green (G), and blue (B) light.
This means that all the colors we can see can be represented by mixtures of red, green, and blue light.
[Complications arise because the ranges of the visual pigments overlap each other and vary among people,
but these are usually ignored.]
Accordingly, the color film in movie cameras, the image sensors in color video cameras,
and the renderers in computer image synthesizers
all capture a color image by recording the mixture of red, green, and blue light at each point in the image.
Likewise, color cathode-ray-tube displays, color projectors, color liquid crystal displays,
color plasma displays, and other color displays
all reproduce these color images by mixing the appropriate amounts of red, green, and blue light at each point in the image.
The amounts of red, green, and blue light in a perceived or reproduced color are independent,
and each one can range from none at all, or all the way off (0), to as bright as possible, or all the way on (1).
This means that the space spanned by all possible mixtures of red, green, and blue
can be represented by a unit cube, known as the RGB color cube,
with black at the origin (R,G,B) = (0,0,0); white (1,1,1) at the apex;
the primary colors red (1,0,0), green (0,1,0), and blue (0,0,1) at the near corners;
and the complementary colors cyan (0,1,1), magenta (1,0,1), and yellow (1,1,0) at the far corners.
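The geometry of the cube can be checked in a few lines of code. The sketch below (plain Python, assuming nothing beyond the corner assignments described above) names each vertex and verifies that every complementary color is white minus the corresponding primary:

```python
# The RGB color cube: each corner is a binary (R, G, B) triple.
from itertools import product

corners = {
    (0, 0, 0): "black",   (1, 1, 1): "white",
    (1, 0, 0): "red",     (0, 1, 0): "green",   (0, 0, 1): "blue",
    (0, 1, 1): "cyan",    (1, 0, 1): "magenta", (1, 1, 0): "yellow",
}

# Every vertex of the unit cube is named, and no others exist.
assert set(corners) == set(product((0, 1), repeat=3))

# Each complementary color is white minus the corresponding primary:
# cyan = white - red, magenta = white - green, yellow = white - blue.
for primary, complement in [("red", "cyan"), ("green", "magenta"), ("blue", "yellow")]:
    p = next(k for k, v in corners.items() if v == primary)
    c = next(k for k, v in corners.items() if v == complement)
    assert all(pi + ci == 1 for pi, ci in zip(p, c))
```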
introduction to Y'CbCr color space
The sensitivity of the eye to differences in brightness is nonuniform among different colors.
The unequal sensitivity of the red, green, and blue cone types perceptually distorts the RGB color space.
Green cones are about twice as sensitive to brightness as red cones, and red cones are about three times as sensitive as blue cones.
Because of the differential sensitivity of the different cone pigments,
blue intensity can be quantized three times as coarsely as red,
and red intensity can in turn be quantized twice as coarsely as green, with little perceptible effect.
In analog terms, the blue component can be assigned a third the bandwidth of the red component,
which in turn needs only half the bandwidth of the green component.
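As an illustrative sketch only (the ratios above are perceptual rules of thumb, and the quantizer below is a hypothetical toy, not a production scheme), the coarseness ratios can be expressed as per-channel quantization levels:

```python
# Toy example: quantize each channel with a step size matched to the
# sensitivity ratios quoted above, with red twice as coarse as green
# and blue three times as coarse as red.
def quantize(value, levels):
    """Quantize a value in [0, 1] to the given number of levels."""
    return round(value * (levels - 1)) / (levels - 1)

green_levels = 256                 # finest quantization
red_levels = green_levels // 2     # red: half as many levels as green
blue_levels = red_levels // 3      # blue: a third as many levels as red

r, g, b = 0.70, 0.55, 0.30
rq, gq, bq = (quantize(r, red_levels),
              quantize(g, green_levels),
              quantize(b, blue_levels))

# The maximum quantization error grows as the number of levels shrinks.
assert abs(gq - g) <= 0.5 / (green_levels - 1)
assert abs(rq - r) <= 0.5 / (red_levels - 1)
assert abs(bq - b) <= 0.5 / (blue_levels - 1)
```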
Furthermore, the human retina also contains rod-shaped photoreceptors,
which do not distinguish color, but are more sensitive to overall brightness, or luma (Y').
Outside the central fovea region of the retina, the rods are packed much more densely than the cones,
so that our peripheral vision can perceive brightness details better than color details.
Because of this discrepancy in spatial and intensity resolution,
the color, or chroma (C), of an image can be quantized more coarsely than the luma,
or assigned a smaller bandwidth than the luma, with little perceptible effect.
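The standard way to exploit this is chroma subsampling: luma is kept at full spatial resolution while chroma is averaged over blocks of pixels. The sketch below assumes one common scheme (averaging each chroma plane over 2x2 blocks, as in 4:2:0 sampling) purely as an example:

```python
# Minimal chroma-subsampling sketch: average a chroma plane over 2x2
# blocks, keeping a quarter of the chroma samples while luma stays at
# full resolution.
def subsample_420(chroma):
    """Average a 2-D chroma plane (list of rows) over 2x2 blocks."""
    h, w = len(chroma), len(chroma[0])
    return [
        [
            (chroma[y][x] + chroma[y][x + 1]
             + chroma[y + 1][x] + chroma[y + 1][x + 1]) / 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

cb_plane = [
    [0.25, 0.25, 0.75, 0.75],
    [0.25, 0.25, 0.75, 0.75],
]
assert subsample_420(cb_plane) == [[0.25, 0.75]]  # quarter the samples
```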
Historically, the green > red > blue bias of the human brightness percept
is reflected in the sepia bias of monochrome video and photography.
Similarly, the luma > chroma bias is reflected in the mere fact
that monochrome photography, filmography, and videography preceded color versions of those technologies,
as well as in the relatively small bandwidth allocated to chroma relative to luma in storage and transmission formats.
The digital Y'CbCr luma-chroma spaces are modelled after and designed for ease of interchange with
the color spaces used in international analog color television standards,
such as Y'IQ of the North-American color-television standard NTSC,
and Y'UV of the European color-television standards PAL and SECAM.
The digital Y'CbCr spaces differ from these analog spaces chiefly in that,
for ease of computation, the luma axis (Y'), as seen in RGB space, is not quite perpendicular to the chroma axes:
blue-yellow chroma (Cb) and red-cyan chroma (Cr).
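The non-perpendicularity is easy to verify numerically. Moving along Y' alone changes R, G, and B equally (the gray diagonal), while moving along Cb or Cr alone moves in the directions given by the columns of the inverse matrix; the sketch below uses the familiar BT.601 inverse coefficients (R = Y' + 1.402 Cr, B = Y' + 1.772 Cb) and a dot product:

```python
# Directions in RGB space along which only one Y'CbCr coordinate varies
# (BT.601 coefficients). Nonzero dot products show the axes are not
# perpendicular.
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

y_axis = (1.0, 1.0, 1.0)             # dR/dY', dG/dY', dB/dY'
cb_axis = (0.0, -0.344136, 1.772)    # dR/dCb, dG/dCb, dB/dCb
cr_axis = (1.402, -0.714136, 0.0)    # dR/dCr, dG/dCr, dB/dCr

assert abs(dot(y_axis, cb_axis)) > 1.0   # far from zero: not perpendicular
assert abs(dot(y_axis, cr_axis)) > 0.5
```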
The Y'CbCr representation is used today in most popular digital color image formats,
including the lossy still-image compression standards JPEG-DCT and PhotoYCC,
and the current lossy moving-image compression standards D-5, D-1, Digital Betacam, DV, Motion-JPEG, Photo JPEG, MPEG, and H.263.
The most popular standard relating R'G'B' to Y'CbCr is given in Recommendation ITU-R BT.601, adopted in 1990.
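In normalized form (R', G', B' and Y' in [0, 1]; Cb and Cr in [-1/2, +1/2]), the BT.601 relation can be sketched as follows; the luma weights 0.299, 0.587, and 0.114 are the published BT.601 coefficients, and the 1.772 and 1.402 scale factors simply stretch B minus Y' and R minus Y' to span [-1/2, +1/2]:

```python
# BT.601 conversion between normalized R'G'B' and Y'CbCr.
def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) / 1.772          # scaled so Cb spans [-1/2, +1/2]
    cr = (r - y) / 1.402          # scaled so Cr spans [-1/2, +1/2]
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * cr
    b = y + 1.772 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587   # solve the luma equation for G
    return r, g, b

# Black and white carry no chroma; pure blue has maximal Cb.
assert rgb_to_ycbcr(0, 0, 0) == (0.0, 0.0, 0.0)
y, cb, cr = rgb_to_ycbcr(1, 1, 1)
assert abs(y - 1) < 1e-12 and abs(cb) < 1e-12 and abs(cr) < 1e-12
y, cb, cr = rgb_to_ycbcr(0, 0, 1)
assert abs(cb - 0.5) < 1e-12
```

Note that the two functions are exact algebraic inverses of each other; all the loss in practice comes from quantizing the results to a finite number of bits.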
In the Y'CbCr representation, the luma value (Y'), which is a weighted combination of red, green, and blue whose weights sum to one,
ranges from 0 (black) to 1 (white); while the chroma values (Cb, Cr) each range from -1/2 to +1/2.
Moreover, the three values are independent of each other.
So, as with the RGB space, the Y'CbCr space can be represented as a unit cube.
But here the luma axis extends not along the main diagonal,
but from black at the origin (Y',Cb,Cr) = (0,0,0) in the center of one face
to white at the center of the opposite face (1,0,0).
And the Cb axis ranges from an undisplayable "negative-bluish" yellow (0,-1/2,0) at one edge of the face containing the origin
to an undisplayable "negative-yellowish" blue (0,1/2,0) at the opposite edge;
while the Cr axis ranges from an undisplayable "negative-reddish" cyan (0,0,-1/2) at an adjacent edge of that face
to an undisplayable "negative-cyanish" red (0,0,1/2) at the opposite edge.
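Why these extreme chroma values are undisplayable can be seen by converting them back to RGB with the BT.601 relation (R = Y' + 1.402 Cr, B = Y' + 1.772 Cb): each one drives at least one RGB channel outside the [0, 1] range.

```python
# BT.601 Y'CbCr to RGB (normalized form, as in the relation above).
def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * cr
    b = y + 1.772 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

r, g, b = ycbcr_to_rgb(0.0, 0.5, 0.0)    # "negative-yellowish" blue
assert g < 0                              # green is driven negative

r, g, b = ycbcr_to_rgb(0.0, -0.5, 0.0)   # "negative-bluish" yellow
assert b < 0                              # blue is driven negative
```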
the color-conversion problem
A system that converts between different units can be conceptualized
as a set of meshing gears, as suggested by the Synchromy logo.
A clockwork, for example, uses interlocking gears to convert between ticks, seconds, minutes, and hours,
and, in more elaborate clockworks, even days, weeks, months, and phases of the moon and planets.
If the gears aren't kept in synchrony by interlocking teeth -
if the teeth don't mesh properly or the clockwork uses toothless wheels -
then they will slip and lose accuracy.
In the case of converting between RGB and Y'CbCr color spaces,
Synchromy solves the much more difficult problem of converting between three-dimensional units.
So, conceptually, instead of each gear rotating in a single plane around a single axis,
the gears need to rotate freely in four-dimensional space around three axes simultaneously, or around an arbitrary axis.
It's easy to see why, until the invention of Synchromy, this problem was always assumed to be impossible to solve.
the Synchromy breakthrough
BitJazz's proprietary Synchromy technology is based on a branch of mathematical information theory
company founder Andreas Wittenstein calls discrete deprecision theory,
which was previously used to develop BitJazz's PhotoJazz and SheerVideo products.
Synchromy maintains the theoretically highest possible accuracy
in converting between RGB and Y'CbCr color spaces.
In cases where the two color spaces have the same precision,
Synchromy has measurably lower error than any other method.
Given a Y'CbCr precision two bits greater than the RGB precision,
the error is exactly zero when using Synchromy to convert from RGB to Y'CbCr and back.
For example, in an RGB → Y'CbCr → RGB workflow,
if you start with uncompressed 10-bit RGB[A] material,
convert it to Y'CbCr[A] 4:4:4[:4] data of at least 12 bits per component with Synchromy,
edit it with a high-precision Y'CbCr editor, and convert it back to 10-bit RGB[A] with Synchromy,
any pixels not affected by the editing will be maintained with perfect fidelity,
and the edited pixels will be interpolated with the maximum possible accuracy.
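Synchromy itself is proprietary, so the sketch below can only illustrate the precision claim using a plain round-to-nearest conversion with the BT.601 matrix (an assumption; the text does not specify Synchromy's internals). Even this naive method shows the pattern described above: at equal 10-bit precision the round trip loses codes, while giving the intermediate Y'CbCr representation two extra bits preserves every 10-bit RGB code exactly.

```python
# Measure the worst per-channel code error of an RGB -> Y'CbCr -> RGB
# round trip under plain rounding, at two Y'CbCr precisions.
def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, (b - y) / 1.772, (r - y) / 1.402

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * cr
    b = y + 1.772 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

def round_trip_error(rgb_bits, ycbcr_bits):
    """Max per-channel code error over a coarse grid of RGB codes."""
    rgb_max = (1 << rgb_bits) - 1
    yc_max = (1 << ycbcr_bits) - 1
    worst = 0
    step = 31                          # 31 * 33 = 1023, so the grid hits both ends
    for ri in range(0, rgb_max + 1, step):
        for gi in range(0, rgb_max + 1, step):
            for bi in range(0, rgb_max + 1, step):
                y, cb, cr = rgb_to_ycbcr(ri / rgb_max, gi / rgb_max, bi / rgb_max)
                # Quantize: Y' spans [0, 1]; Cb and Cr are offset to [0, 1].
                yq = round(y * yc_max) / yc_max
                cbq = round((cb + 0.5) * yc_max) / yc_max - 0.5
                crq = round((cr + 0.5) * yc_max) / yc_max - 0.5
                for orig, chan in zip((ri, gi, bi), ycbcr_to_rgb(yq, cbq, crq)):
                    worst = max(worst, abs(int(round(chan * rgb_max)) - orig))
    return worst

assert round_trip_error(10, 10) >= 1   # equal precision: codes get lost
assert round_trip_error(10, 12) == 0   # two extra bits: exact round trip
```

The point of Synchromy, per the text, is that it reaches the theoretical best case in every precision regime, including the equal-precision case where plain rounding demonstrably cannot.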
Similarly, given an RGB component precision one or two bits greater than the Y'CbCr precision,
converting with Synchromy from RGB to Y'CbCr and back yields exactly zero error,
except, of course, that non-displayable colors outside the RGB space are projected to the surface of the RGB cube.
For example, in a Y'CbCr → RGB → Y'CbCr workflow,
if you use Synchromy to convert 10-bit uncompressed Y'CbCr[A] 4:4:4[:4] footage
to RGB[A] pixels of 12 or more bits per component,
edit it with a high-precision RGB compositor, and convert it back to 10-bit Y'CbCr[A] 4:4:4[:4],
untouched pixels will be mathematically identical to the originals,
and the remaining pixels will be interpolated as accurately as theoretically possible.
Note that the distorted RGB cube fits inside the Y'CbCr cube in such a way that most of the possible Y'CbCr values -
slightly more than 3/4 of them, in fact - do not correspond to valid displayable colors.
Such invalid values often occur as a result of filter overshoot, chroma interpolation, and other common editing processes.
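The "slightly more than 3/4" figure can be spot-checked with a Monte-Carlo sketch (assuming the BT.601 relation): sample the Y'CbCr unit cube uniformly and count how many points map outside the RGB unit cube. The exact invalid fraction is one minus the determinant of the RGB-to-Y'CbCr matrix, about 76%.

```python
# Monte-Carlo estimate of the fraction of Y'CbCr values that are not
# displayable RGB colors.
import random

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * cr
    b = y + 1.772 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

random.seed(601)                      # fixed seed for a deterministic run
n, invalid = 100_000, 0
for _ in range(n):
    y = random.random()               # Y' in [0, 1]
    cb = random.random() - 0.5        # Cb in [-1/2, +1/2]
    cr = random.random() - 0.5        # Cr in [-1/2, +1/2]
    if not all(0.0 <= c <= 1.0 for c in ycbcr_to_rgb(y, cb, cr)):
        invalid += 1

assert 0.70 < invalid / n < 0.82      # close to the ~3/4 quoted above
```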
In theory, since the RGB pixel representation does not leave room for subblack or superwhite values,
it is impossible to preserve these out-of-range colors when converting to RGB and back, even for Synchromy.
Synchromy prevents information loss from rounding errors, but not from overflow errors.
Synchromy supports fixed-point RGB[A] and Y'CbCr[A] 4:4:4[:4] pixel formats of any precision,
from 8 to 16 bits per component and beyond,
in all standard RGB and Y'CbCr color spaces.
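One detail any multi-precision Y'CbCr implementation must get right (sketched below, assuming the standard BT.601 studio code levels) is that the 8-bit reference levels simply scale by a power of two at higher bit depths; the headroom and footroom outside these levels is where the subblack and superwhite values mentioned above live.

```python
# BT.601 studio ("video range") code levels at any bit depth: the 8-bit
# levels (Y' 16..235, chroma 16..240 centered on 128) scale by 2^(n-8).
def bt601_levels(bits):
    scale = 1 << (bits - 8)
    return {
        "y_black": 16 * scale, "y_white": 235 * scale,
        "c_min": 16 * scale, "c_zero": 128 * scale, "c_max": 240 * scale,
    }

assert bt601_levels(8)["y_white"] == 235
assert bt601_levels(10)["y_black"] == 64 and bt601_levels(10)["y_white"] == 940
assert bt601_levels(10)["c_zero"] == 512
```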
Synchromy also supports other color spaces, such as HSV, HLS, and L*a*b*.
Because Synchromy operates on one pixel at a time,
it supports any resolution, including SD and HD, NTSC and PAL,
4:3 and 16:9, progressive and interlaced.
Synchromy is built into the latest release of BitJazz's acclaimed real-time nondestructive SheerVideo codecs
for use with QuickTime on Mac and Windows.
In addition, Synchromy will be released as an independent set of cross-platform uncompressed codecs,
first for QuickTime and then for Windows Media (AVI).
The technology is also available for licensing.