Pete,
Thanks for the excellent exposition on setup of the CTC-2. As with all CRT setup procedures, the difficult thing is getting the tracking to hold a constant white point at all brightness levels. This was even more difficult in the early sets because the red phosphor required so much more current than the green or especially the blue.
As you know, most people do not get anywhere near illuminant C when setting up the CT-100, because without a proper comparison or measurement, they just don't realize how hard the red must be driven and how far the blue and green must be toned down. It is definitely worth the effort, because avoiding an excessively blue white means the grays are closer to flesh tone, so the color level does not have to be boosted and variations in the source material are less severe.
Anyway, I'm sure you realized I would chime in on this topic, so here goes:
Illuminant C and D65 are both meant to be approximations of a bright overcast condition, not direct sunlight, which is closer to 5500 K. Illuminant C is actually a physical approximation made with an incandescent bulb shining through a particular bluish chemical solution of a certain density, and it misses daylight by being slightly magenta. D65 is based on a series of measured natural daylight spectra. As such, it is never found exactly in nature on any particular day, and its spectrum is only approximated by physical sources; but it gives a more accurate CALCULATION of the appearance of colored objects in natural light. If there is no illuminant C source nearby for comparison, a TV set to D65 will look essentially identical to one set for illuminant C, as the viewer's eye will adapt completely to the slightly different color.
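For anyone who wants numbers behind the "essentially identical" claim, here is a minimal Python sketch using the standard published CIE chromaticities for the two whites. The just-noticeable-difference figure in the comment is an approximation, not a formal spec:

```python
# Compare Illuminant C and D65 in the roughly perceptually uniform
# CIE 1976 u'v' diagram, using standard published CIE 1931 xy values.

def xy_to_uv(x, y):
    """Convert CIE 1931 xy to CIE 1976 u'v' coordinates."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

ILL_C = (0.3101, 0.3162)  # Illuminant C
D65 = (0.3127, 0.3290)    # Illuminant D65

uc, vc = xy_to_uv(*ILL_C)
ud, vd = xy_to_uv(*D65)
duv = ((ud - uc) ** 2 + (vd - vc) ** 2) ** 0.5

# A just-noticeable difference in u'v' is commonly taken as ~0.004,
# so these whites differ by only a couple of JNDs -- visible side by
# side, but easily absorbed by chromatic adaptation.
print(f"delta u'v' between illuminant C and D65: {duv:.4f}")
```

Side by side the D65 screen looks faintly bluer-greener; in isolation either one reads as "white."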
Philosophically, D65 is sort of the ideal artist's working environment. It is recommended for viewing photos on computer monitors. I personally have a problem with this, as photography has historically used 5000 K for viewing. Color transparency films were designed to be projected with incandescent lamps, so they probably had some blue shading to reduce the orange coloration. What this meant for the actual gray color aimed for on the screen I am not sure, but I doubt it was Ill C.
TV makers quickly found that they could not reliably produce the extreme current ratios between red and the other guns needed to make Ill C without suffering differential blooming of the spot sizes of R, G, and B. Also, as a tube reached end of life, the red gun would run out of current sooner and shorten the useful life of the tube. It was decided to reduce the amount of red drive (although it was still considerably more than green and blue), resulting in the "9300 Kelvin + 27 MPCD" specs that you see on later sets. This was just an obscure way of saying "not enough red." The 9300K was a point on the black body locus, and the + (plus) 27 MPCD meant 27 "minimum perceptible color differences" perpendicular to the black body locus towards green. The combination of these two coordinates was simply more cyan, or, equivalently, less red. This was unfortunate for color stability, as the color amplitude (and therefore the amplitude of variations) was turned up by most users to get flesh tones that were not too much toward cyan.
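The required luminance share of each gun for a given white point follows directly from the phosphor chromaticities. A sketch: using the published 1953 NTSC primaries and illuminant C, it recovers the familiar 0.30/0.59/0.11 luma weights; the 9300K + 27 MPCD coordinates I've plugged in are an approximate, commonly quoted value, not an official figure. (Luminance share is not beam current, of course; the red phosphor's poor efficiency multiplies its current demand well beyond its luminance share.)

```python
import numpy as np

def luminance_shares(primaries, white):
    """Fraction of the white's luminance each gun supplies (sums to 1)."""
    # Columns of P: the XYZ of each primary, normalized so its Y = 1.
    P = np.array([[x / y for x, y in primaries],
                  [1.0, 1.0, 1.0],
                  [(1.0 - x - y) / y for x, y in primaries]])
    xw, yw = white
    W = np.array([xw / yw, 1.0, (1.0 - xw - yw) / yw])
    return np.linalg.solve(P, W)  # [Y_R, Y_G, Y_B]

NTSC_1953 = [(0.67, 0.33), (0.21, 0.71), (0.14, 0.08)]  # R, G, B
ILL_C = (0.3101, 0.3162)
D93_APPROX = (0.283, 0.297)  # ~9300K + 27 MPCD (approximate coordinates)

s_c = luminance_shares(NTSC_1953, ILL_C)
s_93 = luminance_shares(NTSC_1953, D93_APPROX)
print("Ill C shares (R,G,B):", np.round(s_c, 3))   # ~[0.30, 0.59, 0.11]
print("9300K+27 shares:     ", np.round(s_93, 3))  # red falls, blue rises
```

The red share drops by roughly a tenth and the blue rises by about a fifth when the white moves to 9300K + 27 MPCD, which is exactly the "not enough red" trade described above.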
Further thoughts:
1) Yes, NTSC green was P1 (Willemite), the same phosphor used in oscilloscopes. It is much less yellow than the later sulfide green, but not as bright. Except: the sulfide green becomes non-linear at high current densities, so in early projection color sets, makers went back to P1 for a while. This messed up the color in some of these early projos, as the demodulator chips had been designed to make an approximate compensation for the color of the sulfide green.
2) You are right about the NTSC blue being more cyan. RCA couldn't make a decent sulfide blue because of contamination by copper, which turned it into a sulfide green. Once processes were developed to prevent contamination, the sulfide blue could be used. I believe the engineers intended to have the more violet blue at first because it gives more vivid purples and magentas. They used such a blue in the triniscope sets, where each phosphor was confined to its own tube, and color filters could also be used to trim the color if necessary. It also appears that the FCC spec for the blue x, y coordinates may have contained a typo of interchanged digits that supported the more-cyan color, but we may never know for sure.
3) The modern rare-earth red is not much different from the NTSC red; much of the difference in later NTSC sets came from the color matrixing, as you have heard. Neither the original nor the recent red can reproduce traffic-signal red, although both can cover brake-light red. The cadmium sulfide red used in the all-sulfide tube of the early '60s was more orangey, and turned even more orange at high beam currents. The differences you see in flesh tones on NTSC today are not due to the red phosphor but to the much larger difference in green phosphors and the approximate matrix corrections made for it. The HDTV system, which is designed for the modern phosphors and has the colors properly matrixed back at the camera where the signals are linear, can produce reds equivalent to those of early NTSC, but cannot do the Kelly greens and deep cyans of NTSC because its green is more yellow.
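To put rough numbers on how much of the gamut difference lives in the green: a sketch comparing the published 1953 NTSC and Rec. 709 (HDTV) primary chromaticities. The xy diagram is not perceptually uniform, so treat the area ratio as a crude indication only:

```python
# Compare the xy gamut triangles of the 1953 NTSC primaries and the
# Rec. 709 (HDTV) primaries. The reds are nearly identical; the greens
# are far apart, which is where most of the difference comes from.

def triangle_area(p):
    (xr, yr), (xg, yg), (xb, yb) = p
    # Shoelace formula for the area of the gamut triangle in xy.
    return 0.5 * abs(xr * (yg - yb) + xg * (yb - yr) + xb * (yr - yg))

NTSC_1953 = [(0.67, 0.33), (0.21, 0.71), (0.14, 0.08)]  # R, G, B
REC_709 = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]

ratio = triangle_area(NTSC_1953) / triangle_area(REC_709)
print(f"NTSC/709 xy-area ratio: {ratio:.2f}")
```

The NTSC triangle covers roughly 40% more xy area, and comparing the green vertices, (0.21, 0.71) versus (0.30, 0.60), shows the 709 green pulled well toward yellow, as described above.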
Net result, as you note, is that flesh tones from a proper NTSC source viewed on a CT-100 are more accurate; but flesh tones on a calibrated HDTV displaying a proper HD source should be equally accurate. The cases that get messed up are all the later NTSC sets with non-NTSC phosphors (or HDTV sets that are left in the sales-floor "searchlight" mode). From what I've seen, current makers still tend to set the white point bluer than D65. Custom installers make their living recalibrating sets to HDTV studio colorimetry.