History of Display Technology

Part 1: The CRT

Remember the good ol' days of CRT monitors, when upgrading to a 21-inch screen brought on co-worker envy?

The CRT in a Nutshell

In short, the CRT (Cathode Ray Tube) is a vacuum tube: a sealed glass envelope that, like an incandescent light bulb, must be evacuated to operate. A colour CRT contains three electron guns that produce and fire streams of electrons at a phosphor-coated fluorescent screen, where the image is produced (see Technology Tidbits: CRT Technology on page 2 for a more detailed description of how a CRT works).

The First 50 Years

The CRT has a long and rich history. It was Sir William Crookes who, in 1878, pioneered the vacuum tube that became a forerunner of the modern X-ray tube. In 1897, Karl F. Braun developed the concept further in experiments that led to the invention of the cathode ray oscillograph, a tool that in turn helped give rise to the oscilloscope, a signal-measurement instrument, several decades later. In the decades that followed, improvements aimed at wider use of the technology, chiefly increasing the speed and focus of the electron beam, were carried out largely by the Army Signal Corps for eventual use in radar.

After World War I, CRT development accelerated. During the 1920s, AT&T, RCA and Westinghouse were doing original research on television, but it was Kenjiro Takayanagi, who later became a senior executive at JVC of Japan, who in 1926 succeeded in getting the first flickering images to appear on a CRT screen, building the world's first electronic black and white (B&W) TV. In the early 1930s, Allen B. DuMont founded DuMont Laboratories and produced the first commercial cathode ray oscilloscope, and within a few years DuMont and Zenith were producing the first commercial electronic B&W television sets.

[Image: old Nanao CRT monitor]

World War II interrupted the development of television, and the resources of the industrialized world were put toward the production of such devices as radar. During the war, experiments with colour television began, and in 1948 CBS announced the development of a colour system. The National Television System Committee (NTSC) colour standard was adopted in 1953, standardizing how colour television signals were to be transmitted compatibly with the existing B&W system. The colour CRT at the heart of this standard later became the main component of the emerging computer monitor.

The Rise of the Computer

The emergence of the modern colour monitor kept pace with the appearance of the computer, the growing range of applications, and the eventual need for a high-resolution display. As colour TV broadcasting became more prevalent by the mid-1950s, so too came the need for the monitor as a tool for post production, colour editing and image evaluation. Because these early monitors connected directly to the NTSC signal, their image quality was adequate only for the television industry.

[Image: an old IBM mainframe]

Over the 1950s and 1960s, the computer was largely represented by the bulky, expensive mainframe produced by such companies as IBM and Burroughs, and was used only by those who could afford it, namely major universities and large corporations. Beginning in the 1960s, the push to make the computer more affordable led companies such as Digital Equipment and Hewlett-Packard to develop the minicomputer. Several CRT-like display technologies aimed at commercial users emerged at first, such as the vector refresh tube (VRT) and the direct-view storage tube; both were paired with the minicomputer to create terminals, and both provided a high-resolution monochrome solution for CAD/CAM users. Resolution on a VRT monitor, for example, could go as high as 4K x 4K.

The mainframe and the minicomputer also provided a golden opportunity for the first entrepreneurs of the graphics industry, such as Evans & Sutherland, GE, Raster Technologies, Ramtek and Megatek, to introduce graphics controllers that could create high-resolution colour graphics for applications such as flight simulation and CAD/CAM. These were large stand-alone boxes that housed graphics processing hardware, the forerunners of the smaller, palm-sized graphics cards we know today. It was at this point that CRT manufacturers such as Hitachi, Mitsubishi and Conrac began designing high-resolution colour CRTs for commercial use.

[Image: an old HP z100 microcomputer]

The microcomputer (or PC), which began appearing in the late 1970s, became the platform for the development of the graphics card. The first graphics cards were add-ons to the DOS-based microcomputer and were required to run application software used to produce graphics (e.g., VersaCAD, AutoCAD). Resolution and colour depth evolved gradually from 320 x 200 at 4 colours (CGA) to 800 x 600 at 256 colours (Super VGA). Monitor screen sizes grew as part of the same evolution.
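
As a rough, back-of-the-envelope illustration of what those jumps in resolution and colour depth meant for the graphics card, the short Python sketch below estimates the framebuffer memory each mode requires. The mode list and figures are our own illustrative assumptions, not a catalogue of any particular adapter, and they ignore details such as text modes and multiple display pages.

    import math

    # Approximate framebuffer sizes for a few palette-based PC graphics modes.
    # The mode list is illustrative only; real adapters also reserved memory
    # for text modes, multiple display pages and so on.
    modes = {
        "CGA  (320 x 200, 4 colours)":   (320, 200, 4),
        "EGA  (640 x 350, 16 colours)":  (640, 350, 16),
        "VGA  (640 x 480, 16 colours)":  (640, 480, 16),
        "SVGA (800 x 600, 256 colours)": (800, 600, 256),
    }

    for name, (width, height, colours) in modes.items():
        bits_per_pixel = math.ceil(math.log2(colours))  # bits needed to index the palette
        kilobytes = width * height * bits_per_pixel / 8 / 1024
        print(f"{name}: {bits_per_pixel} bpp, roughly {kilobytes:.0f} KB of video memory")

Run as written, the sketch shows the move from CGA to Super VGA multiplying the required video memory roughly thirty-fold, from about 16 KB to nearly 470 KB.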

[Image: Sony Trinitron CRT monitor]

Up to this point, all monitor manufacturers were producing fixed-frequency monitors, meaning that each could function at one set resolution and refresh rate only. The first multi-frequency monitors offered in the late 1980s were small (i.e., 13") and ill-designed, with a high failure rate; compared with the much improved standards of the mid-1990s, that first generation seemed like a reluctant design with no heart. The evolution of the CRT monitor continued on several fronts, driven mostly by Hitachi, Mitsubishi and Sony, all original equipment manufacturers. Hitachi and Mitsubishi applied differing design changes to the shadow mask, while Sony moved in a different direction altogether by creating a new concept, the Trinitron aperture grille, a far superior technology to the shadow mask (see Technology Tidbits: CRT Technology on page 2 for further details).
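
To see why a fixed-frequency design was so limiting, it helps to estimate the horizontal scan rate a given video mode demands. The sketch below is a rough approximation of ours; the 5% blanking overhead and the mode list are assumptions for illustration, not figures taken from any particular monitor's specification.

    # Rough horizontal scan rate a video mode demands. A fixed-frequency
    # monitor is built around one such rate; a multi-frequency (multisync)
    # monitor can lock onto a range of them.
    def horizontal_khz(lines, refresh_hz, blanking_overhead=1.05):
        """Approximate horizontal scan frequency in kHz, assuming ~5% vertical blanking."""
        return lines * refresh_hz * blanking_overhead / 1000

    for lines, refresh in [(480, 60), (600, 75), (768, 75), (1024, 85)]:
        print(f"{lines} visible lines at {refresh} Hz needs roughly "
              f"{horizontal_khz(lines, refresh):.1f} kHz horizontal scan")

A monitor built for only one of these rates simply cannot display the others, which is why multi-frequency designs mattered so much despite the flaws of the first generation.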

From this point on, the core CRT technology remained virtually the same, apart from reductions in the geometric curvature of the tube face and the width of the CRT, and improvements in the electron gun structure. Additionally, manufacturers improved the electronics within the monitor to produce better specifications, including more effective multi-frequency designs.

The CRT's Decline

[Graph: global production of CRTs and LCDs]

In the late 1980s, it was forecast that the CRT would become obsolete by the 1990s, as LCD technology had started arriving on the scene in the 1980s. This was clearly not the case: the CRT monitor continued to reign supreme well into the 2000s. Today, however, demand for CRT screens has fallen so rapidly that they have, for the most part, disappeared from the scene. Hitachi halted production of CRTs at its factories in 2001. In 2005, Sony announced its plan to stop production of CRT displays, as did Mitsubishi at about the same time. The decline came more slowly in the developing world: according to iSuppli, an industry statistics firm, production of CRTs was not surpassed by LCDs until the fourth quarter of 2007, due largely to CRT production at factories in China (see the graph above).

CRTs — despite research to better the technology — always remained relatively bulky and occupied far too much desk space in comparison with newer display technologies such as LCD. Consumers eager to be part of the trend showed more interest in the emerging displays such as LCDs and plasmas. Today, the LCD has taken on a dominant role in all areas where the CRT was once the king.

In our next instalment of this series, we will focus on the Liquid Crystal Display (LCD).

Research and insights written by Megatech, as featured in the MIS White Papers, Oct. 2010.

