This article describes the history and evolution of oscilloscope technology.
The earliest method of creating an image of a waveform was a laborious manual process: measuring the voltage or current of a spinning rotor at specific points around the rotor's axis and noting each reading taken with a galvanometer. By slowly advancing around the rotor, a general standing wave could be drawn on graph paper by recording the degree of rotation and the meter reading at each position.
This process was first partially automated by Jules François Joubert with his step-by-step method of waveform measurement. A special single-contact commutator was mounted on the shaft of a spinning rotor. The contact point could be moved around the rotor according to a precise degree scale, and the output appeared on a galvanometer, to be hand-graphed by the technician. This process could produce only a very rough approximation of the waveform, since it was assembled over a period of several thousand wave cycles. Still, it was the first step in the science of waveform measurement.
The first automated recorders used a galvanometer to move a stylus across a continuously moving scroll or drum of paper, tracing the detected wave pattern as the paper advanced. Because the waveforms were of relatively high frequency compared with the slow response of the mechanical components, the waveform was not drawn directly; instead, small pieces of many different wave cycles were built up over a period of time to form an averaged shape.
The Hospitalier ondograph used this method of waveform measurement. It automatically charged a capacitor from every 100th wave and discharged the stored energy through a recording galvanometer, with each successive charge drawn from a point slightly further along the waveform.
To allow direct measurement of waveforms, the recording device needed a very low-mass sensing element that could move quickly enough to follow the actual waves being measured. This was achieved with William Duddell's development of the moving-coil oscillograph, also referred to as a mirror galvanometer, which reduced the sensing element to a small mirror that could move at high speed to track the waveform.
To make a measurement, a photographic plate would be slid past a window where the light beam exited, or a continuous roll of motion-picture film would be drawn across the aperture to record the waveform over time. Although the measurements were much more accurate than those of the stylus-and-paper recorders, there was still room for improvement, because the exposed film had to be developed before it could be examined.
In the 1920s, a tiny tilting mirror attached to a diaphragm at the apex of a horn gave good response up to a few kHz, perhaps even 10 kHz. A rotating mirror polygon provided a synchronized time base, and a collimated beam of light from an arc lamp projected the waveform onto the laboratory wall or a screen.
Even earlier, audio applied to a diaphragm modulating the gas supply to a flame made the flame's height vary, and a spinning mirror polygon gave an early glimpse of waveforms.
Mirror galvanometers writing on moving strips of UV-sensitive paper provided multi-channel chart recordings into the mid-20th century. Their frequency response extended at least into the low audio range.
Cathode ray tubes were developed in the late 19th century, primarily to demonstrate and explore the physics of electrons. Karl Ferdinand Braun invented the CRT oscilloscope as a physics curiosity in 1897 by applying an oscillating signal to electrically charged deflection plates in a phosphor-coated CRT. Braun tubes were laboratory apparatus, using a cold-cathode emitter and very high voltages. Only vertical deflection was applied to the internal plates; the tube's face was observed through a rotating mirror to provide a horizontal time base. In 1899, Jonathan Zenneck equipped the cathode ray tube with beam-forming plates and used a magnetic field to sweep the trace.
Early cathode ray tubes were applied experimentally to laboratory measurements as early as 1919, but suffered from poor stability of the vacuum and of the cathode emitter. The application of a thermionic emitter allowed the operating voltage to drop to a few hundred volts. Western Electric introduced a commercial tube of this type, which relied on a small amount of gas within the tube to assist in focusing the electron beam.
In 1931, V. K. Zworykin described a permanently sealed, high-vacuum cathode-ray tube with a thermionic emitter. This stable and reproducible component allowed General Radio to manufacture an oscilloscope that was usable outside a laboratory environment.
The first dual-beam oscilloscope was developed in the late 1930s by the British company A.C. Cossor. The CRT was not a true double-beam type but used a split beam, made by placing a third plate between the vertical deflection plates. It was used during the Second World War for the development and servicing of radar equipment. Although very useful for examining the performance of pulse circuits, it was not calibrated, so it could not be used as a measuring device. It was, however, useful in producing the response curves of IF circuits and was consequently a great help in their accurate alignment.
Allen B. DuMont Labs made moving-film cameras, in which continuous film motion provided the time base. The horizontal deflection was probably disabled, though a very slow sweep would have spread phosphor wear. CRTs with P11 phosphor were either standard or available.
Long-persistence CRTs, sometimes used in oscilloscopes to display slow-changing signals or single-shot events, use a double-layer phosphor such as P7. The inner layer fluoresces bright blue under the electron beam, and its light excites a phosphorescent outer layer deposited directly on the inside of the envelope. The outer layer stores the light and releases it as a yellowish glow whose brightness decays over roughly ten seconds. This phosphor type was also used in analog PPI radar CRT displays, familiar as a graphic element in some TV weather-forecast scenes.
The technology for horizontal deflection, the part of the oscilloscope that generates the horizontal time axis, has changed over time.
Early oscilloscopes used a synchronized sawtooth waveform generator for the time axis. The sawtooth was produced by charging a capacitor with a roughly constant current, yielding a steadily rising voltage. The rising voltage was fed to the horizontal deflection plates to create the sweep. It was also fed to a comparator: when the capacitor voltage reached a set level, the capacitor was discharged, the trace snapped back to the left, and the capacitor began another sweep. The operator adjusted the charging current so that the sawtooth generator's period was slightly longer than a multiple of the vertical-axis signal's period. For example, to view a 1 kHz sine wave, the operator might set the sweep period to a little more than 5 ms. With no input signal present, the sweep at that setting would free-run.
When an input signal was present, the resulting display would not be stable, because the free-running sweep frequency was not an exact sub-multiple of the input signal's frequency. To fix that, the sweep generator was synchronized by adding a scaled version of the input signal to the sweep generator's comparator. The added signal caused the comparator to trip slightly earlier, locking the sweep to the input signal. The operator could set the synchronization level, and in some designs could also choose its polarity. The sweep generator would also blank the beam during flyback.
The resulting horizontal deflection speed was uncalibrated, because the sweep rate was adjusted by changing the slope of the sawtooth. The time per division on the display depended on the free-running sweep frequency and on the horizontal gain control.
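The behaviour of a synchronized sweep can be sketched in a short simulation, assuming idealized components: a capacitor charged at a constant rate, a comparator that discharges it, and a scaled copy of the input signal added at the comparator so that it trips slightly early. The function name and parameters here are illustrative, not from any real instrument.

```python
import math

def synchronized_sweep(signal, dt, charge_rate, threshold, sync_gain):
    """Simulate a synchronized sawtooth sweep generator.

    signal      : vertical-input samples
    dt          : time step per sample (s)
    charge_rate : capacitor charging slope (V/s); sets the free-run period
    threshold   : comparator trip level (V)
    sync_gain   : scale factor for the input added at the comparator
    Returns the sawtooth (horizontal deflection) samples.
    """
    ramp, sweep = 0.0, []
    for v in signal:
        ramp += charge_rate * dt          # constant-current charging
        # The scaled input makes the comparator trip a little earlier
        # on each cycle, locking the sweep to the signal.
        if ramp + sync_gain * v >= threshold:
            ramp = 0.0                    # discharge: flyback to the left
        sweep.append(ramp)
    return sweep

# A 1 kHz sine viewed with a free-run period a little over 5 ms
# (threshold / charge_rate = 1.0 / 190 ≈ 5.26 ms):
dt = 1e-5
sig = [math.sin(2 * math.pi * 1000 * n * dt) for n in range(2000)]
trace = synchronized_sweep(sig, dt, charge_rate=190.0, threshold=1.0,
                           sync_gain=0.02)
```

With `sync_gain` set to zero the sweep free-runs and the displayed waveform drifts; a small nonzero value is enough to lock it.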
A synchronized-sweep oscilloscope could not display a non-periodic signal, because it could not synchronize the sweep generator to such a signal. The horizontal circuits were often AC-coupled as well.
During World War II, some oscilloscopes used for radar development had so-called driven sweeps. These sweep circuits remained dormant, with the CRT beam cut off, until a drive pulse from an external device unblanked the CRT and started a constant-speed horizontal sweep; the calibrated sweep speed allowed the measurement of time intervals. When the sweep was finished, the sweep circuit blanked the CRT, then settled and waited for the next drive pulse. The DuMont 248, a commercially produced oscilloscope of 1945, had this feature.
Oscilloscopes became a much more useful tool in 1946, when Howard Vollum and Jack Murdock introduced the Tektronix Model 511, a triggered-sweep oscilloscope. Vollum had first seen the technique in use in Germany. A triggered sweep includes a circuit that derives the driven sweep's drive pulse from the input signal itself.
Triggering allows a stationary display of a repeating waveform, since multiple repetitions of the waveform are drawn over exactly the same trace on the phosphor screen. A triggered sweep also keeps the sweep speed calibrated, which makes it possible to measure waveform characteristics such as frequency, phase, and rise time that would otherwise be impractical. Furthermore, because triggering can occur at irregular intervals, the input signal need not be periodic.
Triggered-sweep oscilloscopes compare the vertical deflection signal with an adjustable threshold voltage, referred to as the trigger level. The trigger circuits also detect the direction of the vertical signal's slope at the crossing, that is, whether the signal is moving in the positive or negative direction as it passes the threshold. This is called the trigger polarity. When the vertical signal crosses the trigger level in the selected direction, the trigger circuit unblanks the CRT and starts an accurate, linear sweep; after the sweep completes, the next one begins when the signal crosses the threshold again.
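The level-and-slope comparison can be expressed as a small function over digitized samples. This is a minimal sketch; the function name and parameters are illustrative.

```python
def find_triggers(samples, level, rising=True):
    """Return the sample indices where the signal crosses `level`
    in the selected direction (the trigger polarity)."""
    hits = []
    for i in range(1, len(samples)):
        prev, cur = samples[i - 1], samples[i]
        if rising and prev < level <= cur:        # positive-going crossing
            hits.append(i)
        elif not rising and prev > level >= cur:  # negative-going crossing
            hits.append(i)
    return hits

# A triangle that crosses 0.5 once on the way up and once on the way down:
wave = [0.0, 0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25, 0.0]
find_triggers(wave, 0.5)                  # rising crossing:  [2]
find_triggers(wave, 0.5, rising=False)    # falling crossing: [6]
```

In a real instrument each accepted crossing would start one sweep; the next crossing is ignored until the sweep has completed.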
Variations on triggered-sweep oscilloscopes included models whose CRTs used long-persistence phosphors such as type P7. These were used in applications where the horizontal sweep speed was very slow, or where there was a long delay between sweeps, so that a persistent image remained on the screen. Oscilloscopes without a triggered sweep could also be retrofitted with one: a solid-state triggered-sweep circuit was developed by Harry Garland and Roger Melen in 1971.
As oscilloscopes have become more powerful over time, advanced triggering options have allowed the capture and display of more complex waveforms. For example, trigger holdoff, a feature of most modern oscilloscopes, defines a period after each trigger during which the oscilloscope will not trigger again. This makes it easier to obtain a stable view of a waveform with multiple edges that would otherwise trigger the sweep at different points.
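The effect of holdoff can be modeled by filtering a list of candidate trigger times, assuming an idealized circuit that simply ignores any crossing closer than the holdoff interval to the previously accepted one. The names here are illustrative.

```python
def apply_holdoff(trigger_times, holdoff):
    """Keep only triggers separated by at least `holdoff` seconds from
    the previously accepted one; the rest are suppressed, as a holdoff
    circuit would ignore them."""
    accepted, last = [], None
    for t in trigger_times:
        if last is None or t - last >= holdoff:
            accepted.append(t)
            last = t
    return accepted

# A burst waveform produces a crossing on every edge; a holdoff longer
# than the burst keeps only the first edge of each burst, so the display
# always starts the sweep at the same point:
edges = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 10.0, 10.1]
apply_holdoff(edges, holdoff=1.0)   # → [0.0, 5.0, 10.0]
```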
Vollum and Murdock went on to found Tektronix, the first manufacturer of calibrated oscilloscopes. Later Tektronix developments included multi-trace oscilloscopes for comparing signals, achieved either by time-division multiplexing of a single beam or by multiple electron guns in the tube. In 1963, Tektronix introduced the Direct View Bistable Storage Tube, which allowed the observation of single-pulse waveforms rather than only recurring waveforms. Using microchannel plates, a form of electron multiplier inside the CRT behind the faceplate, the most advanced analog oscilloscopes could display a visible trace of a single-shot event even at very fast sweep speeds. These scopes reached bandwidths of 1 GHz.
In Tektronix's tube-type scopes, the vertical amplifier's delay line was a long frame, L-shaped for space reasons, carrying several dozen discrete inductors and a corresponding number of low-capacitance adjustable cylindrical capacitors. These scopes had plug-in vertical input channels. To adjust the delay-line capacitors, a high-pressure gas-filled mercury-wetted reed switch generated fast-rise pulses that were fed directly into the later stages of the vertical amplifier. A dip or bump in the displayed step corresponded to one capacitor's local portion of the line, and touching a capacitor changed that part of the waveform; adjusting the capacitor made its bump disappear. Eventually, a flat-topped step resulted.
The first wideband tube-type scopes used tubes designed for radio transmitters, but these consumed a great deal of power, and a few picofarads of capacitance to ground limited their bandwidth. A better design, called a distributed amplifier, used multiple tubes whose inputs were connected along a tapped LC delay line; their outputs were likewise combined along another tapped delay line, whose output fed the deflection plates.
Nicolet Test Instruments of Madison, Wisconsin, invented the first digital storage oscilloscope. It used a low-speed ADC and was applied primarily to vibration and medical analysis. Walter LeCroy, after producing high-speed digitizers for the CERN research center in Switzerland, designed the first high-speed DSO. LeCroy remains one of the three largest manufacturers of oscilloscopes in the world.
Starting in the 1980s, digital oscilloscopes became prevalent. Digital storage oscilloscopes use fast analog-to-digital converters and memory chips to record and display a digital representation of a waveform, offering significantly greater flexibility for triggering, analysis, and display than a classic analog oscilloscope. Unlike its analog predecessor, the digital storage oscilloscope can show pre-trigger events, opening another dimension to the recording of rare or intermittent events and the troubleshooting of electronic glitches. As of 2006, most new oscilloscopes sold are digital.
Digital oscilloscopes depend on the effective use of the installed memory and trigger capabilities: with insufficient memory, the user will miss the events they want to examine; if the scope has ample memory but does not trigger as desired, the user will have difficulty capturing the event.
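Pre-trigger capture, the capability that distinguishes a DSO from its analog predecessor, can be sketched with a circular buffer: the digitizer writes continuously, and when the trigger fires, the samples already in the buffer become the pre-trigger portion of the record. This is an illustrative model under simplified assumptions, not any vendor's implementation.

```python
from collections import deque

def capture(stream, trigger_index, pre, post):
    """Continuously buffer the last `pre` samples; when the trigger
    fires, keep them and acquire `post` further samples to complete
    the record."""
    history = deque(maxlen=pre)       # circular pre-trigger buffer
    record = None
    for i, sample in enumerate(stream):
        if record is None:
            history.append(sample)    # overwrite oldest when full
            if i == trigger_index:    # trigger fires on this sample
                record = list(history)
        else:
            record.append(sample)
            if len(record) == pre + post:
                break                 # acquisition complete
    return record

samples = list(range(100))            # stand-in for the digitized input
rec = capture(samples, trigger_index=50, pre=8, post=4)
# rec spans samples 43..54: eight points up to and including the
# trigger, plus four post-trigger points.
```

An analog scope can only start drawing at the trigger; here, memory depth (`pre + post`) directly sets how much of the event, before and after the trigger, can be held.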
Due to the increasing prevalence of personal computers, PC-based oscilloscopes have become more common. Typically, the signal is captured by external hardware and transmitted to the computer, where it is processed and displayed. Manufacturers include Pico Technology, Hantek, and Analog Arts.