Saturday, February 18, 2017

History of Computing I: The Colossi

The earliest notion of a "computer" that most people had in the nineteenth and twentieth centuries had one key feature: enormous size. Babbage's 1837 "Analytical Engine," widely regarded as the earliest ancestor of the computer, would -- had it ever been completed -- have filled a warehouse-sized room and weighed nearly 30,000 pounds. Babbage's designs used interlocking gears with various ratios to perform calculations, and his system contemplated a punch-card I/O unit, a calculating unit known as the "Mill" (a sort of CPU), and a storage unit he called the "Store." In his search for backers, he enlisted Lord Byron's daughter Ada Lovelace, who translated Luigi Menabrea's French-language account of the Engine's operations into English, adding her own erudite notes, which ran longer than the original article itself; in 1983, a new computer language designed for the US Department of Defense was christened "Ada" in her honor. The Analytical Engine itself was never built, but two working models of Babbage's earlier Difference Engine No. 2 have been completed in recent years; you can see one of them in action here.
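For the programmers in the class, here is a toy sketch -- my own illustration, not Babbage's design -- just to make the "Mill"/"Store" vocabulary concrete. The names AnalyticalEngine and mill are invented for the example; the real Engine worked on 50-digit decimal numbers held on columns of gear wheels, and was to be directed by punched cards:

```python
# A toy model of the Engine's division of labor (illustrative only).
class AnalyticalEngine:
    def __init__(self, columns: int = 8):
        self.store = [0] * columns   # the "Store": numbered columns holding values

    def mill(self, op: str, a: int, b: int, dest: int) -> None:
        """The "Mill": fetch two Store columns, operate, write the result back."""
        x, y = self.store[a], self.store[b]
        self.store[dest] = {"add": x + y, "sub": x - y, "mul": x * y}[op]

engine = AnalyticalEngine()
engine.store[0], engine.store[1] = 6, 7
engine.mill("mul", 0, 1, dest=2)   # one "operation card," in spirit
print(engine.store[2])             # -> 42
```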

Few significant advances in computing were made until the late 1930's and early 1940's, when military needs -- calculating target data and, most importantly, breaking secret codes such as Germany's ENIGMA -- provided both the impetus and the funding. In the UK, researchers at Bletchley Park, among them the young mathematician Alan Turing, constructed machines they named "bombes," which used electrical relays and motors to run through hundreds of thousands of possible combinations of the wheels and wires of an Enigma machine. Later, they constructed a far more advanced, fully electronic machine, aptly named "Colossus" -- used not against Enigma, but to break the still more complex Lorenz cipher employed by the German High Command.

The first fully electronic, general-purpose digital machine along these lines was ENIAC (short for Electronic Numerical Integrator And Computer), which used vacuum tubes -- more than 17,000 of them! -- as relays and switches. The machine had to be literally re-wired for each different kind of operation; the task was entrusted to a group of young women who, even though many of them had college degrees in mathematics and engineering, were regarded at first as little more than glorified switchboard operators. All of the units together weighed more than 60,000 pounds and consumed 150 kilowatts of power -- all to perform roughly 5,000 calculations per second. While this was many times faster than any earlier machine, it's equivalent to a CPU speed of 5 kHz -- some six hundred thousand times slower, clock for clock, than an average desktop computer of today.
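If you want to check that arithmetic yourself, here is the back-of-envelope version; the 3 GHz figure for a modern desktop is an assumption for illustration, not a measurement:

```python
# Back-of-envelope speed comparison; the "modern desktop" clock is assumed.
eniac_ops_per_second = 5_000             # roughly 5,000 calculations per second
modern_clock_hz = 3_000_000_000          # an assumed 3 GHz desktop CPU

ratio = modern_clock_hz / eniac_ops_per_second
print(f"ENIAC was about {ratio:,.0f} times slower, clock for clock")  # ~600,000
```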

The key invention that began the change from room- and building-sized machines to something that could actually fit in an office or a home was, of course, the transistor, developed at Bell Labs in 1947. The basic idea was to use a semiconductor sandwiched between more conductive materials; such a device, like a radio tube, could serve either as a signal amplifier or as a switch. There were several key advantages: transistors produced less heat, were cheaper to manufacture, and -- even in their early state -- were much smaller. Each of the women of ENIAC shown in the photo above is holding a unit with the same storage capacity; the first two decreases in size are due to smaller, specially made tubes, but the last is due to transistors. A typical smartphone today has nearly a million times the number of transistors in this smallest unit.

With the war over, business demands drove the computer market. The first commercial machine aimed at this market was the UNIVAC, introduced in the early 1950's. For around $750,000, you got a CPU speed of 1.9 kHz, about 1.5 Kb of memory, and tape drives, each the size of a small refrigerator, which held about 1.5 Kb per tape. A decade later, IBM introduced its 1401 system; with the top model, one could now have 16 Kb of memory and perform almost 23,000 calculations per second -- 23 kHz. IBM did not sell the 1401 outright, but you could lease one for around $2,500 a month. Home computing on a practical scale was still far in the future; although SIMON and other kit computers were available throughout this period for home hobbyists, their tiny capacity -- a mere 8 binary switches' worth of storage -- made them useless for all but the most limited tasks.

Friday, February 10, 2017

Ghosts in the Machine: Early Television from 1928

The image to the left is a single frame from the earliest known television recording of a human face, made by the inventor John Logie Baird. The subject, a Mr. Wally Fowlkes, was a young lab assistant undistinguished save by his willingness to sit for lengthy periods under the bright, hot lights required to make television recordings. And, amazingly, these recordings were made almost entirely by mechanical means -- a giant disc with glass lenses was linked directly to a Columbia Records turntable equipped with a cutting stylus -- and predate any electronic images of humans by several years! They were preserved on discs that look much like audio recordings, and the frequency of the image data is so low that, if played through speakers, it produces a sound in the audible range. Indeed, Baird claimed that he could distinguish, just by listening to them, a recording of a face from, say, a recording of a pair of scissors or a soccer ball. Baird called his process Phonovision, and although he abandoned it as offering recordings too brief, and posing too many technical obstacles, to be worth pursuing, it was nevertheless the first system of recorded television in history.
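A quick back-of-envelope calculation shows why a 30-line picture fits comfortably within the audio band; the horizontal-detail factor below is my own illustrative assumption:

```python
# Rough bandwidth estimate for Baird's 30-line, 12.5 frames/s system.
lines_per_frame = 30
frames_per_second = 12.5
line_rate = lines_per_frame * frames_per_second   # 375 lines per second

# Assume we resolve roughly as much detail along each line as across the
# 30 lines; that needs on the order of 30/2 = 15 cycles per line.
video_bandwidth_hz = line_rate * (lines_per_frame / 2)

print(f"line rate ≈ {line_rate:.0f} Hz")                       # ≈ 375 Hz
print(f"video bandwidth ≈ {video_bandwidth_hz/1000:.1f} kHz")  # ≈ 5.6 kHz -- audible
```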

These recordings were little-known until a few years ago, when recording engineer Donald McLean collected several of them, and transferred their analog signal into digital form. Once this was done, he was able to correct for all kinds of problems that plagued Baird's engineers -- mechanical resonance ("rumble"), pops and scratches on the disc, speed irregularities, and problems with frame registration. The earliest recordings are still quite primitive, but one can at least recognize the faces.
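To give a flavor of what such digital clean-up involves, here is a minimal sketch of two of those steps -- rumble removal and de-clicking -- assuming the disc has already been digitized into an array of samples. The filter order, corner frequency, and click threshold are my own illustrative guesses; Mr. McLean's actual methods are far more sophisticated:

```python
# Hypothetical clean-up sketch for a digitized disc signal.
import numpy as np
from scipy import signal

def restore_disc(samples: np.ndarray, rate: int = 44100) -> np.ndarray:
    """Remove turntable rumble and patch clicks in a digitized disc signal."""
    # 1. High-pass filter to strip low-frequency mechanical resonance
    #    ("rumble"); the 30 Hz corner is an assumption.
    b, a = signal.butter(4, 30 / (rate / 2), btype="highpass")
    cleaned = signal.filtfilt(b, a, samples)

    # 2. De-click: flag samples whose jump from the previous sample is
    #    implausibly large, then patch them by linear interpolation.
    jumps = np.abs(np.diff(cleaned, prepend=cleaned[0]))
    clicks = jumps > 8 * np.median(jumps) + 1e-12
    if clicks.any():
        good = np.flatnonzero(~clicks)
        cleaned[clicks] = np.interp(np.flatnonzero(clicks), good, cleaned[good])
    return cleaned
```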

Even more remarkably, in addition to these laboratory discs, there exist home recordings, made using "Silvatone" aluminum discs (one of these was referenced recently in The King's Speech). Silvatone discs used a heavy, weighted cutting stylus, and could record any sort of signal, whether the human voice or a radio broadcast. And, due to the relatively low frequency of the signal, they could be used to record television broadcasts as well. During the brief period from the late 1920's through the early 1930's, when Baird was able to send out television signals with the BBC's co-operation, a number of amateur recordings were made; these, too, have been restored by Mr. McLean. There are about a half-dozen different snippets: dancing girls (of course!), a marionette show, and a singer by the name of Betty Bolton. McLean actually located Miss Bolton, by then 92 years old, and she was able to identify herself as the subject of the recording!

During this era -- in 1930 -- the BBC broadcast the very first television drama, an adaptation of Pirandello's play "The Man with a Flower in his Mouth." Although this does not survive, there is a re-enacted version, using the exact same script, the original music and title cards, and an identical 30-line Baird camera system -- you can watch it here, along with comments on the original broadcast and the recreation.

Mr. McLean has kindly permitted me to show his restored original Baird recordings to you -- but in class only -- as he is concerned to protect his rights in the restored versions. So look for some haunting images at Wednesday's class!

SIDEBAR: Here's a chart I've prepared showing the relative frequency and bandwidth of television signals, from the days of the Baird discs to HDTV.

ADDITIONAL LINKS: The excellent Television History site, a film of the 1936 Radiolympia demonstration broadcast as well as the High-def opening ceremony later that year. Both feature versions of the commissioned theme song, with its curious lyrics:
A mighty maze, of mystic, magic rays
Is all about us in the blue
And in sight and sound they trace
Living pictures out of space
To bring this enchantment to you ...
Here also you can see a modern 32-line mechanical TV in action; a 1938 Nazi TV station ident (they named the station after Paul Nipkow, inventor of the Nipkow disc, so as to claim TV as an "Aryan" invention); and lastly, a TV advert for DuMont TV featuring Wally Cox, later a "Hollywood Squares" regular and the voice of Underdog.

Tuesday, February 7, 2017

3D Movies: Always Just Over the Horizon

From its first appearance in 1922 to the current wave of films today, 3D has always been hailed as a great technical advance that would bring the cinema closer to its future as an all-encompassing form of entertainment. This future, alas, has always remained just over the horizon, and the reason is plain to see: 3D has always required special, add-on technologies that make films more expensive to produce, project, and view. Those added costs have led to 3D's being treated as a premium entertainment, which in turn has kept it from becoming more widely used. Doubtless the current wave of 3D will fade, but in the meantime, it might be educational to take a look at Teleview, the very first 3D system for the cinema, as nearly all of the technological elements -- and all of the hurdles -- were there at the start, nearly ninety years ago.

Basically, there have always been two methods of achieving the effect of 3D. One, as with Kinemacolor, is an active method using alternating frames of the film for the left-eye and right-eye views; such systems then require either a polarizing filter (with the projected images also alternating in polarity) or a synchronized, electrically driven shutter for every viewer (this was the method of Teleview, as seen in the diagram of the viewer above). Oddly, this is not only the earliest but also the most recent system: 3D television likewise uses alternating frames, along with special electronic glasses designed so that each eye sees only the frames made from "its" perspective (at $50 a pair, they're hardly cheap).
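In code, the frame-sequential idea is almost trivially simple -- which may be why it keeps coming back. Here is a minimal sketch; the 120 Hz refresh rate is a modern illustrative figure (Teleview's mechanical shutters ran far slower):

```python
# Frame-sequential ("active") 3D: the screen alternates left- and right-eye
# frames while each viewer's shutter passes only the matching eye.
DISPLAY_HZ = 120                 # assumed modern refresh rate
FRAME_SECONDS = 1.0 / DISPLAY_HZ

def frame_schedule(n_frames):
    """Yield (time_s, eye_shown, left_shutter_open, right_shutter_open)."""
    for i in range(n_frames):
        eye = "L" if i % 2 == 0 else "R"
        yield i * FRAME_SECONDS, eye, eye == "L", eye == "R"

for t, eye, left_open, right_open in frame_schedule(4):
    print(f"t={t:.4f}s  show {eye} frame; shutters L={left_open} R={right_open}")
```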

The other method, the passive one, is to project both the left-eye and right-eye perspectives simultaneously, and use either red/blue or polarizing eyeglasses so that the overlapping images are "sorted out" by each eye. This has the advantage of cheap, disposable means of reception, but the disadvantage that the image on the screen appears as a doubled blur to anyone without the eyewear. While we often associate this system and its red/blue glasses with the earlier heyday of 3D in the 1950's, polarizing glasses were in fact far more commonly used then, primarily because such films did not have to be printed on colored stock, or use color at all.
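The red/blue ("anaglyph") variant is simple enough to sketch in a few lines; this toy version assumes the two views arrive as grayscale numpy arrays of the same shape:

```python
# Passive anaglyph 3D: the left-eye view is printed in red and the
# right-eye view in cyan on a single frame, and the colored spectacles
# sort the two images back out to the proper eyes.
import numpy as np

def anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine two grayscale views into one red/cyan frame."""
    frame = np.zeros(left.shape + (3,), dtype=left.dtype)
    frame[..., 0] = left     # red channel: passes through the red gel only
    frame[..., 1] = right    # green and blue together make cyan,
    frame[..., 2] = right    # which passes through the cyan gel only
    return frame
```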

Today, converting a modern multiplex cinema to 3D costs about $300,000 a screen -- which, at some larger houses, would mean several million dollars. The practice has therefore been to convert only a few screens, which means that any film released in 3D will play on fewer screens and, even with a premium, will make less for both the studios and the exhibitors. These dwindling economic returns, especially in the current recession, have caused some studios, such as Warner Brothers, to pull back from earlier commitments to making films, such as the last Harry Potter features, in 3D. The jury is still out on 3D TV, and my bet is that, before too long, we will once again associate 3D, that magnificent technology of the future, with the past.

Saturday, February 4, 2017

Later Developments in Cinema

The history of the development of cinema after the early portion of the silent era is largely -- though not entirely -- a question of the gradual progress toward both sound and color. Each of these, as we've already seen, started much earlier than generally imagined; sound began with Dickson's "Experimental Sound Film" of 1894, and hand-painted color had already reached a high-water mark with Georges Méliès's 1900 version of Joan of Arc. With sound, the great problem was synchronization; there were all kinds of schemes for keeping sound -- as a phonograph record, an optical code, or some other pre-recorded substrate -- in time with the image. When it came to color, hand-painting -- even with stencils, and armies of (mostly female) colorists -- remained a premium mode without a premium payback. The main use of color in commercial film, in fact, was tinting -- a process in which certain segments of the film were run through chemical dye baths and then edited together. An emotional scene might be bathed in red, while another encounter would be shown in blue or purple. The advantage of tinting was that all the varied colors could be achieved in post-production, at the director's discretion. Scenes such as the "mellow yellow" frame from an unknown film of this era were common indeed. In some cases, tinted prints survive and have been restored; in others, the indications for tinting have been recreated in restoration.

At the same time, efforts progressed toward a technology that would bring about the appearance (at least) of full color. The pioneer in this field was Charles Urban, an American expat in England who had already achieved success with his black-and-white films in the era of the "Cinema of Attractions." Urban realized that persistence of vision, the same principle that enabled the illusion of motion, could enable an illusion of color as well; this was the basis of his "Kinemacolor" system. Black-and-white film was shot through a special camera with a spinning filter wheel, which exposed alternate frames through red and green filters. After developing, the film was projected back through the same alternating filters, so that the "red" frames were shown in red light and the "green" frames in green; the result was something very close to the feeling of full color (though in fact the process missed part of the spectrum, with dark blue being very imperfectly reproduced). Urban's process also had the huge technical advantage that, although special cameras and projectors were needed, the film itself was just ordinary black-and-white stock. Urban promoted his system through ambitious, epic-sized films shown in specially built, luxurious cinemas. Unfortunately for Urban, he was sued by cinema pioneer William Friese-Greene, who (falsely) claimed he had had the idea for this kind of color alternation first. As has happened with modern patent lawsuits, the British judges had no grasp of the technology on which they were ruling, confusing concept with practical art, and conflating Friese-Greene's scheme of staining alternate frames (which produced only a muddy mess) with Urban's far superior pictures. They ruled in favor of Friese-Greene, and Urban was eventually forced into bankruptcy. Friese-Greene was never able to bring his own system to the point of commercial success, though his son Claude, using a process much more like Urban's than his father's, made a number of fine early color films.
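For the technically inclined, the whole additive trick can be captured in a few lines. This toy sketch (my own illustration, not Urban's apparatus) assumes the film has been scanned into grayscale numpy frames that alternate red-filtered and green-filtered exposures, and crudely stands in for persistence of vision by fusing each red/green pair into a single RGB frame:

```python
# Toy reconstruction of Urban's additive two-color principle.
import numpy as np

def kinemacolor_fuse(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Fuse alternating red/green grayscale frames into full RGB frames."""
    fused = []
    for red_frame, green_frame in zip(frames[0::2], frames[1::2]):
        rgb = np.zeros(red_frame.shape + (3,), dtype=red_frame.dtype)
        rgb[..., 0] = red_frame    # red record -> red channel
        rgb[..., 1] = green_frame  # green record -> green channel
        # rgb[..., 2] stays zero: there is no blue record on the film,
        # which is exactly why Kinemacolor reproduced dark blues so poorly.
        fused.append(rgb)
    return fused
```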

Ironically, it was one of William Friese-Greene's original concepts -- dyed films glued or bonded together -- which would ultimately be the precursor of modern color processes. The Technicolor company started out with an additive red/green system much like Urban's, which it called "System 1." It next developed "System 2," a subtractive color process in which two dyed films were cemented together, but the finished film was prone to bubbling and cupping. A third system transferred the dyes onto a single fresh strip of film, but was still limited to two colors; films made with these two-color processes have a haunting, greenish-yellowish hue which, while perfect for horror features such as "Dr. X" (1932), was less well suited to dramatic or comedic subjects.

By the mid-1930's, Technicolor had shifted to a three-strip system: scenes were shot on three separate black-and-white negatives through color filters, and the three records were then dyed and transferred onto the final prints. This offered the first commercially successful full-color image, although red and green still had the most zing -- thus Victor Fleming's choice of ruby slippers and green witch's makeup for 1939's The Wizard of Oz. Not many people realize it, but "Color by Technicolor" was a licensed process, not one owned by the studios; directors had to hire Technicolor's camera operators and technical consultants, as well as entrust post-production to the company's facilities.

Now, as to sound: at nearly the same time, various technologies were being tried to synchronize sound with moving pictures. Emile Berliner was involved with a disc-based system; Edison offered a cylinder-based one; but neither achieved real success. All the various attempts at sound stumbled over the issue of synchronization until the development of optical soundtracks, which in turn had to wait until amplified electrical recording became possible in the mid-1920's. These, because they could be printed onto the film itself and duplicated along with it, were both reliable and economically feasible, though of course exhibitors had to invest in new equipment. Although hailed as the first sound picture, 1927's "The Jazz Singer" in fact had sound in only certain portions of the film, and still relied on the old sound-on-disc system. Rival technologies -- RCA's variable-area "Photophone" system and Western Electric's variable-density system -- vied to become the new industry standard.

The introduction of sound to film brought with it a host of technical problems: microphones had limited range, and had to be hidden in potted plants and tableware; camera noise was too easily picked up, and cameras had to be encased in sound-proof housings. Mary Pickford, one of the greatest stars of her day and a founder of United Artists, had a terrible experience with her 1929 sound film, "Coquette"; she had to strain her voice to get it picked up by the microphones, and the results were far from flattering. Her UA partner Charlie Chaplin, though he eventually embraced the idea of composing musical scores for his soundtracks, put off the use of the voice; aside from a phonograph recording, a one-liner ("Get back to work!"), and a nonsense song in 1936's "Modern Times," Chaplin did not use spoken dialogue in any of his films until "The Great Dictator" in 1940, though some years later he recorded narrative voice-overs for several of his early features. Nevertheless, sound, well before color, became a standard feature of film very soon after its introduction.

Next up: 3D film -- in 1922?!