Thursday, April 12, 2018

Copyleft

The history of the "copyleft" movement is more or less coeval with the emergence of the Web as a means and mode of sourcing, creating, and publishing works, whether texts, images, music, or multi-media productions. Some trace the movement to the desire of some computer programmers in the 1970's to create versions of programming languages and software that could be developed independently of corporate-owned commercial products. Others see its roots in the xeroxed or mimeographed "zine" culture, or in punk-rock designers' preferences for public-domain clipart. Whatever its exact origins, a signal moment arrived in 1988 when Richard Stallman created the first copyleft license, which he dubbed the EGPL (for Emacs General Public License), and which evolved into the GNU General Public License in 1989. A version of this license was, until recently, used by Wikipedia and many other wiki- and crowd-sourced sites; in essence, it declares that the material licensed may be shared by anyone as long as a) they indicate the source; and b) the same license to share is imposed upon all subsequent users. The GNU license originally didn't contemplate "remixing," in which the GNU-licensed material might form only a small part of a new work, nor did it account for problems that might occur when someone tried to copyright a longer text that quoted from GNU-licensed material. In 2009, in part because of such concerns, the Wikimedia Foundation dropped the GNU license in favor of a Creative Commons license known as CC-BY-SA, which has similar attribution and share-alike requirements, but is friendlier to quotation and remix, and doesn't require reproducing the full text of the license in every work (Creative Commons maintains human-readable and legal versions of all its licenses).

Creative Commons licenses are now quite common, and have been used on blogs, text archives, wikis, streaming music and media, and downloadable media. I haven't yet seen one on a printed book, but it's certainly conceivable that one will appear on certain e-books or other reference works. They offer the advantage of a variety of licenses, which allow for a) copying; b) modifying; c) doing either with or without attribution; and d) allowing commercial or non-commercial use. Creators of content can thus exercise as much, or as little, control over the re-use of their material as they like. Of course, CC-licensed material has rarely been at issue in a courtroom, and as a result there is little case law to indicate how effective these licenses may be when it comes to the vital question of whether and to what extent they are enforceable. And perhaps that's best, for now.

Tuesday, April 10, 2018

History of Copyright

The idea of an inherent right of the author of a written work to protect it from unauthorized copying is, in terms of western history, quite recent indeed. The 1709 "Statute of Anne" was the first legal recognition of the rights of an author. It presented itself as "an act for the encouragement of learning," with the implicit argument that allowing authors the exclusive right to publish their work for a limited term would enable them to earn some reward for their labors, while at the same time eventually allowing their work to be used freely. As with earlier systems of intellectual property, such as "Letters Patent," the Act's term was limited -- 14 years, which could be extended for 14 more, after which the rights of the author expired; it was understood then, as it is now, that authors, like inventors, quite frequently draw from the works of those who have come before them, and that preserving such rights indefinitely would stifle creativity. One thing that has certainly changed since 1709 is the term of copyright; US copyright eventually settled on a period twice as long as the Statute of Anne (28 years, renewable for 28 more years); revisions to this law in recent decades have extended these 56 years to 75, 95, and even as many as 120 years; the last of these revisions, the "Sonny Bono Copyright Term Extension Act," went further still, freezing the date at which older works could enter the public domain at 1923. Many creative artists feel that this law has exercised a stifling effect upon creativity; many of them joined in support of a legal case, Eldred v. Ashcroft, that challenged these extensions on the basis of the Constitution's reference to copyright protection lasting only for "limited times." The Supreme Court eventually ruled against Eldred, saying in effect that Congress could establish any length of term it wanted, so long as it was not infinite. Could is, of course, not should.

The result has been, ironically, that in the very age when the ability of writers, artists, and musicians to draw upon, alter, and incorporate what the copyright office calls "previously existing works" is at its greatest, the legal barriers against doing so have been raised to the harshest and longest in the history of copyright protections. This is offset, to a degree, by two factors: 1) "fair use," a doctrine codified in the 1976 revision of the law, under which a limited amount of a work -- as a rough rule of thumb, say, less than 10% of the original -- may be used, particularly when the use is non-commercial, educational, and/or spontaneous; and 2) simple lack of enforceability. It's quite impossible to police all the billions of web servers, web pages, and personal computers and devices, to ensure that no copyrighted material has been taken or stored; enforcement, as a result, tends to be spotty if dramatic (as in the case of a Minnesota woman who was assessed damages of 1.5 million dollars for sharing 24 music files on a file-sharing network).

It needs to be noted that copyright also functions very differently depending on the medium in question. Printed texts are straightforward enough, but in the case of physical media such as a sculpture or a painting, possession of the physical object confers certain property rights, including the right -- if one desires -- to restrict or prohibit "derivative" works such as photographs of these works, although the issue of non-manipulated or "slavish" copies is a murky one. Music is the most complex form: there are at least four layers of copyright in a recorded song: 1) The composition itself, and its embodiment in sheet music; 2) The performance of that composition on the recording, including the act of interpretation and any variations on the composition; 3) The physical embodiment, if any, of this performance, known as "mechanical" rights; and 4) The right to transmit the performance. All of these, of course, were once separate domains: the sheet-music industry/print, the recording studio, the record company or "label," and radio stations -- but all are now merged into a single, complex activity that can be carried out on a single device, even a smartphone.

In Hip-hop, nearly all original samples were taken from vinyl, and most consisted of a few measures of the "break beat" -- far less than 10% of the original recording.  And yet, by cutting or looping this beat, it could in fact become the rhythm track for an entire recording.  How then to measure that, in terms of originality? Legal cases attempted to separate the "essence" of a song from its literal embodiment in part or whole -- a measure which tripped up George Harrison, whose "My Sweet Lord" was found to have unconsciously copied the essence of the Chiffons' "He's So Fine." And yet, even if the "essence" is not copied, copying enough of the literal bits and pieces (riffs, beats, harmonies, etc.) can lead to the same conclusion of infringement. The technical term that has been developed for this is "fragmented literal similarity," and it would seem to be the measure most applicable to the use of samples in Hip-hop. But how much similarity, how many fragments, are enough?

As things have turned out, neither of the two most significant cases of copyright litigation involving Hip-hop ended up dealing with this question.  In one case, Biz Markie's half-singing of the chorus from Gilbert O'Sullivan's "Alone Again (Naturally)" was represented by his lawyers as being part of the natural creative practice of Hip-hop -- an argument which they lost spectacularly, as the recording was quashed, all copies ordered destroyed, and substantial damages awarded.  In the other case, one which reached the Supreme Court, Luther Campbell and 2 Live Crew were sued for their parodic Hip-hop version of "Pretty Woman," originally made famous by Roy Orbison.  Their lawyers quite wisely avoided the originality issue altogether, arguing that the 2 Live Crew version was a parody, an argument accepted by the "Supremes" on fair-use grounds.  After all, if we're going to use our free speech to parody or mock others, we'll have to be imitative, won't we?  It's too bad Biz's lawyers didn't make the same argument.

The most spectacular recent case was that over Robin Thicke's "Blurred Lines," which was found in a jury trial to have infringed the copyright for Marvin Gaye's "Got to Give it Up." In this case, the most unusual feature was that the judge, John A. Kronstadt of the U.S. District Court, prohibited the jurors from listening to phonorecordings, insisting that any evidence had to be based on the sheet music. For a jury, most of whose members could not read sheet music, the days of expert testimony must have been mind-numbing. Many musicians decried the resulting verdict as far too broad, in effect prohibiting any music from drawing inspiration from any past recordings, and signed on to an appeal in 2016. In an amicus brief filed by a University of Washington law professor, it was further argued that both compositions were essentially "aural" -- that is, composed and recorded without reference to sheet music -- and that the exclusion of the recordings was a fatal error. A ruling has yet to be issued.

Thursday, April 5, 2018

Surveillance

We are being watched. And yet, although public security cameras are the most visible and obvious signs of surveillance, there are now many more efficient ways to follow an individual person. A few years ago, a German politician, Malte Spitz, made headlines when he asked for his records from Deutsche Telekom, and found that they had recorded his exact location in terms of longitude and latitude more than 35,000 times in one six-month period. And yet this is nothing new; cell phones only function when they can be located by the cellular system; the only news was that the information was retained. Indeed, there have been many sociological studies made using anonymous cellphone data to examine traffic patterns, pedestrian flow, and other broader areas of human society. Such data exists, and it would be a small leap indeed to link it with personal information.

Fear seems to predominate, and although we may all have a creepy feeling of being followed, few want to rock the boat by complaining. Fears over terrorism, indeed, have made many people feel safer with the cameras running. In London, one of the most surveilled cities on earth, some have estimated that there is one camera for every 14 citizens -- 421,000 in London alone. And yet there have been relatively few instances of vandalism of these cameras. In more rural areas of England, in contrast, roadway cameras designed to catch speeders -- the hated "Gatsos" -- have been frequently vandalized, with the means varying from spray paint and hammers to, in a number of instances, bombs. Are rural Brits angrier than urban ones? Apparently, a speeding ticket is more hated than the idea of being watched while one shops (or else rural folk are more likely to reach for a blowtorch or a sledgehammer).

But all this belies the strange truth of how our modern Internet's Big Brother came into being: we summoned him ourselves. Like Aladdin rubbing his lamp, we asked for all kinds of goods and services, little realizing that each of our requests created a valuable little bit of data about ourselves.  We asked for the convenience of on-line banking, of getting our medical test results, of qualifying ourselves for mortgages, all without having to leave home or sign a physical piece of paper. And it's those millions of acts that have made us most susceptible to invasions of our privacy, whether by the government, corporations, or the eager army of hackers. It turns out that Big Brother doesn't have to bother to "watch" us -- in many instances we're freely giving our information to him.

And we all know the story about the frog in the pot that was gradually heated up. We've gotten used to some of this intrusiveness, and are more willing today than we were yesterday to give up a little privacy in return for convenience. As one sign of this: Google's "new" feature (announced this week!) allowing people to share their own locations on Google Maps is in fact nearly identical to a feature Google killed eight years ago, known as Google Latitude. Back then, privacy concerns put the kibosh on the feature, but today it seems to be as welcome a development as sliced bread.

Wednesday, April 4, 2018

Hand-held Media

I suspect that, given the topic for this week, you probably weren't expecting to see ... a "transistor radio." It was the latest in hand-held media technology when it débuted in 1954, in an era when a "radio" was a piece of furniture only slightly smaller than the sofa. It was co-produced by Texas Instruments, which would later be a pioneer in the field of computing; four years later, its laboratory would be the birthplace of the integrated circuit, and it would produce the first hand-held calculator in 1967 -- for the low low price of $2,500! It was probably around that year that I first got my own transistor radio, complete with a single monaural ear-bud, and saw one of the TI calculators my dad had brought back from the lab at General Electric (he actually had to sign it out, since it was such an expensive piece of hardware).

Of course no one foresaw in these early days that there would come a slow, Frankenstein-like convergence which would create a new device that would serve not only as a radio and a calculator, but also as a camera, video camera, music player, and telephone. The sheer weight and size of all the devices and media that an iPhone or Android smartphone replaces would easily top a hundred pounds, and take up an entire living-room wall. With each LP weighing in at an average of 140g (about 5 ounces), the 400 or so LP's that would be needed to equal a 32 GB iPhone loaded with music stack up to about 125 pounds, not counting the weight of the sleeves and covers!
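
For the curious, the arithmetic behind that estimate is easy to check; the little Python sketch below simply re-uses the figures already given above -- a 140 g disc and roughly 400 LPs for 32 GB of music -- and nothing else.

# Rough check of the "125 pounds of LPs" estimate, using the post's own figures.
GRAMS_PER_POUND = 453.6

lp_weight_g = 140   # average weight of one LP, as stated above
lp_count = 400      # approximate number of LPs to match a 32 GB iPhone of music

total_pounds = lp_count * lp_weight_g / GRAMS_PER_POUND
print(f"{total_pounds:.0f} pounds")   # about 123 pounds, before sleeves and covers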

The milestones along the way are worth remembering, even as they fade from our sight: the Walkman (1979), the Discman (1984), the first smartphone (IBM's Simon in 1992), and the first iPod (October 2001, scarcely seventeen years ago, if that's possible). One could very well ask, what could possibly be next? Or will the hand-held be supplanted by the strapped-on-the-head, the wearable, or even the implanted?

Sunday, April 1, 2018

Make Love Not Warcraft

Sometimes, there is a narrative which so perfectly casts into relief the connections and differences between various old and new media forms that it becomes a kind of date/time stamp for the history of media. Orson Welles' Mercury Theatre production of War of the Worlds, Gary Ross's 1998 film Pleasantville, or the recently viral video of a baby trying to use pinch and swipe finger controls on books and magazines, are among those that come to mind. And to that very select list, one should really add episode 147 of South Park, Make Love Not Warcraft.

The episode's first stroke of genius was to collaborate with Blizzard Entertainment, which custom-produced the computer game segments, even adding features -- such as synchronizing mouth movements with speech -- not actually available in the game. The second stroke was using the South Park character voices with their Warcraft avatars (as the game would look and sound to those using software add-ons such as Roger Wilco or TeamSpeak) so that the kids' voices come out of their tough-looking, overbuilt Warcraft selves. Thus there is irony in every scene, the more so when a balding, beer-bellied, potato-chip munching man wearing glasses and a carpal-tunnel brace turns out to be the big bad fellow who is "killing everyone in the game." Blizzard executives are shown being stunned to discover that this character has become so strong he's even killing their admins -- he has been playing Warcraft all day, every day, since it came out, such that he must have no life whatsoever outside of the game. So, as one Blizzard exec asks in Master Po fashion, "How do you kill that which has no life?" The kids will show us the way.

Friday, March 23, 2018

Social Media II

The range and size of social media networks have increased almost exponentially in the early years of the twenty-first century. We've gone from early forums in which only a few hundred people might participate, such as a BBS or a LISTSERV list, to truly mass media such as Facebook and Twitter, which count their users in the hundreds of millions, or even billions, around the globe.

But much more than just size has changed. At a certain 'tipping point,' social media begin to function in ways that, when they were smaller, would have been impossible. Facebook and Twitter have been credited as playing roles in the "Arab Spring" in the Middle East, particularly in Egypt and Tunisia; Facebook's founder has been the subject of a major Hollywood film; and Twitter feeds and cell-phone photos have brought down politicians of every party, sometimes within a matter of mere hours. It certainly sounds as though these technologies have crossed some threshold, altering the fabric of reality itself -- but then, of course, one can look back at similar claims made about virtual-reality video helmets (anyone remember Lawnmower Man?) and wonder whether these revolutions will seem quite so revolutionary a few years from now.

Three key developments have shaped this period: 1) Social media with "presence" -- a main page at which users can add or copy content, offer images, texts, or video of their own making or choosing; 2) Sites with instant linkability -- the ability of users to add (or subtract) active and immediate connections to other users; and 3) Sites that bundle essential tools (e-mail, instant messaging, and other software capabilities). Finally, all of the above, or at least the survivors in this highly competitive field, have gone multi-platform; no social medium of the future will thrive unless it is available on desktops, laptops, tablets, and smartphones, and has some system of synchronizing all its users' preferences and updates.

So what next? The spaghetti is still being hurled at the (virtual) refrigerator wall: Blippy, a site that enabled shoppers to instantly "share" posts about their purchases, was hacked, and credit cards compromised -- so much for that! Google tried to launch its own "Wikipedia killer," dubbed Knol, but the site filled up with spam so quickly that it became almost useless, and Google discontinued it; its "Buzz" social networking service, which irritated users with its auto-generated list of "contacts," fared little better; and Apple stumbled with Ping, an addition to its popular iTunes platform meant to enable people to share news about music purchases and performances. The latest entry, Pinterest, allows users to "pin" content to one another, with a focus on bargain shopping, and has the unusual distinction that a majority of its users, in many surveys, are women. But will it go the way of the Lifetime network? And what of sites that advertise themselves as 'Pinterest for men'?

It may seem we've already "shared" too much in this era of TMI, and these social media may be reaching their limits -- but I wouldn't bet on it.

Thursday, March 22, 2018

Social Media I

The evolution of social media can be conceived of in many ways -- in one sense, it could be said that language itself was the first social medium. Even then, considering a "social medium" to be any means of transmitting or recording language over time and space, writing could well be seen as the earliest, followed swiftly by the development of the "letter" as a social form, which dates back to at least the seventh century BCE. The ancient Library of Ashurbanipal, King of Assyria from 668 to 627 BCE, included personal letters written in cuneiform on clay tablets.

The telegraph and telephone come next in line; even if, as a recent NY Times article noted, the phone is experiencing a slow decline, it remains our oldest electronic social medium. I'm old enough to remember the old "Reach out and touch someone" adverts for Ma Bell, and for a while, there was nothing more direct and personal than a phone call. ARPANET went online in 1969, and electronic mail followed over it and its successors in the early 1970's, but e-mail did not become a common form of communication until the late 1980's; well before then, home computer users were setting up BBS sites where they could post notices and download simple programs. My home town of Cleveland had a huge site, Freenet, where you could also get medical advice from doctors at Case Western Reserve and University Hospitals. The WELL, a large social site based in San Francisco, was the first home of integrated mail, chatroom, and file services; perhaps not coincidentally, it was also the site of the first case of online impersonation that went to court (a man was sued by two women for pretending to be a different, older woman who was a mutual friend).

In academia, the LISTSERV protocol brought people together by field and interest, and made it possible to, in effect, send a message to hundreds of people at once in search of advice or response; LISTSERVs were often associated with archives where you could search through older messages. Early online game spaces such as MUDs (and, later, MOOs) go back to the late 1970's, and many became highly social, with tens of thousands of "inhabitants" maintaining spaces there. All of these interactions were exclusively text-based, and the only "graphics" consisted of what could be cobbled together out of ASCII characters.

It wasn't until the arrival of the commercial internet in 1993, and the spread of the World Wide Web at around the same time, that social media really took off; by the end of the decade, Six Degrees, LiveJournal, Blogger, and Epinions had launched. In 2003, Second Life offered its users a virtual retake on their first lives, albeit with a graphical interface that looks primitive by today's standards; that same year, MySpace became the first modern social networking platform, and a model for Facebook, which followed shortly after. With its billions of users, including everyone from the President to the Pope to Adam West, Facebook certainly has the critical mass to change the face of human communication -- and yet, in recent years, the loss of many of its younger ("Millennial" generation) users has some people wondering whether it may someday go the way of MySpace.

Thursday, March 15, 2018

History of Computing II: Mouse forward

The move toward the possibility of a computer that could truly be called "personal" begins in many ways with Douglas Engelbart's question: "If in your office, you as an intellectual worker were supplied with a computer display backed up by a computer that was alive for you all day and was instantly responsive, how much value could you derive from that?" The question was posed on December 9, 1968, at what has come to be called the "Mother of All Demos," where Engelbart and his team from the Augmentation Research Center at Stanford Research Institute in Menlo Park, California, demonstrated their work live at the Fall Joint Computer Conference in San Francisco. Unlike anyone else in 1968, Engelbart had some concrete answers to this seemingly abstract question: he was seated at a console which included a chorded keyboard (not unlike that employed by the operator of the "Voder" at the 1939 World's Fair) as well as the first operational three-button mouse, which Engelbart had designed together with Bill English starting in 1963. Using this interface, as well as an audio and video projector, Engelbart demonstrated the other capacities of his system, which included collapsible and relational menus, a simple mapping system, a text editor, and basic programming tools.

Of course the computers that backed up Engelbart's console were still massive, and required a number of other human operators and technicians (he chats with several of them in the course of the demo). It would be nearly another sixteen years before advances in microchips, display screens, and hardware would enable the production of the Apple Macintosh, the first commercially successful personal computer to incorporate a mouse along with a graphical user interface (GUI) and some degree of WYSIWYG (What you see is what you get) graphics. It was these technologies, much more than earlier screen and keyboard machines, that turned the modest interest in home computers into the revolution in personal computers that enabled the "Internet" age.

Interestingly, although Engelbart was actually depending on a remote set of machines connected over a cable, it would be a long time before the computer became not only an independent platform but also a means of communication. Early modems were slow and unreliable, and took hours to send long files; even then, they more often connected to a remote "host" which was itself isolated from the 'net, such as a BBS system. The earliest version of the Internet, known as ARPANET, was built for the US Department of Defense's Advanced Research Projects Agency; its "distributed" architecture, which drew on studies done at the RAND corporation, linked DoD facilities with contractors and research universities, and was regarded as the design most likely to survive a Soviet nuclear attack. Even well into the late 1980's, when I sent my first e-mail (I was then a grad student doing a work study job at Brown's Graduate School offices), 90% of the traffic on the Internet went from one big host computer to another at universities and research institutes. I remember sending a message to someone with an odd-sounding hostname, and finding out only later that the user was in Tel Aviv, Israel!

The Internet was not opened to commercial traffic of any kind until 1993, and it was around this time that Sir Tim Berners-Lee released his hypertext "world wide web" protocol, and Mosaic, the first widely-used browser, came into use. This software, because it enabled terminal-to-terminal communication using an interface which worked in much the same way as the GUI's of individual computers (well, Macs in any case!), was the key step toward the 'net becoming a true mass medium. And it was only then that the answer, or rather answers, to Engelbart's question became clear, with billions of Internet users worldwide, and global e-commerce quickly becoming a dominant means of trade and exchange throughout the developed world. And, of course, the humble "intellectual worker" -- such as yours truly -- has derived, and continues to derive, great value from all this; in the case of my most recent book, which took about six years to research, I'd estimate that, without access to Internet-based historical materials, the project would have taken at least twice as long, and cost tens of thousands of dollars in airfare to travel to and search through archives around the world.

Thursday, March 8, 2018

History of Computing I: The Colossi

The earliest notion of a "computer" that most people had in the nineteenth and twentieth centuries had one key feature: enormous size. Babbage's 1837 "Analytical Engine," widely regarded as the earliest ancestor of the computer, would -- had it ever been completed -- have filled a warehouse-sized room and weighed nearly 30,000 pounds. Babbage's designs used interlocking gears with various ratios to perform calculations, and his system contemplated a punch-card I/O unit, a calculating unit known as the "Mill" (a sort of CPU), and a storage unit he called the "Store." In his search for backers, he enlisted Lord Byron's daughter Ada Lovelace, who translated a French-language account of the Engine and added her own erudite notes explaining its operations; in 1983, a new computer language designed for the US Department of Defense was christened "Ada" in her honor. Two working models of his earlier Difference Engine No. 2 have been built in recent years; you can see one of them in action here.

Few significant advances in computing were made until the late 1930's and early 1940's, when military needs -- calculating target data, and (most importantly) breaking secret codes such as Germany's ENIGMA -- provided both the impetus and the funding. In the UK, researchers at Bletchley Park, led by the young computer genius Alan Turing, constructed machines they named "bombes," which used electrical relays and motors to run through hundreds of thousands of possible combinations of the wheels and wires of an Enigma machine. Later, they constructed a far more advanced, fully electronic machine, known literally as "Colossus," to attack the still more complex Lorenz teleprinter cipher. Advances in cryptography would eventually render all these computers obsolete -- indeed, a carefully-done "one time pad" or "Vernam cipher" is unbreakable once the pad text is destroyed (as witness a message found on a skeletonized pigeon, though some have claimed to have deciphered it).
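
The logic of the one-time pad is easy to see in miniature. Here is a minimal sketch in Python (the message and pad below are toy values of my own, not any historical cipher material): each byte of the message is XORed with a random pad byte; XORing again with the same pad restores the message; and without the pad, every plaintext of the same length is equally plausible.

import secrets

def vernam_encrypt(message: bytes, pad: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte."""
    if len(pad) < len(message):
        raise ValueError("the pad must be at least as long as the message")
    return bytes(m ^ p for m, p in zip(message, pad))

# Decryption is the same operation: XORing with the pad a second time
# restores the original bytes.
vernam_decrypt = vernam_encrypt

message = b"ATTACK AT DAWN"              # toy plaintext, not a historical message
pad = secrets.token_bytes(len(message))  # truly random, used once, then destroyed
ciphertext = vernam_encrypt(message, pad)

assert vernam_decrypt(ciphertext, pad) == message
print(ciphertext.hex())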

Yet other problems, such as calculating trajectories, remained. The first fully electronic, general-purpose digital machine along these lines was ENIAC (short for Electronic Numerical Integrator And Computer), which used vacuum tubes -- more than 17,000 of them! -- as relays and switches. The machine had to be literally re-wired for each different kind of operation; the task was entrusted to a group of young women who, even though many of them had college degrees in mathematics and engineering, were regarded at first as little more than glorified switchboard operators. All of the units together weighed more than 60,000 pounds, and consumed 150 kilowatts of power -- all to perform roughly 5,000 calculations per second. While this was many times faster than any earlier machine, it's equivalent to a CPU speed of about 5 kHz -- on the order of tens of millions of times slower, in overall throughput, than the average desktop computer of today.
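
To put those figures in rough perspective, here is a back-of-the-envelope comparison in Python; the modern-desktop numbers are illustrative assumptions on my part (a ~3 GHz core, and on the order of 100 billion simple operations per second across all cores), not benchmarks.

# Back-of-the-envelope comparison of ENIAC with a modern desktop.
# The modern-desktop figures below are illustrative assumptions, not measurements.

eniac_ops_per_sec = 5_000            # roughly 5,000 additions per second

modern_clock_hz = 3.0e9              # assume a ~3 GHz core
modern_total_ops_per_sec = 1.0e11    # assume ~100 billion simple ops/sec, all cores combined

print(f"Clock-rate ratio:  {modern_clock_hz / eniac_ops_per_sec:,.0f}x")
print(f"Throughput ratio:  {modern_total_ops_per_sec / eniac_ops_per_sec:,.0f}x")
# Prints roughly 600,000x for the clock comparison and 20,000,000x for throughput,
# which is why estimates of "how much slower" ENIAC was vary by orders of magnitude.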

The key invention which began the change from room- and-building-sized machines to something that could actually fit in an office or a home was of course the transistor, developed at Bell Labs in 1947. The basic idea was to use a semiconductor sandwiched between more conductive materials; such a device, like a radio tube, could be used either as a signal amplifier or a switch. There were several key advantages: transistors produced less heat, were cheaper to manufacture, and -- even in their early state -- much smaller. Each of the women of ENIAC shown in the photo above is holding a unit with the same storage capacity; the first two decreases in size are due to smaller, specially-made tubes, but the last is due to transistors. A typical smart phone today has nearly a million times the number of transistors in this smallest unit.

With the war over, business demands drove the computer market. The first computer offered commercially for this market was the UNIVAC, introduced in the early 1950's. For around $750,000, you got a CPU speed of 1.9 kHz, about 1.5 Kb of memory, and tape drives, each the size of a small refrigerator, which held about 1.5 Kb per tape. A decade later, IBM introduced its 1401 system; with the top model, one could now have 16 Kb of memory, and perform almost 23,000 calculations per second -- 23 kHz. IBM did not sell the 1401, but you could lease one for around $2,500 a month. Home computing on a practical scale was still far in the future; although the SIMON and other home-kit computers were available throughout this period for home hobbyists, their tiny capacity -- 8 binary switches -- made them useless for any but the most limited tasks.

Thursday, February 15, 2018

Ghosts in the Machine: Early Television from 1928

The image to the left is a single frame from the earliest known television recording of a human face, made by the inventor John Logie Baird. The subject, a Mr. Wally Fowlkes, was a young lab assistant undistinguished save by his willingness to sit for lengthy periods under the bright, hot lights required to make television recordings. And, amazingly, these recordings were made almost entirely using mechanical means -- a giant disc with glass lenses was linked directly to a Columbia Records turntable equipped with a cutting stylus -- and predate any electronic images of humans by several years! They were preserved on discs that look much like audio recordings, and the frequency of the image data is so low that, if played through speakers, a sound in the audible range is produced. Indeed, Baird claimed that he could distinguish, just by listening to them, a recording of a face from, say, a recording of a pair of scissors or a soccer ball. Baird called his process Phonovision, and although he abandoned it as offering too brief a recording time, and posing too many technical obstacles, it was nevertheless the first system of recorded television in history.

These recordings were little-known until a few years ago, when recording engineer Donald McLean collected several of them, and transferred their analog signal into digital form. Once this was done, he was able to correct for all kinds of problems that plagued Baird's engineers -- mechanical resonance ("rumble"), pops and scratches on the disc, speed irregularities, and problems with frame registration. The earliest recordings are still quite primitive, but one can at least recognize the faces.

Even more remarkably, in addition to these laboratory discs, there exist home recordings, made using "Silvatone" aluminum discs (one of these was referenced recently in The King's Speech). Silvatone discs used a heavy, weighted cutting stylus, and could record any sort of signal, whether of the human voice or a radio broadcast. And, due to the relatively low frequency of the signal, they could be used to record television broadcasts as well. During the brief period from the late 1920's through to the early 1930's, when Baird was able to send out television signals with the BBC's co-operation, a number of amateur recordings were made; these, too, have been restored by Mr. McLean. There are about a half-dozen different snippets: dancing girls (of course!), a marionette show, and a singer by the name of Betty Bolton. McLean actually located Miss Bolton, by then 92 years old, and she was able to personally identify herself as the subject of the recording!

During this era -- in 1930 -- the BBC broadcast the very first television drama, an adaptation of Pirandello's play "The Man with a Flower in his Mouth." Although this does not survive, there is a re-enacted version, using the exact same script, the original music and title cards, and an identical 30-line Baird camera system -- you can watch it here, along with comments on the original broadcast and the recreation.

Mr. McLean has kindly permitted me to show his restored original Baird recordings to you -- but in class only -- as he is concerned to protect his rights in the restored versions. So look for some haunting images at Wednesday's class!

SIDEBAR: Here's a chart I've prepared showing the relative frequency and bandwidth of television signals, from the days of the Baird discs to HDTV.

ADDITIONAL LINKS: The excellent Television History site, a film of the 1936 Radiolympia demonstration broadcast as well as the High-def opening ceremony later that year. Both feature versions of the commissioned theme song, with its curious lyrics:
A mighty maze, of mystic, magic rays
Is all about us in the blue
And in sight and sound they trace
Living pictures out of space
To bring this enchantment to you ...
Here also you can see a modern 32-line mechanical TV in action; a 1938 Nazi TV station ident (they named the station after Paul Nipkow, inventor of the Nipkow disc, so as to claim TV as an "Aryan" invention); and lastly, a TV advert for DuMont featuring Wally Cox, later a "Hollywood Squares" regular and voice of Underdog.

Tuesday, February 13, 2018

Later Developments in Cinema

The history of the development of cinema after the early portion of the silent era is largely -- though not entirely -- a question of the gradual progress towards both sound and color. Each of these, as we've already seen, started much earlier than generally imagined; sound began with Dickson's "Experimental Sound Film" of 1894, and hand-painted color had already reached a high-water mark with Georges Méliès's 1900 version of Joan of Arc. With sound, the great problem was synchronization; there were all kinds of schemes for keeping sound -- as a phonograph record, an optical code, or any other pre-recorded substrate -- in time with the image. When it came to color, hand-painting -- even with stencils, and armies of (mostly female) colorists -- remained a premium mode without a premium payback. The main use of color in commercial film, in fact, was tinting -- a process in which certain segments of the edited film were run through chemical dye baths. An emotional scene might be bathed in red, while another encounter would be shown in blue or purple. The advantage of tinting was that all the varied colors could be achieved in post-production, at the director's discretion. Scenes such as the "mellow yellow" of the frame from an unknown film of this era were common indeed. In some cases, tinted prints survive and have been restored; in others, the indications for tinting have been recreated in restoration.

At the same time, efforts progressed toward a technology that would bring about the appearance (at least) of full color. The pioneer in this field was Charles Urban, an American expat in England who had already achieved success with his black-and-white films in the era of the "Cinema of Attractions." Urban realized that persistence of vision, the same principle that enabled the illusion of motion, could enable an illusion of color as well; this was the basis of his "Kinemacolor" system. Black-and-white film was shot through a special camera with a spinning filter, which exposed alternate frames through red and green. Once developed, the film was projected back through alternating color filters, so that the "red" frames were tinted red and the "green" frames green; the result was something very close to the feeling of full color (though in fact the process missed part of the spectrum -- with dark blue being very imperfectly reproduced). Urban's process also had the huge technical advantage that, although special cameras and projectors were needed, the film itself was just ordinary black-and-white stock. Urban promoted his system through ambitious, epic-sized films shown in specially built, luxurious cinemas. Unfortunately for Urban, he was sued by cinema pioneer William Friese-Greene, who (falsely) claimed he had had the idea for this kind of color alternation before. As has happened with modern patent lawsuits, the British judges had no grasp of the technology on which they were ruling, confusing concept with practical art, and conflating Friese-Greene's scheme of staining alternate frames (which produced only a muddy mess) with Urban's far superior process. They ruled in favor of Friese-Greene, and Urban was eventually forced into bankruptcy. Friese-Greene was never able to bring his system to the point of commercial success, though his son Claude, using a process much more like Urban's system than his father's, made a number of fine early color films.
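
For readers who want to see how two alternating color records can fuse into something like full color, here is a small illustrative simulation in Python with NumPy. It is only a software analogy -- the historical process was photochemical and relied on the eye's persistence of vision rather than on digital channel-mixing -- and the function name and frame layout are my own assumptions: even-numbered frames are treated as the red record, odd-numbered as the green, and the blue channel is simply left empty, which is also why Kinemacolor handled blues so poorly.

import numpy as np

def reconstruct_kinemacolor(frames):
    """Pair alternating red- and green-record frames into approximate color frames.

    `frames` is a list of 2-D grayscale arrays (floats in [0, 1]) exposed
    alternately through a red filter (even indices) and a green filter
    (odd indices). Each output frame combines one red record with its
    neighbouring green record; blue is never recorded, so it stays at zero.
    """
    color_frames = []
    for i in range(0, len(frames) - 1, 2):
        red_record, green_record = frames[i], frames[i + 1]
        rgb = np.zeros(red_record.shape + (3,))
        rgb[..., 0] = red_record    # red channel from the red-filtered frame
        rgb[..., 1] = green_record  # green channel from the green-filtered frame
        color_frames.append(rgb)    # blue channel left empty
    return color_frames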

Ironically, it was to be one of William Friese-Greene's original concepts -- dyed film which was glued or bonded together -- which would ultimately be the precursor of modern color processes. The Technicolor company started out with a red/green system much like Urban's; they called this "System 1." Films made with these early two-color systems have a haunting, greenish-yellowish hue which, while perfect for horror features such as "Dr. X" (1932), was less well suited for dramatic or comedic subjects. They next developed "System 2," a subtractive color process in which two dyed films were cemented together, but the finished film was prone to bubbling and cupping. A third system transferred the dyed prints to a fresh single film, but was still limited to two colors.

By the mid-1930's Technicolor had shifted to a three-strip system, shot on three separate films which were then dyed and transferred to produce the final prints. This offered the first commercially successful full color image, although red and green still had the most zing -- thus Victor Fleming's choice of ruby slippers and green witch's makeup for 1939's The Wizard of Oz. Not many people realize it, but "Color by Technicolor" was a licensed process not owned by the studios; directors had to hire Technicolor's camera operators and technical consultants, as well as entrusting post-production to their facilities.

Now, as to sound: at nearly the same time, different technologies were being tried to synchronize sound with moving pictures. Emile Berliner was involved with a disc-based system; Edison offered a cylinder-based one, but neither achieved real success. All the various attempts at sound stumbled with the issue of synchronization until the development of optical soundtrack systems, which in turn had to wait until amplified electrical recording became possible in the mid-1920's. These, because they could be recorded on to the actual film, and duplicated along with it, were both reliable and economically feasible, though of course exhibitors would have to invest in new equipment. Although hailed as the first sound picture, 1927's "The Jazz Singer" in fact only had sound in certain portions of the film, and still relied on the old sound-on-disc system. Rival technologies -- RCA's "Photophone" system, Western Electric's variable density system -- vied for the new industry standard.

The introduction of sound to film brought with it a host of technical problems: microphones had limited range, and had to be hidden in potted plants and tableware; camera noise was too easily picked up, and cameras had to be encased in sound-proof coverings. Mary Pickford, one of the greatest stars of her day and a founder of United Artists, had a terrible experience with her 1929 sound film, "Coquette"; she had to strain her voice to get it picked up by the microphones, and the results were far from flattering. Her UA partner Charlie Chaplin, though he eventually embraced the idea of using musical scores on his soundtracks, put off the use of voice; aside from a phonograph recording, a one-liner ("Get back to work!"), and a nonsense song in 1936's "Modern Times," Chaplin did not use spoken dialogue in any of his films until "The Great Dictator" in 1940, though some years later he recorded narrative voice-overs for many of his early features. Nevertheless, sound, well before color, became a standard feature of film very soon after its introduction.

Next up: 3D film -- in 1922?!

Saturday, February 3, 2018

The Origins of Cinema

Although its basic technical details are clear enough, the origins of cinema are shrouded in doubt, dispute, and even death. As with other media technologies, among the earliest uses of sequential images were in scientific projects, such as those of Marey and Muybridge. The technical problem confronting them both was how to get a series of images in quick, measured sequence. Muybridge used timers and tripwires to obtain sequential images; Marey, more direct, invented a cinematic gun which "fired" a cylinder of small photonegatives; it looked somewhat like a Thompson submachine gun but was limited to 12 exposures. What was really needed was some kind of double movement -- a shutter which would open and close quickly and repeatedly, and a mechanism which would advance the photosensitive material. When the material in question was glass plates, the problem was overwhelming -- but with the invention of celluloid photo "film" by George Eastman, a solution was in sight, and the prize belonged to the inventor who could best employ it.

Louis Aimé Augustin Le Prince (above) is my personal favorite among the many candidates for first filmmaker. He had gotten his start working on painted panoramas -- great circular paintings which created a sort of Victorian virtual reality -- where his job was projecting glass plate photos onto the canvas for artists to trace. Settled in Leeds, England, he had married into a well-off family, and his father-in-law financed further experiments. Le Prince's first design was a 16-lens camera, using a series of "mutilated gears" to fire off 16 frames in short order on two strips of film. He later designed a single-lens camera, with a mechanical movement using smooth rollers (sprockets not yet having been tried) to advance the film. He planned to stage a grand début in New York City, and had rented a private mansion for his demonstration; his equipment was packed into custom-made crates, and tickets were purchased for a crossing on a luxurious Cunard liner. And yet just then, in 1890, as he was returning from visiting his brother in Dijon, France, he vanished from the Dijon-Paris express and was never seen again, alive or dead.

As with many early cinematographers, Le Prince's films do not survive. Eastman's celluloid turned out to be volatile; it could disintegrate into a brown powder, burst into flame, or even explode without warning. However, at some point, paper prints were made of three of his films, and these have been reconstructed into short, viewable sequences. The films were made in 1888, earlier than any others. His first film, "Roundhay Garden Scene," shows his family dancing about in his father-in-law's back garden; his second, "Leeds Bridge," shows traffic and pedestrians crossing a bridge in the city where he worked; the third, untitled, shows his young son playing an accordion as he dances upon a set of stairs. The only question is: with what camera were these shot? Distortions and perspective problems with the frames, as well as the fact that there are rarely more than 16 of them, suggest that the 16-lens camera is the most likely source, but some believe he used his single-lens camera for some or all of the films. If so, he was certainly the first person in the world to make what we have come to regard as cinema film.

As with other media we've encountered, the nascent art of film took a long time figuring out its subject matter. Very early films (pre-1900) tended to use a fixed camera, although movement was sometimes obtained by attaching the camera to a train (these train-front films were called "phantom rides"). Early commercial operators such as Edison were limited by the method of presentation, which was originally a wooden box known as a "Kinetoscope" which held only 50 feet of film in a loop. These brief, single-shot films, which included fixed views of buildings, street scenes, and common sights, are sometimes known as the "cinema of attractions." Vaudeville entertainers (Professor Welton's Boxing Cats), sideshows (Annie Oakley shooting clay pigeons), ventriloquists, and magicians were also invited to have their acts recorded on celluloid.

Indeed, one of the first masters of the cinema, Georges Méliès, was a magician himself, and became renowned for his 'trick' special-effects films. In one, he takes his head off his body three times, and the assembled heads join in song; in another, he blows up his head to enormous size with a bellows and a tube. Eventually, films expanded in length and began to tell sequential narratives, and actors and scene-designers trained especially in film produced increasingly ambitious subjects. One of Edison's most successful films was a pioneer in this regard: The Great Train Robbery introduced intercut narratives, switching between close and wide shots, and even some camera movement. The final scene, where the robber turns to the camera and fires, is one of cinema's most iconic.

Sunday, January 28, 2018

Earliest Sound Recordings

The history of sound recording was once thought to begin with Thomas Alva Edison's phonograph of 1877. As with many of his inventions, Edison sketched out the idea, and gave it to his engineer, John Kruesi. Tests and improvements occupied most of the year, and the patent was finally filed in December. Legend has it that the first recording was of "Mary Had a Little Lamb," recited by Edison himself. Although Edison made later recordings of the same text, almost nothing recorded on the Edison system survives in playable form until more than a decade later, with the 1888 recordings of the Handel Festival at London's Crystal Palace (one of which can be heard here).

And yet, it turns out, there are actually sound recordings which do survive from nearly 20 years earlier than Edison's invention. These were made using the Phonautograph (shown above) invented by Édouard-Léon Scott de Martinville. His device was not intended to permit the playback of sound; instead, using a sound-sensitive cone which etched its trace on paper coated with a fine layer of charcoal dust, the aim was to produce a visual record of sound. It was only in the twenty-first century that these visual traces were, with the aid of computer models, rendered back into audible sound, and even then there were glitches. The 1860 record of "Au Clair de la Lune," thought at first to have been sung by a woman, turned out to be of much lower pitch, and sung by Scott himself! This device, indeed, was extensively tested and deployed, and rumors circulate as to recordings of famous persons of the day, among them Abraham Lincoln. Such a recording would indeed be a find!

The capitalization of sound recording happened in many phases. Edison's own company, founded in 1878, though it offered the first "talking dolls," failed to find any broader market for its recordings until more than a decade later, when improvements by other inventors -- chiefly Alexander Graham Bell -- rendered the Edison system practical for widespread use. The original system of tinfoil wrapped around a grooved metal cylinder was discarded in favor of various waxy compounds, which had the advantage that, though soft enough for recording, they could be hardened through baking. Later systems enabled the making of a wax matrix, which could be used to make molds to cast duplicate cylinders, enabling mass production of commercial recordings.

One of the lesser-known aspects of the Edison cylinder system was that one could buy special "brown wax" cylinders and use them to make home recordings. This made the cylinder one of the only recording technologies, prior to home reel-to-reel and cassette tape decks, with which the end user could make his or her own recordings.

There remained problems with Edison's invention -- the acoustical horn used in recording had trouble picking up fainter sounds (one reason that brass band music and operatic singing were frequent offerings), and the various materials and needles used in reproduction all had problems with surface noise (click here to hear a modern series of recordings made using Edison's original materials). In addition, all of Edison's early recordings used "hill and dale" recording, in which the sound waves formed, and the needle later reproduced, impressions by degrees of vertical movement. This system had limited fidelity, and posed many technical hurdles; switching to a lateral (side-to-side) movement offered promise, but was not made commercially practical until Emile Berliner came up with the circular disc as opposed to the cylinder. Cylinder and disc fought it out from the late 1890's through the 1920's, when Edison finally ceased cylinder production.

All these systems were mechanical -- the actual sound waves moved the needle, and the needle physically reproduced them. The next step was what was called "electrical recording," using microphones to capture the sound, and relaying the signal to an electromagnetic cutting stylus. Mechanical systems could only be used with fairly loud instruments and voices; the ordinary spoken voice, or quieter instruments such as the guitar or banjo, could scarcely be recorded. Electrical recording, thanks to amplification, could be much more sensitive in the studio -- and much louder on playback.

Such a system did not come into wide use until the mid-1920's; by 1927, record companies were making enormous efforts to send out "field recording" vans which used this new technology to capture popular forms of music -- country blues, jug bands, fiddlers, and banjoists -- whose talents could now be cheaply recorded and mass produced. The substrate -- a mixture of shellac, carbon black, and clay -- still had a problem with surface noise (for a sample of what a record of this era would have sounded like without this issue, listen to these Louis Armstrong recordings recovered from metal masters).

The Great Depression put an end to most of these efforts, and it wasn't until after World War II that the recording "industry" began its greatest epoch. Cheap players and cheaper records -- the cost, in constant dollars, of a 45 rpm single was a fraction of that of a 78 rpm record -- along with the rise of radio as a promotional tool, turned the record business into a global, multi-billion dollar behemoth. The arrival of digital CD's at first only extended and multiplied this vast empire, in part because people bought the same music again in the new format.

And yet, with the advent of the internet and audio compression formats such as MP3, the industry began to fizzle; its old bargain of turning the ephemeral -- musical performance -- into the physical -- a disc or cylinder or tape -- was undone, as MP3's were almost as ephemeral, and as readily copied and transported, as the music itself. Since the 2000's, the CD business has essentially collapsed into a small specialty market, and even online sales have fallen off the pace (due in part to unpaid downloads, and in part to users transferring their older recordings to the new format). Music is, once again, in the hands of the people.

Thursday, January 18, 2018

Writing as Technology

We are accustomed to think of books, and print in general, as old and familiar things. To us, books are the "real" which may or may not be supplanted by the "virtual" -- Kindles, Nooks, and Google e-books. This makes it a bit difficult for us to recover the sense that the book, like the scroll before it, and the clay tablet before that, is a technical development, one which initially seemed strange to a world which had not known any means of preserving words and keeping them "stored" for another day. There's a video on YouTube, which I like to call "Book 1.0," that illustrates this perfectly. The book is no more a "natural" object than is a smartphone or an automobile; it has simply been around so long that we have gotten used to it, and now begin to fear that we may "miss" it.

Walter J. Ong, the brilliant Jesuit scholar and pupil of Marshall McLuhan, was one of the first scholars to realize and emphasize the technological status of writing. For Ong, writing not only changes our practical lives, it actually restructures our consciousness. This happens in a number of ways; our tendency to think of knowledge as persistent, as capable of being stored elsewhere -- and with it our sense that we ourselves don't have to precisely remember anything -- is one key effect. Beyond this, though, our whole sense that by naming, cataloging, and finding form in things we are in fact re-figuring the world; that our mental abstractions have shape and permanence; that there can even be such a thing as "capitalism," "Marxism," or "psychology" -- all of these, too, are after-effects of writing and print. Print, by making massive amounts of text cheap to make, distribute, and preserve, accelerated these changes; with the dawn of the internet, this process has taken another enormous leap. The disappearance of objects -- the book, the music CD, the videocassette or DVD -- and their replacement by the mere making available of media streamed from somewhere else, is one notable result of this accelerating process.

At the same time, Ong emphasized the complexity and sophistication of the non-literate mind (he disliked the term "pre-literate," as it presumes a progression toward writing as inevitable). The ancient Irish bards had to memorize hundreds of lengthy poems; in the 1930's in Yugoslavia, the scholars Milman Parry and Albert Lord, on whose fieldwork Ong drew, found singers who could, improvising on traditional formulas, reproduce epic poems of tens of thousands of lines. Such poems are as ancient as speech itself, and a few -- the Elder Edda, Beowulf, the Kalevala, and Homer's Iliad and Odyssey -- survived into the manuscript era, the print era, and are now downloadable as e-books. And yet, in this disposable era, when computers and cellphones complete the circuit from shiny new tech devices to e-rubbish in a landfill in a few short years, the old belief -- that writing something down preserves it -- may yet be reversed.

Some say that e-books aren't proper books at all. Some point to events such as Amazon's silent deletion of copies of George Orwell's Animal Farm from Kindle readers as a cautionary tale. The Pew Research Center recently completed a survey of e-books and e-reading, and some of its findings are quite unexpected.

So where do we go from here? Will e-readers be the death of the book? Will a dusty old paperback become a sort of weird antique, joining 78 rpm records, 16 mm film, and Betamax cassettes in the dead media junkpile? Or will we always, whatever else we have with them, have books?