You might find this difficult to believe, but there was a time when digital raster displays didn't exist! Shortly after I started my PhD, my research group was lucky enough to acquire a VAX-11/780, the first VAX in the University of London; but it was a while before the group could afford a framestore. During those months, the only output device I had at my disposal was a line printer. Ingenuity knows no bounds, of course, so armed only with a copy of the first edition of Gonzalez and Wintz, I rustled up something that used over-printed characters on the line printer to "display" pictures. To view these pictures, you either put them on the floor and looked at them without your spectacles on (nicely blurring everything) or, if you happened to be cursed with perfect eyesight, you pinned them up at the end of a long corridor and retreated until the characters all merged together. I have retained a tremendous fondness for pictures printed in this way, which has come to be known as ASCII art.
My favourite such pictures were not ones I printed myself: they were of Buzz Aldrin standing on the Moon, and of the Moon itself. I printed these off when I was an undergraduate but never had a machine-readable copy of my own. From time to time, I searched the Web for them but was never able to find them. Until now.
In early March 2015, I was idly poking around — and stumbled across an FTP site that contained a zip-file of ASCII Art. Actually, it contained two zip files, one in EBCDIC and the other in ASCII. Now, no-one in their right mind would encode characters in EBCDIC unless they had to, so this suggested that the ASCII art was generated on an IBM machine a long time ago. Almost shaking with anticipation, I downloaded the ASCII zip-file, unpacked it — and found the long-lost images I was looking for.
These files were not in a form that could easily be printed on modern (Unix) systems, so I wrote a bit of Python that converts them into more accessible forms. This web page describes that process.
Thankfully for posterity, the author of the over-printed images is given in a header page on each image:
PRINCETON UNIVERSITY COMPUTER CENTER CLINIC

    +------------+
    +            +
    +  COMPUTER  +
    +            +
    +  PICTURE   +
    +            +
    +------------+

THE MOON

COMPUTER TRANSCRIPTION PROCESS BY
SAMUEL P. HARBISON
COMPUTER CENTER CLINIC
PRINCETON UNIVERSITY
87 PROSPECT ST.
PRINCETON, NEW JERSEY 08540

COPYRIGHT 1973 BY S. P. HARBISON

SHEET 1 OF 5 SHEETS

THE COMPLETE PICTURE IS CONSTRUCTED BY TAPING THE SHEETS TOGETHER SO THAT
THE LAST CHARACTER ON THE FIRST SHEET MEETS THE FIRST CHARACTER ON THE
NEXT SHEET WITH NO WHITE SPACE BETWEEN THEM. THIS WILL REQUIRE TRIMMING
THE SHEETS WITH SCISSORS. DO NOT PUT TAPE ON THE PRINTED SIDE OF THE
PAPER; TAPE FROM BEHIND.
so this puts Sam Harbison's work six or seven years before I started doing the same thing (grr). He was probably being quite brave in doing this: I was told off by The Powers That Be when printing my copies of these pictures as an undergraduate because it was a "misuse of computer time".
If you search for Harbison's name on the web, you'll turn up a few pages that refer to these images, the most interesting of which is Mike Loewen's, because he was able to make contact with Harbison. For posterity, the following is taken verbatim from Mike Loewen's website, with Loewen's questions italicized:
What process was used to digitize/scan the original pictures?
I was at Princeton from 1970-74 as an undergraduate in Mathematics/Computer Science (before there was a CS department). This work was done around 1973.
I worked part-time in the Computer Graphics Laboratory of the Department of Biochemistry. Biochem had a digital densitometer that was used to scan X-ray diffraction films of crystals; that data was used to determine the structure of large molecules like myoglobin. (The CGL had a state of the art Evans and Sutherland LDS-1 vector graphics computer, which I programmed to display 3D stick models of those big molecules. And to play Asteroids, but that's a different story...) The densitometer included a drum scanner on which transparency film was placed, and it wrote 8-bit gray-scale data to a 9-track magnetic tape. I've forgotten what the DPI was.
I also worked at the Princeton Computer Center Clinic, the "help me!" office of the computer center. That gave me the opportunity to spend long hours with the IBM 360/91 computer, get 9-track tapes, get favors from the operators, etc.
I had seen some pictures rendered using printed characters before, and I wanted to try it out. I took a black-and-white 35mm photograph of the picture I wanted to scan, got the film processed at a camera store, mounted the negative on the densitometer, and loaded a blank tape. I want to say the scan took 15-30 minutes. The machine was only lightly used those days, so I had no problem getting time on it. I carried the tape to the computer center, where I dumped the data to a disk file. (We used punched cards and IBM 360 Job Control Language in those days. Keeping data in a disk file instead of on tape or cards was very cool.) I wrote a FORTRAN program that would give me simple dumps of the data so I could find the data rectangle that was the picture in the middle of the larger data "window"; you couldn't precisely control how much the densitometer scanned. I also had a program that rotated or flipped the data so that the picture was oriented the right way.
How were the digitized values converted into overstrike patterns?
The rendering itself was done with a FORTRAN program that evolved over time as I tried to get better and better quality. It read data that specified the rendering process: row/column numbers that "cropped" the data, and some numbers that specified the low-density cutoff (all values below were black) and the high-density cutoff (all values above were white). The densities in between were linearly mapped to approximately 16 overstrike patterns that formed my printer grayscale. Since the data was from a photographic negative, the scale had to be reversed, of course. Choosing the right mapping parameters was trial-and-error. I went through lots of bad renderings before getting the parameters correct for each picture. I had to discard some pictures--they simply didn't look good in black-and-white. The program had the ability to look only at every second or third data point, so I could print smaller renderings; I think I averaged the surrounding data points to get a value to print. Finally, while the data points were originally recorded by the densitometer on a square grid, the horizontal (character-to-character) and vertical (line-to-line) spacings on the printer were not equal, so the program had to perform more scaling so that the final picture came out with the correct proportions. I used no complicated mathematical techniques, like filtering or sharpening. BTW, these were "chain printers"; see http://en.wikipedia.org/wiki/Line_printer.
My print jobs specified all-white paper, so they were queued until the regular times the operators mounted white paper for people printing papers or theses with the "roff" text formatter. (Green/white striped paper was the norm.) The printers were in the hardware room. I could see and hear them, but I never had hands-on access to them. When the overstriking job ran, the operators had to be reassured that the funny noise the printer made was not a malfunction. They took the printed paper off the printer and placed it in bins by user name, which I then retrieved. Most pictures were printed in multiple "strips" of paper that had to be carefully trimmed and taped together by hand. The largest picture was perhaps 4 feet square, and looked best at a distance of about 20 feet. (For that picture, of the moon, I scanned a high-quality 2.5-inch negative a friend provided.) I assembled the pictures and, every so often, late at night, I taped them to the white cinder-block walls of the computer center's "ready room" (where jobs were submitted and people waited for their output to be placed in bins). Most people were impressed; no one asked me to remove them.
I chose the particular overstrike patterns by trial-and-error. I figured out from manuals how to direct the 132-character line printer to print one line over the previous one, and how to avoid skipping a few lines at the top and bottom of each page. I also increased the lines-per-inch to minimize the gaps between lines. I printed out a full page of each overstrike pattern, and arranged the pages on the computer center floor at night, eyeballing them to see if they formed a smooth gray scale. If not, I tried some other patterns. A few attempts actually ripped the paper. Eventually, I got ones that were good enough. By the way, not only must the darkness be correct, but the overstrike pattern needs to be relatively symmetric and centered in the character position; otherwise adjacent gray levels aren't pleasing. I can't remember the maximum number of overstrikes--I think a couple of gray levels used three or four. These overstrikes were chosen specifically for our printers, their EBCDIC character font, the black ink ribbons, and the paper the University used. With different printers, I doubt if the gray scales would be quite right.
The programs and the raw data perished after I left Princeton. I left the print files on disk, so anyone could print out copies of the pictures after I left. It would be easy to reproduce the technique, but the interesting challenge was working within the limitations of the printers in those days.
Was the Mona Lisa one of yours, as well?
No. Many of my pictures had my name printed in the bottom corner. Not all. There were about a dozen of them: a large picture of the moon; Buzz Aldrin on the moon; Mr. Spock; a Playboy Playmate (Lenna Sjooblom, Nov 1972, and probably others); my girlfriend's terrier; a close-up of my cat's face; others I can't remember.
Should these be printed at 6 or 8 lines per inch?
The pictures that I printed were not noticeably stretched in either direction, nor did they have wide whitespace gaps between lines. I tried to compress the lines (increase the LPI) so that each character was surrounded by an even amount of whitespace. So, if the picture you have is stretched, or there is much space between lines, then the LPI is wrong. The rendering was so optimized for those IBM line printers and ink ribbons, I doubt if printing them on other printers could ever be quite right.
Hope this helps.
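The density-to-pattern mapping Harbison describes (a low cutoff below which everything is black, a high cutoff above which everything is white, and a linear ramp onto roughly 16 overstrike patterns in between) can be sketched in a few lines of Python. The pattern list and cutoff values below are illustrative inventions, not Harbison's actual grey scale:

```python
# Illustrative grey scale, darkest first; each string is the set of
# characters over-printed in one character position.  Harbison's real
# patterns were tuned to his printers and are not recorded here.
PATTERNS = ["$H#", "M&", "W", "#", "8", "X", "O", "A", "=", "+",
            "*", ":", "-", "'", ".", " "]

def density_to_pattern(value, low, high, patterns=PATTERNS):
    """Map an 8-bit density value to an overstrike pattern.

    Values at or below `low` are black, values at or above `high` are
    white, and those in between map linearly onto the grey scale.
    """
    if value <= low:
        return patterns[0]                 # solid black
    if value >= high:
        return patterns[-1]                # white (blank)
    index = int((value - low) / (high - low) * (len(patterns) - 1))
    return patterns[index]
```

For a scan taken from a photographic negative, as Harbison's were, the pattern list would simply be reversed before use.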
The printable images in the zip-files were produced by a Fortran program. In Fortran IV (the most up-to-date version in 1973) and subsequently in Fortran 77, the first character of each output line is a "carriage control" character, which the line printer's driver interprets as shown in the following table.
|[space]||start a new line before printing|
|1||start a new page before printing|
|0||leave a blank line before printing|
|-||leave two blank lines before printing|
|+||over-print the current line|
The last entry in the table is obviously the most important one here.
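As a rough illustration of what interpreting these characters involves (this is a sketch of the idea, not the author's actual program), over-printing can be represented by a carriage return, which a printer will honour but a modern terminal will simply overwrite:

```python
def convert_carriage_control(lines):
    """Translate Fortran carriage-control lines into a plain text stream.

    A sketch only: "1" becomes a form-feed, "0" and "-" become extra
    blank lines, and "+" folds the text onto the previous line with a
    carriage return so that the characters over-print.
    """
    out = []
    for line in lines:
        cc, text = line[:1], line[1:]
        if cc == "1":                               # new page first
            out.append("\f" + text + "\n")
        elif cc == "0":                             # one blank line first
            out.append("\n" + text + "\n")
        elif cc == "-":                             # two blank lines first
            out.append("\n\n" + text + "\n")
        elif cc == "+":                             # over-print previous line
            if out:
                out[-1] = out[-1].rstrip("\n") + "\r" + text + "\n"
            else:
                out.append(text + "\n")
        else:                                       # space: ordinary new line
            out.append(text + "\n")
    return "".join(out)
```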
I've written a Python program, aaa (for ASCII art archaeology), that converts these carriage control characters into what more modern systems regard as end-of-line characters and form-feeds. (If you download it, you'll also need a copy of my EVE library for it to run.) This works on Unix-like systems such as MacOS X and Linux; I cannot comment on compatibility with Windows because I do not use it. By default, aaa is used along the lines of:
aaa moon.lpt > moon.txt
to convert the picture of the Moon. If you are able to print directly to the printer, just enter a command like
lpr moon.txt
to print it; but if, like me, you find this tricky and need to produce PostScript, then you can use GNU enscript to convert the text file into PostScript. I find that
enscript -MA4 -r -B -fCourier8 -s0 -omoon.ps moon.txt
works nicely.
aaa supports a number of qualifiers that govern what the program does; you can find out what they are from the program's built-in help.
Having written the program to process the carriage controls, I wondered if I could take it further by re-constituting the original pixels. Each set of over-printed characters corresponds to a particular grey level, so all I need to do is find the different sets of possible over-printed characters and work out a mapping of them onto grey levels. That didn't sound too difficult. The number of grey levels will be much smaller than in the original image, of course, but that shouldn't really be a problem.
How does one determine the mapping of characters onto grey levels? At first, I thought I'd have to work out a table of mappings outside the program, but then I realized that I actually had the answer in the characters themselves: if each character's shape is a set of pixels that are either set or clear, then working out all of the pixels that are set gives the blackness of that location in the image. Fortunately, I had the shapes of characters to hand as part of my Python computer vision package, EVE. These were produced by my friend and video effects guru, Nick Glazzard. Python has dictionaries, data structures indexed by strings, so this should be pretty straightforward. A little code hacking later, and I was able to have aaa convert the over-printed characters into grey-level pixels and re-order the vertical stripes of image intended for printing into a correctly-shaped image.
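The pixel-counting idea can be sketched as follows. The tiny 3×3 glyph masks here are made up purely for illustration; the real program uses proper character-shape bitmaps from the EVE library:

```python
# Made-up 3x3 "glyphs": one bit per pixel, read row by row.  ORing the
# masks of the over-printed characters and counting the set bits gives
# the blackness of that character cell.
GLYPHS = {
    ".": 0b000_000_010,
    "-": 0b000_111_000,
    "|": 0b010_010_010,
    "X": 0b101_010_101,
}

def blackness(overstruck_chars):
    """Fraction of pixels inked after over-printing the given characters."""
    mask = 0
    for ch in overstruck_chars:
        mask |= GLYPHS.get(ch, 0)          # ink accumulates, never subtracts
    return bin(mask).count("1") / 9.0      # set bits / pixels per cell
```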
Running the program on the Buzz Aldrin and Moon images, I found the contrast was poor and that the visual appearance could be improved by histogram-equalizing the image produced by aaa. The results are shown below. While the Aldrin image will be familiar to most of us, I'm probably the first person to have seen that Moon image properly since the early 1970s! The command to do this conversion was something like
aaa -noprint -display -resample moon.lpt
The -resample qualifier tells aaa to take the rectangular-shaped pixels needed to make the over-printed image look right when printed and re-sample them to be square.
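The re-sampling itself can be sketched as a nearest-neighbour vertical stretch. The 10:6 cell aspect ratio below is an assumption based on typical line-printer spacing (10 characters per inch, 6 lines per inch), not necessarily the ratio aaa uses:

```python
def resample_rows(rows, aspect=10.0 / 6.0):
    """Stretch a list of pixel rows vertically by nearest-neighbour.

    Each character cell on the printer is taller than it is wide, so a
    row of grey values must be repeated (fractionally) to give square
    pixels in the reconstructed image.
    """
    height = round(len(rows) * aspect)
    return [rows[min(int(y / aspect), len(rows) - 1)] for y in range(height)]
```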
The following table lists the over-printed images contained in the zip-file, produced from its contents file. There is a remark in it that it was
CATALOGED AND RE-RECORDED ON NON-LABEL TAPE BY TOM MULLINS, JULY, 1980 AT OHIO UNIVERSITY
(I've left this in upper case for posterity: PDP-8s used 6-bit characters and so couldn't represent lower case.) Note that there are quite a few nude images.
|astronaut||Buzz Aldrin with the LEM reflected in his visor|
|beethovn||from a painting of Beethoven|
|bridge||aircraft over the Golden Gate bridge|
|dean||John Dean on the cover of Time magazine|
|dog||"Famous painting of some dog"|
|gothic||farmer and wife with pitchforks|
|lgcat||large picture of a cat|
|lgsept||large picture of Miss September|
|mona||Leonardo's famous painting|
|peasant||market scene (by Breughel?)|
|river||boy fishing by a river|
|smcat||small picture of a cat|
|smsept||small picture of Miss September|
|spock||Mr Spock holding a model of the Enterprise|
|starnite||Van Gogh's night painting|
|sunday||Sunday afternoon scene|
|sylvette||painting of Sylvette|
If you have other vintage ASCII art images, I'm interested in having a copy of them; please contact me.
If you like ASCII art and would like to have a go at printing your own, then you have several options. Inevitably, there are websites that purport to do this. The EVE library alluded to above includes a routine, ascii_art, that lets you do it, including splitting the image up into vertical stripes. If you're a hands-on kind of person, then the EVE routine lets you specify your own set of characters to use when over-printing — the default set is a compromise between print quality, speed of printing and use of ink and is what I used back in the early 1980s; you should be able to do better.
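For the simplest possible start, without any over-printing at all, a character ramp and a vertical aspect correction are enough. The ramp below is an arbitrary choice of mine, and the crude 2:1 row-halving is an approximation; neither reflects the defaults of EVE's ascii_art routine:

```python
RAMP = "#@8&o:*. "   # roughly dark to light; an arbitrary choice

def ascii_art(pixels):
    """Render rows of 8-bit grey values (0 = black) as lines of text.

    Every second row is dropped as a crude correction for character
    cells being roughly twice as tall as they are wide.
    """
    lines = []
    for row in pixels[::2]:
        line = "".join(RAMP[min(p * len(RAMP) // 256, len(RAMP) - 1)]
                       for p in row)
        lines.append(line)
    return "\n".join(lines)
```

Printing the result of ascii_art on a monospaced terminal, and stepping well back from the screen, gives much the same squint-to-see-it effect as the original line-printer output.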