|Clock rate||2 MHz (1.9968 MHz to be exact)|
|ROM||16 KB, holding the BASIC interpreter and FCS (File Control System)|
|User RAM||8 KB, 16 KB, or 32 KB standard|
|Display||64x32 character cell display, with 8 fg and 8 bg colors; cells could display either one of 128 character glyphs, or a 2x4 block graphics pattern, enabling 128x128 dot addressable graphics.|
|Cursor||hardware generated; blinking white overbar and underbar|
|Storage||5.25" single sided floppy disk; 40 tracks, 10 sectors/track, 128 bytes/sector (50 KB / disk)|
|I/O||RS-232 serial interface|
Memory Map (link)
|0x0000 - 0x3FFF||16 KB of ROM (BASIC and FCS)|
|0x4000 - 0x5FFF||empty 8 KB range|
|0x6000 - 0x6FFF||fast access to 4KB of video RAM|
|0x7000 - 0x7FFF||normal access to 4KB of video RAM|
|0x8000 - 0xFFFF||Up to 32 KB of program RAM|
The empty 8 KB range at 0x4000 to 0x5FFF was put to use in various after-market products. One obvious product put 8 KB of RAM there; another put a socketed EPROM in that address range, allowing storage of various preloaded programs, such as an assembler and debugger. One product even allowed stacking eight 8 KB banks into that one range, with software selection of which bank was active at any given time.
The memory mapped video address range is duplicated to allow a trade-off between speed and quality. Because the display RAM is multiplexed between the video generation circuitry and CPU accesses, something must be done when both want to access video memory at the same time. In the normal address range, where the user is typically just poking a byte at a time while typing or printing text, an access stalls the 8080 CPU until the horizontal blanking interval, at which point the video generation doesn't need to access RAM.
The other address range, 0x6000 to 0x6FFF, gives priority to the CPU over video generation. If the CPU accesses the display RAM at the same time the video generation logic does, the CPU wins and there is a visible "tear" in the video stream. This mode is used only when the ROM is clearing the screen, where 4 KB must be written as quickly as possible, and a little tearing is not a problem since the screen is going to be displaying a solid color anyway.
Port Map (link)
|0x00 - 0x0F||TMS 5501 I/O and timer chip|
|0x10 - 0x1F||alias of the range 0x00 - 0x0F|
|0x60 - 0x6F||SMC 5027 CRT timing controller chip|
|0x70 - 0x7F||alias of the range 0x60 - 0x6F|
|0x80 - 0x9F||32 bytes of ROM (read only)|
There is no deep reason why a couple of the I/O port ranges appear twice; it is just incomplete decoding of the port address bus, saving a bit of logic.
The purpose of each of these ranges is described next.
5501 I/O Chip (link)
The Compucolor design relied on LSI chips as much as possible in order to fit into the small cabinet at a reasonable price. Parallel and serial communications and a handful of programmable timers were managed by the TMS 5501 I/O chip.
The 8b parallel port was multiplexed with a bit of external logic in order to scan the keyboard and to access the internal floppy disk drive. The machine could also be configured with a second, external floppy drive, which was connected to the ribbon cable that also connected the keyboard to the computer.
The 5501 also contains an async serial controller, which was overloaded to provide two services for the computer. In one mode, the chip operated as intended and drove the RS-232 serial port, allowing the usual range of link speeds (110 bps to 9600 bps) and stop bit options. The other mode, described later, used an undocumented feature of the chip to drive the floppy interface.
The timers had a count range of 8 bits, and each counter advanced one count every 64 microseconds or so. Although the chip had five timers, not all of them were used by the Compucolor II.
- Timer #1: unused
- Timer #2: available to the user
- Timer #3: keyboard scan
- Timer #4: unused and unusable
- Timer #5: unused and unusable
The chip also accepted an input derived from the vertical sync timing of the video timing generator. This is a 60 Hz signal which goes active once every field. This signal would interrupt the 8080 for a moment, while a small routine would update three bytes: 33211 = 0x81BB = hours, 33210 = 0x81BA = minutes, 33209 = 0x81B9 = seconds.
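The clock-update logic can be sketched as follows. This is an illustrative Python model, not the ROM code; the 60-field divider variable is an assumption about how the ROM turns 60 Hz fields into seconds.

```python
# Sketch (not the original ROM routine): a 60 Hz vertical-sync interrupt
# maintaining a clock in three RAM bytes, at the addresses given in the text.
HOURS_ADDR, MINUTES_ADDR, SECONDS_ADDR = 0x81BB, 0x81BA, 0x81B9

ram = {HOURS_ADDR: 0, MINUTES_ADDR: 0, SECONDS_ADDR: 0}
field_count = 0  # hypothetical divider; the ROM's actual bookkeeping may differ

def vsync_tick():
    """Called once per 60 Hz field; advances the clock every 60th call."""
    global field_count
    field_count += 1
    if field_count < 60:
        return
    field_count = 0
    ram[SECONDS_ADDR] += 1
    if ram[SECONDS_ADDR] == 60:
        ram[SECONDS_ADDR] = 0
        ram[MINUTES_ADDR] += 1
    if ram[MINUTES_ADDR] == 60:
        ram[MINUTES_ADDR] = 0
        ram[HOURS_ADDR] = (ram[HOURS_ADDR] + 1) % 24

# Simulate one hour of fields: 60 fields/s * 3600 s.
for _ in range(60 * 60 * 60):
    vsync_tick()
```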
Keyboard Scanning (link)
The keyboard used by the Compucolor II was connected via a ribbon cable to the 5501 parallel port. The scheme was similar to what some other microcomputer systems, like the TRS-80, did. Rather than having a self-sufficient keyboard which used a somewhat expensive LSI chip to map key presses to ASCII codes, the Compucolor instead would scan the keyboard itself.
At roughly a 60 Hz rate, timer #3 would fire, issuing an RST 3 instruction to the 8080. That routine would drive one row of the keyboard, then read the parallel port to sense which keys connected to that row were depressed. After scanning the entire keyboard matrix, software would turn the result into an extended ASCII code. Because there were no diodes to ensure each row could be sensed independently of key depressions on other rows, the keyboard didn't have any rollover capability. One advantage of this style of scanning is that for game play, it was possible to tell when a given key was depressed, held down, or released, not just that it had been pressed.
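This style of scanning can be sketched as below. The 8x8 matrix size, helper names, and key position are hypothetical, not the actual Compucolor layout; the point is how comparing successive scans distinguishes press, hold, and release.

```python
# Sketch of software keyboard matrix scanning, assuming a hypothetical
# 8x8 matrix; the real Compucolor matrix layout differs.
pressed = set()          # (row, col) pairs currently held down

def drive_row_and_read(row):
    """Emulates driving one row line and reading the 8 column bits back."""
    bits = 0
    for col in range(8):
        if (row, col) in pressed:
            bits |= 1 << col
    return bits

def scan_matrix():
    """One full scan, as the RST 3 handler would do at ~60 Hz."""
    state = {}
    for row in range(8):
        bits = drive_row_and_read(row)
        for col in range(8):
            state[(row, col)] = bool(bits & (1 << col))
    return state

prev = scan_matrix()
pressed.add((2, 5))      # a key goes down between two scans
curr = scan_matrix()
# A key that is down now but not last scan was just pressed:
just_pressed = [k for k in curr if curr[k] and not prev[k]]
```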
The shift, caps lock, control, and repeat keys on the keyboard could be independently sensed. Most keys could produce four values, depending on which combination of the shift and control keys was pressed at the same time a regular key was struck. With the deluxe keyboard option, all 256 values could be directly produced by the keyboard.
Video Display (link)
The most innovative part of the Compucolor, as compared to most microcomputers of the same time, was its color display. Although it was fundamentally a character oriented display (not bitmapped), it was possible to mix text of two different sizes with 128x128 graphics in eight different colors.
Here is a quick summary of the video display, described in more detail shortly.
- 64x32 character cell display
- A given cell could display either one of 128 character glyphs, or any of the 256 combinations of a 2x4 block graphics pattern. The glyphs did not contain any lower case letters, and instead had a variety of icons and shapes available.
- Glyphs were 5x7, drawn on a 6x8 pixel grid
- Character glyphs had the option of being drawn in a 2x tall mode
- Each cell had its own choice of one of 8 foreground and 8 background colors
- Each cell could specify a 1.875 Hz blinking mode, where the foreground color would be drawn as black
Timing was controlled by an SMC 5027 CRT timing controller. This chip had sixteen registers, some readable, some writable, to control display timing and to set the position of a hardware cursor. It also had a row offset register, allowing efficient scrolling of the display without having to shuffle all the bytes to effect the scroll. Except for the scroll offset and the cursor addressing registers, the timing was set shortly after the CPU came out of reset, and the expectation was that it wouldn't be written again. In fact, the user was warned that misprogramming some of the timing registers could damage the CRT control electronics. This is because the operation of the CRT timing depended on tuned circuits, resonant at the expected frequency; if those frequencies were off, large currents could flow through certain coils, which have less impedance at low frequencies than at the frequency they were designed for.
Although the Compucolor was built into a TV set cabinet, the electronics were not that of a regular TV. There was no need to modulate and demodulate the video signal: the graphics generator drove the CRT red, green, and blue guns directly. Not only did this save costly electronics, the color bandwidth was greater than NTSC would permit, leading to crisper graphics than would have been possible with modulation.
It would have been simple enough to initialize those CRTC registers from a table in ROM, but the Compucolor design didn't do that. If it had, tweaking the timing, or changing it for the European market, would have meant creating a new 16 KB mask ROM chipset. Instead, a small 32 byte PROM, read from 8080 input ports 0x80 to 0x9F, provided the timing values. In practice, only the first seven bytes were used.
Video was generated from a 4 KB memory mapped region of the address space. Each row displayed 64 characters, and there were 32 rows on the screen. Each of the 2K characters was represented by two bytes: a character byte, and an attribute byte.
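Under that layout, each cell's two bytes sit at a fixed offset from the start of video RAM. A sketch of the addressing, assuming a simple linear layout with the CRTC scroll offset at zero (the row offset register can rotate rows within the 4 KB):

```python
VRAM_BASE = 0x7000   # normal (CPU-stalling) window; 0x6000 is the fast alias

def cell_addresses(row, col):
    """Byte addresses of the character and attribute bytes for one cell.

    Assumes a linear layout with the CRTC scroll offset at zero; the
    hardware's row offset register can rotate rows within the 4 KB.
    """
    assert 0 <= row < 32 and 0 <= col < 64
    offset = (row * 64 + col) * 2
    return VRAM_BASE + offset, VRAM_BASE + offset + 1

char_addr, attr_addr = cell_addresses(0, 0)    # top-left cell
last_char, last_attr = cell_addresses(31, 63)  # bottom-right cell
```

Note that 32 rows of 64 cells at two bytes each exactly fills the 4 KB window, 0x7000 through 0x7FFF.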
When the "plot" bit (bit 7) of the attribute byte was low, it indicated that the character byte was to be interpreted according to the table below. Bits 6:0 of the byte selected one of 128 glyphs: normally the letters A-Z, the digits 0-9, and the usual ASCII punctuation, but also a wide variety of small icons and shapes that could be pieced together.
Bit 7 of the character byte was used to indicate whether the character should be normal height (bit 7=0) or double height (bit 7=1). When this bit is set, even rows display the top half of the selected glyph, doubling up each scan line; odd rows display the bottom half of the selected glyph, doubling up each scan line. In order to see a coherent character, the same character and attribute bytes needed to be stored in corresponding positions of an even/odd row pair. That was enforced only through software convention, and by poking memory one could do weird things like draw the top half blinking and the bottom half not, or the top half one color and the bottom half something else.
|even byte (character)||odd byte (attributes)|
|tall||glyph code||0||blink||background||foreground|
When the "plot" bit (bit 7) of the attribute byte was high, it indicated that instead of generating one of the 128 glyphs, the 8b of the character byte should be displayed instead as a 2x4 grid of fat pixels.
|even byte (character)||odd byte (attributes)|
|2x4 pixel block||1||blink||background||foreground|
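The attribute byte fields in the tables above can be decoded as sketched below. The field order follows the tables; the exact bit positions of the background and foreground fields are an assumption.

```python
def decode_attribute(attr):
    """Splits an attribute byte per the tables above.

    Bit 7: plot flag (1 = 2x4 block graphics), bit 6: blink,
    bits 5:3: background color, bits 2:0: foreground color.
    The field ordering follows the tables; the bit positions of the
    background/foreground fields are an assumption.
    """
    return {
        "plot":       bool(attr & 0x80),
        "blink":      bool(attr & 0x40),
        "background": (attr >> 3) & 0x07,
        "foreground": attr & 0x07,
    }

a = decode_attribute(0b1100_1010)   # plot + blink, background 1, foreground 2
```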
Floppy Disk Drive (link)
The Compucolor II was ahead of the microcomputer curve in including a floppy disk as a standard feature. In 1978, TRS-80s and Apple IIs came standard with a cassette, and a floppy was many hundreds of dollars more. The implementation of the floppy drive was simultaneously clever and dreadful, but more importantly, inexpensive.
Low Level Details
As mentioned briefly before, the 5501 has a single serial channel that normally operates at a maximum of 9600 bps. This same logic is used to serialize/deserialize data to/from the floppy disk. This means that when reading from or writing to the floppy, the serial port is disconnected and any transfers during that time are lost.
It would normally also mean that the floppy would be pathetically slow. However, the 5501 has an undocumented test mode where the baud rate divider is cut by a factor of eight, meaning that the serial interface can run at 9600*8, or 76800, baud. (Allowing the chip to run at a higher speed allows the chip tests to complete faster, saving money.) This fares poorly as compared to other contemporary disk controllers which typically did 125 Kbps or 250 Kbps. Because the disk turns at a fixed 300 RPM, the bit rate also dictates the disk capacity:
(0.2 s / revolution) * (76800 bits/s) / (10 bits / byte) = 1536 bytes / track
Once other overheads are factored in, the disk format stored 10 sectors of 128 bytes per sector, or 1280 bytes per track. Being a single sided drive with 40 used tracks, the total capacity of the drive was 51,200 bytes.
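The capacity arithmetic above can be checked mechanically:

```python
# The capacity arithmetic from the text, spelled out in integer math.
RPM = 300
BITS_PER_SECOND = 9600 * 8     # undocumented 5501 test mode -> 76800 baud
BITS_PER_BYTE = 10             # start bit + 8 data bits + stop bit

bits_per_rev = BITS_PER_SECOND * 60 // RPM          # 0.2 s worth of bits
raw_bytes_per_track = bits_per_rev // BITS_PER_BYTE # upper bound per track

# After formatting overhead: 10 sectors of 128 data bytes per track.
formatted_bytes_per_track = 10 * 128
total_capacity = 40 * formatted_bytes_per_track     # 40 usable tracks
```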
To save money, Compucolor bought the raw drive mechanism from Wangco (later bought by Siemens) and used their own drive electronics. The electrical interface was rather simple, and anything not strictly necessary was not supported. The internal disk drive was connected via a 16-pin socket on the main logic board:
- GND reference, common to analog, digital, and motor circuits (yikes!)
- +12V to drive the motor
- +5V to power the floppy logic circuits
- drive select
- r/w head stepper phase[2:0]
- read data
- write enable
- write data
Note there is no sensor for reporting track 0, and no write protect notch sensor. To address the first problem, the software blindly steps the r/w head out 42 times when it wants to home the head to track 0. The source code for FCS literally calls this the POUND routine. For the second problem, well, there was no concept of write protected disks on the Compucolor.
When the drive is selected, the motor spins up the floppy. The phase[2:0] signals explicitly drive the r/w head stepper motor. At a later point the design switched to a four-phase stepper; although the interface is the same, the phase controls are decoded into a four-phase drive to the stepper motor. The 3-phase floppies had one step per track, but the 4-phase floppies had half the step size, requiring two steps per track.
The 5501 has a simple RS-232 style serial interface: one start bit, 8 data bits, one stop bit, costing 10 bits per byte. No modulation scheme is employed in recording to the disk (such as FM, MFM, or GCR), which is problematic. It means the data stream might have long-term DC content, yet the recording medium is incapable of tracking it; it also means that there are no illegal bit encodings, so there are no easily distinguished patterns to sync to; it also means that the read logic cannot use AGC (automatic gain control) and must have an absolute threshold for distinguishing a one bit from a zero bit. Imagine having one millisecond of no bit transitions and then suddenly getting a burst of 1s and 0s to decode flawlessly; it is a tough problem.
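The 10-bit framing can be sketched as below; LSB-first data ordering is assumed (standard async serial practice) rather than stated by the text. The sketch also shows the worst case for clock recovery the paragraph above describes.

```python
def frame_byte(value):
    """Frames one byte the way an async serial interface like the 5501 does:
    a 0 start bit, eight data bits (LSB first -- an assumption here, per
    common async practice), and a 1 stop bit. The idle line sits at mark (1).
    """
    bits = [0]                                    # start bit (space)
    bits += [(value >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                                # stop bit (mark)
    return bits

# Worst case for the recording scheme: 0x00 yields nine 0 bits in a row
# with only the lone stop bit providing a transition.
worst = frame_byte(0x00)
```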
Without any simple sync token, how can the system tell where a sector begins? The answer is to have a large gap between sectors without any transitions at all. If the software sees about five byte times of mark (1), it assumes it is an intersector gap and begins looking for a start bit.
The lack of modulation on the data signal also made the drive quite sensitive to the disk speed. Most drives have a PLL which allows the data stream to be recovered even with disk speed variations, as there is guaranteed to be a transition at least every bit (FM) or every few bits (GCR). Because the Compucolor used the raw 5501 serial data stream, in the worst case there might be eight '0' data bits following the '0' start bit before the final '1' stop bit. If the disk speed was off by a few percent, especially if the system which wrote the sector was different from the system reading it, bits could be dropped or inserted.
There was another aspect which made the internal floppy drive on the Compucolor II troublesome. The unit didn't shield the drive at all, putting the r/w head and associated analog amplifier and signal conditioning circuitry inches away from a high voltage CRT and also the switching power supply. Some people removed the floppy and mounted it externally to the monitor to reduce the problem. Compucolor actually had the audacity to release a "Special Note to Customers" noting various defects of the machine (but without any indication of how to fix them), stating:
5. The Disk Drive may experience some READ errors. This is a normal occurrence with the Disk Drive mounted so close to the CRT display.
Sector Level Details
The disk has 41 tracks, but track 0 contains nothing and isn't used by the system. Physical track 1 is the first logical track. Because there are 40 tracks in use and each track has 10 sectors per track, there are 400 sectors total. FCS (the File Control System) routines treat the disk as 400 sectors, rather than a track/sector addressing scheme.
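That linear addressing can be sketched as a simple mapping; sequential sector ordering within a track is assumed here (any interleave the formatter might apply is not modeled):

```python
def linear_to_physical(n):
    """Maps an FCS linear sector number (0..399) to (physical track, sector).

    Physical track 0 is unused, so logical track 0 is physical track 1.
    Sequential within-track ordering is an assumption; any interleave
    applied by the formatter is not modeled.
    """
    assert 0 <= n < 400
    return n // 10 + 1, n % 10

first = linear_to_physical(0)     # first usable sector
last = linear_to_physical(399)    # last sector on the disk
```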
Each sector begins with about 10 byte times (100 bits, or roughly 1.3 ms) of gap -- nothing but mark ('1') state and no transitions. The sector is broken into two pieces: an ID block and a Data block. The ID blocks are created at the time the disk is formatted. Subsequent write operations overwrite the data block but don't touch the ID block associated with a given sector.
The ID block consists of addressing information and a CRC to guard against bad reads.
- 0x55 ID mark byte
- track number byte
- sector number byte
- low CRC byte
- high CRC byte
The Data block consists of the following:
- 0x5A Data mark byte
- 128 data bytes
- low CRC byte
- high CRC byte
The 16b CRC guards against corruption. If there is a read error, the only recovery mechanism is to retry the read. There is no sophisticated error recovery scheme.
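The two block layouts can be sketched as below. The document doesn't specify the CRC polynomial, its initial value, or whether the mark byte is covered, so the CRC-16-CCITT parameters and coverage here are purely illustrative.

```python
def crc16(data, poly=0x1021, crc=0x0000):
    """Bitwise CRC-16. Stand-in parameters: the document doesn't give the
    polynomial FCS actually uses, so CCITT values here are illustrative."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def id_block(track, sector):
    """0x55 mark, track, sector, then CRC low byte and high byte."""
    body = bytes([0x55, track, sector])
    crc = crc16(body)   # coverage of the mark byte is an assumption
    return body + bytes([crc & 0xFF, crc >> 8])

def data_block(payload):
    """0x5A mark, 128 data bytes, then CRC low byte and high byte."""
    assert len(payload) == 128
    body = bytes([0x5A]) + payload
    crc = crc16(body)
    return body + bytes([crc & 0xFF, crc >> 8])

blk = id_block(1, 0)
```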
The discussion above omitted one detail. When writing a sector, the drive actually starts in read mode, scanning for intersector gaps. Once a gap is found, the ID block is read. If the wrong track/sector address is found, it simply waits for the next sector to come around and tries again. Once the right track and sector are found, the software switches on the write enable and then writes the data block. Because it takes a number of cycles to identify that the right sector was found and to switch the state of the write line, there is some ambiguity as to when the cut-over will happen. To solve this, the formatter writes three dummy 0xFF bytes between the ID block and the Data block. When reading a sector, the software discards the first few bytes it might read until it sees the 0x5A sync byte.
The Compucolor II Disk Format.pdf document contains more information about this level of the disk encoding.
File System Level Details
The predecessor of the Compucolor II was the ISC 8001, which originally used an 8-track drive for storage. The floppy disk system is only slightly more sophisticated than that used by the 8-track drive. Unlike most contemporary (or modern) disk systems, the file system software was burned into ROM and thus couldn't ever be enhanced. Most other systems have a tiny ROM routine to read a boot sector from the disk, which then loads in a second level disk operating system, allowing for bug fixes and enhancements after the fact. On the other hand, by putting all of FCS in ROM, there was no need to waste precious disk space on every floppy to hold it.
FCS views the disk as 400 sectors, sector 0 through sector 399. Sector 0 starts with a small descriptor indicating how many sectors are to be used for the disk directory, and also contains a 10 byte volume label. The remainder of sector 0 contains 5 directory entries. If more than one sector is allocated for the directory (the default is three sectors), each subsequent block contains six file directory entries.
Each file entry occupies 21 bytes and contains the following information:
- byte 0: attribute
- bytes 1-6: 6 character filename
- bytes 7-9: 3 character file type (eg, BAS, TXT, DOC, PRG)
- byte 10: file version
- bytes 11-12: start block
- bytes 13-14: size, in blocks
- byte 15: last block size
- bytes 16-17: load address
- bytes 18-19: start address
- byte 20: reserved
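A parser for this 21-byte layout might look like the sketch below. Little-endian byte order for the multi-byte fields is assumed (natural for the 8080) but not stated by the text; the sample entry is hypothetical.

```python
import struct

def parse_dir_entry(raw):
    """Unpacks one 21-byte directory entry per the layout above.

    Little-endian multi-byte fields are an assumption (the natural byte
    order on the 8080); the document doesn't state this explicitly.
    """
    assert len(raw) == 21
    attr, name, ftype, version = raw[0], raw[1:7], raw[7:10], raw[10]
    start, size = struct.unpack_from("<HH", raw, 11)
    last_block_size = raw[15]
    load_addr, start_addr = struct.unpack_from("<HH", raw, 16)
    return {
        "attribute": attr,
        "filename": name.decode("ascii").rstrip(),
        "type": ftype.decode("ascii"),
        "version": version,
        "start_block": start,
        "size_blocks": size,
        "size_bytes": 128 * size + last_block_size,  # per the text's formula
        "load_addr": load_addr,
        "start_addr": start_addr,
    }

# A hypothetical entry: FOO.BAS, version 2, 5 blocks starting at block 3.
entry = parse_dir_entry(
    bytes([0x01]) + b"FOO   " + b"BAS" + bytes([2])
    + struct.pack("<HH", 3, 5) + bytes([64])
    + struct.pack("<HH", 0x8200, 0x8200) + bytes([0])
)
```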
The attribute byte acts as a sentinel to indicate the first unused entry. Slots in the directory table are fully packed and searched in order.
The first file appears immediately after the last sector reserved for the directory entries, and is allocated contiguously. The next file appears immediately after that. All sectors after the last file are "free", and are used when the next file is written. When a file is deleted, FCS will shuffle down all the files which come after it in the directory, repacking the directory and moving the later file contents to fill the created hole. To make this operation even reasonably fast, FCS would like to do large blocks of reads and writes, but in a BASIC environment, it cannot count on having free memory (the resident BASIC program might be occupying all of it). The solution was for FCS to use the 4 KB of screen RAM as a temporary buffer. As a result, during an FCS DELETE operation, the screen is filled with colorful psychedelic garbage, which is then cleared after the DELETE is done. One other thing: FCS doesn't always make full 4 KB block transfers; instead, it appears to move one file at a time.
Because files are only ever appended to the directory, saving FOO.BAS to disk, editing it a bit, then saving it again will result in two directory entries. The file version byte is incremented on each save. If the user later refers to FOO.BAS, FCS assumes it means the version with the highest revision number.
The last block size is used in order to pin a file's size down to the byte: 128*(size, in sectors) + last_block_size. CP/M didn't have this, and thus it was just a software convention to somehow signal the last meaningful byte of the file, eg <CTRL>-Z in a text file.
The 16 bit load address indicates where in memory to save the bytes when the file is read in, and for machine language programs, the start address indicates the program entry point.
Although the Compucolor file system had a number of shortcomings and is laughably simple by today's standards, it was still light years ahead of the cassette based systems commonly in use then.