Note:  I've been posting links on forums to two of my favorite articles by Samuel Falvo, which relate philosophically to the thrust of my website; but I recently thought they were gone, and I asked him where they had been moved to.  They had not actually been moved; but on 7/28/2016, he gave me permission to cache his articles in case anything happens to them.  I would hate for these to be lost.  They're so good.  So here's one, minus a little CSS which is beyond me and which comes from a huge copyrighted file I didn't pay to use, so I slightly modified the article to use only plain HTML.  The content remains unchanged.


Neo-Retro Computing

06 Oct 2013

In the previous article, I described myself as a software survivalist. I said then that I looked to neo-retro computing as a means of securing my ability to perform interesting hacks going forward. Well, interesting to me, at least.

In this article, I define what my vision of neo-retro computing is.

Not Quite Scientific . . .

At its core, neo-retro computing involves at least four of the five steps outlined in the scientific method:

  1. Formulation of a question. When looking at contemporary computing infrastructure, all we see is complexity. Does it always need to be so complex? Often, for our own purposes, the answer is no; but, that raises another question: how and/or why did this complexity arise in the first place? Thus the motivation for engaging in the neo-retro community—the desire to know the answers to these questions.

  2. Hypothesis. Sometimes, you might call this step bravado, depending on your attitude towards software and/or hardware. It is in this stage that you look at some aspect of computing and say to yourself, "I can probably do this simpler/better/easier." If you're smart, you'll base your speculation on prior experience, either in coding or in product management. If you're not that smart, you will be soon enough.

  3. Testing. Also known as coding or hacking. It's here that you actually commit to writing your software or, if you're looking to include computer hardware in your project's scope, to schematic capture, board manufacture, etc.

  4. Analysis. Also known as retrospective. It's here that you record your experiences. You write blog posts, chapters of a book, whatever.

You'll notice that Prediction doesn't exist in the list above. That's because it's not a prerequisite for participating in the neo-retro movement. Of course, there's nothing preventing you from making predictions. I've been known to do it myself now and again. However, at least as often, I just want to attempt to reproduce a piece of computing history, just to play and learn. In such cases, I (try to) have no preconceived notions. I let the experiment guide me toward understanding why computing is the way it is.

In summary, neo-retro computing is about demanding proof from history that concepts need to be as complicated as they appear for the circumstances they're used in.

Examples of Neo-Retro Computing

My Kestrel computer family obviously qualifies; my experiences driving their evolution and development have informed my definition of neo-retro computing directly.

I think Jeri Ellsworth's Commodore-One platform certainly qualifies; it was the first commercially viable, reconfigurable computing system. Although it aims to support running classic computers (which itself isn't neo-retro), the fact that you can use the Commodore-One to question commonly held design philosophy and to develop your own completely new computers makes the Commodore-One itself neo-retro.

Examples That Aren't Neo-Retro Computing

Building a classic computer implementation in an FPGA fails to qualify. For example, Jeri Ellsworth's C64DTV isn't a neo-retro computing project, because it fails to address the status quo of the Commodore 64 design. Instead, it takes the Commodore 64 system architecture as a given. Innovations like adapting it to use an SDRAM chip instead of regular DRAM definitely interest me, but I feel they don't address the fundamental core of the complexities found in the Commodore 64 design.

Case Study: The Kestrel Family

This blog doesn't generally cover Kestrel-related material, but I'll mention a quick retrospective here since I haven't yet set up the Kestrel-specific blog(s). (I'll announce on this blog when I complete it/them.)

The Kestrel computer family of completely home-made computers exists to ask these questions for quite a number of different aspects of the computing world today. Too many questions to list here, in fact. Indeed, for each generation of Kestrel, everything within my financial reach is custom-designed, with the goal of learning, refining, proving, and improving the next generation.

Kestrel-1 (???-2004)

  1. Question: How simply can I make a single-board computer?

  2. Hypothesis: (with some aspects of Prediction mixed in for good measure.) Given a Western Design Center 65C816 microprocessor, if I can remove the need for ROM, the address decoder becomes a single NAND gate configured as an inverter. The remaining three NAND gates found in a 74ACT00 can decode the clock and R_W lines to form output- and write-enable signals. When the high address bit is low, select a single VIA chip; the W65C22 VIA can interface with SPI devices to provide virtually unlimited I/O capability as applications require. When the high address bit is high, select RAM. While the CPU is in its RESET state, use a crude DMA circuit attached to the host PC to upload the initial program into high RAM. (A sketch of this decode scheme appears after this list.)

  3. Testing/Hacking: The finished design consisted of three breadboards. The first contained the CPU and clock-driving circuitry. The second contained the RAM and VIA chip, along with a handful of LEDs to show that I/O was, in fact, working. The third contained a collection of 74ACT595 chips which my desktop PC drove to upload a program into RAM.

  4. Analysis: It turns out you can build a moderately capable computer in a small form-factor, and at greatly reduced cost, if you can find a way to reduce address decoding to a simple binary decision. Additionally, such a computer depends on an initial program-loading mechanism in order to be useful.
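
For concreteness, here is a minimal Python sketch of the decode rule described in the hypothesis above. The signal names (A15, PHI2, RWB) and the exact gating are my assumptions for illustration; the article doesn't give the schematic, so treat this as one plausible reading rather than the actual Kestrel-1 wiring.

    # Sketch of the Kestrel-1 decode rule: high address bit low -> VIA,
    # high -> RAM.  Assumes only 16 address lines are decoded, with A15
    # playing the role of "the high address bit".
    def select_device(address: int) -> str:
        return "RAM" if address & 0x8000 else "VIA"

    # One plausible NAND-style gating of the clock (PHI2) and R/W (RWB)
    # lines into active-low output- and write-enable signals; not
    # necessarily the gating the original 74ACT00 implemented.
    def enables(phi2: bool, rwb: bool):
        oe_n = not (phi2 and rwb)        # enable RAM/VIA outputs on reads
        we_n = not (phi2 and not rwb)    # enable writes on write cycles
        return oe_n, we_n

    assert select_device(0x0042) == "VIA"
    assert select_device(0xC000) == "RAM"
    assert enables(phi2=True, rwb=False) == (True, False)   # write cycle

Whatever the exact gates were, the point of the exercise stands: once address decoding collapses to a single bit, the glue logic fits in one 74ACT00 package.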

Kestrel-2 (2005-2012)

I think the earliest recollection I have of the Kestrel-2 dates to around 2005 or 2006, just one year before moving to the Bay Area for a new job.

The Kestrel-2 attempts to answer a number of questions all at once, addressed independently, and in a very unstructured way. Originally, bolstered with confidence from the success of the Kestrel-1, I wanted the Kestrel-2 to be a kind of hybrid of the Apple IIgs and the Commodore-Amiga. Based on the 65816 CPU running at 14MHz, capable of graphics with resolutions up to 640x480 and 256 colors out of a palette of 65536, and polyphonic, DMA-driven audio channels, it was to be a home computer of my dreams. Something that combined the ease of benchtop hacking that the Commodore 8-bit computers offered, and the usability of the Commodore Amiga.

Well, it didn't turn out that way. I needed to answer a lot more questions before I could get there. For example, how to implement the video controller at all, much less one as capable as the Amiga's AGA chipset. The issue of cost entered the picture at this point as well: FPGA development boards were still well outside my reach. Expansion buses needed designing, too, as I wanted to expand my RAM over time on the one hand, and add I/O cards on the other. The original Kestrel-2 design proved entirely and thoroughly over-ambitious for my meager experience and resources.

Something got my juices flowing again around 2011, however. The earliest commit record I have for the Kestrel-2 Github project dates back to June of 2011. I started writing a software emulator for the J1 stack-architecture CPU. Eventually, I even managed to get real hardware working in my then brand-new Digilent Nexys2 FPGA development board.

Unfortunately, I cannot recall the circumstances that caused me to change from the J1A to the S16X4 CPU (https://github.com/sam-falvo/kestrel/blob/master/cores/S16X4/doc/datasheet.pdf) it uses today. I do remember byte-addressability and code-density concerns were factors, though the details are lost to the mists of time now.

Kestrel-2 (2012-2014)

Questions (and the answers I found):

  1. How far can I get without interrupts? (Quite far.)

  2. Can I embed a working Forth environment in system memory and still have enough left over for programs? (Not with a 16KB system; 32KB minimum memory is required, 44KB recommended. Font and bitmapped text output consume too much space.)

  3. Can I make a working operating system to make up for limited memory resources? (Yes, as long as you're careful about the OS' own memory consumption.)

  4. Can Forth make a good systems programming language? (Yes, as long as you're careful with code reviews and proper test-driven techniques.)

  5. Is MISC as compact as hyped? (No. The 65816 in native mode often produces smaller executables.)

  6. Is MISC as fast as hyped? (This depends on the workload; however, on average, even with something as limited as the S16X4, pure expression computation and simple effective-address generation take less time to complete than the equivalent 68000 or 65816 code.)

More questions exist, of course, but they're too numerous to list here.

The contemporary Kestrel-2 seems unrecognizable compared to my previously lofty goals. The modern Kestrel-2 addresses no more than 64KB of memory space, of which up to 44KB can be program or data RAM, 4KB is I/O space, and 16KB is video display RAM. No audio support currently exists. No interrupts. No configurable video modes. Limited video RAM forces the display to 640x200 resolution. No color—it's black and white only.
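
Those numbers can be sanity-checked with a few lines of arithmetic. A quick Python sketch (base addresses are omitted, since the article gives only the region sizes):

    KIB = 1024

    # Memory budget of the contemporary Kestrel-2, at the maximum 44KB of
    # program/data RAM mentioned above.
    regions = {
        "program/data RAM":  44 * KIB,
        "I/O space":          4 * KIB,
        "video display RAM": 16 * KIB,
    }
    assert sum(regions.values()) == 64 * KIB     # exactly fills the 64KB space

    # 640x200 at 1 bit per pixel fits inside the 16KB video region.
    framebuffer_bytes = 640 * 200 // 8           # 16,000 bytes
    assert framebuffer_bytes <= regions["video display RAM"]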

The reason for the significant scope reduction, as you might expect, involves answering questions related to achieving my ultimate goal. It was the perfect test mule for learning about video display circuitry, for example. The lack of interrupts made for interesting programming challenges. Surprisingly, you can write some amazingly sophisticated programs without them, provided your hardware supports alternative tools. For example, the keyboard controller has, built into its hardware, a 16-byte FIFO. Most computers implement this in software, managed through an interrupt service routine. Putting the FIFO right in hardware significantly simplified both software and hardware design. The microprocessor, now a very simple MISC-architecture design, supports byte-addressability right in the instruction set.
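
To make the interrupt-free approach concrete, here is a toy Python model of a 16-byte keyboard FIFO like the one described above. It only illustrates the behavior a polling driver relies on; the real controller implements this in hardware, and its register interface isn't described in the article, so the names and the drop-when-full policy here are my assumptions.

    from collections import deque

    class KeyboardFIFO:
        """Fixed-depth byte queue; bytes arriving while full are dropped."""
        def __init__(self, depth: int = 16):
            self.depth = depth
            self.buf = deque()

        def push(self, byte: int) -> bool:
            """Hardware side: queue a scancode; False means it was dropped."""
            if len(self.buf) >= self.depth:
                return False
            self.buf.append(byte & 0xFF)
            return True

        def poll(self):
            """CPU side: fetch one byte, or None if the queue is empty.
            Because the queue lives in hardware, the CPU can poll at its
            leisure instead of servicing an interrupt per keystroke."""
            return self.buf.popleft() if self.buf else None

    fifo = KeyboardFIFO()
    for scancode in (0x1C, 0xF0, 0x1C):      # illustrative scancode bytes
        fifo.push(scancode)
    while (event := fifo.poll()) is not None:
        print(hex(event))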

Kestrel-3 (2014-???)

Questions: Can I make a 64-bit CPU run efficiently? How can I support interrupts and traps? Can I access external SRAM or SDRAM with any modicum of efficiency? Do I absolutely need byte-, word-, dword-, and qword-specific accessors? Or, can I get by with word-addressing only and use byte-banding? Can I finally upgrade the monochrome graphics interface adapter (MGIA) to support multiple resolutions and color depths (CGIA)? Will the eP64 have enough code density to embed a functional Forth environment as the power-on language environment without requiring more than 32KB of memory? Can I port Tripos relatively easily? Can I embed Forth in ROM for when I cannot boot Tripos?
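
The word-addressing question has a concrete cost behind it. The Python sketch below shows what a byte store turns into on a machine that can only read and write whole 64-bit words: a read-modify-write of the containing word. Byte-specific accessors, or the byte-banding idea mentioned above, are ways of avoiding or hiding exactly this sequence. Little-endian byte placement is my assumption; the article doesn't specify it.

    WORD_BYTES = 8          # 64-bit words, as stated for the eP64

    def store_byte(mem: list, byte_addr: int, value: int) -> None:
        """Byte store via read-modify-write of a word-addressed memory."""
        index = byte_addr // WORD_BYTES
        shift = (byte_addr % WORD_BYTES) * 8
        mask = 0xFF << shift
        mem[index] = (mem[index] & ~mask) | ((value & 0xFF) << shift)

    def load_byte(mem: list, byte_addr: int) -> int:
        """Byte load by shifting and masking the containing word."""
        index = byte_addr // WORD_BYTES
        shift = (byte_addr % WORD_BYTES) * 8
        return (mem[index] >> shift) & 0xFF

    mem = [0] * 4                     # four 64-bit words of "RAM"
    store_byte(mem, 10, 0xAB)
    assert load_byte(mem, 10) == 0xAB
    assert mem[1] == 0xAB << 16       # byte 10 lands in word 1, lane 2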

As you can see, some of the lofty goals from the original Kestrel-2 make a comeback with the Kestrel-3. You'll notice I'm not tackling everything at once this time, however. For example, I'm still not addressing the needs of audio playback, nor of expansion buses. We do see the return of interrupt support, but system software will likely under-utilize this new feature for now.

Here's my one prediction that I'm quite sure of: the eP64 has a data bus wider than that of external RAM by a factor of four (64 bits versus 16 bits). As a consequence, driving the microprocessor at RAM speeds (no faster than 14MHz) will result in a substantial reduction in real-world performance. Indeed, if we run the CPU at 13MHz (1/5th the dot-clock frequency for a 1024-pixel-wide display), we can expect the eP64 to function as though it were clocked at only 3.25MHz. This puts the Kestrel-3 firmly at the Commodore Plus/4 or Atari 7800 level of performance. Thus, the higher video resolutions will be useful for productivity applications only. No games or demo-scene programs at 1024x768 yet!
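
Worked through as arithmetic, the prediction looks like this (a quick Python check; the only inputs are the figures given above):

    cpu_bus_bits = 64
    ram_bus_bits = 16
    penalty = cpu_bus_bits // ram_bus_bits      # 4 RAM cycles per CPU word

    cpu_clock_mhz = 13.0                        # 1/5th of the dot clock
    effective_mhz = cpu_clock_mhz / penalty
    assert effective_mhz == 3.25                # matches the figure above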

The only way to attain higher performance is to make use of FPGA block-RAM resources as instruction and data caches. However, that sounds like a job for the Kestrel-4.


Posted on WilsonMinesCo.com as a backup, with permission, Aug 13, 2016.  Original post at http://sam-falvo.github.io/2013/10/06/neo-retro