ENIAC Operating Manual (1946) [pdf] (archive.org)
83 points by segfaultbuserr on May 16, 2019 | hide | past | favorite | 20 comments


The thing that people might find hard to understand these days is that computers - particularly mainframes and minicomputers - were really unreliable. The hardware was often custom built, and the machine would frequently just stop; the manufacturer had to come in to work out what was wrong and fix it, which typically involved replacing boards until the problem went away.

I should note that I'm basing this on my experience of being involved with a company in the late 1980s - not the 1940s, which is the topic of this post. Although I can imagine that back then the machines would only run for a matter of hours at best before breaking down and needing debugging.

It was the same with networking, which was particularly unreliable when it ran over coaxial cables like RG58; that cabling had an array of problems that would bring the whole company network down and take all the connected PCs with it.

Modern computing is incredibly reliable.


My mother was a programmer in the 1950s. Someone came in early every morning and ran memory tests. If a tray of memory failed, they replaced it so that the computer could run while a tech debugged the failing tray. (A tray was 100 words by 36 bits of tube memory.)


Same. Except my mother talks about replacing Valves (Vacuum Tubes).


When I visited the USS Hornet museum (WWII aircraft carrier docked in Oakland), our tour guide was the guy that used to replace vacuum tubes during combat.

The targeting computer was analog, and it controlled a very large anti-aircraft gun, using radar to automatically track targets and compute parabolas. The gunner sat in a little control seat on the turret.

When a tube blew, the whole turret would start oscillating violently. The job of our guide was to pull refrigerator-sized banks of vacuum tubes out of the wall, replace all the ones that looked dim, slam it shut, and move from bank to bank until it stopped trying to shake the poor gunner out of his seat.

The shells had a timer inside them, and the people loading it had to set the delay between launch and detonation based on the time of flight the computer reported. The timer was actuated when the rifling of the barrel spun the shell as it was fired, so this was as precise as the clockwork in the shell allowed.

Apparently, they shot down a lot of planes with that thing.


I think I may have gotten the original link from HN, but YouTube has a few training videos covering the design and operation of analog fire control computers.

Aside from the constant adjustments required to keep them accurate (analog computation requires precision!), they're amazing solutions.

https://youtu.be/lr1uK24SND8

E.g. look at the barrel cam at 10:13 for calculating the super-elevation required to hit a flying target.


We had our monthly "Technology Exchange" meeting at work today (internal lightning talks and short tech talks) and one of the presentations was an interview with a guy who is about to retire after working for our little unit for the last 40 years. When he started, the 6 or 7 programmers had to share 3 terminals (so you wrote your programs on paper before you typed them in). They thought it was great because they didn't have to punch cards anymore. They had a little minicomputer in University Hall (across the street from the Berkeley campus) that was the size of a hardware rack where they could test some of their PL/I code -- but had to dial up on a modem to the mainframe down in Westwood (UCLA) to actually run the jobs (he said that computer took up a whole room). The original project for our group was to produce an annual union catalog of the UC libraries' holdings on microfiche.

We also had the first network that connected all the campuses, with microwave line of sight to Mount Zion (UCSF) and satellite dishes. Someone had to write 75k lines of assembly to implement the TCP/IP stack for the mainframe.

Was RG58 the stuff that had the "vampire taps"? We still had some of that when I started at the libraries in 1996.


> Was RG58 the stuff that had the "vampire taps"?

Close. RG58 was 10base2 (thinnet) and used BNC T connectors. Vampire taps were used with 10base5 cabling (thicknet, similar to RG-8):

https://en.wikipedia.org/wiki/10BASE5


The early, first generation tube-based computers had tube failures more or less daily. Higher reliability tubes specifically intended for computers were eventually introduced. One trick was to run the tube filaments "derated", or under the rated voltage. The tubes worked fine as logic elements at the lower voltage although that probably messed up their normal response curves. When I saw the Colossus rebuild at Bletchley Park, I asked about tube failure and they confirmed that they run the tubes (or valves, as they call them) on the Colossus at derated voltage, greatly reducing the failure rate.


> Although I can imagine that back then the machines would only run for a matter of hours at best before breaking down and needing debugging.

Indeed, and that sometimes entailed the removal of actual bugs: https://upload.wikimedia.org/wikipedia/commons/8/8a/H96566k....


"Modern computing is incredibly reliable"

I sort of agree with that, though I'd say there was a peak that passed somewhere in the '90s. There was a time when the most important compute ran on high-end mainframes. They were more reliable than what we have now.


From fixing IBM mainframes in the late 80s, I'd say they were far more unreliable than modern machines. Or at least, a mainframe consisting of many interconnected frames joined by thousands of tiny cables could throw up wicked problems that required hours or days hunched over an oscilloscope or logic probe to isolate. They probably seemed more reliable because of relentless redundancy in the hardware and a highly focused maintenance staff who proactively followed up on logged intermittent errors before they became serious.


That raises the question: what are the most unreliable components of a computer nowadays, besides the HDD?


The software.


What HDD?

I think the most unreliable part of most computers today is the screen that cracks when you drop it.


See also this work-in-progress ENIAC simulator: https://www.cs.drexel.edu/~bls96/eniac/


An interesting fact about ENIAC is that although it was originally programmed via plugboards in 1946, it was soon retrofitted into a stored-program computer in 1948 to simplify programming, using its spare function table units as ROM and its extra accumulators as a program counter and a pointer. I wonder if there's any project to recreate an ENIAC simulation in stored-program mode.
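The scheme described above - a read-only function table holding instructions, with an accumulator repurposed as the program counter - can be sketched in a few lines. This is a minimal illustrative model with a hypothetical opcode set, not ENIAC's actual 1948 order code:

```python
# Hypothetical sketch of a stored-program machine in the style of the
# 1948 ENIAC retrofit: a "function table" acts as ROM, one accumulator
# serves as the program counter, and another holds the working value.
# The opcodes here are invented for illustration.

FUNCTION_TABLE = [  # ROM: each entry is (opcode, operand)
    ("LOAD", 5),    # acc = 5
    ("ADD",  7),    # acc += 7
    ("JUMP", 4),    # pc = 4 (skip the next instruction)
    ("ADD",  100),  # never executed
    ("HALT", 0),
]

def run(rom):
    pc = 0   # accumulator repurposed as the program counter
    acc = 0  # working accumulator
    while True:
        op, arg = rom[pc]  # fetch from the function-table ROM
        pc += 1
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JUMP":
            pc = arg
        elif op == "HALT":
            return acc

print(run(FUNCTION_TABLE))  # prints 12
```

The point of the sketch is how little the retrofit required: no new hardware for instruction storage, just a convention that existing units (function tables, accumulators) play the roles of ROM and program counter.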


For anyone interested in this period, there's a book on the stored-program ENIAC: http://eniacinaction.com/


I just read von Neumann's biography recently. He wanted to call the ENIAC the MANIAC - because of its unreliability - but his colleagues refused.


The name "MANIAC" (Mathematical Analyzer, Numerical Integrator, and Computer - or Mathematical Analyzer, Numerator, Integrator, and Computer; https://en.wikipedia.org/wiki/MANIAC_I) was eventually picked up by physicist Nicholas Metropolis for a computer designed under his leadership, reportedly as an attempt to ridicule and stop the rash of silly acronyms for machine names such as ENIAC, EDVAC, UNIVAC, etc.


A programmer from Huntsville, Alabama started in 1963, later worked on Burroughs equipment, and much later on MP/M, then Apple. Quite a clever fellow, many great stories. Hardware would certainly be down from time to time.



