Talk:Apollo Guidance Computer

CCS instruction

Fascinating article. Would love to know what the thinking is behind the CCS instruction - what sort of problem does it solve? What would be a modern (C, say) equivalent construction? GRAHAMUK 04:51, 2 Sep 2003 (UTC)

The CCS instruction can be used to perform the equivalent of the C language "if" statement, "switch" statement, "for" loop, or "while" loop. I was going to put a code fragment in here as an example, but the editor really mangles the formatting... --Pultorak 07:29, 4 Sep 2003
The code fragment would be interesting, and could be adequately formatted in Wikipedia by putting a space first in every line. If applicable, you might get it back via the Page history link and put it back in. --Wernher 00:34, 3 Mar 2004 (UTC)
Designer's comment: I could probably write a small book on CCS alone. Let me know if my amplifications in November left anything unanswered. Did I mention that the basic idea was lifted from the IBM 704/7094's CAS (Compare Accumulator with Storage)? --67.75.8.160 02:46, 1 Mar 2004
First: a question before going into the subject matter proper: does "designer" in the above comment by any chance relate to "designer of the AGC and/or its instruction set"? Just wondering. And now to the question at hand: maybe the 7090/94 heritage should be mentioned in the article, in the 'quest' of educating readers about the sometimes hidden, and of varying degree and significance in each case, but still often very fascinating, continuity underlying much of computer development history? --Wernher 00:21, 3 Mar 2004 (UTC) / 21 Apr 2004
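
Since the original code fragment never made it in, here is a rough C sketch of the idea, for illustration only (invented names; the AGC's one's-complement -0 has no direct C equivalent, so a flag stands in for it): CCS K loads the accumulator with the "diminished absolute value" of K and then takes one of the four following instructions depending on whether K was positive, +0, negative, or -0.

 /* Illustration only: CCS K loads the accumulator with the "diminished
    absolute value" of the operand and then skips to one of the four
    following instructions.  is_minus_zero stands in for the AGC's
    one's-complement -0, which C cannot represent directly. */
 enum ccs_case { POSITIVE, PLUS_ZERO, NEGATIVE, MINUS_ZERO };

 enum ccs_case ccs(int k, int is_minus_zero, int *accumulator)
 {
     if (k > 0) { *accumulator = k - 1;  return POSITIVE; }      /* no skip    */
     if (k < 0) { *accumulator = -k - 1; return NEGATIVE; }      /* skip two   */
     if (is_minus_zero) { *accumulator = 0; return MINUS_ZERO; } /* skip three */
     *accumulator = 0; return PLUS_ZERO;                         /* skip one   */
 }

 /* The classic CCS counting-loop idiom,
        LOOP    CCS  COUNTER    ; A := COUNTER-1, branch on sign
                TCF  BODY       ; taken while COUNTER > 0
    is roughly:  while (counter > 0) { body(); counter = counter - 1; }  */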

Single/double precision

The "single" and "double" precision mentioned in the article links to IEEE definitions, but surely the author meant one or two of the 15/16-bit words the machine used? --Anonymous, 21 Apr 2004

Thanks for mentioning it, I have to admit I didn't think of that when inserting the links. :-) However, the opening paragraph of the respective 'IEEE articles' does actually define sgl/dbl precision fairly generally (i.e. using one vs two words), so hopefully we'll avoid misleading the readers too much. Eventually I think we should 1) make a separate article on general single/double precision numbers, and 2) incorporate even clearer introductory info, w/links to the general article, in the IEEE definition articles. --Wernher 16:13, 21 April 2004 (UTC)
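
As a rough illustration of what one vs. two words means here: each AGC word carried 14 magnitude bits plus sign, and a double-precision quantity simply chained two of them together. A minimal C sketch, assuming two's-complement packing purely for readability (the real machine was one's complement and kept the two halves as separate words):

 /* Illustrative only: an AGC double-precision value is two single-
    precision words, each contributing 14 magnitude bits.  Flight code
    kept the halves separate and used one's-complement conventions. */
 #include <stdint.h>

 #define AGC_MAGNITUDE_BITS 14

 int32_t agc_dp_value(int16_t high_word, int16_t low_word)
 {
     /* the high word carries the more significant 14 bits */
     return (int32_t)high_word * (1 << AGC_MAGNITUDE_BITS) + low_word;
 }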

User interface

In this section there is a link to a picture and also to a diagram of the DSKY. The two do not match. [1] has these indicators:

+------------+------------+
|UPLINK ACTY |  TEMP      |
+------------+------------+
|  AUTO      |GIMBAL LOCK |
+------------+------------+
|  HOLD      |   PROG     |
+------------+------------+
|  FREE      | RESTART    |
+------------+------------+
| NO ATT     | TRACKER    |
+------------+------------+
|  STBY      |  [   ]     |
+------------+------------+
| KEY REL    | OPR ERR    |
+------------+------------+

Compare that to the LM DSKY interface diagram:
+------------+------------+
|UPLINK ACTY |  TEMP      |
+------------+------------+
| NO ATT     |GIMBAL LOCK |
+------------+------------+
|  HOLD      |   PROG     |
+------------+------------+
| KEY REL    | RESTART    |
+------------+------------+
| OPR ERR    | TRACKER    |
+------------+------------+
|  [   ]     |   ALT      |
+------------+------------+
|  [   ]     |   VEL      |
+------------+------------+


I believe the top is Block I, the bottom is Block II LEM (the CSM doesn't have ALT and VEL indicators).

Points of reference

For contemporary readers who may not have been alive at the time, or who may not be intimately familiar with computer architecture, the Apollo Guidance Computer could use a comparison to later well-known computers or calculators. For example, how would it compare to the HP-65 programmable calculator, or a personal computer, such as the Apple II or IBM PC that came a decade or so later? Quicksilver 02:13, 11 November 2005 (UTC)Reply

Good point! Perhaps we should also put in the TI-83/89 and HP-48/49g+ modern calculators, since those would be quite well-known to many college students (and engineers, scientists) today. --Wernher 17:10, 13 November 2005 (UTC)Reply
This is the exact reason that I came looking for the Apollo computer information. We've heard for years that the cheap $4 calculator in the checkout line has more computer power than the one that went to the moon. So how much did it really have? I missed any reference to RAM or ROM. No changes in four years may not be a good sign. :) -- Kearsarge03216 (talk) 00:39, 27 March 2009 (UTC)Reply
It's clearly put forth in the article (Apollo_Guidance_Computer#Description) but the architecture and terminology were not altogether the same as today. A quick reading of the text shows the computer's RAM was about 2K and ROM was about 36K, along with memory add-ons. This was (barely) enough for the straightforward, skived-down guidance math it was built to do, but packed into a startlingly small area for the time. The interface was all hardware. It was indeed less powerful than advanced handhelds made only a few years later, much less powerful than the earliest IBM PCs, but highly reliable. The AGC was more or less hand-built, very expensively. $4? More like $1-2 when on sale at an office supply store. Most throwaway calculators in 2009 would be more powerful overall, but maybe less reliable. Any article comparison with calculators or PCs would need to be reliably sourced, though. Gwen Gale (talk) 04:33, 27 March 2009 (UTC)Reply
As a point of comparison, emulating the AGC using the Virtual AGC software takes about 2% of CPU time on a 3GHz Pentium-4 (i.e. that CPU can emulate the AGC at around 50x realtime). And given that it's probably running a hundred or more x86 instructions to emulate each AGC instruction, the AGC is less than 0.1% of the performance of the P4. Which makes you think, really, when you consider that it got men to the moon and back, with assistance from the ground. Mark Grant (talk) 07:45, 28 March 2009 (UTC)Reply
This does seem to line up neatly with the 2.048 MHz clocking speed of the AGC noted in the article. I've got spreadsheets which, from my human outlook, seem to do complex calculations "instantly," though these would have taken several seconds on the AGC (given it had enough memory to hold the data, which it did not), never mind the staggering overhead of the X Window GUI along with all those daemons and such. There is likely a source floating about somewhere which talks about this and could be cited in the text. Gwen Gale (talk) 08:30, 28 March 2009 (UTC)Reply
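
Spelling out the arithmetic behind that estimate (a back-of-the-envelope restatement of the figures above, not a sourced benchmark):

 /* 2% of one CPU for realtime emulation means the whole CPU could run
    about 50x realtime; at roughly 100 host instructions per emulated
    AGC instruction, native AGC throughput works out to about
    1 / (50 * 100) = 0.02% of the host, i.e. "less than 0.1%". */
 #include <stdio.h>

 int main(void)
 {
     double realtime_cpu_fraction   = 0.02;   /* 2% of the Pentium 4      */
     double host_insns_per_agc_insn = 100.0;  /* rough emulation overhead */

     double speedup_at_full_cpu  = 1.0 / realtime_cpu_fraction;      /* ~50x    */
     double agc_fraction_of_host = 1.0 / (speedup_at_full_cpu *
                                          host_insns_per_agc_insn);  /* ~0.0002 */

     printf("AGC ~ %.2f%% of host performance\n", agc_fraction_of_host * 100.0);
     return 0;
 }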

What, specifically, was it used for?

This is a fascinating article, but I find myself wondering what exactly this computer was used for on its missions. The article says that it was used to "collect and provide flight information, and to automatically control all of the navigational functions of the Apollo spacecraft," but I'd like to see a more detailed explanation than that.

For example, what kinds of "flight information" were collected? What were the "navigational functions" of the spacecraft? Presumably the computer did not "fly the spacecraft" in a completely automatic manner. I would be interested to know more about in what way the computer was used by the astronauts to operate the craft.

DrDeke 15:29, 8 March 2006 (UTC)Reply

For a thorough explanation, there's a 500-odd page Delco manual covering the programs used by the AGC on the Apollo 15 mission at: http://history.nasa.gov/alsj/a15/A15Delco.pdf.
While the AGC didn't fly the spacecraft completely automatically (e.g. it didn't work out when the engines needed to fire to take it to the Moon, but if given that information by the crew it could fire the engines and control the burn), it was capable of a completely automatic landing on the Moon. Typically the AGC flew the LEM until a few hundred feet above the ground, then the astronauts would use the LEM controls to adjust the programmed landing site to ensure they were going to land on a flat area and not in a crater.
The Virtual AGC page at http://www.ibiblio.org/apollo/index.html has an AGC emulator and some of the real software which ran on it. There's a video of the Virtual AGC flying a simulated Apollo CSM at http://mysite.wanadoo-members.co.uk/ncpp/CSM_DAP.wmv
The video isn't terribly exciting as it's just firing the RCS thrusters to rotate the CSM to the specified orientation (45 degrees pitch and 90 degrees roll). It does give some indication of how the real AGC was used by the astronauts though. MarkGrant

The article still suffers from a certain lack of focus in the introduction. A naive reader could be forgiven for thinking that this was an article about embedded systems, as the lede leads with "The Apollo Guidance Computer (AGC) was the first recognizably modern embedded system". The focus at this point should be on what the AGC was and did. I'm looking at the article now (I'm in the middle of rereading Mike Collins's book "Carrying the Fire" (still my favourite after 30 some years, even with the advent of Chaikin's etc), Aldrin's "Men from Earth" and Don Eyles's "Tales from the Lunar Module Guidance Computer", hence the interest) and will make some change to effect this in the next hour or so. Lissajous (talk) 14:17, 27 August 2009 (UTC)Reply

I've added a better (I think) lede.--agr (talk) 14:52, 27 August 2009 (UTC)Reply

Integrated Circuits

We should probably add more information about the decision to use ICs in the AGC rather than discrete transistors. I've added links to some documents on the klabs.org site discussing this decision and it would appear to have been highly contentious at the time but extremely sensible in hindsight. MarkGrant 02:11, 9 July 2006 (UTC)Reply

It is unclear that the AGC was the first computer to use integrated circuits. HP's 2116A (HP_2100), introduced in 1966 the same year as the AGC, may or may not hold that title. The 2116A was of course much larger at around 230 pounds, but it was a full general purpose computer, not a specialized controller. Badtux (talk) 18:23, 21 July 2019 (UTC)Reply

PGNCS trouble

This is an excellent article, but I question the assertion that it was the program alarms that caused Neil Armstrong to go to manual control of the Apollo 11 landing. Is there a source? As far as I know, all of the astronauts went to manual control during lunar landings, and I've never seen Armstrong's decision singled out like this before. --MLilburne 09:16, 13 July 2006 (UTC)Reply

It does say 'more manual', but it seems odd to me too. BTW, I've also added a comment on the root cause of the 1201 alarms, which I didn't see mentioned anywhere else. Mark Grant 16:53, 13 July 2006 (UTC)Reply
There's a good discussion of manual control and lunar landings (though on a message board) here. What "manual control" would mean in this context is, I believe, going to P66, which all of the commanders seem to have done. So I do think that the article is wrong. But I'm going to ponder a bit more before changing anything. --MLilburne 17:06, 13 July 2006 (UTC)Reply

In First on the Moon (Little, Brown, 1970) Armstrong says he took manual control when he realized they were about to land in a boulder field. The alarm problems were an issue because they distracted him from looking out the window and following landmarks, but it was the realization that they were heading for a poor landing spot that caused him to take over the throttle control so he could slow the rate of descent and allow more time at a higher altitude where he could select a better spot.--agr 15:00, 8 December 2006 (UTC)Reply

The 1201 and 1202 alarms were caused by too many events from the Rendezvous Radar (part of the Abort Guidance System) which was attempting to track the CSM in case the abort switch was pressed during descent. This Rendezvous Radar was disabled during descent in subsequent missions, which is why you never hear about it after Apollo 11. --Neilrieck 03:04, 10 February 2007 (UTC)Reply
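
For anyone wondering what exactly overflowed: the Executive handed each scheduled job a slot from a small fixed pool ("core sets", plus "VAC areas" for interpretive jobs), and when the spurious radar traffic left no slot free the software raised a 1201/1202 alarm and restarted, shedding lower-priority work. The C fragment below is only a toy illustration of that scheme, with invented names and an illustrative pool size, not the flight code.

 /* Toy model only: a small fixed pool of job slots; scheduling a job
    when none is free raises a program alarm and a software restart
    that re-establishes just the essential jobs. */
 #include <stdio.h>

 #define NUM_CORE_SETS 7                 /* illustrative pool size */

 struct job { int priority; void (*entry)(void); int in_use; };
 static struct job core_sets[NUM_CORE_SETS];

 static void bailout(int alarm_code)
 {
     printf("PROG alarm %o: software restart, low-priority work shed\n", alarm_code);
 }

 void schedule_job(int priority, void (*entry)(void))
 {
     for (int i = 0; i < NUM_CORE_SETS; i++) {
         if (!core_sets[i].in_use) {
             core_sets[i] = (struct job){ priority, entry, 1 };
             return;
         }
     }
     bailout(01202);                     /* no free core sets left */
 }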

Standby Mode

Somebody needs to correct this. It can't have been a 5 to 10 kW reduction. More like tens of watts.

http://history.nasa.gov/ap16fj/csmlc/a16Lemer1-7.gif says that some sort of standby (I don't know if it's the same) reduced consumption by 3 Watts. Does someone know if it's the same as the one mentioned in article?

Presumably it's the same. The text has been corrected already to read W not kW.--agr 14:44, 8 December 2006 (UTC)Reply


The article says that the standby mode was never used. Is this really true? I can't believe this. What about Apollo 13? —Preceding unsigned comment added by 89.27.200.16 (talk) 12:37, 24 July 2009 (UTC)Reply

Description section

The photo is nice but gives no indication of scale. Could anyone add to the description section indications of Power usage and Physical dimensions? Garrie 03:01, 12 December 2006 (UTC)Reply

Technical & Generalize tags

I added the technical and generalize tags. This page is far too technical. This is obviously an amazingly important technology, but I don't know that describing clock cycles, circuit design, and other technical specifications are the best ways to convey it. Can we get more about the historical context (state of IC computers in the era), advancements that the computer made that can still be seen in modern computers, how the computer affected or resolved missions-in-flight (ie Apollo 13)?Madcoverboy 16:02, 11 March 2007 (UTC)Reply

First?

The description section says, The Apollo flight computer was the first to use integrated circuits (ICs), and then later on, The decision to use a single IC design throughout the AGC avoided problems that plagued another early IC computer design, the Minuteman II guidance computer. If there was an earlier design, how could this be the first? -- RoySmith (talk) 20:28, 26 June 2007 (UTC)Reply

It says 'another early design', not an earlier design. They were developed at around the same time, I'm not sure which came first. Mark Grant 22:28, 26 June 2007 (UTC)Reply
I read the sentence as meaning, "Based on the experiences of the Minuteman II computer, the AGC design team decided to go with a single IC design". Maybe the sentence just needs rewriting to avoid giving that impression. -- RoySmith (talk) 15:11, 27 June 2007 (UTC)Reply
Sure, I'd agree it's a bit confusing. Mark Grant 15:23, 27 June 2007 (UTC)Reply
Here's one site claiming it was the first: http://www.ieee-virtual-museum.org/collection/event.php?id=3457010&lid=1 Mark Grant 15:06, 27 June 2007 (UTC)Reply
And comments by Henry Spencer, who usually knows his stuff with anything space-related: http://yarchive.net/space/politics/nasa_and_ICs.html
This may come down to a question of how you define 'first': the AGC was probably the first IC-based computer to go into development (MIT were ordering ICs in February 1962), but may not have been the first to fly (e.g. I haven't found a date for the first Minuteman-II launch). Mark Grant 15:23, 27 June 2007 (UTC)Reply

Looks to me like it was the Minuteman II that was the first successful all-digital flight computer [2], the first MM2 test flight occurred in September 1964. [3] Banjodog (talk) 05:53, 27 January 2009 (UTC)Reply

Misc

This is already in the external links section. Gwen Gale (talk) 01:17, 28 December 2007 (UTC)Reply

Error 1202

Can you tell whether it is this Apollo Guidance Computer (AGC) that displayed an "error 1202" during the Apollo 11 landing on the moon and forced Armstrong to land manually?

Yes. See, for example, http://history.nasa.gov/alsj/a11/a11.1201-pa.html Mark Grant (talk) 00:41, 25 July 2009 (UTC)Reply
The 1201/1202 program alarms didn't have any bearing on Armstrong's decision to go to the "more manual" landing mode (in fact, every moon landing used that mode, which did still rely heavily on the computer). The alarms were a distraction, certainly, but beyond that didn't have any impact on the computer's functionality. Kaleja (talk) 17:10, 21 April 2015 (UTC)Reply

The MIT AGC Project link is effectively dead --

The Burndy Library has moved to the Huntington Library, Art Collections, and Botanical Gardens in San Marino, California. The Dibner Institute, formerly on the MIT campus, is now closed. Information regarding the Burndy Library and Dibner Fellowships may now be found at http://huntington.org/burndy.htm. Inquiries may be sent to publicinformation@huntington.org.

Some of the material is archived at http://authors.library.caltech.edu/5456/, but I haven't tracked down all of the items from the external links yet. Autopilot (talk) 00:13, 5 March 2008 (UTC)Reply

Verb/noun commands

Are there any examples of how the verb / noun command inputs worked? I can see the codes in the picture of the side panel, but it's not clear what they are for or how they are used.

-Bill —Preceding unsigned comment added by 75.180.8.80 (talk) 12:10, 26 July 2009 (UTC)Reply

See the external links for the article but page 17 of this pdf puts forth a quick overall take on the noun/verb buttons. Gwen Gale (talk) 12:55, 26 July 2009 (UTC)Reply
Looking at the "helpful" list of verbs and nouns, now I think Robert Heinlein was closer to the mark than I thought in his story Misfit, where the navigators have to convert everything to binary - by hand! - before entering it into the computer. The DSKY must have been a brutal man-machine interface to use, by today's standards. --Wtshymanski (talk) 16:04, 2 April 2010 (UTC)Reply
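
As a flavour of how the keying worked: the verb names an action and the noun names the data it acts on; a commonly cited example is Verb 16 Noun 36, which monitors and displays mission elapsed time. The C dispatch table below is purely illustrative (invented handler text, nothing like the flight software's actual structure):

 /* Toy illustration, not flight code: the keyed verb/noun pair selects
    a routine; an unrecognized pair lights the OPR ERR indicator. */
 #include <stdio.h>

 struct vn_entry { int verb; int noun; const char *meaning; };

 static const struct vn_entry table[] = {
     { 16, 36, "monitor/display mission elapsed time" },
     {  6, 36, "display mission elapsed time once" },
     { 37,  0, "change major mode (program) to the number keyed next" },
 };

 void key_verb_noun(int verb, int noun)
 {
     for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
         if (table[i].verb == verb && table[i].noun == noun) {
             printf("V%02dN%02d: %s\n", verb, noun, table[i].meaning);
             return;
         }
     }
     printf("OPR ERR\n");
 }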

Units - kibibyte

Disclaimer: I didn't introduce the kibi/mebi nomenclature.

Has there been prior discussion on the units used in the article? A recent edit has removed references to KIBI and MEBI (see kibibyte) and replaced them with the more widely used (but arguably misleading) "kilo" and "mega". The original references were in line with the (not widely used) IEC standard for referring to powers of 2 (binary). I was myself tempted to make the same change, but left it alone on the basis that there might have been good reason/prior consensus, and moreover the IEC designations are in some small way more precise.

Is there a good argument for not using the kibi/mebi nomenclature? Lissajous (talk) 08:40, 4 September 2009 (UTC)Reply

On reading Anakin's response on my talk page (summarized below) - I'm won over to sticking with terminology that's already widely used - i.e., kilobyte etc.
It's somewhat a matter of personal taste, but I think there are good arguments for using the standard kilobyte/megabyte/gigabyte terms. There has been a lot of discussion about it on Wikipedia, and WP:MOSNUM suggests to use kilo/mega/giga as they're better understood than kibi/mebi/gibi, though it does offer some exceptions. Seems to me that in every article using the IEC prefixes, they're hyperlinked to a long page explaining their meaning and history and justification. The added confusion doesn't seem at all helpful to readers, having to force new words upon them before they can continue reading the original article (though it should probably be decided on a per-article basis). • Anakin (talk) 17:57, 4 September 2009 (UTC)Reply

Lissajous (talk) 18:37, 4 September 2009 (UTC)Reply

Block I vs Block II

I'm in no great hurry, but at some time I'd like to move the focus of the article to be on the Block II design (which actually flew the manned missions), with the Block I taking a lesser role. At present the text describes the Block I by default, with references to changes for the block II made later. The design history is interesting, and the evolution from Block I to Block II is important, but the design of interest is in fact the Block-II version. Are there good reasons for not doing this? Lissajous (talk) 05:30, 11 October 2009 (UTC)Reply


These links actually seem to be working now. Added 'nb.' on size of downloads. --220.101.28.25 (talk) 11:48, 31 October 2009 (UTC)Reply

They work for me too, so I removed the 'dead link' tags. I also changed the 'section headers' for the links section to wiki headings since they're not very obvious when they're just part of the generic text; that's bugged me for a while and I think this is an improvement. Mark Grant (talk) 20:16, 31 October 2009 (UTC)Reply

Overwriting software

"The software could be overwritten by the astronauts using the DSKY interface. As as done on Apollo 14." - this seems rather dubious to me as the software was in fixed core memory, and the only explanation I've found of AGC hacks on Apollo 14 is this:

http://www.ibiblio.org/apollo/#Solution_to_the_Apollo_14_Final_Exam

So in that case it would appear that the astronauts were changing variables in the erasable memory rather than code.

I'm sure I do remember a mission installing a software 'patch' in erasable memory but can't find any reference to it now. And that's still not really 'overwriting' the software. Mark Grant (talk) 05:17, 15 September 2010 (UTC)Reply
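
To make the distinction concrete: program code lived in fixed (rope) memory and could not be changed in flight, while flags and variables in erasable memory could be keyed in through the DSKY, which is essentially what the Apollo 14 workaround did. A minimal C model of that split, with invented names and only nominal sizes:

 /* Minimal model only: "rope" memory is read-only program storage,
    erasable memory holds the variables and flag words that a DSKY
    entry could change in flight. */
 #include <stdint.h>

 static const uint16_t fixed_memory[36864] = { 0 };   /* program ropes (read-only) */
 static uint16_t erasable_memory[2048];                /* flags and variables       */

 /* a DSKY-style write: poke a value into an erasable address */
 void dsky_load_erasable(uint16_t address, uint16_t value)
 {
     erasable_memory[address % 2048] = value;
 }

 uint16_t read_word(int from_fixed, uint16_t address)
 {
     return from_fixed ? fixed_memory[address % 36864]
                       : erasable_memory[address % 2048];
 }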

Dimensions

At the top of the article it mentions that the dimensions of the AGC are 24" x 12.5", however from the pictures it looks more squarish, and I recall it being more like 8" x 8". Is there a source to confirm dimensions? Or maybe I am thinking of just the DSKY part of it? Logicman1966 (talk) 23:48, 9 February 2011 (UTC)Reply

You're thinking of the DSKY. Here is a ref for the Block I AGC dimensions, which are a little different from the dimensions given in our article: http://www.nasm.si.edu/collections/artifact.cfm?id=A19720340000 -agr (talk) 01:10, 10 February 2011 (UTC)Reply

In layman's terms...

> Block II had 32 kilowords of fixed memory and 4 kilowords of erasable memory.

I'd like to explain this in terms an ordinary person will understand - would it be fair to say: "approximately as much memory as a Commodore 64"?

Regards, Ben Aveling 10:57, 7 October 2011 (UTC)Reply
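
Roughly yes on total capacity, though most of the AGC's memory was read-only. Counting raw bits, as a back-of-the-envelope comparison only (word size per the article; the usual Commodore 64 figures are 64 KB RAM and 20 KB ROM):

 /* Rough comparison only. */
 #include <stdio.h>

 int main(void)
 {
     int  bits_per_word = 15;                            /* AGC data bits per word */
     long fixed_bits    = 32 * 1024L * bits_per_word;    /* ~60 KB, ROM-like       */
     long erasable_bits =  4 * 1024L * bits_per_word;    /* ~7.5 KB, RAM-like      */

     printf("fixed    ~%ld KB\n", fixed_bits / 8 / 1024);
     printf("erasable ~%ld KB\n", erasable_bits / 8 / 1024);
     /* versus a Commodore 64's 64 KB RAM and 20 KB ROM */
     return 0;
 }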

Standby

This is kinda confusing. So according to what it says now, it sounds as if full on used 70W and standby used 65W or 60W -- and I'm not sure that's correct. Also, can we get an inline reference there? 31.16.108.201 (talk) 23:38, 26 February 2012 (UTC)Reply

Factual errors

In the 1960s I was an engineer who worked on the AGC at Raytheon. I noticed four minor factual errors:

1. There were actually 2 different integrated circuits in the AGC. As stated, one was the dual RTL 3-input NOR gates used for the logic (made by Fairchild). In addition the memory used an integrated circuit sense amplifier (made by Norden).

2. The packaging of the ICs was not in flat packs, but metal TO-5 can-style packages (I forget the number of leads). I believe this was done for cooling reasons. The AGC was conduction cooled (no air in space) and the individual modules had magnesium headers with holes for the TO-5 cans, resistors, and other components.

3. Although wire wrap was used in the backplane connections (for module-to-module connections), within the (epoxy) potted modules the interconnections were welded wire. (Solder connections were viewed as too unreliable).

4. I challenge the photo of the erasable core memory. The erasable memory was not built of individual planes as pictured, but was a "folded" design which was also potted to form a module like the others, but in a "silastic" type material since the epoxy was too rigid and the cores couldn't stand any compressive force. Interestingly, each core had 4 tiny magnet wires threading through it. The requirement of no solder connections also applied to those magnet wires, each of which needed to thread several hundred tiny cores. In the process of manufacturing the "folded" stack, sometimes cores needed to be replaced (because they were defective), which meant painstakingly removing and replacing the 4 wires involved. In commercial memories where solder splices are allowed it's easy, but for the AGC memory the 4 wires needed to be replaced and re-strung. — Preceding unsigned comment added by 72.28.170.2 (talk) 19:01, 8 July 2012 (UTC)Reply

Regarding item 2, apparently the Block I version used can-style packages and the Block II used flat packs. See this document: http://klabs.org/history/history_docs/mit_docs/1675.pdf "Case History of the Apollo Guidance Computer" by Eldon Hall of MIT, which says on page 12 "Table II is a comparison of the Block II computer characteristics with those of Block I. ... it became [in the Block II] possible to increase the complexity of the equivalent circuit in each microcircuit element [integrated circuit] and also to change from the TO-47 package [a metal can with 6 leads coming out the bottom] to the flat package. ... The reduction in size [from the Block I to the Block II] is attributed mostly to the decrease in the number and size of the logic-gate packages, that is, from 4,100 TO-47 [can] packages to 2,800 flat packages." Also see figure 9 on page 18 which illustrates the packages and the internal circuitry. Differtus (talk) 07:21, 5 March 2015 (UTC)Reply
I mistrust the photo of the erasable memory as well. Neither the photo itself nor any NASA document seems to confirm, that this is actually a module from the AGC. To me it looks like the photo has been published here to stimulate ebay bids (see listing #332010866033). Opawaltrop (talk) 13:32, 31 October 2016 (UTC)Reply

EDRUPT Instruction

Found this on page 95 of the book "The Apollo Guidance Computer: Architecture and Operation", ed. Frank O'Brien: "Communicating with the outside world: the I/O system, The EDRUPT instruction"

It appears to explain how the EDRUPT instruction was used: for disabling interrupts, and it appears to have been used for implementing self-diagnostic tests.

When EDRUPT is run, all other interrupts are disabled and the ZRUPT register is loaded with the value of Z (the program counter); during the LM autopilot, the code for terminating the DAP cycle is run, then the RESUME instruction is run at the end to re-enable interrupts.

So that is roughly how the instruction is supposed to work (not an exact word-for-word quote, just a summary). Not verified on the AGC emulator yet, but this appears reasonable. 27.253.192.65 (talk) 15:21, 3 April 2013 (UTC)
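
A minimal C sketch of the behaviour described above, with invented names and no claim to match the real hardware or microcode:

 /* Sketch of the description above only. */
 static int interrupts_enabled = 1;
 static unsigned zrupt;              /* saved return address (the Z register) */

 void edrupt(unsigned z_program_counter)
 {
     interrupts_enabled = 0;         /* all other interrupts are inhibited */
     zrupt = z_program_counter;      /* ZRUPT := Z, so RESUME can return   */
     /* ...interrupt-level code runs here, e.g. ending a DAP cycle... */
 }

 unsigned resume(void)
 {
     interrupts_enabled = 1;         /* RESUME re-enables interrupts */
     return zrupt;                   /* control returns via ZRUPT    */
 }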

Only 11 opcodes?

The Wikipedia article claims there are only 11 opcodes. However, two of the links in the external links section clearly contradict that:

I believe 11 opcodes is for Block I; the Block II section of the article says 34 opcodes. Kaleja (talk) 17:07, 21 April 2015 (UTC)Reply
The op-code used a 3-bit field, so there were actually only 8 legitimate opcodes. This was extended to 11 using some sort of hack involving deliberate register overflows. The remainder of the 34 were instructions for the Colossus/Luminary virtual machine. Rhoark (talk) 22:18, 4 June 2015 (UTC)Reply
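
For illustration, the Block II arrangement is usually described as a 3-bit opcode field plus an EXTEND instruction that sets a one-shot "extracode" flag, selecting a second bank of meanings for the following instruction (Block I reached its extra codes via the overflow trick mentioned above). A C sketch of that decoding idea, not the actual hardware logic:

 /* Sketch of the Block II decoding scheme only; the Block I overflow
    trick is not modelled. */
 #include <stdint.h>

 static int extracode = 0;               /* set by EXTEND, cleared after one use */

 int decode_opcode(uint16_t word)
 {
     int opcode  = (word >> 12) & 07;            /* top 3 bits of a 15-bit word */
     int decoded = extracode ? 010 + opcode      /* extracode bank              */
                             : opcode;           /* basic bank                  */
     extracode = 0;
     return decoded;
 }

 void extend(void) { extracode = 1; }            /* next instruction is an extracode */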

Margaret Hamilton Hagiography

Margaret Hamilton has become somewhat of an icon for women in STEM in the past year or so, which is all well and good, but it has caused some dubious claims about her accomplishments to creep in. She is often called, without qualification, the leader of Apollo software design. This was not a responsibility she held from the beginning of the program, rather only becoming the leader for command module software sometime after Apollo 8, and gaining responsibility for LM software later still. Major architectural elements she has been given credit for, namely the virtual machine interpreter and the executive/waitlist system were not her creations. The virtual machine was designed by Hugh Blair-Smith. The executive/waitlist system was designed by J. Halcombe Laning and Richard Battin, based on a paper they wrote in 1956 and their experience with the WHIRLWIND machine. Hamilton certainly worked on implementing and applying these ideas, but there is no evidence that she originated these or similarly influential concepts. The main idea she originated, "Higher-Order Systems" was dead-on-arrival in academia [[4]] and industry [[5]]. HOS seems to bear some relation to the work of Dijkstra and Hoare that preceded it as well as systems like coq or Haskell's Arrows that came later, but neither building on the former nor influencing the latter.

The idea that she coined the term "software engineering" is possible, but dubious. When asked directly about it, she said only that she "began to use"[6] the term, not that she began its use. The most widely recognized confirmed use of the term was in a 1965 letter by the president of the ACM, when Hamilton was 27 years old and not yet prominent in the Draper lab. Douglas T. Ross claims to have been using it as much as a decade before that, which Hamilton likely would have been exposed to while they both worked on the SAGE system. As eventual team leader, there's no doubt that Hamilton was involved in industrial aspects of software engineering like configuration management and schedule prediction. However, these things were quite notoriously imposed on the Draper lab by Bill Tindall around the time of Apollo 1. A sort of mythology has built up that software at that time was mostly done by women because it was denigrated by the male hardware engineers, but this doesn't really fit the facts. It conflates the kind of work Hamilton did with the rope memory weavers, who were seen as akin to telephone operators. Hamilton was among a tiny minority of women actually writing software at that time. It's true electrical hardware engineers tended to be dismissive of the challenges of software, but this was not a gender-based divide. Male computer scientists had been contending with it just as much as the few female ones for a decade by the time Hamilton began to work on Apollo. Software design and quality assurance became a program priority driven top-down by George C. Mueller and Bill Tindall, not something practiced surreptitiously and unappreciated. Rhoark (talk) 23:32, 4 June 2015 (UTC)Reply

Not to parse the article too closely, but the article says that under Hamilton's direction a "sophisticated software interpreter" was developed. While the claims you are refuting may be being made elsewhere, only the interpreter leadership claim is made here. I suggest that unless the claim regarding the interpreter is in dispute, this tag be removed from the first instance in the article. Further, she was an early driving force for systems and realtime engineering, and indeed "The design principles developed by Hamilton for the AGC became foundational to "software engineering"" is a fair comment. So I believe that the second instance of the dispute tag should be removed. — Preceding unsigned comment added by 204.128.192.34 (talk) 21:20, 16 October 2015 (UTC)Reply
I originally wrote to generally debunk questionable claims both in the article and in the sources these claims come from. With respect to the claim that the interpreter was developed under Hamilton's leadership:
  • Hugh Blair-Smith claims to have independently written the YUL assembler for the interpreted language beginning in 1959 (before the Draper lab was even contracted to work on Apollo.) [7]
  • That doesn't say who was responsible for the design of the interpreter of the resulting pseudocode, but does imply there was enough of a specification existing by 1959 for Blair-Smith to begin the task.
  • Blair-Smith maintained YUL throughout the Apollo program.[8]
  • He also takes credit for "microprogramming" of the "instruction repertoire" which possibly refers to the interpreter backend, but I can't say for sure.
  • This 1961 manual for the Mod 3C computer[9] includes thorough description of the interpreter. It is authored by Ramon Alonso, J. Halcombe Laning Jr., and Hugh Blair-Smith.
  • According to the book Digital Apollo[10] the interpreter was the "brainchild" of Laning, as was the executive scheduler (which among other feats allowed the computer to not be overloaded by the Apollo 11 radar mishap.)
  • I can't find an exact date for when Margaret Hamilton began to work on Apollo, but the outside bounds are placed by her working on weather-predicting software in 1960 and becoming Apollo software lead in 1965.[11] Some of the intervening time was spent on the SAGE air defense system.
  • According to this 1976 history published by the Draper lab[12] Hamilton led a team that implemented designs handed down by senior engineers.
  • Hamilton became responsible for all command module software after Apollo 8[13], by which time foundational elements of the software architecture had long been established.

Taken together, I think this thoroughly establishes that however invaluable Hamilton's contributions were to the program, the interpreter could not possibly have been developed under Hamilton's direction, and it's extremely dubious that she influenced the interpreter or executive design in any significant way. Rhoark (talk) 03:46, 17 October 2015 (UTC)Reply

Infamous AGC4 Memo #9, Block II Instructions?

In what ways is the AGC4 Memo #9, Block II Instructions (listed as an external link), infamous? Were there significant errors in the documentation?69.69.101.185 (talk) 21:36, 30 November 2017 (UTC)Reply

See annotation to Chapter 11, pages 133-134 (Hugh Blair-Smith's Annotations to Eldon Hall's Journey to the Moon)

"So the (in)famous Memo #9, simply listing the instructions and what they did, was enough for the people that mattered, and not enough for the people who didn't matter."

--89.25.210.104 (talk) 19:25, 5 April 2018 (UTC)Reply

Before Block II

Before Block I

  • Transistor computers:

1959 - Mod 1A construction started

1960 - Construction of Mod 1B started, Mod 1A abandoned

1961 - Mod 1B completed

1962 - Mod 3C built, modified and completed into AGC3

  • IC computers:

1963 - AGC4

Block I

1964 - AGC5, AGC6

1965 - AGC7

Sources:

[14]:

3. Ramon L. Alonso et al., A Digital Control Computer Developmental Model 1B (1962, 12 Mbytes).

15. Hugh Blair-Smith, Annotations for Eldon Hall's book, Journey to the Moon: The History of the Apollo Guidance Computer (1997) ([15])

38. Eldon C Hall, MIT's Role in Project Apollo, Vol. III: Computer Subsystem (1972, 11 Mbytes).

[16], [17] (Additional materials) — Preceding unsigned comment added by 89.25.210.104 (talk) 18:48, 5 April 2018 (UTC)Reply

[18]: Includes a picture of AGC4. — Preceding unsigned comment added by Iandiver (talkcontribs) 18:06, 27 May 2023 (UTC)Reply

What case material

The pink/peach alloy used for the case - was it magnesium-aluminium - as stated by WSJ [19] ? - Rod57 (talk) 00:12, 16 September 2019 (UTC)Reply

How many were built?

I tried to find this information in the article, but couldn't find it. Obviously there must have been two AGCs for every manned mission to the moon, but how many were tested before / flown on unmanned missions / used for something else later? And those few the article mentions that were used for later projects, were they surplus left over from the Apollo program, or did Raytheon continue to build some more specifically? --BjKa (talk) 19:13, 14 April 2020 (UTC)Reply

Youtube Recommendation

For anyone interested in the AGC in general, or any detail of its workings and operation, I can highly recommend the videos of CuriousMarc, documenting a restoration of a privately owned AGC to working condition. --BjKa (talk) 19:13, 14 April 2020 (UTC)Reply

Other registers Incomplete

the list of registers in this article is incomplete. for a more complete list of registers see Appendix C in the book: "The Apollo Guidance Computer: Architecture and Operation" by Frank O'Brien. ISBN: 1441908773, 9781441908773 — Preceding unsigned comment added by 61.69.177.79 (talk) 07:10, 6 December 2020 (UTC)Reply

Welding vs soldering

I raised this issue in August 2023 since welding seemed wrong to me. I received the following note on my talk page:

Hello there, I am writing about ICs used in Apollo guidance computer - the photo, where I edited subtitle few days ago. The ICs are trully welded to PCB, because this connection was much more reliable than soldering. More info here: https://www.rit.edu/imagine/exhibit-extras/Apollo-Guidance-Computer-ImagineRIT-SKurinec.pdf 94.113.240.226 (talk) 06:33, 29 August 2023 (UTC)Reply

The relevant paragraph I added is:

According to Kurinec et al, the chips were welded onto the boards rather than soldered as might be expected.[1]

So surprisingly as it seems welded, not soldered is the correct term. Please don't revert unless you have citable evidence to the contrary. Martin of Sheffield (talk) 10:57, 21 December 2023 (UTC)Reply

this is very interesting.
however, the edit i reverted is still wrong: not because of welding vs. soldering (i was wrong about this point), but because it's not a PCB ("printed circuit board") - as the article itself states, and the PDF linked above by User:94.113.240.226 confirms, the logic boards were wire-wrapped. (the article also adds "and then embedded in cast epoxy"). this is distinctly different technology than PCB.
i was reverted once here by Martin of Sheffield, so i will leave correcting it (or not) to them.
peace. קיפודנחש (aka kipod) (talk) 16:15, 21 December 2023 (UTC)Reply
To quote the relevant page:
* Micrologic chips installed on "Logic Stick"
* Subassemblies (sticks) contain 120 chips (240 gates)
* Chips welded to multilayer boards
* Logic boards essentially identical
* Traditional circuit boards could not produce the necessary logic density
* Interconnections made through wire-wraps in the underside of the "logic tray"
Now as I understand it the chips were welded to multilayer boards, which must surely qualify as PCBs. I see 58 chips visible on the side nearest the camera; however, there appears to have been one in each row removed. Either the PCB is double-sided (unlikely with 1960s technology) or there are at least two PCBs making up a logic tray. The assembled PCB(s) are then connected (welded?) by wires to the wirewrap pins at the bottom of the photo. At that date it was common for standard PCBs (though in this case custom-made standards) to be brought out to wirewrap fields, which is where the real logic was implemented. Indeed on page 10 you can see a "Logic and interface" module consisting of a lot of these logic sticks, the underside of which would be a forest of wirewrapping.

References

  1. ^ Kurinec, Santosh K; Indovina, Mark; McNulty, Karl; Seitz, Matthew (2021). "Recreating History: Making the Chip that went on the Moon in 1969 on Apollo 11" (PDF). Rochester Institute of Technology. p. 9. Retrieved 29 August 2023.

Rendezvous radar on both CM and LM

The list of ports says that only the CM had a rendezvous radar, but I believe the LM also had a rendezvous radar, which was the cause of the famous 1201 and 1202 alarms in the Apollo 11 LM. 2A02:1406:11:AC2:0:0:E61:4D5A (talk) 10:22, 25 October 2024 (UTC)Reply

Good point. However both radars are on the LM, not the CM. The CM had a transponder to be detected by the radar on the LM. I'll change that. Also NASA documents don't refer to ports but call them inputs. StarryGrandma (talk) 23:33, 26 October 2024 (UTC)Reply