Diagram of Basic Principles Section

A diagram could help clarify and remove ambiguity. I'm unclear about what components there are, the relationships between them, and the physical connections between them. Rsamot (talk) 03:21, 12 June 2012 (UTC)

First channel

The first channel is usually considered to be the IBM 709's model 766 data synchronizer, shipped in about 1957. The 7090, announced in 1958 and shipped in 1959, supported the 766 and also the more advanced 7607 channel and 7606 multiplexor. These both significantly predate the 6600, which didn't ship until 1964. See http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP7090B.html John L 20:36, 16 September 2007 (UTC)

Odd question

I encountered an odd question embedded in a comment within the article that would have been better put on the talk page. The comment read:

huh|date=April 2008 … it doesn't use main memory? why?

Independent processors and coprocessors can't use main memory as scratchpad because it would interfere with the operation of programs already in main memory. They have to have independent counters and memory for addressing that doesn't conflict with the main unit.

However, DMA controllers will read the desired data directly into main memory or write the indicated data directly from main memory.

Does that clarify?

regards, --UnicornTapestry (talk) 02:28, 7 January 2009 (UTC)

It was my question. The article states that all I/O channels have their own memory ("on-board", whatever that is supposed to mean; to me it implies private memory). All of them, not just some. That's a pretty strong assertion, and it seems very doubtful. All coprocessors (and especially I/O channels) could easily use areas of main memory for all their needs, provided the OS manages the memory properly. A coprocessor never needs to rewrite all of main memory, if that's what you mean by "scratchpad". Since I don't want to introduce WP:OR, I left the mark in the article. I kindly ask that the mark be removed only after the thing is either (1) explained or (2) corrected. --Kubanczyk (talk) 11:26, 8 January 2009 (UTC)
I see. The original author used coprocessor as an analogy, although in my opinion that was an inadequate explanatory shortcut; subprocessor is a more accurate term. That could be better written. IBM often used the term 'subprocessor', and CDC used both that term and a more inclusive one, 'peripheral processor'.
"Scratchpad memory" is working storage, and your use of 'private memory' is accurate in that it's separate from main memory. In channel design, it's not easy or appropriate to share main memory, especially as only a few bytes are involved.
To clarify, channels have access to main memory, but they don't use main memory, if you follow me. It's easier and more robust to separate. Indeed, trying to carve a chunk of working storage out of main memory could invoke numerous design problems. (We're not including mention of the CAW and CSW here.)
"Onboard", as used in the article, means that memory is packaged with the subprocessor. Usually, you see 'onboard' used when memory is included as part of a chip, but the author didn't mean that narrowly.
Introducing cite and WP:OR aren't appropriate here. The engineering involved is reasonably consistent and well understood (although not explained in detail). The original author does imply channels contain their own memory (as needed), and I have to agree in that I'm unaware of channel designs that utilize main memory rather than internal storage. (We're not discussing early personal computer designs that set aside part of main memory for video – that's entirely different.)
I'll be glad to answer further questions if I can.
--UnicornTapestry (talk) 12:41, 8 January 2009 (UTC)
Thanks for the explanations. Could you enrich this article with one sentence explaining the fact that a channel processor needs working ("scratchpad") memory, and the reason it is easier to implement the scratchpad outside of main memory? I'm unable to do it. --Kubanczyk (talk) 21:29, 12 January 2009 (UTC)
Hi. I took a run at it. Let me know what you think.
I recently came across a PDF of an architecture paper by Gene Amdahl and Fred Brooks that discussed channel design. It was part of a larger article, but I found it interesting that they underlined the architectural independence of the channels.
best regards, --UnicornTapestry (talk) 07:42, 14 January 2009 (UTC)
It's perfect. Thanks. --Kubanczyk (talk) 22:20, 14 January 2009 (UTC)
Actually, they can and do use main memory, but most use separate scratchpads to improve performance. Shmuel (Seymour J.) Metz Username:Chatul (talk) 18:58, 9 December 2019 (UTC)
Multiplexer channels have to remember, somewhere, the state of all active channel operations. Selector channels can have only one active operation, so there is less to remember. That data, as far as I know, is channel-internal. Well, for smaller systems that integrate channels with CPU hardware, there is "local storage", where it might go. Otherwise, it is CCWs and actual transfer data that go from/to main storage. Gah4 (talk) 22:54, 12 May 2020 (UTC)
For processors, and I suspect also for channels, IBM calls it local storage. The low-end processors have a small(er) core storage unit used for processor state, such as registers, the PSW, and I suspect also for channel state. The channel needs things like the address where data is next read/written, and the count for how much to read/write. Multiplexor channels need this for each subchannel, and the expectation is that they don't need it often (on a CPU time scale), so it should be stored somewhere other than processor registers. The other possibility is reserving some part of main memory, and then disallowing that for ordinary use. Gah4 (talk) 19:13, 26 May 2020 (UTC)
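The per-subchannel bookkeeping described above (a data address and a remaining count, kept somewhere other than processor registers) can be sketched in toy form. This is only an illustration of the idea; the class and field names are invented here, not IBM terminology:

```python
from dataclasses import dataclass

@dataclass
class SubchannelState:
    """Hypothetical 'local storage' record: what a multiplexor channel
    must remember, per subchannel, between byte transfers."""
    data_address: int   # main-storage address of the next byte
    count: int          # bytes remaining in the current operation

class MultiplexorChannel:
    """Holds one state record per active subchannel; a selector channel
    would hold only one such record at a time."""
    def __init__(self):
        self.subchannels = {}

    def start(self, subchannel, address, count):
        self.subchannels[subchannel] = SubchannelState(address, count)

    def transfer_byte(self, subchannel):
        # Advance the saved address/count by one byte; discard the
        # record when the operation completes.
        st = self.subchannels[subchannel]
        st.data_address += 1
        st.count -= 1
        if st.count == 0:
            del self.subchannels[subchannel]

chan = MultiplexorChannel()
chan.start(0x0E, address=0x1000, count=2)
chan.transfer_byte(0x0E)
chan.transfer_byte(0x0E)   # second byte completes the operation
```

The point of the sketch is only that the state lives with the channel (or in local storage) rather than being held in processor registers while the device is idle.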

Too IBM Specific?

IMO Sections 4 thru 6 of this article have recently become too specific to IBM Channels, as opposed to I/O Channels in general, e.g. recent edits by User:Peterh5322. Perhaps the simple thing to do is make these three current sections into subsections of one new Section "4. IBM Channel I/O as an example". Tom94022 (talk) 18:25, 23 December 2014 (UTC)

This is a good place to continue the relevant part of the discussion in Talk:Execute Channel Program#Duplicate info:
  1. Rewrite lede. In particular, channels were generally implemented with simple custom hardware rather than by programming a processor to perform channel functions, and even after the IBM System/360, high-end processors continued to have hard-wired channels for over a decade. Peripheral processors are not channels; in fact, peripheral processors use channels.
  2. Move S/360 specific information to a new section and rewrite the overview to be generic. In particular, not all channels use channel programs.
  3. Remove software information from Channel I/O#Channel programs in virtual storage systems, add architecture summaries and references, e.g., to IBM System/360 architecture#Input/Output. Include Indirect Data Address List (IDAL)[1] and the ECPS:VSE feature[2] optionally used by DOS/VSE.
  4. Add sections for IBM 7030 Stretch, for 1410 and 7010, for 707x, for 7080, for 709, 709x and 704x
  5. Add sections for other vendors.
    1. CDC
      1. 924, 1604 and 3000 series
      2. 6000 series
      3. 7600
      4. CDC Cyber
    2. GE/Honeywell/Groupe Bull - at least GE-600 series/Honeywell 6000 series
    3. UNIVAC - at least UNIVAC 1100/2200 series
  6. Split Channel I/O#Booting with channel I/O into a generic section and a S/360-specific section.
Shmuel (Seymour J.) Metz Username:Chatul (talk) 18:45, 9 December 2019 (UTC)
While channel is the IBM name, maybe used by some others, we should probably be general enough, with a redirect, to allow for CDC processors and other dedicated I/O systems. I thought that some channels were microcoded, but I didn't follow them in that much detail. I do remember the story of recycling 370/158 processors, with new microcode, for the channels of later processors. Gah4 (talk) 07:16, 26 May 2020 (UTC)
Channel is the word that pretty much the entire industry used until the PC grabbed all the mindshare. However, the details varied even between products[a] from the same vendor, which is why I would like to see specifics for more than just the IBM channel architectures.
CDC Peripheral Processors used channels and CDC called them channels.
The low-end System/360 models implemented channels via cycle stealing; the same repertoire of microinstructions was used for implementing S/360 instructions and S/360 channels. The high-end models used hardwired outboard channels: 2860, 2870 and eventually 2880. The same was true for the System/370; the 165, 168 and 195 had hardwired outboard channels, everything else was cycle-stealing microcode.
The 3031, 3032 and 3033 all used a 3158 engine running essentially the same channel microcode to implement channels. The 3031 had a second 3158 running the CPU microcode; the 3032 and 3033 were re-engineered 3168 engines running essentially the same microcode as the 3168.
The 3081, 3083 and 3084 had new channel implementations; I don't know the details. Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:46, 26 May 2020 (UTC)

We might look to Smotherman's "A Survey and Taxonomy of I/O Systems" along with his cited references as a reliable starting point for revising this article. Smotherman also suggests a rewrite of Input/Output is in order. Tom94022 (talk) 17:12, 26 May 2020 (UTC)

Should the chronology be based on design date or ship date, e.g., the IBM 709 shipped first, but some of its novel features derived from the 7030, which shipped several years later.
The lede of Input/Output certainly has information that should be moved to the body of the article, and there are WP:NPOV issues. Does Smotherman have specific recommendations? Shmuel (Seymour J.) Metz Username:Chatul (talk) 18:45, 26 May 2020 (UTC)

Notes

  1. ^ Or, in the case of CDC, within a single product. The 7600 had two kinds of channel, one used between a CP and a PP and one used between a PP and a peripheral device.


References

  1. ^ "CHANNEL INDIRECT DATA ADDRESSING", IBM System/370 Principles of Operation (PDF) (Eleventh ed.), September 1987, pp. 13-45–13-46, GA22-7000-10
  2. ^ IBM 4300 Processors Principles of Operation for ECPS: VSE Mode (PDF) (Second ed.), September 1980, GA22-7070-1
External links modified

Hello fellow Wikipedians,

I have just modified 2 external links on Channel I/O. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 09:43, 14 January 2018 (UTC)

Multiplexer vs Multiplexor

I know there have been discussions of this elsewhere. Even IBM has been inconsistent; the original IBM System/360 System Summary (1964) uses "multiplexor".[1]

On the other hand, this article is not IBM-specific. The Google Ngram Viewer shows "multiplexer" with a significant lead—at one point 2 to 1 and now about 10 to 1.[2] Perhaps we should use "multiplexer" consistently and then list "multiplexor" as an alternate spelling. Comments? Peter Flass (talk) 21:25, 19 January 2018 (UTC)

I put one in the talk page for CTCA, as adaptor vs. adapter. More recently I found one related to power plugs and sockets, again adaptor vs. adapter. There is a WP:MOS guideline related to British vs. American English. In that case, one is supposed to be consistent and continue on with the existing form. There might also be one related to era-appropriate usage. When did IBM make the change? In the case of CTCA, I put in the redirect, but that is only needed when it is in the title. My thought here is to consistently use adaptor, as IBM did over most of the years when the subject was being developed. Gah4 (talk) 21:35, 19 January 2018 (UTC)
At least one RS regarding adapter/adaptor states that "They are used interchangeably in all varieties of English and in all their meanings, ..." A Google search of the Wikipedia site shows about a 2:1 usage of adapter vs adaptor, which suggests that for Wikipedia this is a matter of style. If so, then MOS:STYLERET should apply; that is, either spelling is acceptable unless there is a "substantial reason" for a change. IMO, consistency within an article might be a substantial reason. Tom94022 (talk) 20:28, 20 January 2018 (UTC)


  1. ^ IBM Corporation (1964). IBM System/360 System Summary (PDF). p. 9. Retrieved Jan 19, 2018.
  2. ^ Google. "Ngram Viewer". Retrieved Jan 19, 2018. {{cite web}}: |last1= has generic name (help)

Duplicate info

See #Too IBM Specific? and Talk:Execute_Channel_Program#Duplicate_info for discussion. Peter Flass (talk) 16:19, 6 December 2019 (UTC)

implied CCW

The implied CCW description doesn't match what I thought I knew. As far as I know, there is an implied CCW at location 0 that does the 24-byte read into location 8. I believe that a possible implementation (maybe used in some machines) is for the IPL logic to write an actual CCW into location 0 and SIO it. That is, I believe that IBM documented it that way somewhere. Since the real or implied CCW will be overwritten, there is normally no way for the user to know. I suspect that if it fails, it might be visible in storage after the failure. Gah4 (talk) 23:01, 12 May 2020 (UTC)

It does read a little strange. I'll revise it, unless someone else wants to. Peter Flass (talk) 23:29, 12 May 2020 (UTC)
I've modified that section to fix errors and streamline the text. I've also added comments there and elsewhere about material that belongs in software articles. The whole section is 360-centric, not just IBM-centric, and really needs to be split into a generic section and architecture-specific subsections. Shmuel (Seymour J.) Metz Username:Chatul (talk) 01:44, 13 May 2020 (UTC)
That does look better. Seems to me that the discussion should be about as OS-independent as it is in Principles of Operation. Note, for example, that there is no rule in the IPL system that requires one to IPL an OS. One can IPL, traditionally with the three-card loader, an object program directly from cards, or card images. Especially, note the suggestion that the second IPL record is large. As far as I remember it, at least for OS/360, IPL text comes on cards and is written to disk by IBCDASDI. I would expect, then, for it to still be in 80-character records (blocks). If one IPLs a card reader, they are definitely in 80-character records. I suppose it is OS/360-centric, but it didn't change all that much later. Gah4 (talk) 02:24, 13 May 2020 (UTC)
In fact, ZZSA is an example of a stand-alone utility traditionally run from DASD.
The loader for cards is, as you note, rather short. The loaders for DASD resident operating systems are somewhat larger, but considering that they were single records (R2) that fit on a 2311 track, the size was rather limited (but much larger than 80). IBCDASDI, IEHDASDR, ICKDSF et al write a 24 byte R1, construct R2 from the IPLTEXT cards in its input, write a volume label in R3 and create a VTOC wherever you tell them.
The OS/360 IPLTEXT may or may not have changed much, but it's very different from the, e.g., CP/67, DOS/360, Linux, TPF, TSS/360, IPLTEXT, and therefore the descriptions belong in articles on the respective operating systems.
The reference to a wait state also relates to software rather than to hardware. The DASD initialization utilities create a wait state PSW in bytes 0-7 of R1 when there is no IPLTEXT. If a DASD volume really is uninitialized, I would expect the IPL to fail as described in PoOps: When the I/O operations and psw loading are not completed satisfactorily, the CPU idles, and the load light remains.
Shmuel (Seymour J.) Metz Username:Chatul (talk) 04:08, 13 May 2020 (UTC)
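The IPL sequence debated above can be illustrated with a toy model. This is a deliberately simplified sketch of the usual S/360 description (an implied CCW reads 24 bytes containing a PSW and two CCWs, and chaining continues with the CCW at location 8); the device-record contents are invented, and CCW decoding is omitted entirely:

```python
# Toy main storage; real machines are larger and byte-addressed the same way.
storage = bytearray(256)

def ipl(device_record):
    """Model of the implied CCW: read the first 24 bytes of the device
    record into storage starting at location 0. Bytes 0-7 hold the PSW
    that is loaded when the chain ends; bytes 8-23 hold two CCWs with
    which command chaining continues (not interpreted here)."""
    storage[0:24] = device_record[:24]
    psw  = bytes(storage[0:8])
    ccw1 = bytes(storage[8:16])
    ccw2 = bytes(storage[16:24])
    return psw, ccw1, ccw2

# Invented record: 24 header bytes followed by the rest of the IPL text.
record = bytes(range(24)) + b"rest of the IPL record"
psw, ccw1, ccw2 = ipl(record)
```

As noted in the thread, whether the first CCW is truly implied or is written into storage and started with an SIO is invisible afterward, since the read overwrites it either way.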

First non-IBM channel

Channel I/O#History states One of the earliest non-IBM channel systems was hosted in the CDC 6600 supercomputer in 1965. However, both the CDC 1604 and UNIVAC 1107 were well before that. Shmuel (Seymour J.) Metz Username:Chatul (talk) 13:02, 31 March 2021 (UTC)

WP:OR - Reason small machines usually lack channels

Channel I/O#History states However, with the rapid speed increases in computers today, combined with operating systems that don't 'block' when waiting for data, channel controllers have become correspondingly less effective and are not commonly found on small machines. That text has unsubstantiated conclusions:

  1. That Channel I/O is in fact less effective
  2. That the causes are those claimed rather than, e.g. the common use of DMA.

Shmuel (Seymour J.) Metz Username:Chatul (talk) 13:08, 31 March 2021 (UTC)

Sorry for the slow response. Is this supposed to be small machines 60 or 50 or 40 years ago?
The IBM PC/XT uses DMA for the floppy disk, but programmed I/O (a copy loop) to/from the disk controller buffer for the hard disk. DMA was too slow for hard disks. S/360 disk controllers have pretty much no buffer: each byte goes to/from the channel and storage. Mostly that is due to memory costs. With high-speed semiconductor memory, it is easy to put big buffers into things, which completely changes the need for channels. Without actually doing calculations, the relative speeds of CPU, memory, and I/O devices change the need for buffers, and especially for separate channels. Another way to look at it is that many I/O cards include the logic that might have been channels 50 years ago. Channels mean that logic only goes in once (or a few times), instead of on each device. OK, so it is mostly VLSI technology, and less speed increase, that changes the needs. Gah4 (talk) 08:47, 24 May 2023 (UTC)
I have no idea who wrote that sentence or what they meant by small, but it looks bogus to me and is certainly not true for any mainframe since the late 1950s.
The old PC disk controllers may have required programmed I/O (copy loops), but, e.g., SCSI controllers, use DMA.
The S/360 DASD were mostly unbuffered, but buffered controllers were available on S/370 and became the norm with the IBM 3990.[1] -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 11:58, 24 May 2023 (UTC)
If you look back on how they did things over time, it is sometimes amazing that any of it worked as well as it did. And still, the amazing thing about S/360 was the wide performance range for processors, but also for I/O devices. Disk controllers buffered one byte, because it was too expensive to do more. Well, there is both a data buffer and a CCW buffer, both of which were one. By the 3880 and then the 3990, they could do more buffering. I think the first SCSI host controller I had, the Seagate ST01, didn't use DMA, as it is very simple and cheap. Microcomputer I/O systems have been constantly getting faster, with both DMA and programmed I/O. But okay, the 3990 in about 1990. By then, PC disk controllers all had a block buffer, as the bus wasn't really designed for that I/O rate. I am not so sure about minicomputers in the 1970s and 1980s time frame, as big computer prices were also coming down. (At least for the same processing speed.) Oh well. Gah4 (talk) 07:29, 25 May 2023 (UTC)
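The contrast drawn in this thread between a programmed-I/O copy loop and DMA can be sketched in toy form. The controller class and all names here are invented for illustration; no real controller interface is being described:

```python
class FakeController:
    """Stand-in for a disk controller with a sector buffer. With DMA it
    deposits the buffered data into memory itself; with programmed I/O
    the CPU must copy the buffer out word by word."""
    def __init__(self, buffer):
        self.buffer = buffer

    def dma_transfer(self, memory, dest):
        # The controller moves the data; the CPU is not in the loop.
        memory[dest:dest + len(self.buffer)] = self.buffer

def programmed_io_read(controller, memory, dest):
    # PC/XT hard-disk style: the CPU itself executes the copy loop.
    for i, word in enumerate(controller.buffer):
        memory[dest + i] = word

def dma_read(controller, memory, dest):
    # The CPU merely starts the operation and handles completion.
    controller.dma_transfer(memory, dest)

memory = [0] * 16
ctl = FakeController([1, 2, 3, 4])
programmed_io_read(ctl, memory, dest=0)
dma_read(ctl, memory, dest=8)
```

Either way the same data lands in memory; the difference the thread is debating is which component spends its cycles moving it.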


References

  1. ^ IBM 3990 - Storage Control - Introduction (PDF) (First ed.). IBM. September 1987. GA32-0098-0. Retrieved May 24, 2023. {{cite book}}: |work= ignored (help)

Virtual storage

Talk:Channel I/O#Channel programs in virtual storage systems is oriented towards IBM OS/VS in general and MVS in particular. It does not describe systems such as DOS/VSE[1] with ECPS:VSE, where the channel controller handles translation of virtual addresses in the CCWs, nor does it describe channel I/O on capability-based systems. Shmuel (Seymour J.) Metz Username:Chatul (talk) 02:37, 2 May 2021 (UTC)

Microcode assist is funny, almost like cheating. I tend to think that we should follow the normal Principles of Operation descriptions, though I suppose an appropriate section on microcode assist, and how it allows things to speed up, could be added. Gah4 (talk) 08:25, 2 May 2021 (UTC)
I'm not sure that I'd refer to ECPS:VSE[2] as an assist, since it's a total replacement of the CPU and I/O microcode, different enough to warrant its own principles of operation, albeit ECPS:VSE mode shares a lot of microcode with S/370 mode.

References

  1. ^ Introduction to DOS/VSE (Seventh ed.), IBM, January 1979, GC33-5370-6 {{cite book}}: |work= ignored (help)
  2. ^ IBM 4300 Processors Principles of Operation for ECPS:VSE Mode (PDF) (First ed.), IBM, January 1979, SA22-7070-0

Recent edits

@Maury Markowitz: In a recent edit to Channel I/O#history, Maury Markowitz changed the text Channel architecture avoids this problem by using a logically independent, low-cost facility. to Channel architecture avoids this problem by using a logically independent, low-cost processor dedicated to the I/O task. I reverted the change with the comment "it wasn't always a processor" and he reinstated his change. On some computers the I/O channels were implemented with cycle stealing from the CPU, and the only dedicated hardware was a few registers and an interface to the channel bus. Perhaps facility is too terse, but processor is misleading at best. Perhaps facility with a footnote?

He also added two {{clarify}} templates. I added wikilinks to channel commands and command chaining; is that sufficient to remove the {{clarify}} templates?

Is it appropriate to add citations for channel commands and chaining, and, if so, how many would be reasonable? --Shmuel (Seymour J.) Metz Username:Chatul (talk) 20:15, 24 December 2021 (UTC)

I did not re-instate the change. Check the DIFFs. You expressed your concern about the use of the term "processor", and I took that under consideration. The word "facility" is meaningless in this context and has to be replaced.
As to the clarification-neededs, both are terms that remain undefined at the point of encounter. The term "data channel command" does not appear anywhere else in the document. The concept is defined in the "channel command words" section, but this is a platform-specific term. Chaining is defined using IBM-centric terminology and the description is completely confusing and offers no clue why it is mentioned in the History section.
The article needs a complete re-write, but I'm caring for a 5-yr-old over the holidays so it will have to wait. Maury Markowitz (talk)
I suggest the current edit is better, but the section needs work - I don't think the 2860, 2870 or 2880 are "simple" and they are quite powerful, probably powerful enough to be used as a computer. Ditto I think for the CDC peripheral processors. I'll start on it.
I think the wikilinks to channel commands and command chaining would be sufficient to remove the clarify templates - I don't see any?
While it is rarely inappropriate to add citations, I suggest in this case they are better in the linked channel commands and command chaining locations. Tom94022 (talk) 20:58, 24 December 2021 (UTC)
The I/O channels on, e.g., UNIVAC 1107, are much simpler[a] than on S/360, and S/360 models[b] smaller than the 360/65 use cycle stealing rather than sophisticated dedicated hardware.
Channels on the CDC 6600 are extremely simple and didn't even do counting or DMA. Each of the 10 Peripheral Processors has access to all 12 channels. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:05, 26 December 2021 (UTC)
@Chatul: Thanks for your recent edits. I think we have a semantic problem in the meaning of "Channel I/O", see Smotherman's A Survey and Taxonomy of I/O Systems for several alternative definitions. Right now there is no RS for the term. Under at least one definition the CDC PP and attached channels could be called "Channel I/O." I think we need to work on the definition of Channel I/O perhaps along the line of Blaauw and Brooks or some other reference. Tom94022 (talk) 17:36, 26 December 2021 (UTC)
Story is that the channels on the 3032 and 3033 are recycled 370/158 processors with new microcode. I don't know any of the details, though. As leases were ending, presumably a lot of hardware came back, and the ability to reuse it made sense. Gah4 (talk) 18:17, 27 December 2021 (UTC)
Yes, the 3031, 3032 and 3033 use a 3158 with only the channel microcode. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 00:14, 28 December 2021 (UTC)
Over the past half century, quite a few hardware vendors have used the term channel, and I'm not aware of one that has included computational abilities as part of its definition, although the IBM 7909[1][2] did have very basic arithmetic capabilities. In particular, CDC never referred to a PP as a channel. Would it be appropriate to add an {{Expert needed}} tag? Should I add citations for the channels on the 7090, UNIVAC 1107 and CDC 6600?
The paper you cite does not give any definition for I/O channel. However, I note that it claims an earlier origin than the 709. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 19:12, 26 December 2021 (UTC)
The paper cites two possible sources for a definition of "Channel I/O": an unreferenced textbook definition as #4 of 4 methods of data transfer, and Blaauw and Brooks (B&B) as #5 of 7 methods of data transfer. An example of the textbook definition begins on p. 331 of The Essentials of Computer Organization. This is the B&B definition:

8.5 Channel - Concept
A channel is a peripheral processor so specialized for the transmission of I/O data that it cannot perform the normal main-processor operations, even at reduced power or efficiency. Since transmission is most of I/O activity, a channel does most of the I/O work, in terms of time fraction. ... In contrast to the PPU, however, the channel has no explicit memory space of its own, the channel program therefore resides in main memory.

Computer Architecture, Blaauw and Brooks, Chapter 8 Input/Output

Note B&B leave out the standard interface that is usually associated with a channel, and the B&B description is very IBM-oriented. Depending upon which definition we use, the article may have to be edited. I don't think an Expert needed tag would help. Tom94022 (talk) 21:36, 26 December 2021 (UTC)
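The distinguishing point in the B&B definition quoted above (the channel has no memory space of its own, so the channel program resides in main memory) can be rendered in toy form. The dictionary-based CCW representation here is invented for illustration and is not any vendor's format:

```python
# Toy main storage: address -> command word. The "channel" below owns
# no program memory; it fetches each command from main storage, and
# chaining supplies the address of the next command.
main_memory = {}

def run_channel_program(start_addr):
    executed = []
    addr = start_addr
    while addr is not None:
        ccw = main_memory[addr]
        executed.append(ccw["command"])
        addr = ccw.get("chain")   # None ends the chain
    return executed

# A three-command chain living entirely in main storage.
main_memory[0x100] = {"command": "SEEK", "chain": 0x108}
main_memory[0x108] = {"command": "SEARCH", "chain": 0x110}
main_memory[0x110] = {"command": "READ", "chain": None}
```

This captures the B&B distinction from a PPU: the channel interprets a program, but the program's storage belongs to the main processor's memory space.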

References

  1. ^ Reference Manual IBM 7090 Data Processing System (PDF) (Fifth ed.). IBM. March 1962. A22-6528-4.
  2. ^ IBM 7094 Principles of Operation (PDF) (Fifth ed.). IBM. October 21, 1966. A22-6703-4. {{cite book}}: |work= ignored (help)

Organization of article

I believe that #Types of channels should be organized as

  1. Minimal
    1. CDC 6600
      • Channel has no DMA
      • Channels assembles/disassembles 12-bit bytes
  2. single block
    1. 7607 on IBM 7090
    2. CDC 3600
    3. UNIVAC 1107
  3. chained
    1. IBM 7030
    2. IBM 7070
  4. programmed
    1. 7909 on IBM 7090
    2. IBM System/360 and successors
      1. Parallel channels
        1. Byte Multiplexor channels
        2. Selector channels
        3. Block multiplexor channels
      2. ESCON
      3. FICON

I'm not sure what to do about Externally Specified Index (ESI) on, e.g., UNIVAC 490. I also don't know how many examples to give of each type.

Similarly, #Channel program should be split into separate sections. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 19:12, 3 February 2022 (UTC)
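The 12-bit byte assembly/disassembly noted for the CDC 6600 in the outline above can be shown concretely: five 12-bit PP bytes pack into one 60-bit central-memory word. A toy sketch, not CDC code:

```python
def assemble_word(pp_bytes):
    """Pack five 12-bit bytes (most significant first) into a 60-bit word."""
    assert len(pp_bytes) == 5 and all(0 <= b < 4096 for b in pp_bytes)
    word = 0
    for b in pp_bytes:
        word = (word << 12) | b
    return word

def disassemble_word(word):
    """Split a 60-bit word back into five 12-bit bytes."""
    return [(word >> shift) & 0xFFF for shift in (48, 36, 24, 12, 0)]
```

The sketch only illustrates the width arithmetic; the actual assembly/disassembly happened in channel hardware, not software.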

303x

The article says: On the 303x processor complexes, IBM abandoned that implementation and used the same cycle-stealing implementation as on the 370/158. No subsequent product in the System/360 line had hardwired channels. As I understand it, they used an actual (returned from lease) 370/158 with new microcode for, at least, the 3033, and I believe the 3032. Maybe not the 3031. So it isn't cycle stealing, as it is a separate processor once it has the new microcode. Gah4 (talk) 23:16, 30 April 2022 (UTC)

Off the top of my head:
The 3031, 3032 and 3033 used repurposed 3158-3 processors without the S/370 CPU microcode; the 3031 had a second 3158-3 without the channel microcode; the 3032 had a 3168 without the channel microcode; and the 3033 had a re-engineered version of the 3168 with denser chips.
Whether that was cycle stealing depends on how you treat multiple microprograms stealing cycles from an idle loop. You could also treat it as each channel microprogram running as an interrupt handler, although that's not how IBM describes it. --Shmuel (Seymour J.) Metz Username:Chatul (talk) 16:52, 1 May 2022 (UTC)
I've replaced that with On the 303x processor complexes, the channels were implemented in independent channel directors in the same cabinet as the CPU, with each channel director implementing a group of channels. with a reference from an IBM document. I wouldn't call that "cycle stealing" in the "is the CPU involved in data transfer?" sense, which I think is the most meaningful sense; maybe different channels have to share the 3158-3's hardware, but the CPUs don't - they have their own hardware to run S/370 instructions. Guy Harris (talk) 07:40, 24 May 2023 (UTC)

India Education Program course assignment

  This article was the subject of an educational assignment supported by Wikipedia Ambassadors through the India Education Program.

The above message was substituted from {{IEP assignment}} by PrimeBOT (talk) on 20:07, 1 February 2023 (UTC)