Talk:Video display controller
Difference to RAMDAC
What is the difference with a RAMDAC? 78.185.183.73 (talk) 01:39, 18 December 2009 (UTC) mehmet
- Not the same thing at all. A RAMDAC is only used to generate three analog signals (the red, green, and blue video signals), while also acting as the color palette. That is, the digital color values coming from video memory go to the address lines of a RAM inside the RAMDAC, and the output of the addressed RAM cell goes to the input of the DAC. So changing the contents of the RAM changes which color is displayed for each binary value fed into the RAMDAC.
- A video display controller, in contrast, does everything needed to generate a video signal: it produces the horizontal and vertical sync pulses and the blanking signals, and it controls and reads out the video memory so that the right memory locations are sequentially read and sent to a serializer to create a video signal. A display controller may or may not also incorporate a palette mechanism and/or a binary-to-video-signal converter (DAC), and it may also incorporate many other things, such as support for "player missiles"/"sprites" and/or bit-blitting functionality. Mahjongg (talk) 20:31, 18 December 2009 (UTC)
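To make the contrast above a bit more concrete, here is a rough C sketch of the two data paths as they are described in this thread. All type and function names, the palette size, and the display dimensions are invented for illustration; this is not modeled on any particular chip.

```c
#include <stdint.h>

/* RAMDAC side: a pixel value from video memory addresses a small palette RAM,
   and the selected entry feeds three DACs (red, green, blue). */
typedef struct { uint8_t r, g, b; } PaletteEntry;
typedef struct { PaletteEntry ram[256]; } Ramdac;

/* Rewriting a palette entry changes what color that pixel value produces. */
static void ramdac_write_entry(Ramdac *dac, uint8_t index, PaletteEntry color) {
    dac->ram[index] = color;
}

/* Per pixel: look up the pixel value; the three fields of the result would
   drive the three DACs to produce the analog RGB levels. */
static PaletteEntry ramdac_lookup(const Ramdac *dac, uint8_t pixel_value) {
    return dac->ram[pixel_value];
}

/* Display-controller side: step through video memory in scan order, serialize
   each byte into pixels, and generate sync/blanking at line and frame
   boundaries.  Dimensions are arbitrary. */
enum { LINES = 192, BYTES_PER_LINE = 32 };

static void scan_out_frame(const uint8_t *vram,
                           void (*emit_pixel)(int level),
                           void (*hsync)(void),
                           void (*vsync)(void)) {
    for (int line = 0; line < LINES; line++) {
        for (int b = 0; b < BYTES_PER_LINE; b++) {
            uint8_t byte = vram[line * BYTES_PER_LINE + b];
            for (int bit = 7; bit >= 0; bit--)      /* serializer, MSB first */
                emit_pixel((byte >> bit) & 1);
        }
        hsync();    /* horizontal sync pulse and blanking between scanlines */
    }
    vsync();        /* vertical sync pulse and blanking between frames */
}
```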
Requested move
- The following discussion is an archived discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.
The result of the proposal was moved. --BDD (talk) 23:35, 21 August 2013 (UTC)
Video Display Controller → Video display controller – Please put your reason for moving here. Tony (talk) 14:35, 14 August 2013 (UTC)
I might be wrong, but this is not an architecture or a protocol, but just a component of a larger device—there seems no reason to cap it.
Per WP:MOSCAPS ("Wikipedia avoids unnecessary capitalization") and WP:TITLE, this is a generic, common term, not a proprietary or commercial term, so the article title should be downcased. In addition, WP:MOSCAPS says that a compound item should not be upper-cased just because it is abbreviated with caps. Lowercase will match the formatting of related article titles. Tony (talk) 14:35, 14 August 2013 (UTC)
- Support: this is clearly a generic term, not a proper noun. W Nowicki (talk) 20:00, 19 August 2013 (UTC)
- Support per nom. bd2412 T 20:24, 21 August 2013 (UTC)
- The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.
Removing the Fujitsu MB14241 and the grossly exaggerated claims made for it—an explanation
I've removed the mention of the Fujitsu MB14241 from this article, because it simply doesn't belong here. However, for future reference, I think I should explain why it doesn't belong and what its actual purpose was. Although it was listed under "video shifters", it does not meet the definition given here for a video shifter. Nor does it come remotely close to the complexity and functionality of the two other examples listed here, the RCA CDP1861 and the Atari Television Interface Adapter. Furthermore, the claim made for it ("capable of displaying up to 60 sprites on screen, and move up to 24 of them at once") was a gross distortion and not at all backed up by the sources cited. In fact, the MB14241 wasn't capable of displaying or moving any sprites on screen, because it wasn't a sprite engine. It was a specialized barrel shifter, included so that the CPU could rapidly shift the bits of moving objects to the proper horizontal pixel offset before writing them into display memory.
This circuit actually began as a multi-chip discrete circuit in Midway's arcade video game Gun Fight of 1975, and it was used repeatedly by other Midway arcade video games of the later 1970s which shared that game's hardware design. Taito adopted it for Space Invaders in 1978, and like Midway, they reused it in other games which shared similar hardware. The Fujitsu MB14241 was a single-chip version which first appeared in Taito's Space Invaders Part II in late 1979. An explanation of how the shifter circuit worked and what benefit it provided is arguably too esoteric for Wikipedia's current articles on these subjects, but I will give one here because I think it helps show just how it does not really meet the normal definition of a "video" circuit. (I also think it happens to be interesting in its own right and would be worthy of inclusion in any detailed explanation of the hardware of these machines.)
All of the arcade machines which used this circuit had black-and-white bitmapped, framebuffer-based graphics: objects were displayed by writing their bits into the framebuffer memory, which the video circuitry would then scan out to the CRT. The CPU was the only device which could modify this framebuffer, and there were no other graphics devices, such as sprite or tile engines. All animation was done by the CPU writing and rewriting the framebuffer.
Moving objects vertically with this system was easy enough, where by "vertically" I mean crossing the video scanlines, rather than moving along them. (Vertically oriented arcade games such as Space Invaders have vertical scanlines, since the CRT monitor is mounted on its side; in that case the motion I mean will appear horizontal, such as the marching of the "invaders" across each screen row.) Each scanline has a different set of framebuffer memory addresses for the pixels within it, and thus moving an object from one vertical position to another simply means determining the old and new set of framebuffer addresses to write to, erasing the data at the old set of addresses, and writing it to the new set. Even for a difference as small as one scanline, the memory locations will be many bytes apart, so finding the old and new locations is just a matter of doing the proper arithmetic on framebuffer addresses. Even simple microprocessors, like the Intel 8080 used by these machines, could do this easily.
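As a rough illustration of the address arithmetic involved (not code from any actual game; the framebuffer dimensions and helper names are invented for the example), moving a byte-aligned object across scanlines is just a matter of recomputing which bytes to erase and which to write:

```c
#include <stdint.h>
#include <string.h>

/* 1 bit per pixel; dimensions are arbitrary for the example. */
enum { FB_WIDTH_BYTES = 32, FB_HEIGHT = 224 };
static uint8_t framebuffer[FB_HEIGHT * FB_WIDTH_BYTES];

static uint8_t *scanline(int line) {              /* first byte of a scanline */
    return &framebuffer[line * FB_WIDTH_BYTES];
}

/* Move a byte-aligned object to a new vertical position: erase its bytes at
   the old scanlines, then copy the object data to the new ones.  Only address
   arithmetic is involved; no bits move within any byte. */
static void move_vertically(const uint8_t *obj, int width_bytes, int height,
                            int old_line, int new_line, int column_byte) {
    for (int row = 0; row < height; row++)
        memset(scanline(old_line + row) + column_byte, 0, (size_t)width_bytes);
    for (int row = 0; row < height; row++)
        memcpy(scanline(new_line + row) + column_byte,
               obj + row * width_bytes, (size_t)width_bytes);
}
```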
The problem is for horizontal motion: motion parallel to the scanlines, which includes the horizontal portion of diagonal motion. (Such motion will appear vertical in vertically oriented games like Space Invaders: think of the invaders' bombs falling and the player's shots streaking upward.) With black-and-white bitmapped graphics such as were used here, each pixel gets its own bit, and a single byte of framebuffer memory is a horizontal block of 8 pixels. If you want to move an object from one horizontal position to a different one which is an exact multiple of 8 pixels away, this is still just a matter of doing simple arithmetic on memory addresses to get the right positions. But that puts a big restriction on what motion you can do. Either your horizontal motion must be very fast (8 pixels per frame, which for rapid, smooth animation frame rates will be fast enough to cross the entire screen in around a second, or even less) or it will be very choppy, moving 8 pixels at a time with perceptible "jumps".
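The restriction can be seen directly from how a horizontal pixel coordinate decomposes when 8 pixels are packed per byte (a purely illustrative snippet, not tied to any particular machine):

```c
#include <stdio.h>

/* A horizontal pixel position splits into a byte offset and a bit offset.
   Only positions whose bit offset is zero can be reached by copying whole
   bytes; every other position requires shifting the object's bits. */
int main(void) {
    for (int x = 0; x <= 12; x += 3)
        printf("pixel x=%2d -> byte %d, bit %d%s\n",
               x, x / 8, x % 8, (x % 8) ? "  (needs a bit shift)" : "");
    return 0;
}
```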
In order to move objects horizontally more smoothly, you need to be able to shift an object to any desired horizontal position, at the level of individual bits, which means being able to shift the bits within each byte to the right or left. An 8080 microprocessor can do this itself, but only to a limited degree. It has shift instructions (actually, "rotate" instructions, which can be used for shifting) that can take a single byte and shift its bits right or left by one bit position. That will get you a single pixel of horizontal motion for an object whose horizontal span of pixels will all fit within the same byte both before and after the shift. But if the object spans across two or more bytes, things get more complicated. To shift such an object by a single bit position, you might do the following: for each of its pixel rows, perform a shift instruction on each successive horizontal byte in that row. This will not only shift the bits within each byte, but will also shift each byte's end bit into the 8080 microprocessor's "carry flag", and the next byte to be shifted will get the carry flag bit shifted into it automatically.
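In C rather than 8080 assembly, the chained one-bit shift described here looks roughly like the sketch below (assuming the most significant bit of each byte is the leftmost pixel; the function name is invented):

```c
#include <stdint.h>
#include <stddef.h>

/* Shift a row of bytes one pixel to the right on screen, mimicking a chain of
   rotate-through-carry instructions: the bit that falls off the right end of
   each byte becomes the leftmost bit of the next byte. */
static void shift_row_right_one_bit(uint8_t *row, size_t len) {
    uint8_t carry = 0;
    for (size_t i = 0; i < len; i++) {
        uint8_t new_carry = row[i] & 0x01;   /* bit shifted out of this byte */
        row[i] = (uint8_t)((row[i] >> 1) | (carry << 7));
        carry = new_carry;                   /* carried into the next byte */
    }
}
```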
But we're still only talking about shifting by one bit position. What if your object is moving faster horizontally than a single pixel per frame, and you thus want to shift it by several bit positions? Now for each row in the object, you must execute the entire sequence of shift instructions needed to shift the row by one bit position, and then repeat that sequence, over and over, as many times in total as the number of pixels to shift by: twice for a two-pixel shift, etc. This is going to get complicated, and slower and slower for greater shifts. Shifting a long row of bits by multiple bit positions, as you would need to do when showing a wide graphics object which is moving fairly fast horizontally, will require a lot of work by the CPU if it doesn't have help. That reduces the CPU time available for performing the rest of the game.
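Continuing the hypothetical sketch above, the multi-pixel case simply repeats that whole pass once per pixel of shift, which is exactly why the cost grows with both the shift distance and the row length:

```c
/* Roughly n * len byte operations for an n-pixel shift, all done by the CPU.
   (Uses the hypothetical shift_row_right_one_bit from the previous sketch.) */
static void shift_row_right_n_bits(uint8_t *row, size_t len, unsigned n) {
    while (n--)
        shift_row_right_one_bit(row, len);
}
```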
To solve this problem for Gun Fight and later games, Midway added the barrel shifter circuit. Its only job is to make it much easier for the CPU to shift rows of pixels to the proper horizontal bit position. It has two inputs, a shift amount input and a data input, both of which are latching, as well as an output which gives the shifted data. The latching inputs mean that once the CPU has written a value to one of these inputs, it will retain that value until the CPU writes a new value. As it turns out, this feature makes the barrel shifter more effective in streamlining the bit-shifting task. (The output doesn't need to be latching, as it will stay fixed for as long as the inputs are unchanged.)
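A behavioral model of this interface, written from the description above rather than from a datasheet (the names, the 16-bit internal width, and the MSB-equals-leftmost-pixel convention are all assumptions), might look like this:

```c
#include <stdint.h>

/* Latching shift-amount input, latching data input, combinational output. */
typedef struct {
    unsigned shift_amount;   /* 0-7, stays fixed until rewritten */
    uint16_t data;           /* newest byte in the low half, the previous
                                byte's leftover bits in the high half */
} Shifter;

/* Latched once per object; every subsequent read uses the same shift. */
static void shifter_set_amount(Shifter *s, unsigned amount) {
    s->shift_amount = amount & 7;
}

/* Loading a new byte slides the old contents over, so bits cut off from the
   previous byte's shifted output reappear in the next output. */
static void shifter_load_data(Shifter *s, uint8_t byte) {
    s->data = (uint16_t)((s->data << 8) | byte);
}

/* Output: the latest byte shifted right on screen by the latched amount, with
   the previous byte's leftover bits filling in from the left. */
static uint8_t shifter_read(const Shifter *s) {
    return (uint8_t)(s->data >> s->shift_amount);
}
```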
(In the following, I will temporarily gloss over the important detail of how the boundaries between bytes are handled, in order to be more concise; afterward, I'll explain how that works.)
When drawing an object on the screen, the CPU first loads the desired shift amount into the shifter's corresponding input. Because this input is latching, the CPU only has to load it once, and it will hold that value for the entire process, thus fixing the object's bit shift and horizontal pixel offset. It only needs to be reset when a new object is to be drawn, presumably with a different shift amount. The CPU then loads the data input with the leftmost byte of the object's first row and reads the shifter's output, which is the input byte shifted rightward on the screen by the desired number of bits. The CPU, having read this shifted byte, can immediately copy it to the proper framebuffer address. The CPU then steps to the next byte to the right in the object's row, loads this byte into the shifter data input, reads the shifter output again, which will have the desired right shift, and copies that byte to the next framebuffer location. This process gets repeated until the CPU reaches the end of the object's row; once that happens, the CPU just has to update its source and framebuffer pointers to the start of the object's next row, and it can go on to process that row in the same way, reading out the shifted results and copying them to the framebuffer. This continues until the entire object has been processed and drawn to the framebuffer with the proper horizontal shift. During this process, the CPU loads and reads out the shifter with simple OUT and IN I/O instructions.
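Sketched in C using the hypothetical Shifter model from the earlier example (plain function calls standing in for the OUT and IN port accesses, and a simple overwrite of the framebuffer standing in for whatever combining a real game did), the drawing loop is roughly:

```c
/* Draw an object whose pixels start part-way into a byte.  The destination
   row is one byte wider than the source row, since the shifted object can
   straddle an extra byte boundary. */
static void draw_shifted_object(Shifter *s,
                                uint8_t *fb, int fb_stride,
                                const uint8_t *obj, int obj_width_bytes,
                                int height,
                                int dest_line, int dest_byte,
                                unsigned bit_shift) {
    shifter_set_amount(s, bit_shift);                /* latched once */
    for (int row = 0; row < height; row++) {
        uint8_t *dst = fb + (dest_line + row) * fb_stride + dest_byte;
        for (int col = 0; col < obj_width_bytes; col++) {
            shifter_load_data(s, obj[row * obj_width_bytes + col]);
            dst[col] = shifter_read(s);              /* comes back pre-shifted */
        }
        shifter_load_data(s, 0);                     /* flush the cut-off bits */
        dst[obj_width_bytes] = shifter_read(s);      /* the extra byte */
    }
}
```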
Now to explain how the boundaries between bytes are handled. When I described how the CPU might perform the bit-shifting task using its own shift instructions, I mentioned that its "carry flag" would transfer the end bit of each source byte into the opposite end of the shifted version of the next byte. In the case of the bit shifter circuit, as each source byte is shifted right, multiple bits may be shifted over the byte's right edge, and those bits will be cut off in the shifted output; they will be output when the next source byte is shifted. More precisely, loading the next source byte into the shifter's data input causes the bits of the former latched input to be "slid" leftward to make room for the new byte. The data input is not 8 bits wide, but 15, which is just wide enough to guarantee that even for the largest possible rightward shift of 7 bits, all the bits cut off from the previously shifted byte will show up in the next shifted byte. When the shifted output from this next source byte is read, it will include at its left end all of the bits which were cut off at the right edge of the preceding shift. Thus the latching of the shifter's data input doesn't merely ensure that the shifted result can be read, but also holds on to all bits that are shifted across byte edges. For the final byte at the end of the object row, one more byte of zeroes is actually loaded and shifted so that all the shifted bits on the right end are output; this also ensures that no "leftover" bits will show up when we start shifting the next row.
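A tiny walk-through with the hypothetical model above shows the boundary behavior: shifting 0xFF and then a flush byte of zero, with a shift amount of 3, yields 0x1F and then 0xE0, so the three bits cut off from the first byte reappear at the left edge of the second. (This fragment depends on the Shifter definitions from the earlier sketch.)

```c
#include <stdio.h>

int main(void) {
    Shifter s = { 0, 0 };
    shifter_set_amount(&s, 3);

    shifter_load_data(&s, 0xFF);
    printf("%02X\n", shifter_read(&s));   /* 1F: 0xFF shifted right by 3 */

    shifter_load_data(&s, 0x00);          /* flush byte at the end of the row */
    printf("%02X\n", shifter_read(&s));   /* E0: the 3 cut-off bits reappear */
    return 0;
}
```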
(You may wonder, "what about leftward shifts?". Those are simply treated as rightward shifts with a corresponding adjustment of the destination framebuffer location.)
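In terms of the hypothetical sketches above, that amounts to decomposing the destination position the usual way, so any horizontal pixel coordinate ends up as a destination byte plus a purely rightward bit shift:

```c
/* Hypothetical helper tying the sketches together: any horizontal pixel
   position x decomposes into a destination byte and a rightward bit shift,
   so "moving left" only changes where the shifted result is written.
   (Uses draw_shifted_object and Shifter from the previous sketches.) */
static void draw_at_pixel_x(Shifter *s, uint8_t *fb, int fb_stride,
                            const uint8_t *obj, int obj_width_bytes,
                            int height, int dest_line, int x) {
    draw_shifted_object(s, fb, fb_stride, obj, obj_width_bytes, height,
                        dest_line, x / 8, (unsigned)(x % 8));
}
```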
The CPU is in control of the shifter during this entire procedure, determining where the shifter's input data comes from and where its output will go. The shifter circuit merely acts as an isolated device on the CPU's I/O bus. That's why the shifter circuit can't reasonably be considered a video controller in any way, although it is certainly useful in helping the CPU animate the game graphics. It's more accurately thought of as a sort of simple external CPU enhancement.
--Colin Douglas Howell (talk) 00:28, 16 March 2020 (UTC)
- Thank you so much for all of this. It does not surprise me the info was wrong, because I believe it was added by User:Jagged 85 (either under his own name or through one of his repeated acts of anonymous sock puppetry), who was permanently banned for his notorious penchant for adding sourced material to articles that was just plain wrong and often characterized by a misreading of sources. Much of his mess has been cleaned up over the years, but I doubt anyone with the necessary technical expertise had looked into this set of claims before. Bravo! Indrian (talk) 05:52, 17 March 2020 (UTC)