Talk:Gamma correction

Newbie

I have photo-editing software which includes a Gamma adjustment. It looks like I can make approximately the same adjustments using the Brightness and Contrast controls.

Approximately, but I assume not exactly? So, what's the Gamma adjustment for?

That's where I'm coming from. And probably more people than just me.

This article and everyone participating in it are speaking a different language. Maybe the answer to my question is buried in the article somewhere. So when it comes to Gamma, I don't think Wikipedia is helping the vast majority of users, who need some usability help with their software.

I recommend that, before getting into "Hurter–Driffield curves," "slopes of input–output curves in log–log space," and "conversion matrices," you begin with a plain-English explanation of what those Gamma adjustments are, some possible methods for using them correctly, and how the Gamma adjustment is more than another Brightness and Contrast control.

Then, invite people to follow the more complex image-engineering concepts if they so desire. Still, assuming that people aren't stupid and you truly care about sharing your explanations with them, you might be careful, in the more technical second half of the discussion, to provide guidance to supplement any technical jargon.

Nei1 (talk) 02:40, 25 May 2009 (UTC)

The whole article needs to be rewritten. References to Macs and PCs may seem useful to people who already own Macs and PCs, but if you don't already own one, or a copy of Photoshop (as the case may be), the article appears to wander off into territory that is completely and absolutely worthless. 216.99.219.73 (talk) 07:19, 15 September 2010 (UTC)
Why would you need to own a Mac or PC to understand that these systems have different standard or historic gamma values? If it's worthless to you, skip that part. Dicklyon (talk) 04:15, 17 September 2010 (UTC)
I can understand a little about the need for a digital-to-analog converter, in order to adjust a bit pattern in memory to an output signal sent to a cathode ray tube. But most of the article seems to worry about why PC computers or Macs don't use the same numbers. And then they jump to the conclusion that what they see, when a file is displayed, is an accurate representation of the data in a particular file. My gut feeling is that the editors posting these paragraphs are having problems with what they are seeing, when they really ought to ignore the visual data and try using a hex dump instead, so they can see the real values of the bytes they are working with. 216.99.219.208 (talk) 04:37, 17 September 2010 (UTC)
It's easy enough to use a dump to see the 8-bit values in an image. But you need a gamma to know what light intensities those numbers represent; a value of 100, for example, represents a brightness much darker than half of what 200 represents, more like 0.5^2.2 (≈ 22%) typically. Dicklyon (talk) 22:39, 17 September 2010 (UTC)
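To check that arithmetic with a minimal sketch (assuming an idealized pure power-law display with gamma = 2.2, not the exact piecewise sRGB curve):

    # Decode two 8-bit pixel values to relative light intensity,
    # assuming a pure power-law transfer function with gamma = 2.2.
    gamma = 2.2
    for code in (100, 200):
        print(code, round((code / 255) ** gamma, 3))
    # 100 -> ~0.128 and 200 -> ~0.586; the ratio is ~0.22, not 0.5

The exact numbers depend on the transfer function assumed; real sRGB decoding gives a very similar result.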

I can't believe this article has not been rewritten yet. It's far too technical. Encyclopedias are supposed to be readily understood by the layperson. Its current form is a joke. These contributors may well be highly concise and highly correct, covering all the academic holes and whatnot, but they are also unfortunately out of touch with most of the people who come to this topic and subsequently leave in a hurry. Preroll (talk) 13:48, 13 November 2010 (UTC)

Feel free to be bold and take a stab at it. It's kind of a technical subject that frankly isn't very interesting outside a technical context. –jacobolus (t) 03:41, 14 November 2010 (UTC)
It's a technical context that everyone who tries to adjust their computer monitor may need help understanding. This article is not appropriate for that audience, or it needs a Section 1 inserted at the beginning. Isn't there someone who can use plain English to provide the two elements presently missing from this article: 1) some basic explanation of what gamma is, and 2) adjustment instructions, perhaps at three levels: a "crude adjustment," a "better adjustment," and a "perfectly accurate adjustment"? Nei1 (talk) 03:25, 10 April 2011 (UTC)
It's great if people can use Wikipedia articles to answer such questions, but that really isn't their primary purpose. There are several nice links at the bottom of this article to "how to adjust your gamma" tutorials. –jacobolus (t) 03:31, 10 April 2011 (UTC)

Inconsistency in use of the phrase "encoding gamma"?

The introductory section of this article states that “A gamma value γ < 1 is sometimes called an encoding gamma”, while the "Explanation" section discusses the figure which “shows the behavior of a typical display when image signals are sent linearly (γ = 1.0) and gamma-encoded (standard NTSC γ = 2.2).” Since 2.2 is larger than 1.0, the article's usage of "encoding gamma" seems inconsistent to me.

W.F.Galway (talk) 06:48, 4 June 2009 (UTC)

I've attempted a clarification. Dicklyon (talk) 14:50, 4 June 2009 (UTC)

Coincidence?

The article reads

For a computer CRT, γ is about 2.2. By coincidence, this results in the perceptually homogeneous scale as shown in the diagram on the top of this section.

Is it really a coincidence that the computer CRT γ is 2.2 and that this produces a perceptually homogeneous scale, or is it the result of a deliberate choice made when sRGB was defined, to match the overall behavior of the then-current run-of-the-mill CRT displays? --Gutza T T+ 00:44, 4 October 2009 (UTC)

Whether it's coincidental or not, if it's true, it was true before sRGB was dreamed up, and not related to that. Personally, I don't think there's any reliable basis for it, though it's within the range that could be consistent with Stevens' power law for brightness. Dicklyon (talk) 03:30, 4 October 2009 (UTC)
Having sat around one afternoon discussing this very topic with Charles Poynton, the tentative conclusion was that in one sense it was purely coincidence, but in another sense it probably was not: if the physics had been different and CRTs had had a linear light response, then the necessity of an efficient encoding over transmission channels with limited signal-to-noise ratio would have forced the researchers looking for display technology either to move on to some other technology that did have a gamma of about 2.2, or to go to the trouble of developing an electronic way of imposing such a power law on CRTs. GWG (talk) 11:37, 2 December 2009 (UTC)
For me, it is no coincidence at all. The same laws are valid for a biological light sensor, a.k.a. the eye, and a vacuum-based light sensor like the early TV cameras. The law was there and waited for someone to find the (average) gamma parameter. AndreAdrian (talk) 12:15, 24 November 2018 (UTC)

Generalized gamma

The statement "Gamma values less than 1 are typical of negative film, and values greater than 1 are typical of slide (reversal) film" is incorrect.

In the general case:
gamma > 1 — expanded luminance range;
0 < gamma < 1 — compressed luminance range;
-1 < gamma < 0 — compressed luminance range, negative image;
gamma < -1 — expanded luminance range, negative image.

Alex31415 (talk) 20:06, 7 February 2010 (UTC)
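A quick numerical sketch of the four cases in the list above (illustrative only; the 100:1 input range is arbitrary, and gamma here acts as the slope in log-log space):

    # Apply each gamma to a 100:1 input luminance range and report whether
    # the output range is expanded or compressed, and whether tones invert.
    lum_in = [0.01, 0.1, 1.0]                # ascending input luminances
    for gamma in (2.0, 0.5, -0.5, -2.0):
        out = [v ** gamma for v in lum_in]
        ratio = max(out) / min(out)          # output luminance range
        inverted = out[0] > out[-1]          # darkest input became brightest?
        print(f"gamma={gamma:+.1f}: range {ratio:.0f}:1, negative image: {inverted}")
    # +2.0: 10000:1 (expanded); +0.5: 10:1 (compressed);
    # -0.5: 10:1, inverted; -2.0: 10000:1, inverted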

OK, I just tried changing this, but my change is wrong. First, it is in terms of optical density (absorption), so negatives have a positive slope (gamma), and positives a negative slope. Reversal (positive) films do tend to have an abs(gamma) greater than one, as people like images this way (as long as you fit the dynamic range of the subject into the range of the film). Color negative films pretty much always have a gamma of about 0.5. This gives them a large exposure latitude (compressing the dynamic range), while color printing materials have an appropriately large gamma, to reverse this effect. This mostly came from early use of Kodacolor in simpler cameras, where the dynamic range is needed. Also, commercial printers use an analyzer to appropriately set exposure and color balance. Black-and-white negative films normally don't have the same low gamma, so not quite as large an exposure latitude, but still enough; their gammas run between about 0.6 and 2.0.[1] For a more usual pictorial film, TMax P3200[2] with usual development has a gamma (contrast index) between about 0.6 and 0.8. One can adjust contrast at printing with graded or variable-contrast paper. For a reversal (slide) film,[3] they don't quote gamma or contrast index, but from the slope on the graph, its (absolute) value is greater than one. Gah4 (talk) 10:34, 11 January 2019 (UTC)


References

  1. ^ "Kodak Professional Technical Pan Film" (PDF). wwwru.kodak.com. Kodak. Retrieved 11 January 2019.
  2. ^ "KODAK PROFESSIONAL T-MAX P3200 Black & White Negative Film" (PDF). imaging.kodakalaris.com. KodakAlaris. Retrieved 11 January 2019.
  3. ^ "KODAK PROFESSIONAL EKTACHROME Film E100" (PDF). imaging.kodakalaris.com. KodakAlaris. Retrieved 11 January 2019.

Acer screen

With my netbook (Acer Aspire One 751h) and its glary screen, I get the best results for this picture, http://en.wikipedia.org/wiki/File:Srgbnonlinearity.png, when I turn my screen's gamma correction to the minimum. 69.203.29.76 (talk) 23:55, 21 April 2010 (UTC)

Written for a specialized audience, redux

Help! This article talks about encoding, and later on, reversing the encoding, so as to express the data in some way or other that is "visible." In that respect, is gamma some kind of characteristic of the data being compressed? Does plain vanilla ASCII have gamma? I just read the article, and am confused. I was wondering how on earth the main page of this article could be improved for someone who is very visually handicapped. Would a blind person still be able to make heads or tails of this article? There has got to be some way of making this article palatable to people who can actually count bits and know what a byte is, but who can't exactly see the pixels involved. Is the encoding routine really just a simple matter of counting the pixels and figuring out which kinds of pixels are most common? And how is it doing so? I mean, exactly. Are there 32-bit counters assigned to each of various specific colors, or shades of colors? The main page of this article appears to assume we are all using either a PC compatible or a Mac. What if we aren't? The main page of this article could be improved vastly by making it platform-independent. The whole article needs to be rewritten. 216.99.219.73 (talk) 07:11, 15 September 2010 (UTC)

Plain vanilla ASCII is not an image format. The lead section is pretty explicit that we're talking about images and video here, and the first paragraph of the article body explicitly talks about the components of an RGB image. Someone who is blind, doesn't know what a video signal is, doesn't have enough mathematical experience to understand what it means to be a power law and is unwilling to click the link and find out, and can't read the formula Vout = Vin^γ is going to have a pretty tough time understanding the concept, however we try to explain it. Frankly, the basics of the concept are just that formula alone, and just writing the math is pretty much the easiest and most explicit explanation we can give. If you have some other ideas, though, suggest away. –jacobolus (t) 07:57, 15 September 2010 (UTC)
Bytes are bytes. You can find bytes inside graphic files, just like you can find bytes inside text files. They are wholly interchangeable. I can't see any reason to treat them otherwise. Your statement "plain vanilla ASCII is not an image format" doesn't make sense to me. But I think I am reasonably open-minded. Does it depend on the file system you are using? Can you show me, somehow, that vanilla ASCII is not an "image" format? (Are you talking about 'IMG' files for an Atari ST running GEM? How about a file that consists entirely of 1 byte? What about a file that consists of exactly 2 bytes? How big must a file be in order to be subject to gamma correction? Does a file have to be more than 3 or 4 bytes? Even if the file is 3 or 4 megs, why would it make a difference what kind of filetype it is, for it to be gamma correctable?) 216.99.219.102 (talk) 01:50, 17 September 2010 (UTC)
But bytes are not ASCII. ASCII is a 7-bit character code, unrelated to anything we're talking about here. Images are usually stored with 8-bit pixel values, in a format that says how they're organized and encoded. Gamma has not much to do with file formats, except that most image file formats specify or assume that 8-bit pixel values are gamma-compressed, since they wouldn't have much dynamic range of intensity if they were not. Whether JPEG, PNG, TIFF, GIF, or something else, if you take 8-bit values from an image file and send them to a display system, they will almost always be interpreted as gamma-compressed intensities, with a gamma between 1.5 and 2.4, depending on the system, file type, device, or color profile. Dicklyon (talk) 04:00, 17 September 2010 (UTC)
Also, given how little you know of the topic, you probably shouldn't be improving the article from your own instinctive opinions; I reverted your edit. Dicklyon (talk) 04:13, 17 September 2010 (UTC)
Oh, a personal dig, huh? How clever. Well, I am not going to get into an edit war with you. I don't have time for that. And plain vanilla ASCII is 8 bits wide with my computer. I can't make use of this article because it appears to require me to go out and get a platform X instead of my platform Y. 216.99.219.208 (talk) 04:28, 17 September 2010 (UTC)
No one is trying to insult you. But this is not about data in general, but instead about the representation of colors numerically. ASCII really has nothing to do with anything, since ASCII is about text, not numbers. You don't need any particular platform to understand some straightforward mathematics. Maybe start by reading the article on Exponentiation? –jacobolus (t) 12:16, 17 September 2010 (UTC)

History of Gamma correction

The main page of this article would be improved if a couple of paragraphs were added on the history of gamma correction, and when and how it was arrived at. Yes, that means that some of you will have to do your homework, and describe what it was in the olden days (before 1990, and earlier), and how it changed as the years went by. 216.99.201.198 (talk) 20:02, 15 September 2010 (UTC)

There's a section entitled "Photography" which covers that somewhat. –jacobolus (t) 20:30, 15 September 2010 (UTC)
Do you mean the history behind the inks used for printing color photographs? Cyan, magenta, and stuff like that? How are those related to gamma correction? 216.99.219.208 (talk) 04:31, 17 September 2010 (UTC)
Did you read the Photography section of the article? No, there's nothing related to color inks in there. γ was the symbol used for the slope of the straight part of the curve on a log-log plot comparing light exposure to resulting film density. –jacobolus (t) 12:22, 17 September 2010 (UTC)

Misnomer; it's not correcting anything

The title is a misnomer that sets the wrong context for gamma encoding. At its core, gamma encoding is a form of lossy image compression, similar to mu-law encoding. It's not there to correct anything, any more than encoding as electrical signals is a correction for the fact that the monitor receives the image as electrical signals. If you sent a signal without gamma encoding to a monitor, it would look bad, just as if you sent it to a monitor with the wrong voltage range or polarity. Using the term "correction" implies that if monitors could have accepted a linear signal back when the standards were being written, we wouldn't be using gamma encoding (at least where signal bandwidth was an issue). 216.188.252.240 (talk) 17:17, 31 October 2010 (UTC)

While I have some sympathy for the idea that "gamma correction" is a confusing misnomer, it's a commonly used term. Historically, gamma compression has mostly been conceptualized as correcting for a CRT display gamma, even though you don't like that idea. And I disagree that gamma encoding should be thought of as a form of lossy encoding. It's not necessarily lossy; it just depends on whether you keep enough bits, or what you consider to be "uncompressed" as a reference. I don't take your implication. Dicklyon (talk) 22:49, 31 October 2010 (UTC)
"Gamma correction" definitely needs to be mentioned, but 216.188.252.240 might be right that some other term would make a better title. Personally I don't much care either way. –jacobolus (t) 08:35, 1 November 2010 (UTC)
The German wiki entry reads "monitor calibration" and refers to the purpose of the gamma correction value. But as in most cases in engineering, the term is not self-explanatory. And "gamma correction" implies, for me, that there is a gamma which is not correct and needs correction. Exactly what we do... AndreAdrian (talk) 12:21, 24 November 2018 (UTC)
Correction or not, that is what it is usually called in the case of TV signals. It seems that CRTs have a gamma of about 2.8 for intensity vs. voltage. Instead of putting correction circuits into each TV set, they put such circuits in before transmission (back in the beginning black-and-white TV days). But do they correct for 2.8? No, they use 2.2. The result is a gamma of about 1.3 in the visual image, which partially makes up for contrast loss due to ambient light.[1] Note also, as noted, that slide films have a gamma (absolute value) a little greater than one, and I suspect that photographic prints are commonly printed with an overall (negative+print) gamma greater than one. That is the way it looks right to us. In the case of an analog TV signal, coding such as mu-law doesn't apply. Analog TV signals have white at a low voltage and black at high, which allows sync signals to be blacker than black, and so not visible. It is not so obvious how analog noise affects the result. I have no idea whether mu-law coding would have been done on digital signals without the analog history. Gah4 (talk) 10:20, 12 December 2019 (UTC)
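Spelling out the arithmetic in the comment above (using the 2.8 and 2.2 figures quoted there):

    encode at the camera:  V' = V^(1/2.2)
    display on the CRT:    L  = (V')^2.8 = V^(2.8/2.2) ≈ V^1.27

so the end-to-end gamma of the chain is about 1.3, not 1.0.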

References

  1. ^ Hunt (2004). The Reproduction of Colour (6 ed.). Wiley. ISBN 978-0470024256.

Axis labels

There are a lot of graphs on this page, but none of them actually name their axes. I suggest "input signal" for the x-axis, and "light intensity" or "photon count" (if that is even correct) for the y-axis. Bajsejohannes (talk) 10:34, 8 February 2011 (UTC)

Photography Section

Moved the "Gamma In Photography" section to the Photography section, and removed some duplicated explanations. The last two paragraphs in this section are too wordy and technical, are not referenced, and in my view don't add anything useful to understanding gamma in photography. I intend removing them when I come up with a less verbose, and referenced, alternative. Guyburns (talk) 12:43, 13 March 2011 (UTC)Reply

It's the very definition of the term. It's silly to say that the quite straightforward mathematical definition of a mathematical concept is "too technical". To be honest, the part about gamma in photography should be moved up in the article and expanded. As for citations, this is pretty much common knowledge, though the article as a whole could probably be better sourced. Feel free to track down some sources. –jacobolus (t) 16:23, 13 March 2011 (UTC)
As has been mentioned elsewhere on this talk page, the "gamma" that describes the gradient of the linear section of the characteristic curve for photographic film has absolutely nothing to do with the "gamma" that the rest of this page is talking about. One is a gradient and one is an exponent. Other than the word "gamma", they are not related at all. The first two paragraphs in this section talk about the first gamma, before then switching back to the other gamma for the final paragraph. Why is this section even on this page? It doesn't belong here. — Preceding unsigned comment added by 109.176.90.220 (talk) 16:38, 20 July 2015 (UTC)
The characteristic (D-H) curve in photography is a log-log curve: explicitly in the case of exposure, implicitly in the case of optical density. The slope of a log-log curve is the exponent of the underlying power law; indeed, the reason for drawing log-log curves is to extract the exponent value. Note also that there is a negative sign built in, as optical density relates to the reduction in light coming through. Gah4 (talk) 20:42, 13 July 2021 (UTC)
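For what it's worth, that equivalence is one line of algebra. For a power law

    y = c x^γ   ⟹   log y = log c + γ log x

the plot on log-log axes is a straight line whose slope, d(log y)/d(log x), is exactly the exponent γ.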

Generalised Gamma

I removed the "d" symbol in the Generalised Gamma section because differentials were not being used in the previous derivation. i.e. the log was taken of both sides, and when rearranged, Gamma pops out. Nothing to do with differentials, because in log-log space the slope of the power equation as given is a constant and is equal to gamma. If someone wants to introduce differentials, by all means do so, but add another example. —Preceding unsigned comment added by Guyburns (talkcontribs) 04:38, 18 May 2011 (UTC)Reply

It seems to me that the derivative is used to get the slope at a point. If the whole thing is a straight line through the origin (in log-log space), then the slope is the same everywhere and equals the ratio. In the case of photographic film, that never happens: there is always a curved region at each end, and often no actual straight region. In the case of electronic signals, it might be closer to straight and through the origin. Gah4 (talk) 02:34, 14 July 2021 (UTC)

Removed quote by Ansel Adams

The quote is rather silly and doesn't contribute anything to the article. For example, imagine if in a biology article about polymerase chain reactions, a biologist was quoted as saying "7 minutes at 68°F in Buffer X with machine settings Y" represents "normal" to me. I have no idea what the actual effective gamma is, nor do I care. I could consider this degree of development as yielding Gamma = 1.0 or being Development No. 9 or Operation H, or any other symbol I choose. But why should I inject an unnecessary and confusing symbol for a perfectly simple statement of procedure? "Buffer X/ Settings Y/68°F/7minutes" is definite and easily expressed and understood as the means of obtaining my "normal" results. — Preceding unsigned comment added by 132.183.104.52 (talk) 18:33, 15 May 2012 (UTC)


This article is seriously corrupted

This article was seriously corrupted on 11 June 2011, and needs drastic correction/restoration. This page should be fully reverted to the prior version (before 11 June 2011). It was tremendously better (and accurate) before.

This Wikipedia page is one of the worst anywhere now. That is not rude, it is frank. This is because of the paragraphs which begin:

"Gamma encoding of images is required to compensate for properties of human vision"
and
"A common misconception is that gamma encoding was developed to compensate for the input–output characteristic of cathode ray tube (CRT) displays"

That is simply wrong. Plus, this article even throws in film gamma, which is about slope and contrast, and which has absolutely nothing to do with "gamma correction". Gamma is a Greek letter, often used in science like the X in algebra. All things called X are not the same thing. Film gamma needs its own page; at the least, it does not belong on this page.

Google finds 1.26 million links for the phrase "gamma correction". Only two of the top several dozen sites (Wikipedia and CambridgeInColor) have it absolutely wrong (just some made-up notion). A few sites may not say why gamma is used, but all the others have it right (gamma correction is about nonlinear CRT response). Read the links at the bottom of THIS article.

Even the ONE definitive source, Charles Poynton, says (FIRST THREE POINTS):
"A CRT's phosphor has a nonlinear response to beam current."
"The nonlinearity of a CRT monitor is a defect that needs to be corrected."
"The main purpose of gamma correction is to compensate the nonlinearity of the CRT."

That seems pretty clear. Even the very nice gamma graph still here is correctly labeled "CRT Gamma" (bottom curve) and "Gamma Correction" (top curve).

Gamma correction has absolutely nothing to do with the human eye (other than that the eye wants to see a correct response from the CRT). Yes, the eye also has an exponential response, but it is in the OPPOSITE direction of CRT gamma. If the eye perceives 18% as middle at 50%, why in the world would we want to first modify the data IN THE SAME WAY (boosting 18% to 46% first) to help the eye? That is of course utter nonsense. Leave the eye alone; it wants to see the original analog data. It needs no help, it is what it is, and we have learned to deal with it. :)

And also, since the monitor cannot avoid decoding gamma (which is the unfortunate nonlinear CRT problem to begin with), the eye only sees decoded data anyway (back to the original state, hopefully). The eye cannot know about any previous data state, and it does not care about history. It just wants to see decoded linear data, hopefully the same analog as originally entered the camera lens. Gamma correction is absolutely NOT about the human eye. Gamma correction is totally about the CRT nonlinear response, which must be corrected to show tonal data. This has been done since the earliest television, and then computers used CRT monitors too.

It may be true that we don't use CRT monitors any more (some youngsters may never have seen one), but we used them for decades, and all of the existing RGB image data in the world is already gamma-encoded for CRT. It's much easier to continue than to obsolete absolutely everything. Our color space specifications (like sRGB) continue to require gamma correction, for compatibility. LCD monitors simply decode to throw it away now, simply because it is there. The eye can only use original linear analog. Regarding noise, digital data is ones and zeros, often protected by CRC, and does NOT have the same signal-to-noise issues as analog transmission. Any analog sensor noise is digitized and encoded along with the data.

Think a little, use your head, and don't just make up total garbage. This Wikipedia page has become the pits. Hopefully some hero knows how to back it out and revert to the good stuff, prior to 11 June 2011. — Preceding unsigned comment added by 96.226.241.67 (talk) 16:24, 29 August 2012 (UTC)


Actually, the article became much more correct on that date. The quotes you pulled are entirely correct, as can be verified in authoritative literature on the subject.
Gamma encoding really has nothing to do with CRTs, it's about human vision and encoding efficiency.
The article has been edited several times since then, and updated by several experts on the topic - and they also agree with the corrected information.
Please take the time to read Charles Poynton's articles or books and learn more about how and why gamma encoding/correction is used. — Preceding unsigned comment added by 98.248.30.38 (talk) 01:12, 22 October 2012 (UTC)
Also, you quoted the FALLACIES from Poynton's FAQ, not the correct information in the FACT column. http://www.poynton.com/notes/color/GammaFQA.html — Preceding unsigned comment added by 98.248.30.38 (talk) 01:15, 22 October 2012 (UTC)


Sorry, that is as laughable as this corrupted Wikipedia gamma article is now. Gamma encoding has always been only about the CRT, and gamma encoding has absolutely nothing to do with the human eye. Remember CRT? Gamma encoding became necessary and was invented when television first started (the first use of the CRT for tonal image data), maybe around 1940, because of the CRT (first NTSC spec). Digital images did not exist yet. Gamma is about the nonlinear CRT response.
The human eye expects to see linear data, the same as when looking at the original scene, which of course is linear. However, the CRT output is nonlinear and poor, and so gamma is applied to boost the dark-end losses, so that after the CRT losses there will be linear data again for the eye to see, and the CRT becomes very usable. Gamma is only done so the CRT will be corrected to output a linear response, matching what the original scene looked like, so the eye sees a reproduction of the original linear scene. This is absolutely the FIRST thing to know about gamma encoding of digital image data.
The simple and obvious fact is, the human eye sees linear analog light, the same as the original scene reflects. The eye NEVER EVER sees gamma data, which is only done to correct the response of the CRT (so it outputs corrected linear data again, so the eye sees a linear reproduction of the original scene).
We may not often use CRTs today, but gamma 2.2 is in all our digital specs (for the CRT), and obsoleting gamma would obsolete all existing digital image data; it is very much easier to continue it. So LCD monitors simply decode and discard gamma before the eye sees it. CRT monitors use gamma encoding to correct their output to be linear. Then both LCD and CRT output linear data. The eye absolutely requires linear data, the same as from the original scene.
See a correct description of gamma from the w3.org specification body:
http://www.w3.org/TR/PNG-GammaAppendix
That is the correct answer. The least this badly corrupted Wikipedia article could do is to link to it.
Gamma encoding has absolutely nothing to do with human eye response. RGB gamma is only about correcting the CRT response.
173.74.124.17 (talk) 13:40, 23 April 2013 (UTC)
From the very page you linked:
Gamma-encoded samples are good
So, is it better to do gamma correction before or after the frame buffer?
In an ideal world, sample values would be stored in floating point, there would be lots of precision, and it wouldn't really matter much. But in reality, we're always trying to store images in as few bits as we can.
If we decide to use samples that are linearly proportional to intensity, and do the gamma correction in the frame buffer LUT, it turns out that we need to use at least 12 bits for each of red, green, and blue to have enough precision in intensity. With any less than that, we will sometimes see "contour bands" or "Mach bands" in the darker areas of the image, where two adjacent sample values are still far enough apart in intensity for the difference to be visible.
However, through an interesting coincidence, the human eye's subjective perception of brightness is related to the physical stimulation of light intensity in a manner that is very much like the power function used for gamma correction. If we apply gamma correction to measured (or calculated) light intensity before quantizing to an integer for storage in a frame buffer, we can get away with using many fewer bits to store the image. In fact, 8 bits per color is almost always sufficient to avoid contouring artifacts. This is because, since gamma correction is so closely related to human perception, we are assigning our 256 available sample codes to intensity values in a manner that approximates how visible those intensity changes are to the eye. Compared to a linear-sample image, we allocate fewer sample values to brighter parts of the tonal range and more sample values to the darker portions of the tonal range.
Thus, for the same apparent image quality, images using gamma-encoded sample values need only about two-thirds as many bits of storage as images using linear samples.
07:13, 7 January 2014 (UTC) — Preceding unsigned comment added by Blargg (talkcontribs)
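The code-allocation argument in that quote is easy to reproduce numerically. A small sketch (illustrative only, using a pure power law of 2.2 rather than the exact sRGB curve):

    # Count how many 8-bit codes land in the darkest 10% of the intensity
    # range under linear coding vs. gamma-1/2.2 coding.
    gamma = 2.2
    linear_codes = sum(1 for c in range(256) if c / 255 <= 0.10)
    gamma_codes = sum(1 for c in range(256) if (c / 255) ** gamma <= 0.10)
    print(linear_codes, gamma_codes)   # 26 vs. 90

Gamma coding devotes roughly 90 of the 256 codes to the darkest tenth of the intensity range, versus 26 for linear coding, which is exactly the reallocation toward dark tones described above.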

The PNG spec was a great example of people who did not know the topic making mistakes; that's why the definition of gamma values in PNG was revised in later specs. And I'm sorry that someone told you the mistakes instead of factual information, but you really should read Mr. Poynton's books and articles; he really is the expert in the field. Plus, it appears that we have had at least 2 other recognized experts contributing to this article after the mistakes were corrected. Yes, gamma encoding is all about human visual response (which includes more than just the eye), not about CRTs. You might also want to read up on human vision, which is far from linear, and closer to a power-law or logarithmic response. — Preceding unsigned comment added by 98.248.30.38 (talkcontribs) 23:39, 6 July 2013

Ignoring the rest of the post, it is at least true that the "gamma" value used to describe the slope of the characteristic curve for film has absolutely nothing to do with display gamma. The only thing they have in common is the word "gamma". This really should be removed, as it is very misleading to have it discussed on this page. — Preceding unsigned comment added by 178.78.108.190 (talk) 15:04, 17 January 2014 (UTC)


It is completely ludicrous to say "Gamma encoding really has nothing to do with CRTs, it's about human vision and encoding efficiency." That is sooooo wrong. Anyone claiming gamma is about the human eye needs to explain how the eye could or would EVER SEE ANY GAMMA DATA. The eye CANNOT see any gamma data, since the data is always decoded to linear first (by the CRT losses). The eye needs no help anyway; the brain handles its nonlinearity, the same as when the eye first saw the original scene. The original scene view was not gamma-encoded, and we expect the image to still look the same. :) The eye needs no help, and it expects to see linear data. This is completely obvious if we just think once.

But CRT response is very poor (lossy), so the purpose of gamma is that the data is encoded with the opposite correction of the losses (gamma), so that after the losses occur (which is a decoding), it again looks like the original linear data that the eye expects to see. That is the only purpose of gamma. This has been true since the first early NTSC TV, back about 1940. Any other notion is ludicrous. It is not about bits; we had no bits in 1940. And it is not about the eye (other than that the eye wants to see CRT images without losses). In contrast, LCD monitors are linear (not lossy) and would not need gamma, but we cannot eliminate it due to needed compatibility with all the world's images, so the LCD chip simply gratuitously decodes and discards gamma, SO THAT THE EYE CAN SEE THE ORIGINAL LINEAR VERSION (no gamma). The eye can never see any gamma data, and it would look bad if it could (but our histograms do show gamma data, because the data is still gamma-encoded).

The Wikipedia article is corrupted by those who apparently know nothing about the subject of gamma, and this human-eye nonsense needs to be removed. It's not even funny any more. — Preceding unsigned comment added by 71.170.134.198 (talk) 23:17, 22 April 2015 (UTC)

No, the article was corrected by people who actually do know what they are talking about. It sounds like you are quite confused on the subject, and might benefit from reading some of Charles Poynton's books on video encoding. It really is true that gamma encoding has little or nothing to do with CRTs, and is necessary because of human vision, not because the encoding is correcting for specific devices. Human vision really does have a power-law or semi-logarithmic response; you can learn this from almost any reference on human vision. LCD displays still need gamma encoding (and are far from gamma 1.0/linear). — Preceding unsigned comment added by 192.150.10.203 (talk) 00:55, 7 May 2015 (UTC)

I realize this is a long-stale conversation, but I'd like to point out that there are two sides to this, and nobody is entirely right or wrong here. Gamma correction was originally developed (maybe?) to compensate for CRT nonlinearity; see for example this 1953 hearing on color television. And as discussed above at #Coincidence?, the exponents that were good for CRT correction were also in a good range for quantization and noise, and were also not a bad match to human brightness perception (see Stevens's power law). It was fortuitous that things fit together as well as they did. I don't see anything in the article about "misconceptions" currently, and it says it "was developed originally to compensate for the input–output characteristic of cathode ray tube (CRT) displays", so that all seems OK. Dicklyon (talk) 04:11, 19 November 2017 (UTC)
In case anyone is following this, read my addition above. It seems that CRTs have a gamma (intensity vs. input voltage) of about 2.8, and TV signals have a correction (inverse) for 2.2, with a result of about 1.3. The result partially corrects for contrast loss due to ambient light. The response of human vision is a complicated question, but it seems that in both video signals and photography, an overall gamma slightly greater than one, maybe 1.2 or 1.3, is needed to look right. You can see in the graph (they don't give a number) for KodakAlaris Ektachrome E100[1] that it is close to 2, maybe 1.8. Exactly why we like images with higher gamma isn't so obvious (some part of the visual system, or not), but we do. Gah4 (talk) 10:20, 12 December 2019 (UTC)


References

  1. ^ "KODAK PROFESSIONAL EKTACHROME Film E100" (PDF). imaging.kodakalaris.com. KodakAlaris. Retrieved 12 December 2019.

Gamma compression/encoding and Poisson shot noise

I was surprised not to see much discussion of digitization and signal-to-noise ratio in this article. I've been wondering about the appropriateness of using gamma compression to pack 12- or 16-bit photographs into fewer bits without meaningful loss of data. By that I mean: if I have an ideal image sensor, it will be subject to Poisson shot noise corresponding to the number of photons that actually land on each pixel. If I record a 16-bit image and my signal corresponds to around 65k photoelectrons, then my SNR will be about 256, so the bottom eight bits are all noise. But if in a different part of the image I have a signal of 256 photoelectrons, then there the noise will be on the order of 16 counts, but I certainly could still tell the difference between 256 and 224; if I had dropped the bottom eight bits, I would have completely lost that data.

The problem seems to be that the SNR isn't uniform across the dynamic range even though the quantization error is. However, it seems like if we take the 16-bit signal and gamma-compress it with γ = 0.5, we can then represent it as an 8-bit signal and not really lose any information, since the quantization error would vary across the dynamic range along with the SNR. Does that sound right? —Ben FrantzDale (talk) 13:13, 18 June 2013 (UTC)
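A rough simulation of the numbers in this question (assumptions: ideal sensor, full scale of 65535 photoelectrons, pure Poisson shot noise, square-root encoding into 8 bits):

    import math

    full_scale = 65535
    gamma = 0.5                          # square-root encoding
    signal = 256                         # mean photoelectrons in a dark region

    shot_noise = math.sqrt(signal)       # Poisson std dev, ~16 e-
    code = round(255 * (signal / full_scale) ** gamma)   # 8-bit code, ~16
    lo = full_scale * (code / 255) ** (1 / gamma)        # decode adjacent codes
    hi = full_scale * ((code + 1) / 255) ** (1 / gamma)
    step = hi - lo                       # quantization step, ~33 e-
    print(f"shot noise ~{shot_noise:.0f} e-, quantization step ~{step:.0f} e-")
    # rms quantization noise is step/sqrt(12), ~10 e-, below the shot noise

At this level the quantization step is about twice the shot noise, but the rms quantization error (step/sqrt(12)) stays below it, which is consistent with both replies below.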

In your example, the quantization error with gamma 2.0 is on the same order of magnitude as the signal noise. That means you would be adding some noise to the image. To make the quantization error irrelevant, you might want to use 9 bits to make it half the noise, or instead start with a 32k-photoelectron sensor. Search for dithering and quantization errors and how much dithering is needed. It's not a hard value; it depends on the errors you can live with, but usually dither noise should be a bit larger than the quantization step. Rnbc (talk) 13:46, 21 April 2019 (UTC)
Actually, his example is right on, and pretty typical of what's done in camera image processing. And yes, you're right that the quantization noise is on the same order as the shot noise. With quantization being uniformly distributed between plus and minus a half step, the rms quantization noise is enough less than shot noise that you'll get an excellent 8-bit file this way, and starting with fewer electrons of signal will only make it noisier. And the shot noise is enough dither; no reason to add more. More typically, a slightly lower gamma like 0.45 is used (as in sRGB and other standard representations), so the data will have a bit less quantization noise in the darker areas; that is, optimized for a modest amount of dynamic-range compression in the rendering. An SNR near 100 in the highlights is plenty. Dicklyon (talk) 14:26, 21 April 2019 (UTC)

Pictures show incorrect gamma

The series of pictures on the right-hand side do not display the correct gamma factors. When you plug the factors displayed in the images into the formula presented at the start of the article, a larger gamma should lead to a larger Vout, or a brighter picture, although the opposite is shown. Maybe the inverse of the gamma factor is displayed under the photographs. — Preceding unsigned comment added by 145.18.169.99 (talkcontribs) 13:09, 18 July 2014

No, a larger gamma should lead to a smaller Vout (except in the white regions), since inputs and outputs (Vin and Vout) are in the range 0–1, so the pictures on the right-hand side are correct. —Kri (talk) 14:39, 24 July 2015 (UTC)
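This is easy to verify numerically, since Vin is in (0, 1):

    # Larger gamma pushes mid-tones down, not up.
    for gamma in (1.0, 2.2, 3.0):
        print(gamma, round(0.5 ** gamma, 3))   # 0.5, 0.218, 0.125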

Ideal display gamma

Since sRGB deviates from the exact power function, should this only apply to images that are to be gamma-encoded, or is the monitor also ideally following this function, with a 2.4 exponent and linearity near black? --2.108.128.193 (talk) 13:40, 3 July 2016 (UTC)

The sRGB gamma function is supposed to be a pretty good model of what the typical (old-style CRT) display does, so that the encoded values will be correctly reproduced as the intended intensities. Dicklyon (talk) 16:10, 3 July 2016 (UTC)

E.g. in the section "Power law for video display", it is stated that "[...] when a video signal of 0.5 (representing mid-gray) is fed to the display, the intensity or brightness is about 0.22 (resulting in a dark gray)". The last bit is incorrect: an intensity (relative luminance) of 0.22 will be perceived as a mid-gray. --Fhoech (talk) 15:45, 3 April 2017 (UTC)

Right. I fixed that. Thanks. Dicklyon (talk) 04:15, 19 November 2017 (UTC)
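A quick check of Fhoech's point, using the CIE L* lightness formula (a sketch; L* runs from 0 for black to 100 for white):

    # Relative luminance ~0.218 lands near the middle of the lightness scale.
    Y = 0.5 ** 2.2                       # ~0.218, the decoded 0.5 signal
    L_star = 116 * Y ** (1 / 3) - 16     # CIE L*, valid since Y > 0.008856
    print(round(Y, 3), round(L_star))    # 0.218 -> L* ~ 54, i.e. mid-gray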
From this one, it seems that the eye has about 24 stops, or 7 orders of magnitude (that is, log10), of dynamic range. Less than the ear, which might be 10 orders of magnitude. So there really has to be something logarithm-like between the sensor and the brain. Exactly what that means for gamma correction and the eye isn't so easy to say, but it does seem that we like visual images, either on paper or light sources, to have a gamma a little more than one, relative to the source image. Gah4 (talk) 02:46, 14 July 2021 (UTC)
That was well known to be true for display of transparencies in a dark room (net gamma near 1.3 for Ektachrome, for example), but not for prints, as I recall from my film photography. And the "something logarithm-like" is really a bunch of different adaptive mechanisms, whether in the ear or the eye, operating on multiple time scales. Sort of like auto-exposure. Dicklyon (talk) 04:10, 14 July 2021 (UTC)

In Sec. "Simple monitor tests", image "Srgbnonlinearity.png" is wrong for the purpose

The grey tones are wrong in all three parts of the image. For instance, the center block of the middle part should be 50% grey, but it is 74%, as I find by using the color picker in GIMP.

I submit a corrected image, "Display_test_01.png", keeping the original design but substituting the correct grey tones.

To make the new image easily distinguishable from the incorrect one, I have changed the color of the frame from fluorescent green to light blue. If you prefer the frame black, so that the distinction will be easier to remember, use the file "Display_test_02.png".

Perhaps "Srgbnonlinearity.png" was correct when drawn and then saved with gamma information included but, even if it is so, no such information is recognised by either Firefox (52.0.2) or GIMP (2.8.20), which render an image inappropriate for monitor testing.

— Preceding unsigned comment added by GeKahrim (talkcontribs) 08:16, 17 April 2017 (UTC)

That's incorrect: the center block of the middle part of the test image is not supposed to be 50% RGB, but the amount of non-linear-light RGB needed, according to the sRGB tone response curve, to match the surrounding bars in average relative luminance, so that the surrounding horizontal stripes plus the gray center look like a homogeneous area (if you slightly squint your eyes). Note that in the case of LCD monitors, the correct appearance can only be obtained when driving the panel at its native resolution, with the image viewed at 1:1 scaling (there must not be any interpolation). If your system is calibrated to the sRGB tone response curve (not necessarily the sRGB gamut, as we're dealing with an achromatic image here), which can be achieved either by making the actual response match the sRGB response (e.g. by calibrating the system's 1D videoLUT) or by employing color management (which requires an accurate monitor profile if we're dealing with ICC-based color management), the original image looks correct, as intended (if you have an accurate monitor profile, you can get the correct view in GIMP by enabling color management in its options and assigning the sRGB profile to the image). Your modified images, on the other hand, do not give the correct appearance, because they are not created with the correct intensities. Fhoech (talk) 12:21, 6 May 2017 (UTC)
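For reference, the ~74% value the color picker reports is just the sRGB encoding of linear 0.5, which is what the average of the 0%/100% stripes should match. A minimal sketch of that computation (transfer function per IEC 61966-2-1):

    def srgb_encode(y):
        # piecewise sRGB transfer function: linear segment near black,
        # offset power law elsewhere
        return 12.92 * y if y <= 0.0031308 else 1.055 * y ** (1 / 2.4) - 0.055

    v = srgb_encode(0.5)
    print(round(v, 4), round(v * 255))   # ~0.7354 -> code 188, about 74%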
External links modified

Hello fellow Wikipedians,

I have just modified 2 external links on Gamma correction. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 16:09, 10 October 2017 (UTC)

Gamma used in Gamma correction test image

The checkered area has a pattern of 0% and 100% brightness primary-color pixels. This integrates to a 50% brightness. The non-checkered area needs a brightness of integrated_brightness^(1/gamma). The test image uses gamma = 2.5 and a non-checkered brightness of 75.8%. Giving "high" gamma a value of 2.5 is (a bit) arbitrary; 2.2 or 2.8 are reasonable values, too. Charles Poynton advocates gamma = 2.5 in his Gamma FAQ, https://poynton.ca/GammaFAQ.html. Please tell me if you can live with my gamma = 2.5 value. AndreAdrian (talk) 21:37, 26 November 2018 (UTC)
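For reference, the computation for a few candidate gammas (the 2.5 value is the one used in the image):

    # Solid-patch value that matches a 50%-average checker, per gamma.
    for gamma in (2.2, 2.5, 2.8):
        print(gamma, round(0.5 ** (1 / gamma) * 100, 1))
    # 2.2 -> 73.0, 2.5 -> 75.8, 2.8 -> 78.1 (percent)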

Test Image Strobing on Scroll

The test image in the article can start strobing when users scroll in Firefox 87, and probably other browsers. That isn't safe. 138.88.18.245 (talk) 20:00, 19 April 2021 (UTC)

Mathematics

I am copying here some text recently removed from the article. Not that I disagree with its removal, but this is a place to discuss it, if it needs discussing. If it doesn't, that is fine with me. Also, someone should fix the DOI in the reference.

It is often casually stated that, for instance, the decoding gamma for sRGB data is 2.2, yet modern formats such as Rec. 709 use a piecewise, partly linear transfer function.[1] The actual exponent is 2.4.[citation needed] This is because the net effect of the piecewise decomposition is necessarily a changing instantaneous gamma at each point in the range: it goes from γ = 1 at zero to γ = 2.4 at maximum intensity, with a median value close to 2.2. Transformations can be designed to approximate a certain gamma with a linear portion near zero, to avoid having an infinite slope at K = 0, which can cause numerical problems. The continuity condition for the curve Γ(K) (linear below the threshold, an offset power law above it) gives the linear-domain threshold K0:

    K0 / φ = ((K0 + a) / (1 + a))^γ

Solving for K0 gives two solutions, which are usually rounded. However, for the slopes to match as well, we must have

    1 / φ = (γ / (1 + a)) · ((K0 + a) / (1 + a))^(γ − 1)

If the two unknowns are taken to be K0 and φ, then we can solve to give

    K0 = a / (γ − 1)
    φ = (1 + a)^γ · (γ − 1)^(γ − 1) / (a^(γ − 1) · γ^γ)
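Plugging the sRGB-like constants a = 0.055 and γ = 2.4 into the closed-form solutions above reproduces the familiar numbers (a sketch for checking the algebra):

    a, g = 0.055, 2.4
    K0 = a / (g - 1)
    phi = (1 + a) ** g * (g - 1) ** (g - 1) / (a ** (g - 1) * g ** g)
    print(round(K0, 5), round(phi, 4))
    # ~0.03929 and ~12.9232, close to the rounded 0.04045 and 12.92
    # used in the published sRGB numbers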

References

  1. ^ Bull, David R. (2014). "Chapter 4: Digital Picture Formats and Representations". Communicating Pictures. pp. 99–132. doi:10.1016/B978-0-12-405906-1.00004-0.

Gah4 (talk) 20:29, 13 July 2021 (UTC)

I think it would be sufficient to say that many implementations of gamma use a piecewise approximation, primarily to avoid infinite/zero slopes at zero, and that software must use these piecewise approximations to avoid perceptible differences when images that should be the same color are placed adjacent to each other. I would skip all the math about how to match a linear section to an exponent, and not say "gamma 2.4", as it is not that at all.
This info could be pushed down into the "deprecation of the term gamma" section, too. That appears to be why some people do not like the term. Spitzak (talk) 20:56, 13 July 2021 (UTC)
I started a discussion about it at Talk:SRGB#"Theory_of_the_transformation"_section; it was copied from that article, where it's also messed up. Dicklyon (talk) 01:31, 14 July 2021 (UTC)

Deprecation of the term "gamma"

The section Gamma_correction#Deprecation_of_the_term_"gamma" is another that should probably be removed. I can't find any evidence in the citations that the term has been deprecated (by whom?), and there's no sourcing for the claimed list of confusions. @Spitzak: it looks like you also copied this from the sRGB article, more or less; what do you like about it? Dicklyon (talk) 04:31, 14 July 2021 (UTC)

I don't like anything about it, but I moved it since it is probably better here, and it appears the person who added it was rather insistent it was correct. Spitzak (talk) 19:25, 14 July 2021 (UTC)
There might be some resistance to its use in photography, especially as it applies one number to a whole curve; there is often not a nice straight section to the graph. Some call it contrast index, or something like that. But in electronics, it should be closer to the right description. Gah4 (talk) 21:58, 14 July 2021 (UTC)
OK, lacking sources or particular support, let's remove it for now. Dicklyon (talk) 04:25, 15 July 2021 (UTC)
The issue with gamma's multiple meanings, and the justification for why sRGB doesn't use the term, is actually explained in the standard itself. — SH4ever (talk) 17:12, 5 November 2021 (UTC)

.55

FTA: A notable exception, until the release of Mac OS X 10.6 (Snow Leopard) in September 2009, were Macintosh computers, which encoded with a gamma of 0.55 and decoded with a gamma of 1.8.

This sentence makes it seem like the Mac just happened to use 0.55 accidentally, but I don't think that's historically accurate. I'm pretty sure the Mac's gamma was set to match the print process, given the DTP focus of early Macs. And when you look at modern colour appearance models, their effective power factor seems close to 0.55 under standard conditions. 77.61.180.106 (talk) 05:43, 26 July 2023 (UTC)
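Worth noting: the two numbers quoted are near-reciprocals, so the old Mac pipeline was approximately linear end to end:

    (V^0.55)^1.8 = V^0.99 ≈ V

i.e. the 0.55 encoding is essentially the exact inverse of the 1.8 decoding.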

Issues with gamma multiple meanings

Gamma has multiple meanings in the world of images and video. The gamma of a photographic film is not the same as the gamma (transfer function) of displays, cameras, and color spaces. The gamma effect you find in image and video editing software, alongside other effects such as lift, gain, and contrast, is also something different. This page mixes everything together. It will need cleanup and proper explanation. — SH4ever (talk) 17:02, 5 November 2021 (UTC)

Gamma is somewhat generally used for any power-law exponent. It could be any other letter, but it is commonly gamma. Power laws are used in different ways, which one might take exception to. Gamma is commonly used for the contrast index of photographic film, though there it measures the slope of a curve that isn't quite a straight line. It is used in the case of CRTs, which naturally have a nonlinear response. And as noted above, there is a gamma correction on TV broadcasts that partly makes up for the CRT gamma, giving a desired result. Fundamentally, all power laws are the same; they are just used in different ways. Gah4 (talk) 18:57, 5 November 2021 (UTC)

gamma test image rendering on high density displays

How do we apply an "image-rendering: pixelated;" CSS style to the image used for the gamma test? Eesn (talk) 22:43, 25 November 2022 (UTC)

1.0 gamma is just linear gamma, XYZ can use linear gamma

It is thus supported in browsers. Also, stop quoting me from my GitLab comments! I never said 1.0 (in an ICC profile, BTW, not gAMA) is common, or that it even relates to PNG or JPEG. It was the J2K format, unrelated. Valery Zapolodov (talk) 17:18, 26 July 2023 (UTC)

1.0 is THE WRONG VALUE when stored in any typical .png file. Stop trying to say it is correct. Also stop deleting the fact that there is no gamma value that means "put the 8-bit data unchanged into the display buffer", which is what virtually all users of 8-bit files want. Spitzak (talk) 18:15, 26 July 2023 (UTC)
Wrong. Here is a PNG from a DPX scan (https://forum.videohelp.com/attachments/43324-1507526652/dpx-sequence.zip); it has gAMA 1.0 and opens correctly in Chrome and Firefox. https://photos.google.com/share/AF1QipPGmNiAPrmZCZkqJUx0_NghSyyhKkBuZfCr6hVDWGkb9a5AajdfKe0-abglHoKVEA?key=ZmJlb2NQeURycEZTcFNOSnA0QjMxQzZ1NjVmVXd3 Valery Zapolodov (talk) 18:30, 26 July 2023 (UTC)
By far the biggest problem was programs writing 1.0 when the data WAS NOT 1.0 GAMMA! It does not matter if actual gamma-1.0 data displays correctly. There were an awful lot of people who thought "gamma" was the division of the actual gamma value by the display gamma value, thus 1.0. I dealt with this quite a lot and know this for a fact; basically this meant "ignore any gamma near 1.0". It is possible that PNG files with this wrong gamma are disappearing nowadays, so browsers don't have to do it. Also, I have no idea if that image is being shown correctly; it looks really dark to me (which is what would happen if "real" gamma-1.0 data is displayed without color correction on an sRGB display). Spitzak (talk) 18:41, 26 July 2023 (UTC)
No one does that. No one writes that. The ICC profile was corrupted. I edited my comments on GitLab. Valery Zapolodov (talk) 18:45, 26 July 2023 (UTC)
I edited my comment further (https://gitlab.gnome.org/GNOME/gimp/-/issues/5363#note_1008599) and added "Please do not use that as a cite on WIKIPEDIA!" Valery Zapolodov (talk) 18:50, 26 July 2023 (UTC)
There is such a gamma value: it is the value that equals the transfer function of the display. E.g. sRGB if the display is sRGB, 2.2 if the display is 2.2, 1.8 if 1.8. Valery Zapolodov (talk) 18:44, 26 July 2023 (UTC)
As well as I know it, in reflection and transparency photography, the combined gamma from original scene to final visual product has to be somewhat greater than 1.0 to look right to us, maybe up to about 1.5. I believe this is also true for electronic displays, from television CRTs up through computer LCD monitors. The human visual system is not linear, and so the system needs to correct for that. Gah4 (talk) 04:32, 21 April 2024 (UTC)

Browsers support PNG's gAMA chunk

Indeed: try opening this image in Chrome or Firefox, even on Android; it will show the pear, not the apple. https://gitlab.gnome.org/GNOME/gimp/uploads/e764b2029957401b9f99d46e3e1c6203/VhGrd.png Valery Zapolodov (talk) 18:58, 26 July 2023 (UTC)

I will add a good test with gAMA chunks: http://www.libpng.org/pub/png/colorcube/colorcube-pngs-gamma16.html Valery Zapolodov (talk) 22:21, 26 July 2023 (UTC)
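For anyone who wants to generate such test files themselves, here is a minimal stdlib-only sketch of a 1×1 grayscale PNG with a gAMA chunk (the file name is made up; per the PNG specification, the chunk stores the encoding gamma times 100000, e.g. 45455 for 1/2.2):

    import struct, zlib

    def chunk(ctype, data):
        # length + type + data + CRC over type and data
        return (struct.pack(">I", len(data)) + ctype + data
                + struct.pack(">I", zlib.crc32(ctype + data)))

    gamma = 1 / 2.2                      # encoding gamma of the samples
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit grayscale
    idat = zlib.compress(b"\x00\x80")    # filter byte 0 + one mid-gray pixel
    png = (b"\x89PNG\r\n\x1a\n"
           + chunk(b"IHDR", ihdr)
           + chunk(b"gAMA", struct.pack(">I", round(gamma * 100000)))
           + chunk(b"IDAT", idat)
           + chunk(b"IEND", b""))
    with open("gamma_test.png", "wb") as f:
        f.write(png)

Varying the gAMA value while keeping the pixel data fixed gives a quick visual check of whether a browser honors the chunk.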

Can someone find a recent summary of the current state of browser handling of gamma (or color profiles) in PNG images and HTML/CSS colors? The last time I tried running a bunch of tests on a few machines and asked browser vendors to fix their spec-noncompliant HTML/CSS color rendering was c. 2006, and my vague impression is that they may still be spec-noncompliant to this day, but I haven't tried testing recently. (Apple's solution about that time was to just start making their display hardware quite close to sRGB, so that if they did no color management for HTML/CSS colors, they would still render roughly correctly, as long as nobody used a third-party display or changed the operating system display settings.) It would be nice to have a clear source to cite instead of just going on the gut feeling of Wikipedians. –jacobolus (t) 04:44, 27 July 2023 (UTC)

Chrome already supports HDR (last test in https://www.wide-gamut.com/test) in AVIF, and WCG in CSS since 2023 (tests at https://css-tricks.com/the-expanding-gamut-of-color-on-the-web/ and also https://developer.chrome.com/articles/high-definition-css-color-guide/), as per CSS Color Module Level 4. Since Android 10, color management is automatic too, at least in the Samsung browser. And they are already doing https://www.w3.org/TR/css-color-5/; it works perfectly on my LG C9 TV, my only monitor, and on my Galaxy S22 Ultra. Valery Zapolodov (talk) 13:38, 27 July 2023 (UTC)
There is evidence that the gAMA PNG chunk worked in all browsers besides IE 7 as far back as 2008: https://habr.com/ru/amp/publications/19163/ Valery Zapolodov (talk) 17:23, 27 July 2023 (UTC)

History of the Concept

I'd be interested in learning who first worked on the concept. Who named it gamma? Why gamma and not another Greek letter?

— David James, Atlanta 2600:1700:2876:78D0:6DB3:6567:68C:D699 (talk) 12:51, 20 April 2024 (UTC)

As far as I know, and without doing any actual WP:OR, it goes back to the somewhat early days of silver halide photography. In that case, gamma corresponds to what we usually consider contrast. It is the slope of the characteristic curve, now named after Hurter and Driffield, who were working from about 1876 into the 1890s. Even more, and not so obvious: to look right to us, from original scene to a print or transparency, the gamma needs to be a little more than 1.0, maybe up to 1.4.
In addition, the beam current in a CRT is not linearly dependent on the grid-cathode voltage, but follows a power law, maybe with gamma 2.5. In the vacuum-tube days, it was then usual to apply an inverse gamma in the TV camera, though not quite the inverse of the CRT. That is, the camera plus TV set has a combined gamma greater than 1, maybe 1.5 or so. It depends in a complicated way on the human visual system, which is not linear.
I don't know if Hurter and Driffield actually named it gamma, but it seems that they did the early work on it. Gah4 (talk) 04:28, 21 April 2024 (UTC)

Hurter and Driffield

As the article notes, gamma goes back to silver halide films in the 1870s and 1880s, and yet the article also indicates that gamma correction originated with the CRT. Hurter and Driffield studied the effect in photographic films, and the characteristic curve is now commonly named after them. Gah4 (talk) 07:03, 21 April 2024 (UTC)