What is brightness?

This boring gray square is much more interesting than it first appears.

In fact it isn’t gray at all; it’s a checkerboard of black and white pixels. If you are very close to the screen, you might actually be able to see the individual pixels, which is not what we want. Adjust your viewing distance or squint your eyes. The goal is to have this square appear like one uniform shade of gray.

I want this image to look correct on your screen because we’ll be using it to strike at the heart of a seriously tricky issue related to how computers display color: brightness.

Let’s imagine how we perceive this square in terms of measurable light. If the square were solid black, it would be emitting whatever the minimum amount of light is for your screen (disregarding the independent brightness setting of your screen). Let’s call this minimum amount X. Similarly, if the square were pure white, it would be emitting the maximum amount of light, which we’ll call Y. Intuitively, since half of the pixels are white and half are black, the overall light emitted by the square will average out to be halfway between those two extremes: (X+Y)/2.

But we can also think about this square in another way. Your computer has to store pixel intensities as numbers somewhere so that it can tell your screen how bright each pixel should be. Let’s say the number it stores for the darkest pixels (black) is 0 and the number it stores for the brightest pixels (white) is 1. If we had to pick a single pixel intensity number to represent the entire square, we could reason similarly as we did above and choose (0+1)/2=0.5. Right?

Well, let’s put those two side by side. The checkerboard from above first, and that solid-color 0.5 pixel intensity square right next to it. By the above logic, the two should look almost exactly the same.

…WHAT!?

The reason why the second square looks darker is because of an incorrect assumption in our second argument about pixel intensities (on many cheap laptop displays, the second square will also look much more blueish; this is an unrelated effect, so try to ignore it and focus just on how bright the two squares are).

We were correct to say that 0 is the number to represent the minimum light intensity and that 1 represents the maximum. Our incorrect assumption is that 0.5 must correspond to the average of those two extreme light intensities. Remember that these numbers are just how your computer is storing the brightness level so that it can tell your screen what to do. There is no requirement that the number 0.5 has to correspond with the brightness halfway between minimum and maximum. Your computer could, for example, choose that as the number increases from 0 towards 1, the actual brightness emitted by your screen first increases very slowly. Then as the brightness number gets closer to 1, the screen brightness could start increasing faster. Consider this example:

In the middle we have two spectra from black to white. I made each spectrum according to a different rule. The spectrum on the right is based on the numbers your computer stores: the top is the color corresponding with brightness 0, the bottom is brightness 1, the middle is brightness 0.5, and so on. The left spectrum however is based on light intensity. The top is the darkest color your screen can display, the bottom is the brightest, and halfway through is half as bright as the brightest color. If you don’t believe me, the areas on the left and right are repeated from the previous two squares I showed you: the black and white checkerboard and the solid 0.5 color. You can see that both sides appear to match their respective spectrum right in the middle.

Notice how the spectrum on the right has many more dark shades than the left one. This points out something you might not have realized about your eyes: they are much better at telling dark shades apart than light ones! On the left you can see that the difference between maximum brightness and half of maximum brightness is very subtle, and most of the distinguishable shades are crammed into the top of the spectrum. The reason why these two spectra look so different is because your computer is (intentionally) accounting for this quirk of our eyes. As the brightness numbers transition from 0 to 1, your computer adjusts the rate of increase in measurable light intensity so that the perceived transition looks smooth to the human eye.

So these are the two ways of thinking about brightness: perceived brightness based on distinguishability to the human eye, and measurable light intensity. For short, I’ll call these perceived brightness and light intensity moving forward. From the previous examples, we learned that computer color is actually based on perceived brightness, because when we look at the “middle” value of 0.5, we end up in the middle of the perceived brightness spectrum, not the middle of the light intensity one.

Takeaway #1

The intensity of a light source (or color) is not the same thing as how bright we perceive that light (or color) to be.

Gamma and sRGB

The relationship between these two ways of thinking about brightness is well-studied. There is a function that will give you the light intensity for a perceived brightness and vice versa. For the purposes of this blog, we don’t care what this function is exactly, but in general this process of switching between perceived brightness and light intensity is called gamma correction, and thus the function is often called a gamma curve.

What complicates the issue somewhat is that there are many different gamma curves out there, because in general gamma is a way of converting between how a color is stored and how a color is displayed. However, for now we only care about images stored and displayed on computers, and on computers there’s one ubiquitous standard: sRGB. As we just discussed, your computer stores colors in terms of their perceived brightness; that is exactly what sRGB is. So all of that talk about how 0.5 brightness looks halfway between the darkest and lightest colors only makes sense in the context of the sRGB color space.
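For this post the exact curve doesn’t matter, but if you’re curious, here is a sketch of the standard sRGB conversion in JavaScript (the function names are mine; the constants are the ones from the sRGB standard):

```javascript
// Convert a stored sRGB value (perceived brightness, 0 to 1)
// into a linear light intensity (0 to 1).
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// The inverse: light intensity back to a stored sRGB value.
function linearToSrgb(l) {
  return l <= 0.0031308 ? 12.92 * l : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

// The stored value 0.5 corresponds to only about 21% of the
// maximum light intensity:
console.log(srgbToLinear(0.5)); // ≈ 0.214
```

Note the piecewise definition: the small linear segment near black avoids the curve having an infinite slope at 0.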

One subtle point is how sRGB relates to black and white (grayscale) images. All of the previous examples used grayscale images so that we could focus on just the perceived brightness vs. light intensity issue. I acted like every pixel had a single brightness value, but that isn’t how it actually works: every pixel has three values, one each for red, green, and blue. Gamma correction handles each channel the same way, and in grayscale images all three values are always equal, so everything works out very simply. When we gamma correct the perceived color with red, green, blue equal to [0.5, 0.5, 0.5], we get the light intensities [0.214, 0.214, 0.214] (i.e. about 21% of the maximum light intensity). We can think of each pixel as a single perceived brightness (0.5) that gets gamma corrected to a single light intensity (0.214). And since gamma correction is applied to each channel separately and identically, we can easily think about full-color images by considering each color channel on its own. For example, we know that a checkerboard between red [1,0,0] and blue [0,0,1] will look brighter than the average of their sRGB values [0.5,0,0.5] because of the principles we just learned:

Almost all computer images store colors using sRGB. Unless your image format has a specific feature for specifying a different color space, the default is sRGB. This goes for images taken by most cheap digital cameras, images in your browser, images in your favorite image editing software, anything!

Takeaway #2

When you hear gamma correction, think “Colors are not being stored as light intensities and need to be converted.” When you hear sRGB, think “Colors are being stored in terms of perceived brightness and there is a standard gamma curve for converting that to light intensity.”

The problem

So far this has all been purely academic, and I’ve been focusing only on correcting your understanding of brightness. But there are occasionally situations where your computer needs to understand these different ways of thinking about brightness. And when it understands them incorrectly, problems can occur.

Let’s look at our familiar checkerboard again, and then right next to it I’ll have your web browser show the same checkerboard but shrunk down in size.

…WHAT!?

I told your browser to make it smaller, which it did, but it also made it darker. Is this the correct behavior? Here’s a simple test: keep looking at the larger checkerboard and back away from your screen. Does the square get darker as you get further away? Nope.

What’s happening here is your browser is making the exact same incorrect assumption we made in the first section. Half of the pixels are black and half are white, so when we shrink the image the light intensities blend together and we should end up with a solid square with light intensity halfway between black and white: (X+Y)/2. But your browser instead says black=0, white=1, so we get (0+1)/2=0.5. It’s mixing up perceived brightness and light intensity!
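Here’s a sketch of what a gamma-aware average would look like, assuming the standard sRGB curve (this is my own illustration, not the browser’s actual code):

```javascript
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}
function linearToSrgb(l) {
  return l <= 0.0031308 ? 12.92 * l : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

// A 50/50 mix of black (0) and white (1) pixels.

// Naive: average the stored sRGB values directly.
const naive = (0 + 1) / 2; // 0.5 — noticeably too dark

// Correct: convert to light intensities, average those,
// then convert back to sRGB for display.
const correct = linearToSrgb((srgbToLinear(0) + srgbToLinear(1)) / 2);
console.log(correct); // ≈ 0.735 — matches the checkerboard's apparent shade
```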

To understand why this is incorrect, think about this: when colors blend together, they blend together physically i.e. with groups of photons mixing together in the real world. The checkerboard looks gray because the tiny groups of black photons from the black pixels mix with the tiny groups of white photons from white pixels. When you “shrink” the square by backing away from your screen, you aren’t changing the way those photons are mixing, you’re only changing the area of the eyeball they are hitting: same mixture, same light intensity, less area.

This problem occurs because the smallest unit of color your screen can display is a single pixel, and the checkerboard is made of individual black and white pixels. When I ask your browser to display those same pixels in a smaller area, it’s forced to mix them together, thus I’m forcing it to demonstrate its understanding of light intensity.

This problem crops up any time you ask your computer to blend together colors stored in sRGB. For example, it’s well-known that mixing together red and blue gives you purple, right? How does the browser do?

At the top I told the browser to draw a gradient from the most intense red to the most intense blue. The dark, murky purple we get in the middle of the gradient is also drawn on its own right below that. This is the same result you get when you naively mix together sRGB color values (which is what the browser is doing). On the bottom we have a different gradient that I drew manually based on mixing light intensities, and above that is the bright purple you get in the middle.

Bright red + bright blue = dark purple? Ew, no. The bright purple on bottom makes both intuitive and visual sense. This is the general pattern when mishandling brightness: colors end up too dark.
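The fix is the same recipe per channel: gamma correct, average the light intensities, gamma encode again. A sketch, assuming the standard sRGB curve (the function names are mine):

```javascript
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}
function linearToSrgb(l) {
  return l <= 0.0031308 ? 12.92 * l : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

// Mix two sRGB colors 50/50 by blending their light intensities.
function mix(a, b) {
  return a.map((ai, i) =>
    linearToSrgb((srgbToLinear(ai) + srgbToLinear(b[i])) / 2));
}

const red = [1, 0, 0];
const blue = [0, 0, 1];
console.log(mix(red, blue)); // ≈ [0.735, 0, 0.735] — a bright purple,
                             // not the murky [0.5, 0, 0.5]
```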

In this blog post, I’m picking examples that clearly point out the error. In real applications, with real digital images, the difference is usually more subtle. And when you start getting into photo manipulation where color distortion is intended in order to achieve a certain look, it can be even harder to tell what is “correct”. But the simple fact is still that your web browser, which was made by hundreds of highly trained software developers, and is used by billions of people every day, is mixing colors incorrectly. Which means it’s also

  • Resizing images incorrectly
  • Blurring incorrectly
  • Blending semi-transparent layers incorrectly
  • Drawing gradients incorrectly

Oh, and it’s not just your browser. Remember what I said earlier about sRGB being a ubiquitous standard in almost every bit of computer software? Yeah, they’re all doing these things wrong too. Well, almost all of them. Software like Photoshop and ImageMagick are capable of mixing colors correctly, but you must explicitly specify the sRGB gamma correction; by default they will do it wrong.

Takeaway #3

If you’re using software to mix colors together, chances are it’s doing it wrong.

Colors

But it’s not even that simple. Earlier I said that we can just think about this gamma issue with each color channel independent of the others, but that was a lie.

sRGB stores three separate numbers representing the perceived brightness of red, green, and blue, respectively. All three numbers go from 0 to 1, so we can represent the brightest red, green, or blue by drawing a color that has 1 in one channel and 0 in the other two. We do that below for all three; pay attention to how you perceive their brightness:

Which of the three appears brightest? Which appears darkest? If your eyes work like most people’s, you perceive green as the brightest and blue as the darkest. But this is despite them all having the same sRGB value of 1 and thus the same light intensity! It’s our human eyes playing tricks on us again: we perceive the brightness of light differently depending on its color.

One commonly accepted standard that quantifies this (and on which sRGB is based) is ITU-R Recommendation BT.709, which defines a formula for how bright a color is based on its RGB light intensities:

L = 0.2126R + 0.7152G + 0.0722B

Notice how the green component is multiplied by a much larger number than the other two. According to this standard, a pure green color is about 10 times brighter than a pure blue one! But wait… what does that value L represent? Is it measurable light intensity or perceived brightness? Confusingly, it’s kind of a mix of both. First of all, this formula is pretty subjective: it’s supposed to reflect the “average” human eye’s receptiveness to various colors of light. So it will certainly be wrong for some observers (think color blindness or tetrachromacy). So in that sense it’s perceived brightness. But notice that I said it’s based on “RGB light intensities”, so its output is also in terms of light intensity. Think about it this way: this formula takes an RGB color in terms of light intensities and gives you the light intensity of the shade of gray that has the same perceived brightness as that color.

It’s essentially a method of converting a color to grayscale. When you make an image grayscale, you need to set R=G=B for every pixel so that they are all shades of gray. And ideally you want the brightness of that gray to match the brightness of the original color. Notice that if R=G=B, plugging into the formula gives you L=R=G=B. So if you set red, green, and blue to L you get the desired brightness-preserving grayscale image. But this is in terms of light intensity, so we have to convert from and to sRGB if we want to use this formula with digital images. So a good brightness-preserving grayscale conversion for sRGB would go like this:

  1. Convert from perceived brightness sRGB to light intensity RGB
  2. Plug that into the BT.709 formula
  3. Convert the resulting light intensity back to perceived brightness
  4. Use that one value for red, green, and blue in sRGB
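Those four steps translate almost directly into code. A sketch in JavaScript (the helper names are mine; the constants come from the sRGB and BT.709 standards):

```javascript
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}
function linearToSrgb(l) {
  return l <= 0.0031308 ? 12.92 * l : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

// Brightness-preserving grayscale conversion for an sRGB color.
function toGray([r, g, b]) {
  // 1. Convert each channel from sRGB to light intensity.
  const [R, G, B] = [r, g, b].map(srgbToLinear);
  // 2. BT.709 luminance (still a light intensity).
  const L = 0.2126 * R + 0.7152 * G + 0.0722 * B;
  // 3. Convert that light intensity back to perceived brightness,
  // 4. and use the one value for all three channels.
  const s = linearToSrgb(L);
  return [s, s, s];
}

// Pure green comes out as a fairly bright gray (≈ 0.86), not the
// far-too-dark 0.33 a naive sRGB average would give.
console.log(toGray([0, 1, 0]));
```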

And guess what, again most software does not do this correctly. Many naively average the sRGB red, green, and blue values (ignoring that green has a stronger contribution to brightness) while others try to be smart by using BT.709, but they plug in the sRGB values directly even though the formula is not designed for that color space. You can see the results below:

The four bottom squares show the green color converted to gray using, from left to right: sRGB average, sRGB “lightness” (a different, but still wrong average of channels), incorrect BT.709 using sRGB, and correct BT.709. As expected, the two sRGB averaged-based results are far too dark since they don’t account for green being perceived more brightly. The incorrect BT.709 formula is almost right, but again is a little too dark because sRGB emphasizes darker shades. If you focus on the border between the green and the gray, the last (correct) square is the least distinguishable from the color in terms of brightness, indicating that it is a good match for a black-and-white conversion.

With all of these examples of sRGB going bad, you might be thinking that it’s a no-brainer to always gamma correct your colors first. But there are other complications. For example, let’s repeat the image from the beginning of this post comparing the light intensity method with the perceived brightness method, but now with black/white replaced with red/blue in the first image and with red/yellow in the second:

Recall how both of these images were made: the left side is a checkerboard showing a mix of light intensities, then a gradient based on light intensities, then a naive sRGB-based gradient, then a solid color resulting from a naive sRGB average. Focus on the purple image first: this is just like the red/blue gradient comparison I did earlier. Again, the left side looks correct since it gives us the intuitive bright red + bright blue = bright purple result. But now look at the orange image.

Even with everything I’ve talked about so far, my gut still tells me that the right side of the orange image using incorrect sRGB averaging looks “correct”. But why? Why does the left side of the purple image look correct while the right side of the orange image looks correct? Why doesn’t a single method of color mixing always give us the right answer?

The difference is in the colors. Red and blue are separate color channels, so when you do a gradient between them the red and blue channels are both changing inversely with each other. The result is a mixture of different amounts of red and blue together. However yellow is represented by RGB as a mixture of red and green, so a gradient from red to yellow is a gradient from [1, 0, 0] to [1, 1, 0]: red and blue stay constant while green varies from 0 to 1. When we perceive the left image from top to bottom, the brightness is affected both by red becoming less intense and blue becoming more intense. However when we perceive the right image from top to bottom, the only brightness change is due to the change in green. And we’re back to where we started: the human eye perceives darker differences better than brighter ones, so the sRGB gradient spreads out those dark differences more and ends up giving a color that really is perceptually halfway between red and yellow.
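To make that concrete, here are the two midpoints of the red-to-yellow gradient, computed both ways (a sketch, assuming the standard sRGB curve):

```javascript
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}
function linearToSrgb(l) {
  return l <= 0.0031308 ? 12.92 * l : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

const red = [1, 0, 0];
const yellow = [1, 1, 0];

// Naive sRGB midpoint: only green changes, and it lands at 0.5.
const naiveMid = red.map((r, i) => (r + yellow[i]) / 2); // [1, 0.5, 0]

// Light-intensity midpoint: green lands at ≈ 0.735 instead,
// which is a yellower, brighter orange.
const correctMid = red.map((r, i) =>
  linearToSrgb((srgbToLinear(r) + srgbToLinear(yellow[i])) / 2));
console.log(correctMid); // ≈ [1, 0.735, 0]
```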

This problem existed right from the beginning. Go back and look at the black/white image from the beginning. Sure, the left side illustrates what color is halfway between black and white in terms of light intensities, but which color actually looks like it’s halfway between black and white? The right side. It has to be the right side because the whole point of sRGB is to smooth out changes in perceived brightness.

The problem is that sRGB is working double duty. On the one hand, the real purpose of sRGB is a kind of compression. There are infinitely many different levels of brightness in the analog real world and we have to choose some finite number to store digitally (usually 256 per channel). We could choose those finite levels to be evenly spread out from the darkest light intensity to the brightest, but this would largely be a waste: we know that our eyes are really bad at distinguishing between similar bright intensities. sRGB fixes this by devoting more of those levels to the darks than the brights, meaning we can store the maximum number of discernible differences in brightness within the limited number of levels we have available.

On the other hand, sRGB has also been co-opted as a user-friendly way of picking and working with colors. Because sRGB smooths out perceptual differences in light intensities, an sRGB gradient from dark to light matches our (incorrect) expectations of what the dark to light transition should look like: halfway through the gradient looks halfway between darkest and lightest even though it’s more like 21% brightness in terms of light intensity. This works great if you lock some channels together and change them in-sync because then the change in color appearance as you tweak the numerical values matches a change in perceived brightness. Want to make a shade of gray look half as bright? Halve all of its RGB values! Want to make a shade of orange look three times as bright? Triple its green channel! In these limited situations, the wrong behavior is actually intuitive. The more channels change independently of each other, the less we get this intuitive matchup and the more strange the sRGB result looks. The best case for sRGB is all channels changing together (the original black/white gradient) while the worst case is two channels inverting (our red/blue example). I must emphasize that this isn’t a case of sRGB being more correct in some cases; mixing colors in the sRGB space is always incorrect. This is about sRGB being more intuitive for people who do not understand the difference between perceived brightness and light intensity.

As a more concrete example, even in the red-yellow gradient case where I said the perceived brightness spectrum looks more “correct”, this intuition is unnatural. If you were to shine a red lamp and a yellow lamp on a wall, the area where their beams blend together would look much more like the light-intensity-based spectrum: the green component that our eyes are more sensitive to would wash out the red even at lower intensities, making the yellowish part of the spectrum look larger, not the 50/50 split you see on the right.

So why does mishandling of sRGB color persist? In my opinion, it’s a perfect mix of ignorant programmers, uneducated users, and the status quo. Many programmers don’t know all of this stuff about sRGB and still think that perceived brightness and light intensity are the same. They write image editing software and web browsers that mishandle sRGB values. Users of this software also don’t understand this distinction and thus think the software is working correctly when it draws gradients or mixes colors together. They are incorrectly unsurprised when simple operations that should be only mixing together colors end up making their images darker. And lastly, these errors are so common in essentially all color-handling software out there that we’re used to them. Designers rely on them when picking colors and they expect them when manipulating images. I’m sure that many developers would consider correct sRGB handling to be a regression at this point, due to how it would upset their users’ expectations.

Takeaway #4

sRGB can be a useful tool when changes in color correspond with changes in perceived brightness, for example when changing a color one channel at a time. In those cases sRGB will match with the layman’s intuition.

But that doesn’t make it correct to manipulate colors in sRGB without performing gamma correction.

Try it for yourself

Here’s an interactive gradient with the light intensity method on top and the incorrect sRGB method on bottom. Click the controls on the sides to change the colors and get a feel for how the two differ.

If you want to try out correct color manipulation, a great way to get started is with ImageMagick. If you want to use your preferred image manipulation software, you’re going to have to look up specific instructions for it elsewhere.

For most ImageMagick commands, you can get it to do the right thing by prefixing your operations with -colorspace RGB to perform the gamma correction and then preceding your final output with -colorspace sRGB to undo it for saving. For example, this works for both resizing and blurs:

convert inputfile.png -colorspace RGB -resize 800 -colorspace sRGB outputfile.png

convert inputfile.png -colorspace RGB -gaussian-blur 0x8 -colorspace sRGB outputfile.png

Gradient generation is a little different because ImageMagick creates all gradients in sRGB space. Instead you have to force it to reinterpret the image as light-intensity RGB (without performing gamma correction) so that the final -colorspace sRGB gamma encodes it correctly:

convert -size 200x200 gradient:red-blue -set colorspace RGB -colorspace sRGB outputfile.png

Converting an image to black and white is also different. If you specify the BT.709 formula it does the gamma correction for you, but the output result is light intensity so you still need the sRGB conversion at the end:

convert inputfile.png -grayscale rec709luminance -colorspace sRGB outputfile.png

Note: ImageMagick also provides the incorrect implementation of the formula, which directly plugs in the sRGB values without gamma correction (they call this rec709luma). Unfortunately, that is the formula used by their default “gray” colorspace, and it’s also the one recommended in their documentation.

Exceptions

Annoyingly, there are exceptional cases.

The first and probably less important one is color inversion. This is a pretty weird, uncommon operation but it’s actually what led me down this rabbit hole in the first place. Obviously when we invert an image we want the colors to… invert. Things that were very red before should be very not red after. Black should become white, etc. But with what we know about sRGB, gamma correction will actually hurt us here. Let’s do an example. We start with a black/white sRGB gradient which gives us a smooth transition between black and white where the halfway point is perceptually halfway between black and white. What will happen if we gamma correct this gradient, do the inversion in terms of light intensity and then convert back to sRGB? Look at the result below, with the original gradient on top and the inverted one underneath:

The result looks like a light intensity-based gradient! Why is this bad? Well remember that the dark colors are where the human eye best perceives details. We’ve taken a gradient that had a perceptually smooth transition and—by inverting it in terms of light intensity—crammed all of the details into what were the very brightest and least-distinguishable colors before. And all of the darker shades in the lower half which used to be easily distinguishable are now nearly-identical white! We intended to just invert the colors, but we ended up inverting the details as well.

The trouble is that unlike color mixing, blurring, or image resizing, color inversion doesn’t have a real-world analogue so there’s no ground truth to compare to. The most reasonable result I can think of is one that inverts colors while maintaining perceptual differences between them so that details don’t get lost/added as a side-effect. Since sRGB is perception-based, the naive inversion of the sRGB channels actually does a pretty good job. The gradient gets flipped, so the perceptual differences are maintained:

The only improvement I can think to make is that this naive inversion doesn’t take into account the BT.709 formula. I can imagine it being desirable that inverting a color also inverts its perceived brightness; the simple sRGB inversion can’t do this because it treats each color channel equally (it works for this simple black/white gradient, but not in general). But off the top of my head I can’t think of any easy way to do this, and I imagine it could actually be pretty expensive so the sRGB approach is a good compromise for now.
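You can see the trade-off with a single number (a sketch, assuming the standard sRGB curve): the perceptual mid-gray 0.5 stays put under naive sRGB inversion, but lands almost at white under light-intensity inversion, which is exactly how the details get crushed.

```javascript
function srgbToLinear(s) {
  return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}
function linearToSrgb(l) {
  return l <= 0.0031308 ? 12.92 * l : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

const midGray = 0.5; // perceptually halfway between black and white

// Naive sRGB inversion: mid-gray stays mid-gray.
const naiveInv = 1 - midGray; // 0.5

// Light-intensity inversion: invert the linear value instead.
const linearInv = linearToSrgb(1 - srgbToLinear(midGray));
console.log(linearInv); // ≈ 0.899 — nearly white
```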

The other, more tricky exception is font rendering. Fonts employ a number of tricks to maintain readability at small sizes. The simplest trick is anti-aliasing, which the font rendering engine applies to edges at small sizes to keep them looking smooth. The engine might also mess with pixel and sub-pixel alignment so that the edges of letters align with the pixel grid and are thus clearer. Font designers can also design their glyphs to actually change shape at smaller sizes, for example removing artistic flourishes that would hamper readability.

In one sense this does have a real world analogue. A small font is kind of like a shrunken font, so in that sense it should use gamma correction just like image resizing. But the goal of font rendering is to get a readable result, not a real-world light-accurate one. If the inaccurate result is more readable it’s the right result. What complicates this is that some font rendering engines do not perform gamma correction when blending colors to make the edges of the font look smooth. At large sizes this is not much of an issue because it just makes the edges of the font look slightly less smooth. But if font designers design their fonts around these incorrect engines, they might be relying on that incorrect color blending at the small sizes to improve readability. So you can’t fix the larger case (which would make edges look smoother and more realistic) without breaking the smaller case (making small fonts less readable).

Further reading

Here are links to where I did most of my research:

Gamma error in picture scaling. A very extensive article with a focus on image resizing. Good examples of real world images with noticeable distortion due to incorrect color space handling.

Computer Color is Broken. A nicely animated explanation with a focus on image blurring.

What every coder should know about gamma. Really thorough article with even more examples than I have here.

Resizing with Colorspace Correction. The ImageMagick manual’s section on correctly resizing images, with some notes about other linear color spaces.

Interactive Wide-Gamut Comparison. Kinda unrelated, but neat if you want to check out some non-sRGB colorspaces.

Lastly, if you’re interested in how the math works, you can read the source for this page in your browser. All of the images in this article are procedurally generated in JavaScript, so you can see how I do the gamma-correct gradients and such.

For the pedants

I fudged a lot of stuff while writing this. It kind of drives me crazy how widespread this issue is and yet how little attention it gets, so I simplified things a bit to get the point across quicker. But if you really care about technical details, read on.

“Light intensity” and “perceived brightness” are not the correct terms for this stuff. Usually people describe sRGB as a “nonlinear RGB” color space. And when you gamma correct sRGB to get light intensities, that’s “linear RGB”. I find these terms a little vague because they are implicitly referring to light intensities: sRGB has nonlinear light intensity, and when you gamma correct it you (obviously) get linear light intensity, because that’s what gamma correction is. But if you think about it in terms of perceived brightness, then sRGB is the linear one and the “linear RGB” is now nonlinear! I find that thinking linearly is more intuitive, so I named both of them by the context in which they are linear. sRGB is linear in the perceptual space and “linear RGB” is linear in the light intensity space, and both of them are nonlinear when viewed from the perspective of the other space.

I made it sound like everything uses sRGB and there’s only one sRGB gamma curve, but that’s not entirely true. Different screens can display colors differently and might have to tweak the gamma conversion to get the right result. So although there’s only one sRGB standard, the standard gamma curve can have variations. And again, gamma is really about the difference between how colors are stored and displayed, but colors can get stored in lots of places before they end up on your screen. For example your image file might get loaded by your operating system, which hands it to your image editing software, which uploads it to your graphics card, which sends it to your screen. All of those intermediate steps might use different color spaces and do their own gamma conversion, and you need the whole chain to be working correctly in order to get the right result. So your software can be handling sRGB gamma exactly right and still the result looks wrong because something else in the chain messed it up.

I’m not a physicist. I’m sure there’s something terribly inaccurate about saying that “tiny groups of black photons” are “mixing” with “tiny groups of white photons”. But hopefully this still gets the idea across.

There is a large class of software that does color blending right: graphics drivers. The code that runs on graphics processors to compute colors all runs in a linear RGB color space so that they blend correctly. Though it’s still possible to mess it up: when loading an sRGB image onto the graphics card you still need to tell the driver to do the gamma correction. Otherwise it will load the sRGB values directly and will end up doing linear operations on nonlinear values!

You could argue that sRGB wasn’t designed as a compression scheme, it actually has to do with CRT voltages and the whole perceived brightness thing was a happy accident. But I believe that coincidence is why it has stuck around so long and why this gamma correction issue is so hard to teach people about. So it might as well be the reason for its existence.