Sunday, May 28, 2017

Chromaticity Diagrams

Humans can see light with wavelengths between roughly 380 nm and 780 nm. We see many photons at a time, and we perceive the collection of photons as a particular color. Each photon has a frequency, so a particular color is really the effect of how much power the photons carry at each frequency. Put another way, a color is a distribution of power across the visible wavelengths of light. For example, if 1/3 of your power is at 700 nm and 2/3 of your power is at 400 nm, the color is a deep purple. This color can be described by a function of power over the visible wavelengths:



Different curves over this domain represent different colors. Here is the curve for daylight:



So, if we want to represent a color, we can describe the power function over the domain of visible wavelengths. However, we can do better if we include some biology.

Biology

We have three types of cells (called “cones”) in our eyes which react to light. Each of the three kinds of cones is sensitive to different wavelengths of light. Cones only exist in the center of our eye (the “fovea”) and not in our peripheral vision, so this model only accurately describes the colors we are directly looking at. Here is a graph of the sensitivities of the three kinds of cones:



Here, you can see that the “S” cones are mostly sensitive to light at around 430 nm, but they still respond to light within a window of about 75 nm around it. You can also see that if all the light entering your eye is at 540 nm, the M cones will respond the most, the L cones will respond strongly (but not as much as the M cones), and the S cones will barely respond at all.

This means that the entire power distribution of light entering your eye is encoded as three values on its way to your brain. There are significantly fewer degrees of freedom in this encoding than there are in the source frequency distribution. This means that information is lost. Put another way, there are many different frequency distributions which get encoded the same way by your cones’ response.

This is actually a really interesting finding. It means we can represent a color with three values instead of a whole function across the frequency spectrum, which is a significant space savings. It also means that if you have two colors which appear to “match,” their frequency distributions may not match, so if you modify both colors in the same way, they may cease to match.

If you think about it, though, this is the principle that computer monitors and TVs use. They have phosphors in them which emit light at a particular frequency. When we watch TV, the frequency diagram of the light we are viewing contains three spikes at the three frequencies of phosphors. However, the images we see appear to match our idea of nature, which is represented by a much more continuous and flat frequency diagram. Somehow, the images we see on TV and the images we see in nature match.

Describing color

So a color can be represented by a triple of numbers: (response strength of the S cones, response strength of the M cones, response strength of the L cones). Every color we can perceive corresponds to some combination of these three values.

It would be great if we could simply represent a color by the response strength of each of the particular kinds of cones in our eyes; however, this is difficult to measure. Instead, let’s pick frequencies of light which we can easily produce. Let’s also select these frequencies such that they correspond as well as possible to each of the three cones. By varying the power these lights produce, we should be able to produce many of these triples, and therefore many of the colors we can see.

In 1931, two experiments attempted to “match” colors using lights at 435.8 nm, 546.1 nm, and 700 nm (let’s call them the “blue,” “green,” and “red” lamps). The first two wavelengths are easily created with mercury vapor tubes, and correspond to the S and M cones, respectively. The last wavelength corresponds to the L cones, and, though it isn’t easily created with mercury, it is insensitive to small errors because the L cones’ sensitivity curve is close to flat in this neighborhood.

So, which colors should be matched? Every color can be decomposed into a collection of power values at particular frequencies. Therefore, if we could find a way to match every frequency in the range humans can observe, this data would be sufficient to match any color. For example, if you have a color with a peak at 680 nm and a peak at 400 nm, and you know that 680 nm light corresponds to lamp powers of (a, b, c) and 400 nm light corresponds to lamp powers of (d, e, f), then (a + d, b + e, c + f) should match your color.
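
As a sanity check of that additivity argument, here is a small sketch. The per-wavelength matching values are entirely made up for illustration; the real tables come from the experiments described below.

// Hypothetical matching data: how much power each lamp (blue, green, red)
// needs in order to match one unit of power at a given wavelength.
// These numbers are invented purely for illustration.
const matchingData = {
  400: { blue: 0.9, green: 0.05, red: 0.02 },
  680: { blue: 0.0, green: 0.10, red: 0.80 },
};

// A color is a set of (wavelength -> power) entries. Because matching is
// additive, the lamp powers that match the whole color are the sums of the
// lamp powers that match each wavelength, weighted by that wavelength's power.
function lampPowersFor(spectrum) {
  const total = { blue: 0, green: 0, red: 0 };
  for (const [wavelength, power] of Object.entries(spectrum)) {
    const match = matchingData[wavelength];
    total.blue += power * match.blue;
    total.green += power * match.green;
    total.red += power * match.red;
  }
  return total;
}

console.log(lampPowersFor({ 400: 1, 680: 1 })); // roughly { blue: 0.9, green: 0.15, red: 0.82 }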

This frequency-by-frequency matching was performed by two people, Guild and Wright, using 7 and 10 observers, respectively (and averaging the results). They went through every frequency of light in the visible range, and found how much power each of the lamps had to emit in order to match it.

However, they found something a little upsetting. Consider the challenge of matching light at a wavelength of 510 nm. At this wavelength, we can see that the S cones would react near zero, and that the M cones would react maybe 20% more than the L cones. So, we are looking for how much power our primaries should emit to construct this same response in our cones.

(The grey bars are our primaries, and the light blue bar is our target)

Our primaries lie at 435.8 nm, 546.1 nm, and 700 nm. So, the blue lamp should be at or near 0; so far so good. If we select a power of the green light which gives us the correct M cone response, we find that it causes too high of an L cone response (because of how the cones overlap). Adding more of the red light only causes the problem to grow. Therefore, because the cones overlap, it is impossible to achieve a color match with this wavelength of light using these primaries.

The solution is to subtract the red light instead of adding it. The reason we couldn’t find a match before is that our green light added too much L cone response. If we could remove some of the L cone response, our color would match. We can do this by matching against the sum of the 510 nm light plus some of our red lamp, instead of against the 510 nm light alone. Moving the red lamp to the target side has the effect of subtracting its contribution from the match, and lets us match our color.
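
To make the overshoot concrete, here is a tiny numeric sketch. The cone responses are completely made up, and red’s small M contribution is ignored for simplicity.

// Made-up cone responses per unit of lamp power, just for illustration.
const target = { S: 0.02, M: 1.00, L: 0.75 }; // response to 1 unit of 510 nm light
const green = { S: 0.01, M: 1.00, L: 0.90 }; // 546.1 nm lamp
const red = { S: 0.00, M: 0.05, L: 1.00 }; // 700 nm lamp

// Getting the right M response takes about 1 unit of green, but that already
// produces L = 0.90, which overshoots the target's 0.75, and adding red only
// raises L further. Instead, add r units of red to the *target* side so that
// target.L + r * red.L equals green.L:
const r = (green.L - target.L) / red.L;
console.log(r); // ~0.15 units of red on the target side, i.e. -0.15 in the match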

Using this approach, we can construct a graph, where for each wavelength, the three powers of the three lights are plotted. It will include negative values where matches would otherwise be impossible.



X Y Z color space

Once we have this, we can now represent any color by a triple, possibly with negative components, where each value in the triple represents the power of one of our primaries. However, the fact that these values can be negative kind of sucks. In particular, machines were created which can measure colors, but those machines would have to be more complicated if some of the values could be negative.

Luckily, light obeys some convenient math. In particular, addition and scaling by a constant behave consistently: color A plus color B yields a consistent match, no matter which frequency distribution color A happens to have, and the same is true for scaling. This means that we are actually dealing with a vector space, and a vector space can be transformed via a linear transformation.

So, the power values of each of the lights at each frequency were transformed such that the resulting values were positive at every frequency. This new, transformed vector space is called X Y Z, and its primaries are not physically realizable.
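
As a concrete illustration of “transform the vector space,” here is a 3×3 matrix multiply applied to a color triple. The matrix shown is the widely published linear-sRGB-to-XYZ (D65) matrix rather than the original 1931 CIE-RGB-to-XYZ matrix, but the mechanics are identical.

// Converting an RGB-like triple to XYZ is a single linear transformation.
// (Matrix values are the standard linear-sRGB-to-XYZ / D65 coefficients.)
const M = [
  [0.4124, 0.3576, 0.1805],
  [0.2126, 0.7152, 0.0722],
  [0.0193, 0.1192, 0.9505],
];

function rgbToXYZ([r, g, b]) {
  return M.map(row => row[0] * r + row[1] * g + row[2] * b);
}

console.log(rgbToXYZ([1, 0, 0])); // the XYZ coordinates of the red primary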

Given these new non-physical primaries, you can construct a similar graph. It shows, for each frequency of light, how much of each primary is necessary to represent that frequency.



Chromaticity graphs

So, for each frequency of light, we have an associated (X, Y, Z) triple. Let’s plot it on a 3-D graph!



The white curve is our collection of (X, Y, Z) triples. (The red lines are our unit axes.)

Remember that every visible color is represented as a linear combination of the vectors from the origin to points on this (one-dimensional) curve.

The origin is black, because it represents zero power. The hue and saturation of a color are described by the direction of its point from the origin, not by the point’s distance from the origin. If we want to represent this space in two dimensions, it makes sense to eliminate the brightness component and show only hue and saturation. This can be done by projecting each point onto the X + Y + Z = 1 plane, shown in green on the chart above.
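
That projection is easy to compute: divide each coordinate by the sum. A minimal sketch:

// Project an (X, Y, Z) triple onto the X + Y + Z = 1 plane. The resulting
// (x, y) pair is what chromaticity diagrams plot; brightness is discarded.
function chromaticity([X, Y, Z]) {
  const sum = X + Y + Z;
  if (sum === 0) return null; // black has no defined chromaticity
  return { x: X / sum, y: Y / sum };
}

console.log(chromaticity([0.4124, 0.2126, 0.0193])); // roughly { x: 0.64, y: 0.33 }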

Note that the projected shape is convex. This is particularly interesting: any point on the contour of the shape, plus any other point on the contour, yields a point within the interior of the shape (when projected back onto our projection plane). Recall that all visible colors are linear combinations of points on the contour of the curve. Therefore, the visible colors are exactly the points in the interior of this curve. Points outside of this curve represent colors with a negative cone response for at least one type of cone (which cannot happen).

So, the inside of this horseshoe shape represents every visible color. This shape is usually visualized by projecting it down to the (X, Y) plane. That yields the familiar diagram:



The inside of this horseshoe represents every color we can see. Also, notice that, because X Y Z was constructed so that every visible color has positive coordinates, and the projection we are viewing is onto the X + Y + Z = 1 plane, all the points on the diagram lie below the X + Y = 1 line.

Color spaces

A color space is usually defined by three primary colors, along with a white point (or a maximum bound on the magnitude of each primary color). The colors in the color space are usually represented as a linear combination of the primary colors (subject to some maximum). In our chromaticity diagram we aren’t concerned with brightness, so we can ignore these maximum values (and the associated white points). Because we know the representable colors in a color space are linear combinations of the primaries, we can plot the primaries in X Y Z color space and project them onto the same X + Y + Z = 1 plane. Using the same logic as above, the representable colors in the color space are the interior of the triangle formed by this projection.

You can see the result of this projection for the sRGB primaries in the shaded triangle in the chart above. There are many colors that human eyes can see which aren’t representable within sRGB. The chart also lets you toggle the bounding triangle for the Display P3 color space (which uses the DCI-P3 primaries), which Apple recently shipped on some of its devices; Display P3 includes noticeably more colors than sRGB.
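
Because a gamut is just a triangle on this plane, checking whether a chromaticity is representable reduces to a point-in-triangle test. Here is a rough sketch using the published sRGB primary chromaticities:

// sRGB primary chromaticities on the x + y (+ z) = 1 plane.
const sRGB = [
  { x: 0.64, y: 0.33 }, // red
  { x: 0.30, y: 0.60 }, // green
  { x: 0.15, y: 0.06 }, // blue
];

// Standard same-side-of-every-edge test.
function cross(o, a, b) {
  return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

function insideGamut(p, [a, b, c]) {
  const d1 = cross(a, b, p);
  const d2 = cross(b, c, p);
  const d3 = cross(c, a, p);
  const hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
  const hasPos = d1 > 0 || d2 > 0 || d3 > 0;
  return !(hasNeg && hasPos);
}

console.log(insideGamut({ x: 0.3127, y: 0.3290 }, sRGB)); // true: the D65 white point
console.log(insideGamut({ x: 0.17, y: 0.78 }, sRGB)); // false: a very saturated green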

Because the shape of all visible colors isn’t a triangle, it isn’t possible to create a color space where each primary is a visible color and the colorspace encompasses every visible color. If your color space encompasses every visible color, the primaries must lie outside of the horseshoe and are therefore not visible. If your primaries lie inside the horseshoe, there are visible colors which cannot be captured by your primaries. Having your primaries be real physical colors is valuable so that you can, for example, actually build physical devices which include your primaries (like the phosphors in a computer monitor). You can get closer to encompassing every visible color if you increase the number of primaries to 4 or 5, at the cost of making each color "fatter."

Keep in mind that these chromaticity diagrams (which are the ones in 2D above) are only useful for plotting individual points. Specifically, 2-D distances across this diagram are not meaningful. Points that are close together on the diagram may not be visually similar, and points which are visually similar may not be close together on the above diagram.

Also, when reading these horseshoe graphs, realize that they are simply projections of a 3D graph onto a somewhat-arbitrary plane. A better visualization of color would include all three dimensions.

Saturday, May 20, 2017

Relationship Between Glyphs and Code Points

Recently, there have been some discussions about various Unicode concepts like surrogate pairs, variation selectors, and combining clusters, but I thought I could shed some light into how these pieces all fit together and their relationship with the things we actually see on screen.

tl;dr: The relationship between what you see on the screen and the Unicode string behind it is completely arbitrary.

The biggest piece to understand is the difference between Unicode's specs and the contents of font files. A string is a sequence of code points. Certain code points have certain meanings, which can affect things like the width of the rendered string, caret placement, and editing commands. Once you have a string and you want to render it, you partition it into runs, where each run can be rendered with a single font (and has other properties, like a single direction throughout the run). You then map code points to glyphs one-to-one, and then you run a Turing-complete "shaping" pass over the sequence of glyphs and advances. Once you've got your shaped glyphs and advances, you can finally render them.

Code Points


Alright, so what's a code point? A code point is just a number. There are many specs which describe a mapping from numbers to meanings. Most of them are language-specific, which makes sense because any given document will likely contain only a single language. For example, in the GBK encoding, character number 33088 (which is 0x8140 in hex) represents the Chinese character δΈ‚. Unicode includes another such mapping, and in Unicode this same character number represents the Chinese character θ…€. Therefore, a code point number alone is insufficient unless you know which encoding it is in.
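
You can see this directly in a browser, which can decode GBK bytes with TextDecoder (assuming the engine ships the GBK tables):

// The same number, interpreted under two different encodings.
const bytes = new Uint8Array([0x81, 0x40]); // 33088 as two bytes
console.log(new TextDecoder("gbk").decode(bytes)); // "δΈ‚" (GBK)
console.log(String.fromCharCode(0x8140)); // "θ…€" (Unicode, i.e. UTF-16)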

Unicode is special because it aims to include characters from every writing system on the planet. Therefore, it is a convenient choice for an internal encoding inside text engines. For example, if you didn't have a single internal encoding, all your editing commands would have to be reimplemented for each supported encoding. For this reason (and potentially some others), it has become the standard encoding for most text systems.

UTF-32


In Unicode, there are a little over 1 million available code points (the maximum is 0x10FFFF), though most of them haven't been assigned a meaning yet. This means you need 21 bits to represent a code point. One way to do this is to use a 32-bit type and pad it out with zeroes. This is called UTF-32 (which is just a way of mapping a 21-bit number to a sequence of bytes so it can be stored). If you have one of these strings on disk or in memory, you need to know the endianness of each of these 4-byte numbers so that you can properly interpret it. You should already have an out-of-band mechanism to know what encoding the string is in, so this same mechanism is often re-used to describe the endianness of the bytes. (On the Web, this is HTTP headers or the <meta charset> tag.) There's also this neat hack called a Byte Order Mark, if you don't have any out-of-band data.
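
A quick sketch of what endianness means for UTF-32 bytes on disk:

// U+1F4A9 as a single 32-bit code unit, in both byte orders.
const cp = 0x1F4A9;
const bigEndian = [cp >>> 24, (cp >>> 16) & 0xFF, (cp >>> 8) & 0xFF, cp & 0xFF];
const littleEndian = [...bigEndian].reverse();
console.log(bigEndian.map(b => b.toString(16).padStart(2, "0"))); // 00 01 f4 a9
console.log(littleEndian.map(b => b.toString(16).padStart(2, "0"))); // a9 f4 01 00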

UTF-16


Unfortunately, including 11 bits of 0s for every character is kind of wasteful. There is a more efficient encoding, called UTF-16. In this encoding, each code point is encoded as either a single 16-bit number or a pair of 16-bit numbers. For code points which fit into a 16-bit number naturally, the encoding is the identity function. Unfortunately, there are over a million (0x100000) code points remaining which don't fit into a 16-bit number themselves. Because there are 20 bits of entropy in these remaining code points, we can split each one into a pair of 10-bit numbers, and then encode that pair as two successive "code units." Once you've done that, you need a way of knowing, if someone hands you a 16-bit number, whether it's a standalone code point or part of a pair. This is done by reserving two ranges inside the character mapping: 0xD800 - 0xDBFF and 0xDC00 - 0xDFFF are declared invalid as code points, and are instead used to encode the two halves of these 20-bit numbers. So, if someone hands you a 16-bit number and it's in one of those ranges, you know you need to read a second 16-bit number, mask the low 10 bits of each, shift them together, and add 0x10000 to get the real code point (otherwise, the number is equal to the code point it represents).
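
Here's that decode step as a small sketch (JavaScript also exposes it directly through String.prototype.codePointAt):

// Combine a high surrogate (0xD800-0xDBFF) and a low surrogate (0xDC00-0xDFFF)
// back into a code point.
function decodeSurrogatePair(high, low) {
  const highBits = high - 0xD800; // top 10 bits
  const lowBits = low - 0xDC00; // bottom 10 bits
  return 0x10000 + ((highBits << 10) | lowBits);
}

// U+1F4A9 PILE OF POO is stored in UTF-16 as the pair 0xD83D, 0xDCA9.
console.log(decodeSurrogatePair(0xD83D, 0xDCA9).toString(16)); // "1f4a9"
console.log("\uD83D\uDCA9".codePointAt(0).toString(16)); // also "1f4a9"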

There are some interesting details here. The first is that the two 10-bit ranges are distinct. It could have been possible to re-use the same 10-bit range for both items in the pair (and use its position in the pair to determine its meaning). However, if you have an item missing from a long string of these surrogates, it may cause every code point after the missing one to be wrong. By using distinct ranges, if you come across an unpaired surrogate (like two high surrogates next to each other), most text systems will simply consider the first surrogate alone, treat it like an unsupported character, and resume processing correctly at the next surrogate.

UTF-8


There's also another encoding called UTF-8, which represents code points as sequences of 1, 2, 3, or 4 bytes. Because it uses bytes, endianness is irrelevant. However, the encoding is more complicated, and it can be less efficient than UTF-16 for some strings. It does have the nice property, however, that no code point other than U+0000 is ever encoded using a 0 byte, which means it is compatible with C strings.
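
For completeness, here is a sketch of the UTF-8 encoding rules for a single code point:

// Encode one code point as 1-4 UTF-8 bytes.
function utf8Encode(cp) {
  if (cp <= 0x7F) return [cp];
  if (cp <= 0x7FF) return [0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)];
  if (cp <= 0xFFFF)
    return [0xE0 | (cp >> 12), 0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)];
  return [
    0xF0 | (cp >> 18),
    0x80 | ((cp >> 12) & 0x3F),
    0x80 | ((cp >> 6) & 0x3F),
    0x80 | (cp & 0x3F),
  ];
}

console.log(utf8Encode(0x1F4A9).map(b => b.toString(16))); // ["f0", "9f", "92", "a9"]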

"πŸ’©".length === 2


Because its encoding is relatively simple but fairly compact, many text systems, including Web browsers, ICU, and Cocoa strings, use UTF-16. This decision has actually had kind of a profound impact on the web. It is the reason the "length" attribute of an emoji string can return 2: "length" returns the number of code units in the UTF-16 string, not the number of code points. If it returned the number of code points, it would require linear time to compute. The assignment of numbers to "characters" (or emoji) isn't completely arbitrary, and some things we think of as emoji actually have code point values less than 0x10000. This is why some of these characters have a length of two while others have a length of one.
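
You can see the code-unit vs. code-point distinction directly in JavaScript:

const poop = "πŸ’©"; // U+1F4A9, above 0xFFFF, so it needs a surrogate pair
console.log(poop.length); // 2 UTF-16 code units
console.log(Array.from(poop).length); // 1 code point
console.log(poop.codePointAt(0).toString(16)); // "1f4a9"

// Some things we think of as emoji sit below 0x10000:
console.log("☃".length); // 1 (U+2603 SNOWMAN)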

Combining code points


Unicode also includes the concept of combining marks. The idea is that if you want to have the character "Γ©", you can represent it as the "e" character followed by U+301 COMBINING ACUTE ACCENT. This is so that every combination of diacritic marks and base characters doesn't have to be encoded in Unicode. It's important because, once a code point is assigned a meaning, it can never ever be un-assigned.

To make matters worse, there is also a standalone code point U+E9 LATIN SMALL LETTER E WITH ACUTE. When doing string comparisons, these two strings need to be equal. Therefore, string comparisons aren't just raw byte comparisons.

This idea can happen even without these zero-width combining marks. In Korean, adjacent letters in words are grouped up to form blocks. For example, the letters γ…‚ γ…Ö γ…‚ join to form the Korean word for "rice:" ë°₯ (read from top left to bottom right). Unicode includes a code point for each letter of the alphabet (γ…‚ is U+3142 HANGUL LETTER PIEUP), as well as a code point for each joined block (ë°₯ is U+BC25). It also includes joining letters, so ë°₯ can be represented as a single code point, but can also be represented by the string:

U+1107 HANGUL CHOSEONG PIEUP
U+1161 HANGUL JUNGSEONG A
U+11B8 HANGUL JONGSEONG PIEUP

This means, in JavaScript, you can have two strings which are treated exactly equally by the text system, and look visually identical (they literally have the same glyph drawn on screen), but have different lengths in JavaScript.
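
For example, in JavaScript (using the escaped forms of the code points listed above):

const precomposed = "\uBC25"; // the single code point U+BC25
const jamo = "\u1107\u1161\u11B8"; // CHOSEONG PIEUP + JUNGSEONG A + JONGSEONG PIEUP
console.log(precomposed.length, jamo.length); // 1 3
console.log(precomposed === jamo); // false, even though they render identically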

Normalization


One way to perform these string comparisons is to use Unicode's notion of "normalization." The idea is that strings which are conceptually equal should normalize to the same sequence of code points. There are a few different normalization algorithms, depending on whether you want the string to be exploded as much as possible into its constituent parts, or combined to be as short as possible, etc.
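
JavaScript exposes this through String.prototype.normalize; for example:

const composed = "\u00E9"; // U+00E9 LATIN SMALL LETTER E WITH ACUTE
const decomposed = "e\u0301"; // "e" + U+0301 COMBINING ACUTE ACCENT

console.log(composed === decomposed); // false: a raw code-unit comparison
console.log(composed.normalize("NFC") === decomposed.normalize("NFC")); // true
console.log(composed.normalize("NFD").length); // 2: exploded into its parts
console.log("\uBC25".normalize("NFD").length); // 3: the Korean example above, exploded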

Fonts


When reading text, people see pictures, not numbers. Or, put another way, computer monitors are not capable of showing you numbers; instead, they can only show pictures. All the picture information for text is contained within fonts. Unicode doesn't describe what information is included in a font file.

When people think of emoji, they usually think of the little color pictures inside our text. These little color pictures come from font files. A font file can do whatever it wants with the string it is tasked with rendering. It can draw emoji without color. It can draw non-emoji with color. The idea of color in a glyph is orthogonal to whether or not a code point is classified as "emoji."

Similarly, a font can include a ligature, which draws multiple code points as a single glyph ("glyph" just means "picture"). A font can also draw a single code point as multiple glyphs (for example, the accent in Γ© may be implemented as a separate glyph from the e). But it doesn't have to. The choice of which glyphs to use where is totally an implementation detail of the font. The choice of which glyphs include color is totally an implementation detail of the font. Some ligatures get caret positions inside them; others don't.

For example, Arabic is a script modeled on handwriting, which means that the letters flow together from one to the next. Here are two images of two different fonts (Geeza Pro and Noto Nastaliq Urdu) rendering the same string, where each glyph is painted in a different color. You can see that the two fonts draw the string with different numbers of glyphs. Sometimes diacritics are contained within their base glyph, and sometimes not.


Variation Selectors


There are other classes of code points which are invisible and are added after a base code point to modify it. One example is Variation Selector 15 and Variation Selector 16. The problem these try to solve is that some code points may be drawn in either text style (☃︎) or emoji style (☃️). Variation Selector 16 is an invisible code point that means "please draw the base character like an emoji," while Variation Selector 15 means "please draw the base character like text." The platform also has a default representation which is used when no variation selector is present. Unicode includes a table of which code points should accept these variation selectors (but, like everything Unicode creates, it informs implementations rather than dictating them).
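
In code-point terms, the two renderings of the snowman are just the base character followed by a different selector:

const textStyle = "\u2603\uFE0E"; // U+2603 SNOWMAN + Variation Selector 15
const emojiStyle = "\u2603\uFE0F"; // U+2603 SNOWMAN + Variation Selector 16
console.log(textStyle.length, emojiStyle.length); // 2 2: the selectors are real code points
console.log("\u2603".length); // 1: no selector, so the platform default style is used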

These variation selectors are a little special because they are the only combining code points I know of that can interact with the "cmap" table in the font, and therefore can affect font selection. This means that a font can say "I support the snowman code point, but not the emoji style of it." Many text systems have special processing for these variation selectors.

Zero-Width-Joiner Sequences


Rendering on old platforms is also important when Unicode defines new emoji. Some Unicode characters, such as "πŸ‘¨‍πŸ‘©‍πŸ‘§" are a collection of things (people) which can already be represented with other code points. This specific "emoji" is actually the string of code points:

U+1F468 MAN
U+200D ZERO WIDTH JOINER
U+1F469 WOMAN
U+200D ZERO WIDTH JOINER
U+1F467 GIRL

The zero width joiners are necessary for backwards compatibility. If someone had a string somewhere that was just a list of people in a row, the creation of this new "emoji" shouldn't magically join them up into a family. The benefit of using the collection of code points is that older systems showing the new string will show something understandable instead of just an empty square. Fonts often implement these as ligatures. Unicode specifies which sequences should be represented by a single glyph, but, again, it's up to each implementation to actually do that, and implementations vary.
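
In JavaScript you can see that this "single emoji" really is five code points:

const family = "\u{1F468}\u200D\u{1F469}\u200D\u{1F467}"; // MAN + ZWJ + WOMAN + ZWJ + GIRL
console.log(family.length); // 8 UTF-16 code units
console.log(Array.from(family).length); // 5 code points
console.log(Array.from(family).map(c => c.codePointAt(0).toString(16)));
// ["1f468", "200d", "1f469", "200d", "1f467"]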

Caret Positions


Similarly to how Unicode describes sequences of code points which should visually combine into a single thing, Unicode also describes what a "character" is, in the sense of what most people mean when they say "character." Unicode calls this a "grapheme cluster." Part of the ICU library (which implements pieces of Unicode) creates iterators which will give you all the locations where lines can break, where words are formed (in Chinese this is hard), and where characters' boundaries lie. If you give it the string of "e" followed by U+301 COMBINING ACUTE ACCENT, it should tell you that these code points are part of the same grapheme cluster. It does this by ingesting data tables which Unicode publishes.
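
Newer JavaScript engines expose this segmentation directly through Intl.Segmenter (other platforms call into ICU's iterators); a quick sketch:

const segmenter = new Intl.Segmenter("en", { granularity: "grapheme" });
const eAcute = "e\u0301"; // "e" + COMBINING ACUTE ACCENT
console.log(eAcute.length); // 2 code units
console.log([...segmenter.segment(eAcute)].length); // 1 grapheme cluster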

However, this isn't quite sufficient to know where to put the caret when the user presses the arrow keys, delete key, or forward-delete key (Fn + delete on macOS). Consider the following string in Hindi "ΰ€•ि". This is composed of the following two code points:

U+915 DEVANAGARI LETTER KA
U+93F DEVANAGARI VOWEL SIGN I

Here, if you select the text or use arrow keys, the entire string is selected as a unit. However, if you place the caret after the string and press delete, only the U+93F is deleted. This is particularly confusing because this vowel sign is actually drawn to the left of the letter, so it isn't even adjacent to the caret when you press delete. (Hindi is a left-to-right script.) If you place the caret just before the string and press the forward delete key (Fn + delete), both code points get deleted. The user expectations for the results of these kinds of editing commands are somewhat platform-specific, and aren't entirely codified in Unicode currently.
Try it out here:
==> ΰ€•ि <==

Simplified and Traditional Chinese


The Chinese language is many thousands of years old. In the 1950s and 1960s, the Chinese government (PRC) decided that their characters had too many strokes, and that simplifying the characters would increase literacy rates. So, they changed how about 1/3 of the characters were written. Some of the characters were untouched, some were changed only very slightly, and some were completely changed.

When Unicode started codifying these characters, they had to figure out whether or not to give the simplified characters new code points. For the characters which were completely unchanged, it is obvious they shouldn't get new code points. For characters which were entirely changed, it is obvious that they should. But what about the characters which changed only slightly? These were decided on a case-by-case basis, and some of the slightly-changed characters did not receive their own new code points.

This is really problematic for a text engine, because there is a discernible difference between the two forms, and if you show the wrong one, it's wrong. This means the text engine has to know, out-of-band, which form to show.

Here's an example showing the same code point with two different "lang" tags.
Simplified Chinese:
ι›ͺ
Traditional Chinese:
ι›ͺ

There are a few different mechanisms for this. HTML includes the "lang" attribute, which indicates whether the language is supposed to be Simplified or Traditional Chinese. This is used during font selection. On macOS and iOS, every Chinese face actually includes two font files: one for Simplified Chinese and one for Traditional Chinese (for example, PingFang SC and PingFang TC). Browsers use the language of the element when deciding which of these fonts to use. If the lang tag isn't present or doesn't include the information browsers need, browsers fall back to the language the machine is configured to use.
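
From script, that just means tagging the element; a sketch assuming a browser environment:

// The same code point, U+96EA ("snow"), under two different language tags;
// font selection picks the Simplified or Traditional face accordingly.
for (const lang of ["zh-Hans", "zh-Hant"]) {
  const span = document.createElement("span");
  span.lang = lang;
  span.textContent = "\u96EA";
  document.body.appendChild(span);
}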

Rather than including two separate fonts for every face, another mechanism to implement this is font features. This is part of that "shaping" step I mentioned earlier. The shaping step can take a set of key/value pairs provided by the environment; CSS controls this with the font-variant-east-asian property. This works by having the font include glyphs for both kinds of Chinese, with the correct one selected as part of text layout. This only works, however, with text renderers which support complex shaping and font features.
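
The feature-based route looks something like this from CSS (set here via JavaScript), assuming the font actually carries both sets of glyphs:

const span = document.createElement("span");
span.textContent = "\u96EA";
span.style.fontVariantEastAsian = "traditional"; // or "simplified"
document.body.appendChild(span);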

I think there's at least one other way to have a single font file draw both simplified and traditional forms, but I can't remember what it is right now.