The entirety of this discussion comes from 
Before CoreAnimation, AppKit had a particular drawing model. In this model, AppKit maintains a single screen-sized canvas. When AppKit wants to populate a region of this canvas, it constructs a CoreGraphics drawing context targeting the canvas, and asks each of your views that intersects the dirty region of the canvas to recursively draw using that drawing context. Your app then calls functions like drawLine(), drawCircle(), drawText(), etc. on the drawing context, and those commands are realized on the canvas. The canvas then shows up on the screen. There is a simple method you can call to mark a region dirty.
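A minimal sketch of this model in Swift (the class name and drawing calls are illustrative, not from the original text):

```swift
import AppKit

// A view participating in the classic AppKit drawing model.
class CircleView: NSView {
    // AppKit calls this whenever the view intersects the dirty region.
    override func draw(_ dirtyRect: NSRect) {
        // The current CoreGraphics context targets the window's canvas.
        guard let ctx = NSGraphicsContext.current?.cgContext else { return }
        ctx.setFillColor(NSColor.systemBlue.cgColor)
        ctx.fillEllipse(in: bounds.insetBy(dx: 4, dy: 4))
    }
}

// Marking a region dirty schedules a redraw of just that region:
// someView.setNeedsDisplay(someView.bounds)
```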
CoreAnimation uses a different paradigm: a scene graph of 2D layers. Each layer can have its own canvas. CoreAnimation supports a drawing model similar to AppKit’s: when CoreAnimation wants to populate a region of a layer, it constructs a CoreGraphics drawing context targeting the layer, and then asks either a subclass or the layer’s delegate to draw using that drawing context. Once all the canvases in the scene graph are populated, CoreAnimation takes care of actually updating the screen.
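Here is a sketch of the delegate path, using the CALayerDelegate protocol (the delegate class and drawing content are made up for illustration):

```swift
import AppKit

// A delegate that populates a layer's canvas when CoreAnimation asks.
class RingDelegate: NSObject, CALayerDelegate {
    func draw(_ layer: CALayer, in ctx: CGContext) {
        ctx.setStrokeColor(NSColor.systemRed.cgColor)
        ctx.setLineWidth(3)
        ctx.strokeEllipse(in: layer.bounds.insetBy(dx: 5, dy: 5))
    }
}

let ringDelegate = RingDelegate()  // CALayer.delegate is weak, so keep a strong reference
let layer = CALayer()
layer.frame = CGRect(x: 0, y: 0, width: 100, height: 100)
layer.delegate = ringDelegate
layer.setNeedsDisplay()  // CoreAnimation calls draw(_:in:) on the next update
```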
Before CoreAnimation, your window was visually atomic, meaning that any time anything changed, you had to redraw everything dependent on that change with CoreGraphics. With CoreAnimation, however, you chop up all your drawing into layers, which are finer-grained than the entire window. Therefore, when a region of a layer changes, only that part of that layer needs to be redrawn (and all the other layers don’t need to be touched).
In addition, layers can all have differing sizes and positions. This means that, if something moves, and it has its own layer, you can just move the layer, and never have to touch CoreGraphics at all!
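For example, repositioning a layer is a pure property change (the names below are illustrative); the layer’s cached backing store is simply composited at the new location, and no drawing code runs:

```swift
import QuartzCore

// Moving a layer only updates its position; the already-painted
// backing store is reused, so CoreGraphics is never invoked.
let sprite = CALayer()
sprite.frame = CGRect(x: 0, y: 0, width: 50, height: 50)
sprite.position = CGPoint(x: 200, y: 120)  // implicitly animated; no redraw
```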
I’d like to take a minute to describe the pros and cons of layers. In particular, layers can be moved extremely quickly compared to AppKit views because the CoreGraphics content doesn’t have to be repainted. However, it is very common to have overlapping layers, which means you may be paying the memory cost of a pixel many times. If you have tons of overlapping views, you can definitely run out of memory. This means that, if you ask for a layer, you may be denied. You should gracefully handle that.
So that’s pretty cool, but CoreAnimation was invented after AppKit, which means that, if AppKit wants to use CoreAnimation, it has to figure out a way to make the two models work with each other. There are two models for this interaction: layer-backed views and layer-hosted views.
Layer-hosted views are conceptually simpler. A layer-hosted view is a leaf view in the AppKit view hierarchy, but it has its own (possibly complex) CoreAnimation layer tree rooted at it. Therefore, there isn’t much interoperability here; instead, when AppKit encounters this view, it knows to just let CoreAnimation handle it from there.
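Setting up a layer-hosted view looks something like this (the layer tree here is a stand-in). The documented ordering matters: assign the layer first, then set wantsLayer, which is how AppKit distinguishes hosting from backing:

```swift
import AppKit

// Layer-hosted: supply your own root layer *before* setting wantsLayer.
// AppKit treats the view as a leaf and lets CoreAnimation own the tree.
let hostView = NSView(frame: NSRect(x: 0, y: 0, width: 300, height: 200))
let root = CALayer()
root.backgroundColor = NSColor.black.cgColor
root.addSublayer(CAShapeLayer())  // an arbitrary sublayer tree
hostView.layer = root             // layer first…
hostView.wantsLayer = true        // …then wantsLayer, to get layer-hosting
```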
A layer-backed view has a CoreAnimation layer, but also has child views (which have their own layers). Thus, the AppKit view hierarchy is intact; each view just has this extra property of its backing layer. Now, layers and views have some duplicate functionality: they both have things like position and size and the ability to ask the application to draw, etc. AppKit owns all the layers which back views, and will automatically synchronize all the state changes between the view and the layer. This means if you move the view to a particular place, AppKit will implement that by moving the layer to that place. When the layer needs to be drawn, AppKit will ask the view to draw into the layer’s context. It can do this because AppKit implements the layer’s delegate, so when the layer needs something, it calls up to AppKit.
You opt in to using a layer via the NSView property wantsLayer. However, note that the name is “wants.” In particular, because of the memory concerns described above, you may want a layer but not get one. This is okay, though, because there isn’t any reason why your drawRect: method should behave differently depending on whether it’s drawing into a layer or not. If you ask for a layer and don’t get one, it just means that animations will be slower, but your app will continue to behave correctly. Therefore, we have graceful degradation.
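In code, that request-not-guarantee shape looks like this (the view class is hypothetical):

```swift
import AppKit

class BadgeView: NSView {
    override func draw(_ dirtyRect: NSRect) {
        // Identical drawing code whether or not this view got a layer.
        NSColor.systemGreen.setFill()
        bounds.fill()
    }
}

let badge = BadgeView(frame: NSRect(x: 0, y: 0, width: 40, height: 40))
badge.wantsLayer = true  // a request, not a guarantee
if badge.layer == nil {
    // No layer was created; drawing still works, animations are just slower.
}
```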
AppKit also exposes a getter which returns the layer associated with any view (and may return nil for regular views). This means, however, that you now have access to both pieces of state that AppKit tries to keep synchronized: properties on the view and properties on the layer. Therefore, when you use layer-backed views, there is a set of properties on the layers that you shouldn’t modify (because modifying them will cause CoreAnimation to get out of sync with AppKit).
But it also means that you can set any of the ::other:: properties of the CoreAnimation layer which AppKit isn’t responsible for. This is pretty powerful, as you can set the contents of the layer to an image (with a simple property assignment) or give the layer a border, etc. In fact, you may actually have a layer where the entire visual contents of the layer can be described simply by setting properties on the layer! In this case, your drawRect: function would be empty, which means that there is no reason for CoreAnimation to create a CoreGraphics context for the layer at all.
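A sketch of setting such properties on a backing layer (the image name is hypothetical; on macOS, CALayer.contents accepts an NSImage directly):

```swift
import AppKit

let view = NSView(frame: NSRect(x: 0, y: 0, width: 120, height: 80))
view.wantsLayer = true

if let layer = view.layer {
    // Properties AppKit doesn't manage are fair game:
    layer.contents = NSImage(named: "badge")  // hypothetical asset name
    layer.borderWidth = 2
    layer.borderColor = NSColor.controlAccentColor.cgColor
    layer.cornerRadius = 8
}
```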
CoreAnimation actually optimizes for this case by allowing you to implement the displayLayer: method on the layer’s delegate. If you do this, it will be called ::instead:: of drawLayer:inContext:. Note how displayLayer: doesn’t pass you a CoreGraphics context; instead, you are expected to simply set some properties on the layer which will populate its contents.
AppKit exposes this with the updateLayer method on NSView. You opt in to using this method by setting the wantsUpdateLayer property to true. If you do this, then AppKit will cause CoreAnimation to call the layer’s delegate’s displayLayer: method, which AppKit implements by calling the view’s updateLayer method. This is where you can set any properties on the layer you want (except for the ones AppKit manages). Note that, if you implement this, you should ::also:: implement drawRect: for graceful degradation if there isn’t enough memory to create a layer for your view.
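Putting both paths together in one (illustrative) view subclass:

```swift
import AppKit

class SolidView: NSView {
    // Tell AppKit to use updateLayer instead of draw(_:) when a layer exists.
    override var wantsUpdateLayer: Bool { true }

    override func updateLayer() {
        // Properties only; CoreAnimation never creates a CoreGraphics
        // context for this layer.
        layer?.backgroundColor = NSColor.systemIndigo.cgColor
        layer?.cornerRadius = 6
    }

    // Fallback path, for when no layer was created.
    override func draw(_ dirtyRect: NSRect) {
        NSColor.systemIndigo.setFill()
        bounds.fill()
    }
}
```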
Before CoreAnimation, animations in AppKit were implemented by an “animator” proxy object. You would set properties on this object as if it were the view, and the animator would take care of updating the real view over time. However, there was no hardware acceleration, so each time the animator changed something on the view, the view would be asked to redraw (with CoreGraphics). With CoreAnimation, we would like to draw with CoreGraphics as infrequently as possible. Specifically: once we have drawn into a layer, the backing store of that layer can be animated much faster than we can redraw the layer. Therefore, it’s valuable for AppKit not to tell our view to paint on each frame of an animation, and instead animate the layer. You can prescribe this behavior with the layerContentsRedrawPolicy property. Note that this degrades gracefully; this property is only consulted if you actually get a layer.
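Opting in to the cheap path is one assignment (the view here is a placeholder):

```swift
import AppKit

let view = NSView(frame: .zero)
view.wantsLayer = true
// Redraw with CoreGraphics only when explicitly invalidated; during
// animations, CoreAnimation moves/stretches the cached backing store.
view.layerContentsRedrawPolicy = .onSetNeedsDisplay
```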
Also, because AppKit owns particular properties of the backing layers, you shouldn’t use CoreAnimation to animate those, as that will cause the states to get out of sync. Instead, you should use the AppKit animator proxy object.
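A sketch of animating through the proxy so AppKit can keep view and layer state in sync (the view and geometry are made up):

```swift
import AppKit

let box = NSView(frame: NSRect(x: 0, y: 0, width: 50, height: 50))

// Route the change through the animator proxy rather than touching
// the AppKit-managed layer properties directly.
NSAnimationContext.runAnimationGroup { context in
    context.duration = 0.25
    box.animator().frame = NSRect(x: 150, y: 0, width: 50, height: 50)
}
```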