The model of OpenGL on iOS is much simpler than the one on macOS. In particular, the context creation routine on macOS predates the concept of OpenGL framebuffer objects, which is why it is structured the way it is. Back then, the model was straightforward: the OS gave you a buffer, and you drew stuff into it. If you wanted to render offscreen, you had to ask the OS to give you an offscreen buffer.
That all changed with framebuffer objects. Now, in OpenGL, you can create your own offscreen render targets, render into them, and, when you’re done, read from them (either as a texture or into host memory). This means there is a conceptual divide between the buffer the OS gives you when you create your context and the framebuffer objects you create in your own OpenGL code.
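To make that concrete, here is a minimal sketch (in Swift, using OpenGL ES) of building an offscreen render target entirely in client code; the size and formats are illustrative assumptions.

```swift
import OpenGLES

// A minimal sketch: an offscreen render target built entirely in client
// code, with no buffer from the OS involved. Size/format are illustrative.
let width: GLsizei = 256
let height: GLsizei = 256
var texture: GLuint = 0
var fbo: GLuint = 0

// The texture that will receive the rendering.
glGenTextures(1, &texture)
glBindTexture(GLenum(GL_TEXTURE_2D), texture)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
             GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)

// The framebuffer object, with the texture as its color attachment.
glGenFramebuffers(1, &fbo)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                       GLenum(GL_TEXTURE_2D), texture, 0)
assert(glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER)) == GLenum(GL_FRAMEBUFFER_COMPLETE))

// ... draw calls issued here land in the texture, not on the screen ...

// When you're done, read the pixels back into host memory.
var pixels = [UInt8](repeating: 0, count: Int(width) * Int(height) * 4)
glReadPixels(0, 0, width, height, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), &pixels)
```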
On iOS, the OpenGL infrastructure was created with framebuffer objects in mind. Instead of asking the OS to give you a buffer to render into, you instead ask the OS to assign a backing store to a renderbuffer (which is attached to a framebuffer object). Notably, you do this after the OpenGL context is created. This means that almost all of the creation parameters macOS requires are unnecessary on iOS, since most of them describe the structure of the buffer the OS gives you. Indeed, on iOS, the only thing you specify when you create a context is which version of OpenGL ES you want to use.
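For example, this sketch is roughly all there is to context creation on iOS (error handling elided):

```swift
import OpenGLES

// The only decision at creation time is which OpenGL ES version to use.
guard let context = EAGLContext(api: .openGLES2) else {
    fatalError("OpenGL ES 2.0 is not available on this device")
}
_ = EAGLContext.setCurrent(context)
```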
On iOS, the way you render directly to the screen is with Core Animation layers. There is a method on EAGLContext, renderbufferStorage:fromDrawable:, which connects an EAGLDrawable with a renderbuffer. Currently, CAEAGLLayer is the only class that implements EAGLDrawable, which means you have to draw into a layer in the Core Animation layer tree. (You can also draw into an offscreen IOSurface by wrapping a texture around it and using render-to-texture, as detailed in my previous post.)
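A sketch of that wiring, assuming `context` is a current EAGLContext and `eaglLayer` is the CAEAGLLayer you want to draw into:

```swift
import OpenGLES
import QuartzCore

var framebuffer: GLuint = 0
var colorRenderbuffer: GLuint = 0

// Ask the OS to back this renderbuffer with the layer's storage, in place
// of the usual glRenderbufferStorage() call.
glGenRenderbuffers(1, &colorRenderbuffer)
glBindRenderbuffer(GLenum(GL_RENDERBUFFER), colorRenderbuffer)
_ = context.renderbufferStorage(Int(GL_RENDERBUFFER), from: eaglLayer)

// Attach the renderbuffer to a framebuffer object, then render as usual.
glGenFramebuffers(1, &framebuffer)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
glFramebufferRenderbuffer(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                          GLenum(GL_RENDERBUFFER), colorRenderbuffer)

// ... draw a frame ...

// Present the finished frame to the layer.
glBindRenderbuffer(GLenum(GL_RENDERBUFFER), colorRenderbuffer)
_ = context.presentRenderbuffer(Int(GL_RENDERBUFFER))
```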
This model is quite different from the CAOpenGLLayer model used on macOS. Here, you affect the properties of the drawable by setting the drawableProperties dictionary on the EAGLDrawable (in practice, on the CAEAGLLayer) before you assign its storage to a renderbuffer.
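For instance, a sketch of configuring the drawable ahead of the renderbufferStorage call:

```swift
import OpenGLES
import QuartzCore

let eaglLayer = CAEAGLLayer()
eaglLayer.isOpaque = true
// Configure the drawable before calling renderbufferStorage(_:from:).
eaglLayer.drawableProperties = [
    kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8,
    kEAGLDrawablePropertyRetainedBacking: false, // don't preserve contents after presenting
]
```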
There is a higher-level abstraction: GLKView, which subclasses UIView. This class has a GLKViewDelegate, which provides the drawing operations, and it has properties which let you specify the attributes of the drawable. There’s also the associated GLKViewController, which subclasses UIViewController and has its own GLKViewControllerDelegate. This delegate has an update method, glkViewControllerUpdate:, which is called between frames. The idea is that you shouldn’t need to subclass GLKView or GLKViewController; instead, you implement the delegate protocols.
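A minimal sketch of that arrangement, with one illustrative object playing both delegate roles (both delegate references are weak, so something else must keep the renderer alive):

```swift
import GLKit

// An illustrative renderer that implements both delegate protocols rather
// than subclassing GLKView or GLKViewController.
class Renderer: NSObject, GLKViewDelegate, GLKViewControllerDelegate {
    // GLKViewDelegate: supplies the drawing for each frame.
    func glkView(_ view: GLKView, drawIn rect: CGRect) {
        glClearColor(0, 0, 0, 1)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
    }

    // GLKViewControllerDelegate: called between frames to advance state.
    func glkViewControllerUpdate(_ controller: GLKViewController) {
        // update animation / simulation state here
    }
}

// Wiring, e.g. at launch. The drawable's attributes are plain properties.
let context = EAGLContext(api: .openGLES2)!
let renderer = Renderer()
let glkView = GLKView(frame: UIScreen.main.bounds, context: context)
glkView.drawableColorFormat = .RGBA8888
glkView.drawableDepthFormat = .format24
glkView.delegate = renderer

let viewController = GLKViewController()
viewController.view = glkView
viewController.delegate = renderer
```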
Many iOS devices have Retina screens. The programmer has to opt in to high-density rendering by setting the contentsScale property of the CAEAGLLayer to whatever UIScreen.nativeScale is set to. If you don’t do this, your content will be rendered at 1x and stretched to fit, so it will appear blurry. It also means you have to take care to update any place where you interact with pixel data directly, such as glReadPixels(), since the pixel dimensions no longer match the point dimensions.
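A sketch of the opt-in, again assuming `eaglLayer` is the CAEAGLLayer backing your view:

```swift
import UIKit
import OpenGLES

// Opt in to the full-resolution backing store.
eaglLayer.contentsScale = UIScreen.main.nativeScale

// Pixel-space code must now account for the scale; e.g. the dimensions
// passed to glReadPixels() are the point size times contentsScale.
let pixelWidth  = GLsizei(eaglLayer.bounds.width  * eaglLayer.contentsScale)
let pixelHeight = GLsizei(eaglLayer.bounds.height * eaglLayer.contentsScale)
```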
iOS devices also support multiple monitors, via AirPlay. With AirPlay, an app can render content onto a remote display. However, the model for this is a little different than on macOS: instead of the user dragging a window to another monitor and the system telling the app about it, the app itself handles the move to the external monitor. The system posts a UIScreenDidConnectNotification / UIScreenDidDisconnectNotification when the user enables or disables AirPlay; afterward, you can see that the [UIScreen screens] array has multiple items in it. To show content on the external screen, you create a new UIWindow with the regular alloc / initWithFrame constructor, passing in the UIScreen’s bounds, assign that screen to the window’s screen property, and set the window’s rootViewController to whatever you want to show on the external monitor. Because the app performs the move itself, you have the freedom to query the properties of the remote screen (using UIScreen APIs such as UIScreen.nativeScale) and react accordingly. For example, if you have a Retina device but you are moving content to a 1x screen, you can detect this by querying the screen at the time you move the window to it.
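A sketch of that flow; the root view controller here is a stand-in for whatever you want to show remotely:

```swift
import UIKit

var externalWindow: UIWindow?

// Keep the tokens if you ever need to remove the observers.
let connectToken = NotificationCenter.default.addObserver(
    forName: UIScreen.didConnectNotification, object: nil, queue: .main
) { notification in
    guard let screen = notification.object as? UIScreen else { return }
    // UIScreen.screens now has multiple entries; inspect the new one
    // (e.g. screen.nativeScale) and react accordingly.
    let window = UIWindow(frame: screen.bounds)
    window.screen = screen
    window.rootViewController = UIViewController() // your external-content controller here
    window.isHidden = false
    externalWindow = window
}

let disconnectToken = NotificationCenter.default.addObserver(
    forName: UIScreen.didDisconnectNotification, object: nil, queue: .main
) { _ in
    externalWindow?.isHidden = true
    externalWindow = nil
}
```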
On macOS, an OpenGL context can have many renderers inside it, with only one active at any given time. iOS devices have only one GPU, which means there is only one renderer. You therefore never have to worry about a renderer switch happening out from under you, which makes the model much simpler.