Saturday, June 7, 2014

Design of Mesa Round 2!

The last time I wrote about Mesa was over a year ago. Since then, I have started afresh at learning how Mesa works. When I was last writing, my graphics card didn’t have a Mesa driver that could run on my operating system, which made learning about it difficult. I recently bought a machine with a G41 chipset [1], which includes an integrated GPU and supports Intel Core 2 processors. This GPU has an open-source Mesa driver, so after recompiling Mesa with debugging symbols, I was ready to go. This time, my focus is on how Mesa interacts with hardware.

The first thing to realize is that there are actually four players here. The most obvious player is Mesa [2], the project which actually implements the OpenGL API. Mesa doesn’t interact with the graphics card directly, however; a low-level library called libdrm [3] provides the interface to the hardware. This library isn’t much more than a thin layer around system calls (though in a couple of places it does a little more bookkeeping; more on that later). It is important to note that most of libdrm is platform-specific, though there are a few non-platform-specific bits. The third player is the kernel itself, which receives the system calls libdrm makes and does something with them. The fourth player is the X11 server, which doesn’t interact much but is involved in some of the setup steps.

You can actually run OpenGL commands two ways:

  • GLX [4] is an X11 extension which serializes OpenGL commands and forwards them through the X display protocol to the X server, which then executes them. As you can imagine, the extra serialization, IPC, and deserialization overhead make this path suboptimal. However, when the X11 server is running on a different machine than the X11 client, this is the only option.
  • DRI [5] is a concept that allows X11 clients to access the display hardware directly, without going through the X11 server. This avoids the aforementioned overhead, but it only works when the server and client are running on the same machine (which is almost always the case).

I am going to be completely ignoring GLX and will focus entirely on DRI (version 2).

There are actually three different versions of DRI. Mesa will try each one, in turn, until it finds one that works. It does this by first asking the X11 server whether it supports the extension’s official protocol name (via the standard QueryExtension mechanism). If so, Mesa issues a DRI2Connect [6] request (in DRI2Connect()), which returns the name of the DRI driver to use (i965 in my case) and the file path of the device file representing the GPU. The X11 server can reply with these things because its video driver is DRI-aware, and the DRI requests are fulfilled by that driver. Mesa goes through these steps for each X11 screen (because each screen might be connected to a different graphics card).

Mesa then dlopen()s the driver’s .so (driOpenDriver()) and looks up a couple of symbols with well-known names. These symbols include function pointers for creating DRIScreen and DRIContext objects (which, in turn, contain function pointers for all of the GL calls). Mesa saves these function pointers in a table so that API calls can be routed to the correct place.

The kernel represents the graphics card as a device inside /dev/ (with a path like /dev/dri/card0). Commands sent to the graphics card are implemented in terms of ioctl() calls on a file descriptor for that device file. Issuing these ioctl() calls is largely the job of libdrm. For example, there is a DRM call named drmGetVersion() which simply ends up calling ioctl(fd, DRM_IOCTL_VERSION, buffer), where DRM_IOCTL_VERSION is just a #define’d constant that matches what the kernel is looking for. The kernel will both read and write the contents of the buffer supplied to the ioctl, which is how arguments are passed. Most of these #define’d constants are platform-specific (e.g. DRM_I915_GETPARAM).

The last piece I want to discuss today is the integration between DRM and the X11 server. This is actually fairly simple. The DRI2 X11 extension includes a DRI2GetBuffers request, which returns an array of information about the buffers the X11 server has allowed DRM to render to. It is an array because you may have, for example, a double-buffered context, in which case DRM needs information about both the front buffer and the back buffer. Among this information is a handle for each buffer. The Mesa i965 driver then calls the drm_intel_bo_gem_create_from_name() function in libdrm (which uses the DRM_IOCTL_GEM_OPEN ioctl) to create a buffer object (bo) from the handle, which the driver can then render into. Mesa is free to populate that buffer object however it wants; all the X11 server has to do is composite the region of memory Mesa is rendering to. This reiterates the fact that the X11 server’s driver needs to be DRI-aware.


[1] http://www.intel.com/content/www/us/en/chipsets/mainstream-chipsets/g41-express-chipset.html
[2] http://www.mesa3d.org
[3] http://cgit.freedesktop.org/mesa/drm/
[4] http://dri.freedesktop.org/wiki/GLX/
[5] http://dri.freedesktop.org/wiki/
[6] http://www.x.org/releases/X11R7.7/doc/dri2proto/dri2proto.txt

1 comment:

  1. This is a terrific series and I am looking forward to future articles. Information on the internet about topics like Mesa internals or DRI is close to zero. As far as I know, this series of articles is the only one that tackles them. So, please continue..

    note: as far as X server internals go, there is the article series by Christos Karayiannis..
    http://www.tutorialized.com/tutorial/X-WIndow-System-Internals
