Tuesday, September 1, 2015

GPipe is dead, long live GPipe!

I am proud to announce the release of the next major version of GPipe on Hackage!

In this post I'll explain the whats and whys and demonstrate the new API.

What is GPipe?

If you have never heard of GPipe before, please read on. If, however, you have already been using GPipe for the last six years and have just been waiting for the next version to come out, you might wanna skip ahead to the next section.

GPipe is OpenGl made functional and type safe for Haskell. And by that I mean really type safe, as in "if it compiles, it runs"! There are already a couple of bindings for OpenGl in Haskell, but when you use any of them you are still exposed to the inherently imperative state machine that is OpenGl, and there is a myriad of invariants you need to keep track of that will otherwise cause hard-to-debug runtime crashes. This is especially true with modern OpenGl (versions >= 3). GPipe is based on the OpenGl 3.3 core profile.

Some examples of things that are awkward or hard to do right in plain OpenGl but that GPipe makes safe and easy:
  • Setting up complete textures (e.g. defining all mip map levels with the correct sizes)
  • Setting up texture samplers for use in shaders
  • Setting up complete FBOs and attaching all outputs correctly from fragment shaders
  • Setting up VBOs and matching the types and indices in GLSL code
  • Aligning uniform buffers to match GLSL code
  • Handling buffers, which in plain OpenGl is about as safe as dealing with void* in C++, especially for interleaved VBOs
  • Selecting the correct format, internal format and type for transferring pixels to your texture
Since I want shaders to be type safe with respect to the rest of the pipeline (e.g. using the same format of vertex attributes as provided in the vertex array), I decided to provide a DSL for the shaders as well. There is no need to learn GLSL at all to use GPipe! (You might wanna learn linear though...)
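To give a taste of that DSL, here is a minimal sketch (the function diffuse and its name are mine for illustration, not part of GPipe): shader values such as S V Float are ordinary Haskell values with Num and Floating instances, so a lighting term is written, and type checked, like any other Haskell code.

module ShaderSketch where

import Graphics.GPipe

-- A Lambert diffuse term written directly in the shader DSL. S V Float is a
-- vertex shader value, dot comes from linear and maxB from the Boolean
-- package that GPipe builds on. GPipe compiles this to GLSL behind the scenes.
diffuse :: V3 (S V Float) -> V3 (S V Float) -> S V Float
diffuse normal lightDir = maxB 0 (normal `dot` lightDir)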

Where can I get it?

It's right here!

Why a new major version?

If you have tried GPipe 1, chances are that you ran into some memory leaks, and that is the major flaw of the GPipe 1 design: you had little control over space and time. "Space" as in vertex buffers and textures were converted from Haskell lists and cached automatically; to animate a texture, for instance, you had to create a new one each frame! And "time" as in shaders were generated and compiled at times that weren't really obvious to the user. In fact, shader source code was generated each frame, and the shader cache used that source to pick a compiled shader, or compile a new one. While this cache used a trie to be as efficient as possible, it just didn't scale well.

GPipe 1 also modeled the graphics pipeline as a pure function, something that only really pays off if you are rendering an image to store offline. If you want to present the rendered result, you are still bound to the IO monad. This perceived purity required a lot of unsafePerformIO under the hood, and in order to prevent different GL frames from interfering there was a lot of explicit forcing of evaluation that bloated the code.

So when I set off to rewrite GPipe I decided on a couple of design principles:
  • It should be safe
  • It should be fast
  • It should be familiar to anyone who knows modern OpenGl
In order to achieve that, I had to kill some of my darlings. For example, the almost magical toGPU function, which turned a normal Haskell value into a GL uniform value to be used in shaders, had to go; it was the main reason the shader was regenerated every frame. Still, I think the end result is just as modular and easier to reason about.
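For comparison, this is roughly the pattern that replaces toGPU in GPipe 2. A sketch, not complete code: the fragment below assumes it runs inside a ContextT block like the one in the full example further down, and timeValue is a made-up host value.

-- Create the uniform buffer once, outside the render loop...
timeUniform :: Buffer os (Uniform (B Float)) <- newBuffer 1
-- ...write to it only when the value actually changes...
writeBuffer timeUniform 0 [timeValue]
-- ...and read it inside the separately compiled shader:
--   t <- getUniform (const (timeUniform, 0))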

What's new in GPipe 2?

  • Based on OpenGl 3.3 core profile
  • Mutable textures
  • Mutable type safe buffers
  • Interleaved or non-interleaved vertex arrays (see the sketch after this list)
  • Instanced rendering
  • Control structures (such as loops) in shaders
  • Render to texture (FBOs) with support for multiple render targets
  • Integral texture formats (even for FBOs)
  • Much wider texture and sampler support (e.g. shadow samplers)
  • Explicit control of shader compilation
  • Explicit control of context creation, with support for shared contexts
  • No built-in GLUT window manager. This functionality is instead provided by external packages.
  • Vec library is replaced by linear
  • No unsafePerformIO!
GPipe 2 is not backward compatible with GPipe 1. That also means that auxiliary packages like GPipe-Collada and GPipe-TextureLoad don't work with this new version yet.
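To illustrate the buffer bullets above: an interleaved, type safe vertex buffer is simply a Buffer whose element type is a tuple, so a layout mismatch with the shader becomes a compile time type error instead of garbage on screen. A sketch, assuming it runs inside a ContextT block such as the one in the example below:

interleaved :: Buffer os (B2 Float, B3 Float) <- newBuffer 3
writeBuffer interleaved 0 [ (V2 0 0, V3 1 0 0)
                          , (V2 1 0, V3 0 1 0)
                          , (V2 0 1, V3 0 0 1) ]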

As of now, there exists one window manager package for GPipe 2, created by plredmond: GPipe-GLFW. Anyone who dares can create their own window manager based on GLUT or whatever else they want.

Enough talk! Show me some code and pretty pictures!

Right.

Edit 2015-09-02: Added missing language pragmas and fixed indentation that got skewed when entering the world of HTML.

{-# LANGUAGE ScopedTypeVariables, PackageImports, FlexibleContexts, TypeFamilies #-}  
module Main where  
   
import Control.Applicative  
import Control.Monad  
import "transformers" Control.Monad.IO.Class  
   
import Graphics.GPipe  
import qualified "GPipe-GLFW" Graphics.GPipe.Context.GLFW as GLFW  
import qualified "JuicyPixels" Codec.Picture as Juicy  
import qualified "JuicyPixels" Codec.Picture.Types as Juicy  
import "linear" Linear  
   
main =   
  runContextT GLFW.newContext (ContextFormatColorDepth SRGB8 Depth16) $ do  
    -- Create vertex data buffers  
    positions :: Buffer os (B2 Float) <- newBuffer 4  
    normals   :: Buffer os (B3 Float) <- newBuffer 6  
    tangents  :: Buffer os (B3 Float) <- newBuffer 6  
    writeBuffer positions 0 [V2 1 1, V2 1 (-1), V2 (-1) 1, V2 (-1) (-1)]        
    writeBuffer normals 0 [V3 1 0 0, V3 (-1) 0 0, V3 0 1 0, V3 0 (-1) 0, V3 0 0 1, V3 0 0 (-1)]  
    writeBuffer tangents 0 [V3 0 1 0, V3 0 (-1) 0, V3 0 0 1, V3 0 0 (-1), V3 (-1) 0 0, V3 1 0 0]  
  
    -- Make a Render action that returns a PrimitiveArray for the cube   
    let makePrimitives = do   
          pArr <- newVertexArray positions  
          nArr <- newVertexArray normals  
          tArr <- newVertexArray tangents  
          let sideInstances = zipVertices (,) nArr tArr          
          return $ toPrimitiveArrayInstanced TriangleStrip (,) pArr sideInstances   
      
    -- Load image into texture  
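    -- (Note: this partial pattern match will fail at runtime if image.jpg is missing or isn't a YCbCr8 JPEG)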
    Right (Juicy.ImageYCbCr8 image) <- liftIO $ Juicy.readImage "image.jpg"  
    let size = V2 (Juicy.imageWidth image) (Juicy.imageHeight image)
    tex <- newTexture2D SRGB8 size maxBound -- JPG converts to SRGB  
    writeTexture2D tex 0 0 size $ Juicy.pixelFold getJuicyPixel [] image   
    generateTexture2DMipmap tex  
      
    -- Create a buffer for the uniform values        
    uniform :: Buffer os (Uniform (V4 (B4 Float), V3 (B3 Float))) <- newBuffer 1  
    
    -- Create the shader  
    shader <- compileShader $ do  
      sides <- fmap makeSide <$> toPrimitiveStream primitives  
      (modelViewProj, normMat) <- getUniform (const (uniform, 0))  
      let filterMode = SamplerFilter Linear Linear Linear (Just 4)  
          edgeMode = (pure ClampToEdge, undefined) -- border color is never used with ClampToEdge
          projectedSides = proj modelViewProj normMat <$> sides            
      samp <- newSampler2D (const (tex, filterMode, edgeMode))  
                       
      fragNormalsUV <- rasterize rasterOptions projectedSides          
      let litFrags = light samp <$> fragNormalsUV  
          litFragsWithDepth = withRasterizedInfo   
              (\a x -> (a, getZ $ rasterizedFragCoord x)) litFrags  
          colorOption = ContextColorOption NoBlending (pure True)  
          depthOption = DepthOption Less True                            
    
      drawContextColorDepth (const (colorOption, depthOption)) litFragsWithDepth  
       
    -- Run the loop  
    loop shader makePrimitives uniform 0  
    
loop shader makePrimitives uniform angle = do  
  -- Write this frame's uniform value
  size@(V2 w h) <- getContextBuffersSize  
  let modelRot = fromQuaternion (axisAngle (V3 1 0.5 0.3) angle)  
      modelMat = mkTransformationMat modelRot (pure 0)  
      projMat = perspective (pi/3) (fromIntegral w / fromIntegral h) 1 100   
      viewMat = mkTransformationMat identity (- V3 0 0 5)  
      viewProjMat = projMat !*! viewMat !*! modelMat  
      normMat = modelRot  
  writeBuffer uniform 0 [(viewProjMat, normMat)]  
   
  -- Render the frame and present the results  
  render $ do  
    clearContextColor 0 -- Black  
    clearContextDepth 1 -- Far plane  
    prims <- makePrimitives  
    shader $ ShaderEnvironment prims (FrontAndBack, ViewPort 0 size, DepthRange 0 1)  
  swapContextBuffers  
  
  closeRequested <- GLFW.windowShouldClose  
  unless closeRequested $  
    loop shader makePrimitives uniform ((angle + 0.005) `mod''` (2*pi))  
  
getJuicyPixel xs _x _y pix =  
  let Juicy.PixelRGB8 r g b = Juicy.convertPixel pix in V3 r g b : xs   
  
getZ (V4 _ _ z _) = z -- Some day I'll learn to use lenses instead...  
  
data ShaderEnvironment = ShaderEnvironment   
  { primitives :: PrimitiveArray Triangles (B2 Float, (B3 Float, B3 Float))  
  , rasterOptions :: (Side, ViewPort, DepthRange)  
  }          
   
-- Project the side's coordinates using the instance's normal and tangent
makeSide (p@(V2 x y), (normal, tangent)) =   
  (V3 x y 1 *! V3 tangent bitangent normal, normal, uv)    
  where bitangent = cross normal tangent  
        uv = (p + 1) / 2  
   
-- Project the cube's positions and normals with the ModelViewProjection matrix
proj modelViewProj normMat (V3 px py pz, normal, uv) =   
  (modelViewProj !* V4 px py pz 1, (fmap Flat $ normMat !* normal, uv))   
  
-- Set color from sampler and apply directional light  
light samp (normal, uv) =   
  sample2D samp SampleAuto Nothing Nothing uv * pure (normal `dot` V3 0 0 1)     


(It is actually spinning)

You need an image called "image.jpg" in the same folder as your compiled program.

Besides linear and transformers, this code uses GPipe-GLFW for window management, and JuicyPixels for loading images.

I won't go into the details of the code today, but will instead cover all of it and more in a tutorial that I will publish as a series of blog posts here on this site. I trust you are adventurous enough to make some sense of the code on your own in the meantime, maybe with a little help from the haddocks. I expect you to find bugs, and when you do, please report them at https://github.com/tobbebex/GPipe-Core/issues.

What's next?

In order to reach feature completeness I had to leave a couple of OpenGl 3.3 features out of GPipe 2.0, but now it's time to bring them in! I'd first like the community's feedback on which features you need. Some suggestions of features currently not in GPipe 2:
  • Geometry shaders
  • Transform feedback
  • Multisample framebuffers and textures
  • Compressed textures
  • Half and double formats
  • Shader array types
  • Buffer textures
  • Occlusion queries
  • Packed host and buffer formats
  • OpenGl 4.5 features...
Let me know what you want at https://github.com/tobbebex/GPipe-Core/issues, or by email to tobias_bexelius snabelA hotmail.com, where "snabelA" is Swedish for "at".

8 comments:

  1. This is kinda tangential to this, but what's your opinion of Vulkan? It looks like it could be very powerful for this type-safe GPU programming, especially the replacement for GLSL being a bytecode language.

    1. I look forward to it!

      With Vulkan, OpenGl will stop doing a lot of bookkeeping and safeguarding and be closer to the actual hardware. The learning curve for using OpenGl directly will be much steeper than before, which will increase the need for an engine or middleware like GPipe. Also, since the type safe and functional interface of GPipe makes many invariant violations impossible to express, many of the runtime checks the drivers perform will no longer be needed. This will hopefully make it possible to create a GPipe that has much smaller CPU overhead, on par with or better than the old raw OpenGl bindings. So for the end user it will be a clear win!

      For me as a developer, it will surely be a challenge though. A fun one, but a challenge nonetheless. The OpenGl drivers from GPU vendors like nVidia and AMD have tons of optimizations that you will instead have to do yourself. But again, I expect the types to help me avoid many of the cases that today's drivers need to optimize for at runtime.

  2. Great to see the resurrection of an awesome library. Btw how does this compare to luminance?

    https://hackage.haskell.org/package/luminance

    1. The biggest difference is that luminance uses GLSL shaders, so it doesn't provide type safety at shader boundaries (you will, for example, reference uniforms by their string names). In GPipe, shaders are written in Haskell and their usage is type checked at compile time.

      Furthermore, luminance isn't as flexible when declaring vertex array layouts (most notably, it doesn't support instanced rendering).

  3. Looks like an awesome library, can't wait to play with it!

  4. Too many dependencies on linear ...

  5. I just thought I should let you know that linear defines lenses for each vector component (_x, _y, _z). And they're not that difficult to define anyhow.
