VIOSO WarpBlend API

Overview

VIOSO offers a library for integration in image generators and rendering/media engines. Its main features are:

  • Warping: apply geometric correction and distortion compensation.

  • Blending & masking: apply soft-edge gradients and black-out masks.

  • Automatic view calculation: set the direction & field of view of each virtual camera based on the calibrated mesh.

  • Perspective correction for static/dynamic eyepoint: supports dynamic views and warping per frame, for VR, tracking, motion platforms, etc.

The API takes as input a .vwf export file and a texture buffer to sample from. If no texture buffer is given, it uses a copy of the current back buffer. The output is rendered to the currently set back buffer.

The API is available open source on Bitbucket:

https://bitbucket.org/VIOSO/vioso_api

The VIOSOWarpBlend binaries for Windows are included in the “bin” folder, so you do not need to compile the API yourself on Windows platforms.

There are examples for (nearly) all rendering APIs: DX9, DX11, DX12, OpenGL, Vulkan, etc.

VIOSO WarpBlend.ini Reference

Here you will find the complete description of the WarpBlend.ini parameters for implementing the VIOSO WarpBlend API: VIOSOWarpBlend.ini Reference

API Usage on Windows

Static binding

Place VIOSOWarpBlend.dll next to your executable or in a folder added to %PATH%, link against VIOSOWarpBlend.lib, and include the header in your (precompiled) header:

#include "VIOSOWarpBlend.h"

Dynamic binding

a) via the C++ wrapper class from VIOSOWarpBlend.hpp:

declaration:

#include "../../Include/VIOSOWarpBlend.hpp"
const char* s_configFile = "VIOSOWarpBlendGL.ini";
std::shared_ptr<VWB> pWarper;

initialization: (where channel is a string containing the channel name)

try {
    pWarper = std::make_shared<VWB>( "", nullptr, s_configFile, channel.c_str(), 1, "" );
}
catch( VWB_ERROR )
{
    return FALSE;
}
if( VWB_ERROR_NONE != pWarper->Init() )
    return FALSE;

pre-render:

float view[16], proj[16];
float eye[3] = { 0, 0, 0 };
float rot[3] = { 0, 0, 0 };

// Query the frustum for this channel. There are other getters
// that serve clip coordinates or angles instead.
pWarper->GetViewProj( eye, rot, view, proj );

render: render your scene into an FBO / off-screen render target attached to texUnwarped

post-render:

pWarper->Render( texUnwarped, VWB_STATEMASK_PIXEL_SHADER | VWB_STATEMASK_SHADER_RESOURCE );

b) via header:

in a header, declare the functions and types:

#define VIOSOWARPBLEND_DYNAMIC_DEFINE
#include "VIOSOWarpBlend.h"

in one file, at the top, to implement the actual functions/objects:

#define VIOSOWARPBLEND_DYNAMIC_IMPLEMENT
#include "VIOSOWarpBlend.h"

in module initialization, to load the function pointers from the library:

#define VIOSOWARPBLEND_DYNAMIC_INITIALIZE
#include "VIOSOWarpBlend.h"

c) Single file:

in one file, at the top, to declare and implement the functions/objects:

#define VIOSOWARPBLEND_DYNAMIC_DEFINE_IMPLEMENT
#include "VIOSOWarpBlend.h"

in module initialization, to load the function pointers from the library:

#define VIOSOWARPBLEND_DYNAMIC_INITIALIZE
#include "VIOSOWarpBlend.h"

Always make sure your platform headers are included first!
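For example, for the single-file variant on Windows with D3D11, the include order might look like this (a sketch; the exact platform headers depend on your renderer):

```
// Platform / renderer headers first ...
#include <windows.h>
#include <d3d11.h>
// ... then declare and implement the dynamic binding in this file:
#define VIOSOWARPBLEND_DYNAMIC_DEFINE_IMPLEMENT
#include "VIOSOWarpBlend.h"
```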

API Usage on Linux

There is a separate Linux branch, which is a work in progress. Check it out via the command

git clone -b linux_test https://bitbucket.org/VIOSO/VIOSO_api.git

(Be sure to have git lfs installed: sudo apt-get install git-lfs. If you get an error related to extracting vioso2d.zip or vioso3D.zip, you may have to force a git lfs checkout: git lfs pull.)

Build it as a shared library using cmake (install it via sudo apt-get install cmake if needed). In the root folder execute mkdir build && cd build && cmake .. && make, then make install to make it available system-wide.

Link it the usual way: g++ ... -L/usr/local/lib -lVIOSOWarpBlend (adjust -L/usr/local/lib to the place you installed it to).

In order to build the example project you’ll need GLFW and its dependencies. Use this command: sudo apt-get install libglfw3-dev libxcursor-dev libxi-dev libxinerama-dev freeglut3-dev

Implementation Remarks

The warper uses a pixel shader to deform the input to the currently set render target.

By using the same view and projection matrices as the rendering process, the provided image can be mapped using the 3D coordinates of the actual screen for every projector pixel.

The sequence is always the same:

Set up your 3D environment. For each window, call

VWB_Create( )

providing a DirectX device or, for DX12, an ID3D12CommandQueue. In case of OpenGL, set this to NULL, but make sure the window’s context is current and on the same thread.

In case you provided a path to some .ini-file, it is loaded and the warper’s attributes are set accordingly. You might now change the warper’s attributes or rely on .ini solely. Then call

VWB_init()

This loads all mappings to the GPU, computes life-time transformations, and compiles the shader code.

In your render loop, call

VWB_getViewProj() or VWB_getViewClip()

to obtain the frustum to render your scene with.

In case you only do 2D rendering (wallpaper), make sure you use a 2D mapping; then you can skip the above step.

After rendering is finished, and the image output is ready in a texture or the current back buffer, set the back buffer as render target (in D3D12, provide its GPU handle), then call

VWB_render()

to warp the scene to the screen. There is no multi-threading inside the VIOSOWarpBlend module; make sure to use the same thread, or make your GL context current before calling VWB_render.

This API can use all mappings. In case you export 3D, you can specify the view parameters:

dir=[pitch, yaw, roll]
fov=[left,top,right,bottom]   ;all positive, so 90°x85° would be [45,42.5,45,42.5]
screen=distanceToViewPlane

OR set

bAutoView=1

to let the API calculate these values.

These values

near=minimumDistanceToRender
far=maximumDistanceToRender

need to be set to sane values. They are not used for calculating the warped output. It makes sense to overwrite them with your own values after the warper has been created.

The eye point correction algorithm works implicitly. Imagine a rectangular “window” placed virtually next to the screen. It must be situated so that, from every possible (dynamic) eye point, the whole projection area of the regarded projector is seen through that window.

API Examples

1 - Simulator with moving platform

Image Generator:

  • Origin is viewer’s eye

  • OpenGL right-handed camera-like coordinate system: +X = right, +Y = up, +Z = back

  • Unit Millimeter

Model:

  • Panadome Screen

  • generated with VIOSO

    • R=3000mm, left: -90°, right 90°, lower -20°, upper 70°

    • Axes: +X right, +Y up, +Z back

  • Platform pivot is 750mm below mid-point of sphere

  • Eye is 600mm above and 350mm left of platform pivot

The simulator’s axes and scale are identical to VIOSO’s, thus the rotation/scale part of the matrix (the upper-left 3×3) is the identity.

The pivot (rotation center) is 750mm below the model origin, so a position input of (0,0,0) must become (0,-750,0); this is realized by setting t=(0,-750,0).

1      0      0      0
0      1      0      0
0      0      1      0
0   -750      0      1

base=[1,0,0,0;0,1,0,0;0,0,1,0;0,-750,0,1]

The eye is in IG coordinates, so set it to (-350,600,0):

eye=[-350,600,0]

Don’t forget to set

bTurnWithView=1

2 - Another simulator with moving platform

Image generator:

  • Origin is viewer’s eye

  • DirectX left-handed camera-like coordinate system: +X = right, +Y = up, +Z = front

  • Unit meter

Model:

The simulator’s coordinate system is Z-mirrored because of the other handedness, so the resulting Z axis must be negated. The scale factor to VIOSO is 1000.

The pivot (rotation center) is 750mm below the model origin, so a position input of (0,0,0) must become (0,-750,0); this is realized by setting t=(0,-750,0). This is always in model coordinates.

1000      0      0    0
0      1000      0    0
0         0  -1000    0
0      -750      0    1

base=[1000,0,0,0;0,1000,0,0;0,0,-1000,0;0,-750,0,1]

The eye is in IG coordinates, so set it to (-0.35,0.6,0):

eye=[-0.35,0.6,0]

Again don’t forget to set

bTurnWithView=1

as we want “straight ahead” to always be in the user’s view direction.

3 - Cave

Image Generator

  • Origin is center of floor, 30cm below mid-point of base circle of screen

  • DirectX left-handed camera-like coordinate system +X = right, +Y= up, +Z front

  • Unit inches

Model

  • Custom model

  • Unit Meter

  • Origin is mid-point of base circle of cylinder

  • R=4m h=3m

  • Axes: +X=right, +Y=forward, +Z=up (right handed)

Others:

  • Tracker is calibrated to model origin, so it yields model coordinates

  • Stereoscopic 3D

The base matrix must convert inches to meters (1″ = 0.0254m). Also, the axes are rotated: X stays the same, but model Y is the IG’s Z and model Z is the IG’s Y. This also flips the handedness. The IG origin is 30cm below the model origin.

0.0254       0       0    0
0            0  0.0254    0
0       0.0254       0    0
0            0    -0.3    1

base=[0.0254,0,0,0;0, 0, 0.0254,0;0, 0.0254,0,0;0,0,-0.3,1]

The tracker yields the actual head position and direction in inches. We must add the eye offset for the left and right eye; this is in IG coordinates, so we need to shift 1.25″ to each side. We set the eye vector to:

[channel1L]
eye=[-1.25,0,0]
[channel1R]
eye=[1.25,0,0]

bTurnWithView=0 ; keep the world fixed to the walls of the screen instead of to the viewer’s eye

4 - Video player

Image Generator

  • 2D Output

No need to set matrices or eye point. Render the wallpaper into a texture and call

VWB_Render()

for each channel with that texture. The warper picks the correct part of the content to fill each screen.

If you set

bAutoView=1

you need to render your content into a texture of the size and offset given by

Warper->optimalRect

This way you can use multiple clients to fill the whole projection area. optimalRect is a volatile value, so you can adjust it after VWB_init() with immediate effect in VWB_Render().
