
NV SDK 10 (1) Clipmaps

Clipmaps sample:

Abstract

Clipmaps are a feature, first implemented on SGI workstations, that allows mapping
extremely high resolution textures onto terrains. The original SGI implementation
required highly specialized, custom hardware. The advanced features of the
NVIDIA® GeForce® 8800 now permit the same algorithm on consumer hardware.


Although current APIs and the GeForce 8800 directly support textures with
dimensions up to 8192, that size can be insufficient for wide landscapes, for
example in flight simulators. Using a single texture for the whole landscape is
very attractive: the entire landscape can be designed at once and parameterized
simply. Big textures have "big" advantages over the traditional approach of
blending several smaller textures, because they can be as complex as you wish.
Once a designer has created the whole map, it can be used as is.


Clipmaps take advantage of the fact that, due to perspective projection, only
relatively small regions within the texture mipmap pyramid are accessed in any
given frame. We therefore only have to manage these "hot" regions and update them
in video memory as the viewer moves around. The DX10 solution is to store such
regions in a texture array; being able to index into it from the pixel shader
allows a straightforward implementation of the clipmap algorithm.

 

How Clipmaps Work
A clipmap is a partial representation of a mipmap pyramid that holds all the
information needed for texturing in any single frame. Which data from the source
texture can potentially be used is determined by the mipmap level-selection
strategy: in the best case, texturing maps texels to pixels roughly 1:1, so the
clip size of the mipmap levels can be derived from the current screen resolution.
The lowest levels of the mipmap pyramid always fit in video memory and can be
used statically. All other mip levels form the clipmap stack, which is updated
dynamically so that it holds the currently needed data in every frame (see
Figure 1). In the most common cases the contents of the stack are defined
entirely by its size and the viewer's position.
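
As a concrete illustration, the split between the static pyramid and the dynamic
stack follows from the source texture size and the chosen clip size. The sketch
below is not part of the SDK sample; the struct and function names are
hypothetical, and power-of-two square textures are assumed.

#include <cmath>

// Hypothetical helper: given the source texture size and the clip size, compute
// how many mip levels go into the dynamic clipmap stack (texture array layers)
// and how many remain in the static pyramid (a 2D texture with mips).
struct ClipmapLayout
{
    int stackDepth;    // levels larger than the clip size
    int pyramidLevels; // levels of clipSize x clipSize and smaller
};

ClipmapLayout ComputeClipmapLayout(int sourceSize, int clipSize)
{
    ClipmapLayout layout;
    // Levels that do not fit into a clipSize x clipSize region form the stack.
    layout.stackDepth = static_cast<int>(std::log2(double(sourceSize) / clipSize));
    // The remaining levels, from clipSize down to 1x1, fit in video memory.
    layout.pyramidLevels = static_cast<int>(std::log2(double(clipSize))) + 1;
    return layout;
}

// Example: an 8192^2 source with a 2048 clip size yields a 2-layer stack
// (the 8192 and 4096 levels) and a 12-level pyramid (2048 ... 1).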

The basic idea is to store the clipmap stack in a 2D texture array, a new
feature of DX10, while the remaining part of the mipmap pyramid is implemented
as a conventional 2D texture with mips. The stack is updated dynamically with
the copy/update-subresource methods. Since it is not always possible to hold all
the needed data in system memory, you will also need an additional mechanism to
stream the necessary data efficiently from disk.
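
The following is a minimal sketch of such a stack update using
ID3D10Device::UpdateSubresource. The helper function and its parameters are
hypothetical; the bookkeeping that decides which region became stale as the
viewer moved, and the splitting of a wrapped rectangle into up to four
sub-rectangles, is omitted here.

#include <d3d10.h>

// Copy a rectangular block of freshly decoded texels into one layer of the
// clipmap stack texture. Because the stack sampler uses wrap addressing
// (toroidal addressing), the destination rectangle is taken modulo the stack
// size by the caller before this function is invoked.
void UpdateStackRegion(ID3D10Device* device,
                       ID3D10Texture2D* stackTexture,
                       UINT layer,             // stack layer == clipmap level
                       UINT destX, UINT destY, // top-left corner, already wrapped
                       UINT width, UINT height,
                       const void* srcTexels,  // tightly packed RGBA8 data
                       UINT srcRowPitch)
{
    D3D10_BOX destBox;
    destBox.left   = destX;
    destBox.right  = destX + width;
    destBox.top    = destY;
    destBox.bottom = destY + height;
    destBox.front  = 0;
    destBox.back   = 1;

    // The stack texture has a single mip level per layer, so the subresource
    // index is simply the layer index.
    UINT subresource = D3D10CalcSubresource(0, layer, 1);

    device->UpdateSubresource(stackTexture, subresource, &destBox,
                              srcTexels, srcRowPitch, 0);
}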


A clipmap stack is stored in a 2D texture array. This array forms the dynamic
part of the clipmap and must contain up-to-date data for every stack level in
each frame. Since each original mip level gets its own array layer, create this
texture without mips. The remaining part of the image can be stored as a
conventional 2D texture.
Using the DX10 API, create these resources as follows (note that for the clipmap
stack texture, you specify the number of layers with the ArraySize member):

D3D10_TEXTURE2D_DESC texDesc;
ZeroMemory( &texDesc, sizeof(texDesc) );
texDesc.ArraySize = 1;
texDesc.Usage = D3D10_USAGE_DEFAULT;
texDesc.BindFlags = D3D10_BIND_SHADER_RESOURCE;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.Width = g_PyramidTextureWidth;
texDesc.Height = g_PyramidTextureHeight;
texDesc.MipLevels = g_SourceImageMipsNum - g_StackDepth;
texDesc.SampleDesc.Count = 1;
pd3dDevice->CreateTexture2D(&texDesc, NULL, &g_pPyramidTexture);
texDesc.ArraySize = g_StackDepth;
texDesc.Width = g_ClipmapStackSize;
texDesc.Height = g_ClipmapStackSize;
texDesc.MipLevels = 1;
pd3dDevice->CreateTexture2D(&texDesc, NULL, &g_pStackTexture);
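
To sample these resources from the pixel shader (as a Texture2D and a
Texture2DArray, respectively), each of them also needs a shader resource view.
A minimal sketch, assuming the textures created above; the view variable names
are hypothetical:

// Passing NULL for the view description lets D3D10 derive the view from the
// resource: a Texture2D view for the pyramid and a Texture2DArray view for the
// clipmap stack.
ID3D10ShaderResourceView* g_pPyramidTextureSRV = NULL;
ID3D10ShaderResourceView* g_pStackTextureSRV = NULL;
pd3dDevice->CreateShaderResourceView( g_pPyramidTexture, NULL, &g_pPyramidTextureSRV );
pd3dDevice->CreateShaderResourceView( g_pStackTexture, NULL, &g_pStackTextureSRV );

The views are then bound to the PyramidTexture and StackTexture effect variables
before rendering.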

 

Clipmap Texture Addressing
All the work is done in the pixel shader. First you need to determine the mip
level to fetch from. For this, use the ddx and ddy instructions to find how many
texels of the source texture the pixel quad covers.

float2 dx = ddx(input.texCoord * textureSize.x);
float2 dy = ddy(input.texCoord * textureSize.y);
float d = max( sqrt( dot( dx.x, dx.x ) + dot( dx.y, dx.y ) ) ,
sqrt( dot( dy.x, dy.x ) + dot( dy.y, dy.y ) ) );
Now you can calculate a suitable mip level as follows.
float mipLevel = log2( d );
Keep mipLevel as a float; its fractional part is later used to blend between two
adjacent levels, which gives trilinear filtering.
Clipmap texture addressing is straightforward: you only need to scale the input
texture coordinates according to the mip level. The scale factor is the source
image size divided by the clipmap stack size; 0.5 is then added because the
stack center, at coordinates (0.5, 0.5), corresponds to the corner (0, 0) of the
original image.
float2 clipTexCoord = input.texCoord / pow( 2, iMipLevel );
clipTexCoord.x = clipTexCoord.x * scaleFactor.x + 0.5f;
clipTexCoord.y = clipTexCoord.y * scaleFactor.y + 0.5f;
float4 color = StackTexture.Sample( stackSampler,
float3( clipTexCoord, iMipLevel ) );
For the stack sampler, specify the address mode as wrap to implement toroidal
addressing.
Table 1. Storage Efficiency*

Texture size     4096²           8192²           16384²
Full mipmap      85.3            341.3           5461.3
1024 clipmap     13.3 (16%)      17.3 (5%)       25.3 (<1%)
2048 clipmap     37.3 (44%)      53.3 (16%)      85.3 (1.6%)
4096 clipmap     85.3 (100%)     149.3 (44%)     213.3 (3.9%)

*Memory cost in MB for 32-bit texels.
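
The figures in Table 1 follow directly from the texel counts: a full mip chain
costs roughly 4/3 of the base level, while a clipmap costs one full pyramid of
the clip size plus one clip-sized layer per stack level. A short sketch of the
arithmetic (the helper names are hypothetical):

#include <cmath>

// Memory cost in MB of a full mipmapped size x size texture with 32-bit texels;
// the mip chain adds roughly one third on top of the base level.
double FullMipmapMB(double size)
{
    return size * size * 4.0 * (4.0 / 3.0) / (1024.0 * 1024.0);
}

// Memory cost in MB of a clipmap: a static clipSize^2 pyramid with mips plus
// one clipSize^2 stack layer for every level larger than clipSize.
double ClipmapMB(double sourceSize, double clipSize)
{
    double pyramid = FullMipmapMB(clipSize);
    double layers  = std::log2(sourceSize / clipSize);
    double stack   = layers * clipSize * clipSize * 4.0 / (1024.0 * 1024.0);
    return pyramid + stack;
}

// FullMipmapMB(8192)    ~= 341.3
// ClipmapMB(8192, 2048) ~= 53.3  (21.3 MB pyramid + two 16 MB stack layers)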

 

Texture Filtering: Anisotropic / Trilinear

The complete effect file below implements both trilinear and anisotropic
filtering of the clipmap (plus parallax-mapped variants and a colored-mips
debug view). The anisotropic path estimates the minor axis of the pixel
footprint in texel space and uses it to select the stack level.

//----------------------------------------------------------------------------------
// File: Clipmaps.fx
// Author: Evgeny Makarov
// Email: sdkfeedback@nvidia.com
//
// Copyright (c) 2007 NVIDIA Corporation. All rights reserved.
//
// TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THIS SOFTWARE IS PROVIDED
// *AS IS* AND NVIDIA AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, EITHER EXPRESS
// OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY
// AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL NVIDIA OR ITS SUPPLIERS
// BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT, OR CONSEQUENTIAL DAMAGES
// WHATSOEVER (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF BUSINESS PROFITS,
// BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION, OR ANY OTHER PECUNIARY LOSS)
// ARISING OUT OF THE USE OF OR INABILITY TO USE THIS SOFTWARE, EVEN IF NVIDIA HAS
// BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
//
//
//----------------------------------------------------------------------------------

Texture2D PyramidTexture;
Texture2D PyramidTextureHM;
Texture2DArray StackTexture;

#define MAX_ANISOTROPY 16
#define MIP_LEVELS_MAX 7

SamplerState samplerLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};

SamplerState samplerAnisotropic
{
Filter = ANISOTROPIC;
MaxAnisotropy = MAX_ANISOTROPY;
AddressU = Wrap;
AddressV = Wrap;
};

SamplerState samplerPoint
{
Filter = MIN_MAG_MIP_POINT;
AddressU = Wrap;
AddressV = Wrap;
};

SamplerState samplerStackLinear
{
Filter = MIN_MAG_LINEAR_MIP_POINT;
AddressU = Wrap;
AddressV = Wrap;
};

RasterizerState RStateMSAA
{
MultisampleEnable = TRUE;
};

struct VSIn
{
uint index : SV_VertexID;
};

struct PSIn
{
float4 position : SV_Position;
float2 texCoord : TEXCOORD0;
float3 viewVectorTangent : TEXCOORD1;
float3 lightVectorTangent : TEXCOORD2;
};

struct PSInQuad
{
float4 position : SV_Position;
float3 texCoord : TEXCOORD0;
};

struct PSOut
{
float4 color : SV_Target;
};

struct PSOutQuad
{
float4 color : SV_Target;
};

cbuffer cb0
{
row_major float4x4 g_ModelViewProj;
float3 g_EyePosition;
float3 g_LightPosition;
float3 g_WorldRight;
float3 g_WorldUp;
};

cbuffer cb1
{
int2 g_TextureSize; // Source texture size
float2 g_StackCenter; // Stack center position defined by normalized texture coordinates
uint g_StackDepth; // Number of layers in a stack
float2 g_ScaleFactor; // SourceImageSize / ClipmapStackSize
float3 g_MipColors[MIP_LEVELS_MAX];
int g_SphereMeridianSlices;
int g_SphereParallelSlices;
float g_ScreenAspectRatio;
}


//--------------------------------------------------------------------------------------
// Calculate local normal using height values from Texture2D
//--------------------------------------------------------------------------------------
float3 GetLocalNormal(Texture2D _texture, SamplerState _sampler, float2 _coordinates)
{
float3 localNormal;

localNormal.x = _texture.Sample( _sampler, _coordinates, int2( 1, 0) ).x;
localNormal.x -= _texture.Sample( _sampler, _coordinates, int2(-1, 0) ).x;
localNormal.y = _texture.Sample( _sampler, _coordinates, int2( 0, 1) ).x;
localNormal.y -= _texture.Sample( _sampler, _coordinates, int2( 0, -1) ).x;
localNormal.z = sqrt( 1.0 - localNormal.x * localNormal.x - localNormal.y * localNormal.y );

return localNormal;
}


//--------------------------------------------------------------------------------------
// Calculate local normal using height values from Texture2DArray
//--------------------------------------------------------------------------------------
float3 GetLocalNormal_Array(Texture2DArray _texture, SamplerState _sampler, float3 _coordinates)
{
float3 localNormal;

localNormal.x = _texture.Sample( _sampler, _coordinates, int2( 1, 0) ).w;
localNormal.x -= _texture.Sample( _sampler, _coordinates, int2(-1, 0) ).w;
localNormal.y = _texture.Sample( _sampler, _coordinates, int2( 0, 1) ).w;
localNormal.y -= _texture.Sample( _sampler, _coordinates, int2( 0, -1) ).w;
localNormal.xy *= 5.0 / ( _coordinates.z + 1.0 ); // Scale the normal vector to add relief
localNormal.z = sqrt( 1.0 - localNormal.x * localNormal.x - localNormal.y * localNormal.y );

return localNormal;
}


//--------------------------------------------------------------------------------------
// Calculate a minimum stack level to fetch from
//--------------------------------------------------------------------------------------
float GetMinimumStackLevel(float2 coordinates)
{
float2 distance;
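
// The distance to the stack center is measured with wrap-around (toroidal)
// coordinates; the factor 4.0 below restricts fetches to roughly the inner
// half of each stack layer, leaving a safety border around its edge.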

distance.x = abs( coordinates.x - g_StackCenter.x );
distance.x = min( distance.x, 1.0 - distance.x );

distance.y = abs( coordinates.y - g_StackCenter.y );
distance.y = min( distance.y, 1.0 - distance.y );

return max( log2( distance.x * g_ScaleFactor.x * 4.0 ), log2( distance.y * g_ScaleFactor.y * 4.0 ) );
}


//--------------------------------------------------------------------------------------
// Calculate vertex positions for procedural sphere mesh based on an input index buffer
//--------------------------------------------------------------------------------------
PSIn VSMain(VSIn input)
{
PSIn output;

float meridianPart = ( input.index % ( g_SphereMeridianSlices + 1 ) ) / float( g_SphereMeridianSlices );
float parallelPart = ( input.index / ( g_SphereMeridianSlices + 1 ) ) / float( g_SphereParallelSlices );

float angle1 = meridianPart * 3.14159265 * 2.0;
float angle2 = ( parallelPart - 0.5 ) * 3.14159265;

float cos_angle1 = cos( angle1 );
float sin_angle1 = sin( angle1 );
float cos_angle2 = cos( angle2 );
float sin_angle2 = sin( angle2 );

float3 VertexPosition;
VertexPosition.z = cos_angle1 * cos_angle2;
VertexPosition.x = sin_angle1 * cos_angle2;
VertexPosition.y = sin_angle2;

output.position = mul( float4( VertexPosition, 1.0 ), g_ModelViewProj );
output.texCoord = float2( 1.0 - meridianPart, 1.0 - parallelPart );

float3 tangent = float3( cos_angle1, 0.0, -sin_angle1 );
float3 binormal = float3( -sin_angle1 * sin_angle2, cos_angle2, -cos_angle1 * sin_angle2 );

float3 viewVector = normalize(g_EyePosition - VertexPosition);

output.viewVectorTangent.x = dot( viewVector, tangent );
output.viewVectorTangent.y = dot( viewVector, binormal);
output.viewVectorTangent.z = dot( viewVector, VertexPosition );

float3 lightVector = normalize( g_LightPosition );

output.lightVectorTangent.x = dot( lightVector, tangent );
output.lightVectorTangent.y = dot( lightVector, binormal);
output.lightVectorTangent.z = dot( lightVector, VertexPosition );

return output;
}


PSInQuad VSMainQuad(VSIn input)
{
PSInQuad output;

// We don't need to do any calculations here because everything
// is done in the geometry shader.
output.position = 0;
output.texCoord = 0;

return output;
}


[maxvertexcount(4)]
void GSMainQuad( point PSInQuad inputPoint[1], inout TriangleStream<PSInQuad> outputQuad, uint primitive : SV_PrimitiveID )
{
PSInQuad output;

output.position.z = 0.5;
output.position.w = 1.0;

output.texCoord.z = primitive;

float sizeY = 0.3;
float sizeX = sizeY * 1.2 / g_ScreenAspectRatio;

float offset = 0.7 - min( 1.2 / g_StackDepth, sizeY ) * primitive;

output.position.x = -0.9 - sizeX * 0.2;
output.position.y = offset;
output.texCoord.xy = float2( 0.0, 0.0 );
outputQuad.Append( output );

output.position.x = -0.9 + sizeX * 0.8;
output.position.y = offset + sizeY * 0.2;
output.texCoord.xy = float2( 1.0, 0.0 );
outputQuad.Append( output );

output.position.x = -0.9;
output.position.y = offset - sizeY - sizeY * 0.2;
output.texCoord.xy = float2( 0.0, 1.0 );
outputQuad.Append( output );

output.position.x = -0.9 + sizeX;
output.position.y = offset - sizeY;
output.texCoord.xy = float2( 1.0, 1.0 );
outputQuad.Append( output );

outputQuad.RestartStrip();
}


PSOut PS_Trilinear(PSIn input)
{
PSOut output;

// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float d = max( sqrt( dot( dx.x, dx.x ) + dot( dx.y, dx.y ) ) , sqrt( dot( dy.x, dy.x ) + dot( dy.y, dy.y ) ) );

// Calculate base mip level and fractional blending part for trilinear filtering.
float mipLevel = max( log2( d ), GetMinimumStackLevel( input.texCoord ) );
float blendGlobal = saturate(g_StackDepth - mipLevel);

float diffuse = saturate( input.lightVectorTangent.z );
diffuse = max( diffuse, 0.05 );

float4 color0 = PyramidTexture.Sample( samplerLinear, input.texCoord );

// Early out for cases where we don't need to fetch from the clipmap stack
if( blendGlobal == 0.0 )
{
output.color = color0 * diffuse;
}
else
{
// This fractional part defines the factor used for blending
// between two neighbouring stack layers
float blendLayers = modf(mipLevel, mipLevel);
blendLayers = saturate(blendLayers);

int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, g_StackDepth - 1 );
mipLevel = clamp( mipLevel, 0, g_StackDepth - 1 );

// Here we need to perform proper scaling for input texture coordinates.
// For each layer we multiply input coordinates by g_ScaleFactor / pow( 2, layer ).
// We add 0.5 to result, because our stack center with coordinates (0.5, 0.5)
// starts from corner with coordinates (0, 0) of the original image.
float2 clipTexCoord = input.texCoord / pow( 2, mipLevel );
clipTexCoord *= g_ScaleFactor;
float4 color1 = StackTexture.Sample( samplerStackLinear, float3( clipTexCoord + 0.5, mipLevel ) );

clipTexCoord = input.texCoord / pow( 2, nextMipLevel );
clipTexCoord *= g_ScaleFactor;
float4 color2 = StackTexture.Sample( samplerStackLinear, float3( clipTexCoord + 0.5, nextMipLevel ) );

output.color = lerp( color0, lerp( color1, color2, blendLayers ), blendGlobal ) * diffuse;
}

return output;
}


PSOut PS_Trilinear_Parallax(PSIn input)
{
PSOut output;

// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float d = max( sqrt( dot( dx.x, dx.x ) + dot( dx.y, dx.y ) ) , sqrt( dot( dy.x, dy.x ) + dot( dy.y, dy.y ) ) );

// Calculate base mip level and fractional blending part.
float mipLevel = max( log2( d ), GetMinimumStackLevel( input.texCoord ) );
float blendGlobal = saturate( g_StackDepth - mipLevel );

float2 viewVectorTangent = normalize( input.viewVectorTangent ).xy;
float2 scaledViewVector = viewVectorTangent / g_ScaleFactor;

float3 lightVector = normalize(input.lightVectorTangent);

float2 newCoordinates = input.texCoord - scaledViewVector * ( PyramidTextureHM.Sample( samplerLinear, input.texCoord ).x * 0.02 - 0.01 );
float3 normal = GetLocalNormal( PyramidTextureHM, samplerLinear, newCoordinates );
float diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );

float4 color0 = PyramidTexture.Sample( samplerLinear, newCoordinates ) * diffuse;

if( blendGlobal == 0.0 )
{
output.color = color0;
}
else
{
float blendLayers = modf( mipLevel, mipLevel );
blendLayers = saturate( blendLayers );

int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, g_StackDepth - 1 );
mipLevel = clamp( mipLevel, 0, g_StackDepth - 1 );

float scale = pow( 2, mipLevel );

float2 clipTexCoord = input.texCoord / scale;
clipTexCoord *= g_ScaleFactor;

float height = StackTexture.Sample( samplerStackLinear, float3(clipTexCoord + 0.5, mipLevel) ).w * 0.02 - 0.01;

newCoordinates = clipTexCoord - viewVectorTangent * height / scale + 0.5;
float4 color1 = StackTexture.Sample( samplerStackLinear, float3( newCoordinates, mipLevel ) );

normal = GetLocalNormal_Array( StackTexture, samplerStackLinear, float3( newCoordinates, mipLevel ) );
diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
color1 *= diffuse;

scale = pow( 2, nextMipLevel );

clipTexCoord = input.texCoord / scale;
clipTexCoord *= g_ScaleFactor;

height = StackTexture.Sample( samplerStackLinear, float3( clipTexCoord + 0.5, nextMipLevel ) ).w * 0.02 - 0.01;

newCoordinates = clipTexCoord - viewVectorTangent * height / scale + 0.5;
float4 color2 = StackTexture.Sample( samplerStackLinear, float3( newCoordinates, nextMipLevel ) );

normal = GetLocalNormal_Array( StackTexture, samplerStackLinear, float3( newCoordinates, nextMipLevel ) );
diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
color2 *= diffuse;

output.color = lerp( color0, lerp( color1, color2, blendLayers ), blendGlobal );
}

return output;
}


PSOut PS_Anisotropic(PSIn input)
{
PSOut output;

// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );

float squaredLengthX = dot( dx.x, dx.x ) + dot( dx.y, dx.y );
float squaredLengthY = dot( dy.x, dy.x ) + dot( dy.y, dy.y );

float det = abs(dx.x * dy.y - dx.y * dy.x);
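
// det is the area of the pixel-footprint parallelogram in texel space; the
// minor axis length derived from it below drives the mip selection, mirroring
// what hardware anisotropic filtering does.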

bool isMajorX = squaredLengthX > squaredLengthY;
float squaredLengthMajor = isMajorX ? squaredLengthX : squaredLengthY;
float lengthMajor = sqrt(squaredLengthMajor);
float normMajor = 1.0 / lengthMajor;

if( isMajorX )
squaredLengthMajor = squaredLengthX;
else
squaredLengthMajor = squaredLengthY;

lengthMajor = sqrt( squaredLengthMajor );

float ratioOfAnisotropy = squaredLengthMajor / det;
float lengthMinor = 0;

lengthMinor = ( ratioOfAnisotropy > MAX_ANISOTROPY ) ? lengthMajor / ratioOfAnisotropy : det / lengthMajor;
lengthMinor = max( lengthMinor, 1.0 );

// Calculate base mip level and fractional blending part.
float mipLevel = max( log2( lengthMinor ), GetMinimumStackLevel( input.texCoord ) );
float blendGlobal = saturate( g_StackDepth - mipLevel );

float diffuse = saturate( input.lightVectorTangent.z );
diffuse = max( diffuse, 0.05 );

float4 color0 = PyramidTexture.Sample( samplerAnisotropic, input.texCoord );

if( blendGlobal == 0.0 )
{
output.color = color0 * diffuse;
}
else
{
float blendLayers = modf( mipLevel, mipLevel );

int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, g_StackDepth - 1 );
mipLevel = clamp( mipLevel, 0, g_StackDepth - 1 );

float2 clipTexCoord = input.texCoord / pow( 2, mipLevel );
clipTexCoord *= g_ScaleFactor;
float4 color1 = StackTexture.Sample( samplerAnisotropic, float3( clipTexCoord + 0.5, mipLevel ) );

clipTexCoord = input.texCoord / pow( 2, nextMipLevel );
clipTexCoord *= g_ScaleFactor;
float4 color2 = StackTexture.Sample( samplerAnisotropic, float3(clipTexCoord + 0.5, nextMipLevel ) );

output.color = lerp( color0, lerp( color1, color2, blendLayers ), blendGlobal ) * diffuse;
}

return output;
}


PSOut PS_Anisotropic_Parallax(PSIn input)
{
PSOut output;

// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );

float squaredLengthX = dot( dx.x, dx.x ) + dot( dx.y, dx.y );
float squaredLengthY = dot( dy.x, dy.x ) + dot( dy.y, dy.y );

float det = abs( dx.x * dy.y - dx.y * dy.x );

bool isMajorX = squaredLengthX > squaredLengthY;
float squaredLengthMajor = isMajorX ? squaredLengthX : squaredLengthY;
float lengthMajor = sqrt( squaredLengthMajor );
float normMajor = 1.0 / lengthMajor;

if( isMajorX )
squaredLengthMajor = squaredLengthX;
else
squaredLengthMajor = squaredLengthY;

lengthMajor = sqrt( squaredLengthMajor );

float ratioOfAnisotropy = squaredLengthMajor / det;
float lengthMinor = 0;

lengthMinor = ( ratioOfAnisotropy > MAX_ANISOTROPY ) ? lengthMajor / ratioOfAnisotropy : det / lengthMajor;
lengthMinor = max( lengthMinor, 1.0 );

// Calculate base mip level and fractional blending part.
float mipLevel = max( log2( lengthMinor), GetMinimumStackLevel( input.texCoord ) );
float blendGlobal = saturate( g_StackDepth - mipLevel );

float2 viewVectorTangent = normalize( input.viewVectorTangent ).xy;
float2 scaledViewVector = viewVectorTangent / g_ScaleFactor;

float3 lightVector = normalize( input.lightVectorTangent );

float2 newCoordinates = input.texCoord - scaledViewVector * ( PyramidTextureHM.Sample( samplerLinear, input.texCoord ).x * 0.02 - 0.01 );
float3 normal = GetLocalNormal( PyramidTextureHM, samplerAnisotropic, newCoordinates );
float diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );

float4 color0 = PyramidTexture.Sample( samplerAnisotropic, newCoordinates ) * diffuse;

if( blendGlobal == 0.0 )
{
output.color = color0;
}
else
{
float blendLayers = modf( mipLevel, mipLevel );
blendLayers = saturate( blendLayers );

int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, g_StackDepth - 1 );
mipLevel = clamp( mipLevel, 0, g_StackDepth - 1 );

float scale = pow( 2, mipLevel );

float2 clipTexCoord = input.texCoord / scale;
clipTexCoord *= g_ScaleFactor;
clipTexCoord += 0.5;

float height = StackTexture.Sample( samplerAnisotropic, float3( clipTexCoord, mipLevel ) ).w * 0.02 - 0.01;

newCoordinates = clipTexCoord - viewVectorTangent * height / scale;
float4 color1 = StackTexture.Sample( samplerAnisotropic, float3( newCoordinates, mipLevel ) );

normal = GetLocalNormal_Array( StackTexture, samplerAnisotropic, float3( newCoordinates, mipLevel ) );
diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
color1 *= diffuse;

scale = pow( 2, nextMipLevel );

clipTexCoord = input.texCoord / scale;
clipTexCoord *= g_ScaleFactor;
clipTexCoord += 0.5f;

height = StackTexture.Sample( samplerAnisotropic, float3( clipTexCoord, nextMipLevel ) ).w * 0.02 - 0.01;

newCoordinates = clipTexCoord - viewVectorTangent * height / scale;
float4 color2 = StackTexture.Sample( samplerAnisotropic, float3( newCoordinates, nextMipLevel ) );

normal = GetLocalNormal_Array( StackTexture, samplerAnisotropic, float3( newCoordinates, nextMipLevel ) );
diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
color2 *= diffuse;

output.color = lerp( color0, lerp( color1, color2, blendLayers ), blendGlobal );
}

return output;
}


//--------------------------------------------------------------------------------------
// Calculate color values to show
//--------------------------------------------------------------------------------------
PSOut PS_Color(PSIn input)
{
PSOut output;

// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float d = max( sqrt( dot( dx.x, dx.x ) + dot( dx.y, dx.y ) ) , sqrt( dot( dy.x, dy.x ) + dot( dy.y, dy.y ) ) );

// Calculate base mip level and fractional blending part.
float mipLevel = log2( d );

float blendLayers = modf( mipLevel, mipLevel );

int mipBoundary = min( g_StackDepth, MIP_LEVELS_MAX ) - 1;

int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, mipBoundary );
mipLevel = clamp( mipLevel, 0, mipBoundary );

output.color.xyz = lerp( g_MipColors[mipLevel], g_MipColors[nextMipLevel], blendLayers );

return output;
}

PSOut PSQuad(PSInQuad input)
{
PSOut output;

float width = 0.995 - input.texCoord.x;
width = saturate( width * 50.0 );

output.color.xyz = lerp( g_MipColors[input.texCoord.z], StackTexture.Sample( samplerStackLinear, input.texCoord ), width);
output.color.w = 1.0;

return output;
}


//--------------------------------------------------------------------------------------
// Compiled shaders used in different techniques
//--------------------------------------------------------------------------------------

VertexShader vsCompiled = CompileShader( vs_4_0, VSMain() );
VertexShader vsCompiledQuad = CompileShader( vs_4_0, VSMainQuad() );

GeometryShader gsCompiledQuad = CompileShader( gs_4_0, GSMainQuad() );

PixelShader ps_Trilinear = CompileShader( ps_4_0, PS_Trilinear() );
PixelShader ps_Trilinear_Parallax = CompileShader( ps_4_0, PS_Trilinear_Parallax() );
PixelShader ps_Anisotropic = CompileShader( ps_4_0, PS_Anisotropic() );
PixelShader ps_Anisotropic_Parallax = CompileShader( ps_4_0, PS_Anisotropic_Parallax() );
PixelShader ps_Color = CompileShader( ps_4_0, PS_Color() );
PixelShader psCompiledQuad = CompileShader( ps_4_0, PSQuad() );

technique10 Trilinear
{
pass p0
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Trilinear );
SetRasterizerState(RStateMSAA);
}

pass p1
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Trilinear_Parallax );
SetRasterizerState(RStateMSAA);
}
}

technique10 Anisotropic
{
pass p0
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Anisotropic );
SetRasterizerState(RStateMSAA);
}

pass p1
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Anisotropic_Parallax );
SetRasterizerState(RStateMSAA);
}
}

technique10 ColoredMips
{
pass p0
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Color );
SetRasterizerState(RStateMSAA);
}
}

technique10 StackDrawPass
{
pass p0
{
SetVertexShader( vsCompiledQuad );
SetGeometryShader( gsCompiledQuad );
SetPixelShader( psCompiledQuad );
SetRasterizerState(RStateMSAA);
}
}

 

//----------------------------------------------------------------------------------
// File: JPEG_Preprocessor.fx
// Author: Evgeny Makarov
// Email: sdkfeedback@nvidia.com
//
// Copyright (c) 2007 NVIDIA Corporation. All rights reserved.
//
// TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THIS SOFTWARE IS PROVIDED
// *AS IS* AND NVIDIA AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, EITHER EXPRESS
// OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY
// AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL NVIDIA OR ITS SUPPLIERS
// BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT, OR CONSEQUENTIAL DAMAGES
// WHATSOEVER (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF BUSINESS PROFITS,
// BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION, OR ANY OTHER PECUNIARY LOSS)
// ARISING OUT OF THE USE OF OR INABILITY TO USE THIS SOFTWARE, EVEN IF NVIDIA HAS
// BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
//
//
//----------------------------------------------------------------------------------

Texture2D TextureDCT;
Texture2D<uint1> QuantTexture;
Texture2D RowTexture1;
Texture2D RowTexture2;
Texture2D ColumnTexture1;
Texture2D ColumnTexture2;
Texture2D TargetTexture;

Texture2D TextureY;
Texture2D TextureCb;
Texture2D TextureCr;
Texture2D TextureHeight;

SamplerState samplerPoint
{
Filter = MIN_MAG_MIP_POINT;
AddressU = Wrap;
AddressV = Wrap;
};

SamplerState samplerLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};

BlendState NoBlending
{
BlendEnable[0] = FALSE;
};

struct VSIn
{
uint index : SV_VertexID;
};

struct PSIn
{
float4 position : SV_Position;
float3 texCoord : TEXCOORD0;
};

struct PSOut
{
float4 color : SV_Target;
};

struct PSOutMRT
{
float4 color0 : SV_Target0;
float4 color1 : SV_Target1;
};

cbuffer cb0
{
float g_RowScale;
float g_ColScale;
};

PSIn VS_Quad(VSIn input)
{
PSIn output;

output.position = 0;
output.texCoord = 0;

return output;
}

[maxvertexcount(4)]
void GS_Quad( point PSIn inputPoint[1], inout TriangleStream<PSIn> outputQuad, uint primitive : SV_PrimitiveID )
{
PSIn output;

output.position.z = 0.5;
output.position.w = 1.0;

output.texCoord.z = primitive;

output.position.x = -1.0;
output.position.y = 1.0;
output.texCoord.xy = float2( 0.0, 0.0 );
outputQuad.Append( output );

output.position.x = 1.0;
output.position.y = 1.0;
output.texCoord.xy = float2( 1.0, 0.0 );
outputQuad.Append( output );

output.position.x = -1.0;
output.position.y = -1.0;
output.texCoord.xy = float2( 0.0, 1.0 );
outputQuad.Append( output );

output.position.x = 1.0;
output.position.y = -1.0;
output.texCoord.xy = float2( 1.0, 1.0 );
outputQuad.Append( output );

outputQuad.RestartStrip();
}

/////////////////////////////////////////////////////////////////////////////
// JPEG Decompression
// IDCT based on Independent JPEG Group code
/////////////////////////////////////////////////////////////////////////////
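
// The decompression runs as five full-screen passes (see technique
// JPEG_Decompression below): a 1D IDCT over rows with dequantization, an
// unpack of the packed row results, a 1D IDCT over columns, an unpack of the
// column results, and a final YCbCr-to-RGB conversion that also stores the
// height channel in alpha.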

PSOutMRT PS_IDCT_Rows( PSIn input )
{
PSOutMRT output;
float d[8];

// Read row elements
d[0] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( -4, 0 ) );
d[1] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( -3, 0 ) );
d[2] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( -2, 0 ) );
d[3] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( -1, 0 ) );
d[4] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( 0, 0 ) );
d[5] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( 1, 0 ) );
d[6] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( 2, 0 ) );
d[7] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( 3, 0 ) );

// Perform dequantization
d[0] *= QuantTexture.Sample( samplerPoint, float2( 0.0625, input.texCoord.y * g_ColScale ) );
d[1] *= QuantTexture.Sample( samplerPoint, float2( 0.1875, input.texCoord.y * g_ColScale ) );
d[2] *= QuantTexture.Sample( samplerPoint, float2( 0.3125, input.texCoord.y * g_ColScale ) );
d[3] *= QuantTexture.Sample( samplerPoint, float2( 0.4375, input.texCoord.y * g_ColScale ) );
d[4] *= QuantTexture.Sample( samplerPoint, float2( 0.5625, input.texCoord.y * g_ColScale ) );
d[5] *= QuantTexture.Sample( samplerPoint, float2( 0.6875, input.texCoord.y * g_ColScale ) );
d[6] *= QuantTexture.Sample( samplerPoint, float2( 0.8125, input.texCoord.y * g_ColScale ) );
d[7] *= QuantTexture.Sample( samplerPoint, float2( 0.9375, input.texCoord.y * g_ColScale ) );

float tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7;
float tmp10, tmp11, tmp12, tmp13;
float z5, z10, z11, z12, z13;

tmp0 = d[0];
tmp1 = d[2];
tmp2 = d[4];
tmp3 = d[6];

tmp10 = tmp0 + tmp2;
tmp11 = tmp0 - tmp2;

tmp13 = tmp1 + tmp3;
tmp12 = (tmp1 - tmp3) * 1.414213562 - tmp13;

tmp0 = tmp10 + tmp13;
tmp3 = tmp10 - tmp13;
tmp1 = tmp11 + tmp12;
tmp2 = tmp11 - tmp12;

tmp4 = d[1];
tmp5 = d[3];
tmp6 = d[5];
tmp7 = d[7];

z13 = tmp6 + tmp5;
z10 = tmp6 - tmp5;
z11 = tmp4 + tmp7;
z12 = tmp4 - tmp7;

tmp7 = z11 + z13;
tmp11 = (z11 - z13) * 1.414213562;

z5 = (z10 + z12) * 1.847759065;
tmp10 = 1.082392200 * z12 - z5;
tmp12 = -2.613125930 * z10 + z5;

tmp6 = tmp12 - tmp7;
tmp5 = tmp11 - tmp6;
tmp4 = tmp10 + tmp5;

output.color0.x = tmp0 + tmp7;
output.color1.w = tmp0 - tmp7;
output.color0.y = tmp1 + tmp6;
output.color1.z = tmp1 - tmp6;
output.color0.z = tmp2 + tmp5;
output.color1.y = tmp2 - tmp5;
output.color1.x = tmp3 + tmp4;
output.color0.w = tmp3 - tmp4;

return output;
}

PSOut PS_IDCT_Unpack_Rows( PSIn input )
{
PSOut output;

// Get eight values stored in 2 textures
float4 values1 = RowTexture1.Sample( samplerPoint, input.texCoord );
float4 values2 = RowTexture2.Sample( samplerPoint, input.texCoord );

// Calculate a single non-zero index to define an element to be used
int index = frac( input.texCoord.x * g_RowScale ) * 8.0;

float4 indexMask1 = ( index == float4( 0, 1, 2, 3 ) );
float4 indexMask2 = ( index == float4( 4, 5, 6, 7 ) );

output.color = dot( values1, indexMask1 ) + dot( values2, indexMask2 );

return output;
}

PSOutMRT PS_IDCT_Columns( PSIn input )
{
PSOutMRT output;
float d[8];

// Read column elements
d[0] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, -4 ) );
d[1] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, -3 ) );
d[2] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, -2 ) );
d[3] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, -1 ) );
d[4] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, 0 ) );
d[5] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, 1 ) );
d[6] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, 2 ) );
d[7] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, 3 ) );

float tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7;
float tmp10, tmp11, tmp12, tmp13;
float z5, z10, z11, z12, z13;

tmp0 = d[0];
tmp1 = d[2];
tmp2 = d[4];
tmp3 = d[6];

tmp10 = tmp0 + tmp2;
tmp11 = tmp0 - tmp2;

tmp13 = tmp1 + tmp3;
tmp12 = (tmp1 - tmp3) * 1.414213562 - tmp13;

tmp0 = tmp10 + tmp13;
tmp3 = tmp10 - tmp13;
tmp1 = tmp11 + tmp12;
tmp2 = tmp11 - tmp12;

tmp4 = d[1];
tmp5 = d[3];
tmp6 = d[5];
tmp7 = d[7];

z13 = tmp6 + tmp5;
z10 = tmp6 - tmp5;
z11 = tmp4 + tmp7;
z12 = tmp4 - tmp7;

tmp7 = z11 + z13;
tmp11 = (z11 - z13) * 1.414213562;

z5 = (z10 + z12) * 1.847759065;
tmp10 = 1.082392200 * z12 - z5;
tmp12 = -2.613125930 * z10 + z5;

tmp6 = tmp12 - tmp7;
tmp5 = tmp11 - tmp6;
tmp4 = tmp10 + tmp5;

output.color0.x = tmp0 + tmp7;
output.color1.w = tmp0 - tmp7;
output.color0.y = tmp1 + tmp6;
output.color1.z = tmp1 - tmp6;
output.color0.z = tmp2 + tmp5;
output.color1.y = tmp2 - tmp5;
output.color1.x = tmp3 + tmp4;
output.color0.w = tmp3 - tmp4;

return output;
}

PSOut PS_IDCT_Unpack_Columns( PSIn input )
{
PSOut output;

// Get eight values stored in 2 textures
float4 values1 = ColumnTexture1.Sample( samplerPoint, input.texCoord );
float4 values2 = ColumnTexture2.Sample( samplerPoint, input.texCoord );

// Calculate a single non-zero index to define an element to be used
int index = frac( input.texCoord.y * g_ColScale ) * 8.0;

float4 indexMask1 = ( index == float4( 0, 1, 2, 3 ) );
float4 indexMask2 = ( index == float4( 4, 5, 6, 7 ) );

output.color = clamp( ( dot( values1, indexMask1 ) + dot( values2, indexMask2 ) ) * 0.125 + 128.0, 0.0, 256.0 );

return output;
}

PSOut PS_IDCT_RenderToBuffer( PSIn input )
{
PSOut output;

float Y = TextureY.Sample( samplerPoint, input.texCoord );
float Cb = TextureCb.Sample( samplerPoint, input.texCoord );
float Cr = TextureCr.Sample( samplerPoint, input.texCoord );

// Convert YCbCr -> RGB
output.color.x = Y + 1.402 * ( Cr - 128.0 );
output.color.y = Y - 0.34414 * ( Cb - 128.0 ) - 0.71414 * ( Cr - 128.0 );
output.color.z = Y + 1.772 * ( Cb - 128.0 );
output.color.w = TextureHeight.Sample( samplerPoint, input.texCoord );

output.color.xyzw *= ( 1.0 / 256.0 );

return output;
}

//--------------------------------------------------------------------------------------
// Compiled shaders used in different techniques
//--------------------------------------------------------------------------------------

VertexShader VS_Quad_Compiled = CompileShader( vs_4_0, VS_Quad() );
GeometryShader GS_Quad_Compiled = CompileShader( gs_4_0, GS_Quad() );

technique10 JPEG_Decompression
{
pass p0
{
SetVertexShader( VS_Quad_Compiled );
SetGeometryShader( GS_Quad_Compiled );
SetPixelShader( CompileShader( ps_4_0, PS_IDCT_Rows() ) );
}
pass p1
{
SetVertexShader( VS_Quad_Compiled );
SetGeometryShader( GS_Quad_Compiled );
SetPixelShader( CompileShader( ps_4_0, PS_IDCT_Unpack_Rows() ) );
}
pass p2
{
SetVertexShader( VS_Quad_Compiled );
SetGeometryShader( GS_Quad_Compiled );
SetPixelShader( CompileShader( ps_4_0, PS_IDCT_Columns() ) );
}
pass p3
{
SetVertexShader( VS_Quad_Compiled );
SetGeometryShader( GS_Quad_Compiled );
SetPixelShader( CompileShader( ps_4_0, PS_IDCT_Unpack_Columns() ) );
}
pass p4
{
SetVertexShader( VS_Quad_Compiled );
SetGeometryShader( GS_Quad_Compiled );
SetPixelShader( CompileShader( ps_4_0, PS_IDCT_RenderToBuffer() ) );
}
}