
AlexWIN32 August 20, 2017 at 01:36 pm

Planetary landscape

  • API,
  • C++
  • Game development
  • Tutorial

It's hard to argue that landscape is an integral part of most outdoor computer games. The traditional method of building the relief of the surface surrounding the player is the following: we take a mesh that represents a plane, and for each vertex of this mesh we apply a displacement along the plane normal by a value specific to that vertex. In simple terms, we have a single-channel 256 by 256 pixel texture and a plane mesh. For each vertex we take a value from the texture based on the vertex's coordinates on the plane. Then we simply shift the vertex along the plane normal by the resulting value (Fig. 1).

Fig. 1 height map + plane = landscape
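To make the idea concrete, here is a minimal CPU-side sketch of that traditional approach (the names heightMap, maxHeight and the row-by-row grid layout are assumptions made for this example, not code from the article):

#include <cstdint>
#include <vector>

struct Vertex { float x, y, z; };

// Displace the vertices of a flat grid lying in the XZ plane (normal = +Y)
// by the values of a single-channel height map.
void DisplaceByHeightMap(std::vector<Vertex> &verts,
                         const std::vector<uint8_t> &heightMap,
                         uint32_t mapSize, uint32_t gridSize, float maxHeight)
{
    for(uint32_t row = 0; row < gridSize; row++)
        for(uint32_t col = 0; col < gridSize; col++){
            // texel corresponding to the vertex position on the plane
            uint32_t tx = col * (mapSize - 1) / (gridSize - 1);
            uint32_t ty = row * (mapSize - 1) / (gridSize - 1);

            float h = heightMap[ty * mapSize + tx] / 255.0f;

            // shift along the plane normal (here the Y axis)
            verts[row * gridSize + col].y = h * maxHeight;
        }
}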

1. Sector

Obviously, it is not wise to build a landscape for the entire sphere at once - most of it will not be visible. Therefore, we need a certain minimal unit of space - a primitive area from which the relief of the visible part of the sphere will be assembled. I will call it a sector. How can we get it? Look at Fig. 2a. The green cell is our sector. Next, we construct six grids, each of which represents a face of a cube (Fig. 2b). Now let's normalize the coordinates of the vertices that form the grids (Fig. 2c).


Fig.2

As a result, we got a cube projected onto a sphere, where the sector is an area on one of its faces. Why does this work? Consider an arbitrary point of the grid as a vector from the origin. What is vector normalization? It is the transformation of the given vector into a vector of the same direction but unit length. The process is as follows: first we find the length of the vector in the Euclidean metric according to the Pythagorean theorem:

|v| = sqrt(x^2 + y^2 + z^2)

Then we divide each of the vector's components by this value:

vn = (x / |v|, y / |v|, z / |v|)

Now let's ask ourselves, what is a sphere? A sphere is the set of points equidistant from a given point. The equation of a sphere looks like this:

(x - x0)^2 + (y - y0)^2 + (z - z0)^2 = R^2

Where x0, y0, z0 are the coordinates of the center of the sphere, and R is its radius. In our case, the center of the sphere is the origin, and the radius is equal to one. Let's substitute the known values and take the square root of both sides of the equation. It turns out the following:

sqrt(x^2 + y^2 + z^2) = 1

Literally the last transformation tells us the following: “In order to belong to the sphere, the length of the vector must be equal to one.” This is what we achieved through normalization.

What if the sphere has an arbitrary center and radius? You can find a point that belongs to it using the following equation:

pS = C + pNorm * R

Where pS is a point on the sphere, C is the center of the sphere, pNorm is the previously normalized vector and R is the radius of the sphere. In simple terms, what happens here is “we move from the center of the sphere towards a point on the grid at a distance R.” Since each vector has unit length, in the end all points are equidistant from the center of the sphere by the distance of its radius, which makes the equation of the sphere true.
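A minimal sketch of this projection in C++, assuming a simple Vec3 struct rather than the article's own math types:

#include <cmath>

struct Vec3 { float x, y, z; };

// Project a point lying on a cube face onto a sphere with center C and radius R:
// pS = C + normalize(p) * R
Vec3 ProjectOntoSphere(const Vec3 &p, const Vec3 &C, float R)
{
    float len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    Vec3 pNorm = {p.x / len, p.y / len, p.z / len};

    return {C.x + pNorm.x * R, C.y + pNorm.y * R, C.z + pNorm.z * R};
}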

2. Management

We need to get a group of sectors that are potentially visible from the viewpoint. But how to do that? Suppose we have a sphere with a center at some point. We also have a sector, which is located on the sphere, and a point P, located in space near the sphere. Now let's construct two vectors - one directed from the center of the sphere to the center of the sector, the other - from the center of the sphere to the viewing point. Look at Fig. 3 - the sector can be visible only if the absolute value of the angle between these vectors is less than 90 degrees.


Fig.3 a - angle less than 90 - the sector is potentially visible. b - angle greater than 90 - sector not visible

How do we get this angle? To do this, you need to use the scalar (dot) product of vectors. For the three-dimensional case it is calculated as follows:

a · b = a.x * b.x + a.y * b.y + a.z * b.z

The scalar product has the distributive property:

a · (b + c) = a · b + a · c

Earlier we defined the equation for the length of a vector - now we can say that the length of a vector is equal to the square root of the dot product of the vector with itself. Or, the other way around: the dot product of a vector with itself is equal to the square of its length:

a · a = |a|^2

Now let's look at the law of cosines. One of its two formulations looks like this (Fig. 4):

c^2 = a^2 + b^2 - 2 * a * b * cos(alpha)


Fig.4 law of cosines

If we take a and b to be the lengths of our vectors, then the angle alpha is what we are looking for. But how do we get the value of c? Look: if we subtract vector a from vector b, we get a vector directed from a to b, and since a vector is characterized only by its direction and length, we can graphically place its beginning at the end of vector a. From this we can say that c is equal to the length of the vector b - a. So we have:

|b - a|^2 = |a|^2 + |b|^2 - 2 * |a| * |b| * cos(alpha)

Let's express the squares of the lengths as dot products:

(b - a) · (b - a) = a · a + b · b - 2 * |a| * |b| * cos(alpha)

Let's open the brackets using the distributive property:

b · b - 2 * (a · b) + a · a = a · a + b · b - 2 * |a| * |b| * cos(alpha)

Let's cancel the common terms:

-2 * (a · b) = -2 * |a| * |b| * cos(alpha)

Finally, dividing both sides of the equation by minus two, we get:

a · b = |a| * |b| * cos(alpha)

This is another property of the scalar product. In our case, we need to normalize the vectors so that their lengths are equal to one. We don't have to calculate the angle - the cosine value is enough. If it is less than zero, then we can safely say that this sector does not interest us
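In code the whole check boils down to a single dot product of two normalized vectors. A sketch (again with an assumed Vec3 helper, not the article's types):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Normalize(const Vec3 &v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// The sector may be visible only if the cosine of the angle between
// (center -> sector center) and (center -> viewer) is non-negative.
bool SectorPotentiallyVisible(const Vec3 &sphereCenter, const Vec3 &sectorCenter, const Vec3 &viewPos)
{
    Vec3 toSector = Normalize({sectorCenter.x - sphereCenter.x,
                               sectorCenter.y - sphereCenter.y,
                               sectorCenter.z - sphereCenter.z});

    Vec3 toViewer = Normalize({viewPos.x - sphereCenter.x,
                               viewPos.y - sphereCenter.y,
                               viewPos.z - sphereCenter.z});

    float cosAngle = toSector.x * toViewer.x + toSector.y * toViewer.y + toSector.z * toViewer.z;

    return cosAngle >= 0.0f;
}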

3. Grid

It's time to think about how to draw the primitives. As I said earlier, the sector is the main building block in our scheme, so for each potentially visible sector we will draw a grid whose primitives will form the landscape. Each of its cells can be displayed using two triangles. Because adjacent cells share edges, the values of most triangle vertices are repeated across two or more cells. To avoid duplicating data in the vertex buffer, we fill an index buffer. If indices are used, the graphics pipeline uses them to determine which vertices in the vertex buffer form each primitive (Fig. 5). The topology I chose is the triangle list (D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST).


Fig.5 Visual display of indices and primitives
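For reference, here is a sketch of how such a triangle-list index buffer can be filled for a grid of MSize by MSize cells (the winding order is an assumption and may have to be flipped to match your rasterizer state):

#include <cstdint>
#include <vector>

// Two triangles per cell; the vertices are assumed to be laid out row by row,
// (MSize + 1) vertices per row.
std::vector<uint32_t> BuildGridIndices(uint32_t MSize)
{
    std::vector<uint32_t> indices;
    indices.reserve(MSize * MSize * 6);

    uint32_t rowLen = MSize + 1;

    for(uint32_t y = 0; y < MSize; y++)
        for(uint32_t x = 0; x < MSize; x++){
            uint32_t tl = y * rowLen + x;   // top left
            uint32_t tr = tl + 1;           // top right
            uint32_t bl = tl + rowLen;      // bottom left
            uint32_t br = bl + 1;           // bottom right

            // first triangle of the cell
            indices.push_back(tl); indices.push_back(tr); indices.push_back(bl);
            // second triangle of the cell
            indices.push_back(tr); indices.push_back(br); indices.push_back(bl);
        }

    return indices;
}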

Creating a separate vertex buffer for each sector is too expensive. It is much more efficient to use a single buffer with coordinates in grid space, i.e. x is a column and y is a row. But how can we get a point on the sphere from them? A sector is a square area that starts at a certain point S. All sectors have the same edge length - let's call it SLen. The grid covers the entire area of the sector and has the same number of rows and columns, so to find the length of a cell edge we can construct the following equation:

CLen * MSize = SLen

Where CLen is the length of the cell edge and MSize is the number of rows or columns of the grid. Divide both parts by MSize and get CLen:

CLen = SLen / MSize
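The shared vertex buffer then only has to store grid coordinates, something like this (a sketch; I use a float2 per vertex here for brevity, while the article itself packs the grid coordinates into a 3D vector, as mentioned in section 8):

#include <cstdint>
#include <vector>

struct GridVertex { float x, y; }; // column and row in grid space

// One vertex per grid node: (MSize + 1) x (MSize + 1) nodes for MSize x MSize cells.
std::vector<GridVertex> BuildGridVertices(uint32_t MSize)
{
    std::vector<GridVertex> verts;
    verts.reserve((MSize + 1) * (MSize + 1));

    for(uint32_t y = 0; y <= MSize; y++)
        for(uint32_t x = 0; x <= MSize; x++)
            verts.push_back({(float)x, (float)y});

    return verts;
}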


Fig.6 Visual display of the formation of a point on the grid

To get a point on the sphere, we first compute the point on the cube face, P = S + V1 * CLen * x + V2 * CLen * y (where V1 and V2 are the direction vectors of the face, and x and y are the grid coordinates), and then use the equation derived earlier: pS = C + normalize(P) * R.

4. Height

Everything we have achieved to this point bears little resemblance to landscape. It's time to add what will make it so - the height difference. Let's imagine that we have a sphere of unit radius with a center at the origin, as well as a set of points (P0, P1, P2... PN) that are located on this sphere. Each of these points can be represented as a unit vector from the origin. Now imagine that we have a set of values, each of which is the length of a specific vector (Fig. 7).

I will store these values ​​in a two-dimensional texture. We need to find the connection between the texture pixel coordinates and the point vector on the sphere. Let's get started.

In addition to the Cartesian coordinates, a point on a sphere can also be described using a spherical coordinate system. In this case, its coordinates consist of three elements: the azimuth angle, the polar angle and the shortest distance from the origin to the point. The azimuth angle is the angle between the X axis and the projection of the ray from the origin to the point onto the XZ plane. It can take values from zero to 360 degrees. The polar angle is the angle between the Y axis and the ray from the origin to the point. It is also called the zenith angle. It takes values from zero to 180 degrees. (see Fig. 8)


Fig.8 Spherical coordinates

To convert from Cartesian to spherical coordinates I use the following equations (I assume the Y axis is up):

d = sqrt(x^2 + y^2 + z^2)
a = acos(y / d)
b = atan(z / x)

Where d is the distance to the point, a is the polar angle and b is the azimuth angle. The parameter d can also be described as "the length of the vector from the origin to the point" (as can be seen from the equation). If we use normalized coordinates, we can avoid the division when finding the polar angle. Actually, why do we need these angles? Dividing each of them by its maximum range, we get coefficients from zero to one and use them to sample the texture in the shader. When obtaining the coefficient for the azimuth angle, it is necessary to take into account the quarter in which the angle is located. "But the value of the expression z / x is not defined when x is equal to zero," you say. Moreover, I will add that when z is equal to zero, atan gives zero regardless of the sign of x.

Let's add special cases for these values. We have the normalized coordinates (normal) - let's add several conditions: if the X value of the normal is zero and the Z value is greater than zero, the coefficient is 0.25; if X is zero and Z is less than zero, it is 0.75; if Z is zero and X is less than zero, the coefficient is 0.5. All this can easily be verified on a circle.

But what do we do if Z is zero and X is greater than zero - after all, in this case both 0 and 1 would be correct? Let's imagine that we have chosen 1, and take a sector with a minimum azimuth angle of 0 and a maximum of 90 degrees. Now let's look at the first three vertices in the first row of the grid that displays this sector. For the first vertex the condition is met and we set the texture coordinate X to 1. Obviously, for the next two vertices this condition will not be met - their angles are in the first quarter - and as a result we get a set like (1.0, 0.05, 0.1). But for a sector with angles from 270 to 360 degrees everything will be correct for the last three vertices in the same row - the condition fires for the last vertex, and we get the set (0.9, 0.95, 1.0). If we choose zero instead, we get the sets (0.0, 0.05, 0.1) and (0.9, 0.95, 0.0) - either way this leads to quite noticeable surface distortions.

So let's do the following. Let's take the center of the sector and normalize it, thereby moving it onto the sphere. Now let's calculate the dot product of the normalized center and the vector (0, 0, 1). Formally speaking, this vector is the normal of the XY plane, and by calculating its dot product with the normalized sector center we can tell which side of the plane the center is on. If it is less than zero, the sector is behind the plane and we need the value 1. If the dot product is greater than zero, the sector is in front of the plane and the boundary value will be 0. (see Fig. 9)


Fig.9 The problem of choosing between 0 and 1 for texture coordinates

Here is the code for getting the texture coordinates from the normalized coordinates of a point. Please note - due to floating-point errors we cannot check the normal values for equality to zero; instead we must compare their absolute values with some threshold value (for example 0.001).

//norm - normalized coordinates of the point for which we obtain texture coordinates
//offset - normalized coordinates of the center of the sector to which norm belongs
//zeroTreshold - threshold value (0.001)
float2 GetTexCoords(float3 norm, float3 offset)
{
    float tX = 0.0f, tY = 0.0f;

    bool normXIsZero = abs(norm.x) < zeroTreshold;
    bool normZIsZero = abs(norm.z) < zeroTreshold;

    if(normXIsZero || normZIsZero){

        if(normXIsZero && norm.z > 0.0f)
            tX = 0.25f;
        else if(norm.x < 0.0f && normZIsZero)
            tX = 0.5f;
        else if(normXIsZero && norm.z < 0.0f)
            tX = 0.75f;
        else if(norm.x > 0.0f && normZIsZero){
            if(dot(float3(0.0f, 0.0f, 1.0f), offset) < 0.0f)
                tX = 1.0f;
            else
                tX = 0.0f;
        }

    }else{

        tX = atan(norm.z / norm.x);

        if(norm.x < 0.0f && norm.z > 0.0f)
            tX += 3.141592;
        else if(norm.x < 0.0f && norm.z < 0.0f)
            tX += 3.141592;
        else if(norm.x > 0.0f && norm.z < 0.0f)
            tX = 3.141592 * 2.0f + tX;

        tX = tX / (3.141592 * 2.0f);
    }

    tY = acos(norm.y) / 3.141592;

    return float2(tX, tY);
}

I will give an intermediate version of the vertex shader

//startPos - beginning of the cube face
//vec1, vec2 - direction vectors of the cube face
//gridStep - cell size
//sideSize - sector edge length
//GetTexCoords() - converts normalized coordinates to texture coordinates
VOut ProcessVertex(VIn input)
{
    float3 planePos = startPos + vec1 * input.netPos.x * gridStep.x +
                                 vec2 * input.netPos.y * gridStep.y;

    float3 sphPos = normalize(planePos);
    float3 normOffset = normalize(startPos + (vec1 + vec2) * sideSize * 0.5f);

    float2 tc = GetTexCoords(sphPos, normOffset);

    float height = mainHeightTex.SampleLevel(mainHeightTexSampler, tc, 0).x;

    float3 posL = sphPos * (sphereRadius + height);

    VOut output;
    output.posH = mul(float4(posL, 1.0f), worldViewProj);
    output.texCoords = tc;

    return output;
}

5. Lighting

To light the surface we use the cosine of the angle between the surface normal and the vector to the light source - it is equal to one when the two vectors are collinear and point in the same direction. The same is true for the cosine value -1, only in this case the vectors point in opposite directions. It turns out that the closer the normal vector and the vector to the light source are to collinearity, the higher the illumination of the surface to which the normal belongs. It is also assumed that a surface cannot be illuminated if its normal points away from the source - which is why I only use positive cosine values.

I'm using a directional (parallel) light source, so its position can be ignored. The only thing to keep in mind is that we use a vector to the light source. That is, if the direction of the rays is (1.0, -1.0, 0), we need to use the vector (-1.0, 1.0, 0). The only thing still missing is the normal vector. Calculating the normal to a plane is simple - we need to take the cross product of two vectors that describe it. It is important to remember that the cross product is anticommutative - you need to take into account the order of the factors. In our case, we can get the normal to a triangle, knowing the coordinates of its vertices in grid space, as follows (please note that I do not handle the boundary cases for p.x and p.y):

float3 p1 = GetPosOnSphere(p);
float3 p2 = GetPosOnSphere(float2(p.x + 1, p.y));
float3 p3 = GetPosOnSphere(float2(p.x, p.y + 1));

float3 v1 = p2 - p1;
float3 v2 = p3 - p1;

float3 n = normalize(cross(v1, v2));
But that is not all. Most of the mesh vertices belong to four planes at once. To get an acceptable result, you need to calculate the average normal as follows:

Na = normalize(n0 + n1 + n2 + n3)
Implementing this method on a GPU is quite expensive - we need two stages to calculate normals and average them. In addition, the efficiency leaves much to be desired. Based on this, I chose another method - to use a normal map. (Fig. 10)


Fig. 10 Normal map

The principle of working with it is the same as with the height map - we transform the spherical coordinates of the mesh vertex into texture coordinates and sample. But we won't be able to use this data directly - after all, we are working with a sphere, and each vertex has its own normal that needs to be taken into account. Therefore, we will treat the normal map data as coordinates in the TBN basis. What is a basis? Here's an example for you. Imagine that you are an astronaut sitting on a beacon somewhere in space. You receive a message from the MCC: "You need to move from the beacon 1 meter to the left, 2 meters up and 3 meters forward." How can this be expressed mathematically? (1, 0, 0) * 1 + (0, 1, 0) * 2 + (0, 0, 1) * 3 = (1, 2, 3). In matrix form this equation can be expressed as follows:

Now imagine that you are also sitting on a beacon, only now they write to you from the control center: “we sent you direction vectors there - you must move 1 meter along the first vector, 2 meters along the second and 3 meters along the third.” The equation for the new coordinates will be:

Written out in expanded form, it looks like this:

Or in matrix form:

So, a matrix with vectors V1, V2 and V3 is a basis, and the vector (1,2,3) is the coordinates in the space of this basis.
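A tiny numeric sketch of that statement (plain arrays and row vectors, no particular math library):

// out = coords * basis: each component of 'coords' scales the corresponding basis vector.
void TransformByBasis(const float coords[3], const float basis[3][3], float out[3])
{
    for(int i = 0; i < 3; i++)
        out[i] = coords[0] * basis[0][i] +
                 coords[1] * basis[1][i] +
                 coords[2] * basis[2][i];
}

// With the identity basis {(1,0,0), (0,1,0), (0,0,1)} and coords (1, 2, 3)
// the result is simply (1, 2, 3) - the first message from the MCC.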

Let's imagine now that you have a set of vectors (basis M) and you know where you are relative to the beacon (point P). You need to find your coordinates in the space of this basis - how far you need to move along these vectors to end up in the same place. Let's denote the required coordinates as X:

X * M = P

If P, M and X were numbers, we would simply divide both sides of the equation by M, but alas... Let's go the other way - according to the inverse matrix property:

M * M^(-1) = I

Where I is the identity matrix. In our case it looks like this

What does this give us? Try multiplying this matrix by X and you will get:

X * I = X

It is also necessary to mention that matrix multiplication is associative:

(A * B) * C = A * (B * C)

We can quite legitimately consider a vector as a 3 by 1 matrix

Considering all of the above, we can conclude that in order to get X alone on one side of the equation, we need to multiply both sides by the inverse of M in the correct order:

X * M * M^(-1) = P * M^(-1)
X * I = P * M^(-1)
X = P * M^(-1)

We will need this result later.

Now let's get back to our problem. I will use an orthonormal basis - this means that V1, V2 and V3 are orthogonal to each other (form an angle of 90 degrees) and have unit length. V1 will be the tangent vector, V2 the bitangent vector, and V3 the normal. In the transposed form traditional for DirectX, the matrix looks like this:

| Tx Ty Tz |
| Bx By Bz |
| Nx Ny Nz |

Where T is the tangent vector, B is the bitangent vector and N is the normal. Let's find them. The normal is the easiest of all - it is simply the normalized coordinates of the point. The bitangent vector is equal to the cross product of the normal and the tangent vector. The trickiest one is the tangent vector. It is equal to the direction of the tangent to a circle at the point. Let's look at this in detail. First, let's find the coordinates of a point on the unit circle in the XZ plane for some angle a:

p = (cos(a), 0, sin(a))

The direction of the tangent to the circle at this point can be found in two ways. The vector to a point on the circle and the tangent vector are orthogonal - therefore, since the sin and cos functions are periodic, we can simply add pi/2 to the angle a and get the desired direction. According to the pi/2 shift property:

cos(a + pi/2) = -sin(a)
sin(a + pi/2) = cos(a)

We got the following vector:

(-sin(a), 0, cos(a))

We can also use differentiation - see Appendix 3 for more details. So, in Figure 11 you can see a sphere for which a basis has been built for each vertex. Blue vectors denote normals, red - tangent vectors, green - bitangent vectors.


Fig.11 Sphere with TBN bases at each vertex. Red - tangent vectors, green - bitangent vectors, blue vectors - normals
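Putting the three vectors together on the CPU side could look like this (a sketch with an assumed Vec3 helper; the article builds the same basis directly in the shader later on):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Normalize(const Vec3 &v)
{
    float l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / l, v.y / l, v.z / l};
}

static Vec3 Cross(const Vec3 &a, const Vec3 &b)
{
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// Build the TBN basis for a point on the unit sphere.
// texCoordX is the azimuth coefficient in [0, 1] obtained earlier.
void BuildTBN(const Vec3 &spherePoint, float texCoordX, Vec3 &T, Vec3 &B, Vec3 &N)
{
    N = Normalize(spherePoint);

    float a = texCoordX * 2.0f * 3.141592f;   // azimuth angle
    T = {-std::sin(a), 0.0f, std::cos(a)};    // tangent to the circle in the XZ plane

    B = Normalize(Cross(N, T));               // bitangent
}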

We've sorted out the basis - now let's get a normal map. To do this, we will use the Sobel filter. The Sobel filter calculates the brightness gradient of the image at each point (roughly speaking, a vector of brightness change). The principle of the filter is that a certain matrix of values, called the kernel, is applied to each pixel and its neighbors within the dimensions of this matrix. Suppose we process pixel P with kernel K. If it is not on the edge of the image, it has eight neighbors - top left, top, top right, and so on. Let's call them tl, t, tr, l, r, bl, b, br. Applying the kernel K to this pixel looks as follows:

Pn = tl * K(0, 0) + t * K(0,1) + tr * K(0,2) +
          l * K(1, 0) + P * K(1,1) + r * K(1,2) +
          bl * K(2, 0) + b * K(2,1) + br * K(2,2)

This process is called "convolution". The Sobel filter uses two kernels to calculate the horizontal and vertical gradients. Let us denote them Kx and Ky:

Kx:
-1  0  1
-2  0  2
-1  0  1

Ky:
-1 -2 -1
 0  0  0
 1  2  1

The groundwork is done - we can start implementing it. First we need to calculate the brightness of a pixel. I use the conversion from the RGB color model to the YUV model for the PAL system:

Y = 0.30 * R + 0.59 * G + 0.11 * B

But since our image is initially in grayscale, this step can be skipped. Now we need to "convolve" the original image with the Kx and Ky kernels. This gives us the X and Y components of the gradient. The magnitude of this vector can also be very useful - we won't use it here, but images containing normalized gradient magnitudes have their own useful applications. By normalization I mean the following equation:

Vn = (V - Vmin) / (Vmax - Vmin)

Where V is the value that we normalize, Vmin and Vmax are the range of these values. In our case, the minimum and maximum values ​​are tracked during the generation process. Here is an example implementation of a Sobel filter:

float SobelFilter::GetGrayscaleData(const Point2 &Coords)
{
    Point2 coords;
    coords.x = Math::Saturate(Coords.x, RangeI(0, image.size.width - 1));
    coords.y = Math::Saturate(Coords.y, RangeI(0, image.size.height - 1));

    int32_t offset = (coords.y * image.size.width + coords.x) * image.pixelSize;
    const uint8_t *pixel = &image.pixels[offset];

    return (image.pixelFormat == PXL_FMT_R8) ? pixel[0] :
           (0.30f * pixel[0] + //R
            0.59f * pixel[1] + //G
            0.11f * pixel[2]); //B
}

void SobelFilter::Process()
{
    RangeF dirXVr, dirYVr, magNormVr;

    for(int32_t y = 0; y < image.size.height; y++)
        for(int32_t x = 0; x < image.size.width; x++){

            float tl = GetGrayscaleData({x - 1, y - 1});
            float t  = GetGrayscaleData({x    , y - 1});
            float tr = GetGrayscaleData({x + 1, y - 1});
            float l  = GetGrayscaleData({x - 1, y    });
            float r  = GetGrayscaleData({x + 1, y    });
            float bl = GetGrayscaleData({x - 1, y + 1});
            float b  = GetGrayscaleData({x    , y + 1});
            float br = GetGrayscaleData({x + 1, y + 1});

            float dirX = -1.0f * tl + 0.0f + 1.0f * tr +
                         -2.0f * l  + 0.0f + 2.0f * r +
                         -1.0f * bl + 0.0f + 1.0f * br;

            float dirY = -1.0f * tl + -2.0f * t + -1.0f * tr +
                          0.0f      +  0.0f     +  0.0f +
                          1.0f * bl +  2.0f * b +  1.0f * br;

            float magNorm = sqrtf(dirX * dirX + dirY * dirY);

            int32_t ind = y * image.size.width + x;

            dirXData[ind] = dirX;
            dirYData[ind] = dirY;
            magNData[ind] = magNorm;

            dirXVr.Update(dirX);
            dirYVr.Update(dirY);
            magNormVr.Update(magNorm);
        }

    if(normaliseDirections){
        for(float &dirX : dirXData)
            dirX = (dirX - dirXVr.minVal) / (dirXVr.maxVal - dirXVr.minVal);

        for(float &dirY : dirYData)
            dirY = (dirY - dirYVr.minVal) / (dirYVr.maxVal - dirYVr.minVal);
    }

    for(float &magNorm : magNData)
        magNorm = (magNorm - magNormVr.minVal) / (magNormVr.maxVal - magNormVr.minVal);
}
It must be said that the Sobel filter has the property of linear separability, so this method can be optimized.

The hard part is over - all that remains is to write the X and Y coordinates of the gradient direction into the R and G channels of the normal map pixels. For the Z coordinate I use one. I also use a 3D vector of coefficients to adjust these values. The following is an example of generating a normal map with comments:

//ImageProcessing::ImageData Image - the original image. The constructor takes the pixel format and image data
ImageProcessing::SobelFilter sobelFilter;
sobelFilter.Init(Image);
sobelFilter.NormaliseDirections() = false;
sobelFilter.Process();

const auto &resX = sobelFilter.GetFilteredData(ImageProcessing::SobelFilter::SOBEL_DIR_X);
const auto &resY = sobelFilter.GetFilteredData(ImageProcessing::SobelFilter::SOBEL_DIR_Y);

ImageProcessing::ImageData destImage = {DXGI_FORMAT_R8G8B8A8_UNORM, Image.size};
size_t dstImageSize = Image.size.width * Image.size.height * destImage.pixelSize;

std::vector<uint8_t> dstImgPixels(dstImageSize);

for(int32_t d = 0; d < resX.size(); d++){

    //use the vector of tuning coefficients. In my case it is (0.03, 0.03, 1.0)
    Vector3 norm = Vector3::Normalize({resX[d] * NormalScalling.x,
                                       resY[d] * NormalScalling.y,
                                       1.0f * NormalScalling.z});

    Point2 coords(d % Image.size.width, d / Image.size.width);
    int32_t offset = (coords.y * Image.size.width + coords.x) * destImage.pixelSize;

    uint8_t *pixel = &dstImgPixels[offset];

    //translate the values from [-1.0, 1.0] to [0.0, 1.0] and then to [0, 255]
    pixel[0] = (uint8_t)((0.5f + norm.x * 0.5f) * 255.999f);
    pixel[1] = (uint8_t)((0.5f + norm.y * 0.5f) * 255.999f);
    pixel[2] = (uint8_t)((0.5f + norm.z * 0.5f) * 255.999f);
}

destImage.pixels = &dstImgPixels[0];

SaveImage(destImage, OutFilePath);
Now I will give an example of using a normal map in a shader:

//texCoords - texture coordinates that we obtained using the method described in paragraph 4
//normalL - vertex normal
//lightDir - vector to the light source
//Ld - color of the light source
//Kd - color of the material of the illuminated surface

float4 normColor = mainNormalTex.SampleLevel(mainNormalTexSampler, texCoords, 0);

//transfer the value from [0.0, 1.0] to [-1.0, 1.0] and normalize the result
float3 normalT = normalize(2.0f * normColor.rgb - 1.0f);

//translate the X texture coordinate from [0, 1] to [0, 2 * pi]
float ang = texCoords.x * 3.141592f * 2.0f;

float3 tangent;
tangent.x = -sin(ang);
tangent.y = 0.0f;
tangent.z = cos(ang);

float3 bitangent = normalize(cross(normalL, tangent));

float3x3 tbn = float3x3(tangent, bitangent, normalL);

float3 resNormal = mul(normalT, tbn);

float diff = saturate(dot(resNormal, lightDir.xyz));

float4 resColor = Ld * Kd * diff;

6. Level Of Detail

Well, now our landscape is illuminated! You can fly to the Moon - add a height map, set the material color, load the sectors, set the grid size to (16, 16) and... Hmm, that looks too coarse - let's make it (256, 256)... and now everything slows down. And why such high detail in distant sectors anyway? Moreover, the closer the observer is to the planet, the fewer sectors he can see. Yes... we still have a lot of work to do! Let's first figure out how to cut off unnecessary sectors. The determining value here will be the height of the observer above the surface of the planet - the higher it is, the more sectors he can see (Fig. 12).


Fig. 12 Dependence of the number of processed sectors on the observer's height

We find the height as follows: we construct a vector from the observer's position to the center of the sphere, calculate its length and subtract the radius of the sphere from it. Earlier I said that if the dot product of the vector to the observer and the vector to the center of the sector is less than zero, the sector is of no interest to us - now, instead of zero, we will use a value that depends linearly on the height. First, let's define the variables - we have a minimum and maximum value for the dot product and a minimum and maximum value for the height. Let's construct the following system of equations:

A + B * Hmin = Dmax
A + B * Hmax = Dmin

Expressing A from one equation, substituting it into the other, solving for B and then for A, and finally substituting A and B back into the linear function D(H) = A + B * H, we get:

D(H) = Dmax + (Dmin - Dmax) * (H - Hmin) / (Hmax - Hmin)

Where Hmin and Hmax are the minimum and maximum height values, Dmin and Dmax are the minimum and maximum values ​​of the scalar product. This problem can be solved differently - see Appendix 4.
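As a small sketch, the resulting threshold function in C++ (the parameter names are assumptions):

// Linear threshold for the dot product test: at the minimum height we demand Dmax
// (few sectors pass), at the maximum height we only demand Dmin (many sectors pass).
float VisibilityThreshold(float H, float Hmin, float Hmax, float Dmin, float Dmax)
{
    float t = (H - Hmin) / (Hmax - Hmin); // 0 at Hmin, 1 at Hmax
    return Dmax + (Dmin - Dmax) * t;
}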

Now we need to understand the levels of detail. Each of them will define the range of the dot product. In pseudocode, the process of determining whether a sector belongs to a certain level looks like this:

for each sector
    calculate the dot product of the vector to the observer and the vector to the center of the sector
    if the dot product is less than the minimum threshold calculated earlier
        move on to the next sector
    for each level of detail
        if the dot product is within the range defined for this level
            add the sector to this level
    end of the loop over levels of detail
end of the loop over all sectors
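In C++ the same loop might look like this (a sketch with simplified Sector and Lod structures; the article's own classes are richer):

#include <vector>

struct Sector { float dotWithViewer; }; // dot product of the two normalized vectors above
struct Lod    { float dotMin, dotMax; std::vector<Sector*> sectors; };

void DistributeSectors(std::vector<Sector> &sectors, std::vector<Lod> &lods, float minThreshold)
{
    for(Sector &sec : sectors){

        // skip sectors that fail the height-dependent visibility threshold
        if(sec.dotWithViewer < minThreshold)
            continue;

        for(Lod &lod : lods)
            if(sec.dotWithViewer >= lod.dotMin && sec.dotWithViewer < lod.dotMax){
                lod.sectors.push_back(&sec);
                break;
            }
    }
}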
We need to calculate the range of values for each level. First, let's build a system of two equations:

A * B^0 = Rmax
A * B^(N - 1) = Rmin

Where N is the number of levels of detail.

Having solved it, we get:

A = Rmax
B = (Rmin / Rmax)^(1 / (N - 1))

Using these coefficients, we define the function:

R(i) = A * B^i

Where Rmax is the range of the dot product (D(H) - Dmin) and Rmin is the minimum range allotted to a level. I'm using a value of 0.01. Now we need to subtract the result from Dmax:

Di = Dmax - R(i)

Using this function we will get areas for all levels. Here's an example:

const float dotArea = dotRange.maxVal - dotRange.minVal;
const float Rmax = dotArea, Rmin = 0.01f;
float lodsCnt = lods.size();

float A = Rmax;
float B = powf(Rmin / Rmax, 1.0f / (lodsCnt - 1.0f));

for(size_t g = 0; g < lods.size(); g++){
    lods[g].dotRange.minVal = dotRange.maxVal - A * powf(B, g);
    lods[g].dotRange.maxVal = dotRange.maxVal - A * powf(B, g + 1);
}

lods[lods.size() - 1].dotRange.maxVal = 1.0f;
Now we can determine to what level of detail the sector belongs (Fig. 13).


Fig. 13 Color differentiation of sectors according to levels of detail

Next we need to figure out the grid sizes. Storing a separate mesh for each level would be too expensive - it is much more efficient to change the detail of a single mesh on the fly using tessellation. To do this, in addition to the usual vertex and pixel shaders, we also need to implement hull and domain shaders. The main task of the Hull shader is to prepare the control points. It consists of two parts - the main function and the function that calculates the control point parameters. You must specify values for the following attributes:

domain
partitioning
outputtopology
outputcontrolpoints
patchconstantfunc
Here is an example of a Hull shader for triangle tiling:

struct PatchData
{
    float edges[3]: SV_TessFactor;
    float inside: SV_InsideTessFactor;
};

PatchData GetPatchData(InputPatch<VIn, 3> Patch, uint PatchId: SV_PrimitiveID)
{
    PatchData output;

    float tessFactor = 2.0f;

    output.edges[0] = tessFactor;
    output.edges[1] = tessFactor;
    output.edges[2] = tessFactor;
    output.inside = tessFactor;

    return output;
}

//the attribute values below are illustrative
[domain("tri")]
[partitioning("integer")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(3)]
[patchconstantfunc("GetPatchData")]
VIn ProcessHull(InputPatch<VIn, 3> Patch, uint PointId: SV_OutputControlPointID, uint PatchId: SV_PrimitiveID)
{
    return Patch[PointId];
}
You see, the main work is done in GetPatchData(). Its task is to establish the tessellation factor. We'll talk about it later, now let's move on to the Domain shader. It receives control points from the Hull shader and coordinates from the tessellator. The new value of position or texture coordinates in the case of division by triangles must be calculated using the following formula

N = C1 * F.x + C2 * F.y + C3 * F.z

Where C1, C2 and C3 are the values ​​of the control points, F are the coordinates of the tessellator. Also in the Domain shader you need to set the domain attribute, the value of which corresponds to the one specified in the Hull shader. Here is an example of a Domain shader:

cbuffer buff0: register(b0)
{
    matrix worldViewProj;
}

struct PatchData
{
    float edges[3]: SV_TessFactor;
    float inside: SV_InsideTessFactor;
};

[domain("tri")]
PIn ProcessDomain(PatchData Patch, float3 Coord: SV_DomainLocation, const OutputPatch<VIn, 3> Tri)
{
    float3 posL = Tri[0].posL * Coord.x + Tri[1].posL * Coord.y + Tri[2].posL * Coord.z;
    float2 texCoords = Tri[0].texCoords * Coord.x + Tri[1].texCoords * Coord.y + Tri[2].texCoords * Coord.z;

    PIn output;
    output.posH = mul(float4(posL, 1.0f), worldViewProj);
    output.normalW = Tri[0].normalW;
    output.texCoords = texCoords;

    return output;
}

The role of the vertex shader in this case is reduced to a minimum - for me it simply “passes” the data to the next stage.

Now we need to implement something similar. Our first task is to calculate the tessellation factor, or more precisely, its dependence on the observer's height. Let's build a system of equations again:

A + B * Hmin = Tmax
A + B * Hmax = Tmin

Solving it in the same way as before, we get:

T(H) = Tmin + (Tmax - Tmin) * (Hmax - H) / (Hmax - Hmin)

Where Tmin and Tmax are the minimum and maximum tessellation factors, Hmin and Hmax are the minimum and maximum values of the observer's height. My minimum tessellation factor is one, and the maximum is set separately for each level (for example 1, 2, 4, 16).

We still need to round the resulting factor down to the nearest power of two, i.e. find x such that

2^x = T

Let's take the decimal logarithm of both sides of the equation:

log(2^x) = log(T)

According to the property of logarithms, we can rewrite it as follows:

x * log(2) = log(T)

Now all we have to do is divide both sides by log(2):

x = log(T) / log(2)

But that is not all - x is generally not an integer (for a factor of 6, for example, it is approximately 2.58). Next, you need to drop the fractional part and raise two to the power of the resulting number. Here is the code for calculating the tessellation factors for the levels of detail:

float h = camera->GetHeight();
const RangeF &hR = heightRange;

for(LodsStorage::Lod &lod : lods){

    //derived from the system
    //A + B * Hmax = Lmin
    //A + B * Hmin = Lmax
    //by expressing A and substituting it into the second equation
    float mTf = (float)lod.GetMaxTessFactor();
    float tessFactor = 1.0f + (mTf - 1.0f) * ((h - hR.maxVal) / (hR.minVal - hR.maxVal));
    tessFactor = Math::Saturate(tessFactor, RangeF(1.0f, mTf));

    float nearPowOfTwo = powf(2.0f, floorf(logf(tessFactor) / logf(2.0f)));

    lod.SetTessFactor(nearPowOfTwo); //store the factor for this level (assumed setter; the original line was lost)
}

7. Noise

Let's see how we can increase the detail of the landscape without changing the size of the height map. What comes to mind is to add to the height value a value sampled from a gradient noise texture. The coordinates at which we sample will be N times larger than the main ones. When sampling, the mirror addressing mode (D3D11_TEXTURE_ADDRESS_MIRROR) will be used (see Fig. 14).


Fig. 14 Sphere with height map + sphere with noise map = sphere with final height

In this case, the height will be calculated as follows:

//float2 tc1 - texture coordinates obtained from a normalized point, as described earlier
//texCoordsScale - texture coordinate multiplier. In my case, it is equal to 300
//mainHeightTex, mainHeightTexSampler - height map texture
//distHeightTex, distHeightTexSampler - gradient noise texture
//maxTerrainHeight - maximum landscape height. In my case 0.03

float2 tc2 = tc1 * texCoordsScale;

float4 mainHeighTexColor = mainHeightTex.SampleLevel(mainHeightTexSampler, tc1, 0);
float4 distHeighTexColor = distHeightTex.SampleLevel(distHeightTexSampler, tc2, 0);

float height = (mainHeighTexColor.x + distHeighTexColor.x) * maxTerrainHeight;
So far the periodic nature of the noise is quite noticeable, but with the addition of lighting and texturing the situation will change for the better. What is a gradient noise texture? Roughly speaking, it is a lattice of random values. Let's figure out how to match the lattice size to the texture size. Let's say we want to create a noise texture of 256 by 256 pixels. It's simple: if the dimensions of the lattice coincide with the dimensions of the texture, we will get something similar to white noise on a TV. But what if our lattice has dimensions of, say, 2 by 2? The answer is simple - use interpolation. One formulation of linear interpolation looks like this:

L(a, b, t) = a * (1 - t) + b * t

This is the fastest, but at the same time the least suitable option for us. It's better to use cosine-based interpolation:

t' = (1 - cos(pi * t)) / 2
C(a, b, t) = a * (1 - t') + b * t'
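A possible implementation of such a helper (a sketch of what the Math::CosInterpolation function used later in the article might look like - its exact signature is not shown in the text):

#include <cmath>

// Interpolate between a and b with factor t in [0, 1].
// The factor is remapped through a cosine so the curve eases in and out.
float CosInterpolation(float a, float b, float t)
{
    float ct = (1.0f - std::cos(t * 3.141592f)) * 0.5f;
    return a * (1.0f - ct) + b * ct;
}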

But we can't simply interpolate between the values ​​at the edges of the diagonal (bottom left and top right corner of the cell). In our case, interpolation will need to be applied twice. Let's imagine one of the lattice cells. It has four corners - let's call them V1, V2, V3, V4. Also, inside this cell there will be its own two-dimensional coordinate system, where point (0, 0) corresponds to V1 and point (1, 1) to V3 (see Fig. 15a). In order to get a value with coordinates (0.5, 0.5), we first need to get two X-interpolated values ​​- between V1 and V4 and between V2 and V3, and finally Y-interpolate between these values ​​(Fig. 15b).

Here's an example:

float2 coords(0.5f, 0.5f);

float4 P1 = lerp(V1, V4, coords.x);
float4 P2 = lerp(V2, V3, coords.x);

float4 P = lerp(P1, P2, coords.y);


Fig. 15 a - Image of a lattice cell with coordinates V1, V2, V3 and V4. b - Sequence of two interpolations using a cell as an example

Now let's do the following - for each pixel of the noise texture, take the interpolated value for the 2x2 lattice, then add to it the interpolated value for the 4x4 lattice multiplied by 0.5, then for the 8x8 lattice multiplied by 0.25, and so on up to a certain limit - this is called adding octaves (Fig. 16). The formula looks like this:

N(p) = sum for o = 0 .. octavesCnt-1 of Interp(p * 2^o) * k^o

Where k is the intensity factor (0.5 in the example above) and Interp() is the lattice interpolation described above.


Fig. 16 Example of adding octaves

Here is an example implementation:

for(int32_t x = 0; x < size.width; x++)
    for(int32_t y = 0; y < size.height; y++){

        float val = 0.0f;

        Vector2 normPos = {(float)x / (float)(sideSize - 1),
                           (float)y / (float)(sideSize - 1)};

        for(int32_t o = 0; o < octavesCnt; o++){

            float frequency = powf(2.0f, (float)(startFrequency + o));
            float intencity = powf(intencityFactor, (float)o);

            Vector2 freqPos = normPos * frequency;

            Point2 topLeftFreqPos = Cast<Point2>(freqPos);
            Point2 btmRightFreqPos = topLeftFreqPos + Point2(1, 1);

            float xFactor = freqPos.x - (float)topLeftFreqPos.x;
            float yFactor = freqPos.y - (float)topLeftFreqPos.y;

            //the rest of the loop was lost in the original listing and is reconstructed here:
            //interpolate the lattice values and accumulate the octave
            float iVal = GetInterpolatedValue(topLeftFreqPos, btmRightFreqPos, xFactor, yFactor);
            val += iVal * intencity;
        }

        values[y * size.width + x] = val; //values - output array (assumed name)
    }
Also for V1, V2, V3 and V4 you can get the sum from the value itself and its neighbors like this:

float GetSmoothValue(const Point2 &Coords)
{
    float corners = (GetValue({Coords.x - 1, Coords.y - 1}) +
                     GetValue({Coords.x + 1, Coords.y - 1}) +
                     GetValue({Coords.x - 1, Coords.y + 1}) +
                     GetValue({Coords.x + 1, Coords.y + 1})) / 16.0f;

    float sides = (GetValue({Coords.x - 1, Coords.y}) +
                   GetValue({Coords.x + 1, Coords.y}) +
                   GetValue({Coords.x, Coords.y - 1}) +
                   GetValue({Coords.x, Coords.y + 1})) / 8.0f;

    float center = GetValue(Coords) / 4.0f;

    return center + sides + corners;
}
and use these values ​​for interpolation. Here's the rest of the code:

float GetInterpolatedValue(const Point2 &TopLeftCoord, const Point2 &BottomRightCoord, float XFactor, float YFactor)
{
    Point2 tlCoords(TopLeftCoord.x, TopLeftCoord.y);
    Point2 trCoords(BottomRightCoord.x, TopLeftCoord.y);
    Point2 brCoords(BottomRightCoord.x, BottomRightCoord.y);
    Point2 blCoords(TopLeftCoord.x, BottomRightCoord.y);

    float tl = (useSmoothValues) ? GetSmoothValue(tlCoords) : GetValue(tlCoords);
    float tr = (useSmoothValues) ? GetSmoothValue(trCoords) : GetValue(trCoords);
    float br = (useSmoothValues) ? GetSmoothValue(brCoords) : GetValue(brCoords);
    float bl = (useSmoothValues) ? GetSmoothValue(blCoords) : GetValue(blCoords);

    float bottomVal = Math::CosInterpolation(bl, br, XFactor);
    float topVal = Math::CosInterpolation(tl, tr, XFactor);

    return Math::CosInterpolation(topVal, bottomVal, YFactor);
}
In conclusion of the subsection, I want to say that everything I have described up to this point is a slightly different implementation of Perlin noise from the canonical one.
We've sorted out the heights - now let's see what to do with normals. As with the main height map, we need to generate a normal map from the noise texture. Then in the shader we simply add the normal from the main map with the normal from the noise texture. I must say that this is not entirely correct, but it gives an acceptable result. Here's an example:
//float2 texCoords1 - texture coordinates obtained from the normalized point, as described earlier
//mainNormalTex, mainNormalTexSampler - main normal map
//distNormalTex, distNormalTexSampler - gradient noise normal map

float2 texCoords2 = texCoords1 * texCoordsScale;

float4 mainNormColor = mainNormalTex.SampleLevel(mainNormalTexSampler, texCoords1, 0);
float4 distNormColor = distNormalTex.SampleLevel(distNormalTexSampler, texCoords2, 0);

float3 mainNormal = 2.0f * mainNormColor.rgb - 1.0f;
float3 distNormal = 2.0f * distNormColor.rgb - 1.0f;

float3 normal = normalize(mainNormal + distNormal);

8. Hardware Instancing

Let's do optimization. Now the cycle of drawing sectors in pseudocode looks like this

for each sector
    calculate the dot product of the vector to the viewer and the vector to the center of the sector
    if it is greater than zero
        set the value of S in the shader data
        set the value of V1 in the shader data
        set the value of V2 in the shader data
        draw the mesh
    end of condition
end of the loop over all sectors
The performance of this approach is extremely low. There are several optimization options: you can build a quadtree for each face of the cube so as not to calculate the dot product for each sector, or you can update the values of V1 and V2 not for each sector but for the six cube faces to which they belong. I chose a third option - instancing. Briefly, here is what it is. Let's say you want to draw a forest. You have a tree model, and you also have a set of transformation matrices - tree positions, possibly scaling or rotation. You can create one buffer that contains the vertices of all the trees already transformed into world space - this is a fine option as long as the forest does not run around the map. But what if you need to implement transformations - say, trees swaying in the wind? You can do this: copy the vertex data of the model N times into one buffer, adding a tree index (from 0 to N) to the vertex data, then update an array of transformation matrices and pass it to the shader as a variable. In the shader we select the desired matrix by the tree index. But how can we avoid the data duplication?

First, I would like to draw your attention to the fact that a vertex can be assembled from several buffers. To describe a vertex, you need to specify the source index in the InputSlot field of the D3D11_INPUT_ELEMENT_DESC structure. This can be used when implementing morph-based facial animation - say, you have two vertex buffers containing two states of a face, and you want to linearly interpolate between them. Here's how to describe the vertex:

D3D11_INPUT_ELEMENT_DESC desc[] = {
    /*part1*/
    {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D11_INPUT_PER_VERTEX_DATA, 0},
    {"NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0},
    {"TEXCOORD", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0},
    /*part2*/
    {"POSITION", 1, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0,  D3D11_INPUT_PER_VERTEX_DATA, 0},
    {"NORMAL",   1, DXGI_FORMAT_R32G32B32_FLOAT, 1, 12, D3D11_INPUT_PER_VERTEX_DATA, 0},
    {"TEXCOORD", 1, DXGI_FORMAT_R32G32B32_FLOAT, 1, 24, D3D11_INPUT_PER_VERTEX_DATA, 0}
};
in the shader the vertex should be described as follows:

struct VIn
{
    float3 position1: POSITION0;
    float3 normal1: NORMAL0;
    float2 tex1: TEXCOORD0;

    float3 position2: POSITION1;
    float3 normal2: NORMAL1;
    float2 tex2: TEXCOORD1;
};
then you just interpolate the values

float3 res = lerp(input.position1, input.position2, factor);
Why am I saying this? Let's return to the example of trees. We will collect the vertex from two sources - the first will contain the position in local space, texture coordinates and normal, the second will contain the transformation matrix in the form of four four-dimensional vectors. The description of the vertex looks like this:

D3D11_INPUT_ELEMENT_DESC desc[] = {
    /*part1*/
    {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,  D3D11_INPUT_PER_VERTEX_DATA,   0},
    {"NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 12, D3D11_INPUT_PER_VERTEX_DATA,   0},
    {"TEXCOORD", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 24, D3D11_INPUT_PER_VERTEX_DATA,   0},
    /*part2*/
    {"WORLD", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0,  D3D11_INPUT_PER_INSTANCE_DATA, 1},
    {"WORLD", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D11_INPUT_PER_INSTANCE_DATA, 1},
    {"WORLD", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D11_INPUT_PER_INSTANCE_DATA, 1},
    {"WORLD", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D11_INPUT_PER_INSTANCE_DATA, 1}
};
Please note that in the second part the InputSlotClass field is equal to D3D11_INPUT_PER_INSTANCE_DATA and the InstanceDataStepRate field is equal to one (for a brief description of the InstanceDataStepRate field, see Appendix 1). In this case, the input assembler will use the data of the entire buffer from the source of type D3D11_INPUT_PER_VERTEX_DATA for each element from the source of type D3D11_INPUT_PER_INSTANCE_DATA. In the shader these vertices can be described as follows:

struct VIn
{
    float3 posL: POSITION;
    float3 normalL: NORMAL;
    float2 tex: TEXCOORD;

    row_major float4x4 world: WORLD;
};
By creating a second buffer with the attributes D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE, we can update it from the CPU side. You need to draw this kind of geometry using calls to DrawInstanced() or DrawIndexedInstanced(). There are also calls to DrawInstancedIndirect() and DrawIndexedInstancedIndirect() - about them, see Appendix 2.

Here's an example of setting buffers and using the DrawIndexedInstanced() function:

//vb - vertex buffer
//tb - "instance" buffer
//ib - index buffer
//vertexSize - size of an element in the vertex buffer
//instanceSize - size of an element in the "instance" buffer
//indicesCnt - number of indices
//instancesCnt - number of "instances"

std::vector<ID3D11Buffer*> buffers = {vb, tb};
std::vector<UINT> strides = {vertexSize, instanceSize};
std::vector<UINT> offsets = {0, 0};

deviceContext->IASetVertexBuffers(0, buffers.size(), &buffers[0], &strides[0], &offsets[0]);
deviceContext->IASetIndexBuffer(ib, DXGI_FORMAT_R32_UINT, 0);
deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

deviceContext->DrawIndexedInstanced(indicesCnt, instancesCnt, 0, 0, 0);
Now let's finally get back to our topic. A sector can be described by a point on the plane to which it belongs and two vectors that describe this plane. Therefore, the vertex will be assembled from two sources. The first contains the coordinates in grid space, the second contains the sector data. The description of the vertex looks like this:

std::vector<D3D11_INPUT_ELEMENT_DESC> meta = {
    //coordinates in grid space
    {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D11_INPUT_PER_VERTEX_DATA,   0},
    //first face vector
    {"TEXCOORD", 0, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0,  D3D11_INPUT_PER_INSTANCE_DATA, 1},
    //second face vector
    {"TEXCOORD", 1, DXGI_FORMAT_R32G32B32_FLOAT, 1, 12, D3D11_INPUT_PER_INSTANCE_DATA, 1},
    //start of face
    {"TEXCOORD", 2, DXGI_FORMAT_R32G32B32_FLOAT, 1, 24, D3D11_INPUT_PER_INSTANCE_DATA, 1}
};
Note that I'm using a 3D vector to store the grid space coordinates (no z coordinate is used)

9. Frustum culling

Another important optimization component is frustum culling - clipping against the visibility pyramid. The visibility pyramid (frustum) is the area of the scene that the camera "sees". How do we build it? First, remember that a point can be in one of four coordinate systems - local, world, view and projection space. The transition between them is carried out through matrices - the world, view and projection matrices - and the transformations must take place sequentially: from local to world, from world to view, and finally from view to projection space. All these transformations can be combined into one by multiplying the matrices.

We use a perspective projection, which involves the so-called "homogeneous division" - after multiplying the vector (Px, Py, Pz, 1) by the projection matrix, its components should be divided by the W component of this vector. After the transition to projection space and the homogeneous division, the point ends up in NDC space. NDC space is a set of three coordinates x, y, z, where x and y belong to [-1, 1] and z belongs to [0, 1] (it must be said that in OpenGL the ranges are slightly different).

Now let's get down to solving our problem. In my case, the pyramid is located in view space. We need six planes that describe it (Fig. 17a). A plane can be described using a normal and a point that belongs to this plane. First, let's get the points - for this we take the following set of coordinates in NDC space:

std::vector<Point4F> pointsN = {
    {-1.0f, -1.0f, 0.0f, 1.0f},
    {-1.0f,  1.0f, 0.0f, 1.0f},
    { 1.0f,  1.0f, 0.0f, 1.0f},
    { 1.0f, -1.0f, 0.0f, 1.0f},
    {-1.0f, -1.0f, 1.0f, 1.0f},
    {-1.0f,  1.0f, 1.0f, 1.0f},
    { 1.0f,  1.0f, 1.0f, 1.0f},
    { 1.0f, -1.0f, 1.0f, 1.0f}
};
Look, at the first four points the z value is 0 - this means that they belong to the near clipping plane, at the last four z is equal to 1 - they belong to the far clipping plane. Now these points need to be converted to view space. But how?

Remember the example about the astronaut - so it’s the same here. We need to multiply the points by the inverse projection matrix. True, after this we still need to divide each of them by its W coordinate. As a result, we get the required coordinates (Fig. 17b). Let's now look at the normals - they must be directed inside the pyramid, so we need to choose the necessary order for calculating the vector product.

Matrix4x4 invProj = Matrix4x4::Inverse(camera->GetProjMatrix());

std::vector<Point3F> pointsV;

for(const Point4F &pN : pointsN){
    Point4F pV = invProj.Transform(pN);
    pV /= pV.w;
    pointsV.push_back(Cast<Point3F>(pV));
}

//each plane is built from three of the eight corners; the exact corner indices and
//which normals have to be flipped depend on the winding used by the Plane constructor
//(the indices below are an assumption - they were lost in the original listing)
planes[0] = {pointsV[0], pointsV[1], pointsV[2]}; //near plane
planes[1] = {pointsV[4], pointsV[5], pointsV[6]}; //far plane
planes[2] = {pointsV[0], pointsV[1], pointsV[5]}; //left plane
planes[3] = {pointsV[2], pointsV[3], pointsV[7]}; //right plane
planes[4] = {pointsV[1], pointsV[2], pointsV[6]}; //top plane
planes[5] = {pointsV[0], pointsV[3], pointsV[7]}; //bottom plane

planes[0].normal *= -1.0f;
planes[1].normal *= -1.0f;


Fig. 17 Visibility pyramid

The pyramid is built - it's time to use it. We do not draw the sectors that do not fall inside the pyramid. In order to determine whether a sector is inside the visibility pyramid, we will check a bounding sphere centered at the center of this sector. This does not give exact results, but in this case I don't see anything wrong with a few extra sectors being drawn. The radius of the sphere is calculated as follows:

R = |TR - BL| / 2

Where TR is the upper right corner of the sector, BL is the lower left corner. Since all sectors have the same area, it is enough to calculate the radius once.

How can we determine whether the sphere describing a sector is inside the visibility pyramid? First we need to determine whether the sphere intersects a plane and, if not, which side of the plane it is on. Let's get the vector to the center of the sphere:

V = S - P

Where P is a point on the plane and S is the center of the sphere. Now let's calculate the dot product of this vector and the plane normal. The orientation can be determined by the sign of the dot product - as mentioned earlier, if it is positive, the sphere is in front of the plane, if negative, it is behind it. It remains to determine whether the sphere intersects the plane. Let's take two vectors - N (the normal vector) and V. Now let's construct a vector from the end of N to the end of V - let's call it K. We need to find a length of N such that it forms an angle of 90 degrees with K (formally speaking, such that N and K are orthogonal). Okey dokey, look at Fig. 18a - from the properties of a right triangle we know that:

|Nk| = |V| * cos(a)

Where Nk is the scaled copy of N we are looking for and a is the angle between V and N.

We need to find the cosine. Let's use the previously mentioned property of the dot product:

V · N = |V| * |N| * cos(a)

Divide both sides by |V| * |N| and we get:

cos(a) = (V · N) / (|V| * |N|)

Let's use this result:

|Nk| = |V| * (V · N) / (|V| * |N|)

Since |V| is just a number, we can cancel it, and we get:

|Nk| = (V · N) / |N|

Since the vector N is normalized, the last step is to simply multiply it by the resulting value; otherwise the vector should be normalized first - in that case the final equation looks like this:

D = N * ((V · N) / (N · N))

Where D is our new vector. This process is called “Vector Projection” (Fig. 18b). But why do we need this? We know that a vector is determined by its length and direction and does not change in any way depending on its position - this means that if we position D so that it points to S, then its length will be equal to the minimum distance from S to the plane (Fig. 18c)


Fig. 18 a Projection of N onto V, b Visual display of the length of projected N in relation to a point, c Visual display
length of the projected N as applied to a sphere centered at S

Since we don't need the projected vector itself, it is enough to calculate its length. Given that N is a unit vector, we only need to compute the dot product of V and N. Putting everything together, we can conclude that the sphere intersects the plane if the absolute value of the dot product of the vector to the center of the sphere and the plane normal is less than the radius of the sphere.
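Condensed into code, the whole projection argument becomes a one-line signed distance (a sketch with an assumed Vec3 type):

struct Vec3 { float x, y, z; };

// Signed distance from the sphere center S to a plane given by a point P and a unit normal N.
// Positive - in front of the plane, negative - behind it;
// if the absolute value is less than the radius, the sphere intersects the plane.
float SignedDistanceToPlane(const Vec3 &S, const Vec3 &P, const Vec3 &N)
{
    Vec3 V = {S.x - P.x, S.y - P.y, S.z - P.z};
    return V.x * N.x + V.y * N.y + V.z * N.z;
}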

In order to claim that the sphere is inside the pyramid of visibility, we need to make sure that it either intersects one of the planes or is in front of each of them. You can pose the question differently - if the sphere does not intersect and is located behind at least one of the planes, it is definitely outside the pyramid of visibility. That's what we'll do. Please note that I transfer the center of the sphere to the same space in which the pyramid is located - into view space.

bool Frustum::TestSphere(const Point3F &Pos, float Radius, const Matrix4x4 &WorldViewMatrix) const
{
    Point3F posV = WorldViewMatrix.Transform(Pos);

    for(const Plane &pl : planes){
        Vector3 toSphPos = posV - pl.pos;

        if(Vector3::Dot(toSphPos, pl.normal) < -Radius)
            return false;
    }

    return true;
}

10. Cracks

Another problem that we have to solve is cracks at the boundaries of detail levels (Fig. 19).


Fig. 19 demonstration of landscape cracks

First of all, we need to identify those sectors that lie on the border of levels of detail. At first glance, it seems that this is a resource-intensive task - after all, the number of sectors at each level is constantly changing. But if you use adjacency data, the solution becomes much simpler. What is adjacency data? Look, each sector has four neighbors. The set of references to them - be they pointers or indexes - is the adjacency data. With their help, we can easily determine which sector lies on the border - just check which level its neighbors belong to.

Well, let's find the neighbors of each sector. And again, we don’t need to loop through all the sectors. Let's imagine that we are working with a sector with X and Y coordinates in grid space.

If it does not touch the edge of the cube, then the coordinates of its neighbors will be as follows:

Top neighbor - (X, Y - 1)
Bottom neighbor - (X, Y + 1)
Left neighbor - (X - 1, Y)
Right neighbor - (X + 1, Y)

If the sector touches an edge, then we place it in a special container. After processing all six faces, it will contain all the boundary sectors of the cube. It is in this container that we have to perform the search. Let's calculate the edges for each sector in advance:

struct SectorEdges
{
    CubeSectors::Sector *owner;
    typedef std::pair<Vector3, Vector3> Edge;
    Edge edges[4];
};

std::vector<SectorEdges> sectorsEdges;

//borderSectors - a container with border sectors
for(CubeSectors::Sector &sec : borderSectors){

    //each sector contains two vectors that describe the cube face to which it belongs
    Vector3 v1 = sec.vec1 * sec.sideSize;
    Vector3 v2 = sec.vec2 * sec.sideSize;

    SectorEdges secEdges;
    secEdges.owner = &sec;

    //sec.startPos - the beginning of the sector in local space
    secEdges.edges[0] = {sec.startPos, sec.startPos + v1};
    secEdges.edges[1] = {sec.startPos, sec.startPos + v2};
    secEdges.edges[2] = {sec.startPos + v2, sec.startPos + v2 + v1};
    secEdges.edges[3] = {sec.startPos + v1, sec.startPos + v2 + v1};

    sectorsEdges.push_back(secEdges);
}
Next comes the search itself

for(SectorEdges &edgs : sectorsEdges)
    for(size_t e = 0; e < 4; e++)
        if(edgs.owner->adjacency[e] == nullptr)
            FindSectorEdgeAdjacency(edgs, (AdjacencySide)e, sectorsEdges);
The FindSectorEdgeAdjacency() function looks like this

void CubeSectors::FindSectorEdgeAdjacency(SectorEdges &Sector, CubeSectors::AdjacencySide Side, std::vector<SectorEdges> &Neibs)
{
    SectorEdges::Edge &e = Sector.edges[Side];

    for(SectorEdges &edgs2 : Neibs){

        if(edgs2.owner == Sector.owner)
            continue;

        for(size_t n = 0; n < 4; n++){

            SectorEdges::Edge &e2 = edgs2.edges[n];

            if((Math::Equals(e.first, e2.first) && Math::Equals(e.second, e2.second)) ||
               (Math::Equals(e.second, e2.first) && Math::Equals(e.first, e2.second)))
            {
                Sector.owner->adjacency[Side] = edgs2.owner;
                edgs2.owner->adjacency[n] = Sector.owner;

                return;
            }
        }
    }
}

Note that we update the adjacency data for both sectors - the one being processed (Sector) and the neighbor that was found.
After we have distributed the sectors by levels of detail, we determine the neighboring tessellation coefficients for each sector:

for(LodsStorage::Lod &lod : lods){

    const std::vector<Sector*> &sectors = lod.GetSectors();
    bool lastLod = lod.GetInd() == lods.GetCount() - 1;

    for(Sector *s : sectors){

        int32_t tessFactor = s->GetTessFactor();

        s->GetBorderTessFactor() = {
            GetNeibTessFactor(s, Sector::ADJ_BOTTOM, tessFactor, lastLod),
            GetNeibTessFactor(s, Sector::ADJ_LEFT,   tessFactor, lastLod),
            GetNeibTessFactor(s, Sector::ADJ_TOP,    tessFactor, lastLod),
            GetNeibTessFactor(s, Sector::ADJ_RIGHT,  tessFactor, lastLod)
        };
    }
}
A function that looks for a neighboring tessellation factor:

float Terrain::GetNeibTessFactor(Sector *Sec, Sector::AdjacencySide Side, int32_t TessFactor, bool IsLastLod)
{
    Sector *neib = Sec->GetAdjacency()[Side];
    int32_t neibTessFactor = neib->GetTessFactor();

    return (neibTessFactor < TessFactor) ? (float)neibTessFactor : 0.0f;
}

If we return zero, then the neighbor on the Side side is of no interest to us. Let me jump ahead and say that we need to eliminate cracks from the side of the level with a high tessellation coefficient.
Now let's move on to the shader. Let me remind you that first we need to get the grid coordinates using the tessellator coordinates. Then these coordinates are converted to a point on the face of the cube, this point is normalized - and now we have a point on the sphere:

float3 p = Tri[0].netPos * Coord.x + Tri[1].netPos * Coord.y + Tri[2].netPos * Coord.z;

float3 planePos = Tri[0].startPos + Tri[0].vec1 * p.x * gridStep.x +
                                    Tri[0].vec2 * p.y * gridStep.y;

float3 sphPos = normalize(planePos);
First we need to figure out whether the vertex belongs to the first or last row of the grid, or to the first or last column - in that case the vertex lies on an edge of the sector. But this is not enough - we also need to determine whether the vertex lies on a LOD boundary. To do this, we use the information about the neighboring sectors, or rather their tessellation levels:

float4 bTf = Tri[0].borderTessFactor;

bool isEdge = (bTf.x != 0.0f && p.y == 0.0f) ||          //bottom
              (bTf.y != 0.0f && p.x == 0.0f) ||          //left
              (bTf.z != 0.0f && p.y == gridSize.y) ||    //top
              (bTf.w != 0.0f && p.x == gridSize.x);      //right

Now comes the actual elimination of the cracks. Look at Fig. 20. The red line is the edge belonging to the second level of detail. The two blue lines are the edges of the third level of detail. We need V3 to belong to the red line - that is, to lie on the edge of the second level. Since the heights V1 and V2 are the same for both levels, V3 can be found by linear interpolation between them.


Fig. 20 The edges that form a crack, shown as lines

So far we have neither V1 and V2 nor the coefficient F. First we need to find the index of the point V3. That is, if the mesh has a size of 32 by 32 and the tessellation coefficient is four, this index lies between zero and 128 (32 * 4). We already have the coordinates p in grid space - within this example they could be, say, (15.5, 16). To obtain the index, we need to multiply one of the p coordinates by the tessellation coefficient. We also need the start of the edge and the direction to its end; the start is one of the corners of the sector.

float edgeVertInd = 0.0f;
float3 edgeVec = float3(0.0f, 0.0f, 0.0f);
float3 startPos = float3(0.0f, 0.0f, 0.0f);
uint neibTessFactor = 0;

if(bTf.x != 0.0f && p.y == 0.0f){ // bottom
    edgeVertInd = p.x * Tri.tessFactor;
    edgeVec = Tri.vec1;
    startPos = Tri.startPos;
    neibTessFactor = (uint)Tri.borderTessFactor.x;
}else if(bTf.y != 0.0f && p.x == 0.0f){ // left
    edgeVertInd = p.y * Tri.tessFactor;
    edgeVec = Tri.vec2;
    startPos = Tri.startPos;
    neibTessFactor = (uint)Tri.borderTessFactor.y;
}else if(bTf.z != 0.0f && p.y == gridSize.y){ // top
    edgeVertInd = p.x * Tri.tessFactor;
    edgeVec = Tri.vec1;
    startPos = Tri.startPos + Tri.vec2 * (gridStep.x * gridSize.x);
    neibTessFactor = (uint)Tri.borderTessFactor.z;
}else if(bTf.w != 0.0f && p.x == gridSize.x){ // right
    edgeVertInd = p.y * Tri.tessFactor;
    edgeVec = Tri.vec2;
    startPos = Tri.startPos + Tri.vec1 * (gridStep.x * gridSize.x);
    neibTessFactor = (uint)Tri.borderTessFactor.w;
}
Next we need to find the indices for V1 and V2. Imagine that you have the number 3 and need to find the two closest numbers that are multiples of two. You calculate the remainder of dividing three by two - it equals one. Then you subtract this remainder from three, or add it, and get the desired result. The same applies to the indices, except that instead of two we use the ratio of the tessellation coefficients of the detail levels. That is, if the third level has a coefficient of 16 and the second has 2, the ratio equals 8. Now, in order to get the heights, we first need to get the corresponding points on the sphere by normalizing the points on the face. We have already prepared the start and the direction of the edge - all that remains is the step length along it. Since the length of an edge of the original grid cell is gridStep.x, the step we need is gridStep.x / Tri.tessFactor. Then, from the points on the sphere, we obtain the heights as described earlier.

float GetNeibHeight(float3 EdgeStartPos, float3 EdgeVec, float VecLen, float3 NormOffset)
{
    float3 neibPos = EdgeStartPos + EdgeVec * VecLen;
    neibPos = normalize(neibPos);
    return GetHeight(neibPos, NormOffset);
}

float vertOffset = gridStep.x / Tri.tessFactor;

uint tessRatio = (uint)tessFactor / (uint)neibTessFactor;
uint ind = (uint)edgeVertInd % tessRatio;

uint leftNeibInd = (uint)edgeVertInd - ind;
float leftNeibHeight = GetNeibHeight(startPos, edgeVec, vertOffset * leftNeibInd, normOffset);

uint rightNeibInd = (uint)edgeVertInd + ind;
float rightNeibHeight = GetNeibHeight(startPos, edgeVec, vertOffset * rightNeibInd, normOffset);
Well, the last component is the factor F. We obtain it by dividing the remainder (ind) by the ratio of the coefficients (tessRatio):

float factor = (float)ind / (float)tessRatio;
The final stage is linear interpolation of heights and obtaining a new vertex

float avgHeight = lerp(leftNeibHeight, rightNeibHeight, factor);
posL = sphPos * (sphereRadius + avgHeight);
A crack can also appear at the seam where a sector whose edge texture coordinate equals 1 borders one whose coordinate equals 0. In this case, I take the average of the heights sampled at the two coordinates:

float GetHeight(float2 TexCoords)
{
    float2 texCoords2 = TexCoords * texCoordsScale;
    float mHeight = mainHeightTex.SampleLevel(mainHeightTexSampler, TexCoords, 0).x;
    float dHeight = distHeightTex.SampleLevel(distHeightTexSampler, texCoords2, 0).x;
    return (mHeight + dHeight) * maxTerrainHeight;
}

float GetHeight(float3 SphPos, float3 NormOffset)
{
    float2 texCoords1 = GetTexCoords(SphPos, NormOffset);
    float height = GetHeight(texCoords1);

    if(texCoords1.x == 1.0f){
        float height2 = GetHeight(float2(0.0f, texCoords1.y));
        return lerp(height, height2, 0.5f);
    }else if(texCoords1.x == 0.0f){
        float height2 = GetHeight(float2(1.0f, texCoords1.y));
        return lerp(height, height2, 0.5f);
    }

    return height;
}

11. GPU Processing

Let's move sector processing to the GPU. We will have two Compute shaders: the first performs frustum culling and determines the level of detail, the second computes the border tessellation coefficients used to eliminate cracks. The division into two stages is necessary because, as in the CPU case, we cannot correctly determine the neighbors of the sectors until the culling has been performed. Since both shaders use LOD data and work with sectors, I introduced two shared structures, Sector and Lod:

struct Sector
{
    float3 vec1, vec2;
    float3 startPos;
    float3 normCenter;
    int adjacency[4];
    float borderTessFactor[4];
    int lod;
};

struct Lod
{
    RangeF dotRange;
    float tessFactor;
    float padding;
    float4 color;
};
We will use three main buffers: the input buffer (contains the initial information about the sectors), the intermediate buffer (holds the sector data produced by the first stage) and the final buffer (will be passed on for drawing). The input buffer data will not change, so it is reasonable to use the D3D11_USAGE_IMMUTABLE value in the Usage field of the D3D11_BUFFER_DESC structure. We simply write the data of all sectors into it, with the only difference that for the adjacency data we use sector indices rather than pointers. The level of detail index and the border tessellation coefficients are set to zero:

static const size_t sectorSize = sizeof(Vector3) + //vec1
                                 sizeof(Vector3) + //vec2
                                 sizeof(Point3F) + //startPos
                                 sizeof(Point3F) + //normCenter
                                 sizeof(Point4) +  //adjacency
                                 sizeof(Vector4) + //borderTessFactor
                                 sizeof(int32_t);  //lod

size_t sectorsDataSize = sectors.GetSectors().size() * sectorSize;
std::vector<char> sectorsData(sectorsDataSize);

char* ptr = &sectorsData[0];
const Sector* firstPtr = &sectors.GetSectors()[0];

for(const Sector &sec : sectors){

    Utils::AddToStream<Vector3>(ptr, sec.GetVec1());
    Utils::AddToStream<Vector3>(ptr, sec.GetVec2());
    Utils::AddToStream<Point3F>(ptr, sec.GetStartPos());
    Utils::AddToStream<Point3F>(ptr, sec.GetNormCenter());

    // adjacency is written as sector indices rather than pointers
    // (assumption: GetAdjacency() returns the array of four neighbour pointers)
    for(size_t a = 0; a < 4; a++)
        Utils::AddToStream<float>(ptr, (float)(sec.GetAdjacency()[a] - firstPtr));

    Utils::AddToStream<Vector4>(ptr, Vector4()); //borderTessFactor
    Utils::AddToStream<int32_t>(ptr, 0);         //lod
}

inputData = Utils::DirectX::CreateBuffer(&sectorsData[0],                        //Raw data
                                         sectorsDataSize,                        //Buffer size
                                         D3D11_BIND_SHADER_RESOURCE,             //bind flags
                                         D3D11_USAGE_IMMUTABLE,                  //usage
                                         0,                                      //CPU access flags
                                         D3D11_RESOURCE_MISC_BUFFER_STRUCTURED,  //misc flags
                                         sectorSize);                            //structure byte stride

Now a few words about the intermediate buffer. It plays two roles - output for the first shader and input for the second - so we specify the value D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE in the BindFlags field. We also create two views for it: an UnorderedAccessView, which allows the shader to write the result of its work into it, and a ShaderResourceView, with which we use the buffer as input. Its size is the same as that of the previously created input buffer.

UINT bindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;

intermediateData = Utils::DirectX::CreateBuffer(sectors.GetSectors().size() * sectorSize, //Buffer size
                                                bindFlags,
                                                D3D11_USAGE_DEFAULT,                    //usage
                                                0,                                      //CPU access flags
                                                D3D11_RESOURCE_MISC_BUFFER_STRUCTURED,  //misc flags
                                                sectorSize);                            //structure byte stride

intermediateUAW = Utils::DirectX::CreateUnorderedAccessView(intermediateData,
                                                            D3D11_BUFFER_UAV{0, (UINT)sectors.GetSectors().size(), 0});

intermediateSRV = Utils::DirectX::CreateShaderResourceView(intermediateData,
                                                           D3D11_BUFFEREX_SRV{0, (UINT)sectors.GetSectors().size(), 0});

The first Compute shader determines the visibility of a sector and selects its level of detail. In outline it looks like this (the resource declarations and the dot product computation are assumptions here, marked in the comments; the range check and the LOD selection loop follow the original):

StructuredBuffer<Sector> inputData : register(t0);
RWStructuredBuffer<Sector> outputData : register(u0);

[numthreads(1, 1, 1)] // assumption: one thread group per sector
void Process(int3 TId : SV_DispatchThreadID)
{
    int ind = TId.x;
    Sector sector = inputData[ind];
    sector.lod = 0;

    // assumption: dotVal is the dot product between the normalized
    // "sphere center -> viewer" vector and the sector's normCenter
    float dotVal = dot(normalize(viewPosition - sphereCenter), sector.normCenter);

    if(!(dotVal < dotRange.minVal || dotVal > dotRange.maxVal) && IsVisible(sector.normCenter))
    {
        for(int l = 0; l < 4; l++){
            Lod lod = lods[l];
            if(dotVal >= lod.dotRange.minVal && dotVal <= lod.dotRange.maxVal)
                sector.lod = l + 1;
        }
    }

    outputData[ind] = sector;
}
After calculating the dot product, we check whether the sector is in the potentially visible region. Next, we confirm its visibility with the IsVisible() call, which is identical to the Frustum::TestSphere() call shown earlier. Its operation depends on the worldView, sphereRadius, frustumPlanesPosV and frustumPlanesNormalsV variables, whose values must be passed to the shader in advance. Next we determine the level of detail. Please note that the level index starts from one - this is necessary in order to discard, at the second stage, those sectors whose level of detail equals zero.

Now we need to prepare the buffers for the second stage. We want to use a buffer as an output for the Compute shader and as an input for the tessellator - for this we specify the value D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_VERTEX_BUFFER in the BindFlags field. We will have to work with the buffer data directly, so we set D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS in the MiscFlags field. For the view of such a buffer we use DXGI_FORMAT_R32_TYPELESS as the format together with D3D11_BUFFER_UAV_FLAG_RAW in the Flags field, and in the NumElements field we specify the size of the entire buffer divided by four:

size_t instancesByteSize = instanceByteSize * sectors.GetSectors().size();

outputData = Utils::DirectX::CreateBuffer(instancesByteSize,
                                          D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_VERTEX_BUFFER,
                                          D3D11_USAGE_DEFAULT,
                                          0,
                                          D3D11_RESOURCE_MISC_BUFFER_ALLOW_RAW_VIEWS,
                                          0);

D3D11_BUFFER_UAV uavParams = {0, instancesByteSize / 4, D3D11_BUFFER_UAV_FLAG_RAW};
outputUAW = Utils::DirectX::CreateUnorderedAccessView(outputData, uavParams, DXGI_FORMAT_R32_TYPELESS);
We also need a counter. With its help we address memory in the shader, and its final value is used as the instanceCount argument of the DrawIndexedInstanced() call. I implemented the counter as a buffer holding a single UINT. Also, when creating the view, I set the value D3D11_BUFFER_UAV_FLAG_COUNTER in the Flags field of the D3D11_BUFFER_UAV structure:

counter = Utils::DirectX::CreateBuffer(sizeof(UINT),
                                       D3D11_BIND_UNORDERED_ACCESS,
                                       D3D11_USAGE_DEFAULT,
                                       0,
                                       D3D11_RESOURCE_MISC_BUFFER_STRUCTURED,
                                       4);

D3D11_BUFFER_UAV uavParams = {0, 1, D3D11_BUFFER_UAV_FLAG_COUNTER};
counterUAW = Utils::DirectX::CreateUnorderedAccessView(counter, uavParams);
It's time to give the code for the second shader

StructuredBuffer<Sector> inputData : register(t0);
RWByteAddressBuffer outputData : register(u0);
RWStructuredBuffer<uint> counter : register(u1);

[numthreads(1, 1, 1)] // assumption: one thread group per sector
void Process(int3 TId : SV_DispatchThreadID)
{
    int ind = TId.x;
    Sector sector = inputData[ind];

    if(sector.lod != 0){

        sector.borderTessFactor[0] = GetNeibTessFactor(sector, 0); //Bottom
        sector.borderTessFactor[1] = GetNeibTessFactor(sector, 1); //Left
        sector.borderTessFactor[2] = GetNeibTessFactor(sector, 2); //Top
        sector.borderTessFactor[3] = GetNeibTessFactor(sector, 3); //Right

        int c = counter.IncrementCounter();

        uint dataSize = 52; // 13 floats per instance, as implied by the offsets below

        outputData.Store(c * dataSize + 0,  asuint(sector.startPos.x));
        outputData.Store(c * dataSize + 4,  asuint(sector.startPos.y));
        outputData.Store(c * dataSize + 8,  asuint(sector.startPos.z));
        outputData.Store(c * dataSize + 12, asuint(sector.vec1.x));
        outputData.Store(c * dataSize + 16, asuint(sector.vec1.y));
        outputData.Store(c * dataSize + 20, asuint(sector.vec1.z));
        outputData.Store(c * dataSize + 24, asuint(sector.vec2.x));
        outputData.Store(c * dataSize + 28, asuint(sector.vec2.y));
        outputData.Store(c * dataSize + 32, asuint(sector.vec2.z));
        outputData.Store(c * dataSize + 36, asuint(sector.borderTessFactor[0]));
        outputData.Store(c * dataSize + 40, asuint(sector.borderTessFactor[1]));
        outputData.Store(c * dataSize + 44, asuint(sector.borderTessFactor[2]));
        outputData.Store(c * dataSize + 48, asuint(sector.borderTessFactor[3]));
    }
}

For the DrawIndexedInstancedIndirect() and CopyStructureCount() methods, please refer to Appendix 2.
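For orientation, here is a minimal sketch of how the two passes might be dispatched from the CPU side (lodCS, borderCS, inputSRV and sectorsCount are assumed names, and the thread group layout is an assumption; the article does not show this binding code):

// First pass: frustum culling and LOD selection
ID3D11DeviceContext* dc = DeviceKeeper::GetDeviceContext();

dc->CSSetShader(lodCS, nullptr, 0);              // assumed shader object
dc->CSSetShaderResources(0, 1, &inputSRV);       // SRV over inputData (assumed name)
UINT keepCounter = (UINT)-1;
dc->CSSetUnorderedAccessViews(0, 1, &intermediateUAW, &keepCounter);
dc->Dispatch(sectorsCount, 1, 1);                // one group per sector (assumed)

// Second pass: border tessellation factors and compaction into the instance buffer
dc->CSSetShader(borderCS, nullptr, 0);           // assumed shader object
dc->CSSetShaderResources(0, 1, &intermediateSRV);
ID3D11UnorderedAccessView* uavs[] = { outputUAW, counterUAW };
UINT initialCounts[]              = { (UINT)-1, 0 }; // reset the hidden counter to zero
dc->CSSetUnorderedAccessViews(0, 2, uavs, initialCounts);
dc->Dispatch(sectorsCount, 1, 1);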

12. Camera

Surely you know how to build a model of a simple FPS (First Person Shooter) camera. I follow this scenario:
  • 1. From two angles I get a direction vector
  • 2. Using the direction vector and the vector (0, 1, 0) I get the basis
  • 3. Using the direction vector and the right vector obtained in step 2, I change the camera position (a small sketch follows this list)
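For reference, a minimal sketch of such a standard FPS basis, written with the same Vector3 helpers that appear in the camera code later in this section (the names yaw, pitch and worldUp, and the angle convention, are my assumptions, not taken from the article):

// standard FPS camera basis - the flat-world case, not the planetary one
Vector3 worldUp(0.0f, 1.0f, 0.0f);               // assumed constructor taking three floats

// 1. direction vector from the two angles
Vector3 dir = Vector3::Normalize(Vector3(sinf(yaw) * cosf(pitch),
                                         -sinf(pitch),
                                         cosf(yaw) * cosf(pitch)));

// 2. basis from the direction vector and (0, 1, 0)
Vector3 right = Vector3::Normalize(Vector3::Cross(worldUp, dir));
Vector3 up    = Vector3::Normalize(Vector3::Cross(dir, right));

// 3. move the position along the direction and the right vector
pos = pos + (dir * moveFactor + right * sideFactor) * speed;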

In our case, the situation is somewhat more complicated: firstly, we must move relative to the center of the planet, and secondly, when constructing the basis, instead of the vector (0, 1, 0) we must use the normal of the sphere at the point where we currently are. To achieve the desired result, I use two bases. The first is used to update the position; the second describes the orientation of the camera. The bases are interdependent, but I calculate the position basis first, so I will start with it. Let's assume that we have an initial position basis (pDir, pUp, pRight) and a direction vector vDir along which we want to move some distance. First of all, we need to calculate the projections of vDir onto pDir and pRight. Adding them up, we get an updated direction vector (Fig. 21).


Fig. 21 Visual process of obtaining projDir
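A sketch of the update step, consistent with the camera code at the end of this section (the time step and speed factors used in that code are omitted here for brevity):

$$\vec{P_N} = \vec{P} + \overrightarrow{projDir}\, m_F + \overrightarrow{pRight}\, m_S$$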

Where P is the camera position, and mF and mS are coefficients specifying how far we need to move forward or sideways.

We cannot use PN as a new camera position because PN does not belong to the sphere. Instead, we find the normal of the sphere at point PN, and this normal will be the new value of the upward vector. Now we can form an updated basis

Vector3 nUp = Vector3::Normalize(PN - spherePos);
Vector3 nDir = projDir;
Vector3 nRight = Vector3::Normalize(Vector3::Cross(nUp, nDir));
where spherePos is the center of the sphere.

We need to make sure that each of its vectors is orthogonal to the other two. According to the cross product property, nRight satisfies this condition. It remains to achieve the same for nUp and nDir. To do this, project nDir onto nUp and subtract the resulting vector from nDir (Fig. 22)


Fig.22 Orthogonalization of nDir with respect to nUp

We could do the same with nUp, but then it would change its direction, which in our case is unacceptable. Now we normalize nDir and get an updated orthonormal direction basis.
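In formula form this is the usual Gram-Schmidt step; it matches the pDir update in the camera code below:

$$\overrightarrow{nDir} \leftarrow \operatorname{normalize}\bigl(\overrightarrow{nDir} - \overrightarrow{nUp}\,(\overrightarrow{nUp}\cdot\overrightarrow{nDir})\bigr)$$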

The second key stage is the construction of the orientation basis. The main difficulty is obtaining the direction vector. The most suitable solution is to convert a point with polar angle a, azimuth angle b and distance from the origin equal to one from spherical coordinates to Cartesian. However, if we carry out such a conversion for a point with a polar angle of zero, we get a vector pointing up. This does not entirely suit us, since we will be incrementing the angles and assuming that such a vector looks forward. Simply shifting the angle by 90 degrees would solve the problem, but it is more elegant to use the angle shift rule.
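The rule in question is the standard pair of reduction formulas:

$$\sin\left(\alpha + \frac{\pi}{2}\right) = \cos\alpha, \qquad \cos\left(\alpha + \frac{\pi}{2}\right) = -\sin\alpha$$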

Let's do so. As a result, we get the following

Where a is the polar angle, b is the azimuth angle.

This result is not entirely suitable for us - we need to construct a direction vector relative to the position basis. Let's rewrite the equation for vDir:

Everything is spelled out, astronaut-style: so much in this direction, so much in that. It should now be obvious that if we replace the standard basis vectors with pDir, pUp and pRight, we get the direction we need. Like this:
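In terms of the angle variables used in the camera code at the end of this section (my reading: angles.x plays the role of the azimuth angle and angles.y the deviation from the horizon):

$$\overrightarrow{vDir} = \overrightarrow{pRight}\,\sin(\text{angles.x})\cos(\text{angles.y}) \;-\; \overrightarrow{pUp}\,\sin(\text{angles.y}) \;+\; \overrightarrow{pDir}\,\cos(\text{angles.x})\cos(\text{angles.y})$$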

You can represent the same thing in the form of matrix multiplication

The vector vUp will initially be equal to pUp. By calculating the cross product of vUp and vDir, we get vRight

Now we will make sure that vUp is orthogonal to the rest of the basis vectors. The principle is the same as when working with nDir

We've sorted out the bases - all that remains is to calculate the camera position. It's done like this
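In formula form (this matches the pos assignment in the camera code below):

$$\vec{pos} = \overrightarrow{spherePos} + \overrightarrow{pUp}\,(sphereRadius + height)$$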

Where spherePos is the center of the sphere, sphereRadius is the radius of the sphere and height is the height above the surface of the sphere. Here is the operating code for the described camera:

float moveFactor = 0.0f, sideFactor = 0.0f, heightFactor = 0.0f;

DirectInput::GetInsance()->ProcessKeyboardDown({
    {DIK_W, [&](){ moveFactor = 1.0f; }},
    {DIK_S, [&](){ moveFactor = -1.0f; }},
    {DIK_D, [&](){ sideFactor = 1.0f; }},
    {DIK_A, [&](){ sideFactor = -1.0f; }},
    {DIK_Q, [&](){ heightFactor = 1.0f; }},
    {DIK_E, [&](){ heightFactor = -1.0f; }}
});

if(moveFactor != 0.0f || sideFactor != 0.0f){

    Vector3 newDir = Vector3::Normalize(pDir * Vector3::Dot(pDir, vDir) +
                                        pRight * Vector3::Dot(pRight, vDir));

    Point3F newPos = pos + (newDir * moveFactor + pRight * sideFactor) * Tf * speed;

    pUp = Vector3::Normalize(newPos - spherePos);
    pRight = Vector3::Normalize(Vector3::Cross(pUp, pDir)); // assumed: restore pRight from the new up vector
    pDir = Vector3::Normalize(pDir - pUp * Vector3::Dot(pUp, pDir));

    pos = spherePos + pUp * (sphereRadius + height);

    angles.x = 0.0f; // reset the azimuth angle after the position basis has been updated (see the note below)
}

if(heightFactor != 0.0f){
    height = Math::Saturate(height + heightFactor * Tf * speed, heightRange);
    pos = spherePos + pUp * (sphereRadius + height);
}

DirectInput::MouseState mState = DirectInput::GetInsance()->GetMouseDelta();

if(mState.x != 0 || mState.y != 0 || moveFactor != 0.0f || sideFactor != 0.0f){

    if(mState.x != 0)
        angles.x = angles.x + mState.x / 80.0f;

    if(mState.y != 0)
        angles.y = Math::Saturate(angles.y + mState.y / 80.0f,
                                  RangeF(-Pi * 0.499f, Pi * 0.499f));

    vDir = Vector3::Normalize(pRight * sinf(angles.x) * cosf(angles.y) +
                              pUp * -sinf(angles.y) +
                              pDir * cosf(angles.x) * cosf(angles.y));

    vUp = pUp;
    vRight = Vector3::Normalize(Vector3::Cross(vUp, vDir));
    vUp = Vector3::Normalize(vUp - vDir * Vector3::Dot(vDir, vUp));

    viewMatrix = Matrix4x4::Inverse({{vRight, 0.0f},
                                     {vUp,    0.0f},
                                     {vDir,   0.0f},
                                     {pos,    1.0f}});
}
Note that we reset angles.x after we've updated the position basis. This is critical. Imagine that we simultaneously change the viewing angle and move around the sphere. First we project the direction vector onto pDir and pRight, get the new position (newPos) and update the position basis from it. The second condition is also triggered, and we begin to update the orientation basis. But since pDir and pRight have already been changed according to vDir, without resetting the azimuth angle (angles.x) the rotation would turn out "steeper" than intended.

Conclusion

I thank you, the reader, for your interest in this article. I hope that the information in it was accessible, interesting and useful. You can send suggestions and comments to me at [email protected] or leave them in the comments.

I wish you success!

Appendix 1

The InstanceDataStepRate field specifies how many instances are drawn using one D3D11_INPUT_PER_INSTANCE_DATA element. In our example everything is simple - one to one. "But why would we need to draw the same thing several times?" - you ask. A reasonable question. Armenian radio answers: suppose we have 99 balls of three different colors. We can describe the vertex this way:

UINT colorsRate = 99 / 3;

std::vector<D3D11_INPUT_ELEMENT_DESC> meta = {
    {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,  D3D11_INPUT_PER_VERTEX_DATA,   0},
    {"NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 12, D3D11_INPUT_PER_VERTEX_DATA,   0},
    {"TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, 24, D3D11_INPUT_PER_VERTEX_DATA,   0},
    {"WORLD",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0,  D3D11_INPUT_PER_INSTANCE_DATA, 1},
    {"WORLD",    1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D11_INPUT_PER_INSTANCE_DATA, 1},
    {"WORLD",    2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D11_INPUT_PER_INSTANCE_DATA, 1},
    {"WORLD",    3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D11_INPUT_PER_INSTANCE_DATA, 1},
    {"COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 2, 0,  D3D11_INPUT_PER_INSTANCE_DATA, colorsRate},
};
Please note that the vertex is collected from three sources, and the data of the latter is updated once every 33 “instances”. As a result, we will get 33 instances of the first color, another 33 of the second, etc. Now let's create the buffers. Moreover, since the colors will not change, we can create a buffer with colors with the D3D11_USAGE_IMMUTABLE flag. This means that after the buffer is initialized, only the GPU will have read-only access to its data. Here is the code for creating the buffers:

matricesTb = Utils::DirectX::CreateBuffer(sizeof(Matrix4x4) * 99,
                                          D3D11_BIND_VERTEX_BUFFER,
                                          D3D11_USAGE_DYNAMIC,
                                          D3D11_CPU_ACCESS_WRITE);

colorsTb = Utils::DirectX::CreateBuffer(colors, D3D11_BIND_VERTEX_BUFFER, D3D11_USAGE_IMMUTABLE, 0);
Then, if necessary, we update the buffer with the matrices (I use my library's helper functions - I hope everything will be clear):

Utils::DirectX::Map<Matrix4x4>(matricesTb, [&](Matrix4x4 *Data)
{
    // first we write the data for the balls of the first color into the buffer,
    // then for the second, and so on. Note that the correspondence of the data
    // to the colors must be ensured when the buffer contents are generated
});
Data access in the shader can be implemented in the same way as I described earlier


Appendix 2

Unlike DrawIndexedInstanced(), the DrawIndexedInstancedIndirect() call takes as an argument a buffer containing all the information you would otherwise pass to DrawIndexedInstanced(). Moreover, this buffer must be created with the D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS flag. Here is an example of creating such a buffer:

//indicesCnt - the number of indices that we want to draw
//instancesCnt - the number of "instances" that we want to draw

std::vector<UINT> args = {
    indicesCnt,   //IndexCountPerInstance
    instancesCnt, //InstanceCount
    0,            //StartIndexLocation
    0,            //BaseVertexLocation
    0             //StartInstanceLocation
};

D3D11_BUFFER_DESC bd = {};
bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(UINT) * args.size();
bd.BindFlags = 0;
bd.CPUAccessFlags = 0;
bd.MiscFlags = D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS;
bd.StructureByteStride = 0;

ID3D11Buffer* buffer;
D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = &args[0]; // the missing initializer is assumed to point at the args data

HR(DeviceKeeper::GetDevice()->CreateBuffer(&bd, &initData, &buffer));

An example of calling DrawIndexedInstancedIndirect():

DeviceKeeper::GetDeviceContext()->DrawIndexedInstancedIndirect(indirectArgs, 0);
As the second argument we pass the offset in bytes from the beginning of the buffer at which the arguments start. How can this be used? For example, when implementing the culling of invisible geometry on the GPU. The general sequence is as follows: first, in a Compute shader, we fill an AppendStructuredBuffer with the data of the visible geometry. Then, using CopyStructureCount(), we set the number of instances we want to draw to the number of entries in this buffer and call DrawIndexedInstancedIndirect().
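A minimal sketch of that sequence, assuming the Compute shader has appended the visible instances through a UAV created with the counter/append flag (visibleUAV is a hypothetical name; indirectArgs and DeviceKeeper come from the code above):

// copy the hidden counter of the append/counter UAV into the InstanceCount
// slot of the indirect arguments buffer; InstanceCount is the second UINT,
// hence the byte offset sizeof(UINT)
DeviceKeeper::GetDeviceContext()->CopyStructureCount(indirectArgs, sizeof(UINT), visibleUAV);

// the GPU reads IndexCountPerInstance, InstanceCount and the rest directly
// from indirectArgs
DeviceKeeper::GetDeviceContext()->DrawIndexedInstancedIndirect(indirectArgs, 0);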


Appendix 3

Let's assume that the value of the x coordinate is equal to the result of the function X with argument a, and the value of the z coordinate is the result of function Z with the same argument:

Now we need to calculate the derivative of each function. Essentially, the derivative of a function at a given point is equal to the rate of change of the function's values at that point. According to the rules for differentiating trigonometric functions:
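As an illustration, assuming for instance X(a) = sin a and Z(a) = cos a (a choice consistent with the trigonometric differentiation rules invoked here), we would get:

$$X'(a) = \cos a, \qquad Z'(a) = -\sin a$$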

Which ultimately gives us the same result:

Why can we use velocity values as components of a direction vector? This is how I understand it. Imagine we have a vector function (for t >= 0):

Let's calculate the derivative for the X coordinate

Now for Y

We have found that the velocity vector is equal to (2, 3), now let’s find the starting point

As a result, we can express the function P(t) as follows:
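A form consistent with the values found above and the description below (starting point (3, 2), velocity (2, 3)) is:

$$P(t) = (3 + 2t,\; 2 + 3t), \qquad P'(t) = (2,\; 3), \qquad P(0) = (3,\; 2)$$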

Which in simple words can be described as follows: "the point moves from the starting point with coordinates (3, 2), over time t, in the direction (2, 3)". Now let's take another example:

Let's calculate the derivative for the X coordinate again

And for the Y coordinate

Now the velocity vector changes depending on the argument. In this case, in simple words, the situation can be described as follows: "The point moves from the starting point with coordinates (3, 2), and the direction of its movement is constantly changing."


Appendix 4

Let's define a function F(H) that will take the height in the area and return a value between 0 and 1, where F(Hmin) = 0 and F(Hmax) = 1. Solving the system of equations

I got

As a result, function F takes the form
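Assuming the simplest, linear form of F, the conditions F(Hmin) = 0 and F(Hmax) = 1 give:

$$F(H) = \frac{H - H_{min}}{H_{max} - H_{min}}$$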

Now we need a function that takes a height factor between 0 and 1 and returns the minimum allowed value of the dot product. We need to take into account that the closer the observer is to the surface, the larger this minimum value is. This results in the following equation:

Let's open the brackets

Let's simplify and get

Now we express D(F(H)) and get

Tags:

  • directx11
  • terrain render

During the evolution of the Earth, changes in the appearance of land landscapes were a reaction to the transformation of natural conditions. All the diversity of the geographical envelope, known as geosystems, landscapes or natural complexes, reflects the results of various manifestations of temperature and humidity, which in turn are subject to radiation balance.

These dynamic systems of different ranks, characterized by integrity, special interaction of their constituent elements and functioning, productivity and appearance, collectively form the geographical envelope and relate to it as parts of the whole. They have their own natural (natural resource) potential, measurements of which make it possible to rank geosystems and study their changes. The unifying principle of these structures is the exchange of flows of matter and energy, their partial accumulation and consumption. Thus, energy and mass exchange within the geographical envelope serves as the basis for its differentiation, and its changes are reflected in the appearance of the earth's surface. This process ensures the modern geographic zonation and zonality of the Earth and the diversity of specific landscapes of varying degrees of organization.

However, during the evolution of the geographic envelope, changes in its terrestrial systems were also associated with deep processes and phenomena, partly expressed on the surface (zones of volcanism, seismicity, mountain building, etc.). At the same time, along with direct changes in the lithogenic base of landscapes and the geographic envelope as a whole, the latter received additional matter and energy, which was reflected in the functioning of its individual components and the system as a whole. This "complementarity" (at some times, probably significant) manifested itself not only quantitatively, in the global circulation of matter and energy, but also in qualitative changes in individual components. The role of the processes of degassing of the Earth and their energy and mass exchange with the atmosphere and hydrosphere has not yet been sufficiently studied. Only from the middle of the 20th century did information appear on the material composition of mantle matter and its quantitative characteristics.

Research by V.I. Bgatov has established that atmospheric oxygen is not so much of photosynthetic origin as of deep origin. The generally accepted scheme of the carbon cycle in nature must be corrected by the supply of its compounds from the bowels of the earth, in particular during volcanic eruptions. Apparently, no smaller amounts of the substance enter the water shell during underwater eruptions, especially in spreading zones, volcanic island arcs and in individual hot spots. The total annual amount of carbon compounds coming from the subsoil into the ocean and atmosphere is comparable to the mass of annual carbonate formation in water bodies and, apparently, exceeds the volume of accumulation of organic carbon by land plants.

Natural climate warming and its anthropogenic intensification should cause a shift in the boundaries of geographic zones and zones and contribute to the modification of individual landscapes.

However, the development of human society and the expansion of its needs and capabilities lead to the artificial restructuring of natural complexes of different scales and the formation of cultural landscapes that affect the functioning of the geographical envelope, disrupting the natural course. Among these impacts, the most obvious are the following:

1) The creation of reservoirs and irrigation systems changes the surface albedo, the regime of heat and moisture exchange, which, in turn, affects air temperature and cloudiness.

2) Conversion of land to agricultural land or destruction of vegetation (massive deforestation) changes albedo and thermal conditions, disrupts the cycle of substances due to a reduction in active surfaces for photosynthesis. The most significant impact in terms of scale was the massive development of virgin and fallow lands, when many millions of hectares of green pastures and fallow lands were plowed and sown. The increase in the absorption capacity of the earth's surface, the disruption of its roughness and the continuity of soil and vegetation cover changed the radiation balance, caused a transformation in the circulation of air masses and increased winds, which led to dust storms and a decrease in the transparency of the atmosphere. The result of the transformations was the transfer of stable productive landscapes to unstable ones with the intensification of desertification processes and risk in land use.

3) Redistribution of surface runoff (flow regulation, creation of dams and reservoirs) most often leads to swamping of surrounding areas. At the same time, the albedo of the underlying surface changes, humidity, the frequency of fogs, cloudiness and air permeability increase, which disrupts the natural heat and mass transfer between the earth's surface and the atmosphere. The damming of water flow and the formation of swampy spaces change the nature of the decomposition of plant litter, which causes the entry of additional quantities of greenhouse gases (carbon dioxide, methane, etc.) into the atmosphere, changing its composition and transparency.

4) The creation of hydropower structures on rivers and damming with the formation of cascades of year-round falling water change the annual regime of rivers, disrupt the ice situation and the distribution of transported sediments, and transform the river-atmosphere system. Non-freezing reservoirs with constant fog and evaporation from the water surface (even in winter) affect the course of temperatures and the circulation of water masses, worsening weather conditions and changing the habitat of living organisms. The impact of hydroelectric power stations on large rivers (Yenisei, Angara, Kolyma, Volga, etc.) is felt tens of kilometers downstream and on all dammed parts of reservoirs, and general changes in climate cover hundreds of square kilometers. The slowed supply of river sediments and their redistribution lead to the disruption of geomorphological processes and the destruction of river mouths and the banks of water bodies (for example, the destruction of the Nile Delta and the southeastern part of the Mediterranean coast after the construction of the Aswan Dam and its interception of a significant part of the solid sediment carried by the river).

5) Reclamation work, accompanied by the drainage of large areas, disrupts the existing regime of heat and moisture exchange and contributes to the development of negative feedbacks during the transformation of landscapes. Thus, the over-drying of marshy systems in a number of regions (Polesie, the Novgorod region, the Irtysh region) led to the death of the natural vegetation cover and the emergence of deflation processes, which even in areas of sufficient moisture formed shifting sands. As a result, the dustiness of the atmosphere increased, the surface roughness increased, and the wind regime changed.

6) An increase in the roughness of the earth's surface during the construction of various structures (buildings, mine workings and dumps, industrial storage, etc.) leads to changes in wind conditions, dust levels and weather and climatic characteristics.

7) Various pollutants entering in huge quantities into all natural environments change, first of all, the material composition and energy capacities of air, water, surface formations, etc. This change in natural agents determines the transformation of the natural processes they carry out, as well as various interactions with the environment environment and other natural factors.

Note that the summation of annual emissions of pollutants is not theoretically and practically fully justified, since as they enter the geographic environment they are assimilated, transformed under the influence of each other and function differently. It is important to analyze each major anthropogenic release, taking into account its reactions with existing compounds.

A change in the energy of the geographical shell or its parts causes a restructuring of the internal structure and processes of functioning of the geosystem and related phenomena. This process is complex and is regulated by multiple direct and feedback connections (Fig. 9.4). Anthropogenic impacts on the geographical environment cause changes in the composition and state of the environment, disrupt the quantitative and qualitative composition of living matter (up to mutations), and modify the existing systems of energy, mass and moisture exchange. However, currently available evidence suggests that anthropogenic changes do not fundamentally affect the geographic envelope. The relative balance of its existence and the sustainability of development are mainly ensured by natural causes, the scale of which exceeds human influence. It does not follow from this that the geographical envelope itself will always overcome the increasing anthropogenic pressure. Interventions in nature must be regulated from the point of view of the expediency of their manifestations - for the benefit of humanity and without significant harm to the natural environment. The concepts being developed in this direction are called sustainable (balanced) development. They should be based on general geological patterns and features of the current state and development of the geographical envelope.

In conclusion, let us touch upon the emerging claim that the modern geographical envelope is becoming the anthroposphere, or part of the emerging noosphere. Note that the concept of the "noosphere" is largely philosophical in nature. The human impact on the environment and the involvement of waste products in it is an undeniable phenomenon. It is important to understand that most often a person changes his habitat not consciously, but through unforeseen consequences. Moreover, these interventions are not aimed at all components of the geographical envelope, but only at those needed by people (forest, soil, raw materials, etc.). Thus, there are only pockets of change, although sometimes very significant and serious, and although human activity increases, nature still develops mainly under the influence of natural processes. Therefore, at present we should speak of certain areas of the geographical envelope where the natural environment has been significantly changed and is developing under the influence of human-regulated processes.

Fig. 9.4. Some feedbacks governing global climate

Control questions

What phenomena are classified as global changes in the geographical envelope?

What are the specifics of global changes at the end of the 20th and beginning of the 21st centuries?

What is the greenhouse effect and what are its consequences?

What is the general problem of the anthropogenization of the geographical envelope?

What is the problem with climate warming?

What are the dangers of oil pollution?

What is the global environmental crisis, how and where does it manifest itself?

What is the meaning of optimistic and pessimistic views on the development of planet Earth?

What impact does polar ice have on the geographical envelope?

What are terrestrial landscape changes?

The surface of the Earth does not remain unchanged. During the millions of years that our planet has existed, its appearance has been constantly influenced by various natural forces. The changes that occur on the Earth's surface are caused both by internal forces and by what happens in the atmosphere.

Thus, mountains were formed as a result of the movement of the earth's crust. The rock masses were pushed to the surface, crushed and broken, resulting in the formation of various types of mountains. As time passed, rain and frost crushed the mountains, creating separate cliffs and valleys.

Some mountains were formed as a result of volcanic eruptions. The molten rock bubbled out onto the Earth's surface through holes in the crust, layer by layer, until eventually a mountain emerged. Vesuvius in Italy is a mountain of volcanic origin.

Volcanic mountains can also form underwater. For example, the Hawaiian Islands are the peaks of volcanic mountains.

The sun, wind and water cause constant destruction of rocks. This process is called erosion. But it can affect not only rocks. Thus, erosion by ice, wind and water washes away the earth's soil.

In places where they slide into the sea, glaciers cut the plains, forming valleys and fjords - narrow and winding sea bays.

Fjords were formed during the Ice Age, when the continents were covered with a thick layer of ice and snow.

This ice, in turn, caused the formation of glaciers, which are slow-moving rivers of ice.

Sliding from the mountains into the valleys, glaciers, the thickness of the ice in which sometimes reached several tens of meters, made their way. The force of their movement was very great.

At first, narrow gorges formed along the path of the glaciers, then the monstrous force of the glacier enlarged them, opening the way down. Gradually this space became deeper and wider.

After the end of the Ice Age, ice and snow began to melt. As the ice melted, the width of the rivers increased. At the same time, sea levels were rising. Thus, fjords were formed in place of the rivers.

The shores of fjords are usually rocky slopes, sometimes reaching heights of 1,000 meters (3,000 feet).

Some fjords are so deep that ships can sail through them.

A large number of fjords are located on the coasts of Finland and Greenland. But the most beautiful fjords are in Norway. The longest fjord is also in Norway. It is called Sognefjord. Its length is 180 kilometers (113 miles).

When the ice melts, moraines—accumulations of rock fragments—remain behind and form zigzag mountain peaks. Rivers carve ravines into loose rocks, and in some places, huge canyons (deep river valleys with steep stepped slopes), such as the Grand Canyon in Arizona (USA). It extends 349 kilometers in length.

The rains and winds are true sculptors and carve real sculptural groups and various figures. In Australia there are the so-called Wind Cliffs, and not far from Krasnoyarsk there are stone pillars. Both were formed as a result of wind erosion.

Erosion of the earth's surface is far from a harmless process. Every year, because of it, many tens of hectares of arable land disappear. A large amount of fertile soil, whose formation under natural conditions takes hundreds of years, is carried into the rivers. Therefore, people try in every possible way to fight erosion.

The main direction of this fight is to prevent soil erosion. If there is no vegetation cover on the soil, then wind and water easily carry away the fertile layer and the land becomes infertile. Therefore, in areas with intense winds, conservation methods of land cultivation are used, for example, no-moldboard plowing.

In addition, the fight against ravines is underway. For this purpose, river banks are planted with various plants and the slopes are strengthened. On sea and river coasts, where severe erosion of the coast occurs, a special gravel dump is made and protective dams are installed to prevent the transfer of sand.