Principles of visualization using CGI graphics. How computer graphics become indistinguishable from reality. 3D graphic editors

The main feature of this remake of Disney's 1967 feature-length cartoon is not even the voices of Scarlett Johansson, Idris Elba and Christopher Walken (which Russian viewers won't hear in the dubbed version anyway), but the fact that across 105 minutes of strikingly realistic film only one living person appears on screen: Mowgli, played by debutant Neel Sethi. All the other characters were created with computer graphics, for which director Jon Favreau has already received an award from PETA, since during filming not a single animal was harmed or even worked on set.

What happened before

The first film made entirely with computer animation (CGI) was the short Hummingbird, released in Belgium in 1967. Back then no one could have guessed what future awaited the new technology. Until the early 1990s computer graphics, like the entire IT sector, developed at a very slow pace by today's standards. The breakthrough was Jurassic Park (1993), with its realistic CGI dinosaurs. Two years later Toy Story was released: the first full-length cartoon made on a computer from start to finish.

The year 2001 became a turning point in the history of CGI, when graphics split into two directions. Shrek was released, whose characters looked realistic on the one hand, yet still stylized on the other. At the same time came the science fiction film Final Fantasy, which marked the beginning of photorealism in CGI: the drive to create characters indistinguishable from real living beings. The trend was continued by The Lord of the Rings: The Two Towers, Beowulf, Avatar, Life of Pi and, finally, The Jungle Book.

What's new in The Jungle Book

When creating The Jungle Book, Favreau and his team made full use of the latest advances in CGI. The director already had extensive experience with computer graphics thanks to his work on Iron Man, but with The Jungle Book he wanted to go even further: to tell the tale entirely in photorealistic images. We saw something similar in Life of Pi, when some viewers at first refused to believe that the tiger in the film was completely computer-generated. In The Jungle Book, not only the tiger (very impressive, by the way, and quite scary for a children's film) was created with CGI, but the entire jungle as well. Special effects development was led by Rob Legato, who previously worked on the computer graphics for Avatar.

How real footage and graphics are combined

An oversaturated CG color scheme that clashes with the palette of the live-action footage destroys all sense of realism, and the rendered characters simply fall out of the scene. That is why the most important process in creating computer animation is compositing (from the English compositing, "assembling"). At this stage, 3D models are integrated into the surrounding reality.

Compositing combines character models with background footage and other elements of the frame, including shots of live actors (usually filmed against a green background using chroma key). First the various video layers are superimposed on one another, then their brightness is equalized and color correction is applied.
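The heart of this stage, the "over" operator plus a simple brightness match, can be sketched in a few lines of Python. This is an illustrative toy, not the pipeline of any real studio; the pixel values and function names are invented for the example:

```python
def composite_over(fg, fg_alpha, bg):
    """Alpha 'over' operator: blend a foreground pixel onto a background pixel.
    fg and bg are (r, g, b) tuples with channels in [0, 1]; fg_alpha in [0, 1]."""
    return tuple(f * fg_alpha + b * (1.0 - fg_alpha) for f, b in zip(fg, bg))

def match_brightness(pixel, layer_mean, target_mean):
    """Crude brightness equalization: scale a layer so its mean
    luminance matches the background layer's mean luminance."""
    gain = target_mean / layer_mean
    return tuple(min(1.0, c * gain) for c in pixel)

# A green-screen character pixel placed over a jungle background pixel:
character = (0.8, 0.6, 0.4)
jungle = (0.1, 0.4, 0.1)
blended = composite_over(character, 0.75, jungle)
```

Real compositing packages do this per pixel over many layers, with far more sophisticated color management, but the layering idea is the same.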
The creators of The Jungle Book tried to make the border between reality and computer graphics as invisible as possible. New sets were built for each individual scene involving Mowgli, including a three-meter-tall jungle. The material filmed on these stages was then combined with computer models. In one scene, the hero first crawls through very real mud and then jumps onto an animal created with computer graphics, which helps him escape from an equally computer-generated Shere Khan. Even a specialist finds it difficult to tell where reality ends and digital animation begins.

Realistic movements and rigging

All the efforts of brilliant artists, 3D modelers and compositors can be undone by unrealistic physics. Simulating motion in general is one thing; believable movement of living characters is quite another. The notorious scene in The Lord of the Rings featuring Legolas is, in terms of realism, perceived in much the same way as a Tom and Jerry cartoon. In recent years more and more technologies have appeared that compute the movements of living beings: simulating, for example, the deformation of human soft tissue during motion and giving body parts a sense of weight.

High-quality rigging (from the English rig, "harness" or "equipment") is also very important: the creation and tuning of a virtual skeleton with joints inside a three-dimensional character model. All the constituent elements of the animated figure (not only the limbs, but also the facial muscles, eyes, lips, and so on) are given properties, and a hierarchical relationship is built between them. Fine tuning is what makes truly realistic models possible.
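The hierarchical idea behind a rig can be sketched in Python: each joint stores only a local offset and rotation relative to its parent, and world positions are computed by walking up the chain. A minimal 2D sketch with invented joint names (real rigs use full 3D transforms and many more attributes):

```python
import math

class Joint:
    """One node of a virtual skeleton: a local offset from its parent
    plus a local rotation angle (2D, for simplicity)."""
    def __init__(self, name, parent=None, offset=(0.0, 0.0), angle=0.0):
        self.name, self.parent, self.offset, self.angle = name, parent, offset, angle

    def total_angle(self):
        # Rotations accumulate down the hierarchy.
        return self.angle if self.parent is None else self.angle + self.parent.total_angle()

    def world_position(self):
        # Rotate our local offset by the parent's accumulated rotation,
        # then add the parent's world position.
        if self.parent is None:
            return self.offset
        px, py = self.parent.world_position()
        c, s = math.cos(self.parent.total_angle()), math.sin(self.parent.total_angle())
        ox, oy = self.offset
        return (px + ox * c - oy * s, py + ox * s + oy * c)

# A tiny arm: rotating the shoulder moves the elbow and the hand with it.
shoulder = Joint("shoulder", offset=(0.0, 0.0), angle=math.pi / 2)
elbow = Joint("elbow", parent=shoulder, offset=(1.0, 0.0))
hand = Joint("hand", parent=elbow, offset=(1.0, 0.0))
```

Rotating only the shoulder carries the whole chain with it, which is exactly the hierarchical behavior riggers exploit.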

Motion capture

Motion capture is used to create a character's facial expressions and movements. The technology became widespread in the 1990s, after it was first used to create character animation for the computer game Virtua Fighter 2 in 1994. Motion capture began to be actively used in cinema in the 2000s (The Lord of the Rings, Beowulf, Avatar, Harry Potter, Life of Pi).

There are marker-based and markerless motion capture systems. Marker systems, which use special equipment, are the more popular: the actor wears a suit with sensors (for facial expressions, sensors are placed on the face), and the data from them is recorded and transferred to a computer. Markerless systems instead rely on computer vision and pattern recognition to record the data. The computer then combines the received information into a single three-dimensional model, and the appropriate animation is created from it.

Motion capture thus serves to transfer the movements and facial expressions of real actors onto computer models, which produces a portrait resemblance between the characters and the actors who played them. Thanks to motion capture, Gollum in The Lord of the Rings retained a resemblance to Andy Serkis, and Smaug to Benedict Cumberbatch. In The Jungle Book, by the way, not every character's face resembles the actor behind it. The boa constrictor Kaa, for example, took only a velvety voice from Scarlett Johansson; Jon Favreau explained in an interview that "giving a snake a face similar to a person's would be completely ridiculous."

Eyes and facial expressions

Photographic realism of characters is impossible without high-quality rendering of their facial expressions. Work in this area proceeds in two main directions: generating the appropriate animation and applying it to the characters. The animation itself is created, as a rule, with the same motion capture technique. A smooth change of a character's facial expression is achieved in Autodesk Maya and 3ds Max using the blendshape (morphing) technique.
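The blendshape idea itself is just weighted per-vertex interpolation between a neutral mesh and target expressions. A minimal sketch (the two-vertex "face" and the weights are invented for illustration; production packages evaluate thousands of vertices and dozens of targets):

```python
def blend_shapes(base, targets, weights):
    """Blendshape (morph target) evaluation: each target mesh has the same
    vertices as the neutral face; weighted per-vertex offsets from the
    neutral pose are summed to mix expressions smoothly."""
    result = [list(v) for v in base]
    for target, w in zip(targets, weights):
        for i, (b, t) in enumerate(zip(base, target)):
            for axis in range(3):
                result[i][axis] += w * (t[axis] - b[axis])
    return [tuple(v) for v in result]

# A hypothetical two-vertex "face": a neutral pose and a "smile" target.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile   = [(0.0, 0.2, 0.0), (1.0, 0.2, 0.0)]
half_smile = blend_shapes(neutral, [smile], [0.5])  # halfway expression
```

Animating the weights over time is what produces the smooth change of expression described above.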

Despite the rapid development of computer graphics in recent decades, for a long time there was no way to create realistic human eyes. In 2014, Disney proposed the following method to solve the problem: when capturing the eye's expression, place separate markers on the eyeball, cornea and retina, and then composite the obtained data and overlay it on a three-dimensional computer model of the eye.

Emotions and age

Disney specialists recently showed a test version of an unusual piece of software called FaceDirector: a kind of auto-tune for emotions. The program can combine, in real time, several takes depicting a whole palette of emotions, adjusting the actor's performance. It gives the director the ability, in post-production, to blend several facial expressions and to strengthen or soften the emotional intensity at any given moment in a scene.

Another development is digital cosmetics that can restore an actor's youth. An impressive demo was presented by VFX specialist Rousselos Aravantinos, who used a Nikon V1 camera and the programs NUKE and Mocha Pro. Similar tricks were performed in the film The Curious Case of Benjamin Button.

Hair and wool

Creating realistic fur and hair is a difficult technical task that animators have struggled with for a long time. Hair as a 3D model is a whole system that must maintain its overall integrity and character, while in motion each individual hair must behave independently and react to collisions with other hairs. Only relatively recently have animators learned to simulate fur that sways believably as an animal moves, and modern plugins for CGI packages, such as XGen, have simplified the task. This particular hair generator is known to have been used in the creation of Zootopia and Toy Story 3.

What programs are used to create special effects and who creates them?

Many large studios, such as Pixar and Disney, use their own software to create computer graphics, but they also turn to programs available to the general public, including Autodesk Maya, Adobe After Effects, Adobe Premiere, Luxology Modo and Houdini. Most of the special effects in Avatar, for example, were created in Maya, while Adobe After Effects was used for compositing.

As a rule, several companies work on the computer graphics for a large project. The creators of The Jungle Book turned to the services of the British studio MPC and New Zealand's Weta Digital. MPC also worked on Life of Pi, World War Z and all of the Harry Potter films; Weta Digital worked on the graphics in Avatar, The Avengers, The Hunger Games and The Lord of the Rings. Most companies specializing in special effects are registered in the USA and Britain, but many of them move part of their production to India and China, founding studios there or buying existing ones. Thus, in 2014 the British Double Negative and the Indian Prime Focus merged and then jointly created the graphics for Interstellar. Chinese and Indian special effects studios that are not part of large companies, however, are not yet as popular among filmmakers as Western ones, mainly for lack of sufficient experience and resources.

CGI in our everyday life

Complex technologies for creating computer animation are gradually becoming available to everyone. Recent examples include a program released in 2014 and a much-discussed Belarusian application, which superimpose animation in real time onto the face of the user or of people caught in the camera's lens. A similar feature is available in the Snapchat messenger. Such applications track the user's movements, analyze them and map the data onto three-dimensional models in real time; that is, they use methods similar to those that convey the facial expressions of characters in films and computer games.

On the one hand, the OpenSceneGraph engine itself has a developed subsystem for managing windows, processing user input events, sending and receiving user messages. We talked about this in some detail in previous articles in this series. In general, combined with the capabilities of C++/STL, this is quite enough to develop arbitrarily complex applications.

An example of OSG integration into an application developed with Qt Designer. This example is discussed in detail below.


On the other hand, to speed up development in C++, developers use both third-party libraries that extend the language's capabilities (such as Boost) and entire frameworks that make it easy and natural to build cross-platform applications for a wide range of purposes. One such framework is the ultra-popular Qt. However much Qt is criticized for its meta-object compiler and other shortcomings and inconveniences, its strength lies in an extensive class library that addresses every conceivable cross-platform development problem, and in the signals-and-slots concept, which implements a message exchange subsystem between classes. The ways an application interacts with the operating system, as well as interprocess communication, are also built on signals and slots.
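The signals-and-slots concept itself is independent of Qt and can be sketched in a few lines. This is a minimal illustration of the idea, not Qt's actual API (Qt's implementation adds type checking, threading rules and much more):

```python
class Signal:
    """A minimal sketch of the signal-slot idea: slots (callables)
    subscribe to a signal; emit() calls every connected slot in order."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)

# A "button" object whose clicked signal drives an unrelated "label" object,
# without either object knowing about the other directly.
clicked = Signal()
log = []
clicked.connect(lambda text: log.append(f"label shows: {text}"))
clicked.emit("OSG view initialized")
```

The decoupling shown here, where the sender never references the receiver, is what makes the pattern so convenient for wiring an OSG viewer into a Qt GUI.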

And, damn it, it would be very interesting to combine the two technologies, Qt and OSG. My team had to solve just such a problem, which I have already written about before. However, I would like to treat the question a little more broadly, and that is what this article is about.

A good mood to everyone, and cooler weather outside! As promised, I am publishing the continuation of the article on super-duper modern OpenGL. If you haven't read the first part, see Ultra-modern OpenGL. Part 1.


Hi all. Anyone with even a passing knowledge of OpenGL knows that there are a large number of articles and courses on the topic, but many of them do not touch the modern API, and some still teach glBegin and glEnd. I will try to cover some of the nuances of the new API, starting with version 4.

Today I'll show you how to open a window and create an OpenGL context. This is a surprisingly difficult task: OpenGL still has no official cross-platform way to create a context, so we will rely on third-party libraries (in this case, GLFW and glad). There are already plenty of similar hello-worlds on the Internet, but I don't like anything I've seen: it's either overcomplicated, or the pictures in the examples are very primitive (or both!). Many thanks to all the authors, but I'm going to upload one more tutorial :)

Today we will draw something like this:



Another weekend has arrived; time to write a couple dozen lines of code and draw a picture, or better yet, more than one. Last weekend and the weekend before, I showed how to do ray tracing and even blow things up. This surprises many, but computer graphics is a very simple thing: a couple of hundred lines of bare C++ are enough to create interesting pictures.

The topic of today's conversation is binocular vision, and today we won't even reach a hundred lines of code. Since we know how to render three-dimensional scenes, it would be foolish to ignore stereo pairs; today we will draw something like this:


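The essence of a stereo pair is two renders of the same scene from two horizontally separated cameras. A toy pinhole-projection sketch of that idea (the eye separation and focal length are illustrative constants, not values from any particular renderer):

```python
def project(point, eye_x, focal=1.0):
    """Pinhole projection of a 3D point onto the image plane of a camera
    located at (eye_x, 0, 0) looking down the +z axis (a toy model)."""
    x, y, z = point
    return ((x - eye_x) * focal / z, y * focal / z)

def stereo_pair(point, eye_separation=0.064):
    """Project one scene point for the left and the right eye. The
    horizontal disparity between the two images is what the brain
    interprets as depth."""
    half = eye_separation / 2.0
    return project(point, -half), project(point, +half)

left, right = stereo_pair((0.0, 0.0, 2.0))
disparity = left[0] - right[0]  # shrinks as the point moves further away
```

Rendering the full scene once per eye and placing the two images side by side (or tinting them for anaglyph glasses) gives exactly the kind of picture shown above.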

Introduction

One of the most interesting tasks solved with three-dimensional graphics is the creation of "big worlds": extended scenes containing a large number of objects, with the ability to move around the scene without limits. Solving this task runs up against the understandable limitations of computer hardware.

A typical example of a "big world": visualizing a railway on the OSG engine. The only thing missing is the Langoliers devouring the world behind the train...

Hence the need to manage application resources, which comes down to an obvious strategy: load only the resources (models, textures, and so on) needed to form the scene at the current moment, given the observer's current position; reduce the level of detail of distant objects; and unload objects that are no longer needed from system memory. Most graphics and game engines provide some set of tools for these tasks. Today we'll look at what OpenSceneGraph offers.
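The selection logic behind such level-of-detail nodes is simple distance bucketing. A hedged Python sketch of the idea (the ranges and file names are hypothetical; in OSG itself this role is played by LOD/PagedLOD nodes configured in C++):

```python
def choose_lod(distance, ranges):
    """Pick a level of detail from (max_distance, model_name) pairs:
    the further the observer, the coarser the model; beyond the last
    range the object is not loaded or drawn at all."""
    for max_distance, model in ranges:
        if distance < max_distance:
            return model
    return None  # object unloaded / culled

# Hypothetical ranges for a trackside building in a railway scene:
building_lods = [(100.0, "building_high.osg"),
                 (500.0, "building_low.osg")]

nearby = choose_lod(50.0, building_lods)    # detailed model
distant = choose_lod(300.0, building_lods)  # simplified model
```

A paging engine does the same test every frame for every managed object, loading and unloading model files in a background thread as the observer moves.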


Introduction

Speaking of programming techniques specific to OSG, last time we discussed the callback mechanism and its implementation in the engine. It's time to look at the opportunities this mechanism gives us for managing the contents of a 3D scene.

If we talk about object animation, OSG provides the developer with two options for its implementation:

  1. Procedural animation implemented programmatically through the transformation of objects and their attributes
  2. Animation exported from a 3D editor and controlled from application code

Let's first consider procedural animation, as the more obvious option. We will definitely talk about the second approach a little later.
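The pattern behind procedural animation is an update callback that the engine invokes every frame to mutate a node's transform. A Python sketch of the concept (illustrative only; in OSG itself this would be a C++ osg::NodeCallback attached to a transform node):

```python
import math

class RotateCallback:
    """Sketch of an update callback that spins a node: each frame the
    engine calls it with the frame's time step, and it advances the
    node's rotation angle procedurally."""
    def __init__(self, node, speed=math.pi):  # radians per second
        self.node, self.speed = node, speed

    def __call__(self, dt):
        self.node["angle"] = (self.node["angle"] + self.speed * dt) % (2 * math.pi)

# A stand-in for a transform node, updated over ten 0.05-second frames:
node = {"angle": 0.0}
callback = RotateCallback(node)
for _ in range(10):
    callback(0.05)
# after 0.5 s at pi rad/s the node has turned by pi/2 radians
```

Because the callback computes the state from elapsed time rather than playing back stored keyframes, this is procedural animation in the sense used above.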

Hi all! My name is Grisha and I am the founder of CGDevs. Let's continue talking about mathematics. Perhaps the main application of mathematics in game development, and in computer graphics in general, is VFX. So let's talk about one such effect: rain, or rather its main mathematical ingredient, ripples on a surface. We will write a surface-ripple shader step by step and analyze its mathematics. If you are interested, welcome under the cut. A GitHub project is attached.
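The mathematical core of such a ripple is a sinusoidal wave travelling outward from the impact point, attenuated with distance. A Python sketch of the height function (the constants are illustrative, not taken from the article's actual shader):

```python
import math

def ripple_height(x, y, t, wavelength=0.5, speed=2.0, damping=3.0):
    """Height of a single rain ripple at point (x, y) and time t:
    a sinusoid travelling outward from the origin, with an exponential
    envelope so the wave dies out away from the impact point."""
    r = math.hypot(x, y)            # distance from the impact point
    k = 2.0 * math.pi / wavelength  # wave number
    return math.exp(-damping * r) * math.sin(k * r - speed * t)

# Sampling the function over a grid (or in a fragment shader, per pixel)
# and using it to perturb the surface normal produces the ripple effect.
h = ripple_height(0.2, 0.0, 0.0)
```

In a real shader the same expression is evaluated per fragment in GLSL/HLSL, usually summed over several recent raindrops.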



Happy upcoming New Year, everyone! My name is Grisha and I am the founder of CGDevs. The holidays are just around the corner: some have already decorated the Christmas tree, eaten tangerines and are fully charged with the New Year's mood. But today we will talk about something else: a wonderful format called LDraw, and a plugin for Unity which I implemented and published as open source. A link to the project and the sources for the article are, as always, attached. If you love Lego as much as I do, welcome under the cut.

I also made a small web application where you can practice creating formulas for arbitrary figures and generate your own Excel file.

Be careful: 19 pictures and 3 animations under the cut.

3D graphics

3D graphics operates on objects in three-dimensional space. The result is usually a flat picture: a projection.

3D computer graphics is widely used in cinema and computer games.

In 3D computer graphics, all objects are usually represented as a collection of surfaces or particles. The minimal surface element is called a polygon. Triangles are usually chosen as polygons.

3D graphics

All visual transformations in 3D graphics are controlled by matrices.

There are three types of matrices used in computer graphics:

rotation matrix

shift (translation) matrix

scaling matrix

3D graphics

Any polygon can be represented as a set of coordinates of its vertices.

A triangle has three vertices. The coordinates of each vertex form a vector (x, y, z).

Multiplying a vertex vector by the corresponding matrix gives a new vector. Applying this transformation to all the vertices of a polygon gives a new polygon, and transforming all the polygons gives a new object, rotated, shifted or scaled relative to the original.
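This vertex-by-matrix multiplication can be shown directly. A short sketch using a rotation about the z axis (the triangle's coordinates are arbitrary example values):

```python
import math

def rotate_z(vertex, angle):
    """Multiply a vertex (x, y, z) by the rotation matrix about the z axis:
        | cos -sin  0 |   |x|
        | sin  cos  0 | * |y|
        |  0    0   1 |   |z|
    """
    x, y, z = vertex
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def transform_polygon(vertices, angle):
    """Rotate every vertex of a polygon; doing this for all polygons
    rotates the whole object."""
    return [rotate_z(v, angle) for v in vertices]

# A triangle rotated by 90 degrees around the z axis:
triangle = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
rotated = transform_polygon(triangle, math.pi / 2)
```

Shift and scaling matrices are applied the same way; in practice all three are combined into a single matrix so each vertex is multiplied only once.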

CGI graphics

CGI (computer-generated imagery, literally "images generated by a computer") refers to special effects in film, television and simulation created using three-dimensional computer graphics.

Computer games typically use real-time computer graphics, but occasionally in-game videos that use CGI are added.

CGI allows you to create effects that cannot be achieved with traditional makeup and animatronics, and can replace sets and the work of stuntmen and extras.

CGI graphics

Computer graphics were first used in a feature film in Westworld, released in 1973.

In the second half of the 1970s, films using elements of three-dimensional computer graphics appeared, including Futureworld, Star Wars and Alien.

CGI graphics

Jurassic Park (1993) was the first film to use CGI to replace a stuntman; it was also the first to seamlessly combine CGI (the dinosaurs' skin and muscles were created with computer graphics) with traditional filming and animatronics.

In 1995 the first full-length animated film made entirely on a computer was released: Toy Story.

Final Fantasy: The Spirits Within (2001) was the first film to feature realistic CGI images of people.

CGI graphics. Character creation

http://city.zp.ua/viewvideo/R4woMpsHYSA.html

Computer graphics in special effects

A special effect (English: special effect, abbreviated SPFX or SFX) is a technological technique used in cinema, television, shows and computer games to visualize scenes that cannot be filmed by ordinary means (for example, battles between spaceships in the distant future).

Special effects are also often used when filming a scene for real would be too expensive compared to a special effect (for example, filming a large explosion).

CGI (computer-generated imagery, literally "images generated by a computer") refers to still and moving images generated by computer and used in the visual arts, printing, cinematic special effects, television and simulation. Computer games typically use real-time computer graphics, but in-game videos based on CGI are also occasionally added.

Moving images are produced by computer animation, a narrower field within CGI that is used, among other places, in cinema, where it makes possible effects unattainable with traditional makeup and animatronics. Computer animation can replace the work of stuntmen and extras, as well as scenery.

Story

Computer graphics were first used in a feature film in Westworld, released in 1973. In the second half of the 1970s, films using elements of three-dimensional computer graphics appeared, including Futureworld, Star Wars and Alien. In the 1980s, before the release of the second Terminator, Hollywood cooled toward computer effects, in particular because of the more than modest box office of Tron (1982), which was built entirely on the latest advances in computer graphics.

In Jurassic Park (1993), CGI was used for the first time to replace a stuntman; the same film was the first to seamlessly combine CGI (the dinosaurs' skin and muscles were created with computer graphics) with traditional filming and animatronics. In 1995 the first full-length cartoon made entirely on a computer was released: Toy Story. Final Fantasy: The Spirits Within (2001) featured realistic CGI images of people for the first time.