Documentation: Base Rig

A downloadable PDF is available here with full install instructions and some advanced setup descriptions.

Base Rig is the fundamental Cinemachine environment which includes a number of modules.  You can do a heck of a lot with Base Rig.

Cinemachine has been designed to be the entire unified camera system in your project, but it can be used alongside your existing cameras as well. If you have a bunch of camera stuff working already and just want to use Cinemachine for cutscenes or something specific, no problem at all. However, when you use it across your project it allows you to blend any camera to any other camera, in a gameplay-to-cutscene-and-back-seamlessly kind of way.

Base Rig has a number of modules which all work together.  Briefly, they are:

  • CinemachineVirtualCamera  a shot, a single camera.  It contains:
    • Priority   the priority of this shot.  A camera with equal or higher priority than the currently active camera will be activated when called.  This allows for camera state machine setups where cameras are called based on trigger volumes, animations, health states, etc.  Higher-priority cameras take precedence over lower-priority ones
    • Noise  procedural Perlin multi-layered noise system for handheld behaviors, shakes, and vibrations
    • Composer  a procedural way to track and compose any object
    • Transposer  a way to ‘mount’ your camera to any object and move the camera with it
  • AUTOGEN_CinemachineRuntime  this is the central Cinemachine component which does all the magic.  It contains:
    •  Blend Settings  this is the array which defines how any camera blends to any other camera.  For example, you can have a 4-second blend from CameraA to CameraB, but a 1-second blend from CameraB back to CameraA.  This is very powerful when used in a state machine type setup.  If a specific blend between two cameras isn’t defined, the Default Blend is used (a minimal lookup sketch appears after this list)
  • Debug  a really useful debug window which shows the current active camera and the cameras currently blending. Turn it on under Preferences (see below)
  • Preferences  under Unity->Preferences->Cinemachine  this panel allows you to turn the debug window on and off and lets you set colours for the Composer components and set the target icon scale
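
To make the asymmetric blend idea concrete, here is a minimal lookup sketch in C#.  This is not Cinemachine’s actual data structure; the class and names are purely illustrative and only show how a per-pair blend length with a default fallback can be resolved.

    using System.Collections.Generic;
    using UnityEngine;

    // Illustrative only: a (from, to) blend-length lookup with a default fallback,
    // mirroring how a specific pair is resolved before falling back to the Default Blend.
    public class BlendTableSketch : MonoBehaviour
    {
        public float defaultBlendSeconds = 2f;

        // Keyed by "From->To"; directions are independent, so CameraA->CameraB
        // can be 4 seconds while CameraB->CameraA is 1 second.
        private readonly Dictionary<string, float> blends = new Dictionary<string, float>
        {
            { "CameraA->CameraB", 4f },
            { "CameraB->CameraA", 1f },
        };

        public float GetBlendSeconds(string fromCam, string toCam)
        {
            float seconds;
            if (blends.TryGetValue(fromCam + "->" + toCam, out seconds))
                return seconds;
            return defaultBlendSeconds;  // no specific entry: use the Default Blend
        }
    }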

CinemachineVirtualCamera

CinemachineVirtualCamera is a single camera or shot.  It has child modules which you can turn on to make it procedurally track and compose objects, follow things, and add procedural noise, using the Noise, Composer and Transposer modules.  Together they are a very powerful combination and can yield an unlimited number of results.

These modules have been designed, re-designed and re-designed again to offer the widest range of possibilities with the least number of controls.  The math driving these camera behaviours is complex and sophisticated, having gone through many different scenarios across all sorts of games.

The reason for the ‘virtual’ camera scenario is that Unity renders from the main camera. Out of the box, Unity is limited to only one camera being active at a time, which makes blending two shots effectively impossible. Cinemachine is engineered to solve this and provide a wealth of simple, powerful functionality at the same time, allowing unsurpassed camera behaviours which are fast and easy to set up.

Cinemachine allows you to create an unlimited number of virtual cameras and blend them together, and the resulting transform and settings are presented to the single Unity camera.  It does all this automatically.
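
Conceptually, a blend is just an interpolation of two camera states applied to the one real Unity camera.  The sketch below is not how Cinemachine is implemented; it only illustrates the idea by interpolating position, rotation and field of view between two placeholder transforms and writing the result to the main camera.

    using UnityEngine;

    // Concept sketch: drive the single Unity camera from two "virtual" camera transforms
    // by interpolating their state.  Cinemachine does this (and much more) automatically;
    // the fields here are placeholders.
    public class VirtualCameraBlendSketch : MonoBehaviour
    {
        public Transform camA;               // state of the outgoing shot
        public Transform camB;               // state of the incoming shot
        public float fovA = 60f;
        public float fovB = 40f;
        [Range(0f, 1f)] public float blend;  // 0 = fully camA, 1 = fully camB

        void LateUpdate()
        {
            Camera cam = Camera.main;
            if (cam == null || camA == null || camB == null) return;

            cam.transform.position = Vector3.Lerp(camA.position, camB.position, blend);
            cam.transform.rotation = Quaternion.Slerp(camA.rotation, camB.rotation, blend);
            cam.fieldOfView = Mathf.Lerp(fovA, fovB, blend);
        }
    }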

[Image: VirtualCamera]

In detail:

Auto Add To Priority Stack  This keeps the camera alive and in the priority stack, able to be turned on and blended (or cut) to if it has the same or higher priority.  If it’s set to false, the camera needs to be added to the runtime programmatically.

Priority  the priority setting for that shot.  Equal or higher priority cameras will be blended to.  For example, say you’re currently running a priority 2 camera: if a priority 1 camera is called, it will be ignored; if a priority 2 or higher camera is called, the system will blend to that new camera using the Default Blend, or a specific blend if you’ve defined one in the Blend Settings under the AUTOGEN_CinemachineRuntime object.  This is really powerful in state machine type setups.  You can have any number of cameras running, and if you’ve defined a camera for, say, a victory sequence or a specific location in the world, you can have that camera either trump whatever is going on or be ignored.
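
As a sketch of the trigger-volume pattern described above, the script below activates a higher-priority virtual camera GameObject when something tagged ‘Player’ enters a volume, and deactivates it on exit.  The ‘Player’ tag, and the assumption that the camera’s Priority has already been set higher in the inspector, are just conventions for this example.

    using UnityEngine;

    // Sketch: a trigger volume that turns a higher-priority virtual camera on and off.
    // The virtual camera's Priority is assumed to be set higher than the gameplay camera
    // in the inspector, so activating it causes Cinemachine to blend to it.
    [RequireComponent(typeof(Collider))]   // the collider should be marked Is Trigger
    public class PriorityCameraTrigger : MonoBehaviour
    {
        public GameObject highPriorityVirtualCamera;   // the CinemachineVirtualCamera object

        void OnTriggerEnter(Collider other)
        {
            if (other.CompareTag("Player"))            // "Player" tag is an assumption
                highPriorityVirtualCamera.SetActive(true);
        }

        void OnTriggerExit(Collider other)
        {
            if (other.CompareTag("Player"))
                highPriorityVirtualCamera.SetActive(false);
        }
    }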

Targets:

  • Composer  drag any game object into this area to define what the camera will look at.  For more info see the Composer section below.
  • Transposer drag any game object into this area to define what the camera will follow / be ‘mounted’ to.  For more info see the Transposer section below.

Offsets

  • Transposer Tracking Offset
  • Transposer Dampening Offset
  • Composer Tracking Offset
  • Composer Dampening Offset

Field Of View Offset  Typically leave this alone. This is the FOV for the camera, in Unity’s units, which are vertical field of view in degrees. It’s best to set this in the Composer module (see below). Only set this if you’re not using a Composer asset.

Dutch Offset  this is the camera tilt, or z-roll, or dutch.

Noise Amplitude Scalar

Noise Speed Scalar

Settings

  • Noise Settings  this is the asset which contains all the Noise settings.  See Noise below.
  • Composer Settings  this is the asset which contains all the Composer settings.  See Composer below
  • Transposer Settings  this is the asset which contains all the Transposer settings.  See Transposer below

Show Camera Guides  this draws an overlay in screen-space which shows you the current composer settings, where you’d like the target to be composed on the screen and the Soft and Hard composer controls.  See Composer below.

Noise Module

The Noise module is a multi-layered Perlin noise function which is applied after the Composer and adds additional transforms.  It has controls for Position and Orientation.  You can add as many layers as you want by increasing the Size value.

Procedural noise is a complex thing to make look real.  Convincing hand-held motion is a mixture of low, medium and high frequency wobbles which together combine to create something believable.

Amplitude defines the amount of noise in degrees. Wider lenses will need larger degree values in order to ‘see’ the shake.  Telephoto lenses need less, as that small motion seems amplified through narrower FOV lenses.

Frequency defines the speed of the noise in Hz.  Typically a ‘low’ frequency value might be around 0.1.  Consider that your game is running at 30 or 60 Hz, so noise frequencies above half your frame rate are on the other side of the Nyquist frequency, meaning they cannot be directly tracked.  A setting of 100 is higher than what the camera can ‘follow’ when your game is only running at, say, 60 Hz.  It can look kind of choppy, since the camera can’t track something which is sampling faster than the rate the game is running at.  It can also look kind of cool, but rarely.  Experiment. Typically, for most hand-held setups, the low is around 0.1-0.5, the mid maybe 0.8-1.5 and the high around 3-4.  That’s 3-4 shakes back and forth per second.

The most convincing camera shakes are typically done with Orientation noise, as that’s where the camera is aiming.  Handheld camera operators tend to shake more rotationally than they do positionally, but of course feel free to mix in some Position noise; just remember it’s probably best to start with the Orientation.
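
To illustrate the low/mid/high layering idea in code (this is not Cinemachine’s Noise implementation, just the concept), the sketch below sums three Perlin layers and applies them as small orientation offsets, using amplitudes in degrees and frequencies in Hz similar to the ranges suggested above.

    using UnityEngine;

    // Concept sketch: sum three Perlin layers (low/mid/high frequency, in Hz) and apply
    // the result as small pitch/yaw offsets on top of the camera's base rotation.
    public class LayeredNoiseSketch : MonoBehaviour
    {
        [System.Serializable]
        public struct Layer { public float amplitudeDegrees; public float frequencyHz; }

        public Layer[] layers = new Layer[]
        {
            new Layer { amplitudeDegrees = 1.0f, frequencyHz = 0.3f },  // low wobble
            new Layer { amplitudeDegrees = 0.5f, frequencyHz = 1.2f },  // mid
            new Layer { amplitudeDegrees = 0.2f, frequencyHz = 3.5f },  // high shake
        };

        Quaternion baseRotation;

        void Start() { baseRotation = transform.localRotation; }

        void LateUpdate()
        {
            float pitch = 0f, yaw = 0f;
            foreach (Layer layer in layers)
            {
                float t = Time.time * layer.frequencyHz;
                // PerlinNoise returns 0..1; remap to -1..1 and sample each axis differently
                pitch += (Mathf.PerlinNoise(t, 0f) * 2f - 1f) * layer.amplitudeDegrees;
                yaw   += (Mathf.PerlinNoise(0f, t) * 2f - 1f) * layer.amplitudeDegrees;
            }
            transform.localRotation = baseRotation * Quaternion.Euler(pitch, yaw, 0f);
        }
    }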

We’ve included a number of presets to get you going, under Assets/Cinemachine/Noise, and of course you can add as many of your own as you wish: just right-click in the Asset window, choose Create->Cinemachine->Noise, and drag that asset into the Noise Settings slot on that VirtualCamera.

You can also animate the Noise through the Noise Amplitude Scalar and Noise Speed Scalar to ramp the effect up and down.
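
As a ramping sketch, the script below spikes a 0 to 1 amplitude value on demand and decays it over time; how you feed that value into the camera’s Noise Amplitude Scalar depends on your Cinemachine version, so that hookup is left as a commented assumption.

    using UnityEngine;

    // Sketch: an impact-driven shake envelope.  Call Kick() from gameplay code and feed
    // amplitudeScalar into the VirtualCamera's Noise Amplitude Scalar each frame
    // (the exact field or property name depends on your Cinemachine version).
    public class NoiseRampSketch : MonoBehaviour
    {
        public float decayPerSecond = 2f;
        [Range(0f, 1f)] public float amplitudeScalar;

        public void Kick(float strength)
        {
            amplitudeScalar = Mathf.Clamp01(Mathf.Max(amplitudeScalar, strength));
        }

        void Update()
        {
            amplitudeScalar = Mathf.MoveTowards(amplitudeScalar, 0f, decayPerSecond * Time.deltaTime);
            // e.g. myVirtualCamera.NoiseAmplitudeScalar = amplitudeScalar;   // hypothetical hookup
        }
    }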

Composer Module

This module does some amazing things.  It will orient or rotate the camera to procedurally track objects.  You can set it to track a bone on a character, compose it how you want, and the camera will rotate to dynamically track the object.  No matter what that object does, it will stay in the region of screen set by the compositional settings. You can track objects aggressively, or add a delay (the Soft Dampening of the Composer) so it loosely tries to keep the subject where you want it.

The Composer flattens your entire 3D world into 2D screen-space, so it doesn’t matter if it’s far away or close – your target is where you want it on the screen – just like how a real camera operator would shoot it.

The controls let you decouple vertical from horizontal dampening, so things can be tracked aggressively vertically or horizontally or both, or neither.   Areas ‘inside’ the soft regions cause the Composer to disregard any motion, so you can set the camera to ignore subtle movements, but then spring into action if the subject moves beyond a set region on screen.

As an aside, as a testament to how much we’ve worked on this math: for shots in Homeworld: Deserts of Kharak, we would set up shots with a 1 degree lens, tracking vehicles 20 kilometers away in Unity units, and the camera didn’t jitter at those extremes.  The camera would track them fluidly. It’s extremely precise.  This took us a long time to figure out.

Composer works like this:  First you set what you want the Composer to look at.  Here we have it looking at a ‘CameraTarget’ object which can be any Unity object.  A bone in a skeleton, a vehicle,  a dummy object as a child of something or an object you control through code (like the average position of a number of characters) – whatever you want the camera to track.

[Image: TransposerTarget]

Then you set up where you’d like this object to be composed, in screen space.  The red, or ‘Hard’, controls set regions in screen space beyond which the target will not pass. Typically they’re at the edge of the frame so your target never goes off-screen.

[Image: CinemachineLander_crop]

The blue, or ‘Soft’, controls are like sponges, with the degree of ‘squishiness’ set by the Horizontal and Vertical Soft Dampening.  If those values are set to zero, the blue regions effectively become red or ‘Hard’ regions, since zero dampening turns them solid.  When you increase the dampening value, you let the target ‘squish’ into the blue regions and the camera becomes slightly unresponsive – a bit of a delay – in tracking objects.  This has been designed to behave in as ‘camera operator’ a way as possible.

The clear area inside the blue zone – the dead zone – is an area where the camera will ignore all subject movement.  You can set this to be just big enough to ignore animation cycles, physics glitches, or any other motion from the subject that you’d like the camera to ignore.

You set the dampening horizontally and vertically, so you can control how aggressively the camera tracks in each axis.  This design has gone through many iterations in which we strive to allow the greatest amount of control with the least number of controls!
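
The sketch below shows the underlying idea only, not the Composer’s actual math: project the target into viewport space, ignore it while it stays inside a dead zone, and otherwise rotate toward it with independent horizontal and vertical lag.

    using UnityEngine;

    // Concept sketch of dead-zone plus damped tracking, attached to a Camera.
    // The real Composer is far more sophisticated; this only shows the idea of ignoring
    // small screen-space motion and chasing larger motion with a per-axis lag.
    [RequireComponent(typeof(Camera))]
    public class DeadZoneTrackerSketch : MonoBehaviour
    {
        public Transform target;
        public Vector2 deadZoneHalfSize = new Vector2(0.1f, 0.1f);  // in viewport units (0..1)
        public float horizontalDamping = 0.5f;   // higher = looser tracking
        public float verticalDamping = 0.5f;

        Camera cam;

        void Start() { cam = GetComponent<Camera>(); }

        void LateUpdate()
        {
            if (target == null) return;

            Vector3 vp = cam.WorldToViewportPoint(target.position);
            if (vp.z <= 0f) return;  // target behind the camera; not handled in this sketch

            // Inside the dead zone: do nothing, exactly like the clear area described above.
            bool outsideX = Mathf.Abs(vp.x - 0.5f) > deadZoneHalfSize.x;
            bool outsideY = Mathf.Abs(vp.y - 0.5f) > deadZoneHalfSize.y;
            if (!outsideX && !outsideY) return;

            // Outside: rotate toward the target with independent horizontal/vertical lag.
            Vector3 current = transform.rotation.eulerAngles;
            Vector3 desired = Quaternion.LookRotation(target.position - transform.position).eulerAngles;

            float kx = 1f - Mathf.Exp(-Time.deltaTime / Mathf.Max(0.001f, horizontalDamping));
            float ky = 1f - Mathf.Exp(-Time.deltaTime / Mathf.Max(0.001f, verticalDamping));

            float yaw   = outsideX ? Mathf.LerpAngle(current.y, desired.y, kx) : current.y;
            float pitch = outsideY ? Mathf.LerpAngle(current.x, desired.x, ky) : current.x;

            transform.rotation = Quaternion.Euler(pitch, yaw, current.z);
        }
    }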

It’s amazing how many different camera tracking behaviours can be created with just the offset, dampening and screen-space controls.  When you add camera blending on top of all that – so you can blend to cameras with different compositional settings – the possibilities are totally unlimited.

[Image: ComposerDampening]

With these controls you can mimic a real camera operator trying to follow the motion of the target in a cinematic way.  Real camera operators don’t know exactly what the subject will do, and they’re constantly chasing the action to keep it framed. The Composer lets you emulate this behaviour with your fleet of AI Cinemachine cameras, and the results look much like the TV and movies we’ve all seen, where the camera has a little lag.  You can tune it so it just feels right.

The amazing bit is, once you’ve set this up, things can change and the camera will always try to figure it out!  Say an Animator changes an animation or a Designer changes a vehicle speed – no worry – the camera will track that object based on the compositional rules you’ve given it, and you’ll get a good shot.

Entire cutscenes can be set up with Composer cameras, and the rest of your team can change all sorts of things and your cutscene will probably not break, as these Composer cameras will do their best to shoot whatever is happening.

Also, because the camera is always working to keep the shot, you can move the camera and it will still maintain the composition.  It doesn’t matter if the subject is moving, the camera is moving, or both – the compositional setting you defined will be maintained regardless of what’s happening.  This is really powerful, and easy to do, if you want shots where the camera is moving all about but you want to keep something in frame.  Throw a target object in, move it or the camera, and you’ll always keep it in whatever screen space you set.

Take a look at our Examples page to see videos of Composer in action.

By blending you can also animate the composition of objects.  Simply blend to a camera with everything the same but with different Composer settings, and you can move the subject around the screen.  This is very powerful in scenarios where you want your subject to be in different areas of screen space depending on the game or cinematic demands.

Transposer Module

The Transposer module dynamically places the camera body.  It only influences the camera position.  The Composer influences camera rotation (orientation) and the Transposer handles position, which together makes a knock-out combo in terms of camera control.

[Image: TransposerTarget]
In this setup the camera is looking at CameraTarget and the camera body is following Cube.

The Transposer lets you adjust the offset of the camera position in relation to the centre of the Transposer Camera Target object, in local space, as well as giving you per-axis dampening controls.  Increasing the dampening makes the camera follow the target less aggressively.  Each axis has its own control, so you can closely follow X and Z but loosely follow Y, etc.
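
As an illustration of the offset-plus-per-axis-dampening idea (again just a sketch, not the Transposer’s actual code), the script below follows a target at a local-space offset and damps each axis independently.

    using UnityEngine;

    // Sketch: follow a target at a local-space offset, with independent damping per axis.
    // Higher damping on an axis means the camera follows that axis less aggressively.
    public class PerAxisFollowSketch : MonoBehaviour
    {
        public Transform target;
        public Vector3 localOffset = new Vector3(0f, 3f, -8f);
        public Vector3 damping = new Vector3(0.2f, 1.0f, 0.2f);   // tight X/Z, loose Y

        void LateUpdate()
        {
            if (target == null) return;

            Vector3 desired = target.TransformPoint(localOffset);
            Vector3 p = transform.position;

            // Exponential smoothing per axis; small damping values track tightly.
            p.x = Mathf.Lerp(p.x, desired.x, 1f - Mathf.Exp(-Time.deltaTime / Mathf.Max(0.001f, damping.x)));
            p.y = Mathf.Lerp(p.y, desired.y, 1f - Mathf.Exp(-Time.deltaTime / Mathf.Max(0.001f, damping.y)));
            p.z = Mathf.Lerp(p.z, desired.z, 1f - Mathf.Exp(-Time.deltaTime / Mathf.Max(0.001f, damping.z)));

            transform.position = p;
        }
    }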

[Image: Transposer]

The Transposer has several really powerful modes in which it follows its target.
[Image: TransposerModes]

Local Space on Target Assignment:  Sets the Transposer / target relationship based on where the target is at camera initialization.
Local Space Locked Up Vector:  Sets the Transposer position based on the Transposer Camera Target position, but locks the camera’s up vector to world up, unless the Composer adds z-roll or dutch.
Local Space Locked To Target:  Fully mounts the camera to the target, no matter what it does, minus the Transposer dampening controls.
World Space:  Sets the Transposer camera position to be offset from the Transposer Camera Target object in world space, based on the offset values.
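
A rough way to read the difference between these modes, as a sketch under the assumption that each mode interprets the same offset in a different space (this is not the actual implementation, and it covers three of the modes above):

    using UnityEngine;

    // Sketch: the same offset interpreted per mode.  Names are illustrative only.
    public static class TransposerModeSketch
    {
        // World Space: the offset is applied along world axes, ignoring the target's rotation.
        public static Vector3 WorldSpace(Transform target, Vector3 offset)
        {
            return target.position + offset;
        }

        // Locked To Target: the offset rotates fully with the target (a rigid "mount").
        public static Vector3 LockedToTarget(Transform target, Vector3 offset)
        {
            return target.TransformPoint(offset);
        }

        // Locked Up Vector: only the target's yaw is used, so the camera position follows the
        // target around without inheriting its pitch or roll.
        public static Vector3 LockedUpVector(Transform target, Vector3 offset)
        {
            Quaternion yawOnly = Quaternion.Euler(0f, target.eulerAngles.y, 0f);
            return target.position + yawOnly * offset;
        }
    }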

 

All Together

The possibilities are endless.  Mount the camera to vehicle A with a bit of dampening, track and compose on vehicle B, and no matter what they do, you’ll get a shot.

Here are a few examples to get your mind going.

Noise, Composer, Transposer working together

This is a simple shot which has the power of procedural systems working together.   The camera moves in on the vehicle using Transposer, it frames the front of the car using Composer and the handheld camera shakes are from Noise.   The beauty is that this shot is incredibly resistant to change.  If a designer or animator speeds up the vehicle, or a level designer puts a hill in the way, the cameras will figure it out and your shot will still work.   You’ve directed Cinemachine on how you want this shot to look and it will follow your orders even if things change.