With the experience of being involved in the creation of about 1,000 motion graphics templates in Blender for our automated video creation platform Viddyoze (viddyoze.com), I’ve come to realise that Blender’s incredible versatility is often inhibited by the lack of some of the most basic options used in creating motion graphics. The following document describes requests for fixes to existing features, as well as recommendations for additional features that motion graphics artists need in Blender. The areas in which Blender is most severely lacking, in order of urgency, are:
- Grouped instancing
- Scene render result as real-time texture
- Relative Nodes
- Lens flares
- Non-Destructive Conversion to Mesh (gridding)
Vertical text alignment
Horizontal text alignment works as expected, but vertical alignment is completely broken in the current Blender version (2.79a). Below is a comparison of the expected and the actual behavior in Blender.
This is what one might expect from vertical alignment of single-line text:
This is, however, how it works in Blender:
Things are not good with text boxes either. This is what is expected:
Instead, this is how it works now:
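For reference, here is a minimal bpy sketch of the properties in question (Blender 2.79 API); the expectation is that the vertical option would place the text relative to its origin the way the mockups above show:

```python
import bpy

# A minimal sketch of the alignment properties under discussion (2.79 API).
# align_x behaves as expected; align_y is the one whose results don't match
# what an artist would expect.
txt = bpy.data.curves.new(name="Caption", type='FONT')
txt.body = "Hello"
txt.align_x = 'CENTER'   # horizontal alignment: works as expected
txt.align_y = 'CENTER'   # vertical alignment: 'TOP', 'CENTER', 'BOTTOM', ...

obj = bpy.data.objects.new("Caption", txt)
bpy.context.scene.objects.link(obj)  # 2.79-style scene linking
```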
Non-destructive text object workflow
Text objects are basically curves, and as such we can’t do much of interest with them. The list of modifiers available for text is very limited, and some don’t work as expected. Because of this, animators usually convert text to mesh and then animate it in various ways. But this is bad if we later need to change the text to something else, which is particularly painful when making motion graphics templates.
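To illustrate, here is a minimal bpy sketch of that destructive workaround, assuming the active object is a text object (the Remesh settings are placeholders):

```python
import bpy

# A sketch of the destructive workaround described above, assuming the active
# object is a text object. Once converted, modifiers work, but the text can
# no longer be edited.
obj = bpy.context.active_object
if obj.type == 'FONT':
    bpy.ops.object.convert(target='MESH')
    # The letters are now plain geometry with large flat ngons/triangles.
    # A Remesh modifier can re-grid them (imperfectly) so deformers behave:
    mod = obj.modifiers.new(name="Grid", type='REMESH')
    mod.octree_depth = 6
```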
Most common issues:
- Lack of modifiers
- Bad automatic UV mapping
- No way to assign different materials to the extrusion, bevel and face of text
Due to their flat triangulated or ngon faces, text objects deform badly with shrinkwrap or when following a curve.
So we need text to behave like a mesh. Cinema4D, for example, has the ability to grid the text face so it can be wrapped and deformed well. This is how the faces of an extruded text object should look:
What is needed:
- Have text act as a mesh, with the ability to control mesh resolution dynamically and to add modifiers to it just as we would to any other mesh object. Following the C4D example, the text front face should have ngon, triangle and grid options, with the bevel and extrusion topology matched to it. No separated surfaces on a single letter (except international letters with additional shapes, of course)
- Advanced UV options for the text face, bevel and extrusion, and the ability to set a different material for each
- Optionally, simple text animation effects like typewriter, a decoder effect, and simple transformations per character/word/line
- The ability to emit particles from text in the same way it’s done with mesh objects
Auto-fitting (optional)
For motion graphics work, and motion graphics templates in particular, it’s very important to be able to limit text to a certain area. This is currently not possible in Blender. While there are some complex driver setups that can achieve this with relatively consistent behavior, we need to be able to define a rectangle that will contain the text and resize the text dynamically. Something like this for single-line text:
In the case of a text box:
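Until then, something like the following can approximate the single-line case per object; this is only a sketch of the driver-style workaround, and MAX_WIDTH and the object name "Caption" are hypothetical:

```python
import bpy

# A sketch of the driver-style workaround: shrink a single-line text object
# whenever it grows wider than a chosen box. MAX_WIDTH and the object name
# "Caption" are hypothetical template parameters, not Blender settings.
MAX_WIDTH = 4.0

def fit_text(obj):
    obj.scale = (1.0, 1.0, 1.0)
    bpy.context.scene.update()        # refresh obj.dimensions after the reset
    width = obj.dimensions.x
    if width > MAX_WIDTH:
        factor = MAX_WIDTH / width
        obj.scale = (factor, factor, factor)

fit_text(bpy.data.objects["Caption"])
```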
Particles
- Keyframable particle emission rate
- Billboard particles with motion blur and blending modes
Keyframable emission rate
Almost every other piece of software uses a particle emission rate, not a total particle count. The lack of an emission rate prevents us from starting particle emission gradually, fading it out at the end, or stopping and resuming emission.
Desired look with keyframable emission rate:
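To make the request concrete, here is a plain-Python sketch (no bpy) of what an emission rate means: integrate the keyframed rate curve over time and spawn a particle whenever the cumulative count crosses a whole number. The rate() curve below is a made-up stand-in for an artist’s F-curve:

```python
# A plain-Python sketch of a keyframable emission rate: integrate the rate
# curve over time and spawn a particle whenever the cumulative count crosses
# a whole number. rate() stands in for an artist's keyframed F-curve.

def rate(frame):
    """Hypothetical emission rate in particles per frame."""
    if frame < 20:
        return frame / 20.0 * 5.0   # ramp up to 5 particles/frame
    if frame < 80:
        return 5.0                  # steady emission
    return 0.0                      # emission stops; live particles carry on

birth_frames = []
emitted = 0.0
for frame in range(1, 101):
    emitted += rate(frame)
    while len(birth_frames) < int(emitted):
        birth_frames.append(frame)

print(len(birth_frames), "particles, first born on frame", birth_frames[0])
```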
Billboard particles with depth of field, motion blur and blending modes
Billboard particles are among the most useful particles in motion graphics. They are not actual objects, but they still have material properties. In most other software they can have motion blur and blending modes like add and multiply, so with the right textures they can easily simulate bright fire or dark smoke.
The current billboard particles are not useful outside the Blender Internal render engine and can’t have correct motion blur or depth of field, since they render the vector and depth passes for the entire square instead of only the visible texture.
For Cycles, the only viable particle approach is instancing objects as particles, which can be resource-intensive at large particle counts. With the Eevee render engine, it would be very useful to have billboard particles that work similarly to those in game engines such as Unity.
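For reference, this is the instancing workaround in 2.79 API terms (a sketch; the emitter is assumed to be the active object and "Spark" is a hypothetical object to instance):

```python
import bpy

# A sketch of the current Cycles workaround (2.79 API): render each particle
# as an instance of a real object. The emitter is assumed to be the active
# object and "Spark" is a hypothetical object to instance.
emitter = bpy.context.active_object
emitter.modifiers.new(name="Particles", type='PARTICLE_SYSTEM')
settings = emitter.particle_systems[-1].settings
settings.count = 10000                    # a total count, not an emission rate
settings.render_type = 'OBJECT'           # instance a real object per particle
settings.dupli_object = bpy.data.objects["Spark"]
# Every particle now carries full object geometry, which is what makes large
# counts so much more expensive than true billboard sprites would be.
```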
Grouped instancing
In motion graphics it’s very important to have the ability to create a complex animation somewhere outside our main scene, and then instance that animation in the main scene and offset it in time and space to create beautiful and complex motion.
For that we need the ability to instance an animated group of objects and offset its animation in time per instance. Currently we can use the NLA editor to offset an individual object instance’s animation in time, but there is no way to do it with a group of objects. Example:
With a single motion graphics element like this looping curve:
We can instance it and transform it:
But that is boring, so we can use the NLA editor to offset the animations in time so they play out in sequence. The important thing to note is that these are still instances sharing the same action; we can go back, change one, and all instances will reflect that change:
Unfortunately, this is only possible with a single object. Take this group of objects as an example:
We can certainly instance them and transform them, but since they all play out at the same time, the effect is uninteresting and repetitive.
We need to be able to do this:
But this shouldn’t be limited to object transformations only. Material properties (for fading in and out, glows, color changes, …) as well as texture animations (image sequences, noise texture offsets, …) should also be offset in time. It would be similar to placing a character asset in a game engine: the character starts its own idle or walk animation, with all the texture and material animations contained in it, at the moment of instancing, rather than playing the same animation loop as all other instances of that character in the game.
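For comparison, the single-object case that already works can be scripted like this (a sketch, assuming an animated object named "Ring"); the point is that no equivalent exists for a whole group:

```python
import bpy

# A sketch of the single-object workaround, assuming an animated object named
# "Ring" whose action already exists. Each linked duplicate shares the same
# action but plays it from its own NLA strip, offset in time.
src = bpy.data.objects["Ring"]
action = src.animation_data.action
scene = bpy.context.scene

for i in range(1, 4):
    dup = src.copy()                     # linked duplicate: shares mesh data
    scene.objects.link(dup)              # 2.79-style scene linking
    dup.location.x += i * 2.0
    anim = dup.animation_data_create()
    anim.action = None                   # let the NLA strip drive the timing
    track = anim.nla_tracks.new()
    track.strips.new(action.name, int(action.frame_range[0]) + i * 10, action)
# Editing the shared action later updates every duplicate. Nothing equivalent
# exists for offsetting a whole group of objects as one unit.
```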
Scene render result as real-time texture
For the same reason as the previous item, we need to be able to create something complex elsewhere and instance it around our main scene. In this case, if we look at the example of After Effects, it uses multiple compositions (each having its own resolution and effects), nested one inside another, to quickly get complex motion graphics with a non-destructive workflow. The user can make a final composition, then go back to one of its nested compositions, change something, and see that change take effect in the main composition.
Blender has scenes, which are very much like compositions. It would be very useful to have the feed from the camera in Scene 2 render as a texture in Scene 1. Combined with the ability to offset this playback in time, it would be almost exactly what After Effects has.
An orthographic camera in Scene 2 looks at complex HUD rings:
It sends this feed as a texture to Scene 1 (with motion blur and transparency rendered):
In Scene 1 the feed is placed on a plane, duplicated several times, transformed and offset in time to create a rich scene:
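Today the closest manual approximation is rendering the second scene to disk and mapping the sequence onto a plane; a sketch, assuming scenes named "Main" and "HUD", and with none of the live, non-destructive behavior described above:

```python
import bpy

# A sketch of the manual workaround, assuming two scenes named "Main" and
# "HUD" and a saved .blend file (for the // relative paths). Any change to
# "HUD" requires re-rendering the whole sequence by hand.
hud = bpy.data.scenes["HUD"]
hud.render.filepath = "//hud_feed/"
hud.render.image_settings.file_format = 'PNG'
hud.render.image_settings.color_mode = 'RGBA'      # keep transparency
bpy.ops.render.render(animation=True, scene="HUD")

img = bpy.data.images.load("//hud_feed/0001.png")  # first frame of the sequence
img.source = 'SEQUENCE'
# The sequence can now feed an Image Texture node on a plane in "Main".
```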
Textures
- Noise evolution
- Image effects
Noise texture evolution
Blender’s noise texture (or Clouds in Blender Internal) is great, but it lacks the “evolution” option often seen in other software. It’s a static noise that can be scaled up or translated, but in order to animate it we need to combine two or more copies with multiply, overlay or add blending modes and translate or scale them in opposite directions, which usually produces less than desirable results. To make it better and actually useful in motion graphics we need an evolution option (similar, if not identical, to a noise seed value). A cyclic option would be very welcome too, as it would allow the noise evolution to cycle seamlessly.
Animating the noise “evolution” or “seed” should produce a result similar to this (made in After Effects with the Cycle Evolution option on, so it loops):
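The underlying technique is simple: sample a higher-dimensional noise and drive the extra coordinate with time. A sketch using Blender’s own Python noise module (the scale and speed values are arbitrary):

```python
from mathutils import Vector, noise

# A sketch of the technique: sample 3D noise on a 2D plane and drive the third
# coordinate with time, so the pattern "boils" in place instead of sliding.
# SCALE and SPEED are arbitrary. A seamless cycle would additionally require
# sampling the extra dimension(s) along a circle.
SCALE = 0.5
SPEED = 0.05

def sample(x, y, frame):
    evolution = frame * SPEED            # the missing "evolution" dimension
    return noise.noise(Vector((x * SCALE, y * SCALE, evolution)))

print(sample(1.0, 2.0, 0), sample(1.0, 2.0, 10))   # same point, evolving value
```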
Image effects
While we can do a lot with filters on the final render result in the compositor, we lack the ability to apply filters to textures. Simple keyframable filters like blur, find edges and pixelate would go a long way toward easing the workflow for some logo stingers. Example:
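For contrast, here is what already exists for the final render: a compositor sketch of a pixelate effect (scale down, pixelate, scale back up); nothing equivalent can be applied to a texture before rendering:

```python
import bpy

# A compositor sketch of a pixelate effect on the final render (scale down,
# pixelate, scale back up). This only works on the render result; nothing
# equivalent can be applied to a texture before rendering.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

layers = tree.nodes.new(type='CompositorNodeRLayers')
down = tree.nodes.new(type='CompositorNodeScale')
down.inputs['X'].default_value = 0.1     # shrink to 10%
down.inputs['Y'].default_value = 0.1
pix = tree.nodes.new(type='CompositorNodePixelate')
up = tree.nodes.new(type='CompositorNodeScale')
up.inputs['X'].default_value = 10.0      # blow back up, keeping the blocks
up.inputs['Y'].default_value = 10.0
comp = tree.nodes.new(type='CompositorNodeComposite')

tree.links.new(layers.outputs['Image'], down.inputs['Image'])
tree.links.new(down.outputs['Image'], pix.inputs['Color'])
tree.links.new(pix.outputs['Color'], up.inputs['Image'])
tree.links.new(up.outputs['Image'], comp.inputs['Image'])
```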
Relative Nodes
Some nodes, like Blur, Crop, Scale and Translate, have a “relative” checkbox or setting that makes them behave consistently at any render resolution. Other nodes act relative by default. This is important because the same project might be exported for preview at a lower resolution, or a client can decide they want a 4K instead of a 1080p video file. Nodes acting inconsistently across render sizes cause horrible complications for studios using Blender professionally.
The following nodes use pixels as units, and therefore act very differently at different render sizes:
- Transform (for X and Y translation)
These all need a “relative” option, or otherwise need to be made consistent across all render sizes.
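Here is a sketch of the bookkeeping that a “relative” option would remove: manually rescaling pixel-based Transform offsets whenever the render size changes. OLD_X/OLD_Y, the authoring resolution, are an assumption since Blender doesn’t store them:

```python
import bpy

# A sketch of the bookkeeping a "relative" option would remove: rescale the
# pixel-based Transform node offsets whenever the render size changes.
# OLD_X/OLD_Y, the resolution the project was authored at, are an assumption;
# Blender does not store them for you.
OLD_X, OLD_Y = 1920, 1080

scene = bpy.context.scene
fx = scene.render.resolution_x / OLD_X
fy = scene.render.resolution_y / OLD_Y

for node in scene.node_tree.nodes:       # assumes compositor nodes are enabled
    if node.type == 'TRANSFORM':
        node.inputs['X'].default_value *= fx   # pixel offsets scale with width
        node.inputs['Y'].default_value *= fy   # ... and with height
```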
Lens flares
Just lens flares as an effect added to point and spot lights: several preset looks, the ability to edit and add custom lens flare elements, and simple lens flare color options.
Lens effects in Cinema4D:
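The closest thing available today is the compositor’s Glare node, which is screen-space only and knows nothing about the lights; a sketch:

```python
import bpy

# Not the requested feature, just the closest thing available today: the
# compositor Glare node fakes streaks or ghosts on bright pixels, but it is
# screen-space only and knows nothing about the actual lights.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

glare = tree.nodes.new(type='CompositorNodeGlare')
glare.glare_type = 'GHOSTS'   # 'STREAKS', 'FOG_GLOW' and 'SIMPLE_STAR' also exist
glare.threshold = 1.0         # only pixels brighter than this will flare
# Wire it between the Render Layers and Composite nodes to see the effect.
```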