Blender Blog

Blender Motion Graphics Artist’s Wishlist

April 9, 2018


With the experience of being involved in the creation of about 1000 motion graphics templates in Blender for our automated video creation platform Viddyoze, I’ve come to realise that Blender’s incredible versatility is often inhibited by the lack of some of the most basic options used in the creation of motion graphics. The following document describes requests for fixes of existing features, as well as recommendations for additional features that motion graphics artists need in Blender. The areas in which Blender severely lacks, in order of urgency, are:

  1. Text
  2. Particles
  3. Grouped instancing
  4. Scene render result as real time texture
  5. Textures
  6. Relative Nodes
  7. Lens flares


Text

  • Alignment
  • Non-Destructive Conversion to Mesh (gridding)
  • Auto-fitting


Alignment

Horizontal text alignment works as expected, but vertical alignment is completely broken in the current Blender version (2.79a). Below is a comparison of the expected and the actual behavior in Blender.

This is what one might expect from vertical alignment in single line text:

This is, however how it works in Blender:


Things are also not good with text boxes. This is what is expected:

Instead, this is how it works now:


Non-Destructive text object workflow

Text objects are basically curves, and as such we can’t do much interesting with them. The list of modifiers available for text is very limited, and some don’t work as expected. Because of this, animators usually convert text to mesh and then animate it in various ways. But this is bad if we later need to change the text to something else, which is particularly problematic when making motion graphics templates.


Most common issues:

  • Lack of modifiers
  • Bad automatic UV mapping
  • No way to add different materials to extrusion, bevel and face of text

Due to flat triangulated or ngon faces, text deforms badly with shrinkwrap or when following a curve.

Letter A shrink wrapped to a sphere

So we need text to behave as mesh. Cinema4D, for example, has the ability to grid the text face so it can be wrapped and deformed well. This is how the faces of an extruded text object should look:

What is needed:

  1. Have text act as mesh, with the ability to control mesh resolution dynamically and to add modifiers to it just like we would to any other mesh object. Following the C4D example, the text front should have ngon, triangle and grid options, with the bevel and extrusion topology matched to it. No separated surfaces on a single letter (except international letters with additional shapes, of course)
    Text topology:

    Triangles, ngons, grid

    Current result with curve modifier

    Desired look with text gridding

  2. Advanced UV options for the text face, bevel and extrusion, and the ability to set different materials for each

    Desired look

    Current non-destructive text with “Auto Texture” on

    Current non-destructive text with “Auto Texture” and “Use UV for mapping” on

  3. Optionally, simple text animation effects like typewriter, a decoder effect, and simple transformations per character/word/line.



    Character Animation

    Character animation with random order

  4. Ability to emit particles from text in the same way it’s done with mesh objects.
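The typewriter effect from item 3 is easy to specify precisely. A minimal sketch in plain Python (the function name and rate parameter are illustrative, not an existing Blender API):

```python
def typewriter(text, frame, chars_per_frame=0.5, start_frame=0):
    """Visible portion of `text` at a given frame, simulating a
    typewriter reveal at a fixed number of characters per frame."""
    shown = max(0, int((frame - start_frame) * chars_per_frame))
    return text[:shown]

# Revealing "Blender" at half a character per frame:
print(typewriter("Blender", 6))   # → "Ble"
```

The decoder and per-character transform effects would follow the same pattern: a function of (character index, frame) evaluated non-destructively at render time.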

Auto-fitting (this is optional)

For motion graphics work, and motion graphics templates in particular, it’s very important to be able to limit text to a certain area. This is currently not possible in Blender. While there are some complex driver setups that can achieve this with relatively consistent behavior, we need to be able to define a rectangle which will contain the text and resize it dynamically. Something like this for single-line text:

Note here that the text size doesn’t change in the first two cases. It’s only when the text hits the boundary of its container that it starts scaling down.

In the case of a text box:

Again, the text remains at its designated font size until it hits the boundary of its container; only then does it start scaling down.
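The auto-fit rule described above boils down to a single clamp. A sketch of the scale driver in plain Python (illustrative only; a real setup would read the text object's bounding box each frame):

```python
def fit_scale(text_width, box_width):
    """Scale factor for auto-fitting: keep the designated size until
    the text hits the container boundary, then shrink to fit."""
    if text_width <= box_width:
        return 1.0                      # still fits: no change
    return box_width / text_width       # shrink only past the boundary

print(fit_scale(80, 100))   # → 1.0  (text narrower than the box)
print(fit_scale(200, 100))  # → 0.5  (twice too wide: halve it)
```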


Particles

  • Keyframable Particle Emission Rate
  • Billboard particles with motion blur and blending modes

Keyframable emission rate

Almost every other piece of software drives particles with an emission rate, not a total particle count. The lack of a particle emission rate prevents us from starting particle emission gradually, fading it out at the end, or stopping and resuming emission.
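The difference between the two models can be sketched by integrating a keyframed per-frame rate curve into whole-particle birth counts (plain Python; names are illustrative):

```python
def births_per_frame(rate_curve):
    """Integrate a per-frame emission rate (particles/frame) into
    whole-particle birth counts, carrying the fractional remainder."""
    births, carry = [], 0.0
    for rate in rate_curve:
        carry += max(0.0, rate)
        n = int(carry)        # whole particles born this frame
        births.append(n)
        carry -= n
    return births

# Ramp up, burst, stop, then resume: a shape a fixed
# total count spread over a start/end frame cannot express.
print(births_per_frame([0.5, 0.5, 2, 0, 0, 1]))  # → [0, 1, 2, 0, 0, 1]
```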

Currently in Blender

Desired look with keyframable emission rate:

Particle System in Unity3D

Particles in After Effects

Billboard particles with depth of field, motion blur and blending modes

Billboard particles are some of the most useful ones in motion graphics. They are not actual objects, but they still have material properties. In most other software they can have motion blur and blending modes like add and multiply, so they can easily simulate bright fire or dark smoke using the correct textures.

Current billboard particles are not useful outside the Blender Internal render engine and can’t have correct motion blur or depth of field, since they render the vector and depth passes for the entire square instead of only the visible texture.

Billboard particles and their depth and vector passes.

For Cycles, the only viable particle approach is instancing objects as particles, which can be resource-intensive for large particle counts. With the Eevee render engine, it would be very useful to have billboard particles working similarly to game engines such as Unity.
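What "correct" passes would mean can be stated per pixel: the depth and vector passes should only cover pixels where the billboard texture is actually visible. A minimal sketch over nested lists (plain Python; the infinite far-plane value and zero threshold are assumptions):

```python
FAR = float("inf")  # assumed background depth for masked-out pixels

def mask_billboard_passes(alpha, depth, vector, threshold=0.0):
    """Restrict a billboard's depth and vector passes to pixels where
    its texture alpha is above `threshold`, instead of the full square."""
    h, w = len(alpha), len(alpha[0])
    masked_depth = [[depth[y][x] if alpha[y][x] > threshold else FAR
                     for x in range(w)] for y in range(h)]
    masked_vec = [[vector[y][x] if alpha[y][x] > threshold else (0.0, 0.0)
                   for x in range(w)] for y in range(h)]
    return masked_depth, masked_vec
```

With passes masked this way, compositor depth of field and vector blur would only affect the visible part of the texture, not the whole card.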

Grouped instancing

In motion graphics it’s very important to have the ability to create a complex animation somewhere outside our main scene, then instance that animation in the main scene and offset it in time and space to create beautiful and complex motion.

For that we need the ability to instance an animated group of objects and offset its animation in time per instance. Currently we can use the NLA editor to offset an individual object instance’s animation in time, but there is no way to do it with a group of objects. Example:

With a single motion graphics element like this looping curve:

We can instance it and transform it:

But it’s boring, so we can use the NLA editor to offset the animations in time so they play out in sequence. The important thing to note is that these are still instances sharing the same action: we can go back, change one, and all instances will reflect the change:

Unfortunately this is only possible with a single object. Take the example of this group of objects:

We can surely instance and transform them, but since they all play out at the same time, the effect is uninteresting and repetitive.

We need to be able to do this:

But this shouldn’t be limited to object transformations. Material properties (for fading in and out, glows, color changes, …) as well as texture animations (image sequences, noise texture offsets, …) should also be offset in time. It is similar to placing a character asset in a game engine: it starts its own idle or walk animation, with all the texture and material animations contained in it, at the moment of instancing, instead of playing the same animation loop as all other instances of that character in the game.
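The behaviour described above amounts to each instance evaluating the shared action at its own local frame. A minimal sketch in plain Python (the looping parameter is an assumption, not existing Blender behaviour):

```python
def local_frame(global_frame, instance_index, offset_per_instance,
                loop_length=None):
    """Frame at which a given instance evaluates its shared action,
    offset in time per instance; optionally wrapped for looping clips."""
    f = global_frame - instance_index * offset_per_instance
    if loop_length:
        f %= loop_length
    return f

# Three instances of a 40-frame loop, each delayed by 10 frames:
print([local_frame(25, i, 10, loop_length=40) for i in range(3)])  # → [25, 15, 5]
```

Since every instance still reads the same action (and the same material and texture curves), editing the source animation would update all offset copies at once.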

Scene render result as real time texture

For the same reasons as the previous item, we need to be able to create something complex elsewhere and instance it around our main scene. If we look at the example of After Effects, it uses multiple compositions (each having its own resolution and effects), nested one inside another, to quickly build complex motion graphics with a non-destructive workflow. The user can make a final composition, then go back to one of its nested compositions, change something, and observe that change taking effect in the main composition.

Blender has Scenes, which are very much like compositions. It would be very useful to have the feed from a camera in Scene 2 render as a texture in Scene 1. Combined with the ability to offset this playback in time, it would be almost exactly what After Effects has.


Orthographic camera in Scene 2 looks at complex HUD rings

It sends this feed as a texture to Scene 1 (with motion blur and transparency rendered):

In Scene 1 it’s placed on a plane, duplicated several times, transformed and offset in time, to create a rich scene:


Textures

  • Noise evolution
  • Image effects

Noise texture evolution

Blender’s noise texture (or Clouds in Blender Internal) is great, but it lacks the “evolution” option often seen in other software. It is a static noise that can be scaled up or translated, but to get it to animate we need to combine two or more noises with multiply, overlay or add blending modes and translate or scale them in opposite directions, which usually produces less than desirable results. To make it better, and actually useful in motion graphics, we need an evolution option (similar if not identical to a noise seed value). A cyclic option would be very welcome too, as it would allow the noise evolution to cycle seamlessly.

Animating noise “evolution” or “seed” should produce result similar to this (Made in After Effects with Cycle Evolution option on so it loops):
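The evolution parameter can be thought of as one extra interpolation axis on the noise. A minimal sketch in plain Python: 3D value noise where animating `t` smoothly evolves a 2D slice (this is simple value noise for illustration, not Blender's actual noise implementation):

```python
import math
import random

def value_noise(x, y, t, seed=0):
    """Smoothly interpolated value noise in (x, y, t); animating `t`
    plays the role of the 'evolution' parameter."""
    def lattice(ix, iy, it):
        # Deterministic pseudo-random value per lattice point.
        return random.Random(hash((ix, iy, it, seed))).random()
    def fade(a):
        return a * a * (3 - 2 * a)       # smoothstep easing
    def lerp(a, b, w):
        return a + (b - a) * w
    x0, y0, t0 = math.floor(x), math.floor(y), math.floor(t)
    fx, fy, ft = fade(x - x0), fade(y - y0), fade(t - t0)
    def corner(i, j, k):
        return lattice(x0 + i, y0 + j, t0 + k)
    # Trilinear interpolation between the 8 surrounding lattice values.
    near = lerp(lerp(corner(0, 0, 0), corner(1, 0, 0), fx),
                lerp(corner(0, 1, 0), corner(1, 1, 0), fx), fy)
    far = lerp(lerp(corner(0, 0, 1), corner(1, 0, 1), fx),
               lerp(corner(0, 1, 1), corner(1, 1, 1), fx), fy)
    return lerp(near, far, ft)
```

A cyclic evolution would then amount to sampling `t` along a closed loop through the lattice so the last frame lands back on the first.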

Texture effects

While we can do much with filters on the final render result in the compositor, we lack the ability to apply filters to textures. Simple keyframable filters like blur, find edges and pixelate would go a long way towards easing the workflow for some logo stingers. Example:
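To illustrate how simple such a filter is, here is a minimal pixelate over a 2D grayscale image in plain Python (the function and the block-averaging approach are illustrative, not an existing Blender feature):

```python
def pixelate(img, block):
    """Pixelate a 2D grayscale image (list of rows) by replacing each
    block×block cell with its average value; `block` would be the
    keyframable parameter."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cell = [img[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            avg = sum(cell) / len(cell)
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = avg
    return out
```

Animating `block` from the image size down to 1 would produce the classic pixelated logo reveal.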

Relative Nodes

Some nodes like Blur, Crop, Scale and Translate have a “relative” checkbox or setting which causes them to behave consistently at any render resolution. Other nodes act relative by default. This is important, as sometimes the same project might be exported for preview at a lower resolution, or clients can decide they want a 4K instead of a 1080p video file. Nodes acting inconsistently across render sizes cause horrible complications for studios using Blender professionally.

Glare Streaks with same settings rendered at 100%, 50% and 25% resolution

The following nodes use pixels as units, and therefore act very differently at various render sizes:

  • Glare
  • Displace
  • Transform (for X and Y translation)
  • Dilate/Erode
  • Filter
  • Inpaint

These all need updating with a “relative” option, or otherwise need to be made consistent across all sizes.
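One consistent fix is to store such settings as a fraction of the render size and convert to pixels only at render time, which is what the existing relative options effectively do. A sketch in plain Python (names are illustrative):

```python
def to_pixels(relative_value, render_width):
    """Convert a resolution-independent ('relative') setting into the
    pixel value a node would use at a given render width."""
    return relative_value * render_width

# The same relative streak length stays proportional at every size:
print([to_pixels(0.125, w) for w in (3840, 1920, 960)])  # → [480.0, 240.0, 120.0]
```

With pixel units, by contrast, the same setting covers an eighth of a 4K frame but a quarter of a 960-pixel preview, which is exactly the inconsistency shown in the Glare comparison above.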

Lens Flares

Just lens flares as an effect added to point and spot lights: several preset looks, the ability to edit and add custom lens flare elements, and simple lens flare color options.

Lens effects in Cinema4D:



  • April 10, 2018 at 1:21 pm

    Aaah, the Blender text problems are endless (but it’s old so that’s ok). It would be nice to access font variables on a per-character level. This would allow true kerning and the ability to switch fonts within the object. And it would be useful to access the text box as a clipping field for modern reveal effects. We could use the text box as a mask. This would allow a slide-on within the width/height of the text object, or we could alter the box size to perform a wipe effect. To be honest, it would be useful to do this per line OR per paragraph. Perhaps this could just be some sort of boolean, but the boolean modifier is very bad when animated across non-manifold shapes.
    Thanks for the great article!

  • April 10, 2018 at 2:08 pm

    Oh I nearly forgot… speaking of text, wouldn’t it be nice to have a better text tool in the VSE? Then the user could make better tutorial videos with legible notes (Supers or superimposed captions), or a director could add credits really easily without having to use another scene or external application. There is already a basic text tool for subtitles, this could easily be enhanced to provide the text attributes I mention in the earlier reply i.e. Clipping box, per character definition as well as illustration effects, like outline, drop shadow (soft/hard/extruded) and face colouring.

    • April 10, 2018 at 2:57 pm

      Oh yes. I only tried using it once and couldn’t believe how painfully basic it is.

  • Leonov Timofey
    April 10, 2018 at 7:26 pm

    And .sbsar file native support, more flexibility as texture pack

  • April 10, 2018 at 7:38 pm

    The idea of making the scenes behave like compositions in AE is genius! Maybe the new concept of collections for 2.8 could allow for something like this?
    As for instancing, offset and in general procedural animation the Animation Nodes addon should be the guide to build a system inside Blender, it would bring it close to what MASH is to Maya, as long as presets and templates can be made, saved and shared between files.
    One more thing I would add for particles is physics, particles should be able to collide with each other and we should be able to choose between soft bodies or rigid bodies behaviour when using objects as particle instances.
    Thanks for the nice article, let’s hope this is seen by the devs 🙂

    • April 11, 2018 at 7:41 am

      Physics in particles. Yes. I’ve been experimenting with Particle Instancer addon that converts particles to rigid objects at some point during their lifetime, but it’s not very intuitive and it really takes a lot of trial and error to get it right.

  • April 10, 2018 at 9:19 pm

    I think that your issues with the noise might stem from not understanding how to operate on 3-dimensional noise algorithms.

    If you use an input vector (ie: from a UV coordinate set) and a vector mapping node between it and the noise, you can ‘evolve’ your noise by translating on the Z axis. This is how it’s done in Touch Designer, for example.

    I also don’t know of any noise functions where the random seed can be animated to produce smoothly interpolated results. Non-consecutive results for consecutive seed values is an essential requirement for noise, as predictable results are not ‘random’.

    I do think the noise function in blender could use some of the options we see in implementations of simplex and Perlin functions on RT packages (ie: harmonic steps and offset)

    Moving a 2D sampling plane through a static noise volume is the best way to get smooth interpolation of 2D noise.

    • April 11, 2018 at 7:33 am

      Woah! This is an eye opener. Thanks for this. It works great on a plane. It won’t work on a volumetric texture of course, but it’s still great that this works. Makes perfect sense now. Thanks!

    • Gottfried Hofmann
      April 11, 2018 at 8:23 am

      Florian, procedural textures in 3D programs are usually 4D(!). The fourth dimension is what allows users to “evolve” the noise in 3D. Think of it like having a 3-dimensional slice in a 4-dimensional space. Moving the slice through the fourth dimension is what creates the evolution of the texture, but in 3D space! Usually that fourth dimension is exposed as a property called “time” or “evolution” in the noise function. It has nothing to do with the seed value.

      Your proposed method is a well-known hack that does not work in the following situations:
      * Volumetric texturing (clouds etc.)
      * Force fields (the turbulence force field is internally just perlin noise)
      * Whenever you apply something to a 3D model like for example the displace modifier and you want evolution

      Every DCC except Blender has procedural textures like Perlin noise implemented in 4D because you really need that for certain effects. Blender for some reason decided to skip on the fourth dimension for more than two decades now…

      But there is perlin noise with evolution in Blender – it’s hidden in an OSL template (Templates -> Open Shading Language -> Noise). That is because OSL requires “Time” as an input. Yes, 4D noise is so important that it is a requirement of Open Shading Language.
