What I did:
Production is now moving at pace. Since the last post, scenes 2 and 3 and the opening of scene 4 have been animated and rendered, the final scene is in the can, and a significant amount of technical ground has been covered: walk cycles, NLA strip workflow, lip sync and voice generation.
Rendering progress
Render times continue to be faster than originally calculated, which is providing welcome breathing room in a tight schedule. The hallway scene rendered at 20-25 seconds per frame, the driveway scenes at around 30 seconds, and the current scene 4 hallway with ‘Retired John’ is coming in at just 17 seconds per frame.
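Per-frame times convert into shot-level render budgets with simple arithmetic. As a sanity check on the schedule, here is a small sketch — the shot lengths below are hypothetical examples, not the film's actual figures:

```python
# Rough render-budget estimate from per-frame render times.
# Shot lengths are hypothetical, not the film's real shot durations.
FPS = 24

shots = {
    "hallway":        (10 * FPS, 22.5),  # 10 s shot @ ~20-25 s/frame (midpoint)
    "driveway":       (8 * FPS, 30.0),   # 8 s shot @ ~30 s/frame
    "scene4_hallway": (12 * FPS, 17.0),  # 12 s shot @ 17 s/frame
}

for name, (frames, sec_per_frame) in shots.items():
    hours = frames * sec_per_frame / 3600
    print(f"{name}: {frames} frames -> {hours:.1f} h")
```

Even a few seconds saved per frame compounds quickly at 24 frames per second, which is why the 17-second frames are such welcome news.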
Everything up to and including scene 3 is now rendered, and the final nursing home scene has also been completed and rendered. Rendering the final scene early was a practical decision, as it’s largely a static scene with minimal animation.
Walk cycles and NLA workflow
Three walk cycles have now been completed across different scenes and character variants, and the workflow is becoming much more intuitive. The approach uses Blender’s NLA Editor to combine multiple action strips simultaneously:
The walk cycle itself (as mentioned in the previous post) is created on the spot in the Action Editor, then pushed down to the NLA Editor as a repeating strip, handling leg movement and body bounce. A separate root bone movement action runs alongside it on a different track, carrying the character forward through the scene. The two strips play simultaneously and combine to produce a character walking through space. Additional strips for facial animation, head turns and blinks are layered on top on further tracks without touching the walk cycle itself, keeping actions clean and reusable.
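Outside Blender, the layering idea reduces to simple addition: an in-place cycle plus a per-frame root offset. A minimal plain-Python sketch with made-up numbers (the real combination is done by the NLA's strip blending, not hand-written code):

```python
import math

# Hypothetical 24-frame in-place walk cycle: forward position returns to 0
# each loop; only the body bounce changes. Numbers are illustrative.
CYCLE_LEN = 24

def walk_cycle(frame):
    """Local-space sample of the repeating cycle: (forward, bounce)."""
    t = (frame % CYCLE_LEN) / CYCLE_LEN
    bounce = 0.05 * abs(math.sin(2 * math.pi * t))  # two bounces per cycle
    return 0.0, bounce

def root_motion(frame, speed=0.04):
    """Separate track: the root bone carries the character through the scene."""
    return frame * speed

def combined(frame):
    """What the layered NLA strips effectively produce."""
    fwd, bounce = walk_cycle(frame)
    return root_motion(frame) + fwd, bounce
```

After two full cycles the pose repeats exactly, but the character has travelled forward — which is precisely why the cycle and the root motion can live on separate, reusable strips.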
The third walk cycle took a different approach, as only a couple of steps were needed. Rather than using the NLA root strip method, the movement was handled frame by frame, positioning the character manually on each step – a useful technique for very short movements.
Appending actions and characters between Blender files proved more complicated than expected. The cleanest solution was to build the walk cycle in a dedicated clean file containing just the character, then append the entire collection (including the rig, mesh, props and WGTS rig) into the scene file with ‘Localize All’ checked. This brought everything in as fully editable local data with the action already correctly assigned to the rig.
Sourcing voices with ElevenLabs
The voiced dialogue scenes required realistic speech, so voices were generated using ElevenLabs text-to-speech AI. I’d never used this site before, but the service I’d used previously had nothing close to what I needed on this occasion, such as Mum shouting in the roundabout scene.
Lip sync
With voices generated, lip sync was tackled using the built-in Blender ‘Lip Sync’ extension, which I discovered after watching a tutorial by MK Graphics – How to Create Lip Sync Animation in Blender. In preparation for this, a full set of mouth shape keys was created specifically for lip sync. These are separate from and in addition to the existing facial expression shape keys for smiling, sadness and so on. The lip sync shape keys cover the phoneme mouth shapes required by the extension, giving the system the correct vocabulary of mouth positions to work with.
The extension analyses the audio waveform and automatically drives the appropriate shape keys to match the speech, generating the lip sync animation without requiring every mouth shape to be manually keyframed to the dialogue.
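The underlying mechanism — mapping detected phonemes to shape-key values over time — can be sketched like this. The phoneme labels and key names below are purely illustrative, not the extension's actual identifiers:

```python
# Hypothetical phoneme -> mouth shape-key mapping, in the spirit of what a
# lip sync tool produces. All names here are made up for illustration.
PHONEME_TO_KEY = {
    "AI":   "mouth_open",
    "O":    "mouth_round",
    "E":    "mouth_wide",
    "MBP":  "mouth_closed",
    "FV":   "mouth_teeth_lip",
    "rest": "mouth_rest",
}

def keyframes_from_phonemes(timed_phonemes):
    """Turn (frame, phoneme) pairs into (frame, shape_key, value) keyframes.

    Each phoneme drives its key to 1.0; a real implementation would also
    keyframe the previous key back to 0 so shapes blend rather than stack.
    """
    keys = []
    for frame, phoneme in timed_phonemes:
        key = PHONEME_TO_KEY.get(phoneme, "mouth_rest")
        keys.append((frame, key, 1.0))
    return keys

dialogue = [(1, "MBP"), (4, "AI"), (8, "O"), (12, "rest")]
print(keyframes_from_phonemes(dialogue))
```

Having a dedicated set of phoneme shape keys, separate from the expression keys, is what gives a system like this a clean vocabulary to drive.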
It remains to be seen how well the automatic lip sync has matched the generated speech to the shape keys in practice — some manual refinement may be needed to tighten up any shapes that don’t read correctly on screen. However, even an approximate lip sync that reads convincingly is far faster than animating it by hand.
Final scene and score
The nursing home scene is largely complete and rendered. This includes the memory cabinet sequence, which forms the emotional centrepiece of the final act. The one element still undecided is how to handle the ‘I AM JOHN’ reveal transition, which needs further thought before it can be finalised.
Martin has a first draft of the score in hand but won’t be refining it further until he receives the final timing and feedback notes from the completed cut. That handoff is dependent on the animation being locked.
What I learned:
The NLA Editor is one of Blender’s most powerful tools for character animation once you understand the logic of layering strips, which is very much like After Effects and Premiere Pro. Keeping actions clean and single-purpose makes the whole system far more flexible and easier to edit than cramming everything into one action.
Appending between files works best when done at the collection level with ‘Localize All’ enabled. Appending individual objects or actions in isolation leads to broken links and unassigned data blocks.
AI voice generation is a practical and credible solution for solo animation production. The quality of ElevenLabs output is convincing enough to serve the film’s needs.
Next:
The remainder of scene 4 still needs to be animated – covering the Peugeot 308cc driving away, the daughter’s house interior with the clock animation showing time passing, and John’s late arrival after his three-hour detour. Once scene 4 is complete, the roundabout incident, garage crash and licence renewal scenes remain. After that the full animation is locked, the final composition can be assembled and colour treated, and the cut goes to Martin for the final score before heading to the user testers. Still a way to go.

