Karyn — Audio/Systems Programmer on a Story-Driven Action-Adventure Game

Role: Audio/Systems Programmer · Freelance Contractor · Duration: 2021–2022 · Engine: UE4 · Platforms: Windows · In Development
Quartz · Reaper · Lua · C++ · Blueprints · UE4 · Facial Mocap
The Project

Karyn is a story-driven, single-player, action-adventure platformer developed by MythoWorks Inc. Set in 2035, the game follows 60-year-old Karyn — a real estate developer and social media star who becomes trapped in a virtual world after a failed procedure. The player controls Karyn while she communicates directly with them, breaking the fourth wall. The game is described as a mind-bending dark comedy exploring themes of femininity, judgment, myth, and consciousness.

The project was led by Gethin Aldous (CEO/Founder of MythoWorks, former Rockstar Games performance director) and built in Unreal Engine 4. A prior team had already established foundational assets — character models, face rigs, some level design — before our contractor team joined.

Status: Development was paused due to budget constraints. The original contractor team completed their engagement, a successor team later took over, and the project was ultimately cancelled or put on hold.

Links: Website


My Role

I worked as an Audio/Systems Programmer on a contractor team, part-time over approximately 1 year (2021–2022). I started with general gameplay tasks — locomotion, minigame implementation, doors, NPC paths, health systems — and quickly moved into owning the audio and dialog systems entirely: designing, architecting, implementing, and documenting them.

Beyond programming, I authored all technical documentation for the dialog system and voiceline creation pipeline, and created pipeline tooling scripts in Lua for Reaper. I also designed the architecture for both the dialog and music systems from scratch, handled the engine migration from UE 4.23 to 4.27, and built a proof-of-concept dialog branching prototype that the successor team later expanded using Articy Draft.


What I Built

Procedural Dialog System

The centerpiece of my work on Karyn. I architected and built a fully data-driven procedural dialog system from scratch that synchronizes voiceline audio with facial motion capture animation playback.

Voiceline Queue & Prioritization — The system manages a queue of voicelines with 5 priority levels that determine insertion order: add to end of queue, insert after active sequence, insert after active line, interrupt active line, and interrupt same character. This allows high-priority lines (e.g., a scream when the player falls off a ledge) to cut through lower-priority ambient dialog naturally.
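The insertion rules above can be sketched as follows — a minimal, engine-free illustration rather than the shipped UE4 code, with hypothetical enum and struct names standing in for the real ones:

```cpp
#include <cstddef>
#include <deque>
#include <string>

// Hypothetical names for the five insertion priorities described above.
enum class EInsertPriority {
    AddToEnd,              // append to the end of the queue
    AfterActiveSequence,   // insert once the current multi-line exchange ends
    AfterActiveLine,       // insert right after the line now playing
    InterruptActiveLine,   // stop the current line and play immediately
    InterruptSameCharacter // interrupt only if the same character is speaking
};

struct FVoiceline {
    std::string Id;
    std::string Character;
};

class FVoicelineQueue {
public:
    // Returns true if the currently playing line should be interrupted.
    // ActiveSequenceEnd is the queue index where the active exchange ends.
    bool Enqueue(const FVoiceline& Line, EInsertPriority Priority,
                 const FVoiceline* Active, std::size_t ActiveSequenceEnd) {
        switch (Priority) {
        case EInsertPriority::AddToEnd:
            Pending.push_back(Line);
            return false;
        case EInsertPriority::AfterActiveSequence:
            Pending.insert(Pending.begin() + ActiveSequenceEnd, Line);
            return false;
        case EInsertPriority::AfterActiveLine:
            Pending.push_front(Line);
            return false;
        case EInsertPriority::InterruptActiveLine:
            Pending.push_front(Line);
            return Active != nullptr;
        case EInsertPriority::InterruptSameCharacter:
            Pending.push_front(Line);
            return Active && Active->Character == Line.Character;
        }
        return false;
    }

    std::deque<FVoiceline> Pending;
};
```

Returning a bool keeps the queue logic separate from the playback code that actually stops the active line's audio.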

Interruption & Sequence Tracking — 3 interruption behaviors control what happens to a voiceline when it gets interrupted: play after the interrupting line finishes, remove if interrupted, or remove the entire sequence if interrupted. Voicelines are tagged with sequence states (single line, first in sequence, in sequence, last in sequence) to maintain conversational coherence — so when a multi-line exchange gets interrupted, the system knows to discard the remaining lines rather than play orphaned replies out of context.
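A sketch of how the sequence tags make the discard decision possible — illustrative names, not the project's actual identifiers:

```cpp
#include <deque>
#include <string>

enum class ESequenceState { SingleLine, FirstInSequence, InSequence, LastInSequence };
enum class EInterruptBehavior { PlayAfterInterrupt, RemoveIfInterrupted, RemoveSequenceIfInterrupted };

struct FQueuedLine {
    std::string Id;
    int SequenceId;                 // lines in one exchange share an id
    ESequenceState State;
    EInterruptBehavior OnInterrupt;
};

// Called when the line now playing gets interrupted: decide what survives.
inline void HandleInterruption(const FQueuedLine& Interrupted,
                               std::deque<FQueuedLine>& Queue) {
    switch (Interrupted.OnInterrupt) {
    case EInterruptBehavior::PlayAfterInterrupt:
        // Re-queue the interrupted line so it resumes after the new line.
        Queue.push_front(Interrupted);
        break;
    case EInterruptBehavior::RemoveIfInterrupted:
        break; // drop just this one line
    case EInterruptBehavior::RemoveSequenceIfInterrupted:
        // Drop every remaining line of the same exchange so no orphaned
        // replies play out of context.
        for (auto It = Queue.begin(); It != Queue.end();) {
            It = (It->SequenceId == Interrupted.SequenceId) ? Queue.erase(It)
                                                            : It + 1;
        }
        break;
    }
}
```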

Simultaneous Voicelines & Edge Cases — The system supports multiple voicelines playing at the same time (e.g., two characters speaking over each other), delayed voiceline playback, and death/respawn handling where the queue pauses on death and resumes after respawn.

Dialog Manager Architecture — Built with a base class and extensible child classes so level designers could create level-specific dialog scripting without modifying the core system. I also prototyped a dialog branching interface that the successor team later adopted and expanded using Articy Draft.

Facial Motion Capture Playback

Built a system to play iPhone LiveLink facial capture data synchronized with voiceline audio. The LiveLink footage was captured simultaneously with voice acting performances — actors’ facial movements drove a character whose face was a pixelated screen with stylized features. Despite the stylized aesthetic, the live-captured performances gave the animations a uniquely lifelike quality, as if a real person were behind the simple pixelated display.

The system uses LevelSequence assets containing morph target keyframes, played back via additive animation blending in the animation blueprint. Each voiceline’s start timestamp and a per-take AnimationSyncValue offset (stored in the data tables) align the separately captured audio and facial animation tracks at playback time. When a voiceline is interrupted, both audio and animation stop simultaneously.
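Once the per-take offset exists, the alignment step itself is simple; a sketch with hypothetical field names:

```cpp
// Per-take row as stored in the data tables (field names illustrative).
struct FVoicelineTake {
    float AudioStartTime;      // voiceline start timestamp within the take
    float AnimationSyncValue;  // per-take offset between audio and mocap clocks
};

// Time (in seconds) at which to start the LevelSequence so the morph-target
// keyframes line up with the audio that begins playing now.
inline float AnimationStartTime(const FVoicelineTake& Take) {
    return Take.AudioStartTime + Take.AnimationSyncValue;
}
```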

Adaptive Music System

Designed the architecture and built an adaptive music manager from scratch. The system plays quantized music stems that react dynamically to gameplay — when the player enters a new area or triggers a transition, the music doesn’t cut abruptly but waits for the next beat boundary before switching.

Built on Unreal Engine’s Quartz subsystem for beat-accurate quantization. The system supports extension stems (looping segments that play while waiting for the next transition), bounces (transitional musical phrases with volume fading), queued quantized stem switching, and audio component switching. Music transitions are triggered by fall triggers and gameplay trigger volumes placed throughout the level.
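The deferral at the heart of beat-quantized switching can be sketched without the engine — the real implementation leans on Quartz's clock and quantization boundaries; the names here are illustrative:

```cpp
#include <cmath>
#include <string>

// Minimal model of the music manager's timing state.
struct FMusicState {
    double Bpm = 120.0;
    double SongTime = 0.0;   // seconds since the current stem started
    std::string PendingStem; // queued by a gameplay trigger volume
};

// Seconds until the next beat boundary, when the queued stem may start.
inline double TimeToNextBeat(const FMusicState& S) {
    const double BeatLen = 60.0 / S.Bpm;
    const double IntoBeat = std::fmod(S.SongTime, BeatLen);
    return IntoBeat == 0.0 ? 0.0 : BeatLen - IntoBeat;
}
```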

Voiceline Pipeline & Tooling

Designed and built an end-to-end pipeline for taking raw voice acting recordings and turning them into fully implemented in-game dialog with minimal manual work.

Reaper Lua Script — A custom script that exports voiceline timestamps in Unreal Engine Data Table format directly from Reaper, eliminating manual data entry entirely.

Naming Convention System — A Scene_VoicelineNumber_TakeID naming convention that drives the entire data-driven pipeline — from Reaper regions through to data table lookups and audio file references in-engine.
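As an illustration, a name following this convention can be split back into its components — a hypothetical parser, not project code, and it assumes scene names contain no underscores:

```cpp
#include <sstream>
#include <string>

struct FVoicelineKey {
    std::string Scene;
    int VoicelineNumber = 0;
    int TakeId = 0;
};

// Parses "Scene_VoicelineNumber_TakeID" into its parts; returns false on
// malformed input.
inline bool ParseVoicelineKey(const std::string& Name, FVoicelineKey& Out) {
    std::istringstream In(Name);
    std::string Scene, Num, Take;
    if (!std::getline(In, Scene, '_') || !std::getline(In, Num, '_') ||
        !std::getline(In, Take, '_'))
        return false;
    Out.Scene = Scene;
    Out.VoicelineNumber = std::stoi(Num);
    Out.TakeId = std::stoi(Take);
    return true;
}
```

Because the same key appears in Reaper region names, rendered file names, and data table row names, one parse routine is enough to cross-reference all three.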

Automated Rendering — Reaper wildcard-based file naming ensures exported audio files automatically match their corresponding data table rows.

Documentation — Wrote comprehensive step-by-step pipeline documentation with annotated screenshots, enabling non-programmers to create and implement voicelines independently.

Gameplay Systems

Implemented various gameplay features during the early project phase: character locomotion, minigame implementation, doors, NPC paths, and health systems.

Engine Migration & Memory Management

Migrated the project from Unreal Engine 4.23 to 4.27, handling deprecations and API changes across multiple engine versions. Designed a data table architecture using soft references and per-take table splitting to prevent large dialog data tables from loading all audio assets into memory simultaneously.
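The on-demand loading idea behind the soft-reference split can be sketched with standard-library stand-ins for UE4's soft object pointers (all names here are illustrative): the master table stores only paths, and a per-take table is loaded the first time it is requested.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Stand-in for a per-take data table holding that take's audio references.
struct FTakeTable {
    std::vector<std::string> AudioFiles;
};

class FDialogTableRegistry {
public:
    using FLoader = std::function<FTakeTable(const std::string&)>;
    explicit FDialogTableRegistry(FLoader InLoader) : Loader(std::move(InLoader)) {}

    // Resolves the soft reference, loading the per-take table only once and
    // only when it is first needed.
    const FTakeTable& Get(const std::string& TakePath) {
        auto It = Loaded.find(TakePath);
        if (It == Loaded.end())
            It = Loaded.emplace(TakePath, Loader(TakePath)).first;
        return It->second;
    }

    std::size_t LoadedCount() const { return Loaded.size(); }

private:
    FLoader Loader;
    std::map<std::string, FTakeTable> Loaded;
};
```

In-engine, the loader role is played by resolving a soft object reference, so audio assets for takes the player never reaches are never brought into memory.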


Key Challenges & Solutions

Synchronizing separately captured audio and facial animation: Face animation (iPhone LiveLink) and voiceline audio were captured by different software, producing inherent desync. Solved by implementing a per-take AnimationSyncValue offset stored in the data tables, applied at playback time to align the LevelSequence animation with audio playback.

Maintaining conversational coherence through interruptions: When a voiceline gets interrupted (e.g., the player falls and Karyn screams), reply voicelines from other characters could play out of context. Solved with a sequence tracking system — voicelines in the same conversation are tagged with sequence states, and the “RemoveSequenceIfInterrupted” behavior discards the entire exchange when any part is interrupted.

Beat-accurate music transitions: Music stems needed to transition cleanly on beat boundaries rather than at arbitrary times. Solved by implementing the Quartz subsystem for quantization, with a queue system that holds pending stems until the next valid transition point.

Eliminating manual voiceline implementation: Manually entering timestamps and file references for hundreds of voicelines was error-prone and slow. Solved by building a Lua script for Reaper that auto-generates Unreal Engine data tables, combined with a wildcard-based rendering workflow that auto-names exported audio files to match data table rows.

Memory management for dialog data: Large monolithic data tables caused memory issues in Unreal Engine. Solved by splitting into individual per-take data tables referenced via soft object references from a master table, so assets load on demand rather than all at once.


Tools & Tech

Unreal Engine 4 (4.23 → 4.27), Blueprints, Unreal Audio System, Quartz, LiveLink, LevelSequence, Data Tables, Reaper, Lua (ReaScript), Ableton Live Suite, PC, Console

Need dialog systems or adaptive music for a narrative game?

Tell me what you're building and what you need. I typically respond within 24–48 hours.

Start a Conversation