Playbook3D
At Playbook3D, I developed core features and internal tools for a collaborative XR design platform built in Unity. My responsibilities included implementing systems for multi-user collaboration, passthrough and spatial anchors, keyframe-based animation, and object selection and manipulation.
I also developed a Unity package and Blender add-on that enabled in-editor image and video capture, integrating directly with Playbook's ComfyUI generative AI pipeline to support real-time AI-enhanced rendering across both platforms.
Software Engineer (2021 - 2025)
Unity | C# | Blender | Python | ComfyUI | GitHub


Playbook is a Unity-based tool for collaboratively designing, prototyping, and refining interfaces across XR and the web. During my time on this project, I contributed to several key systems:


Playbook3D
Unity | C#
Object Selection & Manipulation
I implemented three main interaction methods: direct grabbing (using controllers or hand tracking), ray-based pointing, and gimbal controls for precise translation, rotation, and scaling. To support bulk editing, I added group selection capabilities and designed a visual wireframe system to clearly highlight selected objects.
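The gimbal's translate, rotate, and scale operations compose into a standard scale-rotate-translate transform. A minimal 2D sketch of that composition in plain Python (illustrative only, not the Unity implementation, which operates on full 3D transforms):

```python
import math

def apply_trs(point, translation, angle_deg, scale):
    """Apply scale, then rotation, then translation to a 2D point,
    mirroring the order a gimbal-style manipulator composes its edits."""
    x, y = point[0] * scale, point[1] * scale
    a = math.radians(angle_deg)
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return (rx + translation[0], ry + translation[1])
```

Applying the operations in a fixed order keeps the result predictable: scaling a rotated object, for example, never shears it.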
Hand Tracking & Passthrough
I integrated Meta's XR SDK to add passthrough, spatial anchors, and hand tracking. This let users see and interact with their real environment while working with virtual objects.
Real-Time Cross-Platform Collaboration
Playbook required real-time synchronization between multiple users across web and Oculus. Using Normcore’s networking system, I built components that synchronized object transforms, user positions, and selection states, ensuring a consistent shared environment for all participants.
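Keeping multiple clients consistent comes down to resolving concurrent writes to the same object. A hypothetical last-writer-wins sketch in plain Python (the names are my own; Normcore's actual data model and API differ):

```python
class TransformSync:
    """Track the latest transform per object, resolving concurrent
    updates by timestamp (last-writer-wins). Illustrative sketch only."""

    def __init__(self):
        self.state = {}  # object_id -> (timestamp, transform)

    def receive(self, object_id, timestamp, transform):
        # Accept an update only if it is newer than what we already hold.
        current = self.state.get(object_id)
        if current is None or timestamp > current[0]:
            self.state[object_id] = (timestamp, transform)

    def get(self, object_id):
        entry = self.state.get(object_id)
        return entry[1] if entry else None
```

The same pattern extends to user positions and selection states: each is just another keyed piece of replicated state.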
Animation & Keyframing
I designed and implemented a custom animation timeline system, allowing users to create, preview, and trigger keyframed animations. This system included support for playback controls and event-driven behaviors like “on hover” or “on click” triggers.
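At its core, a keyframe timeline samples an interpolated value at the current playhead time. A minimal linear-interpolation sketch (illustrative, not the production system, which also handles easing and event triggers):

```python
def sample(keyframes, t):
    """Return the linearly interpolated value at time t from a sorted
    list of (time, value) keyframes, clamping outside the timeline."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 + (v1 - v0) * u
```

Event-driven triggers such as "on hover" simply start playback from a given keyframe time rather than from zero.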






Object manipulation, wireframes, and real-time collaboration.
Group selection and wireframes.
Hand detection and passthrough.
Object manipulation and animation keyframing.


I built a custom Blender add-on to integrate with Playbook3D’s AI Render Engine. It captures renders through the scene camera and generates multiple image passes (mask, normal, outline, and depth), which are sent to Playbook’s backend for AI-enhanced rendering. I also implemented an auto-update feature that notifies users of plugin version mismatches and allows one-click upgrades.
Playbook Blender Add-on
Blender | Python | ComfyUI
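The auto-update check reduces to comparing the installed add-on version against the latest published one. A small sketch of that comparison (function name and version format are my own assumptions):

```python
def needs_update(installed, latest):
    """Return True if the installed add-on version is older than the
    latest release. Versions are dotted strings like '1.4.2'."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) < parse(latest)
```

Comparing integer tuples rather than raw strings avoids the classic pitfall where "1.10.0" sorts before "1.9.0" lexicographically.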
This Unity package enables AI-based rendering directly from within the Unity editor. I developed a capture system that records a series of in-game frames and processes them with fullscreen shaders to extract key image passes (mask, normal, outline, depth). The final data is transmitted via WebSocket to Playbook’s backend pipeline for AI-enhanced render generation. I also implemented several core features, such as a customizable segmentation mask pass, frame rate selection, and video clip length selection.
Playbook Unity SDK
Unity | C# | ComfyUI
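The pass-extraction shaders boil down to per-pixel encodings. For a normal pass, for example, each component of a unit normal in [-1, 1] is conventionally remapped into 8-bit RGB. A plain-Python sketch of that remapping (illustrative only; the SDK does this on the GPU in a fullscreen shader):

```python
def encode_normal(n):
    """Remap a unit normal's (x, y, z) components from [-1, 1] into
    8-bit RGB values in [0, 255], the standard normal-pass encoding."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in n)
```

Under this encoding, a surface facing the camera (normal (0, 0, 1)) produces the familiar lavender-blue tint of normal maps.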


An example of the original, outline, depth, and normal passes.




Resulting image.