Industry Role Research Part 3: Understanding the PBR Workflow

As I continued studying 3D asset creation, one of the most consistent standards I encountered—across games, film, VR, and even product rendering—was the PBR workflow. PBR (Physically Based Rendering) is a universal shading standard designed to make materials react to light in a predictable, physically accurate way. The more I researched it in professional pipelines, the clearer it became that PBR isn’t just a “texturing method”—it’s an interconnected process that starts at modeling and ends at lighting.

Below is a breakdown of the PBR pipeline as I now understand it, with each step based on industry practice.

1. Preparing the Model (Before Texturing Even Begins)

The PBR workflow starts earlier than I expected. Before I can even touch textures, the model must be prepared correctly:

  • Clean topology ensures shading behaves correctly.
  • Proper UVs with consistent texel density prevent stretching and artifacts.
  • Correct smoothing groups/normals create smooth or sharp transitions exactly where needed.

I learned that bad modeling decisions will always show up in the final PBR material—PBR is unforgiving that way.

2. High-Poly Sculpt → Low-Poly Retopo (for Game Assets)

In real-time pipelines, PBR relies heavily on transferring surface detail from the sculpt to the low-poly mesh.

Pipeline:

  • Sculpt high-res detail (ZBrush/Blender)
  • Create clean, low-poly retopo
  • Bake all surface information down into texture maps

This is where the foundation of a believable PBR material begins.

3. Baking Maps (Where PBR Detail Begins)

Through research and practice, I realized baking is where PBR gets most of its “micro detail.”
Common maps include:

  • Normal Map – recreates high-poly surface detail on the low-poly mesh
  • Ambient Occlusion – grounding shadow information
  • Curvature Map – helps auto-generate edge wear
  • World/Position Map – useful for procedural masks
  • ID Map – speeds up material assignments
  • Thickness Map – used for subsurface materials

For games, these maps are essential.
For film, they support displacement and shader networks.
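
To make the bake step concrete, here is a minimal sketch of a selected-to-active normal bake scripted through Blender's Python API. The object names "HighPoly" and "LowPoly" and the cage/margin values are assumptions for illustration, and the low-poly material needs an active Image Texture node selected as the bake target:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'                    # baking runs through Cycles
scene.render.bake.use_selected_to_active = True   # high-poly -> low-poly transfer
scene.render.bake.cage_extrusion = 0.05           # small offset to capture detail
scene.render.bake.margin = 16                     # padding to avoid seam artifacts

high = bpy.data.objects["HighPoly"]               # assumed object names
low = bpy.data.objects["LowPoly"]
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low       # low-poly receives the bake

bpy.ops.object.bake(type='NORMAL')                # writes into the active image node
```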

4. Base Material Setup (The Heart of PBR)

When texturing in Substance Painter or Mari, every PBR material is driven by two core values:

Metallic

0 = non-metal
1 = metal
In principle there is no in-between; fractional values appear only where two materials blend, such as along a worn paint edge.

Roughness

Controls how sharp or blurry reflections are.

I learned that these two channels do most of the heavy lifting in PBR.
Color (base color/albedo) only describes true material color—no lighting painted in.
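
To show how little actually drives a PBR base material, here is a minimal Blender Python sketch that sets only these channels on a Principled BSDF; the material name and the values are placeholder assumptions for a brushed-steel look:

```python
import bpy

mat = bpy.data.materials.new("PBR_Steel")         # assumed material name
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]

# Base color describes only the material's true color; no lighting baked in.
bsdf.inputs["Base Color"].default_value = (0.56, 0.57, 0.58, 1.0)
bsdf.inputs["Metallic"].default_value = 1.0       # metal = 1, non-metal = 0
bsdf.inputs["Roughness"].default_value = 0.35     # lower = sharper reflections
```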

5. Building Materials Layer by Layer

The more pipelines I researched, the more I saw the same workflow repeated:

  • Start with a flat, correct base color
  • Add roughness variation (fingerprints, dirt, smudges)
  • Add micro detail using baked maps
  • Add edge wear using curvature maps
  • Add dirt/dust using AO masks
  • Finalize with manual painting where needed

PBR materials feel believable because of roughness variation, not because of noisy textures.
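
To make the curvature-driven edge wear idea concrete, here is a small sketch (NumPy + Pillow) that thresholds a baked curvature map into a rough wear mask, which is essentially what smart masks automate; the file names and the mid-grey 0.5 pivot are assumptions:

```python
import numpy as np
from PIL import Image

# Load the baked curvature map as a 0-1 grayscale array (assumed file name).
curv = np.asarray(Image.open("asset_Curvature.png").convert("L"),
                  dtype=np.float32) / 255.0

# Convex edges bake brighter than the 0.5 mid-grey; remap everything
# above that pivot into a 0-1 edge-wear mask.
wear = np.clip((curv - 0.5) * 4.0, 0.0, 1.0)

Image.fromarray((wear * 255).astype(np.uint8)).save("asset_EdgeWear.png")
```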

6. Exporting PBR Maps (Game or Film)

Once texturing is complete, studios export PBR maps depending on whether the final asset goes into:

Game Engines (Unreal/Unity)

Export:

  • Base Color
  • Metallic
  • Roughness
  • Normal
  • AO
  • Emissive (if needed)

(Some studios pack channels into a single texture to optimize memory.)
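
As an illustration of that packing step, here is a small Pillow sketch that merges grayscale AO, Roughness, and Metallic maps into the R, G, and B channels of a single "ORM"-style texture; the file names are placeholders:

```python
from PIL import Image

# Load each grayscale map (assumed file names).
ao = Image.open("asset_AO.png").convert("L")
rough = Image.open("asset_Roughness.png").convert("L")
metal = Image.open("asset_Metallic.png").convert("L")

# Pack: R = AO, G = Roughness, B = Metallic (one texture instead of three).
orm = Image.merge("RGB", (ao, rough, metal))
orm.save("asset_ORM.png")
```

In the engine, the shader then samples one texture and reads each property from its own channel.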

Film / Offline Rendering (Arnold, RenderMan)

Export:

  • Albedo
  • Roughness
  • Specular / IOR
  • Displacement
  • Normal (if used)
  • Additional masks for look-dev

Offline renderers allow more complexity, but the principle is the same.

7. Look-Dev: Testing Materials Under Real Lighting

This was one of the most important things I learned:
PBR is only “finished” once it’s tested under proper lighting.

In production, look-dev artists check materials using:

  • HDRIs
  • Direct spotlights
  • Backlights
  • Studio lighting rigs

If a material only works in one lighting setup, it’s not ready yet.

This is why so many breakdowns show turntables with multiple lights:
A good PBR material holds up under any lighting setup.
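
As an example of how this testing can be scripted, here is a minimal Blender Python sketch that swaps the world HDRI so the same material can be re-checked under several environments; the HDRI path is a placeholder:

```python
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes

# Feed an HDRI environment texture into the world Background shader.
env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//hdri/studio_small.exr")  # placeholder path
bg = nodes["Background"]
world.node_tree.links.new(env.outputs["Color"], bg.inputs["Color"])
```

Swapping the image on that node (or looping over a folder of HDRIs) gives a quick multi-lighting turntable check.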

8. Integration Into Rendering or Game Engine

Finally, the asset moves downstream:

Games:

Plug maps into the engine’s PBR shader, test under real-time lighting, adjust roughness and metallic values, and optimize.

Film:

Shader artists plug maps into more complex networks, often adding displacement, SSS, or custom layers on top of the PBR base.

By the end of this step, the asset becomes production-ready.

Conclusion of My Research

As I researched PBR workflows across multiple studios and tutorials, I realized that PBR is less about “hitting the right settings” and more about building accurate materials from the ground up—starting with modeling, UVs, bakes, and consistent physical properties.

PBR forces me to think like both an artist and a technician:

  • Artist (shape, color, material identity)
  • Technician (maps, accuracy, lighting behavior)

Understanding this pipeline has made me appreciate how much the texturing and look-dev stages rely on solid modeling and preparation. It’s a system where every step affects the next, and small decisions early on can shape the entire final result.

Some of my Texturing Artworks:

Industry Role Research Part 2: 3D Modeling

3D Modeling in the Industry

As I’ve researched the CG world more deeply, 3D modeling has become one of the clearest and most universal foundations across film, games, and animation. No matter which studio I look at—whether it’s a AAA game team or a feature-film VFX house—the modeling pipeline follows a surprisingly similar structure. What changes is the level of detail, the technical requirements, and how the asset is used downstream. Understanding this pipeline has helped me see exactly where modeling sits in the bigger production ecosystem, and why it’s such a critical position.

How I Understand the Standard 3D Modeling Pipeline

As I studied professional workflows and artist breakdowns, I realized that modeling usually follows these core steps:

1. Concept & Reference Gathering
Everything begins with solid references—silhouette studies, material boards, anatomy charts, even screenshots from films or games. I’ve learned that modelers don’t just “start modeling”; they first build a visual library.

2. High-Poly Modeling / Sculpting
This is where the main forms come to life. Artists sculpt in ZBrush or model in Maya/Blender to nail down the shape, structure, and proportion. From my perspective, this is the most creative stage—pushing forms, experimenting, and defining personality.

3. Retopology (Clean, Industry-Standard Topology)
A beautiful sculpt doesn’t mean it’s usable. Retopo is where the model becomes efficient, clean, and animation-ready. I now understand why studios emphasize:

  • quad-based topology
  • good edge loops for deformation
  • minimal n-gons
  • optimized mesh flow

It’s not just a rule—it determines whether your asset survives the pipeline.

4. UV Unwrapping
UVs used to intimidate me when I encountered heavy polycounts, but the more I researched industry standards, the more I realized it’s all about consistency:

  • even texel density
  • clean UV islands
  • strategic seam placement
  • UDIMs for film, simple tiles for games

Good UVs directly affect texturing and shading later.
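
Since texel density is just pixels per unit of surface area, it can be checked with a tiny formula. Here is a small Python sketch of the standard average-density calculation; the example numbers are illustrative:

```python
import math

def texel_density(texture_px, uv_area, surface_area):
    """Average texel density in pixels per scene unit.

    texture_px   : texture resolution along one edge (e.g. 4096 for 4K)
    uv_area      : total area the UV islands cover in 0-1 UV space
    surface_area : 3D surface area of the mesh in scene units squared
    """
    # Pixels covered = texture_px**2 * uv_area, spread over surface_area.
    return texture_px * math.sqrt(uv_area / surface_area)

# Example: a 4K map, islands covering 60% of UV space, 2 m^2 of surface.
print(texel_density(4096, 0.6, 2.0))  # ~2243 px per meter
```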

5. Baking (Primarily for Games)
Game artists transfer high-poly detail onto low-poly models. Learning about normal maps, AO, curvature, and cage settings showed me how much detail can be preserved without heavy geometry.

6. Texturing & Surfacing
This is where color, material definition, and realism come in. When the model finally enters Substance Painter or Mari, the forms I built earlier get brought to life with roughness breakup, edge wear, and material variation.

7. Look Development (Film/High-End Production)
In film/VFX pipelines, assets go through look-dev to make sure shaders react correctly under studio lighting. This is where modeling connects to shading, displacement, and render engines like Arnold or RenderMan.

8. Integration into the Next Department
At this point, the asset is ready for rigging, animation, lighting, or game-engine import. The cleaner my model is, the smoother this hand-off becomes.

Where I See Myself in This Pipeline

Learning all this has made me appreciate how foundational modeling really is. Modelers are the first people to “build” the world—characters, environments, props, everything. The choices made in the modeling stage ripple forward into rigging, animation, texturing, and lighting.

For me, that blend of artistry and technical precision is exactly what I enjoy. Modeling feels like the perfect balance between creativity and logic, and exploring these industry pipelines has only made me more excited to specialize in 3D modeling, asset creation, and texturing as I move further into the film and game industry.

Industry Role Research Part 1: Overview of CGI in Game and Film Industry

About My Journey

I began my journey as a 3D artist during the COVID era, a time when digital creation became both a refuge and a professional doorway. My first exposure to 3D was through Rhino, a NURBS-based modeling tool fundamentally different from the polygonal pipelines used in film and game production. Starting with precise, mathematically driven surfaces gave me a unique foundation, but as my interests expanded, I gradually moved into broader areas of 3D art. That transition led me to Cinema 4D, where I explored motion graphics and briefly worked within the advertising industry. Through this experience I gained an understanding of fast-paced production environments, procedural workflows, and visual communication for commercial clients. However, as I continued growing, I realised that my long-term passion extended beyond motion graphics. I wanted to work more deeply in the game and film industries, where storytelling, world-building, and complex technical pipelines intersect. This shift prompted me to investigate the wider landscape of CG roles, workflows, and opportunities across both industries.

Industry Roles

The 3D industry spans animation, VFX, games, advertising, and virtual production, but most studios follow a similar end-to-end pipeline that moves from early planning to final output. A typical 3D production pipeline can be understood through the following major stages:

1. Pre-Production — Planning & Visual Direction

  • Concept art and style development
  • Storyboards and animatics
  • Previs and 3D layout (early blocking, cameras, staging)

2. Asset Creation — Building the World

  • 3D modeling (characters, props, environments)
  • UV mapping and texturing / surfacing (PBR materials, maps)
  • Grooming (hair, fur, feathers)
  • Look development / shading (materials and rendering behavior)

3. Character & Technical Setup — Making Assets Functional

  • Rigging and character TD work (skeletons, controllers, deformation)
  • Creature FX / CFX (cloth, fur, muscle simulations)
  • Technical tools and pipeline preparation for animation

4. Animation & Simulation — Bringing Things to Life

  • Character and creature animation
  • FX simulation (fire, smoke, water, magic, destruction)
  • Crowd simulation and behavior systems

5. Lighting, Rendering & Finalization — Creating the Final Look

  • Lighting (mood, clarity, realism)
  • Rendering and optimization (AOVs, passes, farm management)
  • Compositing (final image integration, color, depth, polish)

6. Game-Specific Integration — Real-Time Implementation

  • Shader creation and real-time look-dev
  • Technical art and engine tools
  • Importing assets into Unreal/Unity
  • Performance optimization, LODs, and real-time VFX

Across film and game workflows, these stages form a highly interconnected system where assets move from team to team, growing more refined at each step. Studying these pipelines has helped me understand how many different specialties contribute to a finished production. While I find the entire process fascinating, the areas that resonate most strongly with me are 3D modeling, asset creation, and texturing, where both artistic design and technical craft come together at the foundation of CG production.

Student’s Film

This project, inspired by Animal Farm, focused on building a fictional dystopian environment using a PBR workflow with Substance Painter. I created modular assets like fences, walls, and propaganda boards, applying weathered materials such as rusted metal, chipped concrete, and decaying wood to reflect neglect and oppression. Using smart masks, decals, and custom textures, I layered dirt, rust, and worn edges to enhance realism and storytelling. The scene was assembled with careful composition and lighting, using muted colors and fog to evoke an eerie, authoritarian atmosphere, capturing the breakdown of control and ideals central to the story.

The process for this project was very complex, starting with sculpting a stylized character to match the dystopian theme. After completing the high-detail sculpt, I had to retopologize the model to create a clean, optimized mesh suitable for animation, ensuring proper edge flow and manageable topology for rigging and deformation. This workflow required balancing artistic detail with technical usability, making sure the final asset retained the stylized look while being efficient enough for use in animation and the overall scene.

For rigging, I used Auto-Rig Pro as my solution, which significantly streamlined the process but still required a lot of time to set up and refine for the layers of animation I needed. After animation, I composited the final renders in different passes, separating elements like characters, background, and effects, which gave me more flexibility to tweak colors, lighting, and atmosphere during post-production, ensuring the final look matched the dystopian tone of the project.

Simulated Work Experience Journal Entry

During this simulated work experience, I found myself extremely disappointed, both in the tasks I was assigned and the overall planning of the project. The only task given to me was finding references and gathering a model list, but the list itself made absolutely no sense. Many of the objects on the list were extremely simple — things that could easily be found online for free or modeled from scratch in literally five minutes. There was no logic behind outsourcing such trivial work to team members when the group leader could have handled it independently in less time than it took to type out the request.

This left me feeling like my role was completely insufficient and unnecessary, with no real opportunity to apply my skills or contribute creatively. What made it worse was the fact that the entire assignment was scheduled to last two weeks, yet my actual workload amounted to barely 30 minutes of effort. The mismatch between the timeline and the amount of work was frustrating and illogical, and it felt like a complete failure in project planning and team management.

On top of that, the group leader’s refusal to meet or discuss the project properly only added to the disorganization. Even though I asked to meet in person to clarify the goals and workflow, he insisted on communicating purely through text messages, which made everything slower and less clear. With almost 80% of the work already completed by the group leader himself, there was practically no room left for the rest of the team to contribute anything meaningful.

This experience stood in stark contrast to my time working with the Brown RISD Game Development Club, where communication was smooth, the work was well-distributed, and everyone had a clear role with real creative input. That experience taught me how important collaborative planning and communication are for any successful team project, and this simulation highlighted exactly what happens when those are missing.

Overall, this project felt like a waste of time and a missed opportunity to learn anything useful. It also showed me just how irresponsible and inefficient poor project planning can be, especially when the work isn’t properly matched to the schedule or the skills of the team. I hope future projects will be better structured, with clearer communication and more meaningful work for all team members.

The following images were assigned to me by the team leader as part of my tasks for this two-week project. However, upon reviewing them, it became clear that these are some of the most basic, primitive shapes imaginable — objects so simple that they could either be sourced online for free or modeled from scratch in just a few minutes. Assigning these for a two-week period is completely illogical and unnecessary, showing a lack of consideration for both time management and team members’ skills:

The team leader mentioned that the project would follow a low-poly art direction, and as part of my assigned tasks, I was asked to find reference photos to support that style. Below are some of the references I found and selected, which I dedicated time to developing into a useful collection.

However, this task raised several concerns for me. Researching references is something that should typically be part of the pre-production phase, where the overall visual direction is decided before the actual work starts. Being asked to do this after the project was already underway felt unorganized and unprofessional, especially for a project with such a tight and simple scope.

This kind of disorganized workflow — assigning basic pre-production work mid-project — wastes time and prevents the team from focusing on actual production tasks, where creative and technical contributions are more valuable. It also made it difficult to feel like the project had any clear direction or plan, which contributed to the overall lack of efficiency and clarity throughout the experience.

Exploring the 12 Principles of Animation: A Personal Reflection

As an animator, I’ve found the 12 Principles of Animation to be invaluable in shaping my approach to both the technical and creative aspects of animation. These principles, originally established by Disney animators Ollie Johnston and Frank Thomas in The Illusion of Life: Disney Animation, have influenced the way I think about movement, character development, and storytelling in animation. Here’s a reflection on what each principle has meant to me and how they continue to shape my work.

  1. Squash and Stretch: This principle has been central to my understanding of how to make characters and objects feel more tangible. The exaggeration of movements, such as the stretching of a character during a jump or the squashing of a ball as it hits the ground, adds a sense of weight and volume that makes the animation feel more alive and realistic. This dynamic quality has become a fundamental part of my animation toolkit.
  2. Anticipation: I’ve come to realize how important anticipation is in creating engaging, believable movements. When I began incorporating anticipation into my animations, I noticed that actions became more fluid and natural. Whether it’s a character preparing to run or an object getting ready to fall, building up to an action helps the audience understand what’s coming next, creating a smoother experience and heightening emotional engagement.
  3. Staging: Staging has taught me the importance of clarity in communication. By carefully planning the composition and positioning of characters within a scene, I can ensure that the focus remains on the key actions or emotions I want to highlight. It’s not just about where things are placed, but about directing the viewer’s attention to what’s most important.
  4. Straight-Ahead Action and Pose-to-Pose: I’ve experimented with both approaches to animation and learned to use them based on the needs of the scene. Straight-ahead action allows for more fluid and dynamic movement, while pose-to-pose provides better control over the overall structure and timing of the animation. By combining both techniques, I’ve been able to strike a balance between spontaneity and precision.
  5. Follow Through and Overlapping Action: Implementing follow-through and overlapping action has made my animations feel more realistic. I now pay close attention to how different parts of a character’s body continue moving after the primary action has stopped. This subtle detail adds weight and fluidity to movements and enhances the overall believability of the animation.
  6. Slow In and Slow Out: This principle has been crucial in helping me understand how to convey weight and fluidity in motion. By easing into and out of actions, rather than having them start and stop abruptly, I’ve learned to create more lifelike and natural movements. It’s these small touches that make the animation feel grounded and believable.
  7. Arc: Realizing that most natural movements follow an arc has greatly improved my animations. Whether it’s the swing of a character’s arm or the path of a bouncing ball, animating along curved trajectories adds a fluidity and organic feel to movements that makes them much more engaging.
  8. Secondary Action: Adding secondary actions has helped bring my animations to life by making them feel more layered and nuanced. For example, when a character takes a step, adding small movements like the swaying of their clothes or a shift in their posture brings the character’s actions into sharper focus, creating a more complete and immersive experience.
  9. Timing: Timing is the backbone of effective animation, and I’ve learned how it influences the perception of speed, weight, and emotion. Whether it’s a quick, snappy action or a slow, deliberate movement, precise timing helps me convey the right emotional tone and make the animation feel more natural.
  10. Exaggeration: I’ve found that exaggeration is key to making animations more compelling. By pushing the limits of movement or expression, I can create more engaging, visually striking animations that capture the audience’s attention. This principle has taught me that animation is about exaggerating reality to make it more appealing and expressive.
  11. Solid Drawing: Understanding the fundamentals of drawing has improved the way I approach character design and animation. Solid drawing is about ensuring that characters have a clear volume and structure, making them feel three-dimensional and believable. It’s a principle that has pushed me to refine my skills and focus on creating more convincing characters and environments.
  12. Appeal: Finally, appeal has become one of the most important aspects of my work. Whether it’s a character’s design, personality, or how they interact with their environment, creating something that resonates with the audience is essential. By focusing on creating engaging, likeable characters with depth and personality, I can forge stronger emotional connections with my viewers.

Looking back, these 12 principles have shaped not just my technical skills but also my creative process. They serve as a constant reminder to focus on the details that make an animation feel alive and engaging. As I continue to explore and experiment with these principles, I am reminded that animation is a craft that requires both creativity and precision. These principles provide the foundation for that balance, allowing me to push my work to new heights.

Short Lip Sync Film Rebooted

“Rebooted” is a short film that explores the intersection of technology, identity, and transformation. Set in a world where machines constantly evolve through software updates, the story follows a once-cutting-edge robot who wakes up after an unexpected reboot, only to find itself outdated and out of place in a rapidly advancing environment.

Blending visual storytelling with a touch of humor and heart, “Rebooted” reflects on what it means to adapt — or fail to — in a world that never stops upgrading. With stunning visuals and a layered narrative, the film invites the audience to question whether progress always equals improvement and whether identity can survive constant reinvention.

My Workflow for “Rebooted”

The creation of “Rebooted” involved a mix of 3D modeling, animation, and compositing techniques, blending both technical precision and creative storytelling.

The process began with concept development — defining the story beats, visual style, and overall tone. Once the core narrative was set, I moved into asset creation, designing the robot character and environment, ensuring every element fit the film’s slightly glitchy, futuristic aesthetic.

For animation, I focused on expressive movement to bring personality to the robot, even with its mechanical design. Lip-sync and facial animations were especially important to convey subtle emotions during the robot’s reboot sequence.

In After Effects, I handled compositing and post-production, fine-tuning colors, adjusting contrast, adding glitch effects, and enhancing the overall atmosphere with sound design and subtle VFX to emphasize the reboot process.

Throughout the workflow, I maintained a balance between technical accuracy and creative flexibility, allowing the film to evolve naturally while staying true to the original vision.

Basic Parent Switch Driver in Blender

In this blog, I’ll walk you through how I set up a basic parent switch in Blender. This switch allows me to change the parent of a bone between three different options (Parent Bone 0, 1, and 2), giving me more control over how objects or bones behave in relation to each other during animation.

The Setup:

The key idea is to use a driver that controls the parent relationship based on a custom property. By using a simple conditional script (1 if Var == 1 else 0), I can control which bone acts as the parent.

How I Did It:

  1. Create a Custom Property:
    • I started by creating a custom property on the object or bone I wanted to control. This property will be used to switch between the different parent options (Parent Bone 0, 1, or 2).
  2. Set Up Drivers:
    • For each parent constraint, I added a driver that controls its influence. This driver checks the value of the custom property and determines if the bone is active as a parent or not.
  3. The Conditional Script:
    • The logic I used for the driver is straightforward. I wrote a simple expression like 1 if Var == 1 else 0. This expression checks if the custom property (Var) is equal to a certain value (like 1 for Parent Bone 1), and if it is, the influence is set to 1 (active). If not, it’s set to 0 (inactive). This way, I can easily switch between different parent bones by simply changing the value of the custom property.
  4. Switching Between Parents:
    • Once the drivers are set up, I can quickly switch between parent bones by adjusting the value of the custom property in the sidebar. If I set the value to 0, Parent Bone 0 becomes the active parent. Setting it to 1 switches to Parent Bone 1, and setting it to 2 activates Parent Bone 2.
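
Putting it all together, here is a minimal Blender Python sketch of that driver setup. The armature, bone, and property names are assumptions, and it expects the bone to already carry one constraint per parent candidate:

```python
import bpy

arm = bpy.data.objects["Armature"]                # assumed rig name
bone = arm.pose.bones["hand_ctrl"]                # assumed control bone
bone["parent_switch"] = 0                         # custom property: 0, 1, or 2

# One constraint per parent candidate; each influence gets a driver.
for i, con in enumerate(bone.constraints):
    drv = con.driver_add("influence").driver
    drv.type = 'SCRIPTED'
    var = drv.variables.new()
    var.name = "Var"
    var.targets[0].id = arm
    var.targets[0].data_path = 'pose.bones["hand_ctrl"]["parent_switch"]'
    drv.expression = f"1 if Var == {i} else 0"    # only one parent active at a time
```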

Why It’s Useful:

This method is extremely useful in scenarios where you need to dynamically switch parent relationships during an animation. Instead of manually adjusting parenting or creating complex constraints, you can use this simple switch to quickly toggle between different parent bones. It’s also flexible enough to be expanded to more than three options if needed.

This was a fun and simple way to add more control to my rigging setup, and I found it streamlined my workflow, especially in complex animations with multiple parent dependencies.

Creating an IK-FK Switch: The Basics

In this blog, I’ll walk you through the basics of creating an IK-FK switch for a character rig, something that makes switching between Inverse Kinematics (IK) and Forward Kinematics (FK) much smoother. It may sound complicated, but with a little bit of practice, it becomes a straightforward process.

The key to setting up the IK-FK switch is adding two constraints—one for IK and one for FK—on the deform bone. These constraints are Copy Location constraints, with the second constraint overriding the first. The trick is to manage their influence using a driver, which allows you to blend between IK and FK easily.

Here’s how I did it:

  1. Two Copy Location Constraints: Add two Copy Location constraints to the deform bone. One targets the IK control, and the other targets the FK control. The second constraint should overwrite the influence of the first, allowing you to switch between the two.
  2. Driver Setup: The next step is to create a driver. The driver will control the influence of the last constraint, which determines whether you’re in IK or FK mode. You can create the driver on a control bone or a custom property, depending on your setup.
  3. Copy the Driver: Once you have the driver set up, copy it to the influence of the second (overwriting) constraint. This gives you control over the switching between IK and FK. By animating the driver, you can seamlessly blend between the two systems.
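
Here is a minimal Blender Python sketch of that driver step; the armature, bone, and constraint names are placeholder assumptions:

```python
import bpy

arm = bpy.data.objects["Armature"]                  # assumed rig name
deform = arm.pose.bones["upperarm_def"]             # assumed deform bone
deform["ik_fk"] = 0.0                               # custom property: 0 = IK, 1 = FK

# Drive the influence of the second (overriding) Copy Location constraint.
fk_con = deform.constraints["Copy Location FK"]     # assumed constraint name
drv = fk_con.driver_add("influence").driver
drv.type = 'SCRIPTED'
var = drv.variables.new()
var.name = "Var"
var.targets[0].id = arm
var.targets[0].data_path = 'pose.bones["upperarm_def"]["ik_fk"]'
drv.expression = "Var"                              # blends smoothly from IK to FK
```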

The great part about this setup is that it gives you flexibility and control over how your character interacts with the environment. I found this method useful for smooth transitions in my animations, without having to manually adjust between IK and FK.

If you’re new to creating IK-FK switches, don’t worry—it’s a bit of trial and error at first, but once you get the hang of it, it becomes a powerful tool in your rigging process.