Nanite in Blender?

Why I think that would be a bad idea

Yeah, you read that subtitle right. Nanite should not be in Blender.

Strap yourselves in.

I see feedback posts on devtalk and RightClickSelect saying the Blender developers should add Nanite or a similar mesh optimization to Blender and apparently “solve all its problems.”

Well, I'm here to say: not only should they not, it's also unnecessary.

So let's make like Squints from The Sandlot, and dive in.

What is Nanite?

Firstly, a quick summary of Nanite and how it works. Let me keep it high-level.

Nanite is an automated level-of-detail (LOD) manager. It eliminates level-of-detail "pops" you see in video games.

Upon importing a mesh, Nanite breaks it into chunks and organizes them into a hierarchy for its LOD management. At runtime, these chunks are selectively displayed. Need more detail? Select more, smaller chunks. Mesh gets smaller onscreen? Select fewer, coarser chunks.

This lets Nanite preserve an ideal density of geometry detail per pixel. Cool, right?
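To make that chunk selection concrete, here's a toy sketch in Python. The hierarchy, names, and error metric are my own illustration of the general idea, not Epic's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    triangles: int                  # triangle count in this chunk
    error: float                    # geometric error if we stop here (world units)
    children: list = field(default_factory=list)  # finer, more detailed sub-chunks

def select_clusters(cluster, pixels_per_unit, max_error_px=1.0):
    """Pick the coarsest chunks whose on-screen error stays around a pixel."""
    if cluster.error * pixels_per_unit <= max_error_px or not cluster.children:
        return [cluster]            # this chunk is good enough (or it's a leaf)
    selected = []                   # otherwise, descend into the detailed chunks
    for child in cluster.children:
        selected.extend(select_clusters(child, pixels_per_unit, max_error_px))
    return selected
```

Far away (few pixels per world unit), the coarse root chunk suffices; up close, the traversal descends to the detailed chunks, keeping triangle density roughly constant per pixel.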

It's ideal on photoscans, or any high-detail mesh you won't be editing. That becomes important later.

It should be no surprise that Nanite adds performance overhead. But it scales far better as more geometry is shown onscreen. A lot of optimization relies on this trade: spend some time upfront, save more time later as the input gets bigger.

I'd highly recommend watching the official Nanite presentation. I'm not even asking you to understand it all; I certainly don't. Just see how massive an undertaking Nanite is.

Before I get into my opinions on implementing it in Blender, I want to first say that Nanite is far from perfect. Its biggest optimization is not just render times on huge scenes, but also development time; it isn’t only about the gamer. It is less performant than more direct optimizations, and often needs to be combined with other optimizations.

But let's put those aside and just say there’s a "Blender Nanite" implemented in a similar way. What’s stopping us?

Not a Simple Code Change

This is not just changing a few lines of code. Nanite is not implemented in just one place in Unreal Engine. It handles all aspects of rendering meshes: culling, instancing, even file storage (you don’t want to store a billion-triangle mesh in plain text, do you?).

A “Blender Nanite” would require changing the entire rendering architecture so optimizations can flow throughout the 3D pipeline. And it would have to be supported by both Eevee and Cycles, not to mention third-party engines like Octane.

Slower Editing

But let's say developers finally take on coding "Blender Nanite," and you get to test it.

The viewport is much faster now. But this optimization is short-lived; it ends the moment you edit any mesh.

If you go into edit mode for, say, a 10-million-triangle mesh, Blender has no choice but to make all those triangles available for you to edit. How can it know which polygons you want to edit?

Blender screenshot of a 990k triangle mesh

This isn’t even a million triangles, and I preemptively get a headache.

Nearly all optimizations are a trade-off between memory usage and execution speed. In this case, Nanite spends memory (building a Nanite-compatible version of the mesh) to buy render speed. But that investment is wasted if the user keeps changing the mesh, repeatedly building and wiping that memory.

So why can Unreal Engine take advantage of this? Because in Unreal, you rarely edit mesh files. You either edit them in separate software (like Blender) and re-import, or they're photoscans you never edit anyway.

Meshes are (for the most part) locked within Unreal Engine. The engine can then do whatever it wants with them, optimizations and all. Unreal Engine's goal for quality is overshadowed by its need for performance. It is primarily a game engine, after all (for now).

That is not Blender’s goal. Direct interactivity is important. Blender expects you to be able to change mesh data. So Blender optimizing the mesh for you becomes wasted effort when you keep changing it.

According to one post I found, Nanite takes 15 seconds to process a 1-million-triangle mesh on decent hardware. Imagine pausing 10-15 seconds every time you leave mesh edit mode, or sculpt mode.

Some of you don't have to imagine, sadly. I really hope you get a new computer soon.

Hats off to you. (From Pirates of the Caribbean: The Curse of the Black Pearl)

In summary, Blender relies on users being able to change a scene whenever and however they see fit. And more importantly, Blender will only display the scene as you told it to.

More on the “told it to” part later.

Nanite for Renders

But what about Nanite for renders only?

Now, this is something I actually agree with (partially). Automatically reducing (or adding) subpixel geometry? That can make a big difference. But a few points I need to share:

You Lose Detail

"Spencer, it's subpixel, the detail won't show up in the render."

Offline render engines often sample the same pixel multiple times, each sample slightly shifted in offset and direction, for cleaner renders. While often very subtle, those subpixel details can add up to more nuanced pixel-scale detail, including clearer edges.

Example with Blender’s microdisplacement. Left is the RGB difference between 1.0 pixel dicing rate and 0.5 dicing rate, right is the subdivided cube (0.5 dicing rate) with noise. While subtle, it adds up along edges and crevices.
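To illustrate the multi-sample point with a made-up one-dimensional “scene” (not any particular engine's sampler): a single sample at the pixel center can miss a thin subpixel feature entirely, while jittered samples average it into the final pixel value.

```python
import random

def shade(x):
    # Hypothetical scene: a thin bright stripe covering 30% of the pixel,
    # offset away from the pixel center -- classic subpixel detail.
    return 1.0 if 0.6 <= x < 0.9 else 0.0

def render_pixel(samples):
    if samples == 1:
        return shade(0.5)  # one sample at the pixel center: misses the stripe
    # Jittered sampling: average randomly offset samples across the pixel.
    rng = random.Random(0)  # seeded so the example is repeatable
    return sum(shade(rng.random()) for _ in range(samples)) / samples
```

With one centered sample the stripe contributes nothing to the pixel; with a few hundred jittered samples, the pixel converges toward the stripe's 30% coverage.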

Stating the obvious, reducing detail reduces detail. Sometimes it’s not a big difference, sometimes you don’t care. But sometimes you do. It's a balance, and a conscious choice.

Separate Render Pipeline

Many of the performance slowdowns seen in Nanite examples online happen because developers are running both Nanite and Unreal’s default render pipeline.

Running two render pipelines is certainly slower than one.

So for best performance, either run everything on Nanite or nothing. But what about third-party engine compatibility? Or low-poly meshes that run far faster without Nanite?

Do you want to run two rendering pipelines, when your system barely handles one?

Sorry, too far. But you get the point.

You Can … Already

Yep, this already exists in Blender. Well, kind of, in a few ways:

You can fake level-of-detail with modifiers and geometry nodes.

If it's geometry via subdivisions, you can manage it with separate viewport and render levels, or with adaptive subdivision. The latter has been experimental in Cycles for over a decade, but will finally be available in 4.5. In the right circumstances, this can be a big performance boost.
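As a quick sketch of the modifier approach using Blender's Python API (this only runs inside Blender, and the object name "Statue" is my own placeholder):

```python
import bpy

# Assumes an object named "Statue" exists in the current scene.
obj = bpy.data.objects["Statue"]

# Subdivision Surface: keep the viewport light, render at full detail.
mod = obj.modifiers.new(name="Subdivision", type='SUBSURF')
mod.levels = 1         # coarse subdivision in the viewport
mod.render_levels = 3  # full detail only at render time
```

The same show-less-in-the-viewport idea works with any modifier via its `show_viewport` and `show_render` toggles.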

“So Blender is just Perfect Then?”

No, I'm not saying Blender is perfectly optimized. But if there's one thing open source developers love coding, it's optimization. If you have been around for a while, you know Blender has come a long way in performance, even just in the past few years.

One of the latest changes, Vulkan support, makes a huge difference on its own.

But if you have a slow scene, I firmly believe the last thing you should do is just give up and blame Blender.

I want to give you something you can do about it.

Record and Report

Suggest new features on Right Click Select. If it's a bug in an existing feature, or something got slower than it used to be, report it on Blender's issue tracker.

Those are the places developers look for issues and ideas. Not YouTube videos. Not social media rants.

Optimization by Users

Now, everyone wishes a new feature would just magically optimize their scenes. I get it. Optimization is tedious and painful. And working on an old, slow computer is even worse.

My first big render, over a decade ago now, rendered overnight on my graphics card, with 256 MB of memory. Not gigabytes, megabytes. On my current PC, it's only a couple minutes.

For an old computer, that's a painful difference.

Now I’ve made optimization-related add-ons like nView for Blender. As a result, I have also helped customers troubleshoot many, many unoptimized scenes.

With this experience, I have come to a realization: the best optimizations are ones you define yourself. I believe optimization must be a choice.

Yes, there are automatic solutions already in Blender, but they are always for specific use cases. Maybe yours fits. Did you know:

  • Eevee has per-material backface culling for camera, shadow, and light probe volume. Memory usage for shadows and light probes can also be adjusted.

  • Cycles has frustum and distance culling in the simplify panel. This mostly affects indirect lighting and global illumination. You can also manage tile size or persist mesh data to save time recalculating.

  • Workbench has backface culling as well. But if you're having problems with just Workbench, you might need a new computer.
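Most of the settings above live a few properties deep in Blender's Python API. A sketch (runs only inside Blender; the object and material names are my own placeholders, and exact property paths can shift between versions):

```python
import bpy

scene = bpy.context.scene

# Cycles culling lives under the Simplify panel.
scene.render.use_simplify = True
scene.cycles.use_camera_cull = True
scene.cycles.camera_cull_margin = 0.1   # frustum margin around the camera
scene.cycles.use_distance_cull = True
scene.cycles.distance_cull_margin = 50  # cull objects farther than this

# Culling is also opt-in per object.
bpy.data.objects["Background"].cycles.use_camera_cull = True

# Eevee's per-material backface culling.
bpy.data.materials["Foliage"].use_backface_culling = True
```
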

Not every use case requires these optimizations. Some users may not want them. That's why they are, and must be, optional. Users must opt in manually and know they're being used. And most importantly: understand the pros and cons, the “why,” and the use case.

That's a problem I see with Blender content online. Countless Blender optimization tutorials. Few adequate, well-rounded explanations of settings. Most say, "Press this button," without telling you there are times you shouldn't press it.

I've watched videos straight-up contradict the Blender user manual (please read it. It is invaluable in understanding how Blender works).

Why is good info hard to find (or understand)?

I don’t believe it's inherently the content creator's fault. It’s partially a lack of documentation on Blender’s side (which is improving), but that's not all of it.

Most Blender users are not professionals in studios working with multiple 3D softwares. They are beginners. Beginners to 3D graphics, with low-end computers, who just want to create some cool stuff.

And Blender is the cheapest, most accessible way to do that. Open source makes the barrier to entry very, very low. That's a good thing.

But it's a scary-large audience to cater to.

Content creators naturally focus on their biggest audience: those beginners. And to keep retention and engagement, creators cut time and only share which buttons to press, not all the reasoning and nuance behind them.

But Blender isn't trying to just appeal to beginners. It wants to appeal to studios and professional artists, with decades of experience in more commonplace 3D applications like Maya and Houdini. And so Blender adopts existing 3D workflows, standards, and user experiences to streamline that adoption process.

These workflows are rarely intuitive to beginners, even though they are intuitive for professionals.

(and yes, I am implying that “intuitive” is subjective and varies from person to person)

But you are making art and stories possible with Blender. And you can only do that if you understand the software, manage your scenes, and optimize them to suit your needs.

That is the missing piece of the puzzle: optimization and scene management. Utilizing proxies, object visibility, view layers. The list goes on.

Of course having the hardware for a real-time viewport and a what-you-see-is-what-you-get experience is ideal. But that's just not possible for most artists.

Studios rarely do that. They are not buying RTX 5090s for every single artist's computer and rendering scenes at full resolution in real time. No, they optimize their files so artists work with responsive viewports, and leave the slow stuff for the render farm.

That is why I did my Blender optimization series (did, doing? I’ll add more as I go). Not quick tips. Not "just check this box." Not "it can only be done with this expensive add-on I'm an affiliate for."

Like I said, I have optimization add-ons, some free and some for sale. I’m even coding a new optimization add-on in collaboration with another add-on product owner.

But I want to make sure you know how to optimize by yourself first, by educating you on how to change your workflow and mindset so you can tell your stories more effectively.

Instead of waiting for your computer to update the viewport. Let alone your render.