How to construct a subset scene from existing scene objects? #1328

Open

ebhrz opened this issue Oct 4, 2024 · 9 comments
@ebhrz

ebhrz commented Oct 4, 2024

I have a very big scene containing thousands of shapes, and I need to do ray tracing on it. I found that the ray tracing speed seems to depend on the size of the scene, so I want to write an octree to coarsely query the surrounding shapes. But once I have gathered all the shapes, how can I use them to construct a new scene? I can't find any API to create a scene from existing objects, only load_file, load_dict and load_string.

@DoeringChristian

Hi @ebhrz,

One option would be to construct a dictionary from all the objects in your scene, i.e. shapes, emitters, sensors and integrators.
The code for that could look something like this:

def to_dict(scene: mi.Scene):
    assert isinstance(scene, mi.Scene)

    children = [
        *scene.shapes(),
        *scene.emitters(),
        *scene.sensors(),
        scene.integrator(),
    ]
    return {
        "type": "scene",
        **{child.id(): child for child in children},
    }

Note that this will not reconstruct the exact dict from which the scene was originally created, but calling scene = mi.load_dict(to_dict(scene)) will produce an equivalent scene.
You could then filter the dictionary according to which shapes you want to include or exclude.
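For example, keeping only the shapes returned by a coarse spatial query could look like this (a minimal sketch; subset_scene and nearby_shapes are hypothetical names for illustration, not Mitsuba API):

def subset_scene(scene: mi.Scene, nearby_shapes):
    d = to_dict(scene)
    keep_ids = {s.id() for s in nearby_shapes}
    filtered = {
        k: v
        for k, v in d.items()
        if k == "type"                   # keep the "type": "scene" entry
        or not isinstance(v, mi.Shape)   # keep emitters, sensors, integrator
        or k in keep_ids                 # keep only the selected shapes
    }
    return mi.load_dict(filtered)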

@ziyi-zhang
Contributor

Hi @ebhrz,

(1) AFAIK, we cannot dynamically modify an already-initialized scene.
(2) ShapeGroup is useful if we want to add some hierarchy to the scene.
(3) Manual intervention in the ray tracing logic should not be necessary.
Mitsuba leverages OptiX or Embree to perform the ray tracing, depending on whether we are running on the GPU or the CPU. Alternatively, Mitsuba's built-in shape kd-tree can be used for ray tracing, though it is unfortunately slower than modern industrial RT engines that build BVHs.
An octree is likely to be significantly slower than a kd-tree in most scenes.
(4) Before focusing on ray tracing optimization, we should first verify that the bottleneck isn't JIT compilation.
This depends on the shape primitive type (mesh, shape, curve, ...) and what operations you are performing. One potential issue is that we compile a kernel for each shape -- if you have thousands of shapes, this will take far longer than actually running those kernels (see the sketch below for a quick way to check).
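A quick way to check this is to raise the Dr.Jit log level and watch how much time is spent compiling kernels versus executing them. A minimal sketch, assuming a CUDA variant and a hypothetical scene file scene.xml:

import drjit as dr
import mitsuba as mi

mi.set_variant("cuda_ad_rgb")  # or "llvm_ad_rgb" on the CPU

# Print a message whenever a kernel is compiled or launched.
dr.set_log_level(dr.LogLevel.Info)

scene = mi.load_file("scene.xml")
ray = mi.Ray3f(mi.Point3f(0, 0, -5), mi.Vector3f(0, 0, 1))
si = scene.ray_intersect(ray)
dr.eval(si.t)  # force kernel compilation and execution now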

@ebhrz
Author

ebhrz commented Oct 4, 2024

@DoeringChristian Thanks so much for your solution! I think it will work well; let me have a try.
@ziyi-zhang Thanks Ziyi. I keep running into the "jit_optix_compile(): optixPipelineCreate() failed" error, which I think is caused by the recompilation in each epoch. It usually occurs after four to five hundred epochs, and not at any specific epoch, so I can't easily debug it. That's why I want to reduce the number of shapes in the scene.

@ebhrz
Author

ebhrz commented Oct 5, 2024

@ziyi-zhang Hi Ziyi, sorry to bother you again. I've resolved the previous optixPipelineCreate failure: it was an out-of-memory error caused by TensorFlow consuming a substantial amount of GPU memory. Additionally, I suspect there might be a memory leak, as memory usage increases progressively during program execution; I'm uncertain whether it comes from Mitsuba or TensorFlow. To address this, I've enabled TensorFlow's memory growth option so that Mitsuba has sufficient memory available.
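For reference, the memory-growth setting mentioned above looks something like this (a sketch using TensorFlow's standard configuration API):

import tensorflow as tf

# Allocate GPU memory on demand instead of reserving (almost) all of it
# up front, leaving room for Mitsuba/Dr.Jit on the same device.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)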

However, I've encountered another issue:

Critical Dr.Jit compiler failure: cuda_check(): API error 0700 (CUDA_ERROR_ILLEGAL_ADDRESS): "an illegal memory access was encountered" in /project/ext/drjit-core/src/init.cpp:454.

This error does not consistently reproduce; it only appears after the program has run numerous epochs, and running a specific epoch in isolation does not trigger it. Could you provide any insights or suggestions on how to tackle this problem?

Thank you for your assistance.

@ebhrz
Author

ebhrz commented Oct 5, 2024

I use it in this way:

for depth in range(max_depth):
    si = scene.ray_intersect(rays)
    active &= si.is_valid()
    # Record which primitives were hit
    shape_i = dr.gather(mi.Int32, shape_indices,
                        dr.reinterpret_array_v(mi.UInt32, si.shape),
                        active)
    offsets = dr.gather(mi.Int32, prim_offsets, shape_i, active)
    prims_i = dr.select(active, offsets + si.prim_index, -1)
    candidates.append(prims_i)
    # Record the hit point
    hit_p = rays.o + si.t * rays.d
    t_all[depth] = si.t.numpy()
    hit_points.append(hit_p.numpy())
    clos = ~active.numpy()
    los[depth] = clos
    # Prepare the next interaction, assuming purely specular reflection
    rays = si.spawn_ray(si.to_world(mi.reflect(si.wi)))
    direct[depth + 1] = rays.d.numpy()

@merlinND
Member

merlinND commented Oct 7, 2024

One potential issue is that we are compiling some kernel for each shape -- if you have thousands of shapes, this will take way longer than actually running these kernels.

A quick tip about this particular problem: if your application allows, enclosing your shapes in a merge shape can significantly speed up tracing, assuming that most shapes use a few BSDF instances (and not a different unique BSDF for each shape).

@ebhrz
Author

ebhrz commented Oct 8, 2024

One potential issue is that we are compiling some kernel for each shape -- if you have thousands of shapes, this will take way longer than actually running these kernels.

A quick tip about this particular problem: if your application allows, enclosing your shapes in a merge shape can significantly speed up tracing, assuming that most shapes use a few BSDF instances (and not a different unique BSDF for each shape).

Thanks @merlinND, yes, I won't be using BSDFs. In fact I'm not doing rendering, just the ray tracing. Also, sorry for the naive question, but how can I enclose these shapes? I can't find the related API in the documentation. Currently I use the solution from Christian.

@ebhrz
Author

ebhrz commented Oct 8, 2024

Hi @ziyi-zhang, I've solved the issue by adding a dr.eval() call. Thanks for your kind assistance.
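For anyone hitting the same problem: the fix presumably amounts to forcing evaluation of the loop state at the end of each iteration, so that Dr.Jit does not keep recording an ever-growing trace across epochs. A minimal sketch, reusing the variable names from the loop above:

for depth in range(max_depth):
    si = scene.ray_intersect(rays)
    active &= si.is_valid()
    rays = si.spawn_ray(si.to_world(mi.reflect(si.wi)))
    # Evaluate the traced variables now instead of deferring everything,
    # which keeps kernel sizes and memory usage bounded.
    dr.eval(rays, active, si.t)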

@merlinND
Member

merlinND commented Oct 8, 2024

@ebhrz The idea is simply to have the shapes in your scene nested inside of a merge-typed shape:

<shape type="merge">
    <shape type="obj" .../>
    ...
</shape>

Or with @DoeringChristian's solution, something like:

def to_dict(scene: mi.Scene):
    assert isinstance(scene, mi.Scene)

    children = [
        *scene.emitters(),
        *scene.sensors(),
        scene.integrator(),
    ]
    return {
        "type": "scene",
        "merged_shape": {
            "type": "merge",
            **{child.id(): child for child in scene.shapes()},
        },
        **{child.id(): child for child in children},
    }
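Loading the resulting dictionary then produces the merged scene:

merged_scene = mi.load_dict(to_dict(scene))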
