
Slow performance on python wrapper #275

@brupelo

Description


Let's start by considering this snippet:

import time
from pathlib import Path

import tinyobjloader


def load_obj(filename):
    path = Path(filename).resolve()
    reader = tinyobjloader.ObjReader()
    if not reader.ParseFromFile(str(path)):
        raise RuntimeError(f"Problem loading '{path}'")
    attrib = reader.GetAttrib()
    # time only the per-index traversal below (parsing is excluded)
    start = time.time()
    num_vertices = 0
    num_normals = 0
    num_texcoords = 0

    for shape in reader.GetShapes():
        # note: this adds the *total* attrib sizes once per shape, so the
        # counters overcount when the file contains multiple shapes
        num_vertices += len(attrib.vertices)
        num_normals += len(attrib.normals)
        num_texcoords += len(attrib.texcoords)

        # touch every indexed element; the values are discarded, this just
        # measures the per-element access cost through the binding
        for f in shape.mesh.indices:
            attrib.vertices[f.vertex_index * 3 + 0]
            attrib.vertices[f.vertex_index * 3 + 1]
            attrib.vertices[f.vertex_index * 3 + 2]
            attrib.normals[f.normal_index * 3 + 0]
            attrib.normals[f.normal_index * 3 + 1]
            attrib.normals[f.normal_index * 3 + 2]
            attrib.texcoords[f.texcoord_index * 2 + 0]
            attrib.texcoords[f.texcoord_index * 2 + 1]

    print(
        f"num_vertices={num_vertices} num_normals={num_normals} num_texcoords={num_texcoords}"
    )
    print(time.time() - start)


if __name__ == "__main__":
    load_obj(r"mcve.obj")

Here's the link to both mcve.obj and mcve.mtl. This object is a very small and trivial scene that looks like this:

[screenshot: showcase]

The problem: if you run that snippet, you'll get this output:

num_vertices=36150 num_normals=38400 num_texcoords=27950
13.565775871276855

That's really slow and unusable for such a trivial scene. I wanted to try the San Miguel scene from https://casual-effects.com/data/ but after seeing these poor results on such a trivial scene, I'm not even considering it until I figure out how to proceed here.

So the main question would be: how can we use tinyobjloader effectively from Python so we can test real-world scenes?

Thanks in advance.

P.S. I've already allocated a huge chunk of memory for interleaved data (vertices + normals + texcoords), so I'm not sure how difficult it would be to provide a wrapper that does a fast memcpy or similar. The main problem seems to be the per-element overhead of crossing the C++/Python boundary.
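For what it's worth, one way to cut that per-element overhead is to copy the flat buffers into NumPy arrays once and gather with fancy indexing, instead of touching each element through the binding in a Python loop. A minimal sketch with synthetic data (the array shapes mimic `attrib.vertices` and `f.vertex_index`; no tinyobjloader dependency, and the sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the tinyobjloader buffers: attrib.vertices is a
# flat [x, y, z, x, y, z, ...] sequence and each face index selects a triple.
vertices = rng.random(36150)                       # 12050 xyz triples
vertex_indices = rng.integers(0, 12050, size=27000)

# Reshape the flat buffer once, then gather every indexed vertex in a
# single vectorized operation instead of a per-element Python loop.
xyz = np.asarray(vertices).reshape(-1, 3)
gathered = xyz[vertex_indices]                     # shape: (27000, 3)

print(gathered.shape)
```

Interleaving vertices + normals + texcoords into one buffer then becomes a single `np.hstack` of the gathered arrays rather than element-by-element copies. The caveat: this assumes the binding's list-like attributes convert cheaply via `np.asarray`, which may itself be the dominant cost unless the wrapper exposes the underlying buffers directly.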
