No, there is nothing better, at least not without additional constraints, such as a bounded range of values that the structure can hold.
But since time complexity analysis concerns asymptotic behaviour, while in practice we work with realistic amounts of data and memory, you can still aim to reduce the actual running times.
A Python list performs really well when its size stays in the order of 10³, so you could go for a collection of such lists, organised like a B+ tree that is shallow in depth but relatively wide in its block size. Imagine a B+ tree with two levels, where each node at the bottom level holds an ordered list of about 10³ values. If the root node has about 10³ children, you get a capacity of 10⁶ values with a two-step lookup: first a binary search in the root node, then an insertion cost equivalent to inserting with bisect into a Python list of just 10³ elements. Depending on what your data structure is actually expected to face, you can choose an optimal "load" (the average size of the lists involved).
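Here is a minimal sketch of that two-level idea (class and attribute names are just illustrative, and the split threshold is a simplification rather than a tuned policy):

```python
from bisect import bisect_left, insort

LOAD = 1000  # target inner-list size; tune to your workload


class TwoLevelSortedList:
    """Illustrative two-level structure: a short routing list over small sorted lists."""

    def __init__(self):
        self._lists = [[]]   # the small sorted lists ("leaves")
        self._maxes = []     # last element of each leaf, used for routing

    def _locate(self, value):
        # Binary search the routing list to find the leaf that should hold `value`.
        if not self._maxes:
            return 0
        i = bisect_left(self._maxes, value)
        return min(i, len(self._lists) - 1)

    def add(self, value):
        i = self._locate(value)
        insort(self._lists[i], value)            # shifts at most ~LOAD elements
        if i < len(self._maxes):
            self._maxes[i] = self._lists[i][-1]
        else:
            self._maxes.append(self._lists[i][-1])
        if len(self._lists[i]) > 2 * LOAD:       # split an oversized leaf in two
            right = self._lists[i][LOAD:]
            del self._lists[i][LOAD:]
            self._maxes[i] = self._lists[i][-1]
            self._lists.insert(i + 1, right)
            self._maxes.insert(i + 1, right[-1])

    def __contains__(self, value):
        i = self._locate(value)
        leaf = self._lists[i]
        j = bisect_left(leaf, value)
        return j < len(leaf) and leaf[j] == value
```

Insertion still has linear components (shifting inside a leaf, and occasionally inserting into the routing list), but both are bounded by small sizes in practice.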
For example, a similar idea is used in the sortedcontainers library for Python (NB: I have no affiliation with it).
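A quick usage example with its `SortedList` (installable with `pip install sortedcontainers`), which internally also keeps its data as a list of smaller sorted lists:

```python
from sortedcontainers import SortedList

sl = SortedList([5, 1, 3])
sl.add(2)                 # insert while keeping sorted order
print(sl)                 # SortedList([1, 2, 3, 5])
print(3 in sl)            # True, found via binary search
print(sl.bisect_left(3))  # 2, the index where 3 would be inserted
```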
While this doesn't provide better time complexities, it does aim to reduce actual running times for practical uses, knowing that you will not aim to insert 10⁸⁰ values into your data structure -- to give an extreme figure -- as humanity will never have the capacity to store that volume of data.