
I want an ordered, indexable data structure like a Python list, which can access, update, insert and delete at arbitrary indices efficiently.

I have modified a skip list and an AVL tree to do the above in O(log n) time. Does there exist a data structure that can do better, possibly in O(log log n) or even O(1) time?
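
For context, the standard subtree-size augmentation looks roughly like this (a sketch, not my exact implementation; the AVL rotations, which must also update the sizes, are omitted for brevity):

    class Node:
        __slots__ = ("value", "left", "right", "size")

        def __init__(self, value):
            self.value = value
            self.left = None
            self.right = None
            self.size = 1  # number of nodes in this subtree

    def size(node):
        return node.size if node else 0

    def select(node, k):
        """Return the node holding the element at 0-based index k."""
        while node:
            left = size(node.left)
            if k < left:
                node = node.left      # index falls in the left subtree
            elif k == left:
                return node           # this node is the k-th element
            else:
                k -= left + 1         # skip the left subtree and this node
                node = node.right
        raise IndexError("index out of range")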

  • No? How can you insert a value at an arbitrary index and expect the elements after it to shift? Commented Aug 11, 2024 at 10:23
  • Unordered means the elements inserted are not stored in sorted order, just like a regular Python list. Is my definition of unordered wrong? Commented Aug 11, 2024 at 10:35
  • "Unordered" means that the elements do not have a position, so there is no "after" or "before". But what you are describing sounds like an list, which is ordered (but not necessary sorted). Commented Aug 11, 2024 at 10:38

1 Answer


No, there doesn't exist anything better, at least not without other preset limits, such as a bound on the range of values the structure can hold.

But if you consider that time complexity analysis concerns asymptotic behaviour while in reality we work with realistic amounts of data and memory, you could aim to reduce the actual running times.

A Python list performs really well when its size stays in the order of 10³, so you could go for a collection of such lists. This can be organised like a B+ tree that is shallow in depth but relatively wide in its block size. Imagine a B+ tree with two levels, where each node at the bottom level holds a list of around 10³ values. If the root node has around 10³ children, you get a capacity of 10⁶ values with a 2-step lookup: first a binary search in the root node (over the cumulative sizes of its children), then an insertion at a cost comparable to inserting into a Python list of just 10³ elements. Depending on the actual workload your data structure will face, you can choose an optimal "load" (average size of the lists involved).
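
To make that concrete, here is a minimal sketch of the two-level idea, assuming a target block size of about 10³; the class name BlockedList and its methods are mine, not from any library. A linear scan over roughly 10³ block sizes is already cheap, and maintaining cumulative counts (e.g. in a Fenwick tree) would give the binary-search-at-the-root lookup described above:

    class BlockedList:
        """Indexable list stored as a list of small lists (two levels)."""

        BLOCK = 1000  # target block size; tune to the expected workload

        def __init__(self):
            self.blocks = [[]]

        def _locate(self, index):
            """Return (block number, offset) for a valid element index."""
            for b, block in enumerate(self.blocks):
                if index < len(block):
                    return b, index
                index -= len(block)
            raise IndexError("index out of range")

        def __getitem__(self, index):
            b, i = self._locate(index)
            return self.blocks[b][i]

        def __setitem__(self, index, value):
            b, i = self._locate(index)
            self.blocks[b][i] = value

        def insert(self, index, value):
            # Find the block to insert into; index == total length is allowed
            for b, block in enumerate(self.blocks):
                if index <= len(block):
                    break
                index -= len(block)
            else:
                raise IndexError("index out of range")
            block.insert(index, value)  # O(block size) shift, cheap at ~10³
            if len(block) > 2 * self.BLOCK:  # split an oversized block
                half = len(block) // 2
                self.blocks[b:b + 1] = [block[:half], block[half:]]

        def delete(self, index):
            b, i = self._locate(index)
            del self.blocks[b][i]
            if not self.blocks[b] and len(self.blocks) > 1:
                del self.blocks[b]  # drop empty blocks

    lst = BlockedList()
    for i in range(5):
        lst.insert(i, i * 10)   # [0, 10, 20, 30, 40]
    lst.insert(2, 99)           # [0, 10, 99, 20, 30, 40]
    print(lst[2])               # 99

Making BLOCK larger trades cheaper block bookkeeping against a more expensive shift inside each list.insert, which is why the optimal load depends on the workload.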

For example, a similar idea is used in the sortedcontainers library for Python (NB: I have no affiliation with it).
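
For sorted data, its SortedList exposes exactly this kind of positional access (note that SortedList keeps its elements in sorted order, so it does not support insertion at an arbitrary index):

    from sortedcontainers import SortedList

    sl = SortedList([5, 1, 3])   # stored as [1, 3, 5]
    sl.add(2)                    # position set by sort order -> [1, 2, 3, 5]
    print(sl[1])                 # 2 -- positional (indexed) read
    print(sl.index(3))           # 2 -- rank query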

While this doesn't provide better time complexities, it does aim to reduce actual running times for practical uses, knowing that you will never aim to insert 10⁸⁰ values into your data structure -- to give an extreme figure -- as never ever will humanity have the capacity to store that volume of data.


1 Comment

I hypothesised that nothing better than O(log n) complexity can exist, but can one prove it?
