
Consider this traversal algorithm in pseudocode:

Traverse(G, v):
    let S be a stack
    S.push(v)
    label v as visited
    while S is not empty:
        v = S.pop()
        for all w in G.adjacentNodes(v) do:
            if w is not labeled as visited:
                label w as visited
                S.push(w)

This traversal algorithm differs from BFS in that it uses a stack instead of a queue, and from DFS in when it marks a vertex as visited (in iterative DFS you mark a vertex as visited when you pop it from the stack, not when you push it). It also produces an ordering different from both of these approaches.
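For concreteness, here is a minimal runnable sketch in Python of this algorithm next to BFS and a standard iterative DFS (the adjacency-dict graph and the names are just my illustration), showing that the three orders really do differ:

from collections import deque

def traverse(adj, start):
    # The algorithm above: label on push, so a node is never pushed twice.
    visited = {start}
    stack = [start]
    order = []
    while stack:
        v = stack.pop()
        order.append(v)              # a vertex is output when it is popped
        for w in adj[v]:
            if w not in visited:
                visited.add(w)
                stack.append(w)
    return order

def bfs(adj, start):
    # Standard BFS: same labelling rule, but a FIFO queue instead of a stack.
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

def dfs(adj, start):
    # Standard iterative DFS: mark on pop, so duplicates can sit on the stack.
    visited = set()
    stack = [start]
    order = []
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        for w in reversed(adj[v]):   # reversed so the leftmost child is handled first
            stack.append(w)
    return order

adj = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [], 5: [], 6: [], 7: []}
print(traverse(adj, 1))   # [1, 3, 7, 6, 2, 5, 4]
print(bfs(adj, 1))        # [1, 2, 3, 4, 5, 6, 7]
print(dfs(adj, 1))        # [1, 2, 4, 5, 3, 6, 7]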

I have the following questions:

  1. Is this traversal correct for a graph of any complexity, i.e. will it visit all vertices of the connected component in which the initial vertex v lies?
  2. When should DFS be preferred to this approach? The original DFS creates duplicates in the stack; does it have any advantages?

I've looked at similar questions on this site, but they don't provide satisfactory and complete answers.

1 Comment
  • "It also provides ordering" - in which step exactly do you add a vertex to the output order? Commented Jul 23 at 3:02

3 Answers

3

It's a hybrid. It "visits" the siblings from left to right (a breadth-first element) but separates the handling from the visit, and the handling is done in depth-first fashion.

So for each node:

  • its children are pushed onto the stack from left to right, so they will be handled from right to left
  • its children will be handled before the nodes that were already on the stack when the current node's children were "visited"
  • the entire subtree of a "right" child will be handled before a "left" child is even explored, let alone handled (although it has already been visited)

To answer your questions: yes, this will successfully traverse the graph, and whether it should be preferred depends on your goals. An objective reason to prefer this algorithm could be the desire for a breadth-first visit from right to left.

But, if "visiting" does not do any magic besides marking a node as being visited, then you could easily use a depth-first search with descending sibling order instead, because that only differs from your algorithm by its non-separation of visit from handling.


1

Yes, this traversal algorithm is correct and will visit all vertices in the connected component of the starting node v. It’s a variation of DFS that marks nodes as visited when they are pushed onto the stack, rather than when they are popped, which avoids pushing the same node multiple times. This makes it more memory-efficient than standard iterative DFS, but it doesn't produce a post-order traversal (which is important for algorithms like topological sort or strongly connected components). Use this approach when you simply need to visit all reachable nodes efficiently without duplicates in the stack. Use standard DFS if you need post-order properties or more control over the traversal order.
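To make the post-order point concrete, here is a rough Python sketch (the (node, iterator) frames and the names are my own, not part of the question or any particular library): an iterative DFS that mirrors the recursive call stack and therefore yields both a pre-order and a post-order, which the push-time-marking variant does not give you.

def dfs_pre_post(adj, start):
    # Iterative DFS using a stack of (node, child-iterator) pairs,
    # equivalent to the recursive call stack.
    visited = {start}
    pre, post = [], []
    stack = [(start, iter(adj[start]))]
    pre.append(start)                # first time we reach a node: pre-order event
    while stack:
        v, children = stack[-1]
        w = next(children, None)
        if w is None:
            post.append(v)           # all children finished: post-order event
            stack.pop()
        elif w not in visited:
            visited.add(w)
            pre.append(w)
            stack.append((w, iter(adj[w])))
    return pre, post

adj = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [], 5: [], 6: [], 7: []}
pre, post = dfs_pre_post(adj, 1)
print(pre)    # [1, 2, 4, 5, 3, 6, 7]
print(post)   # [4, 5, 2, 6, 7, 3, 1]  (reversed post-order gives a topological order on a DAG)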

2 Comments

No duplicates should ever end up being on the stack. That's why the visiting label is being used.
@LajosArpad No, the purpose of the "visited" labels is just to handle cycles. An iterative DFS (with a stack data structure, not using the call stack in a recursive implementation) does put duplicates on the stack to achieve the standard traversal order
-1

TLDR: You should always use this method (or something equivalent) and not what you are calling "DFS" (and what I would call an inefficient DFS variant).

The "normal" DFS algorithm I've always seen described is (using your syntax):

Traverse(G, v):
    stack.push(v)
    while stack is not empty:
        v = stack.pop()
        mark v as visited
        for all w in G.adjacentNodes(v) do:
            if w is not visited and w is not on stack:
                stack.push(w)

As you note, it does not mark a node as "visited" until it is popped off the stack. However, when pushing nodes on the stack it also checks each node to see if it is already on the stack and does not push it in that case.

If you leave out the check of "w is not on stack", as you seem to think DFS does, then you end up pushing nodes onto the stack multiple times if your graph has any cycles or is a DAG. This will result in a different order of visiting for these nodes, but, more importantly, will waste stack space. In the worst case, this requires O(n²) stack space (the worst case being a single SCC, i.e. a strongly connected graph). A "normal" DFS requires worst-case O(n) space.
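One way to see the space difference is to track the peak stack size of both variants; this is a rough Python sketch of my own (the complete graph is chosen as a near-worst case, not taken from the answer):

def max_stack_duplicates(adj, start):
    # DFS that pushes neighbours without checking whether they are already
    # on the stack; duplicates accumulate and are skipped when popped.
    visited = set()
    stack = [start]
    peak = 1
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        for w in adj[v]:
            if w not in visited:
                stack.append(w)
        peak = max(peak, len(stack))
    return peak

def max_stack_mark_on_push(adj, start):
    # The question's variant: a node is labelled when pushed,
    # so it can appear on the stack at most once.
    visited = {start}
    stack = [start]
    peak = 1
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in visited:
                visited.add(w)
                stack.append(w)
        peak = max(peak, len(stack))
    return peak

n = 50
complete = {i: [j for j in range(n) if j != i] for i in range(n)}
print(max_stack_duplicates(complete, 0))    # grows roughly quadratically with n
print(max_stack_mark_on_push(complete, 0))  # stays below n (49 here)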

3 Comments

"If you leave out the check of "w is not on stack" as you seem to be proposing, then you end up pushing nodes on the stack multiple times" - no, the OP's algorithm does not do that, since it skips labelled nodes (instead of pushing them) and everything on the stack is also labelled. They've basically made the w is not on stack check (if assumed to be a naive linear search) more efficient by using labels.
That's my point exactly. His "new" algorithm is the same as what is normally referred to as DFS. What he seems to think DFS is, is not.
Ok then I misunderstood your answer at first. But still, "his "new" algorithm is the same as what is normally referred to as DFS." - no it is not, it results in a different order (see e.g. this explanation), as the other answers also point out. If you implement a DFS with a stack of nodes (and not recursively, or with a stack of iterators), then you need to accept duplicates on the stack to get the desired traversal order
