
I use temp tables frequently to simplify data loads (easier debugging, cleaner select statements, etc). If performance demands it, I'll create a physical table etc.

I noticed recently that I automatically declare my temp tables as global (##temp_load) rather than local (#temp_table). I don't know why, but that's been my habit for years. I never need the tables to be global, but I'm curious whether there is additional overhead to creating them as global, and whether I should work on changing my habit.

Are there additional risks for making them global?

1 Answer

Local (non-global) temp tables are pretty much guaranteed never to collide: SQL Server internally appends a unique suffix to the name, so each session gets its own private copy even if two sessions create a table with the same name.

Global temp tables are similar to materialized tables in that the name must be unique across the instance.

As a rule, only use ##GLOBAL_TEMP tables when you must.

Otherwise, if you are writing a proc that could be run more than once simultaneously, the instances will interact with each other in unpredictable ways, making them extremely difficult to troubleshoot: Instance 1 can change data being used by Instance 2, which causes Instance 3 to generate incorrect results as well.
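To make the scoping concrete, here is a minimal T-SQL sketch (the table names are illustrative):

```sql
-- Session A
CREATE TABLE #local_load  (id INT);   -- private to this session; other sessions
                                      -- can create their own #local_load freely
CREATE TABLE ##global_load (id INT);  -- visible to every session on the instance

-- Session B, running the same code at the same time
CREATE TABLE #local_load  (id INT);   -- succeeds: a separate, independent table
CREATE TABLE ##global_load (id INT);  -- fails: an object named ##global_load
                                      -- already exists in tempdb
```

This is exactly the collision the answer warns about: two concurrent runs of the same proc share one ##global_load, but each gets its own #local_load.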

My personal opinion on Temp tables is that I only use them when:

  • I have a medium-to-large resultset (more than 1m rows)
  • I will need to index that resultset
  • I will not need to use that resultset more than once per iteration of the process
  • I am confident I will not need to resume the process at any point
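When those conditions are met, the pattern looks roughly like this (a sketch with made-up table and column names):

```sql
-- Stage a large intermediate resultset once, index it, consume it once.
SELECT o.customer_id, SUM(o.amount) AS total_amount
INTO   #stage_orders                 -- hypothetical intermediate resultset
FROM   dbo.orders AS o               -- hypothetical source table
GROUP BY o.customer_id;

-- Index the staged rows to support the downstream join
CREATE CLUSTERED INDEX ix_stage_orders ON #stage_orders (customer_id);

-- Single use of the resultset for this iteration of the process
SELECT c.customer_name, s.total_amount
FROM   #stage_orders AS s
JOIN   dbo.customers AS c
  ON   c.customer_id = s.customer_id;
```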

I highlighted that last bullet because this is the main reason I try to minimize temp table use:

If you have a long-running process that uses temp tables to store intermediate data sets, and something dies, say, 90% of the way through, you usually have to restart from the beginning unless that intermediate data was written to a materialized table.

Some of my processes run for days on billions of rows of data, so I am not interested in restarting from scratch ever.


4 Comments

Some awesome points there. The collision issue occurred to me even as I was writing my question. Thanks!
I've actually yet to find a good use case for global ##temp tables. This is a rhetorical question, but why not just create a permanent table and then drop it? You get the same concurrency limitations without losing the table if anything dies. You could argue whether that's better or worse, I suppose, just like you could argue that a gunshot wound is better or worse than a crossbow wound.
I assumed that physical tables had additional overhead (logging, caching) vs temp tables. Is this false? I usually use temp tables for small(ish) datasets. I primarily do data warehousing (900k fact records per day), and my source shadows and staging tables are all physical. I typically use temp tables for some simple lookups and supporting joins.
@Tricky - Yes that's correct on logging but you can also create a "normal" table in tempdb to take advantage of the logging optimisations as long as you don't need the data after service restart.
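That last suggestion can be sketched like this (assuming you have rights to create objects in tempdb; the table name is hypothetical):

```sql
-- A "normal" table created in tempdb benefits from tempdb's reduced logging,
-- but, like everything in tempdb, it is gone after a service restart.
USE tempdb;

CREATE TABLE dbo.load_checkpoint (   -- hypothetical checkpoint table
    batch_id    INT       NOT NULL,
    loaded_rows BIGINT    NOT NULL,
    loaded_at   DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
```

Unlike a ##global temp table, this table is not dropped when the creating session ends, so a resumed process can read its checkpoints, at the cost of having to clean it up yourself.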
