
I've made extensive use of bulk insert for onboarding and similar tasks. Unless I'm mistaken, one of the main reasons this runs rapidly is that much of the work is done locally on the server as opposed to sending up a large number of INSERT queries.

Now I'm looking at a similar issue for UPDATE. We are collecting changes to different fields in different tables from a file of the basic form TABLE/FIELD/PKEY=new value. I don't find anything similar to a BULK UPDATE, and looking at previous threads here, it seems the usual solution is to send ~2000 UPDATEs at a time.

I built a DB with 30 tables. Using BULK INSERT I can populate 1k rows in each in < 1 second. I then wrote some UPDATE code using a 2k row limit (so everything was a single batch per table in my DB), and that took 300 seconds. This is not unexpected: each UPDATE includes a WHERE clause, so it is going to be more expensive.
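
The UPDATE code sends individual statements of roughly this shape, batched ~2000 per round trip (table/field names here are illustrative, not my real schema):

    UPDATE dbo.Table01 SET Field01 = 'new value 1' WHERE PKey = 1001;
    UPDATE dbo.Table01 SET Field02 = 'new value 2' WHERE PKey = 1002;
    -- ... and so on, ~2000 statements per batch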

Am I missing a higher performance solution?

  • I usually build a JSON of the changes and then join the main table against an OPENJSON call to do the update in one statement Commented Oct 11, 2024 at 14:51
  • Oh, that is interesting! Can you speak to the performance? Commented Oct 11, 2024 at 14:53
  • It was much faster for my case, but of course your mileage will vary :) Commented Oct 11, 2024 at 14:55
  • "Am I missing a higher performance solution?" We can't effectively answer that unless we see a sample of your table schema (including PKs, FKs, and indexes), the actual code you are running, and preferably the actual execution plan for the updates uploaded to Paste the Plan. Commented Oct 11, 2024 at 17:07
  • 1
    No there isn't. A common technique is to use a Table Valued Parameter, or a temp table with bulk-copy, both of which should be fast to insert. If they are indexed then the resulting joined update should be pretty fast. Commented Oct 12, 2024 at 20:28

1 Answer


Am I missing a higher performance solution?

No, there isn't one. A common technique is to use a Table-Valued Parameter (TVP), or a temp table populated with bulk copy, both of which should be fast to insert. If they are indexed, then the resulting joined UPDATE should be pretty fast.
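
A rough sketch of the temp-table variant, assuming a hypothetical target dbo.TargetTable keyed on PKey (the file path and column types are placeholders; a client-side bulk copy such as SqlBulkCopy works for the load just as well as BULK INSERT):

    -- Stage the changes; the primary key gives the join an index to use.
    CREATE TABLE #Changes
    (
        PKey     int           NOT NULL PRIMARY KEY,
        NewValue nvarchar(100) NOT NULL
    );

    -- Bulk-load the change set in one shot.
    BULK INSERT #Changes
    FROM 'C:\staging\changes.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    -- Apply every change in a single set-based, joined UPDATE.
    UPDATE t
    SET    t.SomeField = c.NewValue
    FROM   dbo.TargetTable AS t
    JOIN   #Changes        AS c
        ON c.PKey = t.PKey;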

You can also build a JSON document of the changes and then join the main table against an OPENJSON call to do the whole update in a single statement.
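
A sketch of that approach, again with hypothetical names (OPENJSON requires SQL Server 2016+ and database compatibility level 130 or higher):

    -- @json carries the change set, e.g. passed from the client as one parameter.
    DECLARE @json nvarchar(max) = N'[
        {"PKey": 1001, "NewValue": "new value 1"},
        {"PKey": 1002, "NewValue": "new value 2"}
    ]';

    -- Shred the JSON into a rowset and join it to the target table.
    UPDATE t
    SET    t.SomeField = j.NewValue
    FROM   dbo.TargetTable AS t
    JOIN   OPENJSON(@json)
           WITH (PKey int, NewValue nvarchar(100)) AS j
        ON j.PKey = t.PKey;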


1 Comment

Thanks! Just wanted to be sure I wasn't missing something. I think I'll go with the JSON solution as it would solve another problem at the same time.
