
I am having issues with a series of pipelines that build our data platform's Spark databases hosted in Azure Synapse.

The pipelines host dataflows with 'recreate table' enabled; the dataflows extract data and are supposed to recreate the tables on each run. There is also a step at the start of the job to drop all the tables. However, the jobs randomly fail at different stages with errors like the one below (sensitive system details have been removed):

Operation on target failed: {"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'sinkname': Spark job failed in one of the cluster nodes while writing data in one of the partitions to sink, with following error message: Failed to rename VersionedFileStatus{VersionedFileStatus{path=abfss://synapsename.dfs.core.windows.net/synapse/workspaces/synapsename/warehouse/databasename.db/tablename/.name removed/_temporary/0/_temporary/idremoved/part-idremoved.snappy.parquet; isDirectory=false; length=636844; replication=1; blocksize=268435456; modification_time=1731778904698; access_time=0; owner=81aba2ef-674d-4bcb-a036-f4ab2ad78d3e; group=trusted-service-user; permission=rw-r-----; isSymlink=false; hasAcl=true; isEncrypted=false; isErasureCoded=false}; version='0x8DD0665F02661DC'} to abfss://[email protected]/synapse/workspaces/synapsename/warehouse/dataplatform","Details":null}

The failure can hit any Spark database table load at random: it might not occur at all the next day, then reoccur a few days later.

To fix this, we go to the Synapse backend data lake storage, manually delete the Spark database table's parquet files, and rerun the job, which then succeeds. We have also tried increasing resources, including the Spark runtime.
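The manual workaround above could be sketched as a small cleanup script: list the table's files in the backend data lake, find any leftover `_temporary` commit directories, and delete them before the rerun. This is a hedged sketch, not our actual tooling; `clean_table`, the account/filesystem parameters, and the use of the `azure-storage-file-datalake` SDK are assumptions:

```python
# Sketch of the manual cleanup described above: before rerunning the
# pipeline, find and delete leftover Spark "_temporary" commit directories
# under a table's storage path. Account/filesystem/table names are
# hypothetical; assumes the azure-storage-file-datalake SDK and credentials.

def temporary_dirs(paths):
    """Return the distinct '.../_temporary' directory prefixes found in paths."""
    found = set()
    for p in paths:
        parts = p.split("/")
        if "_temporary" in parts:
            idx = parts.index("_temporary")
            found.add("/".join(parts[: idx + 1]))
    return sorted(found)

def clean_table(account_url, filesystem, table_path, credential):
    # Imported here so the pure helper above stays stdlib-only.
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(account_url=account_url, credential=credential)
    fs = service.get_file_system_client(filesystem)
    names = [p.name for p in fs.get_paths(path=table_path, recursive=True)]
    for d in temporary_dirs(names):
        fs.delete_directory(d)  # drop the stale commit directory
```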

Any thoughts, anyone?

  • Update: The MS team gave an update that this is an issue with their blob storage and they are looking into it; it seems to be a known issue. Has anyone else encountered it? Commented Jan 9 at 9:15

1 Answer


Set the concurrency to 1. Typically it is the _temporary file that causes this.
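In Synapse/Data Factory, this suggestion corresponds to the pipeline's top-level `concurrency` property, which caps how many runs of the pipeline may execute at once. A minimal sketch of the pipeline JSON (the pipeline name is hypothetical, and a real definition would list its activities):

```json
{
  "name": "LoadSparkTablesPipeline",
  "properties": {
    "concurrency": 1,
    "activities": []
  }
}
```

With `concurrency: 1`, an overlapping trigger cannot start a second run that would race the first over the same `_temporary` commit directories.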


1 Comment

Thank you for your suggestion. This pipeline is triggered every morning, so it's highly unlikely that we have multiple instances of it running at the same time, but we will definitely give this a shot and will also check the _temporary file.
