Fix UDTs registration ordering #573
Conversation
core/src/main/scala/org/locationtech/rasterframes/ref/RFRasterSource.scala (review comment, outdated/resolved)
core/src/main/scala/org/locationtech/rasterframes/tiles/ProjectedRasterTile.scala (review comment, outdated/resolved)
  pytest = 'pytest>=4.0.0,<5.0.0'
- pyspark = 'pyspark==3.1.1'
+ pyspark = 'pyspark==3.1.2'
Don't know how it worked before 🤦
@@ -1,4 +1,4 @@
- pyspark>=3.1
+ pyspark==3.1.2
Spark 3.2 has been released, and we need stricter constraints here.
Force-pushed from 0ea5e2d to 1eae180.
@pomadchin Do you know why the difference between
@pomadchin @echeipesh What do you think about sticking with CircleCI until we have time to figure out GitHub Actions?
@metasim Nope, don't know. I noticed that even a minor Spark version mismatch between the driver and workers can cause even job scheduling to fail. I don't know why it worked before 🦐 but it did: I guess it matters which Spark version is installed on the host, and it 'just works'.
Maybe only triggered in cluster mode ¯\_(ツ)_/¯
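A quick way to check the version-mismatch theory above is to compare the Spark version on the driver's classpath with what the executors actually report. This is just an illustrative sketch, not part of this PR; the object and app names are placeholders.

```scala
import org.apache.spark.sql.SparkSession

object SparkVersionCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("spark-version-check").getOrCreate()

    // Version baked into the driver's classpath.
    val driverVersion = org.apache.spark.SPARK_VERSION

    // Evaluate SPARK_VERSION inside the executor JVMs and collect the distinct values.
    val executorVersions = spark.sparkContext
      .parallelize(1 to 1000, numSlices = 10)
      .map(_ => org.apache.spark.SPARK_VERSION)
      .distinct()
      .collect()
      .sorted

    println(s"driver: $driverVersion, executors: ${executorVersions.mkString(", ")}")
    spark.stop()
  }
}
```

If the two disagree (e.g. in cluster mode, where the workers use whatever Spark is installed on the hosts), that would explain jobs failing at scheduling time.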

Without this fix, workers may try to access Encoders before the UDTs are registered.
Tested both 66cca65 and 8b5c165 within the k8s notebook.
The image to play with is quay.io/daunnc/rasterframes-notebook:0.10.1-SNAPSHOT.
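For readers less familiar with the issue: the sketch below illustrates the ordering rule described above (UDT registration must have run on a JVM before any Encoder for a UDT-backed type is derived there). It is a hypothetical example, not the RasterFrames code and not necessarily how this PR fixes it; `FakeTile`, `TileCodecs`, and `registerAllUdts` are stand-ins. One way to enforce the ordering is to derive encoders behind a `def` that first triggers registration, instead of an eagerly initialized `val` that an executor might touch too early.

```scala
import org.apache.spark.sql.{Encoder, Encoders}

// Hypothetical stand-in for a UDT-backed type such as ProjectedRasterTile.
case class FakeTile(cols: Int, rows: Int)

object TileCodecs {
  @volatile private var registered = false

  // Stand-in for the library's UDT registration side effect.
  private def registerAllUdts(): Unit = ()

  private def ensureRegistered(): Unit = synchronized {
    if (!registered) { registerAllUdts(); registered = true }
  }

  // A def rather than an eagerly evaluated val: the encoder is only derived
  // after registration has run on this JVM, whether driver or executor.
  def tileEncoder: Encoder[FakeTile] = {
    ensureRegistered()
    Encoders.product[FakeTile]
  }
}
```

With a pattern like this, whichever JVM first asks for the encoder (driver or worker) triggers registration before the encoder is resolved, which is the ordering guarantee this PR is about.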