# Supervised Machine Learning
In this example we will demonstrate how to fit and score a supervised learning model with a sample of Sentinel-2 data and hand-drawn vector labels over different [land cover](https://en.wikipedia.org/wiki/Land_cover) types.
```python, setup, echo=False
from IPython.core.display import display
from pyrasterframes.utils import create_rf_spark_session
from pyrasterframes.rasterfunctions import *
import pyrasterframes.rf_ipython
from pyspark.sql.functions import lit
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
spark = create_rf_spark_session()
```
## Create and Read Raster Catalog
The first step is to create a Spark DataFrame containing our imagery data. To achieve that we will create @ref:[a catalog DataFrame](raster-catalogs.md#creating-a-catalog). In the catalog, each row represents a distinct area and time, and each column is the URI to a band's image product. In this example our catalog just has one row. After reading the catalog, the resulting Spark DataFrame may have many rows per URI, with a column corresponding to each band.
The imagery for feature data will come from [eleven bands of 60 meter resolution Sentinel-2](https://earth.esa.int/web/sentinel/user-guides/sentinel-2-msi/resolutions/spatial) imagery. We also will use the [scene classification (SCL)](https://earth.esa.int/web/sentinel/technical-guides/sentinel-2-msi/level-2a/algorithm) data to identify high quality, non-cloudy pixels.
```python, read_bands
uri_base = 'https://rasterframes.s3.amazonaws.com/samples/luray_snp/{}.tif'
bands = ['B01', 'B02', 'B03', 'B04', 'B05', 'B06', 'B07', 'B08', 'B09', 'B11', 'B12']
cols = ['SCL'] + bands
catalog_df = pd.DataFrame([
{b: uri_base.format(b) for b in cols}
])
tile_size = 256
df = spark.read.raster(catalog_df, catalog_col_names=cols, tile_dimensions=(tile_size, tile_size)) \
.repartition(100)
df = df.select(
rf_crs(df.B01).alias('crs'),
rf_extent(df.B01).alias('extent'),
rf_tile(df.SCL).alias('scl'),
rf_tile(df.B01).alias('B01'),
rf_tile(df.B02).alias('B02'),
rf_tile(df.B03).alias('B03'),
rf_tile(df.B04).alias('B04'),
rf_tile(df.B05).alias('B05'),
rf_tile(df.B06).alias('B06'),
rf_tile(df.B07).alias('B07'),
rf_tile(df.B08).alias('B08'),
rf_tile(df.B09).alias('B09'),
rf_tile(df.B11).alias('B11'),
rf_tile(df.B12).alias('B12'),
)
df.printSchema()
```
## Data Prep
### Label Data
The land classification labels are based on a small set of hand-drawn polygons in the GeoJSON file [here](https://github.com/locationtech/rasterframes/blob/develop/pyrasterframes/src/test/resources/luray-labels.geojson). The property `id` indicates the type of land cover in each area. For these integer values, 1 is forest, 2 is cropland, and 3 is developed areas.
We will create a very small Spark DataFrame of the label shapes and then join it to the raster DataFrame. Such joins are typically expensive, but in this case both datasets are quite small. To speed up the join for the small vector DataFrame, we put the `broadcast` hint on it, which will tell Spark to put a copy of it on each Spark executor.
After the raster and vector data are joined, we will convert the vector shapes into _tiles_ using the @ref:[`rf_rasterize`](reference.md#rf-rasterize) function. This procedure is sometimes called "burning in" a geometry into a raster. The values in the resulting _tile_ cells are the `id` property of the GeoJSON, which we will use as labels in our supervised learning task. In areas where the geometry does not intersect, the cells will contain NoData.
```python, join_and_rasterize
crses = df.select('crs.crsProj4').distinct().collect()
print('Found', len(crses), 'distinct CRS.')
crs = crses[0][0]
from pyspark import SparkFiles
spark.sparkContext.addFile('https://rasterframes.s3.amazonaws.com/samples/luray_snp/luray-labels.geojson')
label_df = spark.read.geojson(SparkFiles.get('luray-labels.geojson')) \
.select('id', st_reproject('geometry', lit('EPSG:4326'), lit(crs)).alias('geometry')) \
.hint('broadcast')
df_joined = df.join(label_df, st_intersects(st_geometry('extent'), 'geometry')) \
.withColumn('dims', rf_dimensions('B01'))
df_labeled = df_joined.withColumn('label',
rf_rasterize('geometry', st_geometry('extent'), 'id', 'dims.cols', 'dims.rows')
)
```
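To build intuition for what `rf_rasterize` produces, here is a toy numpy sketch of "burning in" a geometry, using an axis-aligned rectangle in place of a real polygon (the actual function handles arbitrary geometries and CRS-aware extents):

```python
import numpy as np

def burn_rectangle(id_value, x0, y0, x1, y1, cols, rows, nodata=np.nan):
    """Toy 'burn in': fill cells covered by an axis-aligned rectangle
    with id_value; all other cells remain NoData."""
    tile = np.full((rows, cols), nodata)
    tile[y0:y1, x0:x1] = id_value
    return tile

# Burn label id 1 into a 6x5 grid over columns 2..4, rows 1..3
label_tile = burn_rectangle(1, 2, 1, 5, 4, cols=6, rows=5)
print(label_tile)
```

Cells outside the geometry stay NoData, which is exactly what lets the `NoDataFilter` later drop unlabeled pixels from the training set.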
## Masking Poor Quality Cells
To filter for only good quality pixels, we follow roughly the same procedure as demonstrated in the @ref:[quality masking](nodata-handling.md#masking) section of the chapter on NoData. Instead of actually setting NoData values in the unwanted cells of the imagery bands, we will just filter out rows based on the mask cell values later in the process.
```python, make_mask
from pyspark.sql.functions import lit
df_labeled = df_labeled \
.withColumn('mask', rf_local_is_in('scl', [0, 1, 8, 9, 10]))
# At this point the mask contains 0 for good cells and 1 for defective,
# cloudy, or no-data cells.
# convert cell type and set value 1 to NoData
df_mask = df_labeled.withColumn('mask',
rf_with_no_data(rf_convert_cell_type('mask', 'uint8'), 1.0)
)
df_mask.printSchema()
```
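The cell-level logic of `rf_local_is_in` followed by the NoData conversion can be sketched in plain numpy, with a masked array standing in for RasterFrames' NoData semantics (this is an illustrative analogy, not the library's implementation):

```python
import numpy as np

scl = np.array([[4, 9, 0],
                [8, 5, 10]])          # toy SCL cell values
bad_values = [0, 1, 8, 9, 10]         # no-data, defective, cloud, cirrus

# rf_local_is_in: 1 where the SCL value is in the unwanted set, else 0
mask = np.isin(scl, bad_values).astype(np.uint8)

# rf_with_no_data(..., 1.0): treat cell value 1 as NoData; here a numpy
# masked array plays the role of NoData
masked = np.ma.masked_equal(mask, 1)
print(mask)
```

Only the cells where `mask` is 0 (good pixels) survive the later filtering step.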
## Create ML Pipeline
We import various Spark components that we need to construct our [`Pipeline`](https://spark.apache.org/docs/latest/ml-pipeline.html). These are the objects that will work in sequence to conduct the data preparation and modeling.
```python, imports, echo=True
from pyrasterframes import TileExploder
from pyrasterframes.rf_types import NoDataFilter
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml import Pipeline
```
SparkML requires that each observation be in its own row, with the observations packed into a single [`Vector`](https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#module-pyspark.ml.linalg) object. The first step is to "explode" the _tiles_ into a single row per cell or pixel with the `TileExploder` (see also @ref:[`rf_explode_tiles`](reference.md#rf_explode_tiles)). If a _tile_ cell contains NoData, it will become a null value after the exploder stage. Then we use the `NoDataFilter` to filter out any rows with missing or null values, which would cause errors during training. Finally we use the SparkML `VectorAssembler` to create that `Vector`.
Recall that above we set undesirable pixels to NoData in the `mask` column, so the `NoDataFilter` will remove them at this stage. We apply the filter to the `mask` column and the `label` column, the latter being used only during training. When it is time to score the model, the pipeline will ignore the fact that there is no `label` column on the input DataFrame.
```python, transformers
exploder = TileExploder()
noDataFilter = NoDataFilter() \
.setInputCols(['label', 'mask'])
assembler = VectorAssembler() \
.setInputCols(bands) \
.setOutputCol("features")
```
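As a rough picture of what the `TileExploder` emits, the sketch below flattens a toy 2x2 single-band tile into one pandas row per cell, carrying the cell indices that later allow the tile to be reassembled (column names mirror the real output; the actual transformer also carries `extent` and `crs`):

```python
import numpy as np
import pandas as pd

tile = np.array([[10, 11],
                 [12, 13]])           # toy 2x2 single-band tile

# One row per cell, keeping each cell's position within the tile
r_idx, c_idx = np.indices(tile.shape)
exploded = pd.DataFrame({
    'column_index': c_idx.ravel(),
    'row_index': r_idx.ravel(),
    'B01': tile.ravel(),
})
print(exploded)
```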
We are going to use a decision tree for classification. You can swap out one of the other multi-class classification algorithms if you like. With the algorithm selected we can assemble our modeling pipeline.
```python, pipeline
classifier = DecisionTreeClassifier() \
.setLabelCol('label') \
.setFeaturesCol(assembler.getOutputCol())
pipeline = Pipeline() \
.setStages([exploder, noDataFilter, assembler, classifier])
pipeline.getStages()
```
## Train the Model
The next step is to actually run each stage of the Pipeline we created, including fitting the decision tree model. We filter the DataFrame for only _tiles_ intersecting the label raster, because the label shapes are relatively sparse over the imagery. It would be logically equivalent to either include or exclude this step, but filtering is more efficient because it means less data going into the pipeline.
```python, train
model_input = df_mask.filter(rf_tile_sum('label') > 0).cache()
model = pipeline.fit(model_input)
```
## Model Evaluation
To view the model's performance, we first call the pipeline's `transform` method on the training dataset. This transformed dataset will have the model's prediction included in each row. We next construct an evaluator and pass it the transformed dataset to easily compute the performance metric. We can also create custom metrics using a variety of DataFrame or SQL transformations.
```python, eval
prediction_df = model.transform(df_mask) \
.drop(assembler.getOutputCol()).cache()
prediction_df.printSchema()
eval = MulticlassClassificationEvaluator(
predictionCol=classifier.getPredictionCol(),
labelCol=classifier.getLabelCol(),
metricName='accuracy'
)
accuracy = eval.evaluate(prediction_df)
print("\nAccuracy:", accuracy)
```
As an example of using the flexibility provided by DataFrames, the code below computes and displays the confusion matrix.
```python, confusion_mtrx
cnf_mtrx = prediction_df.groupBy(classifier.getPredictionCol()) \
.pivot(classifier.getLabelCol()) \
.count() \
.sort(classifier.getPredictionCol())
cnf_mtrx
```
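The groupBy/pivot/count pattern above translates directly to pandas; this toy version (with made-up class values) shows the same shape of result, rows being predicted classes and columns true classes:

```python
import pandas as pd

df = pd.DataFrame({
    'label':      [1, 1, 2, 3, 3, 3],
    'prediction': [1, 2, 2, 3, 3, 1],
})

# Group by predicted class, count occurrences of each true class,
# then pivot the true classes into columns
cnf = df.groupby('prediction')['label'] \
        .value_counts() \
        .unstack(fill_value=0) \
        .sort_index()
print(cnf)
```

Diagonal entries are correct classifications; off-diagonal entries show which classes get confused with which.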
## Visualize Prediction
Because the pipeline included a `TileExploder`, we will recreate the tiled data structure. The explosion transformation includes metadata enabling us to recreate the _tiles_. See the @ref:[`rf_assemble_tile`](reference.md#rf-assemble-tile) function documentation for more details. In this case, the pipeline is scoring on all areas, regardless of whether they intersect the label polygons. This is simply done by removing the `label` column, as @ref:[discussed above](supervised-learning.md#create-ml-pipeline).
```python, assemble_prediction
scored = model.transform(df_mask.drop('label'))
retiled = scored \
.groupBy('extent', 'crs') \
.agg(
rf_assemble_tile('column_index', 'row_index', 'prediction', tile_size, tile_size).alias('prediction'),
rf_assemble_tile('column_index', 'row_index', 'B04', tile_size, tile_size).alias('red'),
rf_assemble_tile('column_index', 'row_index', 'B03', tile_size, tile_size).alias('grn'),
rf_assemble_tile('column_index', 'row_index', 'B02', tile_size, tile_size).alias('blu')
)
retiled.printSchema()
```
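Conceptually, `rf_assemble_tile` inverts the explosion: each value is placed back at its `(row_index, column_index)` position. A toy numpy sketch, assuming the indices cover every cell exactly once:

```python
import numpy as np

# Exploded cells as (column_index, row_index, value) triples; order is arbitrary
col_idx = np.array([0, 1, 0, 1])
row_idx = np.array([0, 0, 1, 1])
values  = np.array([10., 11., 12., 13.])

# Reassemble: scatter each value back to its cell position in a 2x2 tile
tile = np.empty((2, 2))
tile[row_idx, col_idx] = values
print(tile)
```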
Take a look at a sample of the resulting prediction and the corresponding area's red-green-blue composite image. Note that because each `prediction` tile is rendered independently, the colors may not have the same meaning across rows.
```python
scaling_quantiles = retiled.agg(
rf_agg_approx_quantiles('red', [0.03, 0.97]).alias('red_q'),
rf_agg_approx_quantiles('grn', [0.03, 0.97]).alias('grn_q'),
rf_agg_approx_quantiles('blu', [0.03, 0.97]).alias('blu_q')
).first()
retiled.select(
rf_render_png(
rf_local_clamp('red', *scaling_quantiles['red_q']).alias('red'),
rf_local_clamp('grn', *scaling_quantiles['grn_q']).alias('grn'),
rf_local_clamp('blu', *scaling_quantiles['blu_q']).alias('blu')
).alias('tci'),
rf_render_color_ramp_png('prediction', 'ClassificationBoldLandUse').alias('prediction')
)
```
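The quantile-based clamping used for the color composite above has a simple numpy analogue: compute the 3rd and 97th percentiles of a band and pin outliers to those bounds, which keeps a few extreme cells from washing out the rendered image (toy data; the RasterFrames functions do this per tile column across the whole DataFrame):

```python
import numpy as np

band = np.array([0., 5., 50., 100., 200., 10000.])  # toy band with an outlier

# 3rd and 97th percentiles, as in rf_agg_approx_quantiles(..., [0.03, 0.97])
lo, hi = np.quantile(band, [0.03, 0.97])

# rf_local_clamp: pin values outside [lo, hi] to the bounds
clamped = np.clip(band, lo, hi)
print(lo, hi, clamped)
```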