{"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"TF_demo1_keras.ipynb","provenance":[],"private_outputs":true,"collapsed_sections":[],"toc_visible":true},"kernelspec":{"name":"python3","display_name":"Python 3"},"accelerator":"GPU"},"cells":[{"cell_type":"code","metadata":{"id":"fSIfBsgi8dNK","colab_type":"code","colab":{}},"source":["#@title Copyright 2020 Google LLC. { display-mode: \"form\" }\n","# Licensed under the Apache License, Version 2.0 (the \"License\");\n","# you may not use this file except in compliance with the License.\n","# You may obtain a copy of the License at\n","#\n","# https://www.apache.org/licenses/LICENSE-2.0\n","#\n","# Unless required by applicable law or agreed to in writing, software\n","# distributed under the License is distributed on an \"AS IS\" BASIS,\n","# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n","# See the License for the specific language governing permissions and\n","# limitations under the License."],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"aV1xZ1CPi3Nw","colab_type":"text"},"source":["<table class=\"ee-notebook-buttons\" align=\"left\"><td>\n","<a target=\"_blank\" href=\"http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/TF_demo1_keras.ipynb\">\n"," <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /> Run in Google Colab</a>\n","</td><td>\n","<a target=\"_blank\" href=\"https://github.com/google/earthengine-api/blob/master/python/examples/ipynb/TF_demo1_keras.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /> View source on GitHub</a></td></table>"]},{"cell_type":"markdown","metadata":{"id":"AC8adBmw-5m3","colab_type":"text"},"source":["# Introduction\n","\n","This is an Earth Engine <> TensorFlow demonstration notebook. Specifically, this notebook shows:\n","\n","1. Exporting training/testing data from Earth Engine in TFRecord format.\n","2. Preparing the data for use in a TensorFlow model.\n","2. Training and validating a simple model (Keras `Sequential` neural network) in TensorFlow.\n","3. Making predictions on image data exported from Earth Engine in TFRecord format.\n","4. Ingesting classified image data to Earth Engine in TFRecord format.\n","\n","This is intended to demonstrate a complete i/o pipeline. For a workflow that uses a [Google AI Platform](https://cloud.google.com/ai-platform) hosted model making predictions interactively, see [this example notebook](http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb)."]},{"cell_type":"markdown","metadata":{"id":"KiTyR3FNlv-O","colab_type":"text"},"source":["# Setup software libraries\n","\n","Import software libraries and/or authenticate as necessary."]},{"cell_type":"markdown","metadata":{"id":"dEM3FP4YakJg","colab_type":"text"},"source":["## Authenticate to Colab and Cloud\n","\n","To read/write from a Google Cloud Storage bucket to which you have access, it's necessary to authenticate (as yourself). *This should be the same account you use to login to Earth Engine*. When you run the code below, it will display a link in the output to an authentication page in your browser. Follow the link to a page that will let you grant permission to the Cloud SDK to access your resources. 
Copy the code from the permissions page back into this notebook and press return to complete the process.\n","\n","(You may need to run this again if you get a credentials error later.)"]},{"cell_type":"code","metadata":{"id":"sYyTIPLsvMWl","colab_type":"code","cellView":"code","colab":{}},"source":["from google.colab import auth\n","auth.authenticate_user()"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Ejxa1MQjEGv9","colab_type":"text"},"source":["## Authenticate to Earth Engine\n","\n","Authenticate to Earth Engine the same way you authenticated to the Colab notebook. Specifically, run the code to display a link to a permissions page. This gives you access to your Earth Engine account. *This should be the same account you used to log in to Cloud previously*. Copy the code from the Earth Engine permissions page back into the notebook and press return to complete the process."]},{"cell_type":"code","metadata":{"id":"HzwiVqbcmJIX","colab_type":"code","cellView":"code","colab":{}},"source":["import ee\n","ee.Authenticate()\n","ee.Initialize()"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"iJ70EsoWND_0","colab_type":"text"},"source":["## Test the TensorFlow installation\n","\n","Import the TensorFlow library and check the version."]},{"cell_type":"code","metadata":{"id":"i1PrYRLaVw_g","colab_type":"code","cellView":"code","colab":{}},"source":["import tensorflow as tf\n","print(tf.__version__)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"b8Xcvjp6cLOL","colab_type":"text"},"source":["## Test the Folium installation\n","\n","We will use the Folium library for visualization. Import the library and check the version."]},{"cell_type":"code","metadata":{"id":"YiVgOXzBZJSn","colab_type":"code","colab":{}},"source":["import folium\n","print(folium.__version__)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"DrXLkJC2QJdP","colab_type":"text"},"source":["# Define variables\n","\n","This set of global variables will be used throughout. For this demo, you must have a Cloud Storage bucket into which you can write files ([learn more about creating Cloud Storage buckets](https://cloud.google.com/storage/docs/creating-buckets)). You'll also need to specify your Earth Engine username, i.e. `users/USER_NAME` on the [Code Editor](https://code.earthengine.google.com/) Assets tab."]},{"cell_type":"code","metadata":{"id":"GHTOc5YLQZ5B","colab_type":"code","colab":{}},"source":["# Your Earth Engine username. This is used to import a classified image\n","# into your Earth Engine assets folder.\n","USER_NAME = 'username'\n","\n","# Cloud Storage bucket into which training, testing and prediction\n","# datasets will be written. You must be able to write into this bucket.\n","OUTPUT_BUCKET = 'your-bucket'\n","\n","# Use Landsat 8 surface reflectance data for predictors.\n","L8SR = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')\n","# Use these bands for prediction.\n","BANDS = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']\n","\n","# This is a training/testing dataset of points with known land cover labels.\n","LABEL_DATA = ee.FeatureCollection('projects/google/demo_landcover_labels')\n","# The labels, consecutive integer indices starting from zero, are stored in\n","# this property, set on each point.\n","LABEL = 'landcover'\n","# Number of label values, i.e. 
number of classes in the classification.\n","N_CLASSES = 3\n","\n","# These names are used to specify properties in the export of\n","# training/testing data and to define the mapping between names and data\n","# when reading into TensorFlow datasets.\n","FEATURE_NAMES = list(BANDS)\n","FEATURE_NAMES.append(LABEL)\n","\n","# File names for the training and testing datasets. These TFRecord files\n","# will be exported from Earth Engine into the Cloud Storage bucket.\n","TRAIN_FILE_PREFIX = 'Training_demo'\n","TEST_FILE_PREFIX = 'Testing_demo'\n","FILE_EXTENSION = '.tfrecord.gz'\n","TRAIN_FILE_PATH = 'gs://' + OUTPUT_BUCKET + '/' + TRAIN_FILE_PREFIX + FILE_EXTENSION\n","TEST_FILE_PATH = 'gs://' + OUTPUT_BUCKET + '/' + TEST_FILE_PREFIX + FILE_EXTENSION\n","\n","# File name for the prediction (image) dataset. The trained model will read\n","# this dataset and make predictions in each pixel.\n","IMAGE_FILE_PREFIX = 'Image_pixel_demo_'\n","\n","# The output path for the classified image (i.e. predictions) TFRecord file.\n","OUTPUT_IMAGE_FILE = 'gs://' + OUTPUT_BUCKET + '/Classified_pixel_demo.TFRecord'\n","# Export imagery in this region.\n","EXPORT_REGION = ee.Geometry.Rectangle([-122.7, 37.3, -121.8, 38.00])\n","# The name of the Earth Engine asset to be created by importing\n","# the classified image from the TFRecord file in Cloud Storage.\n","OUTPUT_ASSET_ID = 'users/' + USER_NAME + '/Classified_pixel_demo'"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"ZcjQnHH8zT4q","colab_type":"text"},"source":["# Get training and testing data from Earth Engine\n","\n","To get data for a classification model of three classes (bare, vegetation, water), we need labels and the value of predictor variables for each labeled example. We've already generated some labels in Earth Engine. Specifically, these are visually interpreted points labeled \"bare,\" \"vegetation,\" or \"water\" for a very simple classification demo ([example script](https://code.earthengine.google.com/?scriptPath=Examples%3ADemos%2FClassification)). For predictor variables, we'll use [Landsat 8 surface reflectance imagery](https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LC08_C01_T1_SR), bands 2-7."]},
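{"cell_type":"markdown","metadata":{"id":"xLabelPeekMd","colab_type":"text"},"source":["As a quick, optional check, you can inspect one of the labeled points before going further. This is a minimal sketch: it just fetches the first feature in `LABEL_DATA` and prints its `landcover` property along with its geometry."]},{"cell_type":"code","metadata":{"id":"xLabelPeekCode","colab_type":"code","colab":{}},"source":["from pprint import pprint\n","\n","# Peek at one labeled point. The 'landcover' property should be an integer\n","# class index (0, 1 or 2 for bare, vegetation and water).\n","pprint(LABEL_DATA.first().getInfo())"],"execution_count":0,"outputs":[]},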
{"cell_type":"markdown","metadata":{"id":"0EJfjgelSOpN","colab_type":"text"},"source":["## Prepare Landsat 8 imagery\n","\n","First, make a cloud-masked median composite of Landsat 8 surface reflectance imagery from 2018. Check the composite by visualizing it with folium."]},{"cell_type":"code","metadata":{"id":"DJYucYe3SPPr","colab_type":"code","colab":{}},"source":["# Cloud masking function.\n","def maskL8sr(image):\n","  cloudShadowBitMask = ee.Number(2).pow(3).int()\n","  cloudsBitMask = ee.Number(2).pow(5).int()\n","  qa = image.select('pixel_qa')\n","  mask = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And(\n","      qa.bitwiseAnd(cloudsBitMask).eq(0))\n","  return image.updateMask(mask).select(BANDS).divide(10000)\n","\n","# The image input data is a 2018 cloud-masked median composite.\n","image = L8SR.filterDate('2018-01-01', '2018-12-31').map(maskL8sr).median()\n","\n","# Use folium to visualize the imagery.\n","mapid = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3})\n","map = folium.Map(location=[38., -122.5])\n","\n","folium.TileLayer(\n","    tiles=mapid['tile_fetcher'].url_format,\n","    attr='Map Data &copy; <a href=\"https://earthengine.google.com/\">Google Earth Engine</a>',\n","    overlay=True,\n","    name='median composite',\n","  ).add_to(map)\n","map.add_child(folium.LayerControl())\n","map"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"UEeyPf3zSPct","colab_type":"text"},"source":["## Add pixel values of the composite to labeled points\n","\n","Some training labels have already been collected for you. Load the labeled points from an existing Earth Engine asset. Each point in this table has a property called `landcover` that stores the label, encoded as an integer. Here we overlay the points on imagery to get predictor variables along with labels."]},{"cell_type":"code","metadata":{"id":"iOedOKyRExHE","colab_type":"code","colab":{}},"source":["# Sample the image at the points and add a random column.\n","sample = image.sampleRegions(\n","    collection=LABEL_DATA, properties=[LABEL], scale=30).randomColumn()\n","\n","# Partition the sample approximately 70-30.\n","training = sample.filter(ee.Filter.lt('random', 0.7))\n","testing = sample.filter(ee.Filter.gte('random', 0.7))\n","\n","from pprint import pprint\n","\n","# Print the first couple of points to verify.\n","pprint({'training': training.first().getInfo()})\n","pprint({'testing': testing.first().getInfo()})"],"execution_count":0,"outputs":[]},
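{"cell_type":"markdown","metadata":{"id":"xSplitCheckMd","colab_type":"text"},"source":["Optionally, sanity-check the 70-30 split by counting the features in each partition. This is a minimal sketch; `size().getInfo()` triggers server-side computation, so it may take a few seconds."]},{"cell_type":"code","metadata":{"id":"xSplitCheckCode","colab_type":"code","colab":{}},"source":["# Count the points in each partition to verify the approximate 70-30 split.\n","print('Training size:', training.size().getInfo())\n","print('Testing size:', testing.size().getInfo())"],"execution_count":0,"outputs":[]},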
{"cell_type":"markdown","metadata":{"id":"uNc7a2nRR4MI","colab_type":"text"},"source":["## Export the training and testing data\n","\n","Now that there's training and testing data in Earth Engine and you've inspected a couple of examples to ensure that the information you need is present, it's time to materialize the datasets in a place where the TensorFlow model has access to them. You can do that by exporting the training and testing datasets to tables in TFRecord format ([learn more about TFRecord format](https://www.tensorflow.org/tutorials/load_data/tf-records)) in your Cloud Storage bucket."]},{"cell_type":"code","metadata":{"id":"Pb-aPvQc0Xvp","colab_type":"code","colab":{}},"source":["# Make sure you can see the output bucket. You must have write access.\n","print('Found Cloud Storage bucket.' if tf.io.gfile.exists('gs://' + OUTPUT_BUCKET)\n","      else 'Cannot find output Cloud Storage bucket.')"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Wtoqj0Db1TmJ","colab_type":"text"},"source":["Once you've verified the existence of the intended output bucket, run the exports."]},{"cell_type":"code","metadata":{"id":"TfVNQzg8R6Wy","colab_type":"code","colab":{}},"source":["# Create the tasks.\n","training_task = ee.batch.Export.table.toCloudStorage(\n","    collection=training,\n","    description='Training Export',\n","    fileNamePrefix=TRAIN_FILE_PREFIX,\n","    bucket=OUTPUT_BUCKET,\n","    fileFormat='TFRecord',\n","    selectors=FEATURE_NAMES)\n","\n","testing_task = ee.batch.Export.table.toCloudStorage(\n","    collection=testing,\n","    description='Testing Export',\n","    fileNamePrefix=TEST_FILE_PREFIX,\n","    bucket=OUTPUT_BUCKET,\n","    fileFormat='TFRecord',\n","    selectors=FEATURE_NAMES)"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"id":"QF4WGIekaS2s","colab_type":"code","colab":{}},"source":["# Start the tasks.\n","training_task.start()\n","testing_task.start()"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"q7nFLuySISeC","colab_type":"text"},"source":["### Monitor task progress\n","\n","You can see all your Earth Engine tasks by listing them. Make sure the training and testing tasks are completed before continuing."]},{"cell_type":"code","metadata":{"id":"oEWvS5ekcEq0","colab_type":"code","colab":{}},"source":["# Print all tasks.\n","pprint(ee.batch.Task.list())"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"43-c0JNFI_m6","colab_type":"text"},"source":["### Check existence of the exported files\n","\n","If you've seen the status of the export tasks change to `COMPLETED`, then check for the existence of the files in the output Cloud Storage bucket."]},{"cell_type":"code","metadata":{"id":"YDZfNl6yc0Kj","colab_type":"code","colab":{}},"source":["print('Found training file.' if tf.io.gfile.exists(TRAIN_FILE_PATH)\n","      else 'No training file found.')\n","print('Found testing file.' if tf.io.gfile.exists(TEST_FILE_PATH)\n","      else 'No testing file found.')"],"execution_count":0,"outputs":[]},
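{"cell_type":"markdown","metadata":{"id":"xTablePollMd","colab_type":"text"},"source":["If a file is missing, its export task is probably still running. As an alternative to re-running the task list above, you can block until both table exports finish, using the same polling pattern as the image export later in this notebook. A minimal sketch:"]},{"cell_type":"code","metadata":{"id":"xTablePollCode","colab_type":"code","colab":{}},"source":["import time\n","\n","# Poll the two table export tasks until neither is active. The sleep()\n","# avoids making too many requests; this cell blocks until both are done.\n","while training_task.active() or testing_task.active():\n","  print('Polling for table export tasks...')\n","  time.sleep(30)\n","print('Done with table exports.')"],"execution_count":0,"outputs":[]},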
{"cell_type":"markdown","metadata":{"id":"NA8QA8oQVo8V","colab_type":"text"},"source":["## Export the imagery\n","\n","You can also export imagery using TFRecord format. Specifically, export whatever imagery you want to be classified by the trained model into the output Cloud Storage bucket."]},{"cell_type":"code","metadata":{"id":"tVNhJYacVpEw","colab_type":"code","colab":{}},"source":["# Specify patch and file dimensions.\n","image_export_options = {\n","  'patchDimensions': [256, 256],\n","  'maxFileSize': 104857600,\n","  'compressed': True\n","}\n","\n","# Set up the task.\n","image_task = ee.batch.Export.image.toCloudStorage(\n","    image=image,\n","    description='Image Export',\n","    fileNamePrefix=IMAGE_FILE_PREFIX,\n","    bucket=OUTPUT_BUCKET,\n","    scale=30,\n","    fileFormat='TFRecord',\n","    region=EXPORT_REGION.toGeoJSON()['coordinates'],\n","    formatOptions=image_export_options,\n",")"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"id":"6SweCkHDaNE3","colab_type":"code","colab":{}},"source":["# Start the task.\n","image_task.start()"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"JC8C53MRTG_E","colab_type":"text"},"source":["### Monitor task progress"]},{"cell_type":"code","metadata":{"id":"BmPHb779KOXm","colab_type":"code","colab":{}},"source":["# Print all tasks.\n","pprint(ee.batch.Task.list())"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"SrUhA1JKLONj","colab_type":"text"},"source":["It's also possible to monitor an individual task. Here we poll the task until it's done. If you do this, please put a `sleep()` in the loop to avoid making too many requests. Note that this will block until complete (you can always halt the execution of this cell)."]},{"cell_type":"code","metadata":{"id":"rKZeZswloP11","colab_type":"code","colab":{}},"source":["import time\n","\n","while image_task.active():\n","  print('Polling for task (id: {}).'.format(image_task.id))\n","  time.sleep(30)\n","print('Done with image export.')"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"9vWdH_wlZCEk","colab_type":"text"},"source":["# Data preparation and pre-processing\n","\n","Read data from the TFRecord file into a `tf.data.Dataset`. Pre-process the dataset to get it into a suitable format for input to the model."]},{"cell_type":"markdown","metadata":{"id":"LS4jGTrEfz-1","colab_type":"text"},"source":["## Read into a `tf.data.Dataset`\n","\n","Here we are going to read a file in Cloud Storage into a `tf.data.Dataset` ([these TensorFlow docs](https://www.tensorflow.org/guide/data) explain more about reading data into a `Dataset`). Check that you can read examples from the file. The purpose here is to ensure that we can read from the file without an error. The actual content is not necessarily human readable.\n","\n"]},{"cell_type":"code","metadata":{"id":"T3PKyDQW8Vpx","colab_type":"code","cellView":"code","colab":{}},"source":["# Create a dataset from the TFRecord file in Cloud Storage.\n","train_dataset = tf.data.TFRecordDataset(TRAIN_FILE_PATH, compression_type='GZIP')\n","# Print the first record to check.\n","print(iter(train_dataset).next())"],"execution_count":0,"outputs":[]},
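{"cell_type":"markdown","metadata":{"id":"xRawDecodeMd","colab_type":"text"},"source":["Each record is a serialized `tf.train.Example` proto. If you'd like a human-readable view before writing any parsing code, you can decode a single record directly; this is just an optional sketch for inspection."]},{"cell_type":"code","metadata":{"id":"xRawDecodeCode","colab_type":"code","colab":{}},"source":["# Decode the first serialized record into a tf.train.Example proto to see\n","# the named features: one float per band, plus the label (also a float).\n","raw_record = next(iter(train_dataset))\n","print(tf.train.Example.FromString(raw_record.numpy()))"],"execution_count":0,"outputs":[]},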
{"cell_type":"markdown","metadata":{"id":"BrDYm-ibKR6t","colab_type":"text"},"source":["## Define the structure of your data\n","\n","For parsing the exported TFRecord files, `features_dict` is a mapping between feature names (recall that `FEATURE_NAMES` contains the band and label names) and `float32` [`tf.io.FixedLenFeature`](https://www.tensorflow.org/api_docs/python/tf/io/FixedLenFeature) objects. This mapping is necessary for telling TensorFlow how to read data in a TFRecord file into tensors. Specifically, **all numeric data exported from Earth Engine is exported as `float32`**.\n","\n","(Note: *features* in the TensorFlow context (i.e. [`tf.train.Feature`](https://www.tensorflow.org/api_docs/python/tf/train/Feature)) are not to be confused with Earth Engine features (i.e. [`ee.Feature`](https://developers.google.com/earth-engine/api_docs#eefeature)), where the former is a protocol message type for serialized data input to the model and the latter is a geometry-based geographic data structure.)"]},{"cell_type":"code","metadata":{"id":"-6JVQV5HKHMZ","colab_type":"code","cellView":"code","colab":{}},"source":["# List of fixed-length features, all of which are float32.\n","columns = [\n","  tf.io.FixedLenFeature(shape=[1], dtype=tf.float32) for _ in FEATURE_NAMES\n","]\n","\n","# Dictionary with names as keys, features as values.\n","features_dict = dict(zip(FEATURE_NAMES, columns))\n","\n","pprint(features_dict)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"QNfaUPbcjuCO","colab_type":"text"},"source":["## Parse the dataset\n","\n","Now we need to make a parsing function for the data in the TFRecord files. The data comes in flattened 2D arrays per record, and we want to use the first part of the array for input to the model and the last element of the array as the class label. The parsing function reads data from a serialized [`Example` proto](https://www.tensorflow.org/api_docs/python/tf/train/Example) into a dictionary in which the keys are the feature names and the values are the tensors storing the value of the features for that example ([these TensorFlow docs](https://www.tensorflow.org/tutorials/load_data/tfrecord) explain more about reading `Example` protos from TFRecord files)."]},{"cell_type":"code","metadata":{"id":"x2Q0g3fBj2kD","colab_type":"code","cellView":"code","colab":{}},"source":["def parse_tfrecord(example_proto):\n","  \"\"\"The parsing function.\n","\n","  Read a serialized example into the structure defined by features_dict.\n","\n","  Args:\n","    example_proto: a serialized Example.\n","\n","  Returns:\n","    A tuple of the predictors dictionary and the label, cast to an `int32`.\n","  \"\"\"\n","  parsed_features = tf.io.parse_single_example(example_proto, features_dict)\n","  labels = parsed_features.pop(LABEL)\n","  return parsed_features, tf.cast(labels, tf.int32)\n","\n","# Map the function over the dataset.\n","parsed_dataset = train_dataset.map(parse_tfrecord, num_parallel_calls=5)\n","\n","# Print the first parsed record to check.\n","pprint(iter(parsed_dataset).next())"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Nb8EyNT4Xnhb","colab_type":"text"},"source":["Note that each record of the parsed dataset contains a tuple. The first element of the tuple is a dictionary with bands for keys and the numeric value of the bands for values. The second element of the tuple is a class label."]},{"cell_type":"markdown","metadata":{"id":"xLCsxWOuEBmE","colab_type":"text"},"source":["## Create additional features\n","\n","Another thing we might want to do as part of the input process is to create new features, for example NDVI, a vegetation index computed from reflectance in two spectral bands. Here are some helper functions for that."]},
{"cell_type":"code","metadata":{"id":"lT6v2RM_EB1E","colab_type":"code","cellView":"code","colab":{}},"source":["def normalized_difference(a, b):\n","  \"\"\"Compute normalized difference of two inputs.\n","\n","  Compute (a - b) / (a + b). If the denominator is zero, add a small delta.\n","\n","  Args:\n","    a: an input tensor with shape=[1]\n","    b: an input tensor with shape=[1]\n","\n","  Returns:\n","    The normalized difference as a tensor.\n","  \"\"\"\n","  nd = (a - b) / (a + b)\n","  nd_inf = (a - b) / (a + b + 0.000001)\n","  return tf.where(tf.math.is_finite(nd), nd, nd_inf)\n","\n","def add_NDVI(features, label):\n","  \"\"\"Add NDVI to the dataset.\n","\n","  Args:\n","    features: a dictionary of input tensors keyed by feature name.\n","    label: the target label.\n","\n","  Returns:\n","    A tuple of the input dictionary with an NDVI tensor added and the label.\n","  \"\"\"\n","  features['NDVI'] = normalized_difference(features['B5'], features['B4'])\n","  return features, label"],"execution_count":0,"outputs":[]},
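{"cell_type":"markdown","metadata":{"id":"xNdviCheckMd","colab_type":"text"},"source":["As a quick check, map `add_NDVI` over the parsed dataset and inspect one record; the predictors dictionary should now contain an `NDVI` key alongside the bands. This is just a sketch for verification; the mapping is applied again when the model input pipeline is built below."]},{"cell_type":"code","metadata":{"id":"xNdviCheckCode","colab_type":"code","colab":{}},"source":["# Verify that add_NDVI adds an 'NDVI' tensor to each record.\n","pprint(iter(parsed_dataset.map(add_NDVI)).next())"],"execution_count":0,"outputs":[]},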
{"cell_type":"markdown","metadata":{"id":"nEx1RAXOZQkS","colab_type":"text"},"source":["# Model setup\n","\n","The basic workflow for classification in TensorFlow is:\n","\n","1. Create the model.\n","2. Train the model (i.e. `fit()`).\n","3. Use the trained model for inference (i.e. `predict()`).\n","\n","Here we'll create a `Sequential` neural network model using Keras. This simple model is inspired by examples in:\n","\n","* [The TensorFlow Get Started tutorial](https://www.tensorflow.org/tutorials/)\n","* [The TensorFlow Keras guide](https://www.tensorflow.org/guide/keras#build_a_simple_model)\n","* [The Keras `Sequential` model examples](https://keras.io/getting-started/sequential-model-guide/#multilayer-perceptron-mlp-for-multi-class-softmax-classification)\n","\n","Note that the model used here is purely for demonstration purposes and hasn't gone through any performance tuning."]},{"cell_type":"markdown","metadata":{"id":"t9pWa54oG-xl","colab_type":"text"},"source":["## Create the Keras model\n","\n","Before we create the model, there's still a wee bit of pre-processing to get the data into the right input shape and a format that can be used with cross-entropy loss. Specifically, Keras expects a list of inputs and a one-hot vector for the class (see [the Keras loss function docs](https://keras.io/losses/), [the TensorFlow categorical identity docs](https://www.tensorflow.org/guide/feature_columns#categorical_identity_column) and [the `tf.one_hot` docs](https://www.tensorflow.org/api_docs/python/tf/one_hot) for details).\n","\n","Here we will use a simple neural network model with a 64-node hidden layer, a dropout layer and an output layer. Once the dataset has been prepared, define the model, compile it, and fit it to the training data. See [the Keras `Sequential` model guide](https://keras.io/getting-started/sequential-model-guide/) for more details."]},{"cell_type":"code","metadata":{"id":"OCZq3VNpG--G","colab_type":"code","cellView":"code","colab":{}},"source":["# Add NDVI.\n","input_dataset = parsed_dataset.map(add_NDVI)\n","\n","# Keras requires inputs as a tuple. Note that the inputs must be in the\n","# right shape. Also note that to use the categorical_crossentropy loss,\n","# the label needs to be turned into a one-hot vector.\n","def to_tuple(inputs, label):\n","  return (tf.transpose(list(inputs.values())),\n","          tf.one_hot(indices=label, depth=N_CLASSES))\n","\n","# Map the to_tuple function and batch.\n","input_dataset = input_dataset.map(to_tuple).batch(8)\n","\n","# Define the layers in the model.\n","model = tf.keras.models.Sequential([\n","  tf.keras.layers.Dense(64, activation=tf.nn.relu),\n","  tf.keras.layers.Dropout(0.2),\n","  tf.keras.layers.Dense(N_CLASSES, activation=tf.nn.softmax)\n","])\n","\n","# Compile the model with the specified loss function.\n","model.compile(optimizer=tf.keras.optimizers.Adam(),\n","              loss='categorical_crossentropy',\n","              metrics=['accuracy'])\n","\n","# Fit the model to the training data.\n","model.fit(x=input_dataset, epochs=10)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Pa4ex_4eKiyb","colab_type":"text"},"source":["## Check model accuracy on the test set\n","\n","Now that we have a trained model, we can evaluate it using the test dataset. To do that, read and prepare the test dataset in the same way as the training dataset. Here we specify a batch size of 1 so that each example in the test set is used exactly once to compute model accuracy. Evaluation runs until the test dataset is exhausted, so there is no need to specify a step count."]},{"cell_type":"code","metadata":{"id":"tE6d7FsrMa1p","colab_type":"code","cellView":"code","colab":{}},"source":["test_dataset = (\n","  tf.data.TFRecordDataset(TEST_FILE_PATH, compression_type='GZIP')\n","    .map(parse_tfrecord, num_parallel_calls=5)\n","    .map(add_NDVI)\n","    .map(to_tuple)\n","    .batch(1))\n","\n","model.evaluate(test_dataset)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"nhHrnv3VR0DU","colab_type":"text"},"source":["# Use the trained model to classify an image from Earth Engine\n","\n","Now it's time to classify the image that was exported from Earth Engine. If the exported image is large, it will be split into multiple TFRecord files in its destination folder. There will also be a JSON sidecar file called \"the mixer\" that describes the format and georeferencing of the image. Here we will find the image files and the mixer file, getting some info out of the mixer that will be useful during model inference."]},{"cell_type":"markdown","metadata":{"id":"nmTayDitZgQ5","colab_type":"text"},"source":["## Find the image files and JSON mixer file in Cloud Storage\n","\n","Use `gsutil` to locate the files of interest in the output Cloud Storage bucket. 
Check to make sure your image export task finished before running the following."]},{"cell_type":"code","metadata":{"id":"oUv9WMpcVp8E","colab_type":"code","colab":{}},"source":["# Get a list of all the files in the output bucket.\n","files_list = !gsutil ls 'gs://'{OUTPUT_BUCKET}\n","# Get only the files generated by the image export.\n","exported_files_list = [s for s in files_list if IMAGE_FILE_PREFIX in s]\n","\n","# Get the list of image files and the JSON mixer file.\n","image_files_list = []\n","json_file = None\n","for f in exported_files_list:\n","  if f.endswith('.tfrecord.gz'):\n","    image_files_list.append(f)\n","  elif f.endswith('.json'):\n","    json_file = f\n","\n","# Make sure the files are in the right order.\n","image_files_list.sort()\n","\n","pprint(image_files_list)\n","print(json_file)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"RcjYG9fk53xL","colab_type":"text"},"source":["## Read the JSON mixer file\n","\n","The mixer contains metadata and georeferencing information for the exported patches, each of which is in a different file. Read the mixer to get some information needed for prediction."]},{"cell_type":"code","metadata":{"id":"Gn7Dr0AAd93_","colab_type":"code","colab":{}},"source":["import json\n","\n","# Load the contents of the mixer file to a JSON object.\n","json_text = !gsutil cat {json_file}\n","# Get a single string with newlines from the IPython.utils.text.SList.\n","mixer = json.loads(json_text.nlstr)\n","pprint(mixer)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"6xyzyPPJwpVI","colab_type":"text"},"source":["## Read the image files into a dataset\n","\n","You can feed the list of files (`image_files_list`) directly to the `TFRecordDataset` constructor to make a combined dataset on which to perform inference. The input needs to be preprocessed differently from the training and testing data. 
Mainly, this is because the pixels are written into records as patches; we need to read each patch in as one big tensor (one patch for each band), then flatten them into lots of little tensors."]},{"cell_type":"code","metadata":{"id":"tn8Kj3VfwpiJ","colab_type":"code","cellView":"code","colab":{}},"source":["# Get relevant info from the JSON mixer file.\n","patch_width = mixer['patchDimensions'][0]\n","patch_height = mixer['patchDimensions'][1]\n","patches = mixer['totalPatches']\n","patch_dimensions_flat = [patch_width * patch_height, 1]\n","\n","# Note that the tensors are in the shape of a patch, one patch for each band.\n","image_columns = [\n","  tf.io.FixedLenFeature(shape=patch_dimensions_flat, dtype=tf.float32)\n","  for _ in BANDS\n","]\n","\n","# Parsing dictionary.\n","image_features_dict = dict(zip(BANDS, image_columns))\n","\n","# Note that you can make one dataset from many files by specifying a list.\n","image_dataset = tf.data.TFRecordDataset(image_files_list, compression_type='GZIP')\n","\n","# Parsing function.\n","def parse_image(example_proto):\n","  return tf.io.parse_single_example(example_proto, image_features_dict)\n","\n","# Parse the data into tensors, one long tensor per patch.\n","image_dataset = image_dataset.map(parse_image, num_parallel_calls=5)\n","\n","# Break our long tensors into many little ones.\n","image_dataset = image_dataset.flat_map(\n","    lambda features: tf.data.Dataset.from_tensor_slices(features)\n",")\n","\n","# Add additional features (NDVI).\n","image_dataset = image_dataset.map(\n","    # Add NDVI to a feature that doesn't have a label.\n","    lambda features: add_NDVI(features, None)[0]\n",")\n","\n","# Turn the dictionary in each record into a tuple without a label.\n","image_dataset = image_dataset.map(\n","    lambda data_dict: (tf.transpose(list(data_dict.values())), )\n",")\n","\n","# Turn each patch into a batch.\n","image_dataset = image_dataset.batch(patch_width * patch_height)"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"_2sfRemRRDkV","colab_type":"text"},"source":["## Generate predictions for the image pixels\n","\n","To get predictions in each pixel, run the image dataset through the trained model using `model.predict()`. Print the first prediction to see that the output is a list of the three class probabilities for each pixel. Running all predictions might take a while."]},{"cell_type":"code","metadata":{"id":"8VGhmiP_REBP","colab_type":"code","colab":{}},"source":["# Run prediction in batches, with as many steps as there are patches.\n","predictions = model.predict(image_dataset, steps=patches, verbose=1)\n","\n","# Note that the predictions come as a numpy array. Check the first one.\n","print(predictions[0])"],"execution_count":0,"outputs":[]},
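{"cell_type":"markdown","metadata":{"id":"xPatchVizMd","colab_type":"text"},"source":["Before writing the output, you can optionally sanity-check the first patch by reshaping its class predictions into a 2D array and plotting it. This is a minimal sketch that assumes the pixels of each patch are stored in row-major order; numpy and matplotlib are available in Colab."]},{"cell_type":"code","metadata":{"id":"xPatchVizCode","colab_type":"code","colab":{}},"source":["import matplotlib.pyplot as plt\n","import numpy as np\n","\n","# Take the class with the highest probability in each pixel of the first\n","# patch and reshape the result back into a patch_height x patch_width grid.\n","first_patch = np.argmax(predictions[:patch_width * patch_height], axis=-1)\n","plt.imshow(first_patch.reshape(patch_height, patch_width))\n","plt.title('Predicted classes, first patch')\n","plt.show()"],"execution_count":0,"outputs":[]},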
{"cell_type":"markdown","metadata":{"id":"bPU2VlPOikAy","colab_type":"text"},"source":["## Write the predictions to a TFRecord file\n","\n","Now that there's a list of class probabilities in `predictions`, it's time to write them back into a file, optionally including a class label which is simply the index of the maximum probability. We'll write directly from TensorFlow to a file in the output Cloud Storage bucket.\n","\n","Iterate over the list, compute the class label, and write the class and the probabilities in patches. Specifically, we need to write the pixels into the file as patches in the same order they came out. The records are written as serialized `tf.train.Example` protos. This might take a while."]},{"cell_type":"code","metadata":{"id":"AkorbsEHepzJ","colab_type":"code","colab":{}},"source":["print('Writing to file ' + OUTPUT_IMAGE_FILE)"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"id":"kATMknHc0qeR","colab_type":"code","cellView":"code","colab":{}},"source":["# Instantiate the writer.\n","writer = tf.io.TFRecordWriter(OUTPUT_IMAGE_FILE)\n","\n","# For every patch-worth of predictions we'll dump an example into the output\n","# file with a single feature that holds our predictions. Since the predictions\n","# are already in the order of the exported data, the patches we create here\n","# will also be in the right order.\n","patch = [[], [], [], []]\n","cur_patch = 1\n","for prediction in predictions:\n","  patch[0].append(tf.argmax(prediction, 1))\n","  patch[1].append(prediction[0][0])\n","  patch[2].append(prediction[0][1])\n","  patch[3].append(prediction[0][2])\n","  # Once we've seen a patch-worth of class ids...\n","  if (len(patch[0]) == patch_width * patch_height):\n","    print('Done with patch ' + str(cur_patch) + ' of ' + str(patches) + '...')\n","    # Create an example.\n","    example = tf.train.Example(\n","      features=tf.train.Features(\n","        feature={\n","          'prediction': tf.train.Feature(\n","              int64_list=tf.train.Int64List(\n","                  value=patch[0])),\n","          'bareProb': tf.train.Feature(\n","              float_list=tf.train.FloatList(\n","                  value=patch[1])),\n","          'vegProb': tf.train.Feature(\n","              float_list=tf.train.FloatList(\n","                  value=patch[2])),\n","          'waterProb': tf.train.Feature(\n","              float_list=tf.train.FloatList(\n","                  value=patch[3])),\n","        }\n","      )\n","    )\n","    # Write the example to the file and clear the patch array so it's ready\n","    # for another batch of class ids.\n","    writer.write(example.SerializeToString())\n","    patch = [[], [], [], []]\n","    cur_patch += 1\n","\n","writer.close()"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"1K_1hKs0aBdA","colab_type":"text"},"source":["# Upload the classifications to an Earth Engine asset"]},{"cell_type":"markdown","metadata":{"id":"M6sNZXWOSa82","colab_type":"text"},"source":["## Verify the existence of the predictions file\n","\n","At this stage, there should be a predictions TFRecord file sitting in the output Cloud Storage bucket. Use the `gsutil` command to verify that the predictions file exists and has non-zero size."]},{"cell_type":"code","metadata":{"id":"6ZVWDPefUCgA","colab_type":"code","colab":{}},"source":["!gsutil ls -l {OUTPUT_IMAGE_FILE}"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"2ZyCo297Clcx","colab_type":"text"},"source":["## Upload the classified image to Earth Engine\n","\n","Upload the image to Earth Engine directly from the Cloud Storage bucket with the [`earthengine` command](https://developers.google.com/earth-engine/command_line#upload). 
Provide both the image TFRecord file and the JSON file as arguments to `earthengine upload`."]},{"cell_type":"code","metadata":{"id":"NXulMNl9lTDv","colab_type":"code","cellView":"code","colab":{}},"source":["print('Uploading to ' + OUTPUT_ASSET_ID)"],"execution_count":0,"outputs":[]},{"cell_type":"code","metadata":{"id":"V64tcVxsO5h6","colab_type":"code","colab":{}},"source":["# Start the upload.\n","!earthengine upload image --asset_id={OUTPUT_ASSET_ID} --pyramiding_policy=mode {OUTPUT_IMAGE_FILE} {json_file}"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"Yt4HyhUU_Bal","colab_type":"text"},"source":["## Check the status of the asset ingestion\n","\n","You can also use the Earth Engine API to check the status of your asset upload. It might take a while. The upload of the image is an asset ingestion task."]},{"cell_type":"code","metadata":{"id":"_vB-gwGhl_3C","colab_type":"code","cellView":"code","colab":{}},"source":["ee.batch.Task.list()"],"execution_count":0,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"vvXvy9GDhM-p","colab_type":"text"},"source":["## View the ingested asset\n","\n","Display the vector of class probabilities as an RGB image with colors corresponding to the probability of bare, vegetation, water in a pixel. Also display the winning class using the same color palette."]},{"cell_type":"code","metadata":{"id":"kEkVxIyJiFd4","colab_type":"code","colab":{}},"source":["predictions_image = ee.Image(OUTPUT_ASSET_ID)\n","\n","prediction_vis = {\n"," 'bands': 'prediction',\n"," 'min': 0,\n"," 'max': 2,\n"," 'palette': ['red', 'green', 'blue']\n","}\n","probability_vis = {'bands': ['bareProb', 'vegProb', 'waterProb'], 'max': 0.5}\n","\n","prediction_map_id = predictions_image.getMapId(prediction_vis)\n","probability_map_id = predictions_image.getMapId(probability_vis)\n","\n","map = folium.Map(location=[37.6413, -122.2582])\n","folium.TileLayer(\n"," tiles=prediction_map_id['tile_fetcher'].url_format,\n"," attr='Map Data &copy; <a href=\"https://earthengine.google.com/\">Google Earth Engine</a>',\n"," overlay=True,\n"," name='prediction',\n",").add_to(map)\n","folium.TileLayer(\n"," tiles=probability_map_id['tile_fetcher'].url_format,\n"," attr='Map Data &copy; <a href=\"https://earthengine.google.com/\">Google Earth Engine</a>',\n"," overlay=True,\n"," name='probability',\n",").add_to(map)\n","map.add_child(folium.LayerControl())\n","map"],"execution_count":0,"outputs":[]}]}