{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "accelerator": "GPU", "colab": { "name": "AI_platform_demo.ipynb", "provenance": [], "private_outputs": true, "collapsed_sections": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "cells": [ { "cell_type": "code", "metadata": { "colab_type": "code", "id": "esIMGVxhDI0f", "colab": {} }, "source": [ "#@title Copyright 2019 Google LLC. { display-mode: \"form\" }\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "aV1xZ1CPi3Nw", "colab_type": "text" }, "source": [ "
\n", "\n", " Run in Google Colab\n", "\n", " View source on GitHub
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "_SHAc5qbiR8l" }, "source": [ "# Introduction\n", "\n", "This is a demonstration notebook. Suppose you have developed a model the training of which is constrained by the resources available to the notbook VM. In that case, you may want to use the [Google AI Platform](https://cloud.google.com/ml-engine/docs/tensorflow/) to train your model. The advantage of that is that long-running or resource intensive training jobs can be performed in the background. Also, to use your trained model in Earth Engine, it needs to be [deployed as a hosted model](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) on AI Platform. This notebook uses previously created training data (see [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb)) and AI Platform to train a model, deploy it and use it to make predictions in Earth Engine. To do that, code [needs to be structured as a python package](https://cloud.google.com/ml-engine/docs/tensorflow/packaging-trainer) that can be uploaded to AI Platform. The following cells produce that package programatically." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "_MJ4kW1pEhwP" }, "source": [ "# Setup software libraries\n", "\n", "Install needed libraries to the notebook VM. Authenticate as necessary." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "neIa46CpciXq", "colab": {} }, "source": [ "# Cloud authentication.\n", "from google.colab import auth\n", "auth.authenticate_user()" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "jat01FEoUMqg", "colab": {} }, "source": [ "# Import and initialize the Earth Engine library.\n", "import ee\n", "ee.Authenticate()\n", "ee.Initialize()" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "8RnZzcYhcpsQ", "colab": {} }, "source": [ "# Tensorflow setup.\n", "import tensorflow as tf\n", "\n", "tf.enable_eager_execution()\n", "print(tf.__version__)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "n1hFdpBQfyhN", "colab": {} }, "source": [ "# Folium setup.\n", "import folium\n", "print(folium.__version__)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "HO1apj_B4c2R" }, "source": [ "# Training code package setup\n", "\n", "It's necessary to create a Python package to hold the training code. Here we're going to get started with that by creating a folder for the package and adding an empty `__init__.py` file." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "WcO4hFne4yQ4", "colab": {} }, "source": [ "PACKAGE_PATH = 'ai_platform_demo'\n", "\n", "!ls -l\n", "!mkdir {PACKAGE_PATH}\n", "!touch {PACKAGE_PATH}/__init__.py\n", "!ls -l {PACKAGE_PATH}" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "iT8ycmzClYwf" }, "source": [ "## Variables\n", "\n", "These variables need to be stored in a place where other code can access them. There are a variety of ways of accomplishing that, but here we'll use the `%%writefile` command to write the contents of the code cell to a file called `config.py`.\n", "\n", "**Note:** You need to insert the name of a bucket (below) to which you have write access!" 
] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "psz7wJKalaoj", "colab": {} }, "source": [ "%%writefile {PACKAGE_PATH}/config.py\n", "\n", "import tensorflow as tf\n", "\n", "# INSERT YOUR BUCKET HERE!\n", "BUCKET = 'your-bucket-name'\n", "\n", "# Specify names of output locations in Cloud Storage.\n", "FOLDER = 'fcnn-demo'\n", "JOB_DIR = 'gs://' + BUCKET + '/' + FOLDER + '/trainer'\n", "MODEL_DIR = JOB_DIR + '/model'\n", "LOGS_DIR = JOB_DIR + '/logs'\n", "\n", "# Pre-computed training and eval data.\n", "DATA_BUCKET = 'ee-docs-demos'\n", "TRAINING_BASE = 'training_patches'\n", "EVAL_BASE = 'eval_patches'\n", "\n", "# Specify inputs (Landsat bands) to the model and the response variable.\n", "opticalBands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7']\n", "thermalBands = ['B10', 'B11']\n", "BANDS = opticalBands + thermalBands\n", "RESPONSE = 'impervious'\n", "FEATURES = BANDS + [RESPONSE]\n", "\n", "# Specify the size and shape of patches expected by the model.\n", "KERNEL_SIZE = 256\n", "KERNEL_SHAPE = [KERNEL_SIZE, KERNEL_SIZE]\n", "COLUMNS = [\n", " tf.io.FixedLenFeature(shape=KERNEL_SHAPE, dtype=tf.float32) for k in FEATURES\n", "]\n", "FEATURES_DICT = dict(zip(FEATURES, COLUMNS))\n", "\n", "# Sizes of the training and evaluation datasets.\n", "TRAIN_SIZE = 16000\n", "EVAL_SIZE = 8000\n", "\n", "# Specify model training parameters.\n", "BATCH_SIZE = 16\n", "EPOCHS = 50\n", "BUFFER_SIZE = 3000\n", "OPTIMIZER = 'SGD'\n", "LOSS = 'MeanSquaredError'\n", "METRICS = ['RootMeanSquaredError']" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "0feVjClV6dxz" }, "source": [ "Verify that the written file has the expected contents and is working as intended." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "6_BEz5Zn6LvT", "colab": {} }, "source": [ "!cat {PACKAGE_PATH}/config.py\n", "\n", "from ai_platform_demo import config\n", "print('\\n\\n', config.BATCH_SIZE)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "hgoDc7Hilfc4" }, "source": [ "## Training data, evaluation data and model\n", "\n", "The following is code to load training/evaluation data and the model. Write this into `model.py`. Note that these functions are developed and explained in [this example notebook](https://colab.sandbox.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/UNET_regression_demo.ipynb)." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "beiasALl-GPo", "colab": {} }, "source": [ "%%writefile {PACKAGE_PATH}/model.py\n", "\n", "from . 
import config\n", "import tensorflow as tf\n", "from tensorflow.python.keras import layers\n", "from tensorflow.python.keras import losses\n", "from tensorflow.python.keras import metrics\n", "from tensorflow.python.keras import models\n", "from tensorflow.python.keras import optimizers\n", "\n", "# Dataset loading functions\n", "\n", "def parse_tfrecord(example_proto):\n", " return tf.io.parse_single_example(example_proto, config.FEATURES_DICT)\n", "\n", "def to_tuple(inputs):\n", " inputsList = [inputs.get(key) for key in config.FEATURES]\n", " stacked = tf.stack(inputsList, axis=0)\n", " stacked = tf.transpose(stacked, [1, 2, 0])\n", " return stacked[:,:,:len(config.BANDS)], stacked[:,:,len(config.BANDS):]\n", "\n", "def get_dataset(pattern):\n", "\tglob = tf.io.gfile.glob(pattern)\n", "\tdataset = tf.data.TFRecordDataset(glob, compression_type='GZIP')\n", "\tdataset = dataset.map(parse_tfrecord)\n", "\tdataset = dataset.map(to_tuple)\n", "\treturn dataset\n", "\n", "def get_training_dataset():\n", "\tglob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.TRAINING_BASE + '*'\n", "\tdataset = get_dataset(glob)\n", "\tdataset = dataset.shuffle(config.BUFFER_SIZE).batch(config.BATCH_SIZE).repeat()\n", "\treturn dataset\n", "\n", "def get_eval_dataset():\n", "\tglob = 'gs://' + config.DATA_BUCKET + '/' + config.FOLDER + '/' + config.EVAL_BASE + '*'\n", "\tdataset = get_dataset(glob)\n", "\tdataset = dataset.batch(1).repeat()\n", "\treturn dataset\n", "\n", "# A variant of the UNET model.\n", "\n", "def conv_block(input_tensor, num_filters):\n", "\tencoder = layers.Conv2D(num_filters, (3, 3), padding='same')(input_tensor)\n", "\tencoder = layers.BatchNormalization()(encoder)\n", "\tencoder = layers.Activation('relu')(encoder)\n", "\tencoder = layers.Conv2D(num_filters, (3, 3), padding='same')(encoder)\n", "\tencoder = layers.BatchNormalization()(encoder)\n", "\tencoder = layers.Activation('relu')(encoder)\n", "\treturn encoder\n", "\n", "def encoder_block(input_tensor, num_filters):\n", "\tencoder = conv_block(input_tensor, num_filters)\n", "\tencoder_pool = layers.MaxPooling2D((2, 2), strides=(2, 2))(encoder)\n", "\treturn encoder_pool, encoder\n", "\n", "def decoder_block(input_tensor, concat_tensor, num_filters):\n", "\tdecoder = layers.Conv2DTranspose(num_filters, (2, 2), strides=(2, 2), padding='same')(input_tensor)\n", "\tdecoder = layers.concatenate([concat_tensor, decoder], axis=-1)\n", "\tdecoder = layers.BatchNormalization()(decoder)\n", "\tdecoder = layers.Activation('relu')(decoder)\n", "\tdecoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder)\n", "\tdecoder = layers.BatchNormalization()(decoder)\n", "\tdecoder = layers.Activation('relu')(decoder)\n", "\tdecoder = layers.Conv2D(num_filters, (3, 3), padding='same')(decoder)\n", "\tdecoder = layers.BatchNormalization()(decoder)\n", "\tdecoder = layers.Activation('relu')(decoder)\n", "\treturn decoder\n", "\n", "def get_model():\n", "\tinputs = layers.Input(shape=[None, None, len(config.BANDS)]) # 256\n", "\tencoder0_pool, encoder0 = encoder_block(inputs, 32) # 128\n", "\tencoder1_pool, encoder1 = encoder_block(encoder0_pool, 64) # 64\n", "\tencoder2_pool, encoder2 = encoder_block(encoder1_pool, 128) # 32\n", "\tencoder3_pool, encoder3 = encoder_block(encoder2_pool, 256) # 16\n", "\tencoder4_pool, encoder4 = encoder_block(encoder3_pool, 512) # 8\n", "\tcenter = conv_block(encoder4_pool, 1024) # center\n", "\tdecoder4 = decoder_block(center, encoder4, 512) # 16\n", "\tdecoder3 = 
decoder_block(decoder4, encoder3, 256) # 32\n", "\tdecoder2 = decoder_block(decoder3, encoder2, 128) # 64\n", "\tdecoder1 = decoder_block(decoder2, encoder1, 64) # 128\n", "\tdecoder0 = decoder_block(decoder1, encoder0, 32) # 256\n", "\toutputs = layers.Conv2D(1, (1, 1), activation='sigmoid')(decoder0)\n", "\n", "\tmodel = models.Model(inputs=[inputs], outputs=[outputs])\n", "\n", "\tmodel.compile(\n", "\t\toptimizer=optimizers.get(config.OPTIMIZER), \n", "\t\tloss=losses.get(config.LOSS),\n", "\t\tmetrics=[metrics.get(metric) for metric in config.METRICS])\n", "\n", "\treturn model" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "l0F5czqrABgk" }, "source": [ "Verify that `model.py` is functioning as intended." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "8b0I9BaJ-GXw", "colab": {} }, "source": [ "from ai_platform_demo import model\n", "\n", "eval = model.get_eval_dataset()\n", "print(iter(eval.take(1)).next())\n", "\n", "model = model.get_model()\n", "print(model.summary())" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "8Lul-C8DBXHT" }, "source": [ "## Training task\n", "\n", "At this stage, there should be a `config.py` that stores variables and a `model.py` with code for getting the training/evaluation data and the model. All that's left is code for training the model. The following will create `task.py`, which gets the training and eval data, trains the model, and saves the trained model to a Cloud Storage bucket when training is done." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "aR8GrYZd-Gb2", "colab": {} }, "source": [ "%%writefile {PACKAGE_PATH}/task.py\n", "\n", "from . import config\n", "from . import model\n", "import tensorflow as tf\n", "\n", "if __name__ == '__main__':\n", "\n", " training = model.get_training_dataset()\n", " evaluation = model.get_eval_dataset()\n", "\n", " m = model.get_model()\n", "\n", " m.fit(\n", " x=training,\n", " epochs=config.EPOCHS, \n", " steps_per_epoch=int(config.TRAIN_SIZE / config.BATCH_SIZE), \n", " validation_data=evaluation,\n", " validation_steps=int(config.EVAL_SIZE),\n", " callbacks=[tf.keras.callbacks.TensorBoard(config.LOGS_DIR)])\n", "\n", " tf.contrib.saved_model.save_keras_model(m, config.MODEL_DIR)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "yTYQ8ftjCqgP" }, "source": [ "# Submit the package to AI Platform for training\n", "\n", "Everything is now in place to submit this job, which can be done from the command line. First, define some needed variables.\n", "\n", "**Note:** You need to insert the name of a Cloud project (below) you own!" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "p-PtuGdnEGcv", "colab": {} }, "source": [ "import time\n", "\n", "# INSERT YOUR PROJECT HERE!\n", "PROJECT = 'your-project'\n", "\n", "JOB_NAME = 'demo_training_job_' + str(int(time.time()))\n", "TRAINER_PACKAGE_PATH = 'ai_platform_demo'\n", "MAIN_TRAINER_MODULE = 'ai_platform_demo.task'\n", "REGION = 'us-central1'" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "iVXri8QJETBb" }, "source": [ "Now the training job is ready to be started. First, you need to enable the ML API for your project. This can be done from [this link to the Cloud Console](https://console.developers.google.com/apis/library/ml.googleapis.com). 
See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/training-jobs) for details. Note that the Python and TensorFlow versions should match what is used in the Colab notebook." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "B_sQ1mo6-Gef", "colab": {} }, "source": [ "!gcloud ai-platform jobs submit training {JOB_NAME} \\\n", " --job-dir {config.JOB_DIR} \\\n", " --package-path {TRAINER_PACKAGE_PATH} \\\n", " --module-name {MAIN_TRAINER_MODULE} \\\n", " --region {REGION} \\\n", " --project {PROJECT} \\\n", " --runtime-version 1.14 \\\n", " --python-version 3.5 \\\n", " --scale-tier basic-gpu" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "c6R65k0viIJS" }, "source": [ "## Monitor the training job\n", "\n", "There's not much more to do until the model is finished training (~24 hours), but it's fun and useful to monitor its progress. You can do that programmatically with another `gcloud` command. The output of that command can be read into an `IPython.utils.text.SList`, from which the `state` is extracted and checked for `SUCCEEDED`. Or you can monitor it from the [AI Platform jobs page](http://console.cloud.google.com/ai-platform/jobs) on the Cloud Console." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "1oqR6sCrEGoB", "colab": {} }, "source": [ "desc = !gcloud ai-platform jobs describe {JOB_NAME} --project {PROJECT}\n", "state = desc.grep('state:')[0].split(':')[1].strip()\n", "print(state)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "OFnIrvO0StiO" }, "source": [ "# Inspect the trained model\n", "\n", "Once the training job has finished, verify that you can load the trained model and print a summary of the fitted parameters. It's also useful to examine the logs with [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard). There's a convenient notebook extension that will launch TensorBoard in the Colab notebook. Examine the training and testing learning curves to ensure that the training process has converged." ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "T9GU8Pl-2Y5p", "colab": {} }, "source": [ "%load_ext tensorboard\n", "%tensorboard --logdir {config.LOGS_DIR}" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "EVRifmE2ffvv" }, "source": [ "# Prepare the model for making predictions in Earth Engine\n", "\n", "Before we can use the model in Earth Engine, it needs to be hosted by AI Platform. But before we can host the model on AI Platform, we need to *EEify* (a new word!) it. The EEification process merely appends some extra operations to the inputs and outputs of the model in order to accommodate the interchange format between pixels from Earth Engine (float32) and inputs to AI Platform (base64). (See [this doc](https://cloud.google.com/ml-engine/docs/online-predict#binary_data_in_prediction_input) for details.) \n", "\n", "## `earthengine model prepare`\n", "The EEification process is handled for you using the Earth Engine command `earthengine model prepare`. To use that command, we need to specify the input and output model directories and the names of the input and output nodes in the TensorFlow computation graph. 
We can do all that programmatically:" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "KTzneSE_2WgL", "colab": {} }, "source": [ "from tensorflow.python.tools import saved_model_utils\n", "\n", "meta_graph_def = saved_model_utils.get_meta_graph_def(config.MODEL_DIR, 'serve')\n", "inputs = meta_graph_def.signature_def['serving_default'].inputs\n", "outputs = meta_graph_def.signature_def['serving_default'].outputs\n", "\n", "# Just get the first thing(s) from the serving signature def, i.e. this\n", "# model only has a single input and a single output.\n", "input_name = None\n", "for k,v in inputs.items():\n", " input_name = v.name\n", " break\n", "\n", "output_name = None\n", "for k,v in outputs.items():\n", " output_name = v.name\n", " break\n", "\n", "# Make a dictionary that maps Earth Engine outputs and inputs to \n", "# AI Platform inputs and outputs, respectively.\n", "import json\n", "input_dict = \"'\" + json.dumps({input_name: \"array\"}) + \"'\"\n", "output_dict = \"'\" + json.dumps({output_name: \"impervious\"}) + \"'\"\n", "\n", "# Put the EEified model next to the trained model directory.\n", "EEIFIED_DIR = config.JOB_DIR + '/eeified'\n", "\n", "# You need to set the project before using the model prepare command.\n", "!earthengine set_project {PROJECT}\n", "!earthengine model prepare --source_dir {config.MODEL_DIR} --dest_dir {EEIFIED_DIR} --input {input_dict} --output {output_dict}" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "buDXUtISnwm0" }, "source": [ "Note that you can also use the TensorFlow saved model command line tool to do this manually. See [this doc](https://www.tensorflow.org/guide/saved_model#cli_to_inspect_and_execute_savedmodel) for details. Also note the names we've specified for the new inputs and outputs: `array` and `impervious`, respectively." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "hno8QSo-2XjQ" }, "source": [ "# Perform inference using the trained model in Earth Engine\n", "\n", "Before it's possible to get predictions from the trained and EEified model, it needs to be deployed on AI Platform. The first step is to create the model. The second step is to create a version. See [this guide](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models) for details. Note that models and versions can be monitored from the [AI Platform models page](http://console.cloud.google.com/ai-platform/models) of the Cloud Console. \n", "\n", "To ensure that the model is ready for predictions without having to warm up nodes, you can use a configuration YAML file to set the scaling type of this version to `autoScaling` and set a minimum number of nodes for the version. This will ensure there are always nodes on stand-by; however, you will be charged as long as they are running. For this example, we'll set `minNodes` to 10. That means that at a minimum, 10 nodes are always up and running and waiting for predictions. The number of nodes will also scale up automatically if needed."
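, "\n", "\n", "Keep in mind that `minNodes: 10` keeps ten nodes running (and billing) even when no predictions are being requested, so remember to clean up when you're finished. The commands below are a minimal sketch using standard `gcloud` deletion commands; they assume the `PROJECT` variable defined earlier and the `MODEL_NAME` and `VERSION_NAME` variables defined in the next cells.\n", "\n", "```\n", "# Uncomment and run when you're done to stop incurring charges.\n", "# !gcloud ai-platform versions delete {VERSION_NAME} --model {MODEL_NAME} --project {PROJECT}\n", "# !gcloud ai-platform models delete {MODEL_NAME} --project {PROJECT}\n", "```"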
] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "Zz_VlU9I1nlK", "colab": {} }, "source": [ "%%writefile config.yaml\n", "autoScaling:\n", " minNodes: 10" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "KSp34aCaySu5", "colab": {} }, "source": [ "MODEL_NAME = 'fcnn_demo_model'\n", "VERSION_NAME = 'v' + str(int(time.time()))\n", "print('Creating version: ' + VERSION_NAME)\n", "\n", "!gcloud ai-platform models create {MODEL_NAME} --project {PROJECT}\n", "!gcloud ai-platform versions create {VERSION_NAME} \\\n", " --project {PROJECT} \\\n", " --model {MODEL_NAME} \\\n", " --origin {EEIFIED_DIR} \\\n", " --runtime-version=1.14 \\\n", " --framework \"TENSORFLOW\" \\\n", " --python-version=3.5\n", " --config=config.yaml" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "2a4IfxvhmDWS" }, "source": [ "There is now a trained model, prepared for serving to Earth Engine, hosted and versioned on AI Platform. We can now connect Earth Engine directly to the trained model for inference. You do that with the `ee.Model.fromAiPlatformPredictor` command.\n", "\n", "## `ee.Model.fromAiPlatformPredictor`\n", "For this command to work, we need to know a lot about the model. To connect to the model, you need to know the name and version.\n", "\n", "### Inputs\n", "You need to be able to recreate the imagery on which it was trained in order to perform inference. Specifically, you need to create an array-valued input from the scaled data and use that for input. (Recall that the new input node is named `array`, which is convenient because the array image has one band, named `array` by default.) The inputs will be provided as 144x144 patches (`inputTileSize`), at 30-meter resolution (`proj`), but 8 pixels will be thrown out (`inputOverlapSize`) to minimize boundary effects.\n", "\n", "### Outputs\n", "The output (which you also need to know), is a single float band named `impervious`." 
] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "vqcgSGvx-E94", "colab": {} }, "source": [ "# Use Landsat 8 surface reflectance data.\n", "l8sr = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')\n", "\n", "# Cloud masking function.\n", "def maskL8sr(image):\n", " cloudShadowBitMask = ee.Number(2).pow(3).int()\n", " cloudsBitMask = ee.Number(2).pow(5).int()\n", " qa = image.select('pixel_qa')\n", " mask1 = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And(\n", " qa.bitwiseAnd(cloudsBitMask).eq(0))\n", " mask2 = image.mask().reduce('min')\n", " mask3 = image.select(config.opticalBands).gt(0).And(\n", " image.select(config.opticalBands).lt(10000)).reduce('min')\n", " mask = mask1.And(mask2).And(mask3)\n", " return image.select(config.opticalBands).divide(10000).addBands(\n", " image.select(config.thermalBands).divide(10).clamp(273.15, 373.15)\n", " .subtract(273.15).divide(100)).updateMask(mask)\n", "\n", "# The image input data is a cloud-masked median composite.\n", "image = l8sr.filterDate(\n", " '2015-01-01', '2017-12-31').map(maskL8sr).median().select(config.BANDS).float()\n", "\n", "# Load the trained model and use it for prediction.\n", "model = ee.Model.fromAiPlatformPredictor(\n", " projectName = PROJECT,\n", " modelName = MODEL_NAME,\n", " version = VERSION_NAME,\n", " inputTileSize = [144, 144],\n", " inputOverlapSize = [8, 8],\n", " proj = ee.Projection('EPSG:4326').atScale(30),\n", " fixInputProj = True,\n", " outputBands = {'impervious': {\n", " 'type': ee.PixelType.float()\n", " }\n", " }\n", ")\n", "predictions = model.predictImage(image.toArray())\n", "\n", "# Use folium to visualize the input imagery and the predictions.\n", "map_id_dict = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3})\n", "map = folium.Map(location=[38., -122.5], zoom_start=13)\n", "folium.TileLayer(\n", " tiles=map_id_dict['tile_fetcher'].url_format,\n", " attr='Map Data © Google Earth Engine',\n", " overlay=True,\n", " name='median composite',\n", " ).add_to(map)\n", "\n", "map_id_dict = predictions.getMapId({'min': 0, 'max': 1})\n", "\n", "folium.TileLayer(\n", " tiles=map_id_dict['tile_fetcher'].url_format,\n", " attr='Map Data © Google Earth Engine',\n", " overlay=True,\n", " name='predictions',\n", " ).add_to(map)\n", "map.add_child(folium.LayerControl())\n", "map" ], "execution_count": 0, "outputs": [] } ] }