Disclaimer

This demo uses code pulled from https://svn.apache.org/repos/asf/incubator/climate/trunk/ at revision 1502257 and assumes you are using the RCMES deployment site. If you're running the UI locally, you will have to change the file paths when you load local datasets!

How to run a model-to-observation comparison


Add a Model File

Click 'Select a Dataset'

Pick a model file
  1. Go to the "Local" tab
  2. Point the input box to "/dev/exampleModelFiles/prec.HRM3.ncep.monavg.nc" (without the quotes)
  3. Click "Parse File"
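
If you're curious what "Parse File" is doing conceptually, here's a minimal sketch that opens the same NetCDF file and lists its variables. It uses the netCDF4 Python package, which is an assumption on our part; the UI's backend may well parse the file differently.

    # Minimal sketch: open the demo's model file and list its variables,
    # roughly what "Parse File" needs to do before you can pick variables.
    # Assumes the netCDF4 package is installed.
    from netCDF4 import Dataset

    model_path = "/dev/exampleModelFiles/prec.HRM3.ncep.monavg.nc"

    nc = Dataset(model_path)
    for name, var in nc.variables.items():
        print("%s %s %s" % (name, var.dimensions, var.shape))
    nc.close()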

Select the model's attributes

Once the parsing has completed, you will need to set which variables to use in the evaluation. Our example file is fairly simple: it has only one evaluation variable, and the latitude, longitude, and time variables are easily guessed from their names. Other model files may have multiple variables to choose from, or variables with names that can't be guessed automatically. In that case you would need to select which variables to use from the respective drop downs.
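
For the curious, the kind of name-based guessing described above could look something like the sketch below. The candidate name lists are our own assumption, not the UI's exact logic.

    # Hypothetical sketch of guessing lat/lon/time variables by name.
    def guess_variable(var_names, candidates):
        """Return the first variable whose lowercased name is a candidate."""
        for name in var_names:
            if name.lower() in candidates:
                return name
        return None  # no guess; the user picks from the drop down instead

    var_names = ["lat", "lon", "time", "prec"]  # from the parsed file
    print(guess_variable(var_names, {"lat", "latitude"}))   # lat
    print(guess_variable(var_names, {"lon", "longitude"}))  # lon
    print(guess_variable(var_names, {"time", "t"}))         # time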

  1. Click 'Add Dataset' to add the dataset to the evaluation
  2. Click 'Close' to close the selection window

Add an Observation

Click 'Select a Dataset'

Before you select another dataset, let's take a look at what information the UI is showing us.

  • The dataset that you've selected is displayed in the left-hand pane.
    • You're shown the spatial and temporal bounds, the thumbnail map displaying the spatial bounds, and a few options.
  • The map shows you the spatial overlap for all the datasets that you've selected (currently we've only added one).
    • The black border shows the extent of the spatial overlap for the selected datasets.
    • The shaded region shows the extent within the overlap that the user has selected for evaluation. At the moment, you can't change this. Once you've added another dataset you'll be able to pick the area over which the evaluation should run.
  • The timeline shows the temporal overlap for all the datasets that you've selected.
    • The range of dates shows the extent of the temporal overlap for the selected datasets.
    • The black bar shows the user selected temporal range to be used for the evaluation. Again, you can't change this at the moment.
  • Lastly, there are (currently disabled) input boxes that allow you to select the temporal/spatial evaluation bounds.
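
Under the hood, the overlap shown on the map and timeline boils down to intersecting the bounds of every selected dataset. Here's a small illustrative sketch; the Bounds record and the example numbers are ours, not the UI's.

    from collections import namedtuple
    from datetime import datetime

    # Hypothetical bounds record; the numbers below are only illustrative.
    Bounds = namedtuple("Bounds", "lat_min lat_max lon_min lon_max start end")

    def overlap(datasets):
        """Intersect spatial and temporal bounds; None if they don't overlap."""
        b = Bounds(
            max(d.lat_min for d in datasets),
            min(d.lat_max for d in datasets),
            max(d.lon_min for d in datasets),
            min(d.lon_max for d in datasets),
            max(d.start for d in datasets),
            min(d.end for d in datasets),
        )
        if b.lat_min >= b.lat_max or b.lon_min >= b.lon_max or b.start >= b.end:
            return None
        return b

    model = Bounds(25.0, 50.0, -125.0, -65.0,
                   datetime(1981, 1, 1), datetime(2004, 12, 1))
    trmm = Bounds(-50.0, 50.0, -180.0, 180.0,
                  datetime(1998, 1, 1), datetime(2010, 1, 1))
    print(overlap([model, trmm]))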

  1. Go to the "RCMED" tab
  2. Select "Tropical Rainfall Measuring Mission Dataset" as the dataset
  3. Select "pcp" as the parameter to test
  4. Click "Add Observation"
  5. Click "Close"
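
The same observation can also be fetched with the toolkit from the checkout mentioned in the disclaimer. A sketch, assuming the ocw.data_source.rcmed module's parameter_dataset function, and assuming TRMM's "pcp" parameter maps to dataset ID 3 and parameter ID 36 (check the RCMED catalog if those don't match):

    from datetime import datetime
    import ocw.data_source.rcmed as rcmed

    # Assumed IDs: 3 = TRMM dataset, 36 = "pcp" parameter in RCMED.
    trmm = rcmed.parameter_dataset(
        3, 36,
        -50.0, 50.0, -180.0, 180.0,   # lat/lon bounds to request
        datetime(1998, 1, 1),         # TRMM data starts 1 January 1998
        datetime(2010, 1, 1),
    )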

Set final evaluation parameters

Set the model as the re-grid base

There are a few things you should notice now that you've added another dataset.

  • The spatial and temporal bounds have been updated on the map and timeline. The map hasn't changed, since the model that we added first is the constraining dataset (spatially, at least), but trust me, it would have if it needed to! The timeline range has scaled to account for our new dataset's start date of 1 January 1998. If you want to see for yourself, click the little 'x' next to the new dataset's thumbnail to remove it. You'll see that the timeline reverts. Then you can add the dataset again and compare.
  • The input boxes have updated now that you have a "valid" evaluation. You can't run tests on a single dataset, so the UI won't let you edit those little details until you've dealt with the larger issues first.

We want our model to be the base on which we re-grid the other dataset(s). Click the "regrid" checkbox for the model.
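
Scripted against the toolkit, that step might look like the sketch below. Here model_dataset is a placeholder for the model loaded earlier and trmm is the observation from the previous sketch; we're assuming ocw.dataset_processor provides spatial_regrid as in the toolkit's examples.

    import ocw.dataset_processor as dsp

    # Put the observation onto the model's grid (the model is the base,
    # so its lats/lons stay fixed). model_dataset is a placeholder name.
    trmm_on_model_grid = dsp.spatial_regrid(trmm, model_dataset.lats,
                                            model_dataset.lons)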

Adjust the temporal and spatial overlaps

You can adjust these values however you see fit. For the sake of the demo we recommend setting the start and end times to 01/01/2003 and 01/03/2003 respectively. This should ensure that the evaluation step is nice and quick.
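
In script form, trimming both datasets to that window could look like this sketch, continuing the placeholder names from the earlier sketches and assuming ocw.dataset.Bounds and ocw.dataset_processor.subset behave as in the toolkit's examples (we read the demo's end date 01/03/2003 as 1 March 2003):

    from datetime import datetime
    import ocw.dataset_processor as dsp
    from ocw.dataset import Bounds

    # Illustrative spatial bounds; use whatever overlap you settled on in
    # the UI. The dates follow the demo's recommendation.
    bounds = Bounds(25.0, 50.0, -125.0, -65.0,
                    datetime(2003, 1, 1), datetime(2003, 3, 1))
    model_subset = dsp.subset(bounds, model_dataset)  # model_dataset: placeholder
    trmm_subset = dsp.subset(bounds, trmm_on_model_grid)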

Run the evaluation!
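
If you're scripting instead of clicking, the evaluation itself is only a few lines, assuming the checkout's ocw.evaluation and ocw.metrics modules match the toolkit's examples. Here the observation is the reference dataset and bias is the metric:

    from ocw.evaluation import Evaluation
    import ocw.metrics as metrics

    # Compare the model against the observation with a bias metric.
    bias = metrics.Bias()
    evaluation = Evaluation(trmm_subset, [model_subset], [bias])
    evaluation.run()
    print(evaluation.results)  # one result per target dataset per metric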

How to run a multi-model-to-multi-observation comparison


The UI doesn't currently support multi-model-to-multi-observation comparisons. This will be added in the future as more UI improvements are made.
