Enhydris¶
Enhydris is a free database system for the storage and management of hydrological and meteorological data. It stores measuring stations and their time series, and allows their retrieval.
General documentation:
Installation and configuration¶
Prerequisites¶
| Prerequisite | Version |
|---|---|
| Python with setuptools and pip | 3 [1] |
| PostgreSQL + PostGIS + TimescaleDB | |
| GDAL | 1.9 [2] |
[1] Enhydris runs on Python 3.5 or later. It does not run on Python 2. setuptools and pip are needed in order to install the rest of the Python modules.
[2] In theory, installing the prerequisites with pip will also install gdal. However, it can be tricky to install, and it's usually easier to install a prepackaged version for your operating system.
Install Enhydris¶
Install Enhydris by cloning it and then installing the requirements specified in requirements.txt, probably in a virtualenv:

git clone https://github.com/openmeteo/enhydris.git
cd enhydris
git checkout 3.0
virtualenv --system-site-packages --python=/usr/bin/python3 venv
./venv/bin/pip install -r requirements.txt
./venv/bin/pip install -r requirements-dev.txt
Configure Enhydris¶
Create a Django settings file, either in enhydris_project/settings/local.py, or wherever you like. It should begin with this:

from enhydris_project.settings.development import *

and then it should go on to override DEBUG, SECRET_KEY, DATABASES and STATIC_ROOT. More settings you may want to override are the Django settings and the Enhydris settings.

On production you need to import from enhydris_project.settings instead.
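Here is a minimal sketch of such a file (the values, notably the database credentials and the paths, are placeholders to adapt to your installation):

# enhydris_project/settings/local.py -- minimal sketch, placeholder values
from enhydris_project.settings.development import *

DEBUG = True
SECRET_KEY = "change-me-to-a-long-random-string"
DATABASES = {
    "default": {
        "ENGINE": "django.contrib.gis.db.backends.postgis",
        "NAME": "enhydris_db",
        "USER": "enhydris_user",
        "PASSWORD": "topsecret",
        "HOST": "localhost",
        "PORT": 5432,
    }
}
STATIC_ROOT = "/var/cache/enhydris/static"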
Create a spatially enabled database¶
(In the following examples, we use enhydris_db as the database name, and enhydris_user as the PostgreSQL username. The user should not be a super user, and not be allowed to create more users. In production, it should not be allowed to create databases; in testing, it should be allowed, in order to be able to run the unit tests.)
Here is a Debian buster example:
# Install PostgreSQL and PostGIS
apt install postgis postgresql-11-postgis-2.5
# Install TimescaleDB (you need to add repositories in /etc/apt as
# explained in the TimescaleDB installation documentation)
apt install timescaledb-postgresql-11
timescaledb-tune
# Create database template
sudo -u postgres -s
createdb template_postgis
psql -d template_postgis -c "CREATE EXTENSION postgis;"
psql -d template_postgis -c \
"UPDATE pg_database SET datistemplate='true' \
WHERE datname='template_postgis';"
exit
# Create database
sudo -u postgres -s
createuser --pwprompt enhydris_user
createdb --template template_postgis --owner enhydris_user enhydris_db
exit
# Note: We don't need to install the timescaledb extension; the
# Django migrations of Enhydris will do so automatically.
Here is a Windows example, assuming PostgreSQL is installed at the default location:
cd C:\Program Files\PostgreSQL\11\bin
createdb template_postgis
psql -d template_postgis -c "CREATE EXTENSION postgis;"
psql -d template_postgis -c "UPDATE pg_database SET datistemplate='true'
WHERE datname='template_postgis';"
createuser -U postgres --pwprompt enhydris_user
createdb --template template_postgis --owner enhydris_user enhydris_db
At some point, these commands will ask you for the password of the operating system user.
Initialize the database¶
In order to initialize your database and create the necessary database tables for Enhydris to run, run the following commands inside the Enhydris configuration directory:
python manage.py migrate
python manage.py createsuperuser
The above commands will also ask you to create an Enhydris superuser.
Start Django and Celery¶
Inside the Enhydris configuration directory, run the following command:
python manage.py runserver
The above command will start the Django development server, listening on port 8000.
In addition, run the following to start Celery:
celery worker -A enhydris -l info --concurrency=1
Production¶
To use Enhydris in production, you need to set up a web server such as Apache. This is described in detail in Deploying Django and at https://djangodeployment.com/.
You also need to start celery as a service.
Post-install configuration: domain name¶
After you run Enhydris, log on as a superuser, visit the admin panel, go to Sites, edit the default site, and enter your domain name there instead of example.com. Emails to users for registration confirmation will contain links to that domain. Restart Enhydris (by restarting apache/gunicorn/whatever) after changing the domain name.
Settings reference¶
These are the settings available to Enhydris, in addition to the Django settings.
REGISTRATION_OPEN¶
If True, users can register, otherwise they have to be created by the administrator. The default is False. (This setting is defined by django-registration-redux.)
ENHYDRIS_USERS_CAN_ADD_CONTENT¶
If set to True, it enables all logged in users to add stations to the site, and edit the data of the stations they have entered. When set to False (the default), only privileged users are allowed to add/edit/remove data from the db. See also ENHYDRIS_OPEN_CONTENT.
ENHYDRIS_OPEN_CONTENT¶
If set to True, users who haven't logged on can view time series data and station file (e.g. image) content. Otherwise, only logged on users can do so. Logged on users can always view everything.

When this setting is False, REGISTRATION_OPEN must obviously also be set to False.
ENHYDRIS_MAP_BASE_LAYERS¶
A dictionary of JavaScript definitions of base layers to use on the map. The default is:

{
    "Open Street Map": r'''
        L.tileLayer("https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png", {
            attribution: (
                'Map data © <a href="https://www.openstreetmap.org/">' +
                'OpenStreetMap</a> contributors, ' +
                '<a href="https://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>'
            ),
            maxZoom: 18,
        })
    ''',
    "Open Cycle Map": r'''
        L.tileLayer("https://{s}.tile.thunderforest.com/cycle/{z}/{x}/{y}.png", {
            attribution: (
                'Map data © <a href="https://www.openstreetmap.org/">' +
                'OpenStreetMap</a> contributors, ' +
                '<a href="https://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>'
            ),
            maxZoom: 18,
        })
    '''
}
ENHYDRIS_MAP_DEFAULT_BASE_LAYER¶
The name of the base layer that is visible by default; it must be a key in ENHYDRIS_MAP_BASE_LAYERS. The default is "Open Street Map".
ENHYDRIS_MAP_MIN_VIEWPORT_SIZE¶
Set a value in degrees. When a geographical query has a bounding box with dimensions less than ENHYDRIS_MAP_MIN_VIEWPORT_SIZE, the map initially shown will be zoomed so that its dimensions will be at least ENHYDRIS_MAP_MIN_VIEWPORT_SIZE. Useful when showing a single entity, such as a hydrometeorological station. Default value is 0.04, corresponding to an area of approximately 4×4 km.
ENHYDRIS_MAP_DEFAULT_VIEWPORT¶
A tuple containing the default viewport for the map in geographical coordinates, in cases of geographical queries that do not return anything. The format is (minlon, minlat, maxlon, maxlat), where lon and lat are in decimal degrees, positive for north/east, negative for south/west.
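For example, a viewport roughly covering Greece might look like this (the values are illustrative):

ENHYDRIS_MAP_DEFAULT_VIEWPORT = (19.3, 34.75, 29.65, 41.8)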
ENHYDRIS_SITE_STATION_FILTER¶
This is a quick-and-dirty way to create a web site that only displays a subset of an Enhydris database. For example, the database of http://system.deucalionproject.gr/ is the same as that of http://openmeteo.org/; however, the former only shows stations relevant to the Deucalion project, because it has this setting:

ENHYDRIS_SITE_STATION_FILTER = {'owner__id__exact': '9'}
ENHYDRIS_STATIONS_PER_PAGE¶
Number of stations per page for the pagination of the station list. The default is 100.
ENHYDRIS_CELERY_SEND_TASK_ERROR_EMAILS¶
If this is True (the default), celery will email the ADMINS whenever an exception occurs, like Django does by default.
How the chart works¶
In the time series detail page there is a chart that aims to give the user a quick overview of the time series. It is zoomable and it should be obvious how it works, but this text explains it in detail.
At the time of this writing, the chart consists of 200 points joined together with a line. Since the chart isn’t much wider than 200 pixels anyway (e.g. at this time it is 400 pixels wide) this “line” that joins the points together isn’t really much of a line—it’s more like an additional point interpolated between the two points, or maybe it’s an almost vertical line.
The thing is that we have only 200 points when the time series might actually have hundreds of thousands or millions of points. So what we actually do is divide the entire time range of the time series in 200 intervals; for each interval we calculate the max, min and mean value; and we plot these three values (dark line for the mean, light line for the min and max). For each point, y is therefore the mean, max or min for the corresponding interval; and x is the center of the interval.
This explains why at high zoom levels the max, min and mean coincide.
This doesn’t always work right. Precipitation, in particular, is problematic because, except for high zoom levels, the max is so much larger than the mean that the latter is plotted very near zero. But as a quick overview it generally does its job.
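The following pandas sketch illustrates the idea (the function name and details are made up; the actual implementation is in the Enhydris source):

import pandas as pd

def chart_points(series: pd.Series, n_intervals: int = 200) -> pd.DataFrame:
    # Divide the full time range of the series in n_intervals intervals.
    edges = pd.date_range(series.index[0], series.index[-1], periods=n_intervals + 1)
    # For each interval, calculate min, max and mean.
    stats = series.groupby(pd.cut(series.index, edges)).agg(["min", "max", "mean"])
    # Plot each triplet against the center of its interval.
    stats.index = [i.left + (i.right - i.left) / 2 for i in stats.index]
    return stats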
Release notes¶
Version 3.0¶
Upgrading¶
You may only upgrade from version 2.1 (version 2.1 only exists to facilitate transition to 3.0, and it is otherwise not used; the old stable Enhydris version is 2.0). The procedure is this:
Make sure you are running version 2.0 (any release will do).
Backup the database.
Make sure you have read and understood the list of changes from 2.0 below, as some of these changes may require manual intervention or automatically do things you might not want.
Update the repository:

git fetch origin

Shut down the running service.

Install version 2.1 and migrate:

git checkout 2.1
python manage.py migrate
Empty the migrations table of the database for the hcore app:

python manage.py migrate --fake hcore zero

(This step is optional because in 3.0 the hcore app goes away and is replaced by enhydris. You can omit it in case you need to go back, or execute it if you want a cleaner database.)

Install TimescaleDB and restart PostgreSQL. You don't need to create the extension in the database; the Django migrations will do so automatically. See "TimescaleDB" in the "Changes from 2.0" below for more information.
In the settings, make sure SITE_ID, LANGUAGE_CODE and PARLER_LANGUAGES are set properly. See “Multilingual contents” in the “Changes from 2.0” below for more information.
Install version 3.0:

git checkout 3.0
pip install -r requirements.txt

If your settings file has been in enhydris/settings/, you need to create a settings file in enhydris_project/settings/, as this location has changed.

Empty the migrations table for the registration app:

python manage.py migrate --fake registration zero

If you fail to perform this step, you may get the message 'relation "registration_registrationprofile" does not exist' or similar. The exact cause is not known, however lots of things have changed regarding the registration system.

Execute migrations:

python manage.py migrate --fake-initial

If some migrations succeed and there is a failure later, you should probably omit the --fake-initial parameter in subsequent attempts. There is, notably, a possibility of an error related to registration happening (as described in the previous step); in such a case, repeat the previous step and then re-execute the above migration command (possibly without --fake-initial).
Remove obsolete settings from the settings file.
Start the service.
Create and start a celery service.
Changes from 2.0¶
Time series groups¶
In 2.0, a station has time series. Now it has time series groups and each group consists of time series with essentially the same kind of data but in a different time step or in a different checking status. For example, if you have a temperature sensor that measures temperature every 10 minutes, then you will have a “temperature” time series group, which will contain the initial time series, and it may also contain the checked time series, the regularized time series, the hourly time series, etc. (If you have two temperature sensors, you’ll have two time series groups.)
We avoid showing the term “time series group” to the user (instead, we are being vague, like “Data”, or we might sometimes use “time series” when we actually mean a time series group). Sometimes we can’t avoid it though (notably in the admin).
Each time series in the group has a “type” (which is enumerated): it can be initial, checked, regularized, or aggregated.
During database upgrade, unless enhydris-autoprocess is installed, each existing time series goes in a separate group and is assumed to be the initial time series. In many cases, this is the correct assumption. If enhydris-autoprocess is installed, the database upgrade attempts to find out which time series is the initial, which is checked, and which is aggregated (however enhydris-autoprocess did not exist for Enhydris 2.0, so this applies only to installations of Enhydris development versions).
TimescaleDB¶
We now store time series data in the database using TimescaleDB. Before that, time series data was stored in files in the filesystem, in CSV format, one file per time series.
The location where the files were being stored was specified by the setting ENHYDRIS_TIMESERIES_DATA_DIR. This setting has now been abolished.
The size of your database will increase considerably. The increase in size may be eight times the size of ENHYDRIS_TIMESERIES_DATA_DIR. Make sure you have the available disk space. Also make sure that your PostgreSQL backup strategy can handle the increased size of the database.
When executing the migrations, the time series data will be read from the files and entered to the database. The files will not be removed.
The migration will only work if the PostgreSQL server runs on the same machine as Enhydris. This is because, in order to speed up the importing of the data to the database, the files are read directly by the database server using the SQL COPY ... FROM command. See the code for the migration for more details.
Since a single transaction could be too much for the entire importing (it would use lots of space and be very slow), the transaction is committed for each time series. This means that if you interrupt the migration, the database will contain some, but not all, records. Attempting to run the migration a second time will therefore fail. In such a case, before attempting to re-run the migration, empty the table like this:
echo "DELETE FROM enhydris_timeseriesrecord" | ./manage.py dbshell
In addition, to speed up importing of the data, table constraints and indexes are created after the data is imported. This may mean that it could fail after importing if there are duplicate dates in the time series data. This can happen because of an old bug. In such a case, reverse the migration (empty the table as above if needed), run the following inside the ENHYDRIS_TIMESERIES_DATA_DIR directory to find the problems, fix them and re-run the migration:
for x in *; do
    # The first 16 characters of each line are the timestamp
    # ("YYYY-MM-DD HH:MM"), so "uniq -w 16 -D" prints all lines
    # whose timestamp is duplicated.
    a=`uniq -w 16 -D $x`
    if [ -n "$a" ]; then
        echo ========= $x
        echo "$a"
        echo
    fi
done
As an order of magnitude, conversion of the data should take something like 40 minutes per GB of ENHYDRIS_TIMESERIES_DATA_DIR storage space, but of course this depends on several factors. Roughly half of this time will be for the importing of the data, and another half for the creation of the indexes (however these times might not actually be linear).
Celery¶
In 2.0, nothing was done asynchronously. In 3.0, the uploading of time series data through the site (not through the Web API) is performed asynchronously, i.e. the user receives a message that the time series data are about to be imported, and he is emailed when importing finishes.
Therefore, a Celery service must be running on the server.
Some add-on applications, like enhydris-synoptic and enhydris-autoprocess, also use Celery.
Multilingual contents¶
The way we do multilingual database contents has changed.
We were using a hacky system where two languages were offered; e.g. there was Gentity.name and Gentity.name_alt, where the latter was the name in the "alternative" language. This system, rather than a "correct" one that uses, e.g., django-parler, was more trouble than it was worth, therefore all fields ending in _alt have been abolished.
In the new Enhydris version, several lookups, such as variable names, are multilingual using django-parler. However, station and time series names and remarks, event reports, etc. (i.e. everything a non-admin user is expected to enter), are not multilingual. The idea is that a station in Greece will have a Greek name, and this does not need to be transliterated. The rationale is the same as for OSM's avoid-transliteration rule: transliterations can be automated, and having users enter them manually would only create noise in the database. There may be valid cases for translation (e.g. when the name of a station is "bridge X", or translation of remarks), but users generally don't enter translations so we haven't developed this functionality yet.
For the case of fields that are untranslated in the new version, while upgrading, for each row, whichever of fieldname and fieldname_alt is nonempty will be used for fieldname. If both are nonempty and they are single-line fields, "value of fieldname [value of fieldname_alt]" will be used for fieldname, i.e. the value of fieldname_alt will be appended in square brackets. If the number of characters available is insufficient, an error message will be given and the upgrade will fail. If both fields are nonempty and they are multi-line fields such as TextField, they will be joined together separated by \n\n---ALT---\n\n.
For the case of lookups translated with django-parler, fieldname becomes the main language (set by LANGUAGE_CODE or PARLER_DEFAULT_LANGUAGE_CODE), and fieldname_alt becomes the second language, i.e. the second entry of PARLER_LANGUAGES. If PARLER_LANGUAGES has fewer than two languages, then the conversion described in the previous paragraph takes place.
(In fact, because abolishing the _alt fields was decided and implemented several months before deciding to use django-parler on lookups, the migration system will convert everything to unilingual as described above, and then it will convert lookups back to multilingual.)
Before upgrading the database, it is important to set SITE_ID, LANGUAGE_CODE, and PARLER_LANGUAGES. SITE_ID is probably already set, probably by the default Enhydris settings. Keep it as it is. Set LANGUAGE_CODE to the language that corresponds to the main language of the site, i.e. the one to which lookup descriptions not ending in _alt correspond. Finally, set PARLER_LANGUAGES as follows:
PARLER_LANGUAGES = {
SITE_ID: [
{"code": LANGUAGE_CODE},
{"code": "specify_your_second_language_here"},
],
}
Because of what is likely a bug in django-parler (at least 2.0), it is important to use SITE_ID as the key and not None.
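For example, for a site whose main language is Greek (LANGUAGE_CODE = "el") and whose second language is English, the setting would be (values illustrative):

PARLER_LANGUAGES = {
    SITE_ID: [
        {"code": "el"},
        {"code": "en"},
    ],
}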
Geographical areas¶
Each station (and more generally each Gentity) used to have three foreign keys to water basins, water divisions, and political divisions (the latter were hierarchical, being countries at the top level). This is no longer the case. Water basins, water divisions, and political divisions have been abolished. Instead, there is a mere Garea entity, which can belong to a category. You create as many categories as you want (countries, water basins, prefectures, whatever you like) and you upload a shapefile of them (it's mandatory that they have a geometry).
There is no foreign key between stations (or other Gentities) and Gareas. To find which stations are in a Garea, the system does a point-in-polygon query.
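Such a point-in-polygon query might look like this in GeoDjango (a sketch using the model names described elsewhere in this document; the area name is hypothetical):

from enhydris.models import Garea, Station

garea = Garea.objects.get(name="Attica")  # hypothetical Garea
stations_in_area = Station.objects.filter(geom__within=garea.geom)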
The upgrade will delete all existing water basins, water divisions, and political divisions, and all existing relationships between them. This change is non-reversible. It will not create any Gareas. You can use the admin to upload Gareas.
Other changes¶
- The Web API has been reworked. Applications using the Enhydris 2.0 web API won’t work unchanged with 3.0.
- The templates have been refactored. Applications and installations with custom templates or templates inheriting the Enhydris templates may need to be modified.
- Instruments have been abolished. Upgrading requires the database to not have any instruments. If you try to upgrade and there are instruments, it will give you an error message with instructions on how to empty the instruments table.
- GentityGenericData and GentityAltCode have been abolished, as they were practically not being used in any of the known installations. Upgrading requires the tables to be empty; if not, upgrading will stop with an error message. Make sure the tables are empty before upgrading.
- Gpoint.point has been renamed to Gpoint.geom.
- Stations now must have co-ordinates, i.e. the related database field gpoint.geom (formerly gpoint.point) is not null. If you have any stations with null co-ordinates, they will be silently converted to latitude zero and longitude zero during upgrading.
- The time step is now stored as a pandas "frequency" string, e.g. "10min", "H", "M", "Y". The TimeStep model does not exist any more. The timestamp_rounding, timestamp_offset and interval_type properties have been abolished. During the database upgrade, they are simply dropped.
- SQLite is no longer supported.
- The fields approximate (used to denote that a station's location has been assigned roughly) and asrid (altitude SRID) have been abolished. The field srid has been renamed to original_srid.
- The field Gentity.short_name has been renamed to Gentity.code.
. - Station types have been abolished. Stations now don’t have a type. The related information previously stored in the database will be deleted in the upgrade.
- Stations can now only have a single overseer, specified as a text field. Upgrading will convert as needed, and it will also delete any unreferenced Person objects.
- The field Station.is_automatic has been abolished.
- The database fields copyright_years and copyright_holder have been abolished. The database upgrade will remove them and any information stored in them will be lost. Accordingly, the setting ENHYDRIS_DISPLAY_COPYRIGHT_INFO has been abolished.
- OpenLayers has been replaced with Leaflet. Accordingly, the form of the ENHYDRIS_MAP_BASE_LAYERS setting has been changed and the setting ENHYDRIS_MAP_DEFAULT_BASE_LAYER has been added.
- The setting ENHYDRIS_SITE_CONTENT_IS_FREE has been abolished. ENHYDRIS_TSDATA_AVAILABLE_FOR_ANONYMOUS_USERS has been renamed to ENHYDRIS_OPEN_CONTENT. Several other settings that were rarely being used have been abolished or renamed.
Version 2.0¶
Upgrading¶
You can upgrade directly from versions 0.8 and later. If you have an older version, first upgrade to 0.8.
Enhydris is no longer pip-installable. Instead, it is a typical Django application with its manage.py and all. Install it as described in Installation and configuration and execute the database upgrade procedure:
python manage.py migrate
Changes from 1.1.2¶
- Now a normal Django project, no longer pip-installable.
- Django 1.11 and only that is now supported.
- A favicon has been added.
- Several bugs have been fixed. Notably, object deletions are confirmed.
Changes in 2.0 microversions¶
- Version 2.0.1 removes EMAIL_BACKEND from the base settings and leaves the Django default (this broke some production sites that did not specify EMAIL_BACKEND and were expecting the Django default).
- Version 2.0.2 adds pagination to the list of stations and requires a Django-1.11-compatible version of django-simple-captcha.
- Version 2.0.3 fixes an undocumented CSV view that sends you a zip file with stations, instruments and time series in CSV when you add ?format=csv to a stations list URL. Apparently this had been broken since version 1.0.
- Version 2.0.4 fixes several crashes.
Version 1.1¶
Upgrading¶
There are no database migrations since version 0.8, so you just need to install the new version and you’re good to go.
Changes in 1.1 microversions¶
- Version 1.1.0 changes an internal API; enhydris.hcore.models.Timeseries.get_all_data() is renamed to enhydris.hcore.models.Timeseries.get_data() and accepts arguments to specify a start and end date.
- Version 1.1.1 puts the navbar inside a {% block %}, so that it can be overridden in custom skins.
- Version 1.1.2 fixes two bugs when editing time series: appending wasn’t working properly, and start and end dates were shown as editable fields.
Version 1.0¶
Overview¶
This version has important internal changes, but no change in functionality (except for the fix of a minor bug, where the time series chart would apparently "hang", showing a waiting cursor forever, when a time series was empty). These important changes are:
- Python 3 is now supported, and there is no more support for Python 2.
- Pthelma is not used anymore; instead, there is a dependency on pandas and on the new pd2hts module.
Upgrading from 0.8¶
Make sure you are running Enhydris 0.8. Discard your virtualenv and follow the Enhydris installation instructions to install the necessary operating system packages and install Enhydris in a new Python 3 virtualenv. You don’t need to change anything in the configuration or perform any database migration.
Changes in 1.0 microversions¶
- When downloading time series and specifying a start date, the resulting time series could start on a slightly different start date because of some confusion with the time zone. The bug was fixed in 1.0.1.
- Gentity files could not be downloaded because of a bug in the downloading code. Fixed in 1.0.2.
Version 0.8¶
Overview¶
- The time series data are now stored in files instead of in database blobs. They are stored uncompressed, which means that much more disk space is consumed, but it has way more benefits. If disk space is important to you, use a file system with transparent compression.
- Experimental spatialite support.
Upgrading from 0.6¶
The upgrade procedure is slightly complicated, and uses the intermediate Enhydris version 0.7, which exists only for this purpose.
(Note for developers: the reason for this procedure is that the migrations have been reset. Previously the migrations contained PostgreSQL-specific stuff.)
The upgrade procedure is as follows:
Backup your database, your media files, and your configuration (you are not going to use this backup unless something goes wrong and you need to restore everything to the state it was before).
Make sure you are running Enhydris 0.6.
Follow the Enhydris 0.8 installation instructions to install Enhydris in a new virtualenv; however, rather than installing Enhydris 0.8, install, instead, Enhydris 0.7, like this:
pip install 'enhydris>=0.7,<0.8'
Open your settings.py and add the configuration setting ENHYDRIS_TIMESERIES_DATA_DIR. Make sure your server has enough space for that directory (four times as much as your current database, and possibly more), and that it will be backing it up.

Apply the database upgrades:

python manage.py migrate
Install Enhydris 0.8:
pip install --upgrade --no-deps 'enhydris>=0.8,<0.9'
Have your database password ready and run the following to empty the django_migrations database table:
python manage.py dbshell
delete from django_migrations;
\q
Repopulate the django_migrations table:
python manage.py migrate --fake
Version 0.6¶
Overview¶
- The skin overhaul has been completed.
- The confusing fields “Nominal offset” and “Actual offset” have been renamed to “Timestamp rounding” and “Timestamp offset”. For this, pthelma>=0.12 is also required.
- Data entry of station location has been greatly simplified. The user now merely specifies latitude and longitude, and only if he chooses the advanced option does he need, instead, to specify ordinate, abscissa, and srid.
- Several bugs have been fixed.
Backwards incompatible changes¶
The is_active fields have been removed.

Stations and instruments had an is_active field. Apparently the original designers of Enhydris thought that it would be useful to make queries of, e.g., active stations, as opposed to all stations (including obsolete ones).
However, the correctness of this field depends on the procedures each organization has. Many organizations don’t have a specific procedure for obsoleting a station; a station merely falls out of use (e.g. an overseer stops working and (s)he is never replaced). Therefore, it is unlikely that someone will go and enter the correct value in the is_active field. Even if an organization does have processes that could ensure correctness of the field, they could merely specify an end date to a station or instrument, and therefore is_active is superfluous.
Indeed, in all Hydroscope databases, the field seems to be randomly chosen, and in openmeteo.org it makes even less sense, since it is an open database whose users are expected to merely abandon their stations and not care about “closing” them properly.
Therefore, the fields have been removed. However, the database upgrade script will verify that they are not being used before going on to remove them.
Upgrading from 0.5¶
Backup your database (you are not going to use this backup unless something goes wrong and you need to restore everything to the state it was before).
Make sure you are running the latest version of Enhydris 0.5 and that you have applied all its database upgrades (running python manage.py migrate should apply all such upgrades, and should do nothing if they are already applied).
Install 0.6 and execute the database upgrade procedure:
python manage.py migrate
Changes in 0.6 microversions¶
- Added some explanatory text for timestamp rounding and timestamp offset in the time series form (in 0.6.1).
Version 0.5¶
Overview¶
- There has been a huge overhaul of the Javascript.
- The map base layers are now configurable in settings.py.
- The map has been simplified and now uses OpenLayers 2.12.
- The “advanced search” has been removed. Instead, it is possible to perform advanced searches by writing the appropriate code in the single search box. The “Search tips” link beside the search box provides instructions.
- The skin has been modernized and simplified and uses Bootstrap. This is work in progress.
- The installation procedure has been greatly simplified.
- Django 1.8 support.
Backwards incompatible changes¶
- Only supports Python 2.7 and Django 1.8.
- Removed apps hchartpages and dbsync. These are expected to be replaced by independent applications in the future, but no promises are made. Enhydris is to become a small, reliable and well-maintained core.
Upgrading from 0.2¶
Version 0.5 contains some tricky database changes. The upgrade procedure is slightly complicated, and uses the intermediate Enhydris version 0.3, which exists only for this purpose.
(Note for developers: the reason for this procedure is that hcore used to have a foreign key to a dbsync model. As a result, the initial Django migration listed dbsync as a dependency, making it impossible to remove dbsync.)
The upgrade procedure is as follows:
Backup your database (you are not going to use this backup unless something goes wrong and you need to restore everything to the state it was before).
Make sure you are running the latest version of Enhydris 0.2 and that you have applied all its database upgrades (running python manage.py migrate should apply all such upgrades, and should do nothing if they are already applied).
Follow the Enhydris 0.5 installation instructions to install Enhydris in a new virtualenv; however, rather than installing Enhydris 0.5, install, instead, Enhydris 0.3, like this:
pip install 'enhydris>=0.3,<0.4'
Apply the database upgrades:
python manage.py migrate --fake-initial
Install Enhydris 0.5. The simplest way (but not the safest) is this:
pip install --upgrade --no-deps 'enhydris>=0.5,<0.6'
However, it is best to discard your Enhydris 0.3 virtualenv and create a new one, in which case you would install Enhydris 0.5 like this:
pip install 'enhydris>=0.5,<0.6'
Have your database password ready and run the following to empty the django_migrations database table:
python manage.py dbshell
delete from django_migrations;
\q
Repopulate the django_migrations table:
python manage.py migrate --fake
Changes in 0.5 microversions¶
- Removed embedmap view (in 0.5.1).
- Removed example_project, which was used for development instances; instead, added instructions in README.rst on how to create one (in 0.5.1).
- Fixed internal server error when editing station with ENHYDRIS_USERS_CAN_ADD_CONTENT=True (in 0.5.2).
- Since 0.5.3, Enhydris depends on pthelma<0.12, since pthelma 0.12 has a backwards incompatible change.
Version 0.2¶
Changes¶
There have been too many changes to list here in detail. The most important ones (particularly those affecting backwards compatibility) are:
- Removed apps hrain, gis_objects, contourplot, hfaq, contact. hfaq and contact should be replaced with flatpages. hrain, gis_objects, and contourplot are not supported any more. If they are included again in the future, they will be maintained separately as distinct applications. Enhydris is to become a small, reliable and well-maintained core.
- Removed front page; the front page is now the station list.
- Compatible with Django 1.5 and 1.6.
Upgrading from 0.1¶
Essentially you are on your own. It’s likely that just installing Enhydris 0.2 and executing python manage.py migrate will do the trick. Don’t forget to backup your database before attempting anything!
Copyright and credits¶
Enhydris is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License, as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
The software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the licenses for more details.
You should have received a copy of the license along with this program. If not, see http://www.gnu.org/licenses/.
Enhydris was funded by several organizations:
- From 2005 to 2015 by NTUA (Antonis Christofides was an employee and worked on Enhydris as part of his work at Itia).
- In 2009-2010 by the Ministry of Environment of Greece as part of the Hydroscope project.
- In 2013-2014 by the TEI of Epirus as part of the IRMA project.
- In 2015 by GRNET as an open technology project.
- In 2018-2021 by NTUA and ICCS as part of the OpenHi project, funded by the EU-Greece Sectoral Structural framework “Antagonistikotita”.
Developer documentation:
Contributing to Enhydris¶
Enhydris is developed at GitHub. You can use the issue tracker there to file bugs. If you want to write code you can submit a pull request. For any non-trivial fix it is better to first co-ordinate with us by emailing us at openmeteo@itia.ntua.gr.
The database¶
Main principles¶
Enhydris supports PostgreSQL (with PostGIS).
In Django parlance, a model is a type of entity, which usually maps to a single database table. Therefore, in Django, we usually talk of models rather than of database tables, and we design models, which is close to conceptual database design, leaving it to Django’s object-relational mapper to translate to the physical. In this text, we also speak more of models than of tables. Since a model is a Python class, we describe it as a Python class rather than as a relational database table. If, however, you feel more comfortable with tables, you can generally read the text understanding that a model is a table.
If you are interested in the physical structure of the database, you need to know the model translation rules, which are quite simple:
- The name of the table is the lower case name of the model, with a prefix. The prefix for the core of the database is enhydris_. (More on the prefix below.)
- Tables normally have an implicit integer id field, which is the primary key of the table.
- Table fields have the same name as model attributes, except for foreign keys.
- Foreign keys have the name of the model attribute suffixed with _id.
- When using multi-table inheritance, the primary key of the child table is also a foreign key to the id field of the parent table. The name of the database column for the key of the child table is the lower cased parent model name suffixed with _ptr_id.
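For example, a hypothetical model like the following (for illustration only; not an actual Enhydris model) would map as shown in the comments:

from django.db import models

class Sensor(models.Model):
    # Table "enhydris_sensor" (if defined in the core app), with an
    # implicit integer "id" primary key.
    name = models.CharField(max_length=200)  # column "name"
    station = models.ForeignKey("Station", on_delete=models.CASCADE)  # column "station_id"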
The core of the Enhydris database is a list of measuring stations, with additional information such as photos, videos, and the hydrological and meteorological time series stored for each measuring station. This can be used in or assisted by many more applications, which may or may not be needed in each setup. A billing system is needed for agencies that charge for their data, but not for those who offer them freely or only internally. Some organisations may need to develop additional software for managing aqueducts, and some may not. Therefore, the core is kept as simple as possible. The core database tables use the enhydris_ prefix. Other applications use another prefix. The name of a table is the lowercased model name preceded by the prefix. For example, the table that corresponds to the Gentity model is enhydris_gentity.
Lookup tables¶
Lookup tables are those that are used for enumerated values. For example, the list of variables is a lookup table. Most lookup tables in the Enhydris database have three fields: id, descr, and short_descr, and they all inherit the following abstract base class:
class enhydris.models.Lookup¶
This class contains the common attribute of the lookup tables:

descr¶
A character field with a descriptive name.
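A sketch of what such an abstract base might look like in Django (an assumption for illustration; see enhydris/models.py for the actual definition):

from django.db import models

class Lookup(models.Model):
    descr = models.CharField(max_length=200)

    class Meta:
        # Abstract: concrete lookup models inherit the fields; no table
        # is created for Lookup itself.
        abstract = True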
Most lookup tables are described in a relevant section of this document, where their description fits better.
Lentities¶
The Lentity is the superclass of people and groups. For example, a measuring station can belong either to an organisation or an individual. Lawyers use the word “entity” to refer to individuals and organisations together, but this would create confusion because of the more generic meaning of “entity” in computing; therefore, we use “lentity”, which is something like a legal entity. The lentity hierarchy is implemented by using Django’s multi-table inheritance.
class enhydris.models.Person¶
Gentity and its direct descendants: Gpoint, Gline, Garea¶
A Gentity is a geographical entity. Examples of gentities (short for geographical entities) are measuring stations, cities, boreholes and watersheds. A gentity can be a point (e.g. stations and boreholes), a surface (e.g. lakes and watersheds), a line (e.g. aqueducts), or a network (e.g. a river). The gentities implemented in the core are measuring stations and generic gareas. The gentity hierarchy is implemented by using Django’s multi-table inheritance.
class enhydris.models.Gentity¶

name¶
A field with the name of the gentity, such as the name of a measuring station. Up to 200 characters.

code¶
An optional field with a code for the gentity. Up to 50 characters. It can be useful for entities that have a code, e.g. watersheds are codified by the EU, and the watershed of Nestos River has code EL07.

remarks¶
A field with general remarks about the gentity. Unlimited length.

geom¶
This is a GeoDjango GeometryField that stores the geometry of the gentity.
class enhydris.models.Gpoint(Gentity)¶

original_srid¶
Specifies the reference system in which the user originally entered the co-ordinates of the point. Valid srid's are registered at http://www.epsg-registry.org/. See also https://medium.com/@aptiko/introduction-to-geographical-co-ordinate-systems-4e143c5b21bc.

altitude¶
The altitude in metres above mean sea level.
Additional information for generic gentities¶
This section describes models that provide additional information about gentities.
class enhydris.models.GentityFile¶
class enhydris.models.GentityImage¶
These models store files and images for the gentity. The difference between GentityFile and GentityImage is that GentityImage objects are shown in a gallery in the station detail page, whereas files are shown in a much less prominent list.

descr¶
A short description or legend of the file/image.

remarks¶
Remarks of unlimited length.

date¶
For photos, it should be the date the photo was taken. For other kinds of files, it can be any kind of date.

content¶
The actual content of the file; a Django FileField (for GentityFile) or ImageField (for GentityImage).

featured¶
This attribute exists for GentityImage only. In the station detail page, one of the images (the "featured" image) is shown in large size (the rest are shown as a thumbnail gallery). This attribute indicates the featured image. If there are more than one featured images (or if there is none), images are sorted by descr, and the first one is featured.
class enhydris.models.EventType(Lookup)¶
Stores types of events.

class enhydris.models.GentityEvent¶
An event is something that happens during the lifetime of a gentity and needs to be recorded. For example, for measuring stations, events such as malfunctions, maintenance sessions, and extreme weather phenomena observations can be recorded and provide a kind of log.

date¶
The date of the event.

user¶
The username of the user who entered the event to the database.

report¶
A report about the event; a text field of unlimited length.
Webservice API¶
Quick start¶
Get list of stations with a simple unauthenticated request:
$ curl https://openmeteo.org/api/stations/
Response:
{
"count": 109,
"next": "http://openmeteo.org/api/stations/?page=2",
"previous": null,
"bounding_box": [
7.58748007,
34.9857333,
32.9850667,
53.85553
],
"results": [
{
"id": 1386,
"last_modified": "2013-10-10T05:04:42.478447Z",
"name": "ΡΕΜΑ ΠΙΚΡΟΔΑΦΝΗΣ",
"code": "ΠΙΚΡΟΔΑΦΝΗ",
"remarks": "ΕΛΛΗΝΙΚΟ ΚΕΝΤΡΟ ΘΑΛΑΣΣΙΩΝ ΕΡΕΥΝΩΝ",
"original_srid": 2100,
"altitude": 2,
"geom": "SRID=4326;POINT (23.7025252977241 37.91860884428689)",
"start_date": "2012-09-20",
"end_date": null,
"owner": 11,
"overseer": "",
"maintainers": []
},
...
]
}
Some requests need authentication. First, you need to get a token:
curl -X POST -d "username=alice" -d "password=topsecret" \
https://openmeteo.org/api/auth/login/
Response:
{"key": "24122a7ad9cfec48eb536f5ca12fe06116ba3593"}
Subsequently, you can make authenticated requests to the API; for example, the following will update a time series, modifying its variable field:
curl -H "Authorization: token 24122a7ad9cfec48eb536f5ca12fe06116ba3593" \
-X PATCH -d "variable=1" \
https://openmeteo.org/api/stations/1334/timeseries/10657/
The response will be 200 with the following content:
{
"id": 10657,
"last_modified": "2011-06-22T06:54:17.064484Z",
"name": "Wind gust (2000-2006)",
"hidden": false,
"precision": 1,
"remarks": "Type: Raw data",
"gentity": 1334,
"variable": 1,
"unit_of_measurement": 7,
"time_zone": 1,
"time_step": "10min"
}
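For those who prefer Python to curl, here is the same flow as a sketch using the requests library (assuming it is installed; endpoints as documented above):

import requests

BASE = "https://openmeteo.org/api"

# Unauthenticated request: list stations.
stations = requests.get(BASE + "/stations/").json()
print(stations["count"])

# Get a token.
r = requests.post(
    BASE + "/auth/login/",
    data={"username": "alice", "password": "topsecret"},
)
token = r.json()["key"]

# Authenticated request: modify a time series.
r = requests.patch(
    BASE + "/stations/1334/timeseries/10657/",
    headers={"Authorization": "token " + token},
    data={"variable": "1"},
)
print(r.status_code)  # normally 200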
Authentication and user management¶
Client authentication¶
Use OAuth2 token authentication:
curl -H "Authorization: token OAUTH-TOKEN" https://openmeteo.org/api/
To get a token, POST to /auth/login/:
curl -X POST -d "username=alice" -d "password=topsecret" \
https://openmeteo.org/api/auth/login/
This will result in something like this:
{"key": "24122a7ad9cfec48eb536f5ca12fe06116ba3593"}
You can invalidate a token by POST to /auth/logout/:
curl -X POST -H "Authorization: token OAUTH-TOKEN" \
https://openmeteo.org/api/auth/logout/
The response is 200 with this content:
{"detail":"Successfully logged out."}
Password management¶
To change password, POST to /auth/password/change/:
curl -X POST -H "Authorization: token OAUTH-TOKEN" \
-d "old_password=topsecret1" \
-d "new_password1=topsecret2" -d "new_password2=topsecret2" \
https://openmeteo.org/api/auth/password/change/
If all goes well, the response is a 200 with the following content:
{"detail": "New password has been saved."}
If there is an error, the response is a 400 with a standard error response.
To reset the password, POST to /auth/password/reset/:
curl -X POST -d "email=myself@example.com" \
https://openmeteo.org/api/auth/password/reset/
This will respond with 200 and the following content:
{"detail":"Password reset e-mail has been sent."}
The response will be 200 even if there is no record of this email address (but in this case the response will be ignored); this is in order to avoid disclosing which email addresses are registered. However, the response will be 400 with a standard error response if the email address is invalid.
The user will subsequently be sent an email with a link (under /api/auth/password/reset/confirm/) that provides a page where the user can specify a new password. After succeeding in specifying a new password, he is redirected to /api/auth/password/reset/complete/, which is a page that says "your password has been set". However these two aren't API endpoints (they're just the convenient defaults of dj-rest-auth).
User profile management¶
To get the user data, GET /auth/user/:
curl -H "Authorization: token OAUTH-TOKEN" \
https://openmeteo.org/api/auth/user/
This will normally result in a 200 response with content like this:
{
"pk": 166,
"username": "alice",
"email": "alice@example.com",
"first_name": "Alice",
"last_name": "Burton"
}
You can modify these attributes except for pk and email by PUT or PATCH to the same endpoint:
curl -X PATCH -H "Authorization: token OAUTH-TOKEN" \
-d "username=joe" https://openmeteo.org/api/auth/user/
The response is a 200 with a similar content as the GET response (with the updated data), unless there is a problem, in which case there’s a standard error response.
Lookups¶
GET a single object for stationtypes:
curl https://openmeteo.org/api/stationtypes/1/
Response:
{
"id": 1,
"last_modified": "2011-06-22T05:21:05.436765Z",
"descr": "Meteorological",
}
GET the list of objects for stationtypes:
curl https://openmeteo.org/api/stationtypes/
The result is a paginated list of station types:
{
"count": 8,
"next": null,
"previous": null,
"results": [
{...},
{...},
...
]
}
Exactly the same applies to eventtypes and variables.

Besides these there are several other lookups for which the response is similar but may have additional information. These are organizations, persons, timezones, filetypes and units.

Response format for organizations:
{
"id": 5,
"last_modified": "2011-06-30T03:03:47.392265Z",
"remarks": "",
"name": "National Technical University of Athens - Dept. of Water Resources and Env. Engineering",
"acronym": "N.T.U.A. - D.W.R.E.",
}
Response format for persons:
{
"id": 17,
"last_modified": null,
"remarks": "",
"last_name": "Christofides",
"first_name": "Antonis",
"middle_names": "Michael",
"initials": "A. C.",
}
Response format for timezones:
{
"id": 9,
"last_modified": "2011-06-28T16:42:34.760676Z",
"code": "EST",
"utc_offset": -300
}
Response format for filetypes:
{
"id": 7,
"last_modified": "2011-06-22T05:04:03.461401Z",
"descr": "png Picture",
"mime_type": "image/png"
}
Response format for units:
{
"id": 614,
"last_modified": null,
"descr": "Square metres",
"symbol": "m²",
"variables": []
}
Stations¶
Station detail¶
You can GET the detail of a single station at /api/stations/ID/:
curl https://openmeteo.org/api/stations/1334/
Response:
{
"id": 1386,
"last_modified": "2013-10-10T05:04:42.478447Z",
"name": "ΡΕΜΑ ΠΙΚΡΟΔΑΦΝΗΣ",
"code": "ΠΙΚΡΟΔΑΦΝΗ",
"remarks": "ΕΛΛΗΝΙΚΟ ΚΕΝΤΡΟ ΘΑΛΑΣΣΙΩΝ ΕΡΕΥΝΩΝ",
"original_srid": 2100,
"altitude": 2,
"geom": "SRID=4326;POINT (23.7025252977241 37.91860884428689)",
"start_date": "2012-09-20",
"end_date": null,
"owner": 11,
"overseer": "",
"maintainers": []
}
List stations¶
GET the list of stations at /api/stations/:
curl https://openmeteo.org/api/stations/
The result is a paginated list of stations:
{
    "count": 109,
    "next": "http://openmeteo.org/api/stations/?page=2",
    "previous": null,
    "bounding_box": [7.58748, 37.03330, 26.88787, 53.85553],
    "results": [
        {...},
        {...},
        ...
    ]
}
Except for the standard paginated list attributes count, next, previous and results, the returned object also contains bounding_box: this is the rectangle that encloses all stations this query returns (not only of this page): longitude and latitude of lower left corner, longitude and latitude of top right corner.
Search stations¶
Limit the returned stations with the q parameter. The following will return all stations where the specified words appear anywhere in the name, remarks, owner name, or timeseries remarks. The match is case insensitive, and the words are actually substrings (i.e. they can match part of a word):
curl 'https://openmeteo.org/api/stations/?q=athens+research'
The search string specified by q consists of space-delimited search terms. The result set is the "and" of all search terms. If a search term does not contain a colon (:), it is searched mostly everywhere, as explained above. If it does contain a colon, then the form of the search term is search_type:words. The words cannot contain a space (this is rarely a problem; instead of searching for "ionian islands", searching for "ionian" is usually fine). Search terms where the search_type isn't recognized are ignored.
You can search specifically by owner:
curl 'https://openmeteo.org/api/stations/?q=owner:ntua'
Or by type:
curl 'https://openmeteo.org/api/stations/?q=type:meteorological'
Or by variable (i.e. one of the timeseries of the station refers to that variable):
curl 'https://openmeteo.org/api/stations/?q=variable:temperature'
You can also search by bounding box. The following will find stations that are enclosed in the specified rectangle (the numbers are longitude and latitude of lower-left and top-right corner):
curl 'https://openmeteo.org/api/stations/?q=bbox:22.5,37.0,24.3,39.1'
You can include only stations that have time series by specifying the search term ts_only:, without a search word:
curl 'https://openmeteo.org/api/stations/?q=ts_only:'
Finally, ts_has_years can limit to stations based on the range of their time series. The following will find stations that have at least one time series containing records in 1988, at least one time series containing records in 1989, and at least one time series containing records in 2004:
curl 'https://openmeteo.org/api/stations/?q=ts_has_years:1988,1989,2004'
Sort the list of stations¶
Sort the returned stations with the sort parameter, which can be specified many times. This will sort by start date, then by name:
curl 'https://openmeteo.org/api/stations/?sort=start_date&sort=name'
Export stations in a CSV¶
Sometimes users want to get the list of stations and process it in a spreadsheet. The following command does that:
curl https://openmeteo.org/api/stations/csv/ >data.zip
The list can be sorted and filtered with the q and sort parameters as explained above. The result is a zip file that contains a CSV with the stations and a CSV with all the time series (their metadata only) of these stations. These lists contain all the columns, so users can do whatever they want with them.
Create, update or delete stations¶
DELETE a station:
curl -X DELETE -H "Authorization: token OAUTH-TOKEN" \
https://openmeteo.org/api/stations/1334/
The response is normally 204 (no content) or 404.
POST to create a station:
curl -X POST -H "Authorization: token OAUTH-TOKEN" \
-d "name=My station" -d "geom=POINT(20.94565 39.12102)" \
-d "owner=11" https://openmeteo.org/api/stations/
The response is a 201 with a similar content as the GET detail response (with the new data), unless there is a problem, in which case there’s a standard error response.
When specifying nested objects, these objects are not created or updated—only the id is used and a reference to the nested object with that id is created.
PUT or PATCH a station:
curl -X PATCH -H "Authorization: token OAUTH-TOKEN" \
-d "name=Your station" https://openmeteo.org/api/stations/1334/
The response is a 200 with a similar content as the GET detail response (with the updated data), unless there is a problem, in which case there’s a standard error response. Nested objects are handled in the same way as for POST (see above).
Time series groups¶
Time series group detail¶
You can GET the detail of a single time series group at /api/stations/XXX/timeseriesgroups/YYY/:
curl https://openmeteo.org/api/stations/1403/timeseriesgroups/483/
Response:
{
"id": 522,
"last_modified": "2015-04-05T05:33:41.140506-05:00",
"name": "Temperature",
"hidden": false,
"precision": 2,
"remarks": "",
"gentity": 1403,
"variable": 5683,
"unit_of_measurement": 14,
"time_zone": 1
}
List time series groups¶
GET the list of time series groups for a station at /api/stations/XXX/timeseriesgroups/:
curl https://openmeteo.org/api/stations/1403/timeseriesgroups/
The result is a paginated list of time series groups:
{
"count": 13,
"next": null,
"previous": null,
"results": [
{...},
{...},
...
]
}
Time series¶
Time series detail¶
You can GET the detail of a single time series at /api/stations/XXX/timeseriesgroups/YYY/timeseries/ZZZ/:
curl https://openmeteo.org/api/stations/1403/timeseriesgroups/483/timeseries/9511/
Response:
{
"id": 9511,
"last_modified": "2015-04-05T05:33:41.140506-05:00",
"type": "Initial",
"time_step": "10min",
"timeseries_group": 483
}
The type is one of Initial, Checked, Regularized, and Aggregated.
List time series¶
GET the list of time series for a group at /api/stations/XXX/timeseriesgroups/YYY/timeseries/:
curl https://openmeteo.org/api/stations/1403/timeseriesgroups/483/timeseries/
The result is a paginated list of time series:
{
"count": 5,
"next": null,
"previous": null,
"results": [
{...},
{...},
...
]
}
Create time series¶
POST to create a time series:
curl -X POST -H "Authorization: token OAUTH-TOKEN" \
-d "timeseries_group=42" -d "type=Initial"-d "time_step=H" \
https://openmeteo.org/api/stations/5/timeseriesgroups/42/timeseries/
The response is a 201 with a similar content as the GET detail response (with the new data), unless there is a problem, in which case there’s a standard error response.
When specifying nested objects, these objects are not created or updated—only the id is used and a reference to the nested object with that id is created.
Time series data¶
GET the data of a time series in CSV by appending data/ to the URL:

curl https://openmeteo.org/api/stations/1334/timeseriesgroups/232/timeseries/10659/data/
Example of response:
1998-12-10 16:40,6.3,
1998-12-10 16:50,6.1,
1998-12-10 17:00,6.0,
1998-12-10 17:10,5.6,
...
Instead of CSV, you can get HTS by specifying the parameter fmt=hts:

curl 'https://openmeteo.org/api/stations/1334/timeseriesgroups/235/timeseries/10659/data/?fmt=hts'
Response:
Count=926108
Title=Temperature (from 1998)
Comment=NTUA University Campus of Zografou
Comment=
Comment=Type: Raw data
Timezone=EET (UTC+0200)
Time_step=10,0
Variable=Mean temperature
Precision=1
Location=23.787430 37.973850 4326
Altitude=219.00
1998-12-10 16:40,6.3,
1998-12-10 16:50,6.1,
1998-12-10 17:00,6.0,
1998-12-10 17:10,5.6,
...
Get only the last record of the time series (in CSV) with bottom/:

curl https://openmeteo.org/api/stations/1334/timeseriesgroups/235/timeseries/10659/bottom/
Response:
2018-07-09 11:19,0.000000,
Append data to the time series:
curl -X POST -H "Authorization: token OAUTH-TOKEN" \
-d $'timeseries_records=2018-12-19T11:50,25.0,\n2018-12-19T12:00,25.1,\n' \
https://openmeteo.org/api/stations/1334/timeseriesgroups/235/timeseries/10659/data/
(The $'...' is a bash idiom that does nothing more than escape the \n in the string.)
The response is normally 204 (no content).
Time series chart data¶
GET statistics for time series data in intervals by appending chart/:
curl https://openmeteo.org/api/stations/1334/timeseries/232/chart/
Example of response:
[
    {
        "timestamp": 1579292086,
        "min": "1.00",
        "max": "18.00",
        "mean": "14.00"
    },
    {
        "timestamp": 1580079590,
        "min": "4.00",
        "max": "22.00",
        "mean": "18.53"
    },
    ...
]
You can provide time limits using the query parameters start_date=<TIME>&end_date=<TIME>. For instance, to request data prior to 2015 only, we can make the following request:

curl 'https://openmeteo.org/api/stations/1334/timeseries/232/chart/?end_date=2015-01-01T00:00'
The purpose of this endpoint is to be used when creating a chart for the time series. When the user pans or zooms the chart, a new request with different start_date and/or end_date is made. While transferring the entire time series to the client would be simpler, it can be too large. This endpoint only provides 200 points, so the transfer is instant.
What the endpoint does is divide the time between start_date and end_date (or the entire time series time range) in 200 intervals. For each interval it returns the interval's statistics and the middle of the interval as the timestamp.
Other items of stations¶
Media and other station files¶
List station files:
curl https://openmeteo.org/api/stations/1334/files/
Response:
{
"count": 8,
"next": null,
"previous": null,
"results": [
{
"id": 39,
"last_modified": "2011-06-22T07:53:01.349877Z",
"date": "1998-01-05",
"content": "https://openmeteo.org/media/gentityfile/imported_hydria_gentityfile_1334-4.jpg",
"descr": "West view",
"remarks": "",
"gentity": 1334
},
...
]
}
Or you can get the detail of a single one:
curl https://openmeteo.org/api/stations/1334/files/39/
Response:
{
"id": 39,
"last_modified": "2011-06-22T07:53:01.349877Z",
"date": "1998-01-05",
"content": "https://openmeteo.org/media/gentityfile/imported_hydria_gentityfile_1334-4.jpg",
"descr": "West view",
"remarks": "",
"gentity": 1334
}
Get content of such files:
curl https://openmeteo.org/api/stations/1334/files/39/content/
The response is the contents of the file (usually binary data). The response headers contain the appropriate Content-Type (derived from the file's extension).
Events¶
List or get detail of station events:
curl https://openmeteo.org/api/stations/1334/events/
curl https://openmeteo.org/api/stations/1334/events/524/
Response example for the detail request:
{
"id": 524,
"last_modified": null,
"date": "1998-12-10",
"user": "",
"report": "Added air temperature and humidity sensor.",
"gentity": 1334,
"type": 2
}
For the list request, the result is a paginated list of items.
Pagination¶
Some responses contain a paginated list. This has the following format:
{
"count": 109,
"next": "http://openmeteo.org/api/stations/?page=2",
"previous": null,
"results": [
{...},
{...},
{...},
...
]
}
The returned object contains the following attributes:

- results: A list of items. Up to 20 items are returned (but this is configurable by specifying REST_FRAMEWORK["PAGE_SIZE"] in the settings).
- count: The total number of items this request returns. If they are 20 or fewer, there is no other page.
- next, previous: The URLs for the next and previous page of results.
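A sketch of walking through all pages with Python's requests library (illustrative):

import requests

def iter_items(url):
    # Follow the "next" links until the last page (where "next" is null).
    while url:
        page = requests.get(url).json()
        yield from page["results"]
        url = page["next"]

for station in iter_items("https://openmeteo.org/api/stations/"):
    print(station["name"])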
Error responses¶
When there is an error with the data of a POST, PATCH or PUT request, the response code is 400 and the content has an error message for each problematic field. For example:
curl -v -X POST -H "Authorization: token OAUTH-TOKEN" \
-d "gentity=1334" -d "variable=1234" -d "unit_of_measurement=1" \
https://openmeteo.org/api/stations/1334/timeseries/
Response:
{
"time_zone": [
"This field is required."
],
"variable": [
"Invalid pk \"1234\" - object does not exist."
]
}
If there is an error that does not apply to a specific field but to the data as a whole, the error message goes into non_field_errors:
{
"non_field_errors": [
"A time series with timeseries_group_id=2 and type=Initial already exists"
]
}