
QGIS Planet

Data exploration with Data Plotly for QGIS3

Data Plotly is a new plugin by Matteo Ghetta for QGIS3 which makes it possible to draw D3 graphs of vector layer attribute values. This is a huge step towards making QGIS a one stop shop for data exploration!

Data Plotly adds a new panel where graphs can be configured and viewed. Currently, there are nine different plot types:

The following examples use tree cadastre data from the city of Linz, Austria.

Scatter plots with two or three variables are supported. After picking the attributes you want to visualize, press “Create plot”.

If you change some settings and press “Create plot” again, by default, the new graph will be plotted on top of the old one. If you don’t want that to happen, press “Clean plot canvas” before creating a new plot.

The plots are interactive and display more information on mouse over, for example, the values of a box plot:

Even aggregate expressions are supported! Here’s the mean height of trees by type (deciduous L or coniferous N):

For more examples, I strongly recommend having a look at the plugin home page.


Intro to QGIS3 3D view with Viennese building data

In this post, I want to show how to visualize building block data published by the city of Vienna in 3D using QGIS. This data is interesting due to its level of detail. For example, here you can see the Albertina landmark in the center of Vienna:

and this is the corresponding 3D visualization, including the flying roof:

To enable 3D view in QGIS 2.99 (soon to be released as QGIS 3), go to View | New 3D Map View.

Viennese building data (https://www.data.gv.at/katalog/dataset/76c2e577-268f-4a93-bccd-7d5b43b14efd) is provided as Shapefiles. (Saber Razmjooei recently published a similar post using data from New York City in ESRI Multipatch format.) You can download a copy of the Shapefile and a DEM for the same area from my dropbox. The Shapefile contains the following relevant attributes for 3D visualization:

  • O_KOTE: absolute building height measured to the roof gutter(?) (“absolute Gebäudehöhe der Dachtraufe”)
  • U_KOTE: absolute height of the lower edge of the building block if floating above ground (“absolute Überbauungshöhe unten”)
  • HOEHE_DGM: absolute height of the terrain (“absolute Geländehöhe”)
  • T_KOTE: lowest point of the terrain for the given building block (“tiefster Punkt des Geländes auf den Kanten der Gebäudeteilfläche”)

To style the 3D view in QGIS 3, I set height to “U_KOTE” and extrusion to

O_KOTE-coalesce(U_KOTE,0)

both with a default value of 0 which is used if the field or expression is NULL:

The altitude clamping setting defines how height values are interpreted. Absolute clamping is perfect for the Viennese data since all height values are provided as absolute measures from 0. Other options are “relative” and “terrain” which add given elevation values to the underlying terrain elevation. According to the source of qgs3dutils:

  AltClampAbsolute,   //!< Z_final = z_geometry
  AltClampRelative,   //!< Z_final = z_terrain + z_geometry
  AltClampTerrain,    //!< Z_final = z_terrain

The gray colored polygon style shown in the map view on the top creates the illusion of shadows in the 3D view:

 

Beyond that, this example also features elevation model data which can be configured in the 3D View panel. I found it helpful to increase the terrain tile resolution (for example to 256 px) in order to get more detailed terrain renderings:

Overall, the results look pretty good. There are just a few small glitches in the rendering, as well as in the data. For example, the kiosk in front of the Albertina, which you can also see in the StreetView image, is lacking height information and therefore we can only see its “shadow” in the 3D rendering.

So far, I’ve found 3D rendering performance to be very good. It works great on my PC with an Nvidia graphics card. On my notebook with Intel Iris graphics, I’m unfortunately still experiencing crashes which I hope will be resolved in the future.


Movement data in GIS #11: FOSS4G2017 talk recordings

Many of the topics I’ve covered in recent “Movement data in GIS” posts have also been discussed at this year’s FOSS4G. Here’s a list of videos for you to learn more about the OGC Moving Features standard, modelling AIS data with FOSS, and more:

1. Introduction to the OGC Moving Features standard presented by Kyoung-Sook Kim from the Artificial Intelligence Research Center, Japan:

Another Perspective View of Cesium for OGC Moving Features from FOSS4G Boston 2017 on Vimeo.

2. Modeling AIS data using GDAL & PostGIS presented by Morten Aronsen from the Norwegian Defence Research Establishment:

Density mapping of ship traffic using FOSS4G in C# .NET from FOSS4G Boston 2017 on Vimeo.

3. 3D visualization of movement data from videos presented by Anna Petrasova from the Center for Geospatial Analysis, North Carolina State University:

Visualization and analysis of active transportation patterns derived from public webcams from FOSS4G Boston 2017 on Vimeo.

There are also a ton of Docker presentations on the FOSS4G2017 Vimeo channel, if you liked “Docker basics with Geodocker GeoServer”.




Movement data in GIS #10: open tools for AIS tracks from MarineCadastre.gov

MarineCadastre.gov is a great source for AIS data along the US coast. Their data formats and tools, though, are less open. Luckily, GDAL – and therefore QGIS – can read ESRI File Geodatabases (.gdb).

MarineCadastre.gov also offers a Track Builder script that creates lines out of the broadcast points. (It can also join additional information from the vessel and voyage layers.) We could reproduce the line creation step using tools such as Processing’s Points to path, but this post will show how to create PostGIS trajectories instead.

First, we have to import the points into PostGIS using either DB Manager or Processing’s Import into PostGIS tool:

Then we can create the trajectories. I’ve opted to create a materialized view:

The first part of the query creates a common table expression (CTE) called ptm (short for PointM). This step adds time stamp information to each point. The second part of the query then aggregates these PointMs into trajectories of type LineStringM.

CREATE MATERIALIZED VIEW ais.trajectory AS
 WITH ptm AS (
   -- Turn each broadcast point into a PointM that stores the
   -- timestamp (as Unix epoch) in the measure value
   SELECT b.mmsi,
     st_makepointm(
       st_x(b.geom),
       st_y(b.geom),
       date_part('epoch', b.basedatetime)
     ) AS pt,
     b.basedatetime t
   FROM ais.broadcast b
   ORDER BY mmsi, basedatetime
 )
 -- Aggregate the ordered PointMs into one LineStringM per vessel (mmsi)
 SELECT row_number() OVER () AS id,
   st_makeline(ptm.pt) AS st_makeline,
   ptm.mmsi,
   min(ptm.t) AS min_t,
   max(ptm.t) AS max_t
 FROM ptm
 GROUP BY ptm.mmsi
WITH DATA;

The trajectory start and end times (min_t and max_t) are optional but they can help speed up future queries.
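
Creating indexes on the materialized view makes such queries even faster. This is not part of the original workflow, just a sketch with illustrative index names; note that the geometry column of the view is called st_makeline because of the alias in the query above:

-- Sketch: spatial index plus a B-tree index for temporal filters
CREATE INDEX trajectory_geom_idx ON ais.trajectory USING gist (st_makeline);
CREATE INDEX trajectory_mmsi_time_idx ON ais.trajectory (mmsi, min_t, max_t);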

One of the advantages of creating trajectory lines is that they render many times faster than the original points.

Of course, we end up with some artifacts at the border of the dataset extent. (Files are split by UTM zone.) Trajectories connect the last known position before the vessel left the observed area with the position of reentry. This results, for example, in vertical lines which you can see in the bottom left corner of the above screenshot.
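
One possible way to hide these artifacts (a sketch, not part of the original workflow) is to filter out trajectories that contain an implausibly long jump between consecutive positions. The threshold is in layer units and would need tuning for the dataset at hand:

-- Sketch: skip trajectories with a jump longer than a chosen threshold
-- between consecutive AIS positions (threshold in layer units)
SELECT t.*
FROM ais.trajectory t
WHERE NOT EXISTS (
  SELECT 1
  FROM generate_series(1, ST_NPoints(t.st_makeline) - 1) AS i
  WHERE ST_Distance(ST_PointN(t.st_makeline, i),
                    ST_PointN(t.st_makeline, i + 1)) > 10000  -- tune this value
);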

With the trajectories ready, we can go ahead and start exploring the dataset. For example, we can visualize trajectory speed and/or create animations:

Purple trajectory segments are slow while green segments are faster

We can also perform trajectory analysis, such as trajectory generalization:

This is a first proof of concept. It would be great to have a script that automatically fetches the datasets for a specified time frame and list of UTM zones and loads them into PostGIS for further processing. In addition, it would be great to also make use of the information in the vessel and voyage tables, thus splitting up trajectories into individual voyages.




Movement data in GIS #9: trajectory data models

There are multiple ways to model trajectory data. This post takes a closer look at the OGC® Moving Features Encoding Extension: Simple Comma Separated Values (CSV). This standard was published in 2015, but I haven’t been able to find any reviews of it (in a GIS context or anywhere else).

The following analysis is based on the official OGC trajectory example at http://docs.opengeospatial.org/is/14-084r2/14-084r2.html#42. The header consists of two lines: the first line provides some meta information while the second defines the CSV columns. The data model is segment based. That is, each line describes a trajectory segment with at least two coordinate pairs (or triplets for 3D trajectories). For each segment, there is a start and an end time which can be specified as absolute or relative (offset) values:

@stboundedby,urn:x-ogc:def:crs:EPSG:6.6:4326,2D,50.23 9.23,50.31 9.27,2012-01-17T12:33:41Z,2012-01-17T12:37:00Z,sec
@columns,mfidref,trajectory,state,xsd:token,"type code",xsd:integer
a, 10,150,11.0 2.0 12.0 3.0,walking,1
b, 10,190,10.0 2.0 11.0 3.0,walking,2
a,150,190,12.0 3.0 10.0 3.0,walking,2
c, 10,190,12.0 1.0 10.0 2.0 11.0 3.0,vehicle,1

Let’s look at the first data row in detail:

  • a … trajectory id
  • 10 … start time offset from 2012-01-17T12:33:41Z in seconds
  • 150 … end time offset from 2012-01-17T12:33:41Z in seconds
  • 11.0 2.0 12.0 3.0 … trajectory coordinates: x1, y1, x2, y2
  • walking …  state
  • 1… type code

My main issues with this approach are

  1. They missed the chance to use WKT notation to make the CSV easily readable by existing GIS tools.
  2. As far as I can see, the data model requires a regular sampling interval because there is no way to store time stamps for intermediate positions along trajectory segments. (Irregular intervals can be stored using segments for each pair of consecutive locations.)

In the common GIS simple feature data model (which is point-based), the same data would look something like this:

traj_id,x,y,t,state,type_code
a,11.0,2.0,2012-01-17T12:33:51Z,walking,1
a,12.0,3.0,2012-01-17T12:36:11Z,walking,1
a,10.0,3.0,2012-01-17T12:36:51Z,walking,2
b,10.0,2.0,2012-01-17T12:33:51Z,walking,2
b,11.0,3.0,2012-01-17T12:36:51Z,walking,2
c,12.0,1.0,2012-01-17T12:33:51Z,vehicle,1
c,10.0,2.0,2012-01-17T12:35:21Z,vehicle,1
c,11.0,3.0,2012-01-17T12:36:51Z,vehicle,1

The main issue here is that there has to be some application logic that knows how to translate from points to trajectory. For example, trajectory a changes from state “walking” with type code 1 to type code 2 at 2012-01-17T12:36:11Z, but we have to decide whether to store the previous or the following state code for this individual point.

An alternative to the common simple feature model is the PostGIS trajectory data model (which is LineStringM-based). For this data model, we need to convert time stamps to numeric values, e.g. 2012-01-17T12:33:41Z is 1326803621 in Unix time. In this data model, the data looks like this:

traj_id,trajectory,state,type_code
a,LINESTRINGM(11.0 2.0 1326803631, 12.0 3.0 1326803771),walking,1
a,LINESTRINGM(12.0 3.0 1326803771, 10.0 3.0 1326803811),walking,2
b,LINESTRINGM(10.0 2.0 1326803631, 11.0 3.0 1326803811),walking,2
c,LINESTRINGM(12.0 1.0 1326803631, 10.0 2.0 1326803771, 11.0 3.0 1326803811),vehicle,1

This is very similar to the OGC data model, with the notable difference that every position is time-stamped (instead of just having segment start and end times). If one has movement data which is recorded at regular intervals, the OGC data model can be a bit more compact, but if the trajectories are sampled at irregular intervals, each point pair will have to be modeled as a separate segment.

Since the PostGIS data model is flexible, explicit, and comes with existing GIS tool support, it’s my clear favorite.
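
For completeness, here is a minimal sketch (not from the original post; table and column names are my own) of what the sample data could look like as an actual PostGIS table, including how to turn the measure values back into human-readable timestamps:

-- Minimal sketch of the PostGIS trajectory model for the sample data
CREATE TABLE trajectory_sample (
  traj_id    text,
  trajectory geometry(LineStringM, 4326),
  state      text,
  type_code  integer
);

INSERT INTO trajectory_sample VALUES
 ('a', ST_GeomFromText('LINESTRINGM(11.0 2.0 1326803631, 12.0 3.0 1326803771)', 4326), 'walking', 1),
 ('c', ST_GeomFromText('LINESTRINGM(12.0 1.0 1326803631, 10.0 2.0 1326803771, 11.0 3.0 1326803811)', 4326), 'vehicle', 1);

-- Recover the start time of each segment from the measure value
SELECT traj_id, to_timestamp(ST_M(ST_StartPoint(trajectory))) AS start_time
FROM trajectory_sample;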




Movement data in GIS extra: trajectory generalization code and sample data

Today’s post is a follow-up of Movement data in GIS #3: visualizing massive trajectory datasets. In that post, I summarized a concept for trajectory generalization. Now, I have published the scripts and sample data in my QGIS-Processing-tools repository on Github.

To add the trajectory generalization scripts to your Processing toolbox, you can use the Add scripts from files tool:

It is worth noting that Add scripts from files fails to correctly import potential help files for the scripts, but that’s not an issue this time around, since I haven’t gotten around to actually writing help files yet.

The scripts are used in the following order:

  1. Extract characteristic trajectory points
  2. Group points in space
  3. Compute flows between cells from trajectories

The sample project contains input data, as well as output layers of the individual tools. The only required input is a layer of trajectories, where trajectories have to be LINESTRINGM (note the M!) features:

Trajectory sample based on data provided by the GeoLife project

In Extract characteristic trajectory points, distance parameters are specified in meters, stop duration in seconds, and angles in degrees. The characteristic points contain start and end locations, as well as turns and stop locations:

The characteristic points are then clustered. In this tool, the distance has to be specified in layer units, which are degrees in case of the sample data.

Finally, we can compute flows between cells defined by these clusters:

Flow lines scaled by flow strength and cell centers scaled by counts

If you use these tools on your own data, I’d be happy to see what you come up with!




Movement data in GIS #8: edge bundling for flow maps

If you follow this blog, you’ll probably remember that I published a QGIS style for flow maps a while ago. The example showed domestic migration between the nine Austrian states, a rather small dataset. Even so, it required some manual tweaking to make the flow map readable. Even with only 72 edges, the map quickly gets messy:

Raw migration flows between Austrian states, line width scaled by flow strength

One popular approach in the data viz community to deal with this problem is edge bundling. The idea is to reduce visual clutter by generating bundles of similar edges.

Surprisingly, edge bundling is not available in desktop GIS. Existing implementations in the visual analytics field often run on GPUs because edge bundling is computationally expensive. Nonetheless, we have set out to implement force-directed edge bundling for the QGIS Processing toolbox [0]. The resulting scripts are available at https://github.com/dts-ait/qgis-edge-bundling.

The main procedure consists of two tools: bundle edges and summarize. Bundle edges takes the raw straight lines, incrementally adds intermediate nodes (called control points), and shifts them according to computed spring and electrostatic forces. If the input is 72 lines, the output is again 72 lines, but each line geometry has been bent so that similar lines overlap and form a bundle.

After this edge bundling step, most common implementations compute a line heatmap, that is, they determine, for each map pixel, the number of lines passing through that pixel. But QGIS does not support line heatmaps, and this approach also has issues distinguishing lines that run in opposite directions. We have therefore implemented a summarize tool that computes the local strength of the generated bundles.

Continuing our previous example: if the input is 72 lines, summarize breaks each line into its individual segments and determines the number of segments from other lines that are part of the same bundle. If a weight field is specified, each line is not just counted once but according to its weight value. The resulting bundle strength can be used to create a line layer style with data-defined line width:

Bundled migration flows

To avoid overlaps of flows in opposing directions, we define a line offset. Finally, summarize also adds a sequence number to the line segments. This sequence number is used to assign a line color on the gradient that indicates flow direction.

I already mentioned that edge bundling is computationally expensive. One reason is that we need to perform pairwise comparison of edges to determine if they are similar and should be bundled. This comparison results in a compatibility matrix and depending on the defined compatibility threshold, different bundles can be generated.

The following U.S. dataset contains around 4000 lines and bundling it takes a considerable amount of time.

One approach to speed up computations is to first use a quick clustering algorithm and then perform edge bundling on each cluster individually. If done correctly, clustering significantly reduces the size of each compatibility matrix.

In this example, we divided the edges into six clusters before bundling them. If you compare this result to the visualization at the top of this post (which did not use clustering), you’ll see some differences here and there but, overall, the results are quite similar:

Looking at these examples, you’ll probably spot a couple of issues. There are many additional ideas for potential improvements from existing literature which we have not implemented yet. If you are interested in improving these tools, please go ahead! The code and more examples are available on Github.

For more details, leave your email in a comment below and I’ll gladly send you the pre-print of our paper.

[0] Graser, A., Schmidt, J., Roth, F., & Brändle, N. (2017 online) Untangling Origin-Destination Flows in Geographic Information Systems. Information Visualization – Special Issue on Visual Movement Analytics.




Drive-time Isochrones from a single Shapefile using QGIS, PostGIS, and Pgrouting

This is a guest post by Chris Kohler.

Introduction:

This guide provides step-by-step instructions to produce drive-time isochrones using a single vector shapefile. The method described here involves building a routing network from a single vector shapefile of your roads data within a VirtualBox virtual machine. The network is built by creating start and end nodes (source and target nodes) on each road segment. We will use PostgreSQL, with the PostGIS and pgRouting extensions, as our database. Please consider the accuracy of this type of routing to be approximate, as the routing algorithms are based on node locations rather than specific addresses. I am currently working on an improved workflow that uses site address points as nodes to optimize results. One of the many benefits of this workflow is that there is no financial cost to produce it (outside of collecting your roads data). I will provide instructions for creating and using your virtual machine within this guide.

Steps: –Getting VirtualBox (begin)–

Intro 1. Download and install Oracle VM VirtualBox (https://www.virtualbox.org/wiki/Downloads).

Intro 2. Download OSGeo-Live 11 (https://live.osgeo.org/en/overview/overview.html).

Pictures used in this workflow will show 10.5, though version 11 can be applied similarly. Make sure you download the version: osgeo-live-11-amd64.iso. If you have trouble finding it, here is the direct link to the download (https://sourceforge.net/projects/osgeo-live/files/10.5/osgeo-live-10.5-amd64.iso/download)
Intro 3. Ready for virtual machine creation: We will use the downloaded OSGeo-Live 11 suite with a virtual machine we create to begin our workflow. The steps to create your virtual machine are listed below. For additional details on setting up your virtual machine with OSGeo-Live, see the steps from an earlier workshop (http://workshop.pgrouting.org/2.2.10/en/chapters/installation.html).

1. Create Virtual Machine: In this step we begin creating the virtual machine that will house our database.

Open Oracle VM VirtualBox Manager and select “New” located at the top left of the window.

VBstep1

Then fill out name, operating system, memory, etc. to create your first VM.

vbstep1.2

2. Add IDE Controller: The purpose of this step is to create a placeholder for the OSGeo-Live 11 suite. In the VirtualBox main window, right-click your newly created VM and open the settings.

vbstep2

In the settings window, on the left side select the storage tab.

Find the “Adds new storage controller” button located at the bottom of the tab. Be careful of other buttons labeled “Adds new storage attachment”! Select the “Adds new storage controller” button and a drop-down menu will appear. From the top of the drop-down, select “Add IDE Controller”.

vbstep2.2

vbstep2.3

You will see a new item appear in the center of the window under the “Storage Tree”.

3. Add Optical Drive: The OSGeo-Live 11 suite will be loaded into the virtual machine via an optical drive. Highlight the new IDE controller you created and select “add optical drive”.

vbstep3

A new window will pop up; select “Choose Disk”.

vbstep3.2

Locate your downloaded file “osgeo-live 11 amd64.iso” and click open. A new object should appear in the middle window under your new controller displaying “osgeo-live-11.0-amd64.iso”.

vbstep3.3

Finally, your virtual machine is ready for use.
Start your new virtual machine, then wait and follow the on-screen prompts to begin using it.

vbstep3.4

–Getting VirtualBox (end)–

4. Creating the routing database, and both extensions (postgis, pgrouting): The database we create and both extensions we add will provide the functions capable of producing isochrones.

To begin, open the command line tool (Ctrl+Alt+T), then log in to PostgreSQL by typing “psql -U user” into the command line and pressing Enter. For the purpose of clear instruction I will refer to the database name in this guide as “routing”; feel free to choose your own database name. Input the command below to create the database:

CREATE DATABASE routing;

You can use “\c routing” to connect to the database after creation.

step4

The next step after creating and connecting to your new database is to create both extensions. I find it easier to kill two birds with one stone by typing “psql -U user routing”, which will simultaneously log you into PostgreSQL and connect you to your routing database.

When you’re logged in to your database, apply the commands below to add both extensions:

CREATE EXTENSION postgis;
CREATE EXTENSION pgrouting;

step4.2

step4.3

5. Load shapefile to database: In this next step, the shapefile of your roads data must be placed into your virtual machine and further into your database.

My method is to email myself the roads shapefile, then download and copy it from within my virtual machine’s web browser. From the desktop of your virtual machine, open the folder named “Databases” and select the application “shp2pgsql”.

step5

Follow the UI of shp2pgsql to connect to your routing database you created in Step 4.

step5.2

Next, select “Add File”, find the roads shapefile you want to use for your isochrones (in this guide we will call it “roads_table”), and click Open.

step5.3

Finally, click “Import” to place your shapefile into your routing database.

6. Add source & target columns: The purpose of this step is to create columns which will serve as placeholders for our nodes data we create later.

There are multiple ways to add these columns into the roads_table. The most important part of this step is which table you choose to edit, the names of the columns you create, and the format of the columns. Take time to ensure the source & target columns are integer format. Below are the commands used in your command line for these functions.

ALTER TABLE roads_table ADD COLUMN "source" integer;
ALTER TABLE roads_table ADD COLUMN "target" integer;

step6

step6.2

7. Create topology: Next, we will use a function to attach a node to each end of every road segment in the roads_table. The function in this step will create these nodes. These newly-created nodes will be stored in the source and target columns we created earlier in step 6.

As well as creating nodes, this function will also create a new table containing all these nodes. The suffix “_vertices_pgr” is added to the name of your shapefile to create this new table. For example, using our guide’s shapefile name, “roads_table”, the nodes table will be named roads_table_vertices_pgr. However, we will not use the new table created by this function (roads_table_vertices_pgr). Below is the function, and a second simplified version, to be used in the command line for populating our source and target columns, in other words creating our network topology. Note the input format: the “geom” column in my case was called “the_geom” within my shapefile:

SELECT pgr_createTopology('roads_table', 0.001, 'geom', 'id',
 'source', 'target', rows_where := 'true', clean := false);

step7

Here is a direct link for more information on this function: http://docs.pgrouting.org/2.3/en/src/topology/doc/pgr_createTopology.html#pgr-create-topology

Below is a simplified example for my roads shapefile:

SELECT pgr_createTopology('roads_table', 0.001, 'the_geom', 'id');

8. Create a second nodes table: A second nodes table will be created for later use. This table will contain the node data generated by the pgr_createTopology function and will be named “node”. Below is the command for this process. Fill in your appropriate source and target fields as seen in the command below, as well as your shapefile name.

To begin, find the folder named “Databases” on the virtual machine’s desktop and open the program “pgAdmin III” located within.

step8

Connect to your routing database in pgAdmin window. Then highlight your routing database, and find “SQL” tool at the top of the pgAdmin window. The tool resembles a small magnifying glass.

step8.2

We input the below function into the SQL window of pgAdmin. Feel free to refer to this link for further information: (https://anitagraser.com/2011/02/07/a-beginners-guide-to-pgrouting/)

CREATE TABLE node AS
   SELECT row_number() OVER (ORDER BY foo.p)::integer AS id,
          foo.p AS the_geom
   FROM (     
      SELECT DISTINCT roads_table.source AS p FROM roads_table
      UNION
      SELECT DISTINCT roads_table.target AS p FROM roads_table
   ) foo
   GROUP BY foo.p;

step8.3

9. Create a routable network: After creating the second node table in step 8, we will combine this node table (node) with our shapefile (roads_table) into one new table (network) that will be used as the routing network. This table will be called “network” and will be capable of processing routing queries. Please input this command and execute it in the pgAdmin SQL tool as we did in step 8. Here is a reference for more information: https://anitagraser.com/2011/02/07/a-beginners-guide-to-pgrouting/

step8.2

 

CREATE TABLE network AS
   SELECT a.*, b.id as start_id, c.id as end_id
   FROM roads_table AS a
      JOIN node AS b ON a.source = b.the_geom
      JOIN node AS c ON a.target = c.the_geom;

step9.2

10. Create a “noded” view of the network: This view will be used to calculate the visual isochrones in later steps. Input this command and execute it in the pgAdmin SQL tool.

CREATE OR REPLACE VIEW network_nodes AS 
SELECT foo.id,
 st_centroid(st_collect(foo.pt)) AS geom 
FROM ( 
  SELECT network.source AS id,
         st_geometryn (st_multi(network.geom),1) AS pt 
  FROM network
  UNION 
  SELECT network.target AS id, 
         st_boundary(st_multi(network.geom)) AS pt 
  FROM network) foo 
GROUP BY foo.id;

step10

11. Add column for speed: This step may or may not apply, depending on whether your original shapefile contains a field of road speed values.

In reality, a network of roads will typically contain multiple speed limits. The shapefile you choose may have a speed field; without one, the filtering in the following steps cannot apply varying speeds to your routing network.

If speed values exist in your shapefile, we will use them to populate a new field, “traveltime”, that holds the travel time for every road segment in our network based on its geometry. First, we need to create a column to store these values. The name of our column will be “traveltime”, with the format double precision. Input this command and execute it in the command line tool as seen below.

ALTER TABLE network ADD COLUMN traveltime double precision;

step11

Next, we will populate the new column “traveltime” by calculating travel times using an equation. This equation takes each road segment’s length (shape_leng) and divides it by the rate of travel (either mph or kph). The sample command below uses mph as the rate, while the length (shape_leng) units in my roads_table are feet. Input this command and execute it in the pgAdmin SQL tool; the variable “X” is explained in more detail below.

UPDATE network SET traveltime = shape_leng / X*60

step11.2

How to find X, here is an example using 30 mph as the rate. To find X, we convert 30 miles to feet: since 5280 ft = 1 mile, we multiply 30 by 5280, which gives us 158400 ft. Our rate has been converted from 30 miles per hour to 158400 feet per hour. For a rate of 30 mph, our equation for the field “traveltime” becomes “shape_leng / 158400*60” (multiplying by 60 converts the result from hours to minutes). To restrict this calculation’s output, we add a condition such as “where speed = 30;”, which applies the calculated value only to features with a value of 30 in the “speed” field. Note: your “speed” field may be named differently.

UPDATE network SET traveltime = shape_leng / 158400*60 where speed = 30;

Repeat this step for each speed value in your shapefile. Examples:

UPDATE network SET traveltime = shape_leng / X*60 where speed = 45;
UPDATE network SET traveltime = shape_leng / X*60 where speed = 55;

The back end is done. Great Job!

Our next step will be visualizing our data in QGIS. Open and connect QGIS to your routing database by right-clicking “PostGIS” in the Browser Panel within the QGIS main window. Confirm the checkbox “Also list tables with no geometry” is checked to allow you to see the interior of your database more clearly. Fill out the name of your routing database and click “OK”.

If done correctly, you will have access from QGIS to the tables and views created in your routing database. Feel free to visualize your network by dragging and dropping the network table into your QGIS Layers Panel. From here you can use the identify tool to select a road segment and see the source and target nodes contained within it. The node you choose will be used in the next step to create the drive-time views.

12. Create views: In this step, we create views from a function designed to determine the travel time cost. Transforming these views with further tools will visualize the travel time costs as isochrones.

The command below will be how you start querying your database to create drive-time isochrones. Begin in QGIS by dragging your network table into the map. The network will be displayed as vector lines. Simply select the road segment closest to the point of interest you would like to build your isochrone around. Then identify the road segment using the identify tool and locate the source and target fields.

step12

step12.2

Place the source or target field value in the command below where you see VALUE, in all caps.

This will serve you now as an isochrone catchment function for this workflow. Please feel free to use this command repeatedly for creating new isochrones by substituting the source value. Please input this command and execute in SQL pgAdmin tool.

*AT THE BOTTOM OF THIS WORKFLOW I PROVIDED AN EXAMPLE USING SOURCE VALUE “2022”

CREATE OR REPLACE VIEW "view_name" AS
SELECT di.seq,
       di.id1,
       di.id2,
       di.cost,
       pt.id,
       pt.geom
FROM pgr_drivingdistance('SELECT
     gid::integer AS id,
     source::integer AS source,
     target::integer AS target,
     traveltime::double precision AS cost
       FROM network'::text, VALUE::bigint,
    100000::double precision, false, false)
    di(seq, id1, id2, cost)
JOIN network_nodes pt ON di.id1 = pt.id;

step12.3

13.Visualize Isochrone: Applying tools to the view will allow us to adjust the visual aspect to a more suitable isochrone overlay.

​After creating your view, a new item in your routing database is created, using the “view_name” you chose. Drag-and-drop this item into your QGIS LayersPanel. You will see lots of small dots which represent the nodes.

In the figure below, I named my view “take1”.

step13

Each node you see contains a drive-time value, “cost”, which represents the time used to travel from the node you input in step 12’s function.

step13.2

Start by installing the QGIS plug-in “Interpolation” by opening the Plugin Manager in the QGIS interface.

step13.3

Next, at the top of QGIS window select “Raster” and a drop-down will appear, select “Interpolation”.

step13.4

 

A new window pops up and asks you for input.

step13.5

Select your view as the vector layer, select “cost” as your interpolation attribute, and then click “Add”.

step13.6

A new vector layer will show up at the bottom of the window; make sure the type is Points. For the output, on the other half of the window, keep the interpolation method as “TIN” and edit the output file location and name. Check the box “Add result to project”.

Note: decreasing the cellsize of X and Y will increase the resolution but at the cost of performance.

Click “OK” on the bottom right of the window.

step13.7

A black and white raster will appear in QGIS, and a new item will be created in the Layers Panel.

step13.8

Take some time to visualize the raster by coloring and adjusting values in symbology until you are comfortable with the look.

step13.9

step13.10

14. Create contours of our isochrone: Contours can be calculated from the isochrone as well.

Near the top of the QGIS window, open the “Raster” menu drop-down and select Extraction → Contour.

step14

Fill out the appropriate interval between contour lines but leave the check box “Attribute name” unchecked. Click “OK”.

step14.2

step14.3

15. Zip and Share: Find where you saved your TIN and contours, compress them into a zip folder by highlighting them both and right-clicking to select “compress”. Email the compressed folder to yourself to export it out of your virtual machine.

Example Isochrone catchment for this workflow:

CREATE OR REPLACE VIEW "2022" AS
SELECT di.seq, di.id1, di.id2, di.cost,
       pt.id, pt.geom
FROM pgr_drivingdistance('SELECT gid::integer AS id,
     source::integer AS source, target::integer AS target,
     traveltime::double precision AS cost FROM network'::text,
     2022::bigint, 100000::double precision, false, false)
   di(seq, id1, id2, cost)
JOIN network_nodes pt
ON di.id1 = pt.id;

References: Oracle VM VirtualBox, OSGeo-Live 11 amd64 ISO, Workshop FOSS4G Bonn (http://workshop.pgrouting.org/2.2.10/en/index.html)


Fixing invalid polygon geometries

Invalid geometries can cause a lot of headache: from missing features to odd analysis results.

This post aims to illustrate one of the most common issues and presents an approach that can help with these errors.

The dataset used for this example is the Alaska Shapefile from the QGIS sample data:

This dataset has a couple of issues. One way to find out if a dataset contains errors is the Check Validity tool in the Processing toolbox:

If there are errors, a layer called Error output will be loaded. In our case, there are multiple issues:

If we try to use this dataset for spatial analysis, there will likely be errors. For example, using the Fixed distance buffer tool results in missing features:

Note the errors in the Processing log message panel:

Feature ### has invalid geometry. Skipping ...

So what can we do?

In my experience, GRASS can work wonders for fixing these kinds of issues. The idea is to run v.buffer.distance with the distance set to zero:

This will import the dataset into GRASS and run the buffer algorithm without actually growing the polygons. Finally, it should export a fixed version of the geometries:

A quick validity check with the Check validity tool confirms that there are no issues left.
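
If the dataset already lives in PostGIS, the same zero-distance buffer trick can also be applied directly in SQL. This is just a sketch, not part of the QGIS/GRASS workflow above, and the table and column names (alaska, geom) are illustrative:

-- Sketch: repair invalid polygons directly in PostGIS
UPDATE alaska
SET geom = ST_Multi(ST_Buffer(geom, 0))   -- zero-distance buffer trick
WHERE NOT ST_IsValid(geom);

-- PostGIS 2.0+ also offers ST_MakeValid, which preserves more of the input:
-- UPDATE alaska SET geom = ST_MakeValid(geom) WHERE NOT ST_IsValid(geom);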

 


Getting started with GeoMesa using Geodocker

In a previous post, I showed how to use docker to run a single application (GeoServer) in a container and connect to it from your local QGIS install. Today’s post is about running a whole bunch of containers that interact with each other. More specifically, I’m using the images provided by Geodocker. The Geodocker repository provides a setup containing Accumulo, GeoMesa, and GeoServer. If you are not familiar with GeoMesa yet:

GeoMesa is an open-source, distributed, spatio-temporal database built on a number of distributed cloud data storage systems … GeoMesa aims to provide as much of the spatial querying and data manipulation to Accumulo as PostGIS does to Postgres.

The following sections show how to load data into GeoMesa, perform basic queries via command line, and finally publish data to GeoServer. The content is based largely on two GeoMesa tutorials: Geodocker: Bootstrapping GeoMesa Accumulo and Spark on AWS and Map-Reduce Ingest of GDELT, as well as Diethard Steiner’s post on Accumulo basics. The key difference is that this tutorial is written to be run locally (rather than on AWS or similar infrastructure) and that it spells out all user names and passwords preconfigured in Geodocker.

This guide was tested on Ubuntu and assumes that Docker is already installed. If you haven’t yet, you can install Docker as described in Install using the repository.

To get Geodocker set up, we need to get the code from Github and run the docker-compose command:

$ git clone https://github.com/geodocker/geodocker-geomesa.git
$ cd geodocker-geomesa/geodocker-accumulo-geomesa/
$ docker-compose up

This will take a while.

When docker-compose is finished, use a second console to check the status of all containers:

$ docker ps
CONTAINER ID        IMAGE                                     COMMAND                  CREATED             STATUS              PORTS                                        NAMES
4a238494e15f        quay.io/geomesa/accumulo-geomesa:latest   "/sbin/entrypoint...."   19 hours ago        Up 23 seconds                                                    geodockeraccumulogeomesa_accumulo-tserver_1
e2e0df3cae98        quay.io/geomesa/accumulo-geomesa:latest   "/sbin/entrypoint...."   19 hours ago        Up 22 seconds       0.0.0.0:50095->50095/tcp                     geodockeraccumulogeomesa_accumulo-monitor_1
e7056f552ef0        quay.io/geomesa/accumulo-geomesa:latest   "/sbin/entrypoint...."   19 hours ago        Up 24 seconds                                                    geodockeraccumulogeomesa_accumulo-master_1
dbc0ffa6c39c        quay.io/geomesa/hdfs:latest               "/sbin/entrypoint...."   19 hours ago        Up 23 seconds                                                    geodockeraccumulogeomesa_hdfs-data_1
20e90a847c5b        quay.io/geomesa/zookeeper:latest          "/sbin/entrypoint...."   19 hours ago        Up 24 seconds       2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp   geodockeraccumulogeomesa_zookeeper_1
997b0e5d6699        quay.io/geomesa/geoserver:latest          "/opt/tomcat/bin/c..."   19 hours ago        Up 22 seconds       0.0.0.0:9090->9090/tcp                       geodockeraccumulogeomesa_geoserver_1
c17e149cda50        quay.io/geomesa/hdfs:latest               "/sbin/entrypoint...."   19 hours ago        Up 23 seconds       0.0.0.0:50070->50070/tcp                     geodockeraccumulogeomesa_hdfs-name_1

At the time of writing this post, the Geomesa version installed in this way is 1.3.2:

$ docker exec geodockeraccumulogeomesa_accumulo-master_1 geomesa version
GeoMesa tools version: 1.3.2
Commit ID: 2b66489e3d1dbe9464a9860925cca745198c637c
Branch: 2b66489e3d1dbe9464a9860925cca745198c637c
Build date: 2017-07-21T19:56:41+0000

Loading data

First we need to get some data. The available tutorials often refer to data published by the GDELT project. Let’s download data for three days, unzip it and copy it to the geodockeraccumulogeomesa_accumulo-master_1 container for further processing:

$ wget http://data.gdeltproject.org/events/20170710.export.CSV.zip
$ wget http://data.gdeltproject.org/events/20170711.export.CSV.zip
$ wget http://data.gdeltproject.org/events/20170712.export.CSV.zip
$ unzip 20170710.export.CSV.zip
$ unzip 20170711.export.CSV.zip
$ unzip 20170712.export.CSV.zip
$ docker cp ~/Downloads/geomesa/gdelt/20170710.export.CSV geodockeraccumulogeomesa_accumulo-master_1:/tmp/20170710.export.CSV
$ docker cp ~/Downloads/geomesa/gdelt/20170711.export.CSV geodockeraccumulogeomesa_accumulo-master_1:/tmp/20170711.export.CSV
$ docker cp ~/Downloads/geomesa/gdelt/20170712.export.CSV geodockeraccumulogeomesa_accumulo-master_1:/tmp/20170712.export.CSV

Loading or importing data is called “ingesting” in Geomesa parlance. Since the format of GDELT data is already predefined (the CSV mapping is defined in geomesa-tools/conf/sfts/gdelt/reference.conf), we can ingest the data:

$ docker exec geodockeraccumulogeomesa_accumulo-master_1 geomesa ingest -c geomesa.gdelt -C gdelt -f gdelt -s gdelt -u root -p GisPwd /tmp/20170710.export.CSV
$ docker exec geodockeraccumulogeomesa_accumulo-master_1 geomesa ingest -c geomesa.gdelt -C gdelt -f gdelt -s gdelt -u root -p GisPwd /tmp/20170711.export.CSV
$ docker exec geodockeraccumulogeomesa_accumulo-master_1 geomesa ingest -c geomesa.gdelt -C gdelt -f gdelt -s gdelt -u root -p GisPwd /tmp/20170712.export.CSV

Once the data is ingested, we can have a look at the created table by asking GeoMesa to describe the created schema:

$ docker exec geodockeraccumulogeomesa_accumulo-master_1 geomesa describe-schema -c geomesa.gdelt -f gdelt -u root -p GisPwd
INFO  Describing attributes of feature 'gdelt'
globalEventId       | String
eventCode           | String
eventBaseCode       | String
eventRootCode       | String
isRootEvent         | Integer
actor1Name          | String
actor1Code          | String
actor1CountryCode   | String
actor1GroupCode     | String
actor1EthnicCode    | String
actor1Religion1Code | String
actor1Religion2Code | String
actor2Name          | String
actor2Code          | String
actor2CountryCode   | String
actor2GroupCode     | String
actor2EthnicCode    | String
actor2Religion1Code | String
actor2Religion2Code | String
quadClass           | Integer
goldsteinScale      | Double
numMentions         | Integer
numSources          | Integer
numArticles         | Integer
avgTone             | Double
dtg                 | Date    (Spatio-temporally indexed)
geom                | Point   (Spatially indexed)

User data:
  geomesa.index.dtg     | dtg
  geomesa.indices       | z3:4:3,z2:3:3,records:2:3
  geomesa.table.sharing | false

In the background, our data is stored in Accumulo tables. For a closer look, open an interactive terminal in the Accumulo master image:

$ docker exec -i -t geodockeraccumulogeomesa_accumulo-master_1 /bin/bash

and open the Accumulo shell:

# accumulo shell -u root -p GisPwd

When we store data in GeoMesa, there is not only one table but several. Each table has a specific purpose: storing metadata, records, or indexes. All tables get prefixed with the catalog table name:

root@accumulo> tables
accumulo.metadata
accumulo.replication
accumulo.root
geomesa.gdelt
geomesa.gdelt_gdelt_records_v2
geomesa.gdelt_gdelt_z2_v3
geomesa.gdelt_gdelt_z3_v4
geomesa.gdelt_queries
geomesa.gdelt_stats

By default, GeoMesa creates three indices:

  • Z2: for queries with a spatial component but no temporal component.
  • Z3: for queries with both a spatial and temporal component.
  • Record: for queries by feature ID.

But let’s get back to GeoMesa …

Querying data

Now we are ready to query the data. Let’s perform a simple attribute query first. Make sure that you are in the interactive terminal in the Accumulo master image:

$ docker exec -i -t geodockeraccumulogeomesa_accumulo-master_1 /bin/bash

This query filters for a certain event id:

# geomesa export -c geomesa.gdelt -f gdelt -u root -p GisPwd -q "globalEventId='671867776'"
Using GEOMESA_ACCUMULO_HOME = /opt/geomesa
id,globalEventId:String,eventCode:String,eventBaseCode:String,eventRootCode:String,isRootEvent:Integer,actor1Name:String,actor1Code:String,actor1CountryCode:String,actor1GroupCode:String,actor1EthnicCode:String,actor1Religion1Code:String,actor1Religion2Code:String,actor2Name:String,actor2Code:String,actor2CountryCode:String,actor2GroupCode:String,actor2EthnicCode:String,actor2Religion1Code:String,actor2Religion2Code:String,quadClass:Integer,goldsteinScale:Double,numMentions:Integer,numSources:Integer,numArticles:Integer,avgTone:Double,dtg:Date,*geom:Point:srid=4326
d9e6ab555785827f4e5f03d6810bbf05,671867776,120,120,12,1,UNITED STATES,USA,USA,,,,,,,,,,,,3,-4.0,20,2,20,8.77192982456137,2007-07-13T00:00:00.000Z,POINT (-97 38)
INFO  Feature export complete to standard out in 2290ms for 1 features

If the attribute query runs successfully, we can advance to some geo goodness … that’s why we are interested in GeoMesa after all … and perform a spatial query:

# geomesa export -c geomesa.gdelt -f gdelt -u root -p GisPwd -q "CONTAINS(POLYGON ((0 0, 0 90, 90 90, 90 0, 0 0)),geom)" -m 3
Using GEOMESA_ACCUMULO_HOME = /opt/geomesa
id,globalEventId:String,eventCode:String,eventBaseCode:String,eventRootCode:String,isRootEvent:Integer,actor1Name:String,actor1Code:String,actor1CountryCode:String,actor1GroupCode:String,actor1EthnicCode:String,actor1Religion1Code:String,actor1Religion2Code:String,actor2Name:String,actor2Code:String,actor2CountryCode:String,actor2GroupCode:String,actor2EthnicCode:String,actor2Religion1Code:String,actor2Religion2Code:String,quadClass:Integer,goldsteinScale:Double,numMentions:Integer,numSources:Integer,numArticles:Integer,avgTone:Double,dtg:Date,*geom:Point:srid=4326
139346754923c07e4f6a3ee01a3f7d83,671713129,030,030,03,1,NIGERIA,NGA,NGA,,,,,LIBYA,LBY,LBY,,,,,1,4.0,16,2,16,-1.4060533085217,2017-07-10T00:00:00.000Z,POINT (5.43827 5.35886)
9e8e885e63116253956e40132c62c139,671928676,042,042,04,1,NIGERIA,NGA,NGA,,,,,OPEC,IGOBUSOPC,,OPC,,,,1,1.9,5,1,5,-0.90909090909091,2017-07-10T00:00:00.000Z,POINT (5.43827 5.35886)
d6c6162d83c72bc369f68bcb4b992e2d,671817380,043,043,04,0,OPEC,IGOBUSOPC,,OPC,,,,RUSSIA,RUS,RUS,,,,,1,2.8,2,1,2,-1.59453302961275,2017-07-09T00:00:00.000Z,POINT (5.43827 5.35886)
INFO  Feature export complete to standard out in 2127ms for 3 features

Functions that can be used in export command queries/filters are (E)CQL functions from geotools for the most part. More sophisticated queries require SparkSQL.

Publishing GeoMesa tables with GeoServer

To view data in GeoServer, go to http://localhost:9090/geoserver/web. Login with admin:geoserver.

First, we create a new workspace called “geomesa”.

Then, we can create a new store of type Accumulo (GeoMesa) called “gdelt”. Use the following parameters:

instanceId = accumulo
zookeepers = zookeeper
user = root
password = GisPwd
tableName = geomesa.gdelt

Geodocker

Then we can configure a Layer that publishes the content of our new data store. It is good to check the coordinate reference system settings and insert the bounding box information:

Geodocker2

To preview the WMS, go to GeoServer’s preview:

http://localhost:9090/geoserver/geomesa/wms?service=WMS&version=1.1.0&request=GetMap&layers=geomesa:gdelt&styles=&bbox=-180.0,-90.0,180.0,90.0&width=768&height=384&srs=EPSG:4326&format=application/openlayers&TIME=2017-07-10T00:00:00.000Z/2017-07-10T01:00:00.000Z#

Which will look something like this:

Geodocker3

GeoMesa data filtered using CQL in GeoServer preview

For more display options, check the official GeoMesa tutorial.

If you check the preview URL more closely, you will notice that it specifies a time window:

&TIME=2017-07-10T00:00:00.000Z/2017-07-10T01:00:00.000Z

This is exactly where QGIS TimeManager could come in: Using TimeManager for WMS-T layers. Interoperability for the win!


Movement data in GIS #7: animated trajectories with TimeManager

In this post, we use TimeManager to visualize the position of a moving object over time along a trajectory. This is another example of what is possible thanks to QGIS’ geometry generator feature. The result can look like this:

What makes this approach interesting is that the trajectory is stored in PostGIS as a LinestringM instead of storing individual trajectory points. So there is only one line feature loaded in QGIS:

(In part 2 of this series, we already saw how a geometry generator can be used to visualize speed along a trajectory.)

The layer is added to TimeManager using t_start and t_end attributes to define the trajectory’s temporal extent.

TimeManager exposes an animation_datetime() function which returns the current animation timestamp, that is, the timestamp that is also displayed in the TimeManager dock, as well as on the map (if we don’t explicitly disable this option).

Once TimeManager is set up, we can edit the line style to add a point marker to visualize the position of the moving object at the current animation timestamp. To do that, we interpolate the position along the trajectory segments. The first geometry generator expression splits the trajectory in its segments:
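
The screenshot with the exact expression is not included here. A typical expression for this step (my reconstruction, so the one in the original screenshot may differ) turns the line into a multi-line with one part per segment:

segments_to_lines($geometry)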

The second geometry generator expression interpolates the position on the segment that contains the current TimeManager animation time:

The WHEN statement compares the trajectory segment’s start and end times to the current TimeManager animation time. Afterwards, the line_interpolate_point function is used to draw the point marker at the correct position along the segment:

CASE 
WHEN (
m(end_point(geometry_n($geometry,@geometry_part_num)))
> second(age(animation_datetime(),to_datetime('1970-01-01 00:00')))
AND
m(start_point(geometry_n($geometry,@geometry_part_num)))
<= second(age(animation_datetime(),to_datetime('1970-01-01 00:00')))
)
THEN
line_interpolate_point( 
  geometry_n($geometry,@geometry_part_num),
  1.0 * (
    second(age(animation_datetime(),to_datetime('1970-01-01 00:00')))
	- m(start_point(geometry_n($geometry,@geometry_part_num)))
  ) / (
    m(end_point(geometry_n($geometry,@geometry_part_num)))
	- m(start_point(geometry_n($geometry,@geometry_part_num)))
  ) 
  * length(geometry_n($geometry,@geometry_part_num))
)
END

Here is the animation result for a part of the trajectory between 08:00 and 09:00:




Dynamic styling expressions with aggregates & variables

In a recent post, we used aggregates for labeling purposes. This time, we will use them to create a dynamic data driven style, that is, a style that automatically adjusts to the minimum and maximum values of any numeric field … and that field will be specified in a variable!

But let’s look at this step by step. (This example uses climate.shp from the QGIS sample dataset.)

Here is a basic expression for data defined symbol color using a color ramp:
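
The expression itself was shown as a screenshot; a comparable expression (a sketch with a hard-coded value range for the T_F_JUL field, not necessarily the exact expression from the screenshot) could look like this:

ramp_color('Spectral', scale_linear("T_F_JUL", 0, 100, 0, 1))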

Similarly, we can configure a data defined symbol size to create a style like this:

Temperatures in July

To stretch the color ramp from the attribute field’s minimum to maximum value, we can use aggregate functions:
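
Again, the exact expression was shown as a screenshot; using the minimum and maximum aggregate functions, the sketch from above becomes something like:

ramp_color('Spectral',
  scale_linear("T_F_JUL", minimum("T_F_JUL"), maximum("T_F_JUL"), 0, 1))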

That’s nice but if we want to be able to quickly switch to a different attribute field, we now have two expressions (one for color and one for size) to change. This can get repetitive and can be the source of errors if we miss an expression and don’t update it correctly …

To avoid these issues, we use a layer variable to store the name of the field that we want to use. Layer variables can be configured in layer properties:

Then we adjust our expression to use the layer variable. Here is where it gets a bit tricky. We cannot simply replace the field name “T_F_JUL” with our new layer variable @style_field, since this creates an invalid expression. Instead, we have to use the attribute function:
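
A sketch of what such an expression can look like (again a reconstruction, not the original screenshot):

ramp_color('Spectral',
  scale_linear(attribute($currentfeature, @style_field),
               minimum(attribute($currentfeature, @style_field)),
               maximum(attribute($currentfeature, @style_field)),
               0, 1))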

With this expression in place, we can now change the layer variable to T_M_JAN and the style automatically adjusts accordingly:

Temperatures in January

Note how the style also labels the point with the highest temperature? That’s because the style also defines an expression for the show labels option.

It is worth noting that, in most cases, temperature maps should not be styled using a color ramp that adjusts to a specific dataset’s min and max values. Instead, we would want a style with fixed value to color mapping that makes different datasets comparable. In many other use cases, however, it is very convenient to have a style that can automatically adapt to the data.


Docker basics with Geodocker GeoServer

Today’s post is mostly notes-to-self about using Docker. These steps were tested on a fresh Ubuntu 17.04 install.

Install Docker as described in https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/ “Install using the repository” section.

Then add the current user to the docker user group (otherwise, all docker commands have to be prefixed with sudo)

$ sudo gpasswd -a $USER docker
$ newgrp docker

Test run the hello world image

$ docker run hello-world

For some more Docker basics, see https://github.com/docker/labs/blob/master/beginner/chapters/alpine.md.

Pull Geodocker images, for example from https://quay.io/organization/geodocker

$ docker pull quay.io/geodocker/base
$ docker pull quay.io/geodocker/geoserver

Get a list of pulled images

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/geodocker/geoserver latest c60753e05956 8 months ago 904MB
quay.io/geodocker/base latest 293209905a47 8 months ago 646MB

Test run quay.io/geodocker/base

$ docker run -it --rm quay.io/geodocker/base:latest java -version
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)

Run quay.io/geodocker/geoserver

$ docker run --name geoserver -e AUTHOR="Anita" \
 -d -P quay.io/geodocker/geoserver

The important options are:

-d … Run container in background and print container ID

-P … Publish all exposed ports to random ports

Check if the image is running

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
684598b57868 quay.io/geodocker/geoserver "/opt/tomcat/bin/c..." 
2 hours ago Up 2 hours 0.0.0.0:32772->9090/tcp geoserver

You can also check which ports to access using

$ docker port geoserver
9090/tcp -> 0.0.0.0:32772

Geoserver should now run on http://localhost:32772/geoserver/ (user=admin, password=geoserver)

For more tests, let’s connect to Geoserver from QGIS

All default example layers are listed

and can be loaded into QGIS


Even more aggregations: QGIS point cluster renderer

In the previous post, I demonstrated the aggregation support in QGIS expressions. Another popular request is to aggregate or cluster point features that are close to each other. If you have been following the QGIS project on mailing list or social media, you probably remember the successful cluster renderer crowd-funding campaign by North Road.

The point cluster renderer is implemented and can be tested in the current developer version. The renderer is highly customizable, for example, by styling the cluster symbol and adjusting the distance between points that should be in the same cluster:

Beyond this basic use case, the point cluster renderer can also be combined with categorized visualizations, and cluster symbols can be colored in the corresponding category color and scaled by cluster size, as demoed in this video by the developer Nyall Dawson:


Aggregate all the things! (QGIS expression edition)

In the past, aggregating field values was reserved to databases, virtual layers, or dedicated plugins, but since QGIS 2.16, there is a way to compute aggregates directly in QGIS expressions. This means that we can compute sums, means, counts, minimum and maximum values and more!

Here’s a quick tutorial to get you started:

Load the airports from the QGIS sample dataset. We’ll use the elevation values in the ELEV field for the following examples:

QGIS sample airport dataset – categorized by USE attribute

The most straightforward expressions are those that only have one parameter: the name of the field that should be aggregated, for example:

mean(ELEV)

We can also add a second parameter: a group-by field, for example, to group by the airport usage type, we use:

mean(ELEV,USE)

To top it all off, we can add a third parameter: a filter expression, for example, to show only military airports, we use:

mean(ELEV,USE,USE='Military')

Last but not least, all this aggregating goodness also works across layers! For example, here is the Alaska layer labeled with the airport layer feature count:

aggregate('airports','count',"ID")

If you are using relations, you can even go one step further and calculate aggregates on feature relations.
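
As a minimal sketch, such an expression could look like this (the relation id 'airport_runways' and the "LENGTH" field are hypothetical and depend on the relations defined in your project):

relation_aggregate('airport_runways','sum',"LENGTH")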


Upcoming QGIS3 features – exploring the current developer version

There are tons of things going on under the hood of QGIS for the move from version 2 to version 3. Among other things, we’ll have access to new versions of Qt and Python. If you are using a HiDPI screen, you should see some notable improvements in the user interface of QGIS 3.

But of course QGIS 3 is not “just” a move to updated dependencies. Like in any other release, there are many new features to look forward to. This list is only a start, covering tools that have already landed in the developer version 2.99:

Improved geometry editing 

When editing geometries, the node tool now behaves more like editing tools in web maps: instead of double-clicking to add a new node, the tool automatically suggests a new node when the cursor hovers over a line segment.

In addition, the improvements include an undo and redo panel for quick access to previous edit states.

Improved Processing dialogs

Like many other parts of the QGIS user interface, Processing dialogs now prominently display the function help.

In addition, GDAL/OGR tools also show the underlying GDAL/OGR command, which can be copied and pasted for use elsewhere.

New symbols and predefined symbol groups

The default symbols have been reworked and categorized into different symbol groups. Of course, everything can be customized in the Symbol Library.

Search in layer and project properties

Both the layer properties and the project properties dialog now feature a search field in the top left corner. This nifty little addition makes it much easier to find specific settings fast.

Save images at custom sizes

Last but not least, a long-awaited feature: it’s finally possible to specify the exact size and properties of images created using Project | Save as image.

Of course, we still expect many other features to arrive in 3.0. For example, one of the successful QGIS grant applications was for adding 3D support to QGIS. Additionally, there is an ongoing campaign to fund better layout and reporting functionality in QGIS print composer. Please support it if you can!

 


Movement data in GIS #6: updates from AGILE2017

AGILE 2017 is the annual international conference on Geographic Information Science of the Association of Geographic Information Laboratories in Europe (AGILE), which was established in 1998 to promote academic teaching and research on GIS.

This year’s conference in Wageningen was my first time at AGILE. I had the honor of presenting our recent work on pedestrian navigation with landmarks [Graser, 2017].

If you are interested in trying it, there is an online demo. The conference also provided numerous pointers toward ideas for future improvements, including [Götze and Boye, 2016] and [Du et al., 2017].

There weren’t too many talks on movement data in GIS at AGILE, but on the conceptual side, I really enjoyed David Jonietz’s talk on how to describe trajectory processing steps:

Source: [Jonietz and Bucher, 2017]

In the pre-conference workshop I attended, there was also an interesting presentation on analyzing trajectory data with PostGIS by PhD candidate Meihan Jin.

I’m also looking forward to reading [Wiratma et al., 2017] “On Measures for Groups of Trajectories” because I think that the presentation only scratched the surface.

References

[Du et al., 2017] Du, S., Wang, X., Feng, C. C., & Zhang, X. (2017). Classifying natural-language spatial relation terms with random forest algorithm. International Journal of Geographical Information Science, 31(3), 542-568.
[Götze and Boye, 2016] Götze, J., & Boye, J. (2016). Learning landmark salience models from users’ route instructions. Journal of Location Based Services, 10(1), 47-63.
[Graser, 2017] Graser, A. (2017). Towards landmark-based instructions for pedestrian navigation systems using OpenStreetMap, AGILE2017, Wageningen, Netherlands.
[Jonietz and Bucher, 2017] Jonietz, D., Bucher, D. (2017). Towards an Analytical Framework for Enriching Movement Trajectories with Spatio-Temporal Context Data, AGILE2017, Wageningen, Netherlands.
[Wiratma et al., 2017] Wiratma L., van Kreveld M., Löffler M. (2017) On Measures for Groups of Trajectories. In: Bregt A., Sarjakoski T., van Lammeren R., Rip F. (eds) Societal Geo-innovation. GIScience 2017. Lecture Notes in Geoinformation and Cartography. Springer, Cham


Report from the Essen dev meeting

From 28th April to 1st May, the QGIS project organized another successful developer meeting at the Linuxhotel in Essen, Germany. Here is a quick summary of the key topics I worked on during these days.

New logo rollout

It’s time to get the QGIS 3 logo out there! We’ve started changing our social media profile pictures and website headers to the new design:

Resource sharing platform 

In QGIS 3, the resource sharing platform will be available by default – just like the plugin manager is today in QGIS 2. We are constantly looking for people to share their mapping resources with the community. During this developer meeting Paolo Cavallini and I added two more SVG collections:

Road sign SVGs by Bertrand Bouteilles & Roulex_45 (CC BY-SA 3.0)

SVGs by Yury Ryabov & Pavel Sergeev (CC-BY 3.0)

Unified Add Layer button

We also discussed the unified add layer dialog and are optimistic that it will make its way into 3.0. The required effort for a first version is currently being estimated by the developers at Boundless.

TimeManager

The new TimeManager version 2.4 fixes a couple of issues related to window resizing and display on HiDPI screens. Additionally, it now saves all label settings in the project file. This is the change log:

- Fixed #222: hide label if TimeManager is turned off
- Fixed #156: copy parent style to interpolation layer
- Fixed #109: save label settings in project
- Fixed window resizing issues in label options gui
- Fixed window resizing issues in video export gui
- Fixed HiDPI issues with arch gui

Straight and curved arrows with QGIS

After my previous posts on flow maps, many people asked me how to create the curved arrows that you see in these maps.

Arrow symbol layers were introduced in QGIS 2.16.

The following quick screencast shows how it is done. Note how additional nodes are added to make the curved arrows:


Better river styles with tapered lines

In 2012 I published a post on mapping the then newly released Tirol river dataset.

In the comments, reader Michal Zimmermann asked:

Do you think it would be possible to create a river stream which gains width along its way? I mean rivers are usually much narrower on their beginnings, then their width increases and the estuary should be the widest part, right?

For a long time, this kind of river style, also known as “tapered lines”, could only be created in vector graphics software such as Inkscape and Illustrator.

With the help of geometry generators, we can now achieve this look directly in QGIS:

Data cc-by Land Tirol

In the river dataset published by the state of Tirol, all rivers are digitized in upstream direction. For this styling to work, it is necessary that the line direction is consistent throughout the whole dataset.

We use a geometry generator symbol layer to split the river geometry into its individual segments:
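
One way to do this is the following geometry generator expression (a sketch, assuming the segments_to_lines() function available in the expression engine of recent QGIS versions), with the symbol layer’s geometry type set to line:

segments_to_lines( $geometry )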

 

Then we can use the information about the total number of segments (accessible via the expression variable @geometry_part_count) and the individual segment’s number (@geometry_part_num) to calculate the segment’s line width.

The stroke width expression furthermore uses the river category (GEW_GRKL) to vary the line width depending on the category:

CASE 
WHEN "GEW_GRKL" = '< 10 km2 Fluss' THEN 0.2
WHEN "GEW_GRKL" = '10 km2 Fluss' THEN 0.4
WHEN "GEW_GRKL" = '100 km2 Fluss' THEN 0.6
WHEN "GEW_GRKL" = '1.000 km2 Fluss' THEN 0.8
ELSE 1.0
END 
* ( 1- ( @geometry_part_num /  @geometry_part_count ))

If the rivers are digitized in downstream direction, you can simply remove the 1- term.
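
For reference, the width multiplier at the end of the expression above then simply becomes:

* ( @geometry_part_num / @geometry_part_count )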

Happy mapping!

