
QGIS Planet

Movement data in GIS extra: trajectory generalization code and sample data

Today’s post is a follow-up of Movement data in GIS #3: visualizing massive trajectory datasets. In that post, I summarized a concept for trajectory generalization. Now, I have published the scripts and sample data in my QGIS-Processing-tools repository on Github.

To add the trajectory generalization scripts to your Processing toolbox, you can use the Add scripts from files tool:

It is worth noting that Add scripts from files fails to correctly import potential help files for the scripts, but that's not an issue this time around, since I haven't gotten around to writing help files yet.

The scripts are used in the following order:

  1. Extract characteristic trajectory points
  2. Group points in space
  3. Compute flows between cells from trajectories
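
For batch use, the three scripts can also be chained from the QGIS Python console. The following is only a minimal sketch, assuming the QGIS 3 Processing API; the algorithm IDs and parameter names are placeholders and have to be looked up for the imported scripts in your own Processing toolbox:

import processing

# Algorithm IDs and parameter names below are placeholders for illustration;
# check the actual IDs of the imported scripts in the Processing toolbox.
char_points = processing.run('script:extractcharacteristictrajectorypoints',
                             {'INPUT': 'trajectories.shp', 'OUTPUT': 'memory:'})['OUTPUT']
cells = processing.run('script:grouppointsinspace',
                       {'INPUT': char_points, 'OUTPUT': 'memory:'})['OUTPUT']
flows = processing.run('script:computeflowsbetweencellsfromtrajectories',
                       {'CELLS': cells, 'TRAJECTORIES': 'trajectories.shp',
                        'OUTPUT': 'flows.shp'})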

The sample project contains input data, as well as output layers of the individual tools. The only required input is a layer of trajectories, where trajectories have to be LINESTRINGM (note the M!) features:

Trajectory sample based on data provided by the GeoLife project

In Extract characteristic trajectory points, distance parameters are specified in meters, stop duration in seconds, and angles in degrees. The characteristic points contain start and end locations, as well as turns and stop locations:

The characteristic points are then clustered. In this tool, the distance has to be specified in layer units, which are degrees in case of the sample data.

Finally, we can compute flows between cells defined by these clusters:

Flow lines scaled by flow strength and cell centers scaled by counts

If you use these tools on your own data, I'd be happy to see what you come up with!




Movement data in GIS #8: edge bundling for flow maps

If you follow this blog, you’ll probably remember that I published a QGIS style for flow maps a while ago. The example showed domestic migration between the nine Austrian states, a rather small dataset. Even so, it required some manual tweaking to make the flow map readable. Even with only 72 edges, the map quickly gets messy:

Raw migration flows between Austrian states, line width scaled by flow strength

One popular approach in the data viz community to deal with this problem is edge bundling. The idea is to reduce visual clutter by generating bundles of similar edges.

Surprisingly, edge bundling is not available in desktop GIS. Existing implementations in the visual analytics field often run on GPUs because edge bundling is computationally expensive. Nonetheless, we have set out to implement force-directed edge bundling for the QGIS Processing toolbox [0]. The resulting scripts are available at https://github.com/dts-ait/qgis-edge-bundling.

The main procedure consists of two tools: bundle edges and summarize. Bundle edges takes the raw straight lines, incrementally adds intermediate nodes (called control points), and shifts them according to computed spring and electrostatic forces. If the input is 72 lines, the output is again 72 lines, but each line geometry has been bent so that similar lines overlap and form a bundle.
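
To illustrate the idea, here is a minimal NumPy sketch of a single bundling iteration. This is not the code from the repository linked above; the force terms and parameter values are simplified for illustration:

import numpy as np

def bundling_iteration(edges, compatibility, k_spring=0.1, step=0.01):
    # edges: list of (n, 2) arrays of control points (same n for every edge);
    # the first and last point of each edge are the fixed start/end locations.
    # compatibility: symmetric (n_edges, n_edges) array with values in [0, 1].
    new_edges = []
    for i, pts in enumerate(edges):
        forces = np.zeros_like(pts)
        # spring forces pull each control point towards its neighbours on the same line
        forces[1:-1] += k_spring * ((pts[:-2] - pts[1:-1]) + (pts[2:] - pts[1:-1]))
        # electrostatic forces attract control points of compatible lines
        for j, other in enumerate(edges):
            if i == j or compatibility[i, j] <= 0:
                continue
            diff = other - pts
            dist = np.linalg.norm(diff, axis=1, keepdims=True)
            dist[dist == 0] = 1e-9
            forces += compatibility[i, j] * diff / dist
        forces[0] = 0    # keep the original endpoints fixed
        forces[-1] = 0
        new_edges.append(pts + step * forces)
    return new_edges

Repeating this iteration (typically with a decreasing step size and an increasing number of control points) is what bends the lines into bundles.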

After this edge bundling step, most common implementations compute a line heatmap, that is, for each map pixel, determine the number of lines passing through the pixel. But QGIS does not support line heatmaps and this approach also has issues distinguishing lines that run in opposite directions. We have therefore implemented a summarize tool that computes the local strength of the generated bundles.

Continuing our previous example, if the input is 72 lines, summarize breaks each line into its individual segments and determines the number of segments from other lines that are part of the same bundle. If a weight field is specified, each line is not just counted once but according to its weight value. The resulting bundle strength can be used to create a line layer style with data-defined line width:

Bundled migration flows

To avoid overlaps of flows in opposing directions, we define a line offset. Finally, summarize also adds a sequence number to the line segments. This sequence number is used to assign a line color on the gradient that indicates flow direction.

I already mentioned that edge bundling is computationally expensive. One reason is that we need to perform pairwise comparison of edges to determine if they are similar and should be bundled. This comparison results in a compatibility matrix and depending on the defined compatibility threshold, different bundles can be generated.
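
As a hedged illustration of what such a comparison can look like, the sketch below computes one simple measure, angle compatibility, for all edge pairs (implementations typically combine several such measures; this is not the repository code):

import numpy as np

def angle_compatibility(starts, ends):
    # starts, ends: (n_edges, 2) arrays with the start and end coordinates of each edge.
    # Returns an (n_edges, n_edges) matrix with the |cosine| of the angle between edges.
    vectors = ends - starts
    lengths = np.linalg.norm(vectors, axis=1, keepdims=True)
    lengths[lengths == 0] = 1e-9
    unit = vectors / lengths
    return np.abs(unit @ unit.T)

starts = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]])
ends = np.array([[10.0, 0.5], [10.0, 1.5], [5.0, 9.0]])
compatibility = angle_compatibility(starts, ends)
bundle_candidates = compatibility > 0.9   # the threshold is a tunable parameter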

The following U.S. dataset contains around 4000 lines and bundling it takes a considerable amount of time.

One approach to speed up computations is to first use a quick clustering algorithm and then perform edge bundling on each cluster individually. If done correctly, clustering significantly reduces the size of each compatibility matrix.

In this example, we divided the edges into six clusters before bundling them. If you compare this result to the visualization at the top of this post (which did not use clustering), you’ll see some differences here and there but, overall, the results are quite similar:

Looking at these examples, you’ll probably spot a couple of issues. There are many additional ideas for potential improvements from existing literature which we have not implemented yet. If you are interested in improving these tools, please go ahead! The code and more examples are available on Github.

For more details, leave your email in a comment below and I’ll gladly send you the pre-print of our paper.

[0] Graser, A., Schmidt, J., Roth, F., & Brändle, N. (accepted) Untangling Origin-Destination Flows in Geographic Information Systems. Information Visualization – Special Issue on Visual Movement Analytics.




Drive-time Isochrones from a single Shapefile using QGIS, PostGIS, and pgRouting

This is a guest post by Chris Kohler.

Introduction:

This guide provides step-by-step instructions to produce drive-time isochrones using a single vector shapefile. The method described here builds a routing network from a single vector shapefile of your roads data within a VirtualBox virtual machine. The network is built by creating start and end nodes (source and target nodes) on each road segment. We will use PostgreSQL with the PostGIS and pgRouting extensions as our database. Please consider the accuracy of this type of routing to be approximate, as the routing algorithm is based on node locations rather than specific addresses. I am currently working on an improved workflow that uses site address points as nodes to optimize results. One of the many benefits of this workflow is that it costs nothing to produce (apart from collecting your roads data). Instructions for creating and using your virtual machine are included in this guide.

Steps: – Getting VirtualBox (begin) –

Intro 1. Download and install Oracle VM VirtualBox (https://www.virtualbox.org/wiki/Downloads).

Intro 2. Download and install OSGeo-Live 11 (https://live.osgeo.org/en/overview/overview.html).

Pictures used in this workflow show version 10.5, but version 11 can be used in the same way. Make sure you download the version osgeo-live-11-amd64.iso. If you have trouble finding it, here is the direct link to the download (https://sourceforge.net/projects/osgeo-live/files/10.5/osgeo-live-10.5-amd64.iso/download).
Intro 3. Ready for virtual machine creation: We will use the downloaded OSGeo-Live 11 suite with a virtual machine we create to begin our workflow. The steps to create your virtual machine are listed below. Also, here are steps from an earlier workshop with additional details on setting up your virtual machine with OSGeo-Live (http://workshop.pgrouting.org/2.2.10/en/chapters/installation.html).

1. Create Virtual Machine: In this step we begin creating the virtual machine housing our database.

Open Oracle VM VirtualBox Manager and select “New” located at the top left of the window.

VBstep1

Then fill out the name, operating system, memory, etc. to create your first VM.

vbstep1.2

2. Add IDE Controller: The purpose of this step is to create a placeholder for the OSGeo-Live 11 suite to be loaded into. In the VirtualBox main window, right-click your newly created VM and open the settings.

vbstep2

In the settings window, select the Storage tab on the left side.

Find the "adds new storage controller" button located at the bottom of the tab. Be careful not to confuse it with the buttons labeled "adds new storage attachment"! Click the "adds new storage controller" button and a drop-down menu will appear. From the top of the drop-down, select "Add IDE Controller".

vbstep2.2

vbstep2.3

You will see a new item appear in the center of the window under the “Storage Tree”.

3. Add Optical Drive: The OSGeo-Live 11 suite will be loaded into the virtual machine via an optical drive. Highlight the new IDE controller you created and select "add optical drive".

vbstep3

A new window will pop up; select "Choose Disk".

vbstep3.2

Locate your downloaded file "osgeo-live-11-amd64.iso" and click Open. A new object should appear in the middle window under your new controller, displaying "osgeo-live-11.0-amd64.iso".

vbstep3.3

Finally, your virtual machine is ready for use.
Start your new virtual machine, then wait and follow the on-screen prompts to begin using it.

vbstep3.4

– Getting VirtualBox (end) –

4. Creating the routing database and both extensions (postgis, pgrouting): The database we create and the two extensions we add will provide the functions needed to produce isochrones.

To begin, open the command line tool (Ctrl+Alt+T), then log in to PostgreSQL by typing "psql -U user" into the command line and pressing Enter. For the purpose of clear instruction, I will refer to the database in this guide as "routing"; feel free to choose your own database name. Please input the command, seen in the figure below, to create the database:

CREATE DATABASE routing;

You can use “\c routing” to connect to the database after creation.

step4

The next step after creating and connecting to your new database is to create both extensions. I find it easier to kill two birds with one stone by typing "psql -U user routing", which logs you into PostgreSQL and connects to your routing database at the same time.

When you're logged into your database, apply the commands below to add both extensions:

CREATE EXTENSION postgis;
CREATE EXTENSION pgrouting;

step4.2

step4.3

5. Load shapefile to database: In this next step, the shapefile of your roads data must be placed into your virtual machine and then loaded into your database.

My method is to email myself the roads shapefile, then download and copy it from within my virtual machine's web browser. From the desktop of your virtual machine, open the folder named "Databases" and select the application "shp2pgsql".

step5

Follow the shp2pgsql UI to connect to the routing database you created in Step 4.

step5.2

Next, select "Add File", find the roads shapefile you want to use for your isochrones (in this guide we will call our shapefile "roads_table"), and click Open.

step5.3

Finally, click “Import” to place your shapefile into your routing database.

6. Add source & target columns: The purpose of this step is to create columns which will serve as placeholders for the node data we create later.

There are multiple ways to add these columns to the roads_table. The most important parts of this step are which table you choose to edit, the names of the columns you create, and the type of the columns. Take time to ensure the source & target columns are of integer type. Below are the commands to run in your command line for this step.

ALTER TABLE roads_table ADD COLUMN "source" integer;
ALTER TABLE roads_table ADD COLUMN "target" integer;

step6

step6.2

7. Create topology: Next, we will use a function to attach a node to each end of every road segment in the roads_table. The function in this step will create these nodes. These newly-created nodes will be stored in the source and target columns we created earlier in step 6.

As well as creating nodes, this function will also create a new table containing all these nodes. The suffix "_vertices_pgr" is added to the name of your shapefile to create this new table. For example, using our guide's shapefile name, "roads_table", the nodes table will be named roads_table_vertices_pgr. However, we will not use this new table (roads_table_vertices_pgr). Below is the function, and a second simplified version, to be used in the command line for populating our source and target columns, in other words creating our network topology. Note the input format: the geometry column in my case was called "the_geom" within my shapefile:

SELECT pgr_createTopology('roads_table', 0.001, 'geom', 'id',
  'source', 'target', rows_where := 'true', clean := false);

step7

Here is a direct link for more information on this function: http://docs.pgrouting.org/2.3/en/src/topology/doc/pgr_createTopology.html#pgr-create-topology

Below is a simplified example of the function for my roads shapefile:

SELECT pgr_createTopology('roads_table', 0.001, 'the_geom', 'id')

8. Create a second nodes table: A second nodes table will be created for later use. This second node table will contain the node data generated by the pgr_createTopology function and will be named "node". Below is the command for this process. Fill in your appropriate source and target fields in the manner seen in the command below, as well as your shapefile name.

To begin, find the folder on the virtual machine's desktop named "Databases" and open the program "pgAdmin III" located within.

step8

Connect to your routing database in the pgAdmin window. Then highlight your routing database and find the "SQL" tool at the top of the pgAdmin window. The tool resembles a small magnifying glass.

step8.2

Input the function below into the SQL window of pgAdmin. Feel free to refer to this link for further information: (https://anitagraser.com/2011/02/07/a-beginners-guide-to-pgrouting/)

CREATE TABLE node AS
   SELECT row_number() OVER (ORDER BY foo.p)::integer AS id,
          foo.p AS the_geom
   FROM (     
      SELECT DISTINCT roads_table.source AS p FROM roads_table
      UNION
      SELECT DISTINCT roads_table.target AS p FROM roads_table
   ) foo
   GROUP BY foo.p;

step8.3

9. Create a routable network: After creating the second node table in step 8, we will combine this node table (node) with our shapefile (roads_table) into one new table (network) that will be used as the routing network. This table will be called "network" and will be capable of processing routing queries. Please input this command and execute it in the SQL pgAdmin tool as we did in step 8. Here is a reference for more information: (https://anitagraser.com/2011/02/07/a-beginners-guide-to-pgrouting/)

step8.2

 

CREATE TABLE network AS
   SELECT a.*, b.id as start_id, c.id as end_id
   FROM roads_table AS a
      JOIN node AS b ON a.source = b.the_geom
      JOIN node AS c ON a.target = c.the_geom;

step9.2

10. Create a "noded" view of the network: This view will be used to calculate the visual isochrones in later steps. Input this command and execute it in the SQL pgAdmin tool.

CREATE OR REPLACE VIEW network_nodes AS 
SELECT foo.id,
 st_centroid(st_collect(foo.pt)) AS geom 
FROM ( 
  SELECT network.source AS id,
         st_geometryn (st_multi(network.geom),1) AS pt 
  FROM network
  UNION 
  SELECT network.target AS id, 
         st_boundary(st_multi(network.geom)) AS pt 
  FROM network) foo 
GROUP BY foo.id;

step10

11. Add column for speed: This step may or may not apply, depending on whether your original shapefile contains a field with road speed values.

In reality, a road network will typically contain multiple speed limits. The shapefile you choose should have a speed field; otherwise the following steps will not be able to apply varying speeds to your routing network.

If speed values exist in your shapefile, we will use them to populate a new field, "traveltime", which stores the travel time for every road segment in our network based on its geometry. First, we need to create a column to store these travel times. The column will be named "traveltime" and use the type double precision. Input this command and execute it in the command line tool as seen below.

ALTER TABLE network ADD COLUMN traveltime double precision;

step11

Next, we will populate the new "traveltime" column by calculating travel times with an equation. The equation takes each road segment's length (shape_leng) and divides it by the rate of travel (either mph or kph). The sample command below uses mph as the rate, while the length units (shape_leng) of my roads_table are in feet. Whether you are using mph or kph, input this command and execute it in the SQL pgAdmin tool. The variable "X" is explained in more detail below.

UPDATE network SET traveltime = shape_leng / X*60

step11.2

How to find X, here is an example using 30 mph as the rate: To find X, we convert 30 miles to feet. Since 5280 ft = 1 mile, we multiply 30 by 5280, which gives us 158400 ft. Our rate has thus been converted from 30 miles per hour to 158400 feet per hour. For a rate of 30 mph, the equation for the "traveltime" field becomes "shape_leng / 158400*60". To restrict this calculation to the matching segments, we add a condition such as "where speed = 30;", which applies the calculated value only to features with a value of 30 in the "speed" field. Note: your "speed" field may be named differently.

UPDATE network SET traveltime = shape_leng / 158400*60 where speed = 30;

Repeat this step for each speed value in your shapefile. Examples:

UPDATE network SET traveltime = shape_leng / X*60 where speed = 45;
UPDATE network SET traveltime = shape_leng / X*60 where speed = 55;

The back end is done. Great Job!

Our next step will be visualizing our data in QGIS. Open and connect QGIS to your routing database by right-clicking "PostGIS" in the Browser Panel within the QGIS main window. Confirm the checkbox "Also list tables with no geometry" is checked to allow you to see the interior of your database more clearly. Fill out the name of your routing database and click "OK".

If done correctly, you will have access from QGIS to the tables and views created in your routing database. Feel free to visualize your network by dragging and dropping the network table into your QGIS Layers Panel. From here you can use the identify tool to select a road segment and see the source and target nodes contained within it. The node you choose will be used in the next step to create the drive-time views.

12. Create views: In this step, we create views from a function designed to determine the travel time cost. Transforming these views with further tools will visualize the travel time costs as isochrones.

The command below is how you start querying your database to create drive-time isochrones. Begin in QGIS by dragging your network table into the Layers Panel. The network will be displayed as vector lines. Simply select the road segment closest to the point of interest you would like to build your isochrone around. Then identify the road segment using the identify tool and locate the source and target fields.

step12

step12.2

Place the source or target field value in the command below where you see VALUE (in all caps).

This will serve as the isochrone catchment function for this workflow. Feel free to use this command repeatedly to create new isochrones by substituting the source value. Please input this command and execute it in the SQL pgAdmin tool.

*AT THE BOTTOM OF THIS WORKFLOW I PROVIDED AN EXAMPLE USING SOURCE VALUE “2022”

CREATE OR REPLACE VIEW "view_name" AS
SELECT di.seq, 
       di.id1, 
       di.id2, 
       di.cost, 
       pt.id, 
       pt.geom 
FROM pgr_drivingdistance('SELECT
     gid::integer AS id, 
     Source::integer AS source, 
     Target::integer AS target,                                    
     Traveltime::double precision AS cost 
       FROM network'::text, VALUE::bigint,
    100000::double precision, false, false)
    di(seq, id1, id2, cost)
JOIN network_nodes pt ON di.id1 = pt.id;

step12.3

13. Visualize Isochrone: Applying tools to the view will allow us to adjust the visual aspect to a more suitable isochrone overlay.

After creating your view, a new item appears in your routing database, named with the "view_name" you chose. Drag and drop this item into your QGIS Layers Panel. You will see lots of small dots which represent the nodes.

In the figure below, I named my view "take1".

step13

Each node you see contains a drive-time value, "cost", which represents the time needed to travel from the node you input in step 12's function.

step13.2

Start by installing the QGIS plug-in "Interpolation" by opening the Plugin Manager in the QGIS interface.

step13.3

Next, at the top of the QGIS window select "Raster"; in the drop-down that appears, select "Interpolation".

step13.4

 

A new window pops up and asks you for input.

step13.5

Select your view as the vector layer, select "cost" as your interpolation attribute, and then click "Add".

step13.6

A new vector layer will show up at the bottom of the window; make sure its type is Points. For the output, on the other half of the window, keep the interpolation method as "TIN" and edit the output file location and name. Check the box "Add result to project".

Note: decreasing the X and Y cell size will increase the resolution, but at the cost of performance.

Click “OK” on the bottom right of the window.

step13.7

A black and white raster will appear in QGIS, and a new item will be created in the Layers Panel.

step13.8

Take some time to visualize the raster by coloring and adjusting values in symbology until you are comfortable with the look.

step13.9

step13.10

14. Create contours of our isochrone: Contours can be calculated from the isochrone as well.

Near the top of the QGIS window, open the "Raster" menu drop-down and select Extraction → Contour.

step14

Fill out the appropriate interval between contour lines but leave the check box “Attribute name” unchecked. Click “OK”.

step14.2

step14.3

15. Zip and Share: Find where you saved your TIN and contours, and compress them into a zip folder by highlighting both and right-clicking to select "compress". Email the compressed folder to yourself to export it out of your virtual machine.

Example Isochrone catchment for this workflow:

CREATE OR REPLACE VIEW "2022" AS
SELECT di.seq, di.id1, di.id2, di.cost,
       pt.id, pt.geom
FROM pgr_drivingdistance('SELECT gid::integer AS id,
     source::integer AS source, target::integer AS target,
     traveltime::double precision AS cost FROM network'::text,
     2022::bigint, 100000::double precision, false, false)
   di(seq, id1, id2, cost)
JOIN network_nodes pt
ON di.id1 = pt.id;

References: Oracle VM VirtualBox, OSGeo-Live 11 amd64 ISO, FOSS4G Bonn workshop (http://workshop.pgrouting.org/2.2.10/en/index.html)


Movement data in GIS #7: animated trajectories with TimeManager

In this post, we use TimeManager to visualize the position of a moving object over time along a trajectory. This is another example of what is possible thanks to QGIS’ geometry generator feature. The result can look like this:

What makes this approach interesting is that the trajectory is stored in PostGIS as a LinestringM instead of storing individual trajectory points. So there is only one line feature loaded in QGIS:

(In part 2 of this series, we already saw how a geometry generator can be used to visualize speed along a trajectory.)

The layer is added to TimeManager using t_start and t_end attributes to define the trajectory’s temporal extent.

TimeManager exposes an animation_datetime() function which returns the current animation timestamp, that is, the timestamp that is also displayed in the TimeManager dock, as well as on the map (if we don’t explicitly disable this option).

Once TimeManager is set up, we can edit the line style to add a point marker to visualize the position of the moving object at the current animation timestamp. To do that, we interpolate the position along the trajectory segments. The first geometry generator expression splits the trajectory in its segments:

The second geometry generator expression interpolates the position on the segment that contains the current TimeManager animation time:

The WHEN statement compares the trajectory segment’s start and end times to the current TimeManager animation time. Afterwards, the line_interpolate_point function is used to draw the point marker at the correct position along the segment:

CASE 
WHEN (
m(end_point(geometry_n($geometry,@geometry_part_num)))
> second(age(animation_datetime(),to_datetime('1970-01-01 00:00')))
AND
m(start_point(geometry_n($geometry,@geometry_part_num)))
<= second(age(animation_datetime(),to_datetime('1970-01-01 00:00')))
)
THEN
line_interpolate_point( 
  geometry_n($geometry,@geometry_part_num),
  1.0 * (
    second(age(animation_datetime(),to_datetime('1970-01-01 00:00')))
	- m(start_point(geometry_n($geometry,@geometry_part_num)))
  ) / (
    m(end_point(geometry_n($geometry,@geometry_part_num)))
	- m(start_point(geometry_n($geometry,@geometry_part_num)))
  ) 
  * length(geometry_n($geometry,@geometry_part_num))
)
END

Here is the animation result for a part of the trajectory between 08:00 and 09:00:




Even more aggregations: QGIS point cluster renderer

In the previous post, I demonstrated the aggregation support in QGIS expressions. Another popular request is to aggregate or cluster point features that are close to each other. If you have been following the QGIS project on the mailing list or social media, you probably remember the successful cluster renderer crowd-funding campaign by North Road.

The point cluster renderer is implemented and can be tested in the current developer version. The renderer is highly customizable, for example, by styling the cluster symbol and adjusting the distance between points that should be in the same cluster:
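
For scripted styling, the renderer can also be configured from Python. This is only a minimal sketch assuming the QGIS 3 API; class and method names may differ in the developer version discussed above:

from qgis.core import (QgsMarkerSymbol, QgsPointClusterRenderer,
                       QgsSingleSymbolRenderer, QgsVectorLayer)

# empty in-memory point layer, just to make the example self-contained
layer = QgsVectorLayer('Point?crs=EPSG:4326', 'clustered points', 'memory')

cluster_symbol = QgsMarkerSymbol.createSimple({'name': 'circle', 'color': 'red', 'size': '6'})
point_symbol = QgsMarkerSymbol.createSimple({'name': 'circle', 'color': 'blue', 'size': '2'})

renderer = QgsPointClusterRenderer()
renderer.setTolerance(10)                     # points closer than this distance form a cluster
renderer.setClusterSymbol(cluster_symbol)     # symbol drawn for each cluster
renderer.setEmbeddedRenderer(QgsSingleSymbolRenderer(point_symbol))  # style of unclustered points
layer.setRenderer(renderer)
layer.triggerRepaint()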

Beyond this basic use case, the point cluster renderer can also be combined with categorized visualizations, and cluster symbols can be colored in the corresponding category color and scaled by cluster size, as demoed in this video by the developer Nyall Dawson:


Movement data in GIS #4: variations over time

In the previous post, I presented an approach to generalize big trajectory datasets by extracting flows between cells of a data-driven irregular grid. This generalization provides a much better overview of the flow and directionality than a simple plot of the original raw trajectory data can. The paper introducing this method also contains more advanced visualizations that show cell statistics, such as the overall count of trajectories or the generalization quality. Another bit of information that is often of interest when exploring movement data is the time of the movement. For example, at LBS2016 last week, M. Jahnke presented an application that allows users to explore the number of taxi pickups and dropoffs at certain locations:

By adopting this approach for the generalized flow maps, we can, for example, explore which parts of the research area are busy at which time of the day. Here I have divided the day into four quarters: night from 0 to 6 (light blue), morning from 6 to 12 (orange), afternoon from 12 to 18 (red), and evening from 18 to 24 (dark blue).


Aggregated trajectories with time-of-day markers at flow network nodes (data credits: GeoLife project, map tiles: Carto, map data: OSM)

The resulting visualization shows that overall, there is less movement during the night hours from midnight to 6 in the morning (light blue quarter). Sounds reasonable!

One implementation detail worth considering is which timestamp should be used for counting the number of movements. Should it be the time of the first trajectory point entering a cell, or the time when the trajectory leaves the cell, or some average value? In the current implementation, I have opted for the entry time. This means that if the tracked person spends a long time within a cell (e.g. at the work location) the trip home only adds to the evening trip count of the neighboring cell along the trajectory.

Since the time information stored in a PostGIS LinestringM feature's m-value does not contain any time zone information, we also have to take care of any necessary offsets. For example, the GeoLife documentation states that all timestamps are provided in GMT while Beijing is in the GMT+8 time zone. This offset has to be accounted for in the analysis script, otherwise the counts per time of day will be all over the place.
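
As a minimal sketch of this offset handling (the function is illustrative and not the actual analysis script), the time-of-day quarter of a GeoLife m-value could be derived like this:

from datetime import datetime, timedelta, timezone

def time_of_day_quarter(m_value, utc_offset_hours=8):
    # m_value: seconds since 1970-01-01 in GMT, as stored in the LinestringM;
    # GeoLife timestamps are GMT, Beijing is GMT+8.
    local = datetime.fromtimestamp(m_value, tz=timezone.utc) + timedelta(hours=utc_offset_hours)
    return ['night', 'morning', 'afternoon', 'evening'][local.hour // 6]

print(time_of_day_quarter(1226361600))   # 2008-11-11 00:00 GMT -> 08:00 in Beijing -> 'morning'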

Using the same approach, we could also investigate other variations, e.g. over different days of the week, seasonal variations, or the development over multiple years.


Movement data in GIS #3: visualizing massive trajectory datasets

In the first two parts of the Movement Data in GIS series, I discussed modeling trajectories as LinestringM features in PostGIS to overcome some common issues of movement data in GIS, and presented a way to efficiently render speed changes along a trajectory in QGIS without having to split the trajectory into shorter segments.

While visualizing individual trajectories is important, the real challenge is trying to visualize massive trajectory datasets in a way that enables further analysis. The out-of-the-box functionality of GIS is painfully limited. Except for some transparency and heatmap approaches, there is not much that can be done to help interpret "hairballs" of trajectories. Luckily, researchers in visual analytics have already put considerable effort into finding solutions for this visualization challenge. The approach I want to talk about today is by Andrienko, N., & Andrienko, G. (2011): Spatial generalization and aggregation of massive movement data. IEEE Transactions on Visualization and Computer Graphics, 17(2), 205-219. It consists of the following main steps (a minimal sketch of the transition counting follows the list):

  1. Extracting characteristic points from the trajectories
  2. Grouping the extracted points by spatial proximity
  3. Computing group centroids and corresponding Voronoi cells
  4. Dividing trajectories into segments according to the Voronoi cells
  5. Counting transitions from one cell to another
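
To make the transition counting more concrete, here is a minimal sketch (not the actual Processing scripts) that assigns each characteristic point to its nearest group centroid and counts the transitions between consecutive centroids along each trajectory:

from collections import Counter

import numpy as np

def count_transitions(trajectories, centroids):
    # trajectories: list of (n_i, 2) arrays of characteristic points, ordered in time
    # centroids: (k, 2) array of point group centroids
    flows = Counter()
    for pts in trajectories:
        # index of the nearest centroid for every characteristic point
        nearest = np.argmin(np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2), axis=1)
        for a, b in zip(nearest[:-1], nearest[1:]):
            if a != b:                 # only count moves between different cells
                flows[(a, b)] += 1     # direction matters: (a, b) is not (b, a)
    return flows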

The authors do a great job of describing the concepts and algorithms, which made it relatively straightforward to implement them in QGIS Processing. So far, I've implemented the basic logic, but the paper contains further suggestions for improvements. This was also my first PyQGIS project that makes use of the measurement value support in the new geometry engine. The time information stored in the m-values is used to detect stop points, which – together with start, end, and turning points – make up the characteristic points of a trajectory.

The following animation illustrates the current state of the implementation: First the “hairball” of trajectories is rendered. Then we extract the characteristic points and group them by proximity. The big black dots are the resulting group centroids. From there, I skipped the Voronoi cells and directly counted transitions from “nearest to centroid A” to “nearest to centroid B”.

(data credits: GeoLife project)

From thousands of individual trajectories to a generalized representation of overall movement patterns (data credits: GeoLife project, map tiles: Stamen, map data: OSM)

The resulting visualization makes it possible to analyze flow strength as well as directionality. I have deliberately excluded all connections with a count below 10 transitions to reduce visual clutter. The cell size / distance between point groups – and therefore the level-of-detail – is one of the input parameters. In my example, I used a target cell size of approximately 2km. This setting results in connections which follow the major roads outside the city center very well. In the city center, where the road grid is tighter, trajectories on different roads mix and the connections are less clear.

Since trajectories in this dataset are not limited to car trips, we can expect to find additional movement that is not restricted to the road network. This is particularly noticeable in the dense area in the west where many slow trajectories – most likely from walking trips – are located. The paper also covers how to ensure that connections are limited to neighboring cells by densifying the trajectories before computing step 4.

trajectory_generalization

Running the scripts for over 18,000 trajectories requires patience. It would be worth evaluating if the first three steps can be run with only a subsample of the data without impacting the results in a negative way.

One thing I’m not satisfied with yet is the way to specify the target cell size. While it’s possible to measure ellipsoidal distances in meters using QgsDistanceArea (irrespective of the trajectory layer’s CRS), the initial regular grid used in step 2 in order to group the extracted points has to be specified in the trajectory layer’s CRS units – quite likely degrees. Instead, it may be best to transform everything into an equidistant projection before running any calculations.
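
For reference, this is roughly what the ellipsoidal measurement looks like, as a sketch assuming the QGIS 3 API (the scripts described in this post predate QGIS 3):

from qgis.core import (QgsCoordinateReferenceSystem, QgsDistanceArea,
                       QgsPointXY, QgsProject)

d = QgsDistanceArea()
d.setSourceCrs(QgsCoordinateReferenceSystem('EPSG:4326'),
               QgsProject.instance().transformContext())
d.setEllipsoid('WGS84')

# ellipsoidal distance in meters between two points given in degrees
meters = d.measureLine(QgsPointXY(116.39, 39.90), QgsPointXY(116.41, 39.92))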

It’s good to see that PyQGIS enables us to use the information encoded in PostGIS LinestringM features to perform spatio-temporal analysis. However, working with m or z values involves a lot of v2 geometry classes which work slightly differently than their v1 counterparts. It certainly takes some getting used to. This situation might get cleaned up as part of the QGIS 3 API refactoring effort. If you can, please support work on QGIS 3. Now is the time to shape the PyQGIS API for the following years!


How to visualize bird migration data with QGIS TimeManager

A common use case of the QGIS TimeManager plugin is visualizing tracking data such as animal migration data. This post illustrates the steps necessary to create an animation from bird migration data. I’m using a dataset published on Movebank:

Fraser KC, Shave A, Savage A, Ritchie A, Bell K, Siegrist J, Ray JD, Applegate K, Pearman M (2016) Data from: Determining fine-scale migratory connectivity and habitat selection for a migratory songbird by using new GPS technology. Movebank Data Repository. doi:10.5441/001/1.5q5gn84d.

It’s a CSV file which can be loaded into QGIS using the Add delimited text layer tool. Once loaded, we can get started:

1. Identify time and ID columns

Especially if you are new to the dataset, have a look at the attribute table and identify the attributes containing timestamps and ID of the moving object. In our sample dataset, time is stored in the aptly named timestamp attribute and uses ISO standard formatting %Y-%m-%d %H:%M:%S.%f. This format is ideal for TimeManager and we can use it without any changes. The object ID attribute is titled individual-local-identifier.

movebank_data

The dataset contains 128 positions of 14 different birds. This means that there are rather long gaps between consecutive observations. In our animation, we’ll want to fill these gaps with interpolated positions to get uninterrupted movement traces.

2. Configuring TimeManager

To set up the animation, go to the TimeManager panel and click Settings | Add Layer. In the following dialog we can specify the time and ID attributes which we identified in the previous step. We also enable linear interpolation. The interpolation option will create an additional point layer in the QGIS project, which contains the interpolated positions.

timemanager_settings

When using the interpolation option, please note that it currently only works if the point layer is styled with a Single symbol renderer. If a different renderer is configured, it will fail to create the interpolation layer.

Once the layer is configured, the minimum and maximum timestamps will be displayed in the TimeManager dock right below the time slider. For this dataset, it makes sense to set the Time frame size, that is, the time between animation frames, to one day, so we will see one frame per day:

timemanager_dock

Now you can test the animation by pressing the TimeManager’s play button. Feel free to add more data, such as background maps or other layers, to your project. Besides exploring the animated data in QGIS, you can also create a video to share your results.

3. Creating a video

To export the animation, click the Export video button. If you are using Linux, you can export videos directly from QGIS. On Windows, you first need to export the animation frames as individual pictures, which you can then convert to a video (for example using the free Windows Movie Maker application).

These are the basic steps to set up an animation for migration data. There are many potential extensions to this animation, including adding permanent traces of past movements. While this approach serves us well for visualizing bird migration routes, it is easy to imagine that other movement data would require different interpolation approaches. Vehicle data, for example, would profit from network-constrained interpolation between observed positions.

If you find the TimeManager plugin useful, please consider supporting its development or getting involved. Many features, such as interpolation, are weekend projects that are still in a proof-of-concept stage. In addition, the huge challenge of migrating the plugin to Python 3 and Qt5 to support QGIS3 lies ahead of us. Happy QGISing!


City flows unfolding with the other Processing

A previous version of this post has been published in German on Die bemerkenswerte Karte.

Visualizations of mobility data such as taxi or bike sharing trips have become very popular. One of the best recent examples is cf. city flows, developed by Till Nagel and Christopher Pietsch at the FH Potsdam. cf. city flows visualizes the rides in bike sharing systems in New York, Berlin and London at different levels of detail, from overviews of the whole city to detailed comparisons of individual stations:

The visualizations were developed using Unfolding, a library to create interactive maps and geovisualizations in Processing (the other Processing … not the QGIS Processing toolbox) and Java. (I tinkered with the Python port of Processing in 2012, but this is certainly on a completely different level.)

The insights into the design process, which are shared in the methodology section of the project website, are particularly interesting. Various approaches for presenting traffic flows between the stations were tested. Building on initial simple maps, where stations were connected by straight lines, consecutive design decisions are described in detail:

The results are impressive. Particularly the animated trips convey the dynamics of urban mobility very well:

However, a weak point of this (and many similar projects) is the underlying data. This is also addressed directly by the project website:

Lacking actual GPS tracks, the trip trajectories are rendered as smooth paths of the calculated optimal bike routes

This means that the actual route between start and drop-off location is not known. The authors therefore estimated the routes using HERE's routing service. The visualization consequently only shows one of many possible routes. However, cyclists don't necessarily choose the "best" route as determined by an algorithm – be it the most direct or otherwise preferred. The visualization does not account for this uncertainty in the route selection. Rather, it gives the impression that the cyclist actually traveled on a certain route. It would therefore be misleading to use this visualization to derive information about the popularity of certain routes (for example, for urban planning). Moreover, the data only contains information about the fulfilled demand, since only trips that were really performed are recorded. Demand for trips which could not take place due to a lack of bicycles or stations is therefore missing.

As always: exercise some caution when interpreting statistics or visualizations and then sit back and enjoy the animations.

If you want to read more about GIS and transportation modelling, check out
Loidl, M.; Wallentin, G.; Cyganski, R.; Graser, A.; Scholz, J.; Haslauer, E. GIS and Transport Modeling—Strengthening the Spatial Perspective. ISPRS Int. J. Geo-Inf. 2016, 5, 84. (It’s open access.)


Towards better gradients

There are interesting developments going on if you like creating your own gradients. After all, that's not as easy as it might initially seem, as Gregor Aisch describes in his post "Mastering Multi-hued Color Scales with Chroma.js":

The issues with simple color interpolations, which include nonuniform changes in lightness between classes, also haunt us in cartography. Just have a look at the map and legend on the left-hand side, which has been created using a normal custom QGIS gradient with colors ranging from black to red, yellow and finally white. We end up with three classes in yellow which are nearly impossible to tell apart:

comparing_ramps

For comparison, on the right side, I’ve used Gregor’s corrected color ramp, which ensures that lightness changes evenly from one class to the next.

Wouldn’t it be great if the built-in gradient tool in QGIS could correct for lightness? Too bad the current dialog is not that great:

My first reaction therefore was to write a short script to import gradients from Gregor’s Chroma.js Color Scale Helper into QGIS:
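
The script itself is not reproduced here, but the basic idea can be sketched in a few lines, assuming the QGIS 3 API (not the original script): take the list of corrected hex colors from the Chroma.js helper and turn them into evenly spaced gradient stops.

from qgis.core import QgsGradientColorRamp, QgsGradientStop, QgsStyle
from qgis.PyQt.QtGui import QColor

# example list of corrected colors pasted from the Chroma.js Color Scale Helper
hex_colors = ['#000000', '#8e2c50', '#e06b3c', '#f5c957', '#ffffff']

colors = [QColor(c) for c in hex_colors]
stops = [QgsGradientStop(i / (len(colors) - 1), c)
         for i, c in enumerate(colors[1:-1], start=1)]
ramp = QgsGradientColorRamp(colors[0], colors[-1], False, stops)
QgsStyle.defaultStyle().addColorRamp('chroma_corrected', ramp)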

But we’ll probably have a much better solution in QGIS soon since Nyall Dawson has picked up the idea and is already working on a completely new version of the gradient tool. You can see a demo of the current work in progress here:

I’m really looking forward to trying this out once it hits master!


What went on at FOSS4G 2015?

Granted, I could only follow FOSS4G 2015 remotely on social media but what I saw was quite impressive and will keep me busy exploring for quite a while. Here’s my personal pick of this year’s highlights which I’d like to share with you:

QGIS

Marco Hugentobler at FOSS4G 2015 (Photo by Jody Garnett)

The Sourcepole team has been particularly busy with four presentations which you can find on their blog.

Marco Hugentobler's keynote is just great, summing up the history of the QGIS project and discussing success factors for open source projects.

Marco also gave a second presentation on new QGIS features for power users, including live layer effects, new geometry support (curves!), and geometry checker.

There has also been an update to the QTiles plugin by NextGIS this week.

If you’re a bit more into webmapping, Victor Olaya presented the Web App Builder he’s been developing at Boundless. Web App Builder should appear in the official plugin repo soon.

Preview of Web App Builder from Victor's presentation

Geocoding

If you work with messy, real-world data, you've most certainly been fighting with geocoding services, trying to make the best of a bunch of address lists. The Python Geocoder library promises to make dealing with geocoding services such as Google, Bing, OSM & many more easier than ever before.
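
A quick example of what working with the library looks like (a sketch based on my reading of its documentation; switching providers is a matter of changing the function name):

import geocoder

g = geocoder.osm('Stephansplatz 1, Vienna')   # or geocoder.google(...), geocoder.bing(...)
if g.ok:
    print(g.latlng)   # [latitude, longitude]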

Let me know if you tried it.

Mobmap Visualizations

Mobmap – or more specifically Mobmap2 – is an extension for Chrome which offers visualization and analysis capabilities for trajectory data. I haven’t tried it yet but their presentation certainly looks very interesting:


QGIS on the rise with journalists

If you are following QGIS on Twitter you’ve probably noticed the increasing number of tweets by journalists using QGIS.

For example this map in the Financial Times by Hannah Dormido

or this one with overview maps and three different levels of details

or this map with semi-transparent label backgrounds and nice flag images

or even Time Manager animations by raoulranoa in the Los Angeles Times

I think this is a great development and a sign of how wide-spread QGIS usage is today.

If you know of any other examples or if you are a journalist using QGIS yourself, I’d love to see more!


Video tutorial: animated heatmaps with QGIS

Do you like the QGIS heatmap functionality? Did you know that QGIS can also create animated heatmaps?

The following video tutorial shows all necessary steps. To reproduce it, you can get the sample data from my Time Manager workshop at #QGIS2015.


Time Manager workshop at #QGIS2015

Today was the final day of #QGIS2015, the first joint QGIS conference and developer meeting. I had the pleasure to meet Time Manager co-developer Karolina Alexiou aka carolinux in person and to give a talk, including a hands-on workshop, on Time Manager together. Time Manager makes it possible to explore spatio-temporal data by creating animations directly in QGIS.

The talk presents QGIS visualization tools with a focus on efficient use of layer styling to both explore and present spatial data. Examples include the recently added heatmap style as well as sophisticated rule-based and data-defined styles. The focus of this presentation is exploring and presenting spatio-temporal data using the Time Manager plugin. A special treat are time-dependent styles using expression-based styling which access the current Time Manager timestamp.

To download the example data and QGIS projects download Time_Manager_Examples.zip.


Trajectory animations with fadeout effect

Today’s post is a short tutorial for creating trajectory animations with a fadeout effect using QGIS Time Manager. This is the result we are aiming for:

The animation shows the current movement in pink which fades out and leaves behind green traces of the trajectories.

About the data

GeoLife GPS Trajectories were collected within the (Microsoft Research Asia) Geolife project by 182 users in a period of over three years (from April 2007 to August 2012). [1,2,3] The GeoLife GPS Trajectories download contains many text files organized in multiple directories. The data files are basically CSVs with 6 lines of header information. They contain the following fields:

Field 1: Latitude in decimal degrees.
Field 2: Longitude in decimal degrees.
Field 3: All set to 0 for this dataset.
Field 4: Altitude in feet (-777 if not valid).
Field 5: Date – number of days (with fractional part) that have passed since 12/30/1899.
Field 6: Date as a string.
Field 7: Time as a string.

Data prep: PostGIS

Since any kind of GIS operation on text files will be quite inefficient, I decided to load the data into a PostGIS database. This table of millions of GPS points can then be sliced into appropriate chunks for exploration, for example, a day in Beijing:

CREATE MATERIALIZED VIEW geolife.beijing 
AS SELECT trajectories.id,
    trajectories.t_datetime,
    trajectories.t_datetime + interval '1 day' as t_to_datetime,
    trajectories.geom,
    trajectories.oid
   FROM geolife.trajectories
   WHERE st_dwithin(trajectories.geom,
           st_setsrid(
             st_makepoint(116.3974589, 
                           39.9388838), 
             4326), 
           0.1) 
   AND trajectories.t_datetime >= '2008-11-11 00:00:00'
   AND trajectories.t_datetime < '2008-11-12 00:00:00'
WITH DATA
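
The actual import into the geolife.trajectories table is not shown in this post; here is a minimal sketch of how the text files could be loaded with psycopg2 (the table and column names are assumptions based on the view above):

import csv
import glob
from datetime import datetime

import psycopg2

conn = psycopg2.connect('dbname=geolife user=postgres')
cur = conn.cursor()

for oid, path in enumerate(glob.glob('Data/*/Trajectory/*.plt')):
    with open(path) as f:
        rows = list(csv.reader(f))[6:]                    # skip the 6 header lines
    for lat, lon, _, alt, _, date_str, time_str in rows:  # fields as listed above
        t = datetime.strptime(date_str + ' ' + time_str, '%Y-%m-%d %H:%M:%S')
        cur.execute(
            'INSERT INTO geolife.trajectories (oid, t_datetime, geom) '
            'VALUES (%s, %s, ST_SetSRID(ST_MakePoint(%s, %s), 4326))',
            (oid, t, float(lon), float(lat)))

conn.commit()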

Trajectory viz: a fadeout effect for point markers

The idea behind this visualization is to show both the current movement as well as the history of the trajectories. This can be achieved with a fadeout effect which leaves behind traces of past movement while the most recent positions are highlighted to stand out.

Map tiles by Stamen Design, under CC BY 3.0. Data by OpenStreetMap, under ODbL.

This effect can be created using a Single Symbol renderer with a marker symbol with two symbol layers: one layer serves as the highlights layer (pink) while the second layer represents the traces (green) which linger after the highlights disappear. Feature blending is used to achieve the desired effect for overlapping markers.

Screenshot 2015-05-06 23.52.40

The highlights layer has two expression-based properties: color and size. The color fades to white and the point size shrinks as the point ages. The age can be computed by comparing the point’s t_datetime timestamp to the Time Manager animation time $animation_datetime.

This expression creates the color fading effect:

color_hsv(  
  311,
  scale_exp( 
    minute(age($animation_datetime,"t_datetime")),
    0,60,
    100,0,
    0.2
  ),
  90
)

and this expression makes the point size shrink:

scale_exp( 
  minute(age($animation_datetime,"t_datetime")),
  0,60,
  24,0,
  0.2
)

Outlook

I’m currently preparing this and a couple of other examples for my Time Manager workshop at the upcoming 1st QGIS conference in Nødebo. The workshop materials will be made available online afterwards.

Literature

[1] Yu Zheng, Lizhu Zhang, Xing Xie, Wei-Ying Ma. Mining interesting locations and travel sequences from GPS trajectories. In Proceedings of the International Conference on World Wide Web (WWW 2009), Madrid, Spain. ACM Press: 791-800.
[2] Yu Zheng, Quannan Li, Yukun Chen, Xing Xie, Wei-Ying Ma. Understanding Mobility Based on GPS Data. In Proceedings of ACM conference on Ubiquitous Computing (UbiComp 2008), Seoul, Korea. ACM Press: 312-321.
[3] Yu Zheng, Xing Xie, Wei-Ying Ma, GeoLife: A Collaborative Social Networking Service among User, location and trajectory. Invited paper, in IEEE Data Engineering Bulletin. 33, 2, 2010, pp. 32-40.


New stable release of GRASS GIS 7.0.0!

The GRASS GIS Development team has announced the release of the new major version GRASS GIS 7.0.0. This version provides many new functionalities including spatio-temporal database support, image segmentation, estimation of evapotranspiration and emissivity from satellite imagery, automatic line vertex densification during reprojection, more LIDAR support and a strongly improved graphical user interface experience. GRASS GIS 7.0.0 also offers significantly improved performance for many raster and vector modules: “Many processes that would take hours now take less than a minute, even on my small laptop!” explains Markus Neteler, the coordinator of the development team composed of academics and GIS professionals from around the world. The software is available for Linux, MS-Windows, Mac OSX and other operating systems.

Detailed announcement and software download:
http://grass.osgeo.org/news/42/15/GRASS-GIS-7-0-0/

About GRASS GIS
The Geographic Resources Analysis Support System (http://grass.osgeo.org/), commonly referred to as GRASS GIS, is an open source Geographic Information System providing powerful raster, vector and geospatial processing capabilities in a single integrated software suite. GRASS GIS includes tools for spatial modeling, visualization of raster and vector data, management and analysis of geospatial data, and the processing of satellite and aerial imagery. It also provides the capability to produce sophisticated presentation graphics and hardcopy maps. GRASS GIS has been translated into about twenty languages and supports a huge array of data formats. It can be used either as a stand-alone application or as backend for other software packages such as QGIS and R geostatistics. It is distributed freely under the terms of the GNU General Public License (GPL). GRASS GIS is a founding member of the Open Source Geospatial Foundation (OSGeo).

The post New stable release of GRASS GIS 7.0.0! appeared first on GFOSS Blog | GRASS GIS Courses.

Experiments with Conway’s Game of Life

This experiment is motivated by a discussion I had with Dr. Claus Rinner about introducing students to GIS concepts using Conway's Game of Life. Conway's Game of Life is a popular example for demonstrating cellular automata. Based on an input grid of "alive" and "dead" cells, new cell values are computed on each iteration based on four simple rules for the cell and its 8 neighbors (a minimal code sketch of these rules follows below):

  1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
  2. Any live cell with two or three live neighbours lives on to the next generation.
  3. Any live cell with more than three live neighbours dies, as if by overcrowding.
  4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

(Source: Wikipedia – Conway’s Game of Life)
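
These four rules fit into a few lines of code. The following is a minimal NumPy sketch of a single iteration on a grid with wrap-around edges; it is not the vector-based Processing script described below:

import numpy as np

def next_generation(grid):
    # grid: 2D array of 0 (dead) and 1 (alive) cells; edges wrap around (toroidal)
    neighbours = sum(np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # rules 1-3: a live cell survives with two or three live neighbours
    survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    # rule 4: a dead cell with exactly three live neighbours becomes alive
    born = (grid == 0) & (neighbours == 3)
    return (survive | born).astype(int)

# a "blinker": three live cells in a row oscillate between horizontal and vertical
grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1
print(next_generation(grid))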

Based on these simple rules, effects like the following “glider gun” can be achieved:

Gospers glider gun.gif
"Gospers glider gun" by Kieff – Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons.

There are some Game of Life implementations for GIS out there, e.g. scripts for ArcGIS or a module for SAGA. Both of these examples are raster-based. Since I couldn’t find any examples of raster manipulation like this in pyQGIS, I decided to instead implement a vector version: a Processing script which receives an input grid of cells and outputs the next iteration based on the rules of Game of Life. In the following screencast, you can see the Processing script being called repeatedly by a script from the Python console:

So far, it's a quick and dirty first implementation. To make it smoother, I'm considering adding spatial indexing and using memory layers instead of having Processing create a bunch of Shapefiles.

It would also be interesting to see a raster version done in PyQGIS. Please leave a comment if you have any ideas how this could be achieved.


Rendering a brain CT scan in 3D with GRASS GIS 7

Last year (2013) I "enjoyed" a brain CT scan in order to identify a post-surgery issue – luckily nothing was found. Being in Italy, like all patients I received a CD-ROM with the scan data on it: so, something to play with! In this article I'll show how to easily turn 2D scan data into a volumetric (voxel) visualization.

The CT scan data come in DICOM format, which ImageMagick is able to read and convert. In addition to that, we need the open source software packages GRASS GIS 7 and Paraview to get the job done.

First of all, we create a new XY (unprojected) GRASS location to import the data into:

# create a new, empty location (or use the Location wizard):
grass70 -c ~/grassdata/brain_ct

We now start GRASS GIS 7 with that location. After mounting the CD-ROM we navigate into the image directory therein. The directory name depends on the type of CT scanner which was used in the hospital. The file name suffix may be .IMA.

Now we count the number of images, convert and import them into GRASS GIS:

# list and count
LIST=`ls -1 *.IMA`
MAX=`echo $LIST | wc -w`

# import into XY location:
curr=1
for i in $LIST ; do

# pretty print the numbers to 000X for easier looping:
curr=`echo $curr | awk '{printf "%04d\n", $1}'`
convert "$i" brain.$curr.png
r.in.gdal in=brain.$curr.png out=brain.$curr
r.null brain.$curr setnull=0
rm -f brain.$curr.png
curr=`expr $curr + 1`

done

At this point all CT slices are imported in an ordered way. For extra fun, we can animate the 2D slices in g.gui.animation:

Animation of brain scan slices
(click to enlarge)

# enter in one line:
g.gui.animation rast=`g.mlist -e rast separator=comma pattern="brain*"`

The tool allows exporting as animated GIF or AVI:

Animation of brain scan slices (click to enlarge)

Now it is time to generate a volume:

# first count number of available layers
g.mlist rast pat="brain*" | wc -l

# now set 3D region to number of available layers (as number of depths)
g.region rast=brain.0003 b=1 t=$MAX -p3

At this point the computational region is properly defined to our 3D raster space. Time to convert the 2D slices into voxels by stacking them on top of each other:

# convert 2D slices to 3D slices:
r.to.rast3 `g.mlist rast pat="brain*" sep=,` out=brain_vol

We can now look at the volume with GRASS GIS’ wxNVIZ or preferably the extremely powerful Paraview. The latter requires an export of the volume to VTK format:

# fetch some environment variables
eval `g.gisenv -s`
# export GRASS voxels to VTK 3D as 3D points, with scaled z values:
SCALE=2
g.message "Exporting to VTK format, scale factor: $SCALE"
r3.out.vtk brain_vol dp=2 elevscale=$SCALE \
output=${PREFIX}_${MAPSET}_brain_vol_scaled${SCALE}.vtk -p

Eventually we can open this new VTK file in Paraview for visual exploration:

# show as volume
# In Paraview: Properties: Apply; Display Repres: volume; etc.
paraview --data=brain_s1_vol_scaled2.vtk

markus_brain_ct_scan3 markus_brain_ct_scan4 markus_brain_ct_scan2

Fairly easy!

BTW: I have a scan of my non-smoker lungs as well :-)

The post Rendering a brain CT scan in 3D with GRASS GIS 7 appeared first on GFOSS Blog | GRASS GIS Courses.

Visualizing direction-dependent values

When mapping flows or other values which relate to a certain direction, styling these layers gets interesting. I faced the same challenge when mapping direction-dependent error values. Neighboring cell pairs were connected by two lines, one in each direction, with an associated error value. This is what I came up with:

srtm_errors_1200px

Each line is drawn with an offset to the right. The size of the offset depends on the width of the line which in turn depends on the size of the error. You can see the data-defined style properties here:

directed_error_style

To indicate the direction, I added a marker line with one > marker at the center. This marker line is assigned the same offset to match the colored line below it. I'm quite happy with how these turned out and would love to hear about your approaches to this issue.

srtm_errors_detail

These figures are part of a recent publication with my AIT colleagues: A. Graser, J. Asamer, M. Dragaschnig: “How to Reduce Range Anxiety? The Impact of Digital Elevation Model Quality on Energy Estimates for Electric Vehicles” (2014).


3D viz with QGIS & three.js

If you are looking for a tool to easily create 3D visualizations of your geodata, look no further! Qgis2threejs is a plugin by Minoru Akagi which exports terrain data, combined with the map canvas image and optional vector data, to an HTML file which can be viewed in 3D in any web browser that supports WebGL. To do that, the plugin uses the Three.js library.

This is the result of my first experiments with Qgis2threejs. In the following sections, I will show the steps to reproduce it.

Türkenschanzpark, Vienna

click for the interactive version (requires WebGL-capable browser)

1. The data

The building blocks of this visualization are:

  • elevation data and the hillshade derived from this data
  • a base map (WMTS from basemap.at in my case)
  • OSM building data provided by Geofabrik and
  • tree data from the city of Vienna

Load all datasets into QGIS.

2. Preparing the map

Qgis2threejs will overlay the map (as rendered in the QGIS map area) on top of the elevation model. You can combine any number of layers to create your map. I just loaded a basemap.at WMTS and a hillshade layer. To add a nice tree shadow effect, I also added the tree layer (dark grey, 50% transparency, multiply blending).

tuerkenschanzpark_map

3. Preparing the vector features

The vector features in the visualization are buildings and trees. The buildings are based on an OSM building layer. The trees are created from two point layers: one point layer to create the tree trunks (cylinder shape) and a duplicate of this point layer to create the tree crowns (sphere shape).

Load the data and choose the desired fill colors.

4. Using Qgis2threejs

Now we can start Qgis2threejs. The first tab is used to configure the terrain. Just pick the correct elevation data layer. I didn’t modify any of the other default settings.

qgis2threejs_dem

The second tab provides the settings for the vector data. As mentioned in the previous section, the trees are created from two point layers and the buildings are based on a polygon layer. The tree crowns are spheres with a radius of 3 and a z value of 5 above the surface. The tree trunks are cylinders. Finally, the buildings have a height of 10.

qgis2threejs_vector

That’s it! Just press “run” and wait. When the export is finished, your default browser (or a different one, if you specify another one in the plugin settings) will open automatically and display the results.
The visualization is interactive. You can tilt the visualization using the left mouse button, pan using the right mouse button, and zoom using the mouse wheel. I found that Firefox used around 1.6 GB of RAM to render this example.

5. Share your visualization

In the browser window, you will see where Qgis2threejs stored the html and associated Javascript files. To share your visualization, you just need to copy these files onto a webserver.

I would love to see what you come up with. Please share a link in the comments.

