
QGIS Planet

Rendering a brain CT scan in 3D with GRASS GIS 7

Last year (2013) I “enjoyed” a brain CT scan in order to identify a post-surgery issue – luckily nothing was found. Being in Italy, like all patients I received a CD-ROM with the scan data on it: so, something to play with! In this article I’ll show how to easily turn the 2D scan data into a volumetric (voxel) visualization.

The CT scan data come in DICOM format, which ImageMagick can read and convert. In addition, we need the open source software packages GRASS GIS 7 and Paraview to get the job done.

First of all, we create a new XY (unprojected) GRASS location to import the data into:

# create a new, empty location (or use the Location wizard):
grass70 -c ~/grassdata/brain_ct

We now start GRASS GIS 7 with that location. After mounting the CD-ROM, we navigate into the image directory on it. The directory name depends on the type of CT scanner used in the hospital. The file name suffix may be .IMA.

Now we count the number of images, convert and import them into GRASS GIS:

# list and count
LIST=`ls -1 *.IMA`
MAX=`echo $LIST | wc -w`

# import into XY location:
curr=1
for i in $LIST ; do

# pretty print the numbers to 000X for easier looping:
curr=`echo $curr | awk '{printf "%04d\n", $1}'`
convert "$i" brain.$curr.png
r.in.gdal in=brain.$curr.png out=brain.$curr
r.null brain.$curr setnull=0
rm -f brain.$curr.png
curr=`expr $curr + 1`

done

At this point all CT slices are imported in an ordered way. For extra fun, we can animate the 2D slices in g.gui.animation:

Animation of brain scan slices
(click to enlarge)

# enter in one line:
g.gui.animation rast=`g.mlist -e rast separator=comma pattern="brain*"`

The tool can export the animation as an animated GIF or AVI:

Animation of brain scan slices (click to enlarge)

Now it is time to generate a volume:

# first count number of available layers
g.mlist rast pat="brain*" | wc -l

# now set 3D region to number of available layers (as number of depths)
g.region rast=brain.0003 b=1 t=$MAX -p3

At this point the computational region is properly defined to our 3D raster space. Time to convert the 2D slices into voxels by stacking them on top of each other:

# convert 2D slices to 3D slices:
r.to.rast3 `g.mlist rast pat="brain*" sep=,` out=brain_vol

We can now look at the volume with GRASS GIS’ wxNVIZ or, preferably, the extremely powerful Paraview. The latter requires exporting the volume to VTK format:

# fetch some environment variables
eval `g.gisenv -s`
# export GRASS voxels to VTK 3D as 3D points, with scaled z values:
SCALE=2
g.message "Exporting to VTK format, scale factor: $SCALE"
# LOCATION_NAME and MAPSET were set by the g.gisenv call above
r3.out.vtk brain_vol dp=2 elevscale=$SCALE \
output=${LOCATION_NAME}_${MAPSET}_brain_vol_scaled${SCALE}.vtk -p

Eventually we can open this new VTK file in Paraview for visual exploration:

# show as volume
# In Paraview: Properties: Apply; Display Repres: volume; etc.
paraview --data=brain_s1_vol_scaled2.vtk

Renderings of the brain volume in Paraview (screenshots)

Fairly easy!

BTW: I have a scan of my non-smoker lungs as well :-)

The post Rendering a brain CT scan in 3D with GRASS GIS 7 appeared first on GFOSS Blog | GRASS GIS Courses.

Visualizing direction-dependent values

When mapping flows or other values that relate to a certain direction, styling these layers gets interesting. I faced this challenge when mapping direction-dependent error values. Neighboring cell pairs were connected by two lines, one in each direction, each with an associated error value. This is what I came up with:

Map of direction-dependent SRTM error values

Each line is drawn with an offset to the right. The size of the offset depends on the width of the line, which in turn depends on the size of the error. You can see the data-defined style properties here:

Data-defined style properties (screenshot)

To indicate the direction, I added a marker line with one > marker at the center. This marker line was assigned the same offset so that it matches the colored line below. I’m quite happy with how these turned out and would love to hear about your approaches to this issue.
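
For reference, the same data-defined width and offset can also be set from Python. A minimal sketch against the current (QGIS 3) PyQGIS API – the field name "error" and the scale factor are assumptions, not the values used for the map above:

from qgis.core import QgsProperty, QgsSymbolLayer
from qgis.utils import iface

layer = iface.activeLayer()  # the line layer with an "error" attribute
line_sym = layer.renderer().symbol().symbolLayers()[0]

# line width grows with the error value ...
line_sym.setDataDefinedProperty(
    QgsSymbolLayer.PropertyStrokeWidth,
    QgsProperty.fromExpression('"error" * 0.2'))

# ... and the offset is half the width, pushing each line to its right
line_sym.setDataDefinedProperty(
    QgsSymbolLayer.PropertyOffset,
    QgsProperty.fromExpression('"error" * 0.2 / 2'))

layer.triggerRepaint()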

Detail of the error map

These figures are part of a recent publication with my AIT colleagues: A. Graser, J. Asamer, M. Dragaschnig: “How to Reduce Range Anxiety? The Impact of Digital Elevation Model Quality on Energy Estimates for Electric Vehicles” (2014).


3D viz with QGIS & three.js

If you are looking for a tool to easily create 3D visualizations of your geodata, look no further! Qgis2threejs is a plugin by Minoru Akagi which exports terrain data, combined with the map canvas image and optional vector data, to an HTML file that can be viewed in 3D in any web browser with WebGL support. To do that, the plugin uses the three.js library.

This is the result of my first experiments with Qgis2threejs. In the following sections, I will show the steps to reproduce it.

Türkenschanzpark, Vienna

click for the interactive version (requires WebGL-capable browser)

1. The data

The building blocks of this visualization are:

  • elevation data and the hillshade derived from this data
  • a base map (WMTS from basemap.at in my case)
  • OSM building data provided by Geofabrik and
  • tree data from the city of Vienna

Load all datasets into QGIS.

2. Preparing the map

Qgis2threejs will overlay the map (as rendered in the QGIS map area) on top of the elevation model. You can combine any number of layers to create your map. I just loaded a basemap.at WMTS and a hillshade layer. To add a nice tree shadow effect, I also added the tree layer (dark grey, 50% transparency, multiply blending).

Map of Türkenschanzpark: basemap, hillshade and tree shadows

3. Preparing the vector features

The vector features in the visualization are buildings and trees. The buildings are based on an OSM building layer. The trees are created from two point layers: one point layer to create the tree trunks (cylinder shape) and a duplicate of this point layer to create the tree crowns (sphere shape).

Load the data and choose the desired fill colors.

4. Using Qgis2threejs

Now we can start Qgis2threejs. The first tab is used to configure the terrain. Just pick the correct elevation data layer. I didn’t modify any of the other default settings.

Qgis2threejs terrain settings (screenshot)

The second tab provides the settings for the vector data. As mentioned in the previous section, the trees are created from two point layers and the buildings are based on a polygon layer. The tree crowns are spheres with a radius of 3 and a z value of 5 above the surface. The tree trunks are cylinders. Finally, the buildings have a height of 10.

Qgis2threejs vector settings (screenshot)

That’s it! Just press “run” and wait. When the export is finished, your default browser (or a different one, if specified in the plugin settings) will open automatically and display the result.
The visualization is interactive: you can tilt it using the left mouse button, pan using the right mouse button, and zoom using the mouse wheel. I found that Firefox used around 1.6 GB of RAM to render this example.

5. Share your visualization

In the browser window, you will see where Qgis2threejs stored the HTML and associated JavaScript files. To share your visualization, you just need to copy these files onto a web server.

I would love to see what you come up with. Please share a link in the comments.


Data-defined properties in QGIS 2.0

In QGIS 2.0, the old “size scale” field has been replaced by data-defined properties, which enable us to control many more properties than just size and rotation. One of the often requested features, for example, is the possibility of data-defined colors:

Data-defined properties (screenshot)

Today’s example map visualizes a dataset of known meteorite landings published on http://visualizing.org/datasets/meteorite-landings. I didn’t clean the data, so there is quite a bunch of meteorites at 0/0.

To create the map, I used QGIS 2.0 feature blending mode “multiply” as well as data-defined size based on meteorite mass:

Map of meteorite landings
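
For reference, data-defined size can also be set from Python. A minimal PyQGIS sketch (written against the current QGIS 3 API; the field name "mass" and the scaling bounds are assumptions):

from qgis.core import QgsProperty
from qgis.utils import iface

layer = iface.activeLayer()  # the meteorite point layer
symbol = layer.renderer().symbol()

# marker size (in mm) derived from the "mass" attribute
symbol.setDataDefinedSize(
    QgsProperty.fromExpression('scale_linear("mass", 0, 100000, 1, 20)'))
layer.triggerRepaint()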

Background oceans and graticule by NaturalEarthData.


Dataviz with OpenSource Tools

Today, I’ve finished my submission for the Hubway Data Visualization Challenge. All parts of the resulting dataviz were created using open source tools. My toolbox for this work contains QGIS, Spatialite, Inkscape, Gimp and OpenOffice Calc. To see the complete submission and read more about it, check the project page.


Mapping Hubway Station Stats

Today, I’ve been working on some station statistics. From the trip data, I calculated incoming and outgoing trips per station as well as each station’s first day of operations. Combining this information makes it possible to calculate the average daily “bike balance”. A balanced station has the same number of incoming and outgoing trips, while an unbalanced station will either run out of bikes or out of empty slots for returns.
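
The balance calculation itself is simple; a sketch of the idea in Python (the station numbers and dates are made up, this is not the original script):

from datetime import date

def daily_bike_balance(incoming, outgoing, first_day, last_day=date(2012, 10, 1)):
    # average daily surplus (+) or deficit (-) of bikes at a station
    days_active = (last_day - first_day).days + 1
    return (incoming - outgoing) / float(days_active)

# a station that opened mid-season and receives more bikes than it sends out:
print(daily_bike_balance(1500, 1350, date(2012, 8, 1)))  # ~2.4 bikes per day surplus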

I’ve published the resulting station map on QGIS Cloud (http://qgiscloud.com/anitagraser/hubway_cloud1) where you can have a look at the bike balance values.

Additionally, I’ve created a mashup in Leaflet pulling together background tiles from Stamen and the cloud-hosted WMS for better orientation:


Exploring Hubway’s Data II

Today, I’ve been experimenting with a new way to visualize origin-destination pairs (ODs). The following image shows my first results:

The idea was to add a notion of direction as well as uncertainty. The “flower petals” have a pointed origin and grow wider towards the middle. (Looking at the final result, they should probably grow much narrower towards the end again.) The area covered by the petals is a simple approximation of where I’d expect the bike routes to run, without performing any routing.

To get there, I reprojected the connection lines to EPSG:3857 and calculated connection length and line orientation using the QGIS Field Calculator’s $length operator and the bearing formula given in the QGIS Wiki:

(atan((xat(-1)-xat(0))/(yat(-1)-yat(0)))) * 180/3.14159 + (180 *(((yat(-1)-yat(0)) < 0) + (((xat(-1)-xat(0)) < 0 AND (yat(-1) - yat(0)) >0)*2)))
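
For reference, the same bearing can be computed more compactly in Python with atan2, which handles the quadrant corrections that the expression spells out by hand (a hypothetical helper, not part of the original workflow):

import math

def bearing(x0, y0, x1, y1):
    # azimuth from north in degrees (0-360), measured clockwise
    return (math.degrees(math.atan2(x1 - x0, y1 - y0)) + 360) % 360

print(bearing(0, 0, 1, 1))  # 45.0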

For the style, I created a new “flower petal” SVG symbol in Inkscape and styled it with varying transparency values: rare connections are more transparent than popular ones. This style is applied to the connection start points. Using the advanced options “size scale” and “rotation”, it is possible to rotate the petals in the right direction and scale them using the previously calculated values for connection length and orientation.

Update

While the above example uses pretty wide petals this one is done with a much narrower petal. I think it’s more appropriate for the data at hand:

Most of the connections are clearly heading southeast, across the Charles River, except for one group of connections pointing in the opposite direction, to Harvard Square.


Exploring Hubway’s Data I

Hubway is a bike sharing system in Boston and they are currently hosting a data visualization challenge. What a great chance to play with some real-world data!

To get started, I loaded both the station Shapefile and the trip CSV into a new Spatialite database. The GUI is really helpful here – everything is done in a few clicks. Afterwards, I decided to look into which station combinations are most popular. The following SQL script creates my connections table:

create table connections (
start_station_id INTEGER,
end_station_id INTEGER,
count INTEGER,
Geometry GEOMETRY);


insert into connections select 
start_station_id, 
end_station_id, 
count(*) as count, 
LineFromText('LINESTRING('||X(a.Geometry)||' '||Y(a.Geometry)||','
                          ||X(b.Geometry)||' '||Y(b.Geometry)||')') as Geometry
 from trips, stations a, stations b
where start_station_id = a.ID 
and end_station_id = b.ID
and a.ID != b.ID
and a.ID is not NULL
and b.ID is not NULL
group by start_station_id, end_station_id;

(Note: This is for Spatialite 2.4, so there is no MakeLine() method. Use MakeLine if you are using 3.0.)

For a first impression, I decided to map popular connections with more than one hundred entries. Wider lines mean more entries. The points show the station locations and they are color coded by starting letter. (I’m not yet sure if they mean anything. They seem to form groups.)

Some of the stations don’t seem to have any strong connections at all. Others are rather busy. The city center and the dark blue axis pointing west seem most popular.

I’m really looking forward to what everyone else will be finding in this dataset.


Exploring Mobility Data Using Time Manager

Data from various vehicles is collected for many purposes in cities worldwide. To get a feeling for just how much data is available, I created the following video using QGIS Time Manager; it was shown at the Austrian Museum of Applied Arts exhibition “MADE 4 YOU – Design for Change”. The video shows one hour of taxi tracks in the city of Vienna:

If you like the video, please go to http://www.ertico.com/2012-its-video-competition-open-vote and vote for it in the category “Videos directed at the general public”.


Space-Time Cubes – Exploring Twitter Streams III

This post continues my quest of exploring the spatial dimension of Twitter streams. I wanted to try one of the classic spatio-temporal visualization methods: space-time cubes, where the vertical axis represents time while the other two map space. Like the two previous examples, this visualization is written in pyprocessing, a Python port of the popular Processing environment.

This space-time cube shows Twitter trajectories that contain at least one tweet in Times Square, New York. The 24-hour day starts at the bottom of the cube and continues to the top. Trajectories are colored based on the timestamp of their first tweet.

Additionally, all trajectories are also drawn in context of the coastline (data: OpenStreetMap) on the bottom of the cube.
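
The core idea is simply to map longitude/latitude to the horizontal plane and the timestamp to the vertical axis. A minimal pyprocessing sketch with made-up random-walk trajectories (not the original script):

from pyprocessing import *
from random import random

# each trajectory is a list of (x, z, hour) tuples
trajectories = []
for _ in range(10):
    x, z = random()*200-100, random()*200-100
    traj = []
    for hour in range(25):
        x += random()*10-5
        z += random()*10-5
        traj.append((x, z, hour))
    trajectories.append(traj)

def setup():
    size(640, 360)

def draw():
    background(0)
    translate(width/2, height/2, -200)
    rotateY(frame.count * 0.01)  # slowly spin the cube
    # vertical cube edges for orientation
    stroke(120)
    for ex in (-120, 120):
        for ez in (-120, 120):
            line(ex, -120, ez, ex, 120, ez)
    # trajectories: bottom of the cube = 0h, top = 24h (the y axis points down)
    stroke(255, 150, 0)
    for traj in trajectories:
        for (x0, z0, t0), (x1, z1, t1) in zip(traj, traj[1:]):
            line(x0, 120 - t0*10, z0, x1, 120 - t1*10, z1)

run()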

While there doesn’t seem to be much going on in the early morning hours, we can see quite a lot of coming and going during the afternoon and evening. From the bunch of vertical lines over Times Square, we can also assume that some of our tweet authors spent considerable time at and near Times Square.

I’ve also created an animated version. Again, I recommend watching it in HD.


Pyprocessing for 3D Animations

I’ve been looking into the 3D capabilities of Pyprocessing for the creation of animated space-time cubes.

There are subtle differences between Processing and Pyprocessing. Processing is documented pretty well but I prefer Python over Java any time. So here is my port of the Processing “Cubes within Cube” example as a reference for how 3D animations are done in Pyprocessing.


(You can watch the animation live on the Processing site.)

from pyprocessing import *
from random import random

cubies = 20
c = [0]*cubies
quadBG = [[None]*6 for _ in range(cubies)]  # one independent inner list per cubie (not aliased copies)

# Controls cubie's movement
x = [0.0]*cubies
y = [0.0]*cubies
z = [0.0]*cubies
xSpeed = [0.0]*cubies
ySpeed = [0.0]*cubies
zSpeed = [0.0]*cubies

# Controls cubie's rotation
xRot = [0.0]*cubies
yRot = [0.0]*cubies
zRot = [0.0]*cubies

stage = None

# Size of external cube
bounds = 300.0

def setup():
  size(640, 360)
  
  for i in range(0,cubies):
    # Each cube face has a random color component
    quadBG[i][0] = color(0)
    quadBG[i][1] = color(51)
    quadBG[i][2] = color(102)
    quadBG[i][3] = color(153)
    quadBG[i][4] = color(204)
    quadBG[i][5] = color(255)

    # Cubies are randomly sized
    cubieSize = random()*10+5
    c[i] =  Cube(cubieSize, cubieSize, cubieSize)

    # Initialize cubie's position, speed and rotation
    x[i] = 0.0
    y[i] = 0.0
    z[i] = 0.0

    xSpeed[i] = random()*4-2
    ySpeed[i] = random()*4-2
    zSpeed[i] = random()*4-2

    xRot[i] = random()*60+40
    yRot[i] = random()*60+40
    zRot[i] = random()*60+40
  

def draw():
  background(50)
  lights()
  
  # Center in display window
  translate(width/2, height/2, -130)
  
  # Outer transparent cube
  noFill()
  
  # Rotate everything, including external large cube
  rotateX(frame.count * 0.001)
  rotateY(frame.count * 0.002)
  rotateZ(frame.count * 0.001)
  stroke(255)
  
  # Draw external large cube
  stage = Cube(bounds, bounds, bounds)
  stage.create()

  # Move and rotate cubies
  for i in range(0,cubies):
    pushMatrix()
    translate(x[i], y[i], z[i])
    rotateX(frame.count*PI/xRot[i])
    rotateY(frame.count*PI/yRot[i])
    rotateZ(frame.count*PI/zRot[i])
    noStroke()
    c[i].create(quadBG[i])
    x[i] += xSpeed[i]
    y[i] += ySpeed[i]
    z[i] += zSpeed[i]
    popMatrix()

    # Draw lines connecting cubies
    stroke(0)
    if i < cubies-1:
      line(x[i], y[i], z[i], x[i+1], y[i+1], z[i+1])

    # Check wall collisions
    if x[i] > bounds/2 or x[i] < -bounds/2:
      xSpeed[i]*=-1
    
    if y[i] > bounds/2 or y[i] < -bounds/2:
      ySpeed[i]*=-1
    
    if z[i] > bounds/2 or z[i] < -bounds/2:
      zSpeed[i]*=-1
    


# Custom Cube Class

class Cube():
  def __init__(self,w,h,d):
    self.vertices = [0]*24
    self.w = w
    self.h = h
    self.d = d

    # cube composed of 6 quads
    #front
    self.vertices[0] =  PVector(-w/2,-h/2,d/2)
    self.vertices[1] =  PVector(w/2,-h/2,d/2)
    self.vertices[2] =  PVector(w/2,h/2,d/2)
    self.vertices[3] =  PVector(-w/2,h/2,d/2)
    #left
    self.vertices[4] =  PVector(-w/2,-h/2,d/2)
    self.vertices[5] =  PVector(-w/2,-h/2,-d/2)
    self.vertices[6] =  PVector(-w/2,h/2,-d/2)
    self.vertices[7] =  PVector(-w/2,h/2,d/2)
    #right
    self.vertices[8] =  PVector(w/2,-h/2,d/2)
    self.vertices[9] =  PVector(w/2,-h/2,-d/2)
    self.vertices[10] =  PVector(w/2,h/2,-d/2)
    self.vertices[11] =  PVector(w/2,h/2,d/2)
    #back
    self.vertices[12] =  PVector(-w/2,-h/2,-d/2)
    self.vertices[13] =  PVector(w/2,-h/2,-d/2)
    self.vertices[14] =  PVector(w/2,h/2,-d/2)
    self.vertices[15] =  PVector(-w/2,h/2,-d/2)
    #top
    self.vertices[16] =  PVector(-w/2,-h/2,d/2)
    self.vertices[17] =  PVector(-w/2,-h/2,-d/2)
    self.vertices[18] =  PVector(w/2,-h/2,-d/2)
    self.vertices[19] =  PVector(w/2,-h/2,d/2)
    #bottom
    self.vertices[20] =  PVector(-w/2,h/2,d/2)
    self.vertices[21] =  PVector(-w/2,h/2,-d/2)
    self.vertices[22] =  PVector(w/2,h/2,-d/2)
    self.vertices[23] =  PVector(w/2,h/2,d/2)
  
  def create(self,quadBG=None):
    # Draw cube
    for i in range(0,6):
      if quadBG:
          fill(quadBG[i])
      beginShape(QUADS)
      for j in range(0,4):
        vertex(self.vertices[j+4*i].x, self.vertices[j+4*i].y, self.vertices[j+4*i].z)
      endShape()

run()


A Visual Exploration of Twitter Streams II

After my first shot at analyzing Twitter data visually I received a lot of great feedback. Thank you!

For my new attempt, I worked on incorporating your feedback, such as: filtering unrealistic location changes, showing connections “grow” instead of just popping up, and zooming to an interesting location. The new animation therefore focuses on Manhattan – one of the places with reasonably high geotweet coverage.

The background is based on OpenStreetMap coastline data which I downloaded using the QGIS OSM plugin and rendered in pyprocessing together with the geotweets. To really see what’s going on, switch to HD resolution and full screen:

It’s pretty much work in progress. The animation shows chaotic patterns similar to those seen in others’ attempts at animating tweets. To me, the distribution of tweets looks reasonable, and many of the connection lines seem to actually coincide with the bridges leading to and from Manhattan.

This work is an attempt at discovering the potential of Twitter data and at the same time learning some pyprocessing which will certainly be useful for many future tasks. The next logical step seems to be to add information about interactions between users and/or to look at the message content. Another interesting task would be to add interactivity to the visualization.


A Visual Exploration of Twitter Streams

Twitter streams are curious things, especially the spatial data part. I’ve been using Tweepy to collect tweets from the public timeline and what did I discover? Tweets can have up to three different spatial references: “coordinates”, “geo” and “place”. I’ll still have to do some more reading on how to interpret these different attributes.

For now, I have been using “coordinates” to explore the contents of a stream which was collected over a period of five hours using

stream.filter(follow=None,locations=(-180,-90,180,90))

for global coverage. In the video, each georeferenced tweet produces a new dot on the map and if the user’s coordinates change, a blue arrow is drawn:

While pretty, these long blue arrows seem rather suspicious. I’ve only been monitoring the stream for around five hours, and any cross-Atlantic flight would take longer than that. I’m either misinterpreting the tweets or these coordinates are fake. Seems like it is time to dive deeper into the data.
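
For reference, a minimal geotweet collector along these lines could look as follows. This is a sketch against the old tweepy 3.x streaming API (StreamListener was removed in tweepy 4); the credentials are placeholders:

import tweepy

class GeoListener(tweepy.StreamListener):
    def on_status(self, status):
        # "coordinates" is GeoJSON-style: {'type': 'Point', 'coordinates': [lon, lat]}
        if status.coordinates:
            lon, lat = status.coordinates['coordinates']
            print(status.user.screen_name, lon, lat)

auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')  # placeholder credentials
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')

stream = tweepy.Stream(auth=auth, listener=GeoListener())
stream.filter(locations=[-180, -90, 180, 90])  # world-wide bounding box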


Glowing Hot Maps – QGIS Meets Gimp

The waiting is over: Gimp 2.8 is finally here. That is reason enough to take it for a quick test run!

How about a new look for the QGIS user map?

This “glowing hot” map was made using the Gimp filter of the same name:

For the user point layer, I selected a simple point style with high transparency and exported land and user points separately from the print composer.

user points as exported from QGIS

In Gimp, I applied the “glowing hot” filter to the user points and combined the layers. The trick here is to first use “Color to alpha” on the user point layer and turn black to transparent. This way, the “glowing hot” filter will only be applied to the remaining points.

Gimp 2.8 RC1 is close enough to the previous version to get comfortable with quickly. I like the single-window mode, even if it is sometimes hard to tell which part of the GUI has focus.

Open source GIS and image editing for a perfect workflow.


Mapping the Night

Most maps of night time lights show the land masses lit brightly by city lights. But the oceans are not as dark as these maps suggest. NOAA/NGDC datasets available through edenextdata.com show very bright spots in the North Sea:

Night time lights trace the coast but illuminate the sea too.

The dataset description mentions that the sensors pick up moonlit clouds, lights from human settlements, fires, gas flares, heavily lit fishing boats, lightning and the aurora. So might these spots be fishing boats?


Mapping Density with Hexagonal Grids

A very common approach for mapping point density is to use heat maps. If you are aiming for a different style, give hexagonal grids a try. The workflow is very simple in QGIS:

  1. Load the point layer
  2. Create a hexagonal grid using MMQGIS – Create Grid Layer
  3. Count points per polygon (Vector menu)
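
Steps 2 and 3 can also be scripted. A rough PyQGIS sketch against the current Processing framework (the post used the MMQGIS plugin and the Vector menu instead; the layer name, cell size and parameter values are assumptions – check the algorithm help):

import processing
from qgis.core import QgsProject

trees = QgsProject.instance().mapLayersByName('trees')[0]  # hypothetical layer name

grid = processing.run("native:creategrid", {
    'TYPE': 4,  # 4 = hexagon
    'EXTENT': trees.extent(),
    'HSPACING': 200, 'VSPACING': 200,  # cell size in layer units
    'CRS': trees.crs(),
    'OUTPUT': 'memory:hexgrid'})['OUTPUT']

counts = processing.run("native:countpointsinpolygon", {
    'POLYGONS': grid,
    'POINTS': trees,
    'FIELD': 'NUMPOINTS',
    'OUTPUT': 'memory:tree_density'})['OUTPUT']

QgsProject.instance().addMapLayer(counts)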

I’ve applied this method to an OGD dataset of the Viennese tree cadastre containing 119,744 tree positions:

Default style: One dot per tree

Rendering tree counts per hexagonal grid cell reveals some of Vienna’s greenest spots, such as the Prater or Türkenschanzpark.

Tree density in a hexagonal grid

There’s also a printable version.

Some notes on the necessary steps:

MMQGIS – Create Grid Layer performs great: creating the 18,400 hexagons in this map was very fast. Note, though, that this tool doesn’t seem to write correct projection information to the resulting Shapefile, so it is necessary to set the projection manually after loading the file.

As a result, it is very likely that the Points in Polygon tool will warn you that the point and polygon layers are not in the same projection. I ignored the warning and everything went fine. This step was reasonably fast considering the number of points (119,744) and polygons (18,400).


Mapping QGIS Users

For “QGIS Users Around the World” Gary Sherman collected and geocoded a few weeks of accesses to the plugin repositories. This map is my first attempt at mapping the data for use in QGIS publications:

Considering the coarse resolution of geocoded IP addresses, I’ve decided to count the number of unique IP addresses within each area (5×5 degrees). We can make out a lot of activity in Europe, Japan, Brazil and the US. The high number of accesses from the US Midwest is due to IPs being mapped to country level only.
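
Counting unique IPs per 5×5-degree cell boils down to snapping each coordinate to its cell and collecting IPs in a set. A small sketch of the idea in Python (the input format is hypothetical):

from collections import defaultdict

def cell(lon, lat, size=5):
    # lower-left corner of the size x size degree cell containing the point
    return (int(lon // size) * size, int(lat // size) * size)

# records: (ip, lon, lat) tuples, e.g. parsed from the geocoded access log
records = [('10.0.0.1', 16.37, 48.21), ('10.0.0.1', 16.37, 48.21), ('10.0.0.2', -87.6, 41.9)]

unique_ips = defaultdict(set)
for ip, lon, lat in records:
    unique_ips[cell(lon, lat)].add(ip)

counts = {c: len(ips) for c, ips in unique_ips.items()}
print(counts)  # {(15, 45): 1, (-90, 40): 1}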

I would love to hear your feedback on this one!


Nice Animations with Time Manager’s Offset Feature

You probably know this video from my previous post “Tweets to QGIS”. Today, I want to show you how it is done.

After importing the Twitter JSON file, I saved it as a Shapefile.
Every point in the Shapefile contains the timestamp of the tweet. Additionally, I added a second field called “forever” which will allow me to configure Time Manager to show features permanently.

A "forever" field will help with showing features permanently.

To create the flash effect you see in the video, we load the tweet Shapefile three times. Every layer gets a different role and style in the final animation:

  • Layer “start_flash” is a medium sized dot that marks the appearance of a new tweet.
  • Layer “big_flash” is a bigger dot of the same color which will appear after “start_flash”.
  • Layer “permanent” is a small dot that will be visible even after the flash vanishes.

Three layers with different styles will make the animation more interesting.

styling the tweet layers

We’ll plan the final animation with a time step size of 10 seconds. That means that every animation frame will cover a real-world timespan of 10 seconds.

We configure Time Manager by adding all three tweet layers:
Layer “start_flash” starts at the original time t. Layer “big_flash” gets an offset of -10 seconds, which means that it will display ten seconds after time t. Layer “permanent” gets an offset of -20 seconds and ends at time forever.

Layers can be timed using the "offset" feature.

Finally – in Time Manager dock – we can start the animation with a time step size of 10 seconds:

Use a time step size of 10 seconds so it fits to the offset values we specified earlier.

Besides watching the animation inside QGIS, Time Manager also enables you to export the animation to an image series using the “Export Video” button. Actual video export is not implemented yet, but you can use mencoder on the resulting image series to create a video file:

mencoder "mf://*.PNG" -mf fps=10 -o output.avi -ovc lavc -lavcopts vcodec=mpeg4

Time offsets are a new feature in version 0.4 of Time Manager. You can get it directly from the project SVN and soon from the official QGIS repo.


More Color Ramps for QGIS

Colorbrewer is a great resource for visually pleasing gradients that can be used for mapping. It was already possible to use Colorbrewer ramps in QGIS, but it was necessary to create the ramp with the final number of classes in mind.

Creating a Colorbrewer ramp

That’s why I sat down and created continuous ramps from the Colorbrewer data:

Colorbrewer Ramps in QGIS Style Manager

If you want to use them, just import the following XML file into QGIS Style Manager:

<!DOCTYPE qgis_style>
<qgis_style version="0">
  <symbols/>
  <colorramps>
    <colorramp type="gradient" name="Blues">
      <prop k="color1" v="247,251,255,255"/>
      <prop k="color2" v="8,48,107,255"/>
      <prop k="stops" v="0.13;222,235,247,255:0.26;198,219,239,255:0.39;158,202,225,255:0.52;107,174,214,255:0.65;66,146,198,255:0.78;33,113,181,255:0.9;8,81,156,255"/>
    </colorramp>
    <colorramp type="gradient" name="BrBG">
      <prop k="color1" v="166,97,26,255"/>
      <prop k="color2" v="1,133,113,255"/>
      <prop k="stops" v="0.25;223,194,125,255:0.5;245,245,245,255:0.75;128,205,193,255"/>
    </colorramp>
    <colorramp type="gradient" name="BuGn">
      <prop k="color1" v="237,248,251,255"/>
      <prop k="color2" v="0,109,44,255"/>
      <prop k="stops" v="0.25;178,226,226,255:0.5;102,194,164,255:0.75;44,162,95,255"/>
    </colorramp>
    <colorramp type="gradient" name="BuPu">
      <prop k="color1" v="237,248,251,255"/>
      <prop k="color2" v="129,15,124,255"/>
      <prop k="stops" v="0.25;179,205,227,255:0.5;140,150,198,255:0.75;136,86,167,255"/>
    </colorramp>
    <colorramp type="gradient" name="GnBu">
      <prop k="color1" v="240,249,232,255"/>
      <prop k="color2" v="8,104,172,255"/>
      <prop k="stops" v="0.25;186,228,188,255:0.5;123,204,196,255:0.75;67,162,202,255"/>
    </colorramp>
    <colorramp type="gradient" name="Greens">
      <prop k="color1" v="247,252,245,255"/>
      <prop k="color2" v="0,68,27,255"/>
      <prop k="stops" v="0.13;229,245,224,255:0.26;199,233,192,255:0.39;161,217,155,255:0.52;116,196,118,255:0.65;65,171,93,255:0.78;35,139,69,255:0.9;0,109,44,255"/>
    </colorramp>
    <colorramp type="gradient" name="Greys">
      <prop k="color1" v="250,250,250,255"/>
      <prop k="color2" v="5,5,5,255"/>
    </colorramp>
    <colorramp type="gradient" name="OrRd">
      <prop k="color1" v="254,240,217,255"/>
      <prop k="color2" v="179,0,0,255"/>
      <prop k="stops" v="0.25;253,204,138,255:0.5;252,141,89,255:0.75;227,74,51,255"/>
    </colorramp>
    <colorramp type="gradient" name="Oranges">
      <prop k="color1" v="255,245,235,255"/>
      <prop k="color2" v="127,39,4,255"/>
      <prop k="stops" v="0.13;254,230,206,255:0.26;253,208,162,255:0.39;253,174,107,255:0.52;253,141,60,255:0.65;241,105,19,255:0.78;217,72,1,255:0.9;166,54,3,255"/>
    </colorramp>
    <colorramp type="gradient" name="PRGn">
      <prop k="color1" v="123,50,148,255"/>
      <prop k="color2" v="0,136,55,255"/>
      <prop k="stops" v="0.25;194,165,207,255:0.5;247,247,247,255:0.75;166,219,160,255"/>
    </colorramp>
    <colorramp type="gradient" name="PiYG">
      <prop k="color1" v="208,28,139,255"/>
      <prop k="color2" v="77,172,38,255"/>
      <prop k="stops" v="0.25;241,182,218,255:0.5;247,247,247,255:0.75;184,225,134,255"/>
    </colorramp>
    <colorramp type="gradient" name="PuBu">
      <prop k="color1" v="241,238,246,255"/>
      <prop k="color2" v="4,90,141,255"/>
      <prop k="stops" v="0.25;189,201,225,255:0.5;116,169,207,255:0.75;43,140,190,255"/>
    </colorramp>
    <colorramp type="gradient" name="PuBuGn">
      <prop k="color1" v="246,239,247,255"/>
      <prop k="color2" v="1,108,89,255"/>
      <prop k="stops" v="0.25;189,201,225,255:0.5;103,169,207,255:0.75;28,144,153,255"/>
    </colorramp>
    <colorramp type="gradient" name="PuOr">
      <prop k="color1" v="230,97,1,255"/>
      <prop k="color2" v="94,60,153,255"/>
      <prop k="stops" v="0.25;253,184,99,255:0.5;247,247,247,255:0.75;178,171,210,255"/>
    </colorramp>
    <colorramp type="gradient" name="PuRd">
      <prop k="color1" v="241,238,246,255"/>
      <prop k="color2" v="152,0,67,255"/>
      <prop k="stops" v="0.25;215,181,216,255:0.5;223,101,176,255:0.75;221,28,119,255"/>
    </colorramp>
    <colorramp type="gradient" name="Purples">
      <prop k="color1" v="252,251,253,255"/>
      <prop k="color2" v="63,0,125,255"/>
      <prop k="stops" v="0.13;239,237,245,255:0.26;218,218,235,255:0.39;188,189,220,255:0.52;158,154,200,255:0.65;128,125,186,255:0.75;106,81,163,255:0.9;84,39,143,255"/>
    </colorramp>
    <colorramp type="gradient" name="RdBu">
      <prop k="color1" v="202,0,32,255"/>
      <prop k="color2" v="5,113,176,255"/>
      <prop k="stops" v="0.25;244,165,130,255:0.5;247,247,247,255:0.75;146,197,222,255"/>
    </colorramp>
    <colorramp type="gradient" name="RdGy">
      <prop k="color1" v="202,0,32,255"/>
      <prop k="color2" v="64,64,64,255"/>
      <prop k="stops" v="0.25;244,165,130,255:0.5;255,255,255,255:0.75;186,186,186,255"/>
    </colorramp>
    <colorramp type="gradient" name="RdPu">
      <prop k="color1" v="254,235,226,255"/>
      <prop k="color2" v="122,1,119,255"/>
      <prop k="stops" v="0.25;251,180,185,255:0.5;247,104,161,255:0.75;197,27,138,255"/>
    </colorramp>
    <colorramp type="gradient" name="RdYlBu">
      <prop k="color1" v="215,25,28,255"/>
      <prop k="color2" v="44,123,182,255"/>
      <prop k="stops" v="0.25;253,174,97,255:0.5;255,255,191,255:0.75;171,217,233,255"/>
    </colorramp>
    <colorramp type="gradient" name="RdYlGn">
      <prop k="color1" v="215,25,28,255"/>
      <prop k="color2" v="26,150,65,255"/>
      <prop k="stops" v="0.25;253,174,97,255:0.5;255,255,192,255:0.75;166,217,106,255"/>
    </colorramp>
    <colorramp type="gradient" name="Reds">
      <prop k="color1" v="255,245,240,255"/>
      <prop k="color2" v="103,0,13,255"/>
      <prop k="stops" v="0.13;254,224,210,255:0.26;252,187,161,255:0.39;252,146,114,255:0.52;251,106,74,255:0.65;239,59,44,255:0.78;203,24,29,255:0.9;165,15,21,255"/>
    </colorramp>
    <colorramp type="gradient" name="Spectral">
      <prop k="color1" v="215,25,28,255"/>
      <prop k="color2" v="43,131,186,255"/>
      <prop k="stops" v="0.25;253,174,97,255:0.5;255,255,191,255:0.75;171,221,164,255"/>
    </colorramp>
    <colorramp type="gradient" name="YlGn">
      <prop k="color1" v="255,255,204,255"/>
      <prop k="color2" v="0,104,55,255"/>
      <prop k="stops" v="0.25;194,230,153,255:0.5;120,198,121,255:0.75;49,163,84,255"/>
    </colorramp>
    <colorramp type="gradient" name="YlGnBu">
      <prop k="color1" v="255,255,204,255"/>
      <prop k="color2" v="37,52,148,255"/>
      <prop k="stops" v="0.25;161,218,180,255:0.5;65,182,196,255:0.75;44,127,184,255"/>
    </colorramp>
    <colorramp type="gradient" name="YlOrBr">
      <prop k="color1" v="255,255,212,255"/>
      <prop k="color2" v="153,52,4,255"/>
      <prop k="stops" v="0.25;254,217,142,255:0.5;254,153,41,255:0.75;217,95,14,255"/>
    </colorramp>
    <colorramp type="gradient" name="YlOrRd">
      <prop k="color1" v="255,255,178,255"/>
      <prop k="color2" v="189,0,38,255"/>
      <prop k="stops" v="0.25;254,204,92,255:0.5;253,141,60,255:0.75;240,59,32,255"/>
    </colorramp>
  </colorramps>
</qgis_style>

For a big selection of point, line and polygon styles check “QGIS symbology set” by S.S. Rebelious.


Converting Videos to Animated GIF

When giving presentations using a computer that’s not your own, trying to show video clips can end badly. It’s much safer to use animated GIFs instead. Luckily it’s easy to convert videos to animated GIF on the command line using mplayer:

mplayer movie.avi -vo gif89a:output=movie.gif -vf scale=600:337 -ss 4 

This command converts “movie.avi” to “movie.gif”, rescales the output to 600×337 pixels, and starts at second 4 of the video – skipping the beginning.

