OSGeo Planet

Paul Ramsey: More Speech for Money

OSGeo Planet - 2 hours 45 min ago

The political pundits in BC are making a great deal of noise (see V. Palmer's inside baseball assessment if you care) about an amendment to the Elections Act that says that:

"the chief electoral officer must provide … to a registered political party, in respect of a general election … a list of voters that indicates which voters on the list voted in the general election"

Meanwhile, they are ignoring the BC Liberals fundamentally changing the money dynamic of the fixed election date by eliminating the 60-day "pre-campaign" period.

"Section 198 is amended (a) by repealing subsections (1) and (2) and substituting the following: (1) In respect of a general election, the total value of election expenses incurred by a registered political party during the campaign period must not exceed $4.4 million."

The Elections Act currently divides up the election period before a fixed election into two "halves": the 60 days before the official campaign, and the campaign period itself (about 28 days if I recall correctly). In the first 60 days, candidates can spend a maximum of $70,000 and parties a maximum of $1.1 million. In the campaign period, candidates can spend another $70,000 and parties as much as $4.4 million.

The intent of the "pre-campaign" period is clearly to focus campaigning on the campaign period itself, by limiting the amount of early spending by parties. The "money density" of the pre-campaign period is about $18,000 / day in party spending; in the campaign period, it is almost $160,000 / day.

This is all very public-spirited, and contributes to a nice focussed election period. But (BUT!) the BC Liberals currently have more money than they know what to do with, so it is in their interest to be able to focus all that money as close to the event as possible. And rather than simply raising the pre-campaign spending limit, they went one better: they removed it altogether. They can spend unlimited amounts of money as close as 28 days before election day, 21 days before the opening of advance polls.

Let me repeat that: they can spend unlimited amounts of money.

So in British Columbia now, it is legal to raise unlimited amounts of money from corporations, unions and individuals (and some individuals and corporations have each donated over $100,000 a year to the BC Liberals), and it is legal to spend unlimited amounts of money, right up to within 28 days of election day.

See any problems with that?

Categories: OSGeo Planet

gisky: An update on Debian and Ubuntu GIS

OSGeo Planet - 3 hours 27 min ago
Last week was an important week for Debian and Ubuntu: both distributions had a release. Debian released its new stable release "8.0" nicknamed "jessie".



Deep in the release notes you will find a passage which may be interesting for anyone interested in GIS on this platform:

During the jessie development cycle many changes from UbuntuGIS were merged back into Debian GIS. The collaboration with UbuntuGIS and OSGeo-Live projects was improved, resulting in new packages and contributors. Visit Debian GIS tasks pages to see the full range of GIS software inside Debian and the Debian GIS homepage for more information.

This means that in jessie you will find a number of new packages (owslib, pgrouting, spatialite-gui, tinyows) and updates to many of the large well-known packages: gdal, mapserver, postgis, qgis, saga and grass, but also to OpenStreetMap-related packages such as josm.

If you are looking for a stable distribution which offers many GIS packages out of the box, Debian jessie is definitely the way to go, as it will be supported for five years.



Just a few days before the release of jessie, Ubuntu also had a new release: 15.04, nicknamed "vivid". Much of the work that was done in Debian for jessie is included in this release (since Ubuntu is based on Debian), and some packages even got synced from the development versions of Debian: you will find that vivid contains the current release of e.g. gdal, postgis and saga, without having to rely on a third-party archive!

All of this was possible because the Debian GIS team, and Sebastic in particular, have been very active. Now that jessie is released, more energy will go into packaging new software and new versions, so if your pet GIS project is not in Debian/Ubuntu yet (or not as up to date as you would like), it is the ideal moment to join!

Categories: OSGeo Planet

Geomatic Blog: Dear OSM, just chill out.

OSGeo Planet - 4 hours 47 min ago

This is kinda a reply to Gary Gale’s “Dear OSM, it’s time to get your finger out“. The more I read that, the less sense it makes to me.

I think of myself as a Linux nerd. I consider myself a hacker. And I’ve spoken so many times about open/libre licensing in conferences the issue became boring.

A couple of years ago, a psychologist asked me a question as part of a job interview: What makes you angry? And my response was clear: things not working, and logical fallacies. So my brain hurt a little when I read these particular sentences in Gary’s blog post:

There are really only three sources of global mapping […]: NAVTEQ, TeleAtlas, and OpenStreetMap. […]

Surely now is the moment for OpenStreetMap to accelerate adoption, usage and uptake? But why hasn’t this already happened?

See, a tiny part of my brain screams “fallacy“. «OpenStreetMap has things in common with NAVTEQ and TeleAtlas, ergo it has to play in the same field and by the same rules as NAVTEQ and TeleAtlas».

Maybe OSM was born as SteveC’s way to cover the lack of affordable data sources, and then as a way for him and his VC-fueled CloudMade to compete with them. But for me and a whole lot of nerds, OSM basically was, and still is, a playground where we can run database queries all night long. OSM is a wholly different animal.

In 2010 I argued that Geo businesses and Geo hackers are playing the same game, but with different goals, which makes for an interesting game; a game in which it would be foolish to think that the opponent will play for the same goal as you. You have to play this game to maximize your score, which is not a synonym for decreasing the opponent’s score.

In other words: when I put something into OSM, I don’t frakking care what happens to NAVTEQ or TeleAtlas. The same way when I cook something for friends I don’t care what happens to the local pizza joint.

See, maps are an infrastructure. In Spanish GIS circles, especially those around the Spanish NSDI, cartography is often called “the infrastructure of infrastructures”. You need maps to plan roads, power lines, land zoning. You need stupidly accurate maps to collect taxes based on how many square centimeters your house has, or to give out grants based on exactly how many olive trees you own.

During the late 2000’s I heard a lot of criticism about OSM. But most of it didn’t come from data-collecting companies – it came from public servants. “Volunteers use cheap GPS with low accuracy, OSM will never be accurate”, they said. “The OSM data model cannot be loaded into ArcGIS and won’t be of any use”, they said. “OSM data will never be authoritative”, they said. And a few years later, this happened:

http://www.diariodenautica.com/los-ministros-del-interior-de-espana-y-francia-visitan-el-cecorvigmar/

That, right there, is the Guardia Civil (who are public servants) in a freakin’ control room using OSM for freakin’ pan-European coastal border patrols. Which means OSM is a freakin’ de facto infrastructure for sovereignty.

Fact is, government agencies play a wholly different game than commercial providers. The goal of a govt’ agency (and specifically those which maintain infrastructures) is to provide a service to citizens, not to make money. As a thought exercise, think about the public servants who place the border milestones, or the ones checking road surface quality, and guess how many fucks they give about NAVTEQ or TeleAtlas.

OSM is about the ownership of infrastructure. It’s about the efficiency of copyright law. It’s all about the digital commons, dammit.

And this is where I kick a wasps’ nest by turning this post into a political one.

A capitalistic liberal will argue that infrastructure is better handled by competing private entities, renting it to citizens for a fee. But remember, I’m a Spaniard, and that means I’ve seen infrastructures such as power lines, water companies, telcos, motorways, airports and banks privatized under the excuse that they would theoretically run more efficiently in private hands. We have a nice word for what happened in the real world: “expolio”, which English speakers might translate as “plunder”.

Thanks but no, thanks. I want infrastructures to be as close to the commons as possible. Maybe it’s because I now live in the land of the dugnad, and my bias makes me see maintaining the commons as beneficial for the society as a whole.

So when you look at OSM (or at the Wikipedia, for that matter) you’re really looking at a digital common good. Which puts OSM in the same basket as other digital common goods, such as programming languages, the radioelectric spectrum, technical RFCs, or state-owned cartography. Now go read the tragedy of the digital commons.

It’s sad to live in a world where money-making is held above many commons, even at the expense of the commons. Fortunately it’s difficult for a private entity to pollute air, water or the radioelectric spectrum without being noticed, but unfortunately copyright law cares next to nothing about the intellectual commons.

<rant>Please, someone explain to me how giving me intellectual ownership of something I thought about until 70 years after my death makes me think about better things; then explain to me how that reverts into the common good. </rant>

TL;DR: Dear OpenStreetMap: just chill out and don’t listen to what they say. Corporations may come and go, but a common infrastructure such as you is here to stay.


Filed under: opinión
Categories: OSGeo Planet

Faunalia: Do you like having QGIS well translated into Italian? Now you can contribute

OSGeo Planet - Mon, 2015-04-27 18:02
Having all of QGIS, including the application, the manuals and the website, translated into Italian is a great convenience; this requires a considerable effort, so your help is essential. Make a donation via: http://qgis.it/#translation
Categories: OSGeo Planet

Paul Ramsey: GIS "Data Models"

OSGeo Planet - Mon, 2015-04-27 16:17

Most IT professionals have some expectation, having received a basic education on relational data modelling, that a model for a medium sized problem might look like this:

Why is it, then, that production GIS data flows so consistently produce models that look like this:

What is wrong with us?!?? I bring up this rant only because I was just told that some users find the PostgreSQL 1600 column limit constraining since it makes it hard to import the Esri census data, which are "modelled" into tables that are presumably wider than they are long.

Categories: OSGeo Planet

Slashgeo (FOSS Articles): New Scientific Journal: Open Geospatial Data, Software and Standards

OSGeo Planet - Mon, 2015-04-27 12:22

Thanks to Between the Poles I learned about the new SpringerOpen journal named ‘Open Geospatial Data, Software and Standards’. Its content is published under a Creative Commons license (CC-BY).

The aims and scope: “Open Geospatial Data, Software and Standards provides an advanced forum for the science and technology of open data, crowdsourced information, and sensor web through the publication of reviews and regular research papers. The journal publishes articles that address issues related, but not limited to, the analysis and processing of open geo-data, standardization and interoperability of open geo-data and services, as well as applications based on open geo-data. The journal is also meant to be a space for theories, methods and applications related to crowdsourcing, volunteered geographic information, as well as Sensor Web and related topics.”

The post New Scientific Journal: Open Geospatial Data, Software and Standards appeared first on Slashgeo.org.

Categories: OSGeo Planet

From GIS to Remote Sensing: Published the User Manual of the Semi-Automatic Classification Plugin v. 4.3.0 for QGIS

OSGeo Planet - Sun, 2015-04-26 10:14

I have published the new version of the User Manual of the Semi-Automatic Classification Plugin v. 4.3.0 for QGIS.
This new documentation illustrates the functions of the Semi-Automatic Classification Plugin v. 4.3. I have improved the chapter about GIS and Remote Sensing with information about the classification algorithms and spectral distances. I am going to write the basic tutorials very soon.
It is possible to download the User Manual (licensed under a Creative Commons Attribution-ShareAlike 4.0 International License) in English from this link. Also, an online version in English is available here.
Categories: OSGeo Planet

BostonGIS: PostGIS In Action 2nd Edition fresh off the presses

OSGeo Planet - Fri, 2015-04-24 21:08

Just got our shipment of PostGIS In Action 2nd Edition. Here is one here.

It's a wee bit fatter than the First Edition (by about 100 pages). I'm really surprised the page count didn't go over 600 pages given the amount of additional ground this edition covers. It covers a lot more raster than the First Edition and has chapters dedicated to PostGIS topology and the PostGIS tiger geocoder.

Categories: OSGeo Planet

Jackie Ng: Custom GDAL binaries for MapGuide Open Source 2.6 and 3.0

OSGeo Planet - Fri, 2015-04-24 13:44
A question that regularly gets asked on our mailing list is how to get the GDAL FDO provider to work with formats like ECW or MrSID. Our normal response would be (provided you are licensed to use ECW, MrSID or any other non-standard GDAL-supported format) to point you over to GIS Internals and grab one of their custom Windows GDAL binaries to replace the GDAL dlls in your current MapGuide installation.

The reason we ask you to do this is because when we build GDAL for use with the FDO provider, we build GDAL using only the standard profile of supported formats, that is to say, any format listed here where the "Compiled by default" option is unconditionally "Yes". It is not possible for us logistically to build GDAL/OGR with the proverbial kitchen sink of raster/vector format support, so that's where GIS Internals comes in, as their builds of GDAL/OGR have greater raster/vector format support. As long as you grab the same release of GDAL and make sure to pick the build that was built with the same MSVC compiler used to build the release of MapGuide/FDO you're using, you should then have GDAL and OGR FDO providers with expanded vector and raster format support.

This suggestion worked up until the 2.5.2 release, where the right version of GDAL built with the right version of MSVC (2010 at the time) was available for download. But for 2.6 and the (pending) 3.0 releases, this suggestion is not applicable because that site does not offer a MSVC 2012 build of GDAL 1.10, which is what MapGuide 2.6 and 3.0 both use for their GDAL FDO provider.

So this leaves some of you in a pickle, being stuck on 2.5.2 and unable to move to 2.6 or 3.0 because you need to support one of these esoteric data formats. Well, I have partially alleviated this issue for you.

Tamas has not only made these custom GDAL binaries available for download, but also the development kits used to build them. So over the past few days, I grabbed the MSVC 2012 dev kit, paired it with our internal GDAL 1.10 source tree in FDO, made a few tweaks to some makefiles here and there, and here's the end result.

A custom build of GDAL 1.10 with support for the following additional raster data formats:
  • ECW (rw): ERDAS Compressed Wavelets (SDK 3.x)
  • JP2ECW (rw+v): ERDAS JPEG2000 (SDK 3.x)
  • FITS (rw+): Flexible Image Transport System
  • GMT (rw): GMT NetCDF Grid Format
  • netCDF (rw+s): Network Common Data Format
  • WCS (rovs): OGC Web Coverage Service
  • WMS (rwvs): OGC Web Map Service
  • HTTP (ro): HTTP Fetching Wrapper
  • Rasterlite (rws): Rasterlite
  • PostGISRaster (rws): PostGIS Raster driver
  • MBTiles (rov): MBTiles
And support for the following additional vector formats:
  • "PostgreSQL" (read/write)
  • "NAS" (readonly)
  • "LIBKML" (read/write)
  • "Interlis 1" (read/write)
  • "Interlis 2" (read/write)
  • "SQLite" (read/write)
  • "VFK" (readonly)
  • "OSM" (readonly)
  • "WFS" (readonly)
  • "GFT" (read/write)
  • "CouchDB" (read/write)
  • "ODS" (read/write)
  • "XLSX" (read/write)
  • "ElasticSearch" (read/write)
  • "PDF" (read/write)
You might notice some omissions from this list. Where's MrSID? Where's Oracle? Where's $NOT_COMPILED_BY_DEFAULT_DATA_FORMAT?
Well, I did say I have partially alleviated the issue and not fully alleviated it. The issue is that, due to what I gather are licensing restrictions, the development kit can't bundle the necessary headers and libraries needed to build GDAL with driver support for MrSID, OCI, etc. As such, the custom build of GDAL I have made available does not include support for such formats.
What can be done about this? For something like Oracle, we already have a dedicated FDO provider. For something like MrSID? I'm afraid you're out of luck. You'll either have to stay on the 2.5.2 release that much longer, or just bite the bullet and gdal_translate those MrSID files into something more accessible. I've heard some good things about Rasterlite. I've also heard that you can get great performance out of carefully prepared GeoTiffs.
Anything to liberate yourself from MrSID, because you won't see the right GDAL binaries with this support built in for the foreseeable future.
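If you do go down that route, here is a rough, hypothetical sketch of the batch conversion as a Python script. The file paths are illustrative, and it assumes a GDAL build that can read MrSID (for example one of the GISInternals builds) is on your PATH; tune the creation options to your data.

import glob
import os
import subprocess

# Convert every MrSID file in a folder to a tiled, compressed GeoTiff
# and build overviews (paths and options are illustrative only).
for sid in glob.glob(r"C:\data\rasters\*.sid"):
    tif = os.path.splitext(sid)[0] + ".tif"
    subprocess.check_call([
        "gdal_translate", "-of", "GTiff",
        "-co", "TILED=YES", "-co", "COMPRESS=JPEG",
        sid, tif])
    # Pre-built overviews are a big part of a "carefully prepared" GeoTiff
    subprocess.check_call(["gdaladdo", "-r", "average", tif,
                           "2", "4", "8", "16", "32"])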
You can find the download links for the custom GDAL builds in our updated GDAL provider guide for MapGuide 2.6 and 3.0.
One more thing before I finish that is worth re-iterating: some formats, like ECW, require you to have a license to use the technology in a server environment. Other formats carry their own licensing shenanigans, so make sure you are properly licensed to use any of the additional formats that are made available with this custom build of GDAL. The GISInternals build system repo on GitHub has all the applicable licenses in RTF format for your perusal.
Also worth pointing out is that this custom build of GDAL is not supported by me or anyone on the MapGuide development team. I only make this build available so that you have the ability to access these additional data formats should you so choose, and nothing more. There is no obligation (inferred or otherwise) on us to provide support for any issues that may arise as a result of using this custom GDAL release. Use this custom build of GDAL at your own discretion.
/end lawyer-y talk. Enjoy!
Categories: OSGeo Planet

Micha Silver: Get Landsat 8 Reflectance with GRASS-GIS

OSGeo Planet - Fri, 2015-04-24 12:40

Landsat 8 tiles have been available for more than two years now. In addition to the obvious advantages of these new satellite images (higher, 16-bit radiometric resolution and extra bands), there are some subtle additions to the metadata file that make image processing easier.

Firstly, the new metadata files are formatted for easy parsing. But more importantly, we now have a pair of parameters titled REFLECTANCE_MULT_BAND_* and REFLECTANCE_ADD_BAND_* (one pair for each band). With these parameters we can calculate the Top Of Atmosphere (TOA) reflectance directly, without the need for the intermediary step of radiometric calibration. Read the Using USGS Landsat 8 website for full details. These two parameters are parallel to the well-known gain and bias parameters from earlier Landsat missions, except that they already take into account Esun and the earth-sun distance. Thus reflectance can be obtained straight away; we only need to divide by the sine of the sun elevation angle (the cosine of the solar zenith angle) to get corrected TOA reflectance.

We GRASS users can easily put the above numbers into an r.mapcalc expression to get reflectance for each band. But we also want to take advantage of the scripting capabilities of GRASS to batch process all bands for several tiles. Choosing Python as our scripting language, we have access to the OS libraries we need as well as the GRASS Python library. We first loop through all the Landsat 8 directories in the top-level folder where we downloaded the original tiles. We read through the metadata file for each tile, creating a Python dictionary of the entries we need. Now we run an inner loop to import each of the individual bands in the tile as a GRASS raster, then run the mapcalc module on each band, creating TOA reflectance for that band. When the inner loop finishes importing and processing all the bands, the outer loop moves to the next Landsat directory and cycles through the bands in that tile, and so on.
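As a rough illustration of that inner step, here is a minimal sketch of the per-band calculation, assuming the metadata file has already been parsed into a Python dictionary and the band already imported as a GRASS raster; the function and map names are illustrative, and the actual script linked below handles the directory and band loops.

import math
import grass.script as gscript

def toa_reflectance(band, raw_map, meta):
    """Compute sun-angle-corrected TOA reflectance for one Landsat 8 band.

    meta is a dict built from the MTL metadata file and raw_map is the
    imported DN raster for this band (names here are illustrative).
    """
    mult = float(meta['REFLECTANCE_MULT_BAND_%d' % band])
    add = float(meta['REFLECTANCE_ADD_BAND_%d' % band])
    sun_elev = math.radians(float(meta['SUN_ELEVATION']))
    out_map = '%s_toa' % raw_map
    # reflectance = (M * DN + A) / sin(sun elevation)
    expr = '%s = (%s * %s + %s) / %s' % (
        out_map, mult, raw_map, add, math.sin(sun_elev))
    gscript.run_command('r.mapcalc', expression=expr, overwrite=True)
    return out_map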

For those interested in the nitty-gritty, you’re welcome to clone a small python script I’ve put  on github that does the above.

Categories: OSGeo Planet

GeoSpatial Camptocamp: OpenLayers 3: Code Sprint in Austria

OSGeo Planet - Fri, 2015-04-24 11:44

At the beginning of April, three Camptocamp developers attended the OpenLayers 3 Code Sprint which took place in Schladming (Austria). With this blog post, we would like to provide details on some of the work the Camptocamp team did at this code sprint.

Rendering tests

Since our work on drawing points with WebGL, we wanted to add « rendering tests » to the library. Drawing features with WebGL is quite complex, so we’ve always felt that having a way to test the library’s rendering output was mandatory for the future.

So we worked on a rendering test framework and actual rendering tests during the Code Sprint. The rendering test framework is based on the Resemble.js library for image comparison, and on the Slimer JS scriptable browser for running rendering tests in a headless way on Travis CI.

Slimer JS is similar to PhantomJS, except that it is based on Gecko, the browser engine of Mozilla Firefox. Contrary to PhantomJS, Slimer JS supports Canvas as well as WebGL, which was one of the main requirements for us.

Compile your OpenLayers 3 apps with Closure Compiler

At Camptocamp, we compile our JavaScript applications with the Closure Compiler. The Closure Compiler allows for high compression rates, and, based on annotations in the code, « type-checks » the JavaScript code. The Closure Compiler is a very good tool for maintaining large JavaScript code bases with high constraints in terms of performance.

At the Code Sprint in Schladming, we improved closure-util, our node-based tool for Closure, to make using the Closure Compiler in OpenLayers 3 applications much easier. We also wrote a tutorial showing how to compile applications together with OpenLayers 3. The tutorial will become an official OpenLayers 3 tutorial when OpenLayers v3.5.0 is released (beginning of May 2015).

Drawing Lines and Polygons with WebGL

Some time ago, we added support for drawing points with WebGL to the library. This work was sponsored by Météorage, which uses OpenLayers 3 and WebGL to draw large numbers of lightning impacts on a map.

But this was just a first step towards WebGL vector support in OpenLayers 3. Obviously, we also wanted to support drawing lines, polygons and labels.

We took the opportunity of the Code Sprint to take a stab at it! We worked on a first implementation, to demonstrate the feasibility, and verify that the current rendering architecture will work for WebGL lines and polygons.

The results are so far encouraging, and we’re looking forward to continuing this work. Check out the dedicated blog post we wrote for more detail.

Vector extrusion with ol3-cesium

We added support to the KML parser for reading extrude and altitudeMode values that may be associated with geometries in KML documents. With some additions to ol3-cesium, the extrude and altitudeMode values may be used to, for example, display extruded buildings on the Cesium globe. See the ol3-cesium extrude example for a demo.

Vector Tiling

OpenLayers 3 already includes basic support for decoding and rendering Vector Tiles. See the tile-vector example for example.

But OpenLayers 3 doesn’t yet support the MapBox Vector Tile Spec, and rendering artefacts may be present at tile boundaries for polygon features with outlines/strokes. We think that full support for Vector Tiles is important for the library, so our goal is to fill the gaps.

At the Code Sprint, we started working on an OpenLayers 3 format for decoding MapBox Vector Tiles and creating vector objects that can be exploited by the library. We also discussed and designed rendering strategies that we could use for properly displaying Vector Tiles. We wanted to experiment with buffered Vector Tiles and clipping at rendering time to prevent rendering problems at tile boundaries.

We think that Vector Tiles present a number of advantages over standard/current vector strategies. To name a few: data caching, data simplification performed once on the server, natural index formed by the tiles, compact and efficient format.

We’re then looking forward to improving the support for Vector Tiles in OpenLayers 3 and making Vector Tiles as mainstream as possible for application developers.

Feel free to contact us if you want to discuss these topics with us!

The article OpenLayers 3: Code Sprint in Austria first appeared on Camptocamp.

Categories: OSGeo Planet

GeoSpatial Camptocamp: OpenLayers 3: towards drawing lines and polygons with WebGL

OSGeo Planet - Fri, 2015-04-24 09:50

Last year, we investigated and then implemented massive and very fast rendering of hundreds of thousands of points using WebGL. We took the opportunity of the recent OpenLayers 3 code sprint in Austria to implement a Proof of Concept (PoC) demonstrating line and polygon rendering using WebGL.

This first example shows a map with the base layer, points, lines and polygons, all drawn using WebGL.

WebGL lines and polygons

The second example shows countries (Polygon and MultiPolygon features), also drawn using WebGL. If the examples do not display correctly, please check here if your browser supports WebGL.

WebGL vector layer

Even though the developments are at an early stage, it is easy to notice that panning, rotating and zooming the map are already very smooth.

Rendering lines

WebGL supports rendering lines out of the box, and this is what we are using in this prototype. We are creating an array of pairs of vertices (a batch) and we are passing it to WebGL for rendering.

There are a few WebGL limitations though:

  • joins are not supported: there is some space between two consecutive lines.
  • thick lines are only supported on Mac and Linux: all lines have a width of 1 pixel on Microsoft Windows
  • lines are aliased: they appear rough on the screen.

WebGL lines

Despite these limitations, we intentionally implemented line rendering this way as it is the simplest technique and it works well for the scope of this PoC.

In order to overcome these limitations, we should triangulate the lines; basically, line ends should be duplicated on the CPU then efficiently moved in the vertex shader. A line segment would be represented by two triangles.

Rendering polygons

WebGL has no support for rendering polygons, so we implemented it using only two draw calls.

  • First, the polygon interiors are rendered using a batch of triangles. We triangulate the polygons and create a batch with an array of vertices and an array of triangle indices. The color is stored on each of the vertices, which allows all polygons to be drawn in a single draw call.
  • Then the polygon outlines are rendered using a batch of lines. The line renderer described above is reused.

A limitation of our technique is that we duplicate the color on each vertex since using a uniform would prevent batching. An idea to save resources while still allowing batching would be to use a color atlas and only store a texture coordinate on each of the vertices. Only one draw call would be required.

As a side note, we use the promising earcut library for triangulating the polygons in this PoC.
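To make the batching idea concrete, here is a small illustration written in Python rather than the library's JavaScript, and using a naive fan triangulation (valid only for convex rings) in place of earcut: the interiors of all polygons end up in one shared vertex array with the color duplicated per vertex, plus one array of triangle indices.

# Illustration only: build one interleaved vertex batch and one index batch
# for a list of convex polygons, duplicating the RGBA color on every vertex.
def build_polygon_batch(polygons):
    """polygons is a list of (ring, color) pairs, where ring is a list of
    (x, y) tuples and color is an (r, g, b, a) tuple."""
    vertices = []  # x, y, r, g, b, a for each vertex
    indices = []   # three vertex indices per triangle
    for ring, color in polygons:
        base = len(vertices) // 6          # index of this ring's first vertex
        for x, y in ring:
            vertices.extend([x, y] + list(color))
        for i in range(1, len(ring) - 1):  # naive fan triangulation
            indices.extend([base, base + i, base + i + 1])
    return vertices, indices

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
verts, idx = build_polygon_batch([(square, (1.0, 0.0, 0.0, 1.0))])
# verts and idx could now be uploaded once and drawn with a single call
print(len(verts) // 6, "vertices,", len(idx) // 3, "triangles")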

This prototype provides a concrete first step toward implementing fast and reliable rendering of lines and polygons using WebGL. We are pleased by the smoothness we already get without any optimization. At Camptocamp, we are very excited by the performance and the full control WebGL offers. If you are also looking to push the limits of current rendering, please get in touch with us!

The article OpenLayers 3: towards drawing lines and polygons with WebGL first appeared on Camptocamp.

Categories: OSGeo Planet

Nathan Woodrow: PSA: Please use new style Qt signals and slots not the old style

OSGeo Planet - Fri, 2015-04-24 05:20

Don’t do this:

self.connect(self.widget, SIGNAL("valueChanged(int)"), self.valuechanged)

It's the old way, the crappy way. It's prone to error and typing mistakes. And who really wants to be typing function signatures and argument names as strings? Gross.

Do this:

self.widget.valueChanged.connect(self.valuechanged)
self.widget.valueChanged[str].connect(self.valuechanged)

Much nicer. Cleaner. Looks and feels like Python, not some mash-up between C++ and Python. The int argument is the default, so the first line uses that; if you want to pick the signal type you can use [type], as in the second line.

Don’t do this:

self.emit(SIGNAL("changed()"), value1, value2)

Do this

class MyType(QObject):

    changed = pyqtSignal(str, int)

    def stuff(self):
        self.changed.emit(value1, value2)

pyqtSignal is a type you can use to define your signal. It comes with type checking; if you don't want type checking just use pyqtSignal(object).

Please think of the poor kittens before using the old style in your code.


Filed under: pyqt, python, qgis Tagged: pyqt, qgis, qt
Categories: OSGeo Planet

Slashgeo (FOSS Articles): Batch geonews: LiDAR Standards Woes, Maps on Apple Watch, Esri Maps for Office 3, and much more

OSGeo Planet - Thu, 2015-04-23 18:21

Here’s the recent geonews in batch mode.

On the open source / open data front:

On the Esri front:

On the Google front:

Discussed over Slashdot:

In the everything else category:

In the maps category:

The post Batch geonews: LiDAR Standards Woes, Maps on Apple Watch, Esri Maps for Office 3, and much more appeared first on Slashgeo.org.

Categories: OSGeo Planet

Bjorn Sandvik: Real time satellite tracking of your journeys - how does it work?

OSGeo Planet - Thu, 2015-04-23 16:17
I'm back in Oslo after my 25 days ski trip across Nordryggen in Norway. It was a great journey, and I would highly recommend doing all or parts of it if you enjoy cross-country skiing. Just be prepared for shifting weather conditions.
@thematicmapping @mapperz I thought “cross country” skiing meant ski across the countryside, but you have literally crossed a whole country!
— harry_wood (@harry_wood) April 20, 2015

The goal of the trip was also to test my solution for real-time satellite tracking, explained in several of my previous blog posts. It worked out really well, and people were able to follow along in the comfort of their sofa.


I fastened a SPOT Satellite Messenger to the top of my backpack and left the device in tracking mode while skiing. The device sent my current position every 5 minutes, allowing me to update the map without any mobile coverage. When we arrived at a mountain hut, I pressed the OK button to mark that we had got a bed for the night. I also programmed a button to show a snow cave, in case we wouldn't reach a hut. Luckily we didn't have to use it :-)

My map and elevation plot of the 25 days ski trip across Nordryggen. Most of the trip is above tree line, and there are only 5 road crossings in total. 
The SPOT messenger only sends my time and position, so I had to create a web service to retrieve extra information about each location. I'm using a service from the Norwegian Mapping Authority to retrieve the altitude, nearest place name and terrain type. Earlier this winter, I noticed that the service didn't return any altitude when I was skiing on lakes, so I'm using the Google Elevation API to avoid gaps in the elevation profile.
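As a hypothetical sketch of that fallback (the Google Elevation API endpoint and parameters are real, but the function and the surrounding service are simplified, and the API key is a placeholder):

import requests

def google_elevation(lat, lon, api_key):
    """Fallback elevation lookup when the primary service returns nothing."""
    resp = requests.get(
        'https://maps.googleapis.com/maps/api/elevation/json',
        params={'locations': '%f,%f' % (lat, lon), 'key': api_key})
    data = resp.json()
    if data.get('status') == 'OK' and data.get('results'):
        return data['results'][0]['elevation']
    return None  # leave the gap if this lookup fails too

# elevation = google_elevation(60.86, 8.55, 'YOUR_API_KEY')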

By knowing the time and location, I could create an automatic service to obtain more information to enrich the map. In addition to elevation and place name, I've added a weather report. The image shows Bjordalsbu, the highest-lying hut on the route (1586 m), which we visited in a strong breeze.
While skiing, I used Instagram to post photos that would instantly show on the map as well. This required mobile coverage, which is sparse in the mountains. After the trip, I synced my camera photos with my GPS track to be able to show them along the route.

Click "Bilder" in the top menu to see the photos along the route. 
A few of my photos:

Eidsbugarden in Jotunheimen. 
Iungsdalshytta in Skarveimen. 
Taumevatn in Ryfylkeheiane.
Gaukhei in Setesdalsheiane. 
End of trip - and the snow - in Ljosland. More photos in my Google+ album.
Categories: OSGeo Planet

Slashgeo (FOSS Articles): Evolve! New techs for developer GIS, meet the latest SuperGIS Engine 3.3

OSGeo Planet - Thu, 2015-04-23 12:13

Supergeo Technologies, the leading global provider of complete GIS software and solutions, officially released SuperGIS Engine 3.3 for global GIS developers to customize GIS applications, meeting diverse demands in various fields. Many of the components have been updated in the latest version, providing renewed and advanced data access and analysis functions such as mainstream database compatibility, table mashup, data processing, etc. Moreover, enhanced mapping elements allow developers to build up impressive data displays and detailed labeling designs for clients.

 

Developed by Supergeo by integrating mapping and GIS technologies, SuperGIS Engine 3.3, as a COM-structured development component, provides developers with complete GIS core components. The developed applications can be seamlessly embedded into programming languages in a Windows development environment, helping integration with other systems for strong system development.

 

SuperGIS Engine 3.3 offers complete development resources. Hence, GIS programmers and developers can efficiently develop applications with GIS functionalities such as Display Layer, Edit, Query, Access Spatial Database, etc. In addition, hundreds of GIS-related objects, diverse Controls, comprehensive development samples and object diagrams are provided for technical users to effectively build programs and deploy them to multiple end users. Also, SuperGIS Engine developers can access online content such as sample code and GIS application designs, enabling front-end developers to bring flexible and productive solutions to end users.

 

To know more information about SuperGIS Engine, please visit our product pages on: www.supergeotek.com/ProductPage_SE.aspx.

Feel free to download the trial from:

http://www.supergeotek.com/download_6_developer.aspx

 

# # #

 

About Supergeo

 

Supergeo Technologies Inc. is a leading global provider of GIS software and solutions. Since the establishment, Supergeo has been dedicated to providing state-of-the-art geospatial technologies and comprehensive services for customers around the world. It is our vision to help users utilize geospatial technologies to create a better world.

Supergeo software and applications have been spread over the world to be the backbone of the world’s mapping and spatial analysis. Supergeo is the professional GIS vendor, providing GIS-related users with complete GIS solutions for desktop, mobile, server, and Internet platforms.

 

Marketing Contact:

Patty Chen

Supergeo Technologies Inc.

5F, No. 71, Sec. 1, Zhouzi St., Taipei, 114, TAIWAN

TEL:+886-2-2659 1899

Website: http://www.supergeotek.com

Email: patty@supergeotek.com

The post Evolve! New techs for developer GIS, meet the latest SuperGIS Engine 3.3 appeared first on Slashgeo.org.

Categories: OSGeo Planet

Stefano Costa: Electoral typography: the regional elections in Liguria

OSGeo Planet - Wed, 2015-04-22 05:50

Regional elections are about to arrive in Liguria too. If typographers were the ones voting, Raffaella Paita would win. The real elections are another story.

The launch of Hillary Clinton's and Marco Rubio's campaigns for the US presidency has stirred some interest in typography even in the mainstream media. Let's take an overview of the main candidates for Liguria, in strict polling order.

Nexa

Raffaella Paita, the PD candidate, uses Nexa, designed by Font Fabric in 2012. It is a modern geometric font with a wide range of weights. Paita's campaign uses Nexa pervasively, adopting it both in bold uppercase for the slogan and the surname, and in lowercase for body text, including the text on the website, with the whole range of available weights. Even though it is a geometric font, Nexa is fairly readable even for medium-length texts, especially in the lighter weights, although it is not necessarily pleasant.

Block Condensed

Giovanni Toti, the Forza Italia candidate, uses Block Condensed, designed by Hermann Hoffmann in 1908 and distributed by many type foundries, including Linotype and Adobe. The poster is written entirely in capital letters, with the exception of the social media handles, and there appears to be no official campaign website.

Block is a sans-serif font with slightly ragged edges, which give it a vaguely rustic look, warmer than most sans-serif typefaces. Readability is guaranteed above all by the exclusive use of capital letters.

Kabel

Alice Salvatore is the Movimento 5 Stelle candidate and uses Kabel, designed by Rudolf Koch in 1927. Kabel is a geometric humanist font, today distributed by several type foundries. To an untrained eye, Kabel is not particularly different from Nexa, and this convergence is interesting (I leave judgements about the timing of the typographic choices to others), if we consider that this kind of font can convey feelings of modernity, efficiency and precision.

Salvatore's campaign uses Kabel in upper and lower case, in the standard weight. The lowercase letters are not particularly readable, especially with the reduced letter spacing: the result is not the best. The different weights available in many digital versions of the font are not used.

~

If I had to vote now, the typographic election campaign would be won by Raffaella Paita. Paita uses a modern font, created by a young type foundry that churns out highly appreciated typefaces, and it is no coincidence that Hillary Clinton's official font is very similar. Her campaign as a whole is the best designed from a typographic point of view, integrated across the different media without too many rough edges (even though both Paita and Salvatore use WordPress for their websites, the M5S campaign is more "minimal"). The flip side is that Paita has probably invested more resources than the other candidates in building a high-level, professional communication campaign and in spreading her slogan, whereas the other two candidates considered here put their slogan on their poster in the form of a #hashtag, a sort of call to action for their potential voters, which perhaps already feels a bit stale to anyone who actually uses hashtags.

In the coming weeks, time permitting, we will also look at the other candidates for the presidency, and at a few candidates for the regional council (even though what I have seen so far is very boring).

The real elections are a whole other story.

Categories: OSGeo Planet

gvSIG Team: Recommendations and tricks for developing with gvSIG 2.1 (1). Iterating over data

OSGeo Planet - Tue, 2015-04-21 13:57

Hello again everyone,

From time to time I take a look at the code of gvSIG projects written by other developers, and I see some fragments with small mistakes that are repeated throughout the code. Some of them cannot even be considered mistakes; it would simply be advisable to do things differently. I have collected a few of them here and will try to gather some more to discuss in future articles.

In this one we will look at a few tricks or good practices related to:

  • Releasing resources
  • Using FeatureReference
  • Iterating over a set of features
Releasing resources

Let's start with the part about releasing resources.

When writing code that accesses data sources through DAL, bear in mind that, depending on the data provider being used, it is important to release resources when we have finished using them. Some providers, such as the database ones, keep open connections to the database that should be released once we are done with them. In general, for any "artifact" that implements the "Disposable" interface we should make sure we release it when we have finished using it.

Let's look at a code fragment:

...
final List<FeatureReference> tmpFeatures = new ArrayList<FeatureReference>();
boolean showWarningDialog = false;
DisposableIterator it = null;
FeatureSet featureSet = null;
try {
    featureSet = featureStore.getFeatureSet();
    it = featureSet.fastIterator();
} catch (DataException ex) {
    String message = String.format(
        "Error getting feature set or fast iterator of %1", featureStore);
    LOG.info(message, ex);
    return;
}
while (it.hasNext()) {
    Feature feature = (Feature) it.next();
    if (hasMoreThanOneGeometry(feature)
            || feature.getDefaultGeometry() == null) {
        showWarningDialog = true;
    } else {
        tmpFeatures.add(feature.getCopy().getReference());
    }
}
it.dispose();
featureSet.dispose();
if (showWarningDialog) {
    showWarningDialog();
}
...

In this code fragment we can see that a "FeatureSet" is created, an iterator is requested from it to traverse the features of a "FeatureStore", and at the end of the process the "dispose" method of both is invoked to release the resources associated with them.

If the process goes well, that is, no errors occur, nothing bad happens and the resources are released correctly when the last two lines are executed… but what happens if an error occurs while executing the lines prior to invoking the dispose method?

Normally, if the FeatureStore we are working on is a database table, at least one connection to the database will be left hanging, and it will not be released until gvSIG is closed. If the user insists and repeats the process and it keeps failing, at some point we will leave the database server with no available connections, and they will not be released until the user closes gvSIG.

This scenario can end up blocking access to a database server, and not only for the gvSIG user.

The recommendation is that whenever you are working with "disposable" resources you always use a "try…finally" construct, as shown below:

...
Disposable recurso = null;
try {
    ...
    recurso = ...
    ...
} finally {
    DisposeUtils.disposeQuietly(recurso);
}
...

"DisposeUtils.disposeQuietly" is a utility method that checks whether the resource passed in is null and only tries to release it if it is not. It also catches any errors and ignores them; well, it just sends them to the error log if they occur while invoking the resource's "dispose" method.

If we do not want errors to be ignored, we will use "DisposeUtils.dispose". In the example code this would look like:

...
final List<FeatureReference> tmpFeatures = new ArrayList<FeatureReference>();
boolean showWarningDialog = false;
DisposableIterator it = null;
FeatureSet featureSet = null;
try {
    try {
        featureSet = featureStore.getFeatureSet();
        it = featureSet.fastIterator();
    } catch (DataException ex) {
        String message = String.format(
            "Error getting feature set or fast iterator of %1", featureStore);
        LOG.info(message, ex);
        return;
    }
    while (it.hasNext()) {
        Feature feature = (Feature) it.next();
        if (hasMoreThanOneGeometry(feature)
                || feature.getDefaultGeometry() == null) {
            showWarningDialog = true;
        } else {
            tmpFeatures.add(feature.getCopy().getReference());
        }
    }
    if (showWarningDialog) {
        showWarningDialog();
    }
} finally {
    DisposeUtils.disposeQuietly(it);
    DisposeUtils.disposeQuietly(featureSet);
}
...

Using FeatureReference

When we have a Feature and want to keep it around to access its data later, we would normally keep a copy of it using the "getCopy" method. However, sometimes we are not interested in keeping the whole feature, just a reference to it. To do that we use the feature's "getReference" method.

How does a "feature" differ from its "reference"?
What is a "reference" to a "feature"?

The "feature" is a data structure that contains all the values of its attributes, whereas the "reference" is a data structure that contains the minimum information needed to retrieve the complete feature from its data store. Sometimes it will be an OID, other times a primary key; the data provider is in charge of deciding which one to work with and what type that OID will be.

For example, a reference to a feature of a shapefile uses an OID to refer to the feature within the shapefile, normally the position of the feature inside the file, whereas a feature of a database table will have the values of the fields that make up the primary key of that feature.

We have to keep in mind that, depending on the underlying data source, retrieving the information behind a FeatureReference can be expensive. We should use references carefully and be aware of what may be going on under the hood.

For example, if we have a reference to a feature of a shapefile and we invoke its "getFeature" method, this will seek to the position associated with the feature in the shapefile and load it into memory. That cost is perfectly acceptable. However, if it is a reference to a feature of a database table, it will cause a query to be run against the database to obtain the record associated with the feature and load it into memory. We have to be careful with this, because if we have a List of references and dereference them, we will trigger one query against the database for every reference we have, which may not be acceptable.

When using references, we have to keep in mind that retrieving their features can be expensive, and we have to consider whether the approach we are taking is the right one.

With this in mind… let's take another look at the earlier code fragment:

...
while (it.hasNext()) {
    Feature feature = (Feature) it.next();
    if (hasMoreThanOneGeometry(feature)
            || feature.getDefaultGeometry() == null) {
        showWarningDialog = true;
    } else {
        tmpFeatures.add(feature.getCopy().getReference());
    }
}
...

We can see that it iterates over the features to store references to some of them in a List. However, to do so it calls "feature.getCopy().getReference()", which creates a copy of the original feature only to discard it and keep its reference, which is the same one we would get by asking the original feature. We could remove the copy and ask the original feature for its reference, obtaining the same result.

...
while (it.hasNext()) {
    Feature feature = (Feature) it.next();
    if (hasMoreThanOneGeometry(feature)
            || feature.getDefaultGeometry() == null) {
        showWarningDialog = true;
    } else {
        tmpFeatures.add(feature.getReference());
    }
}
...

Now let's look at another code fragment:

...
private List<FeatureReference> features;
...
int[] currIndexs = getSelectedIndexs();
// If selected is the first row, do nothing
if (currIndexs.length <= 0 || currIndexs[0] == 0) {
    return;
}
List<FeatureReference> selectedFeatures = new ArrayList<FeatureReference>();
for (int i = 0; i < currIndexs.length; i++) {
    FeatureReference selected = null;
    try {
        selected = features.get(currIndexs[i]).getFeature().getReference();
    } catch (DataException ex) {
        LOG.info("Error getting feature", ex);
        return;
    }
    selectedFeatures.add(selected);
}
if (!selectedFeatures.isEmpty()) {
...

The code fills a List of FeatureReference with some of the references obtained from another list. For now, let's focus on this line:

...
selected = features.get(currIndexs[i]).getFeature().getReference();
...

If we think about what this line is doing, we see that:

  • First we retrieve a FeatureReference from the list of "references to features".
  • Then we build the "feature" associated with that reference, which causes the underlying feature store to be accessed to retrieve it.
  • And finally we ask the new feature for its reference and discard the feature.

All of this, moreover, inside a loop.

If we are working with a database-backed source, a query will be run to retrieve each of the features, which in itself can take considerably more time than desirable. Still, up to this point we are lucky: we have a reference and we ask it for its feature just to obtain its reference. It is not necessary to retrieve the feature to obtain the reference; we already have it! It is enough to store the original reference in "selected":

...
selected = features.get(currIndexs[i]);
...

In the code we are looking at, the references are stored in a List that will later be used to display them in a JTable. Every time the JTable has to access the values of the features to display, it will have to dereference them to obtain the feature. If the table has many rows and the features live in a database, this makes intensive use of the database: one database access per row of the table. It may be that the tool will never work with a data source that is "heavy" when retrieving features from their references, but if that is not the case we will have to consider an alternative implementation for this kind of problem.

Finally, one more thing to keep in mind regarding FeatureReference. There is no guarantee that, after finishing an editing session on a FeatureStore, the FeatureReferences you have kept are still valid. There are data sources, such as shapefiles or DXFs, that use the position of the feature within the file as the OID, and after finishing the edit this order may change. However, in other data sources, such as databases, the reference may remain valid depending on its primary key.

Iterating over a set of features

When we want to traverse a set of features, we normally try to get an iterator and loop over it. However, the recommendation is that, whenever possible, we use a visitor instead of an iterator to traverse the data.

Let's see it with an example, going back to the code fragment we saw at the beginning:

...
final List<FeatureReference> tmpFeatures = new ArrayList<FeatureReference>();
boolean showWarningDialog = false;
DisposableIterator it = null;
FeatureSet featureSet = null;
try {
    try {
        featureSet = featureStore.getFeatureSet();
        it = featureSet.fastIterator();
    } catch (DataException ex) {
        String message = String.format(
            "Error getting feature set or fast iterator of %1", featureStore);
        LOG.info(message, ex);
        return;
    }
    while (it.hasNext()) {
        Feature feature = (Feature) it.next();
        if (hasMoreThanOneGeometry(feature)
                || feature.getDefaultGeometry() == null) {
            showWarningDialog = true;
        } else {
            tmpFeatures.add(feature.getCopy().getReference());
        }
    }
    if (showWarningDialog) {
        showWarningDialog();
    }
} finally {
    DisposeUtils.disposeQuietly(it);
    DisposeUtils.disposeQuietly(featureSet);
}
...

Let's see what it does:

  • We get a FeatureSet from the FeatureStore
  • We ask it for an iterator
  • We loop over all the features and keep a reference to each of them.
  • At the end we show a message depending on whether multi-geometries were found or not.

Using a "visitor", it could look something like this:

...
final MutableBoolean showWarningDialog = new MutableBoolean(false);
try {
    featureStore.accept(new Visitor() {
        public void visit(Object obj) throws VisitCanceledException, BaseException {
            Feature feature = (Feature) obj;
            if (hasMoreThanOneGeometry(feature)
                    || feature.getDefaultGeometry() == null) {
                showWarningDialog.setValue(true);
            } else {
                tmpFeatures.add(feature.getReference());
            }
        }
    });
} catch (BaseException ex) {
    ... exception handling ...
}
if (showWarningDialog.isTrue()) {
    showWarningDialog();
}
...

Since we are traversing all the features of the store, we can visit the store directly. We do not request a FeatureSet or an iterator, so we do not have to worry about releasing them. We do have to worry about catching the exceptions of the "visit" method, but we also had to do that when we created the FeatureSet.

In this code fragment I have used something that may look strange. There was a flag that was modified inside the loop, "showWarningDialog". Since we have replaced the body of the loop with an anonymous inner class, we cannot use a "boolean" variable, because it could not be final. So instead of a "boolean", I have used a "MutableBoolean" to store the flag. This class is part of the "apache commons lang" library that ships with gvSIG.

Summing up

General recommendations:

  • When working with "disposable" resources, always use a "try…finally" construct, using "DisposeUtils.disposeQuietly" to release the resources.
  • A FeatureReference already refers to the feature; you do not need to get a copy of the feature just to ask for its reference.
  • Dereferencing a FeatureReference by calling "getFeature" can be a "heavy" operation.
  • Do not assume that, after finishing an editing session on a FeatureStore, the FeatureReferences you have kept are still valid.
  • Whenever possible, use a visitor instead of an iterator to traverse the data.

Well, that's it for today. Another day I will write about a few other things…

Best regards, everyone!

 


Filed under: development, gvSIG Desktop, spanish Tagged: java
Categories: OSGeo Planet

GeoSolutions: Upcoming GeoServer training in Finland with our partner Gispo Ltd

OSGeo Planet - Tue, 2015-04-21 12:29

GeoServer

Dear All,

The GeoSolutions team is proud to announce that our GeoServer Lead Andrea Aime  will hold a three days training on GeoServer  in Espoo, Finland from 9/6/2015 to 11/06/2015.

The Training will be conducted in English following the material provided by GeoSolutions and available at this link. The course consists of lessons and exercises. The exercises materials can be used after the course as well.

The following topics, among others, will be covered:

  • Installing and running GeoServer
  • Advanced Raster Data Management
  • Web Processing Service (WPS) and Rendering Transformations
  • Advanced GeoServer Configuration
  • GeoServer Security
  • Styling with SLD
  • Styling with CSS
  • INSPIRE Support
Basic skills in Geoserver and geographic data management are required to proficiently follow the training. Additional information can be found at this link (in Finnish) as well as at this one (in English).

Happy GeoServing!

The GeoSolutions team,

http://www.geo-solutions.it
Categories: OSGeo Planet
Syndicate content