The Web Duel: part 3, Open Source
The Open Source Implementation
This is a continuation of Mark Balwanz’s blog posts on his creation of web mapping sites using both ESRI and Geoserver. Today he will talk about his experience creating the site using open source technology.
After completing the ESRI version of my project (Web GIS Duel: Part 2), I turned my attention to the open source version. I have some experience with open source GIS from previous graduate courses and have spent some time working with Leaflet in the past, but this project was definitely going to be a learning experience for me. Below I will walk through all of the technology I used to create this web mapping application.
Unlike the ESRI implementation, choosing the stack for this open source version was a lot more involved. I knew I would use QGIS for styling and publishing the data, and had previously decided on GeoServer for serving the data to the web, but should I use OpenLayers or Leaflet? To be honest, I planned on using Leaflet, as I had heard many good things about it and it seemed to be the up-and-coming library. However, after starting to build out my application with Leaflet, I decided to switch to OpenLayers 3 because I was able to find more information about it online. Moving to OpenLayers also brought an unforeseen issue: much of the “useful” information online was written for OpenLayers 2 and, most of the time, did not translate to OpenLayers 3. But that is a conversation for another day.
After downloading all the open source software I needed for this project, I started by loading the three shapefiles into QGIS. I had some brief experience with QGIS in the past and was once again impressed by how easy it was to work with. I styled the layers to match the styling I used with my ESRI map service, and then just had to publish the data to GeoServer. First, however, I had to find a way to make GeoServer available to anyone on the internet, and that is where the Amazon EC2 service came into play. Using Amazon's AWS free tier, I was able to spin up a Windows Server 2012 R2 instance and install GeoServer on it. After updating the security settings for my Amazon instance and opening port 8080 to inbound traffic, I had a GeoServer that I could publish my data to. Back in QGIS, I installed the GeoServer Manager plugin and connected to my new GeoServer running on Amazon EC2. After that, it was very easy to publish my data as a WMS and retain the styling I had set in QGIS.
The information I will share throughout the rest of this blog will sound very familiar to those of you who read Part 2 of this series, as I followed the same general workflow as I did with the ESRI version.
The first step in building my application with OpenLayers was to add a script tag for the hosted version of the library and then instantiate the map. Since I wanted the two applications to look identical, I chose to use the light gray basemap provided by Mapbox within my OpenLayers map. Once I had the map working, I had to add my three WMS layers (US Bank, Competitors, and Census Tracts). Thankfully, the OpenLayers website contains many sample pages, and I was able to find one for adding WMS layers to a map.
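As a rough sketch of what that setup looks like (the GeoServer host and the workspace/layer names below are hypothetical, and the Mapbox basemap URL and access token are omitted; the `ol.*` calls assume the hosted OpenLayers 3 build is loaded via a script tag):

```javascript
// Hypothetical GeoServer endpoint (port 8080 was opened on the EC2 instance).
var GEOSERVER_WMS = 'http://my-ec2-host:8080/geoserver/wms';

// One entry per WMS layer published from QGIS to GeoServer
// (workspace and layer names are made up for illustration).
var wmsLayerDefs = [
  { title: 'US Bank',       params: { LAYERS: 'duel:usbank',        TILED: true } },
  { title: 'Competitors',   params: { LAYERS: 'duel:competitors',   TILED: true } },
  { title: 'Census Tracts', params: { LAYERS: 'duel:census_tracts', TILED: true } }
];

// In the browser, each definition becomes a tiled WMS layer on the map.
// The basemap would be an ol.layer.Tile over an ol.source.XYZ pointing at
// the Mapbox light gray tiles (URL and token omitted here):
//
//   var map = new ol.Map({
//     target: 'map',
//     view: new ol.View({
//       center: ol.proj.transform([-93.27, 44.98], 'EPSG:4326', 'EPSG:3857'),
//       zoom: 10
//     })
//   });
//   wmsLayerDefs.forEach(function (def) {
//     map.addLayer(new ol.layer.Tile({
//       source: new ol.source.TileWMS({ url: GEOSERVER_WMS, params: def.params })
//     }));
//   });
```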
At this point I had an interactive map depicting all of the data I had published to my GeoServer. Next, I had to build the code that would let the user click on a US Bank location and have the details window update. The first thing I needed was the initial call that would return information about the clicked bank location. Just like with the ESRI version, I was able to listen for “clicks” on the map and fire some code when one happened. I used the API's getGetFeatureInfoUrl function to create a URL to my WMS and then sent the request via a jQuery AJAX call. Since I wanted to receive JSONP data back, I did have to tweak GeoServer's web.xml file to allow it to serve JSONP (I just had to uncomment the lines shown in the image). Once this call was working, I was able to parse the response and update the details window with the name and address of the selected location. There is probably a better way to accomplish this task, but since I was familiar with AJAX calls and was able to get this working, that was good enough for me.
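In the real app, getGetFeatureInfoUrl builds this URL from the clicked coordinate; the sketch below assembles the same kind of request by hand just so its shape is visible. The host, workspace, and layer names are made up, and the jQuery call is shown as a comment:

```javascript
// Build a WMS 1.1.1 GetFeatureInfo URL for a map click at pixel (x, y).
// INFO_FORMAT 'text/javascript' is GeoServer's JSONP output, which is why
// the web.xml change was needed.
function getFeatureInfoUrl(base, bbox, width, height, x, y) {
  var params = {
    SERVICE: 'WMS',
    VERSION: '1.1.1',
    REQUEST: 'GetFeatureInfo',
    LAYERS: 'duel:usbank',          // hypothetical workspace:layer
    QUERY_LAYERS: 'duel:usbank',
    SRS: 'EPSG:3857',
    BBOX: bbox.join(','),           // current map extent
    WIDTH: width,
    HEIGHT: height,
    X: x,                           // pixel location of the click
    Y: y,
    INFO_FORMAT: 'text/javascript'  // JSONP from GeoServer
  };
  var query = Object.keys(params).map(function (k) {
    return k + '=' + encodeURIComponent(params[k]);
  }).join('&');
  return base + '?' + query;
}

var infoUrl = getFeatureInfoUrl(
  'http://my-ec2-host:8080/geoserver/wms',
  [-10420000, 5610000, -10380000, 5640000], 256, 256, 128, 64);

// jQuery would then send it as JSONP, roughly:
//   $.ajax({ url: infoUrl, dataType: 'jsonp' }).done(function (data) {
//     // parse the returned feature and update the details window
//   });
```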
I now had the details window partially updated and a “selected location” to use for the rest of my analysis. I was able to use an OpenLayers function to create the five-mile buffer from the latitude and longitude of the selected location (returned by the previous AJAX call) along with the radius (in meters). Once I added the resulting polygon to the map, the buffer was being visualized and I could move on to the task of finding every feature within it.
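OpenLayers 3 offers ol.geom.Polygon.circular(sphere, center, radius) for exactly this kind of circle; the sketch below reproduces the underlying spherical math so it stands alone (the bank coordinates are hypothetical):

```javascript
var EARTH_RADIUS = 6371008.8; // mean earth radius in meters

// Offset a [lon, lat] point (degrees) by `distance` meters along `bearing`
// radians, on a sphere.
function offset(center, distance, bearing) {
  var lat1 = center[1] * Math.PI / 180;
  var lon1 = center[0] * Math.PI / 180;
  var d = distance / EARTH_RADIUS;
  var lat2 = Math.asin(Math.sin(lat1) * Math.cos(d) +
                       Math.cos(lat1) * Math.sin(d) * Math.cos(bearing));
  var lon2 = lon1 + Math.atan2(Math.sin(bearing) * Math.sin(d) * Math.cos(lat1),
                               Math.cos(d) - Math.sin(lat1) * Math.sin(lat2));
  return [lon2 * 180 / Math.PI, lat2 * 180 / Math.PI];
}

// Build a closed ring of `n` vertices around the selected location.
function circularRing(center, radius, n) {
  var ring = [];
  for (var i = 0; i < n; i++) {
    ring.push(offset(center, radius, 2 * Math.PI * i / n));
  }
  ring.push(ring[0]); // close the ring
  return ring;
}

var FIVE_MILES = 8046.72; // meters
var ring = circularRing([-93.265, 44.978], FIVE_MILES, 64); // hypothetical bank

// In the app, the ring becomes an ol.geom.Polygon on a vector layer,
// e.g. new ol.geom.Polygon([ring]) transformed into the map projection.
```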
I will say that, other than the GeoServer issue I ran into with DWITHIN, the OpenLayers library was pretty easy to work with. In addition, since I already knew the general workflow of the application (having already built it with ESRI technology), I was able to jump right in without spending too much time planning it out.
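For readers who have not seen DWITHIN before: it is a CQL spatial filter GeoServer accepts on WFS GetFeature requests, and a query like the sketch below is one common way to ask for "features within a distance of a point." This is an illustration only, not the exact request from my app; the type name, geometry attribute, and host are hypothetical:

```javascript
// Build a WFS GetFeature URL with a DWITHIN CQL filter
// (all names here are made up for illustration).
function dwithinUrl(base, typeName, geomField, lon, lat, meters) {
  var cql = 'DWITHIN(' + geomField + ', POINT(' + lon + ' ' + lat + '), ' +
            meters + ', meters)';
  var params = {
    service: 'WFS',
    version: '1.1.0',
    request: 'GetFeature',
    typeName: typeName,
    outputFormat: 'application/json',
    CQL_FILTER: cql
  };
  return base + '?' + Object.keys(params).map(function (k) {
    return k + '=' + encodeURIComponent(params[k]);
  }).join('&');
}

var competitorsUrl = dwithinUrl('http://my-ec2-host:8080/geoserver/wfs',
                                'duel:competitors', 'the_geom',
                                -93.265, 44.978, 8046.72);
```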
Amazon Cloud Web Hosting
Just as I did for the ESRI version, I turned to Amazon S3 to host this application. I uploaded my HTML file along with the images I used, and updated the settings of my S3 bucket to match what I did for the ESRI version. Now, as long as the Amazon EC2 instance running GeoServer is turned on, the application will work for anyone who navigates to it. If you have any questions about Amazon S3 web hosting, please feel free to post comments or refer back to Part 2 of this blog series.
Thanks for reading another post and please stop back for the fourth and final blog, where I will give you my final thoughts about everything I experienced throughout this project.