Categorising spatially enabled dashboards

Lately I’ve been working with dashboards quite a bit. Clients who don’t really understand spatial data find it easier to digest the information on a map when it’s presented alongside the graphs and indicators they’re familiar with from Excel or Power BI.

Over the last few weeks, Julian has spent quite a bit of time setting up a number of dashboards using Operations Dashboard, each with a different purpose. On one project, we have a dashboard showing the client the real-time progress of fieldworkers on a map, along with graphs showing the breakdown of in-progress and completed assignments per district. It enables the client to answer questions such as which worker is causing a bottleneck. This dashboard consumes the workers and assignments layers from the Workforce for ArcGIS project, along with the various Survey123 feature services.

On another project, we have a dashboard showing the results of an asset life cycle cost analysis model. This dashboard includes graphs depicting when the client can expect to incur the greatest cost to replace key assets, as well as helping to answer questions such as: Is it cheaper to replace an asset in 5 years, or to spend an additional amount on maintenance in 3 years in order to extend the remaining useful life of the asset by 7 years?

We also have a number of ideas in development at the moment, including decisions about the actual software we use to display the dashboards (that will have to be a post by itself). I’ve been mulling over how to package these different dashboard types as solutions to offer to a client, and decided to adapt the traditional dashboard categories to our purpose.

  • Operational: This is the basic dashboard, as detailed in my first example. This type will normally display two maps – one showing the real-time progress of fieldworkers and their assignments, and another showing the surveys they submit along with the actual data. Graphs may include the number of assignments completed per worker, per area, or along whichever dimension is most logical (or whatever the client prefers). Filters are included to drill down through the live data.
  • Analytical: This dashboard shows the results of analysing the data displayed on an operational dashboard (my second example). A single map can be used to display the analysis results per survey or per area. Graphs will vary according to client needs, but will be based on the survey points in the map. The user can interact with the dashboard by pulling the various reports they need, creating pivot tables, filtering, etc.

My current dev efforts are focussed on a third type of dashboard. For a large project last year, I designed and implemented a mobile data capture solution which incorporated a QA process (to be carried out by professional engineers) as well as an invoicing process (to reduce turnaround time between carrying out the work and getting paid by the client). I’ll have to use another post to brainstorm that idea.

How to convince someone to move from paper-based forms to electronic surveys

I spent March of 2018 working closely with a bridge engineer. For years, they had been using four paper questionnaires to capture inventory and inspection data of structures such as bridges, culverts, gantries etc. They would arrive at a structure, select the appropriate form, measure and write everything down, take a load of photos, draw a sketch or two and move on to the next one.

When they got back to the office, they would get a few students to manually input the captured data into an electronic form, which would store it in a database on their local machine. The students would also need to manually link photos to the correct structure. As with most projects, time would always run out, so the engineers would also need to help out with this electronic transfer process.

Once the screaming in my head subsided, I asked him, “Why? Why is it being done this way? It’s 2018. The process you are describing should have been phased out at least five years ago.” He didn’t really have a straight answer for me, beyond “this is the way it’s always been done”.

Over the next few weeks, I showed him that there is a better way. I used the first week to convert the largest of the four forms using Survey123. The form was fairly complex – with no access to a table layout or even a grid theme, I had to make a number of design decisions which wouldn’t impact the user experience too much while still retaining (and even enhancing) the functionality available in the paper form.

After my initial stab at it, we spent 2 more weeks going back and forth, fine-tuning choice lists, removing unnecessary questions, changing section groups, enforcing relevant fields, choosing repeats and optimising calculations. We ended up with an xls of 250 rows of questions and 35 choice lists. I was fortunate to have access to one of the databases they’d used on a previous project, so I was able to extract the choices from the lookup tables using SQL and Python.
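The extraction itself was simple enough. Below is a stripped-down sketch of the idea, assuming Python 3, a SQL Server lookup database reachable via pyodbc, and hypothetical connection details, table names and column names – the output rows follow the layout of an XLSForm choices sheet:

import csv

import pyodbc  # assumes an ODBC driver for the legacy database is installed

# Hypothetical connection string and lookup tables - the real ones came from
# the client's previous project database.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost;DATABASE=BridgeInventory;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Map each XLSForm choice list to the lookup table it is sourced from.
lookups = [
    ("material_type", "LU_Material"),
    ("condition_rating", "LU_Condition"),
]

with open("choices.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["list_name", "name", "label"])  # columns of the choices sheet
    for list_name, table in lookups:
        for code, description in cursor.execute(
            "SELECT Code, Description FROM {0}".format(table)
        ):
            writer.writerow([list_name, code, description])

The resulting rows can then be pasted straight into the choices sheet of the XLSForm.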

I spent the last week of March replicating the other three forms. They were similar enough that I could copy and paste much of what I had implemented in the first form, but different enough that I couldn’t keep everything in one form. Once that was completed, I published all the surveys and handed them over for him to test thoroughly (I believe my exact words were “Try to break it”).

By the first week of April, I had fixed most of the bugs and we were ready to train the students on it in the field. I set up a web app allowing him to view the surveys as they were submitted. He could immediately send a message to the students in the WhatsApp group I set up on their tablets if they were measuring components incorrectly, or not describing items properly. He was fully converted.

In my next post, I’ll detail how I took everything down and rebuilt it from its ashes by adding Workforce for ArcGIS and Operations Dashboard to create a more efficient system.

ArcGIS Pro: A year later

I mentioned previously that I was finally considering making the jump from ArcMap to ArcGIS Pro. I made the switch in early 2018, and after an adjustment period of a few weeks, I have not looked back.

Seriously.

For those still waiting to make the switch, just do it. I highly recommend completing the official tutorials. It is a huge adjustment to go from dialogue boxes to the ribbon interface, akin to the shock users experienced when switching from Office 2003 to Office 2007.

You’ll also need to learn how to deal with multiple maps and layouts in one project (rather than data frames in a map document), the lack of a standalone ArcCatalog, and the ability to link 2D and 3D views in one place. There are also new terms and concepts to learn.

A few of my favourite things:

  • Seamless integration with AGOL/Portal
  • Labelling properties now accessible through groups on the ribbon – no more Inception levels of dialogue boxes
  • Data driven pages (now map series) are more robust
  • BIM integration
  • arcpy.mapping overhauled to arcpy.mp – a much more intuitive module (see the quick before/after sketch below this list)
  • One ArcGIS Pro project per client containing multiple maps for each task – no need for dozens of map documents
  • Python 3 support (and built-in conda!)
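On the arcpy.mp point, the before/after difference is easy to show. A rough sketch with hypothetical document paths – the ArcMap version first, then the Pro equivalent:

import arcpy

# ArcMap: arcpy.mapping works against an .mxd and its data frames
mxd = arcpy.mapping.MapDocument(r"C:\projects\client.mxd")
df = arcpy.mapping.ListDataFrames(mxd)[0]
for lyr in arcpy.mapping.ListLayers(mxd, "", df):
    print(lyr.name)

# ArcGIS Pro: arcpy.mp works against an .aprx and its maps
aprx = arcpy.mp.ArcGISProject(r"C:\projects\client.aprx")
m = aprx.listMaps()[0]
for lyr in m.listLayers():
    print(lyr.name)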

I could go on. Our local licence server for Desktop went down a few weeks ago, and I only found out a few days after it happened because I just don’t open ArcMap anymore. With the latest update to Pro, I can now quickly launch it without needing to create and save a project first, a feature that was lacking up until now. My last reason to use ArcMap is gone.

Changing the worker basemap in Workforce

I’ve recently implemented Workforce for ArcGIS on a big project. It’s great being able to automatically assign sites to fieldworkers, and update their to-do lists on the fly.

However, the lack of reference data for the fieldworkers was bothering me. In the Workforce mobile app, the map displays the default topographic basemap and the location and status of their assignments. We were sending fieldworkers into areas where they needed to be more vigilant of their surroundings, and we also had to ensure they did not drive through areas deemed “high risk” due to crime or environmental conditions.

I modified the worker basemap of the project to include a polygon layer containing these “High risk” areas, so workers could always be aware when they were near one of these areas. I also tried changing the basemap to OpenStreetMap, but the Workforce app did not like that at all. The app crashed multiple times before I figured out it only wanted to render the default map. The dispatcher map had no issue with changing the basemap.
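For anyone wanting to script that change rather than edit the worker web map in the map viewer, the ArcGIS API for Python can do the same thing. A rough sketch, with placeholder credentials and hypothetical item IDs:

from arcgis.gis import GIS
from arcgis.mapping import WebMap

gis = GIS("https://www.arcgis.com", "username", "password")  # placeholder credentials

# Hypothetical item IDs for the Workforce worker web map and the
# "High risk" polygon layer published to the organisation.
worker_map_item = gis.content.get("<worker_webmap_item_id>")
high_risk_item = gis.content.get("<high_risk_layer_item_id>")

wm = WebMap(worker_map_item)
wm.add_layer(high_risk_item, {"title": "High risk areas", "opacity": 0.5})
wm.update()  # save the modified web map back to the Workforce project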

I also managed to overlay our own reference road network on the basemap, so the fieldworkers knew which direction to face when capturing a survey. This was included so that the required photos were captured consistently.

A rant about attachments

I remember when attachments were first introduced in ArcGIS Desktop (10.2 I think? whoops it was ArcGIS 10). It was a very useful feature, and more functionality was added over the years.

It also made mobile data capture even easier. The fieldworkers would go out, do their assessments, and attach multiple photos to their points. However, attachments in Collector have caused me so much frustration – specifically, syncing with attachments.

The nature of the work we do (and the economic environment we are in) means that by default, I take the maps offline so that the fieldworkers can carry out their assessments, and then sync back to AGOL when they are on lunch break (or whenever they can pick up WiFi). I discovered a few years ago that once one hits a certain threshold (like 20 attachments in the map), there are going to be problems syncing.

It will either fail outright, or take a very long time and need to be attempted a number of times. Why is this? I don’t know. Over the years, I’ve encountered this issue on all types of devices – the latest iPhones, low-end Android tablets, high-end Android tablets, mid-range Android phones…

What it seems like to me is that Collector “expects” a certain connection speed, and when it doesn’t get it, it times out and rolls back the sync. Fair enough – I’ve found multiple delta tables on devices I’ve needed to recover the databases from due to failed sync attempts. On a current project, they are using rugged devices which have really awful network chips (as in, I need to stand about 1 or 2m away from the access point so that I can take the maps offline). Naturally, at the end of the first day, each device had dozens of features with multiple attachments each, which refused to sync.

They have been out in the field for 2 weeks. Every day, I have to manually retrieve the databases from the devices, recover them, and push them out into the appropriate geodatabases once I’ve determined what’s inside them.

So clear

I can deal with all of that, because Python is a tool that I maaay have mentioned here before. What I cannot deal with is the fact that attachments are still lost during geoprocessing. The fact that a Maintain Attachments environment setting was added in ArcGIS 10.5 and has been available in ArcGIS Pro for a while is of little comfort to me, as I currently have access to neither.

Fine. I store the GlobalIDs in another field, merge the features together into their correct feature classes, enable attachments and insert the records from the corresponding attachment tables. Of course, I forget that the relationship class is now messed up, as it’s linking through the (now incorrect) GlobalID fields instead of the fields I stored the original IDs in.

After staring at the screen cross-eyed, I then realise that I only need to provide the attachments as jpgs in a folder, which I can extract from the tables using the original IDs and write into subfolders based on the feature type. I don’t actually need to link them back together since the technician does not need to view the photos to complete the work in ArcMap. /endrant
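For my own future reference, pulling the photos out of a recovered attachment table takes only a few lines of arcpy. A rough sketch with hypothetical paths, relying on the standard __ATTACH fields:

import os

import arcpy

# Hypothetical locations for the recovered attachment table and output folder.
attach_table = r"C:\recovered\surveys.gdb\inspections__ATTACH"
out_folder = r"C:\recovered\photos"

# Each row in an __ATTACH table stores the photo blob, its file name and the
# GlobalID of the parent feature.
with arcpy.da.SearchCursor(attach_table, ["DATA", "ATT_NAME", "REL_GLOBALID"]) as cursor:
    for data, name, rel_guid in cursor:
        out_name = "{0}_{1}".format(rel_guid.strip("{}"), name)
        with open(os.path.join(out_folder, out_name), "wb") as f:
            f.write(data.tobytes())

Sorting the files into subfolders per feature type is then just a matter of looking up each REL_GLOBALID against the parent feature class.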

The end is nigh…

I read a blog post a few weeks ago about the inevitable demise of ArcMap. When ArcGIS Pro launched a couple of years ago, I immediately started preparing for the end of ArcMap. By that, I mean I played around with Pro for a few weeks then put it away until the corporate overlords decided it was time to switch.

I had to sign up for a free trial the other day for something, and found this:

What happened to ArcMap?

After logging in and heading to the downloads, I found this:

What happened to ArcMap?!?!?!!

The forums pointed it out as well. I’ve been keeping a side eye on ArcGIS Pro development over the last few years, so I’m starting the transition in October, with the aim of using it as my daily driver by December. I’ve just started training our junior consultant, who comes from a CAD background, in GIS as well, so I may just start him out on ArcGIS Pro from the jump.

Overcoming the Make Query Table bug in ArcGIS

According to my notes, I first used the Make Query Table tool in my first week at Aurecon, back in March 2012. It was the first of many, many uses, because when spatial data arrives in a non-spatial format from a non-GIS user, the first thing to get thrown out is usually any trace of the original spatial component.

At some point, I realised the tool’s expression parameter was a bit wonky. As I have come up against this problem every few months since (forgetting each time that it happens, because I only now thought to write a note about it), I have decided to immortalise it in the gist below.

http://desktop.arcgis.com/en/arcmap/10.3/tools/data-management-toolbox/make-query-table.htm

When inputting the optional SQL clause, ArcGIS automatically adds quotation marks (“”) around the field names in the dialog box. This will pass the tool’s error checking successfully but will cause the tool to fail with an error.

If you verify the SQL clause in the dialog box, it will give a SQL error with no specifics. When adding the clause, remember to remove the quotation marks.

e.g. If you want to join Layer1 to Layer2 on the common field ID, where Layer1.TOWN is 'Cape Town', ArcGIS will format your expression in the following way:

"Layer1.ID" = "Layer2.ID" AND "Layer1.TOWN" = 'Cape Town'

You need to change it to:

Layer1.ID = Layer2.ID AND Layer1.TOWN = 'Cape Town'
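As an aside, calling the tool from Python sidesteps the dialog’s automatic quoting entirely, since the clause is passed through exactly as written. A minimal sketch, assuming both inputs sit in the same (hypothetical) file geodatabase:

import arcpy

gdb = r"C:\data\example.gdb"  # hypothetical geodatabase containing both inputs

# The where clause goes through exactly as typed - no dialog box rewriting it.
arcpy.MakeQueryTable_management(
    [gdb + r"\Layer1", gdb + r"\Layer2"],
    "joined_view",
    "NO_KEY_FIELD",
    "",
    "",
    "Layer1.ID = Layer2.ID AND Layer1.TOWN = 'Cape Town'",
)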

Using a query table to represent a 1:M relationship spatially

I first discovered (and used) the Make Query Table tool during my second week at Aurecon (March 13 2012, according to my OneNote). This was about a month before I started using ModelBuilder (to combat the frustrations of ArcMap), and about 6 months before I started Python (to combat the frustrations of ModelBuilder).

I’m just giving a bit of context because this was before everything clicked into place for me. Before this point, I treated everything I learnt in my studies and during my internship as separate silos of information: GIS, Databases, Programming.

I did not even realise until after I was working at Aurecon that the query expressions used in ArcMap are SQL, despite having studied all of those things. It just shows how one’s mindset can block progress, and how I allowed the awful experience of learning Computer Science in Afrikaans to stop me from letting things “click” for so long.

Back to the tool. Joins in ArcGIS are notoriously slow, and 1:M joins are not allowed (technically they are, but only one of the matching records ends up joined). Naturally, the relationship between the GIS and the asset register is 1:M.

For example, a single point is used to represent the physical, spatial location of an asset, reservoir WRV-00001. In the asset register, this reservoir is unbundled into various components – storage tank, building, fence etc. Each of these assets has its own unique asset ID, but they all share the same GIS ID.

I now need to represent all these assets spatially. The points/lines will all be on top of each other, but that’s fine. The Make Query Table tool does exactly this, but it is…quirky. I’ve compiled a list of things to remember when using this tool (supplemented by this question on GIS.SE), with a short arcpy sketch after the list:

  1. Tables/feature classes in the relationship should be stored within the same database: I tend to remember this step only after I add the inputs and the tool shouts at me.
  2. Add the feature class first in the multivalue parameter: The format of the output depends on the first entry in the multivalue parameter control. The feature class should be added first to ensure that the output is a layer, otherwise it will be a table view.
  3. Enclose table names and field names in quotes: For example, I wish to join the asset register table ar to the point layer points using their common field GIS_ID. By default, the tool encloses each full field reference in quotes: "points.GIS_ID" = "ar.GIS_ID". This will cause the tool to fail. Quote the table and field names separately instead: "points"."GIS_ID" = "ar"."GIS_ID".
  4. Choose the NO_KEY_FIELD option: Trying to add key fields causes some erratic behaviour (???). Just don’t do it. By selecting this option, the existing ObjectID field from the input will be used.
  5. The output layer will appear to have no symbology: Go into the Layer properties, click the Symbology tab, click the existing symbol, then OK, OK. It’s a bug.
  6. Persist to disk!: Remember to export the layer to a feature class, otherwise the layer will only exist in the map document.
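As promised above, here is the checklist condensed into a few lines of arcpy – a rough sketch using the points/ar example, with hypothetical paths:

import arcpy

gdb = r"C:\data\assets.gdb"  # hypothetical geodatabase holding both inputs

# Feature class first (item 2) so the output is a layer rather than a table view.
arcpy.MakeQueryTable_management(
    [gdb + r"\points", gdb + r"\ar"],
    "asset_query_layer",
    "NO_KEY_FIELD",                       # item 4: skip the erratic key field behaviour
    "",
    "",
    '"points"."GIS_ID" = "ar"."GIS_ID"',  # item 3: table and field names quoted separately
)

# Item 6: persist to disk, otherwise the layer only lives in the session.
arcpy.CopyFeatures_management("asset_query_layer", gdb + r"\asset_points")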

Accessing metadata using ArcPy, Python, and now hermes!

Two weeks ago, a colleague asked me to write a script to extract some metadata values from dozens of feature classes in a gdb, and write them out to a spreadsheet along with some other descriptive properties. I fiddled around with the metadata using Python’s xml package, and managed to come up with a script for her.
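The gist of that script, heavily simplified: export each feature class’s metadata to XML, pull out the elements of interest, and dump the lot to a CSV alongside a few Describe properties. A rough sketch only – the translator path, element paths and output locations below are assumptions, and it targets ArcMap’s Python 2:

import csv
import os
import xml.etree.ElementTree as ET

import arcpy

gdb = r"C:\data\assets.gdb"  # hypothetical inputs and outputs throughout
translator = r"C:\Program Files (x86)\ArcGIS\Desktop10.5\Metadata\Translator\ARCGIS2FGDC.xml"
out_csv = r"C:\data\metadata_summary.csv"
scratch = arcpy.env.scratchFolder

arcpy.env.workspace = gdb
rows = [["feature_class", "shape_type", "spatial_reference", "abstract"]]

for fc in arcpy.ListFeatureClasses():
    desc = arcpy.Describe(fc)
    xml_path = os.path.join(scratch, fc + ".xml")
    arcpy.ExportMetadata_conversion(fc, translator, xml_path)
    abstract = ET.parse(xml_path).getroot().findtext("idinfo/descript/abstract", "")
    rows.append([fc, desc.shapeType, desc.spatialReference.name, abstract])

with open(out_csv, "wb") as f:  # "wb" for Python 2's csv module on Windows
    csv.writer(f).writerows(rows)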

So it was to my immense delight that this post popped up in my feedly on Friday. I immediately starred hermes on GitHub, and maybe this will be the first project I can actually contribute to, and not only because of the Futurama reference.

Developing an asset management GIS data maintenance methodology: Part 5 – My preparation for the way forward

(This is Part 5 of a week-long series of posts on the current project I am working on: developing a reasonable GIS data maintenance strategy for Asset Management data. Read Part 1, 2, 3, 4.)

The work I’ve done over the last few weeks (and years) is all leading to one point: a single system, with all the data topologically correct, standardised and easily accessible.

An enterprise geodatabase, with versioning enabled so the Desktop team can maintain the data without fear of conflicts. Using SDE in this manner automatically puts checks in place – for example, I could review the reconciled versions before posting the edits to default.
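That reconcile-then-review step is easy to script. A rough sketch with a hypothetical connection file, reconciling every child of DEFAULT but deliberately not posting, so that the reconciled versions can be checked first (the owner-qualified name of DEFAULT varies by DBMS):

import arcpy

sde = r"C:\connections\asset_mgmt.sde"  # hypothetical connection file

# Reconcile every version that is a direct child of DEFAULT.
edit_versions = [
    v.name for v in arcpy.da.ListVersions(sde)
    if v.parentVersionName == "sde.DEFAULT"
]

arcpy.ReconcileVersions_management(
    sde,
    "ALL_VERSIONS",
    "sde.DEFAULT",
    edit_versions,
    "LOCK_ACQUIRED",
    "ABORT_CONFLICTS",        # stop on conflicts so nothing dodgy reaches DEFAULT
    "BY_OBJECT",
    "FAVOR_TARGET_VERSION",
    "NO_POST",                # review the reconciled versions first, post separately
    "KEEP_VERSION",
    r"C:\logs\reconcile_log.txt",
)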

Archiving would be set up, and the database would be regularly backed up. Map services would be published and made available to non-GIS users such as the Asset team. I’d prepare the services for consumption in their app of choice: ESRI Maps for SharePoint, ArcGIS for AutoCAD, a JavaScript viewer, an Excel document with the attribute tables embedded…

The services would have the sync capability enabled, so that when they go out in the field, a Collector map could be easily configured for data capture in an offline environment. Since they visit areas which are routinely out of cell coverage, this would be ideal (and better than carrying around a printed mapbook).

While I am busy with this year’s updates, I am keeping all of this in mind. Every little bit I can do now is a little bit less that I have to do later. Once this foundation is in place, we can start looking at more advanced aspects, such as turning all this data into geometric networks, and creating custom tools that automatically calculate remaining useful life and asset depreciation, and answer whatever questions could possibly be asked.