Yesterday during Kevin’s presentation on our mobile data capture workflow, he brought up an example of insufficient training and unfounded assumptions about user aptitude. The fieldworkers were capturing sign posts, and the schema was structured so that the device prompted for a photo of the sign before asking for a GPS point. The fieldworkers would cross the road to get a nice picture of the sign, which is great, but then they would capture the GPS point from where they were standing, without crossing back to the pole.
The end result contains points which are not true reflections of where the poles are. If the photos were stored as geotagged attachments, we could extract the correct location from the photo's EXIF data, but that’s an additional, unnecessary step. If the fieldworkers were on a road lined with buildings of more than two storeys, as in most cities, then GPS multipath error could add further complexity to the issue. And if the photos were stored as embedded rasters, there would be no secondary location information at all, since the photo would be tied to the incorrect GPS point with no coordinates of its own.
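To illustrate the "extract the correct location from the photo" step: EXIF stores GPS coordinates as (degrees, minutes, seconds) tuples plus a hemisphere reference, which need converting to signed decimal degrees before they can replace the bad point. This is a minimal sketch of that conversion; it assumes the GPSLatitude/GPSLongitude tags have already been read out of the image (e.g. with a library like Pillow's `Image.getexif()`), and the function name and example coordinates are my own, not from the workflow in question.

```python
# Sketch: recover a usable point location from a geotagged photo's EXIF
# GPS values instead of trusting the mis-captured GPS point.
# Assumes the EXIF tags have already been read as a (deg, min, sec) tuple
# plus an 'N'/'S'/'E'/'W' reference; the tag-reading step is omitted so
# this stays dependency-free.

def dms_to_decimal(dms, ref):
    """Convert an EXIF-style (degrees, minutes, seconds) tuple and a
    hemisphere reference letter to signed decimal degrees."""
    degrees, minutes, seconds = (float(v) for v in dms)
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal degrees
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical example: a sign photographed in Cape Town
lat = dms_to_decimal((33, 55, 31.2), "S")
lon = dms_to_decimal((18, 25, 26.4), "E")
print(lat, lon)
```

Of course, this still only gives the camera's position, not the pole's, which is exactly why fixing the capture workflow beats post-processing.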
My point is, when Kevin mentioned this example, everyone laughed when he spoke about the point being captured in the wrong place. It’s not that he made a joke; he was just describing what happened. Only a room full of spatial nerds would understand the full implications of what he was saying, and the amount of post-processing required for something which could have been easily avoided, and then still laugh about it. It’s a coping mechanism, I think.