My new work laptop finally arrived yesterday, so I can give the temporary one I was using the boot. Now, I was a bit apprehensive as I was not given any other information about the specs besides “it’s super fast”. The specs seemed fine to me, but I needed some proof.
By that, I of course mean a completely anecdotal check of how fast things appear to be working. First up: the ArcGIS desktop install. It took approximately 30 minutes to install on the temporary laptop (i5, 8GB RAM). It took less than 5 minutes on this beast.
Another completely unscientific test: I looped over the feature classes in a geodatabase and used the Add Field tool to add a field to each of them. Yes, without using the NumPy ExtendTable approach, and I ran it from the Python window in ArcMap, so it was running 32-bit. It took about 0.2 seconds to add the field to each feature class.
Those are all the tests I need. I’m happy!
That’s a lie. The real tests will happen later once I have my VMs going. I do like what I see so far though.
I wrote my first exam today for my 2016 modules: Ontology Engineering. To say that I am over writing exams is putting it mildly. I just want to get to the research project so I can finish up this degree.
Sadly, there are still 2 more exams this week (Software Project Management tomorrow and Software Engineering on Friday), and then three more “study” modules this year, in addition to the research project.
The face I’ll make to the SO next week
I’m mainly struggling with the fact that a lot of what we are learning is not applied in the workplace (imagine, an IT project being managed from start to finish hahahafsnodlfkl oops fell out of my chair from laughter). It is very hard to find the motivation to study when you know it won’t help you. It’s easier during undergrad when you don’t know anything about industry.
According to my notes, I first used the Make Query Table tool in my first week at Aurecon, back in March 2012. It was the first of many, many times, because often when receiving spatial data in a non-spatial format from a non-GIS user, the first thing that gets thrown out is any trace of the original spatial component.
At some point, I realised the tool’s expression parameter was a bit wonky. As I have run into this problem every few months since (and forget about it each time, because I only now thought to write down a note about it), I have decided to immortalise it in the gist below.
I know it’s already a month into the new year, and I haven’t been active on the blog for a while, but that’s mostly because I had nothing to post. However, that will all be changing because in January I returned to Aurecon. This time around, I’m fulfilling a technology expert role within the Asset Management team, so I’ll be designing and driving our technology strategy going forward. I’ve got a few interesting things lined up, particularly around spatial data warehouses and ontologies, so I’ll be posting about it a lot here while I work through my ideas.
I also got my SAGC exam results today – 100% for Paper A and 96% for Paper F, so once the fees are paid, I will be registered as a Geomatics Technologist. When I’m finished with my BSc Hons next year, I’ll be able to upgrade to Practitioner, as I’m currently just short of the academic requirement.
Speaking of Hons, I just registered for the 2nd (final) year, and am currently studying for 3 exams next week. To be honest, the studying has been painful, simply because many of the concepts we discuss are outdated, and having been in industry for almost 6 years now, it is irritating to have to study these things. I’m just pushing through to get that piece of paper. Hopefully I get a good research topic for the project (or can convince one of the supervisors to take me on with my own topic). I’ll worry about that after the exams.
I’ll probably be able to return to my normal posting schedule in March. I just thought it was about time I post something.
Aaaaand now I’m thinking about how much I miss Community. #SixSeasonsAndAMovie
I’ve been thinking quite a bit lately about how to store spatial data. It’s something I’ve covered here before, and my attitude towards the topic has evolved over the years.
The organisations I’ve worked in have mountains of spatial data accumulated over the years. The data is stored in shapefiles, geodatabases, normal databases, spreadsheets, documents, reports, photos… Why is it this way? It doesn’t have to be this way. It shouldn’t be this way!
In the course of researching a topic for next year’s project, I’ve homed in on methods for implementing an enterprise geoportal within an existing spatial data infrastructure. However, I feel like my focus is shifting to the data that the geoportal is trying to expose to a larger audience.
The concepts of a spatial data warehouse (SDW) and a spatially enabled operational data store have been intriguing me. A regular GIS task involves comparing spatial data across a time period, analysing trends and presenting the results in a map or report. Why aren’t we storing this historical data in an SDW that’s optimised for reporting?
Non-spatial data can come from a variety of sources as well – spreadsheets, other databases, etc. Another common GIS task is to spatially enable these datasets. Why are we not storing the outputs in a spatially enabled operational data store (S-ODS) in an open format like GML?
I think it’s because planning and implementing an SDW/S-ODS takes time (and money). With a normal enterprise data warehouse (EDW), the organisation will not need much convincing to see the benefit of implementing one. “Spatial” is still seen as an “add-on”, or a “nice-to-have”.
I recently modified a script I wrote to extract data from a Word document to a csv file. The modified script had to iterate over multiple docs and extract data from certain tables based on certain keywords and fields.
I used the python-docx module to do this, but hit an obstacle when I realised that it could not (as yet) parse Word’s content controls. Since I only had 9 documents, I opened each one and ran some VBA code pilfered off Stack Overflow to remove all the content controls from the document.
While that worked temporarily, my next step is of course to schedule the script to automatically pull the data out once the folder is updated with the new batch of docs for the month. One suggested solution entails saving the code inside the doc so it can be called via COM.
I’m not happy with that solution because I would still need to open each document and insert the code. What I need to do now is fiddle around some more so that the code can be saved inside the script and then run on each document as needed.
Recently I had to wrangle some csv files, including some data calculations and outputting a semi-colon delimited file instead of a comma-delimited file.
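The gist of that wrangling can be sketched with Python’s standard csv module, which handles the delimiter switch directly. This is a minimal illustration, not my actual script: the column names (qty, price) and the derived total column are hypothetical stand-ins for the real calculations.

```python
import csv
import io

def convert_csv(infile, outfile):
    """Read comma-delimited rows, add a derived column,
    and write the result out semi-colon delimited."""
    reader = csv.DictReader(infile)  # default delimiter is ","
    fieldnames = reader.fieldnames + ["total"]
    writer = csv.DictWriter(outfile, fieldnames=fieldnames, delimiter=";")
    writer.writeheader()
    for row in reader:
        # Hypothetical calculation: derive a "total" column from two numeric columns
        row["total"] = float(row["qty"]) * float(row["price"])
        writer.writerow(row)

# Demonstrate with in-memory data instead of real files
src = io.StringIO("qty,price\n2,3.5\n4,1.25\n")
dst = io.StringIO()
convert_csv(src, dst)
print(dst.getvalue())
```

In a real script the StringIO objects would be replaced with `open(path, newline="")` file handles; passing `newline=""` is what the csv docs recommend to avoid blank-line issues on Windows.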