The issue with names

I recently underwent a name change, and though I have yet to make it official (who wants to waste a Saturday at Home Affairs?), I have been thinking about the implications of my name change.

Now that my surname contains a hyphen and is 18 characters long (with my full name now 26 characters), I’ve been wondering how I should abbreviate it. Some hasty Googling shows that there is no standard for this. My entire life I just assumed that the first part of the surname takes precedence so the initials remain the same. In other words, Cindy Lee Williams (CLW) becomes Cindy Lee Williams-Jayakumar (CLW).

I toyed around with the idea of dropping my middle name (itself having been an issue with people assuming I’m Cindy-Lee and not Cindy Lee) to become Cindy Williams-Jayakumar (CW), but the thought of having only two initials terrified me.

I came across this blog post which calls out the assumptions programmers make when building systems which need to accept names (I’m guessing that’s about 95% of all systems). Now that my name has become slightly more complicated, I’m going to be more aware of my own assumptions when writing code, and not just when it comes to validating names.

I’ve also decided to be a bit more difficult and use CWJ as my initials. I had CLW for 27 years, it was time for a change.

Convert a list of field names and aliases from Excel to table using ArcPy

I went digging through my old workspace and started looking at some of my old scripts. My style of coding back then is almost embarrassing now 🙂 but that’s just the process of learning. I decided to post this script I wrote just before ArcGIS released their Excel toolset in 10.2.

From what I can recall, I needed to create a file geodatabase table to store records of microbial sample data. Many of the field names were the chemical compounds themselves, such as phosphorus or nitrogen, or bacterial names. For brevity’s sake, I had to use the shortest field names possible while still retaining the full meaning.

I set up a spreadsheet containing the full list of field names in column FIELD_NAMES and their aliases in ALIAS. I created an empty table in a file gdb, and used a SearchCursor on the spreadsheet to create the fields and fill in their aliases.
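That workflow can be sketched roughly as follows. The spreadsheet path, sheet name, geodatabase table path, and the "DOUBLE" field type are all hypothetical, and the snippet assumes an ArcGIS install that provides arcpy:

```python
import arcpy

# Hypothetical paths: the source spreadsheet (note the sheet reference)
# and the empty file geodatabase table created beforehand
xls_sheet = r"C:\data\field_list.xls\Sheet1$"
gdb_table = r"C:\data\samples.gdb\MicrobialSamples"

# Read each FIELD_NAMES/ALIAS pair from the spreadsheet and add the
# field to the table with its alias; "DOUBLE" is an assumed field type
with arcpy.da.SearchCursor(xls_sheet, ["FIELD_NAMES", "ALIAS"]) as cursor:
    for field_name, alias in cursor:
        arcpy.AddField_management(gdb_table, field_name, "DOUBLE",
                                  field_alias=alias)
```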

This solution worked for me at the time, but of course there are now better ways to do this.

Reverse geocode spreadsheet coordinates using geocoder and pandas

I had a spreadsheet of coordinates, along with their addresses. The addresses were either inaccurate or missing. Without access to an ArcGIS licence, and knowing the addresses were not available on our enterprise geocoding service, I sought to find a quicker (and open-source) way.

I used the geocoder library to do this. I used it previously when I still had an ArcGIS Online account and a Bing key to check geocoding accuracy amongst the three providers.

Since I don’t have those luxuries anymore, I used pandas to read in the spreadsheet and reverse geocode the coordinates found in the third and fourth columns. I then added a new column to the data frame to contain the returned address, and copied the data frame to a new spreadsheet.
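A minimal sketch of that process, assuming the geocoder and pandas libraries are installed and using geocoder’s OSM provider for the reverse lookup; the file names and the positions of the latitude/longitude columns are hypothetical:

```python
import pandas as pd
import geocoder  # pip install geocoder

# Hypothetical input file; columns 3 and 4 (0-based 2 and 3)
# are assumed to hold latitude and longitude
df = pd.read_excel("coords.xlsx")

def reverse_geocode(row):
    # OSM reverse geocode; requires network access
    g = geocoder.osm([row.iloc[2], row.iloc[3]], method="reverse")
    return g.address

# New column with the returned address, then write out a new spreadsheet
df["RETURNED_ADDRESS"] = df.apply(reverse_geocode, axis=1)
df.to_excel("coords_with_addresses.xlsx", index=False)
```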

Database access via Python

In my ongoing quest to do absolutely everything through Python, I’ve been looking a lot lately at manipulating databases. I’ve been using arcpy to access GIS databases for years, and last year I finally got around to using pyodbc (and pypyodbc) for accessing SQL Server databases.

Now that I’m in an Oracle environment, I can use the cx_Oracle library that Oracle provides for connecting directly to its databases, though I have yet to test it. What I’m interested in at the moment is creating and accessing databases for personal use.

I considered MongoDB for a while, but I don’t think I want to go NoSQL yet. This is why I have been experimenting with SQLite (through the sqlite3 library), as it is included in the Python install, and has the delightful SpatiaLite extension. The slogan goes against one of my mottos (Spatial is Special) while supporting my other motto (Everything is Spatial).
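A small sketch of the sqlite3 side, using an in-memory database and a made-up sample-site table (spatial functions would additionally need the SpatiaLite extension loaded):

```python
import sqlite3

# An in-memory database as a stand-in for a personal, file-based one
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical sample-site table with plain coordinate columns
cur.execute("CREATE TABLE sites (name TEXT, lon REAL, lat REAL)")
cur.executemany(
    "INSERT INTO sites VALUES (?, ?, ?)",
    [("Dam A", 28.05, -26.20), ("Dam B", 28.10, -26.15)],
)
conn.commit()

# Plain SQL works out of the box; true spatial queries need SpatiaLite
northernmost = cur.execute(
    "SELECT name FROM sites ORDER BY lat DESC LIMIT 1"
).fetchone()
print(northernmost)  # ('Dam B',)
```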

Filter a pandas data frame using a mask

After using pandas for quite some time now, I started to question if I was really using it effectively. After two MOOCs in R about 2 or 3 years ago, I realised that because my GIS work wasn’t analysis-focused, I wouldn’t be able to use R properly.

Similarly, because pandas is essentially the R of Python, I thought I wouldn’t be able to use all the features it had to offer. As it stands, I’m still hovering around in the data munging side of pandas.

I used a pandas mask to filter a spreadsheet (or CSV) based on a column’s value. I originally used this to filter out which feature classes needed to be created from a list of dozens of templates, but I’ve also used it to filter transactions in the money tracking app I made for my household.
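A minimal version of that filter, with a made-up template list standing in for the spreadsheet (the column names and values are hypothetical):

```python
import pandas as pd

# Stand-in for pd.read_excel("templates.xlsx") or pd.read_csv(...)
df = pd.DataFrame({
    "TEMPLATE": ["Hydrology", "Cadastral", "Sewer", "Transport"],
    "CREATE": ["Y", "N", "Y", "N"],
})

# A boolean mask: True for rows whose CREATE flag matches
mask = df["CREATE"] == "Y"
to_create = df[mask]
print(to_create["TEMPLATE"].tolist())  # ['Hydrology', 'Sewer']
```

The mask itself is just a boolean Series, so it can be combined with other conditions using `&` and `|` before indexing.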

Choosing a research topic

For the first time, the Honours class was not given a list of topics to choose from. Instead, we were instructed to choose any computing-centric topic and post a discussion about it where it could be ripped apart. Obviously, the whole point of me doing this course is to focus on the computery side of GIS, so I settled on spatial data infrastructure (SDI).

There is a lot of literature available on this topic, but most of it focusses on the implementation of SDIs within the public sector. Rightly so – implementing it correctly is expensive, and once the government has bought into the idea and made their data available, private companies should follow suit.

Even then, the whole SDI topic can be very broad, as it is made up of several components. In my current role, while I would have input into all of the components, I wouldn’t necessarily have the mandate for them. I then narrowed it down to the component that I would be responsible for: the geoportal within the SDI. I’ll be focussing on the methods and challenges involved in implementing a geoportal as a component of an SDI within the financial constraints of a large enterprise environment.