I recently modified a script I wrote to extract data from a Word document to a csv file. The modified script had to iterate over multiple documents and extract data from particular tables based on keywords and fields.
I used the python-docx module to do this, but hit an obstacle when I realised that it could not (as yet) parse Word’s content controls. Since I only had nine documents, I opened each one and ran some VBA code pilfered off StackOverflow to remove all the content controls from the document.
While that worked temporarily, my next step is of course to schedule the script to automatically pull the data out once the folder is updated with the new batch of docs for the month. One suggested solution entails saving the code inside each document so it can be called via COM.
I’m not happy with that solution because I would still need to open each document and insert the code. What I need to do now is fiddle around some more so that the code can be saved inside the script and then run on each document as needed.
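Something along these lines should work: a minimal sketch, assuming pywin32 and a local Word installation, that drives Word over COM to strip the content controls (keeping their contents) before python-docx parses the files. The paths are hypothetical.

```python
import win32com.client

# Hypothetical documents to clean
paths = [r"C:\reports\report1.docx", r"C:\reports\report2.docx"]

word = win32com.client.Dispatch("Word.Application")
word.Visible = False
try:
    for path in paths:
        doc = word.Documents.Open(path)
        # Delete in reverse so the 1-based COM collection doesn't
        # skip items as it shrinks; False keeps the control's contents
        for i in range(doc.ContentControls.Count, 0, -1):
            doc.ContentControls(i).Delete(False)
        doc.Save()
        doc.Close()
finally:
    word.Quit()
```

Deleting by index in reverse avoids skipping controls as the collection shrinks underneath the loop.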
Recently I had to wrangle some csv files, including some data calculations and outputting a semi-colon delimited file instead of a comma-delimited file.
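In pandas that boils down to something like this sketch; the file names, columns, and calculation are made up for illustration.

```python
import pandas as pd

# Hypothetical file, columns, and calculation
df = pd.read_csv("readings.csv")
df["TOTAL"] = df["VALUE"] * df["COUNT"]

# Write it back out semicolon-delimited instead of comma-delimited
df.to_csv("readings_out.csv", sep=";", index=False)
```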
I went digging through my old workspace and started looking at some of my old scripts. My style of coding back then is almost embarrassing now 🙂 but that’s just part of the learning process. I decided to post this script, which I wrote just before Esri released their Excel toolset in ArcGIS 10.2.
From what I can recall, I needed to create a file geodatabase table to store records of microbial sample data. Many of the field names were the chemical compounds themselves, such as phosphorus or nitrogen, or bacterial names. For brevity’s sake, I had to use the shortest field names possible while still retaining the full meaning.
I set up a spreadsheet containing the full list of field names in a FIELD_NAMES column and their aliases in an ALIAS column. I created an empty table in a file gdb, and used a SearchCursor on the spreadsheet to create the fields and fill in their aliases.
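The gist of it looked something like this sketch; the paths are hypothetical, and it assumes the Excel sheet can be read directly as a table.

```python
import arcpy

# Hypothetical paths; the Excel sheet is read directly as a table
xls = r"C:\data\field_names.xlsx\Sheet1$"
table = r"C:\data\samples.gdb\microbial_samples"

# Read each name/alias pair and add the field to the empty table
with arcpy.da.SearchCursor(xls, ["FIELD_NAMES", "ALIAS"]) as rows:
    for field_name, alias in rows:
        arcpy.AddField_management(table, field_name, "DOUBLE",
                                  field_alias=alias)
```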
This solution worked for me at the time, but of course there are now better ways to do this.
I had a spreadsheet of coordinates, along with their addresses. The addresses were either inaccurate or missing. Without access to an ArcGIS licence, and knowing the addresses were not available on our enterprise geocoding service, I looked for a quicker (and open-source) way.
I used the geocoder library to do this. I used it previously when I still had an ArcGIS Online account and a Bing key to check geocoding accuracy amongst the three providers.
Since I don’t have those luxuries anymore, I used pandas to read in the spreadsheet and reverse geocode the coordinates found in the third and fourth columns. I then added a new column to the data frame to contain the returned address, and copied the data frame to a new spreadsheet.
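A minimal sketch of that workflow, assuming the free OSM provider (since the keyed providers were out of reach) and hypothetical file and column positions:

```python
import geocoder
import pandas as pd

df = pd.read_excel("coordinates.xlsx")  # hypothetical file

def reverse_geocode(row):
    # Columns 3 and 4 (0-based indices 2 and 3) hold lat and lon
    g = geocoder.osm([row.iloc[2], row.iloc[3]], method="reverse")
    return g.address

df["ADDRESS"] = df.apply(reverse_geocode, axis=1)
df.to_excel("coordinates_with_addresses.xlsx", index=False)
```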
In my ongoing quest to do absolutely everything through Python, I’ve been looking a lot lately at manipulating databases. I’ve been using arcpy to access GIS databases for years, and last year I finally got around to using pyodbc (and pypyodbc) for accessing SQL Server databases.
Now that I’m in an Oracle environment, I can use the cx_Oracle library that Oracle provides to connect directly to its databases. I have yet to test that though. What I’m interested in at the moment is creating and accessing databases for personal use.
I considered MongoDB for a while, but I don’t think I want to go NoSQL yet. That’s why I have been experimenting with SQLite (through the sqlite3 library), as it is included in the Python install, and has the delightful SpatiaLite extension. Its slogan goes against one of my mottos (Spatial is Special) while supporting my other motto (Everything is Spatial).
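Getting started is pleasantly small. A minimal sketch, assuming SpatiaLite is installed and the Python build allows loading extensions; the file and table names are made up:

```python
import sqlite3

conn = sqlite3.connect("personal.db")

# Requires a Python build with extension loading enabled and
# the SpatiaLite module available on the library path
conn.enable_load_extension(True)
conn.load_extension("mod_spatialite")
conn.execute("SELECT InitSpatialMetaData(1);")

# A small spatially-enabled table
conn.execute("CREATE TABLE IF NOT EXISTS places "
             "(id INTEGER PRIMARY KEY, name TEXT);")
conn.execute("SELECT AddGeometryColumn('places', 'geom', 4326, 'POINT', 'XY');")
conn.commit()
conn.close()
```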
After using pandas for quite some time now, I started to question if I was really using it effectively. After two R MOOCs about two or three years ago, I realised that because my GIS work wasn’t in analysis, I would not be able to use R properly.
Similarly, because pandas is essentially the R of Python, I thought I wouldn’t be able to use all the features it had to offer. As it stands, I’m still hovering around in the data munging side of pandas.
I used a pandas mask to filter a spreadsheet (or csv) based on some value. I originally used this to filter out which feature classes need to be created from a list of dozens of templates, but I’ve also used it to filter transactions in the money tracking app I made for my household.
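A minimal sketch of the masking pattern, with a hypothetical file and columns:

```python
import pandas as pd

df = pd.read_csv("transactions.csv")  # hypothetical file and columns

# Boolean mask: True for rows matching the value of interest
mask = df["CATEGORY"] == "Groceries"
groceries = df[mask]

# Masks can be combined, e.g. category AND a minimum amount
big_groceries = df[mask & (df["AMOUNT"] > 100)]
```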
I needed to add a field to dozens of layers. Some of the layers contained the field already, some of them contained a similar field, and some of them did not have the field. I did not want to batch Add Field, because not only would it fail on the layers which already had the field, but it is super slow and I would then still have to transfer the existing values from the old field to the new field.
I wrote the tool in the ArcMap Python window, because I found it easier to load my layers into an mxd first as they were lying all over the place. The new field to be added is set up as a NumPy array, with the relevant dtype.
The script loops over all the layers in the document, adding the field via the Extend Table tool, and then transferring the values from the old field to the new field. Deleting the old field at the end would be an appropriate step to include, but I didn’t, purely because I’ve lost data that way before.
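Reconstructed from memory as a sketch: the field names, the numeric type, and the zero default are all assumptions, and it presumes the layers in the current map document are feature layers.

```python
import arcpy
import numpy

OLD_FIELD, NEW_FIELD = "OLD_NAME", "NEW_NAME"  # hypothetical names

mxd = arcpy.mapping.MapDocument("CURRENT")
for lyr in arcpy.mapping.ListLayers(mxd):
    if not lyr.isFeatureLayer:
        continue
    fields = [f.name for f in arcpy.ListFields(lyr)]
    if NEW_FIELD in fields:
        continue  # this layer already has the new field

    # Build a NumPy array keyed on ObjectID, then join it on
    oids = [row[0] for row in arcpy.da.SearchCursor(lyr, ["OID@"])]
    arr = numpy.array([(oid, 0.0) for oid in oids],
                      dtype=[("_ID", numpy.int32),
                             (NEW_FIELD, numpy.float64)])
    arcpy.da.ExtendTable(lyr, "OID@", arr, "_ID")

    # Transfer values across where the old field exists
    if OLD_FIELD in fields:
        with arcpy.da.UpdateCursor(lyr, [OLD_FIELD, NEW_FIELD]) as cur:
            for old_val, _ in cur:
                cur.updateRow([old_val, old_val])
```

ExtendTable joins the array to each layer on ObjectID, which is why the array carries its own match column; the old field is then left in place rather than deleted.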