I wrote this script for a former colleague last year. She had received a Word document containing about 600 tables that must have been dumped out of a database somewhere. The tables all shared the same header row, and each table represented an “incident”, with dates, details and so on.
She had been asked to “put it into Excel”. After manually copying the first table into matching columns in a spreadsheet, she came to me. This type of thing is normally a task we would give to a student, as it has nothing to do with GIS. Nevertheless, when I saw the repeating structure I was sure I could come up with something to do it automatically.
import docx
import xlwt

doc = r"C:\Some\Arb\Folder\input.docx"
xls = r"C:\Some\Arb\Folder\output.xls"

document = docx.Document(doc)
book = xlwt.Workbook()
cur_sheet = book.add_sheet("Tables")
row_num = 0

tables = document.tables  # Get all the tables in the docx

# Get the header row from the 1st table's 1st row
for index, cell in enumerate(tables[0].rows[0].cells):
    cur_sheet.write(row_num, index, cell.text)

for table in tables:
    for row in table.rows[1:]:  # Skip the repeating header row of each table
        row_num += 1
        for index, cell in enumerate(row.cells):
            if cell.text.strip() != '':
                cur_sheet.write(row_num, index, cell.text.strip())

book.save(xls)
The script finds all the tables in the document, and grabs the header from the first table to serve as the headings in the spreadsheet. It then iterates over all the tables, skips the header row and populates the spreadsheet with all the rows from the various tables.
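The flattening pattern itself (header from the first table only, data rows from every table) is independent of python-docx. A minimal sketch using plain lists of strings, with a made-up `flatten_tables` helper name, shows the logic on its own:

```python
def flatten_tables(tables):
    """Flatten tables that share a repeated header row.

    Each table is a list of rows; each row is a list of cell strings.
    The header is taken from the first table's first row, and the
    first row of every table is skipped as a repeated header.
    """
    if not tables:
        return []
    flat = [tables[0][0]]  # header from the 1st table's 1st row
    for table in tables:
        for row in table[1:]:  # skip each table's header row
            flat.append([cell.strip() for cell in row])
    return flat

# Two "incident" tables, each repeating the same header:
tables = [
    [["Date", "Detail"], ["2023-01-01", "Fire"]],
    [["Date", "Detail"], ["2023-02-02", "Flood"]],
]
print(flatten_tables(tables))
# → [['Date', 'Detail'], ['2023-01-01', 'Fire'], ['2023-02-02', 'Flood']]
```

In the real script the rows are written straight into the worksheet instead of collected into a list, but the iteration order is the same.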
It took about 15 minutes to write (I had to play around a bit to access the table elements correctly) and less than a minute to extract the data. That is roughly the time it would have taken to copy 5 of the tables manually; at that pace, doing all 600 by hand would have taken about 4 days.