
Segmenting a Text Document using the Idea of a Cellular Automaton


The German parliament publishes protocols for each of its sessions. A lot of data waiting to be processed. The protocols are published in the form of text files and PDFs. The published text files are not to my liking, but xpdf manages to produce decent text versions from the PDFs. The layout is preserved quite well, which is good because it makes the whole journey from there more deterministic. Processing the layout, though, is not trivial because the text flow is not trivial.

  • Most of the text – the actual parts holding the transcript – is split into two columns.
  • Lists with names are usually separated into four columns.
  • Headlines and titles mostly occupy a full line.
  • Tables can programmatically look similar to all three of those styles.

Those different layouts can be combined on one page, and the empty columns separating them vertically are not guaranteed to be found at specific positions.

[Figure: text flow]

If you are like me, you now start thinking about a set of regular expressions to get the job done. For me this got very messy very quickly …

“Some people, when confronted with a problem, think ‘I know, I’ll use regular expressions.’ Now they have two problems.” (Jamie Zawinski)

My gut feeling was that there must be a more generic solution to this, so I did what every decent programmer would do at this point – I took my mind off the job and got me a coffee.

I am not sure what led me to the concept of cellular automata, but I suddenly had the idea that if you think of the text as a character matrix, then the shape of a text block would emerge from iteratively applying cellular rules. As soon as all text blocks were rectangular, it would just be a matter of enumerating them according to their top-left corners. This would be possible because an empty separating line (green in the picture above) always goes all the way from left to right.
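
To make that idea a bit more concrete, here is a minimal sketch in Python. It is not the rule set I actually ended up with: the names load_matrix, step and stabilize and the MAX_H_GAP parameter are made up for illustration, and the sketch assumes the gutter between columns is at least 2 * MAX_H_GAP characters wide. The page becomes a boolean matrix (True for non-whitespace), and two local rules are iterated until nothing changes: a vacuum cell turns into content if there is content within MAX_H_GAP cells on both its left and its right in the same row (closing word gaps without bridging the wide gutter), or if the cell directly above or below it is content and its own row is not completely empty (so each block squares off into a filled rectangle while the full-width separator lines stay empty).

MAX_H_GAP = 2  # word gaps of up to 2 * MAX_H_GAP - 1 spaces get closed;
               # the gutter between columns is assumed to be wider than that

def load_matrix(path):
    """Read the layout-preserving xpdf text output into a boolean matrix."""
    lines = open(path, encoding="utf-8").read().splitlines()
    width = max((len(line) for line in lines), default=0)
    return [[not ch.isspace() for ch in line.ljust(width)] for line in lines]

def step(matrix):
    """Apply both fill rules once; return the new matrix and a change flag."""
    rows, cols = len(matrix), len(matrix[0])
    new = [row[:] for row in matrix]
    changed = False
    for r in range(rows):
        if not any(matrix[r]):
            continue  # a completely empty row separates blocks and stays empty
        for c in range(cols):
            if matrix[r][c]:
                continue
            # horizontal rule: content within MAX_H_GAP cells on both sides
            left = any(matrix[r][c - d] for d in range(1, MAX_H_GAP + 1) if c - d >= 0)
            right = any(matrix[r][c + d] for d in range(1, MAX_H_GAP + 1) if c + d < cols)
            # vertical rule: content directly above or below, in the same column
            vert = (r > 0 and matrix[r - 1][c]) or (r < rows - 1 and matrix[r + 1][c])
            if (left and right) or vert:
                new[r][c] = True
                changed = True
    return new, changed

def stabilize(matrix):
    """Iterate the rules until the matrix reaches its stable state."""
    changed = bool(matrix)
    while changed:
        matrix, changed = step(matrix)
    return matrix

With these two rules the vacuum can only retreat and never grow back, so the iteration is guaranteed to terminate.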

I wrote a JavaScript app that would highlight the parts identified as content (red) and gradually push back the vacuum (blue) until a stable state is reached. The case of table content needs some more computational hand-holding though, because a table would otherwise be identified as a set of disconnected columns (yes … I used regular expressions at that point, but don’t tell anybody). I am not publishing that code because there is nothing special to be learned from it. But I will publish the data processing for the Bundestag protocols, and I guess I will try to enhance and generalize the idea for the purpose of text mining in general.
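
Once the stable state is reached, enumerating the blocks by their top-left corners is straightforward, because the full-width empty rows split the page into horizontal bands and the untouched gutters split each band into rectangles. Again just a sketch building on the functions above; enumerate_blocks and extract_blocks are illustrative names, not the ones in the repository:

def enumerate_blocks(matrix):
    """Yield (top, bottom, left, right) rectangles in reading order:
    bands between empty rows top to bottom, blocks in a band left to right."""
    rows, cols = len(matrix), len(matrix[0]) if matrix else 0
    r = 0
    while r < rows:
        if not any(matrix[r]):
            r += 1
            continue
        top = r                          # start of a band of non-empty rows
        while r < rows and any(matrix[r]):
            r += 1
        bottom = r                       # exclusive
        # columns that carry content anywhere within this band
        used = [any(matrix[i][c] for i in range(top, bottom)) for c in range(cols)]
        c = 0
        while c < cols:
            if not used[c]:
                c += 1
                continue
            left = c
            while c < cols and used[c]:
                c += 1
            yield top, bottom, left, c   # c is the exclusive right edge

def extract_blocks(raw_lines, matrix):
    """Cut the original text lines along the block rectangles."""
    for top, bottom, left, right in enumerate_blocks(matrix):
        yield [raw_lines[i][left:right].rstrip() for i in range(top, bottom)]

A table would surface here as several narrow rectangles sitting side by side within one band, which is exactly the case that needs the extra hand-holding mentioned above.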


I implemented this algorithm in Python for evaluating the public Bundestag protocols as described here. The code can be found on GitHub; the part specific to this algorithm starts at line 135. The algorithm is in its current state not efficient, but it works like a charm.


(original article published on www.joyofdata.de)

