Queen's University tried out a deception detector, and applied it to speeches by three Canadian political leaders.
The results? Paul Martin uses the most indirect language, Stephen Harper the least indirect, and Jack Layton is slightly more indirect than Harper.
Using the Pennebaker model for detecting deception in text, the computer science students scanned the speeches for four traits. The model predicts that deceptive text will be marked by:
- A decreased frequency of first-person pronouns, perhaps because of a speaker's attempts to distance himself or herself from what's being said;
- A decreased frequency of exception words, such as 'however' and 'unless', perhaps to keep the story simple;
- An increased frequency of negative emotion words, perhaps because of some instinctive distaste about deceiving; and
- An increased frequency of action words, perhaps to keep the story going so that inconsistencies might not be noticed.
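The four cues above boil down to simple word-frequency counts. Here's a minimal sketch of that idea in Python; the word lists are tiny illustrative stand-ins I made up, not the actual LIWC dictionaries, and the scoring is just relative frequency, not the students' full analysis:

```python
import re

# Illustrative stand-in word lists for each cue (NOT the real LIWC categories).
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our"}
EXCEPTIONS = {"but", "however", "unless", "except", "although"}
NEGATIVE_EMOTION = {"hate", "enemy", "sad", "angry", "wrong", "worthless"}
ACTION = {"go", "run", "move", "take", "make", "act"}

def cue_frequencies(text):
    """Return each cue's share of the total word count in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid dividing by zero on empty input
    share = lambda vocab: sum(w in vocab for w in words) / total
    return {
        "first_person": share(FIRST_PERSON),
        "exception": share(EXCEPTIONS),
        "negative_emotion": share(NEGATIVE_EMOTION),
        "action": share(ACTION),
    }

# Per the model, a deceptive passage should score LOWER on the first two
# cues and HIGHER on the last two, relative to some baseline corpus.
scores = cue_frequencies("We made mistakes, but I take responsibility.")
```

Of course, the hard part isn't the counting; it's choosing the baseline against which "increased" and "decreased" are judged.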
I don't know if I buy the model's premise, but at least somebody's paying attention to speechwriting these days.
Intrigued, I ran a recent blog posting through the Linguistic Inquiry and Word Count (LIWC) tool from the University of Texas at Austin's James Pennebaker, which analyzes an author's character traits from their writing.
If I understand the tool correctly, my writing is more direct, less emotional and more optimistic than average. I also use a lot of articles and big words, which leaves me open to accusations of being a detached, concrete thinker. (I also ran this posting through the tool, and it seems to think a brainy robot wrote it.)
Thanks to Jakob Webster for the link.