The Pear Necessities: Pear Analytics’s Pointless Babble Beta needs more work

By | October 11, 2009

I was raiding the Pear Analytics website for the reference to their initial report, and I noticed they’ve upgraded to a brand spanking new Beta(TM) feature – the PointlessBabble Filter – so we can see just what counts as pointless babble.  Under their own house rules for what falls into each category, they’ve stuffed up the first six of seven items on the page (and I just wandered in, took a screenshot and counted.  Your mileage won’t vary much: they’d stuffed up a large number of other tweets further down the page).

Once again, I find myself grateful I don’t have these people as my students

Seriously, if you’ve set criteria of (@, RT or HTTP), how hard is it to keep to those criteria?
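Those stated criteria amount to three pattern checks. A minimal sketch of how the filter should behave (function and pattern names are mine, not Pear Analytics’s actual implementation):

```python
import re

# Under Pear's own house rules, a tweet containing an @-mention, an RT,
# or a URL should never land in the "pointless babble" bucket.
MENTION = re.compile(r"@\w+")
RETWEET = re.compile(r"\bRT\b", re.IGNORECASE)
URL = re.compile(r"https?://\S+", re.IGNORECASE)

def could_be_babble(tweet: str) -> bool:
    """Return True only if the tweet matches none of the auto-category rules."""
    return not (MENTION.search(tweet)
                or RETWEET.search(tweet)
                or URL.search(tweet))

# could_be_babble("Having a sandwich")        -> True
# could_be_babble("RT @user: great post")     -> False
# could_be_babble("check http://example.com") -> False
```

If the Philtro stream showed tweets that fail this check under the “pointless babble” heading, that is the internal inconsistency being complained about here.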

Also, Ryan, it pays to read the report you’re defending before you defend it.  When you responded to the criticisms of the categorisation, you made one minor error of consistency.

Categorization – lots of comments claimed that the categories were vague and subjective. I still feel the categories are fine – but what we can do is sub-categorize on the next round.  For example, on the News category, we could break it out into mainstream, tech, social media, etc.  For Conversational and Pass Along Value, we could add which percentage of those had links.  This keeps the primary categories consistent between reports for comparison purposes.

News: Any sort of main stream news that you might find on your national news stations such as CNN, Fox or others. This did not include tech news or social media news that you might find on TechCrunch or Mashable.

Categorisation of "News" now includes the excluded
All the news that's quite limited and doesn't include social media stuff

Sarah, Ryan, Pear Analytics et al, before you drop the October 15 report, here’s a set of suggestions:

  • Pick up on the lack of attention to detail that’s preventing you from producing coherent arguments – look for internal consistency in your software, your document and your site.  The example above is a neat demonstration of the lack of a cohesive understanding of the “News” category.
  • Clean up the bugs on the Philtro system – ensure the auto-categories (RT, @, url) don’t appear on the pointless babble stream.
  • Redefine your content categories with greater precision so you don’t find yourself excluding with the left and including with the right.
  • Specify whether a single-count or multi-count method is used where a tweet could fit multiple categories.
  • Read Naaman, Boase, and Lai (2010) – Is it Really About Me? Message Content in Social Awareness Streams.

Good luck for the October 15 release.  You know I’ll be waiting for the report.

ETA: Sarah Monahan has been in contact to let me know she’s no longer with Pear Analytics.

One thought on “The Pear Necessities: Pear Analytics’s Pointless Babble Beta needs more work”

  1. Ryan Kelly

    Sorry to disappoint Stephen, but we are delaying the release date of the next study, and will probably have it by end of the month. Also, to briefly comment on your observation above, half of those tweets you are seeing were voted “down” in the Philtro tool, which will vary from our criteria of tweets having no RT, short URL, etc. I believe our criteria is more strict for defining what is babble and what is not, and if you left it up to the user, the percentage would be much higher. That’s my theory anyway, and hope to prove/disprove that in the next study.

Comments are closed.