I have decided IP Statistics is mostly stupid. Let’s say, statistically speaking, 85% of it.
Mostly because the people who have always worked in this space (patent-related) are used to focusing on individual patent documents – either creating them or enforcing them. Which means hardly anyone applies the findings from an IP statistical analysis correctly.
By correctly, I mean: see my earlier blog on why one cannot apply big-data insights to individual events.
The receivers of IP stats data typically try to focus in on one or two patents and then make some determination, or gain some insight into, that one right. But it doesn’t work like that.
An IP statistical analysis that sets out to, e.g., identify valuable patent rights in a portfolio that might be licensable seems doomed. At best it can tell you where the valuable right “might” be (somewhere in a set of 50 or so), but anything more than that requires human intervention and an understanding of the commercial environment in which the right sits. At best the stats are a guide; at worst they’re bad dogma.
IP statistics might have a use in policy making, because policies are broad-brush, handwavy things. But take it any further – expecting to identify freedom-to-operate risks, collaboration partners, or gold nuggets of value – and I think you’ll be using a sledgehammer to crack a peanut.
My friend – let’s call him George – figured this all out ages ago. So he started doing analytics projects that could be “drilled down into”. This means building really clean datasets that can provide overview-level insights, but which can also be deconstructed to give meaningful individual results. In other words, follow a thread and you will end up at a genuinely relevant patent document. This is achieved through careful meta-tagging of the data during cleanup. Noise in the raw data (up to a threshold) can be tolerated by statistics, but not by drill-down. The outcome of such a project looks largely the same to the client, but it’s actually much better in function: more flexible, and able to answer both big and little questions.
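To make the idea concrete, here is a minimal sketch of the dual-use dataset George builds. Everything in it is invented for illustration – the patent numbers, tags, and field names are assumptions, not real data – but it shows the mechanism: each cleaned record keeps its source document ID alongside the meta-tags added during cleanup, so any aggregate figure can be deconstructed back into the individual documents behind it.

```python
from collections import defaultdict

# Hypothetical cleaned records: each one carries the "thread" (doc ID)
# plus meta-tags assigned during cleanup.
records = [
    {"doc": "EP0000001", "tech": "battery", "assignee": "ACo"},
    {"doc": "EP0000002", "tech": "battery", "assignee": "BCo"},
    {"doc": "EP0000003", "tech": "optics",  "assignee": "ACo"},
]

def overview(records, tag):
    """Aggregate view: counts per tag value (the 'statistics' layer)."""
    counts = defaultdict(int)
    for r in records:
        counts[r[tag]] += 1
    return dict(counts)

def drill_down(records, tag, value):
    """Deconstruct one aggregate cell back into individual documents."""
    return [r["doc"] for r in records if r[tag] == value]

print(overview(records, "tech"))               # {'battery': 2, 'optics': 1}
print(drill_down(records, "tech", "battery"))  # ['EP0000001', 'EP0000002']
```

The same dataset serves both audiences: the overview answers the big question, and the drill-down lands you on specific patent documents. Note that this only works if the cleanup was done properly – a record with a missing or wrong `doc` ID still contributes to the counts but breaks the thread, which is why drill-down tolerates less noise than statistics.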
One problem: the approach is currently less profitable, and the client doesn’t seem to realise the value proposition. Companies that commoditise IP statistics (the Thomson Reuters types) wouldn’t know a good clean dataset – or why you would want one – if it jumped out of the bushes. Unfortunately the client doesn’t care. George gets mad. It all devolves. Ja. Stick to governments and go for policy support. That’s what I say.