This post is the outcome of thoughts, discussions and interactions that began with the last two posts, The Velocity of Information Part 1 and Part 2. Given that information needs to flow into decision matrices, and that speed matters, the idea has been given this title.
The insights from these discussions led to the following chain of thought.
- Data as an end state is of no use. It needs to be converted into a usable format, i.e. information. This conversion is best done by the people who are close to, and understand, the data
- Information as an end state is of no use. It needs to be analyzed and insights drawn from it. That analysis is motivated by the results it must serve; in other words, the insights need to be drawn (or at least vetted) close to the decision makers
- Insight as an end state is of no use. It becomes an academic exercise if it does not flow into a decision matrix. Remember, a decision to do nothing is also a decision
- It is difficult to predict a priori which piece of information will be relevant to which decision. The appropriate approach, therefore, is to de-couple the information-creation and information-use stages and connect the two through context
The idea is that as we increase velocity, decision making becomes more effective. The context of the decision drives what information and insights enter the matrix, while the increased velocity improves their coverage and recency.
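The chain above can be sketched as a simple pipeline. The stage functions and the thresholds below are purely illustrative assumptions, not anything prescribed in the posts; the point is only that each stage is of no use as an end state and must flow into the next.

```python
# Hypothetical sketch of the data -> information -> insight -> decision chain.
# All function names, fields and thresholds are illustrative assumptions.

def to_information(data):
    """Data -> Information: conversion done by people close to the data."""
    return {"metric": sum(data) / len(data), "context": "sales-trend"}

def to_insight(information):
    """Information -> Insight: drawn (or vetted) close to the decision makers."""
    return "demand is flat" if information["metric"] < 100 else "demand is rising"

def to_decision(insight):
    """Insight -> Decision: remember, a decision to do nothing is also a decision."""
    return "hold inventory" if insight == "demand is flat" else "increase inventory"

raw_data = [90, 95, 88, 102]
decision = to_decision(to_insight(to_information(raw_data)))
print(decision)  # -> hold inventory (mean is 93.75, below the illustrative threshold)
```

In practice the de-coupling argued for above means these stages would not be hard-wired into one call chain like this; the middle hand-off would go through a context-based delivery layer instead.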
So, the question is: what does this mean in practice? We need to dig deeper into three areas.
1. Information Creation
There are a plethora of tools available for this step comprising the first two layers of the proposed model. The “Business Intelligence” world is exactly about being able to extract information about data sets (which are getting ‘bigger’ all the time). The majority of this world revolves around structured data and structured analysis, but unstructured data analysis is beginning to come into its own at this point. There are strengths and weaknesses in this area which need to be addressed, but there are already several threads on that.
In my mind, the important thing here is to ensure that the extracted information is rational: accurate, timely and correctly categorized. Categorization means defining the context(s) within which a particular piece of information could be useful, along with the standard tagging/metadata pieces.
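One way to picture such a categorized unit of information is as a record that carries its contexts and metadata with it. This is a minimal sketch under assumed field names (`payload`, `contexts`, `tags`, `created_at`); nothing here is a prescribed schema.

```python
# Sketch of "rational" information: accurate, timely, correctly categorized.
# All field and method names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class InformationItem:
    payload: dict                                  # the extracted information itself
    contexts: list = field(default_factory=list)   # decision contexts it could serve
    tags: dict = field(default_factory=dict)       # standard tagging/metadata pieces
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def is_timely(self, max_age: timedelta) -> bool:
        """Recency check: stale information degrades the decision matrix."""
        return datetime.now(timezone.utc) - self.created_at <= max_age

item = InformationItem(
    payload={"churn_rate": 0.042},
    contexts=["customer-retention", "quarterly-planning"],
    tags={"source": "crm", "region": "emea"},
)
print(item.is_timely(timedelta(days=1)))  # True for a freshly created item
```

The `contexts` list is what would later let the delivery layer match this item to a decision without the creator knowing the decision in advance.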
2. Information Delivery
This is also a critical stage of the process, represented by the cloud (layer 3) in the previous posts. This layer defines the contextual language, connects the suppliers of information to the buyers of information, and provides the plumbing of the whole scheme. In essence, it is a marketplace that makes decision making more effective.
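The marketplace mechanic can be sketched as a publish/query exchange keyed on context: suppliers file information under the contexts it could serve, and buyers pull by the context of their decision. The class and method names below are assumptions for illustration only.

```python
# Minimal sketch of layer 3 as a context-keyed marketplace.
# Suppliers publish context-tagged information; buyers query by context.
from collections import defaultdict

class InformationMarketplace:
    def __init__(self):
        self._by_context = defaultdict(list)

    def publish(self, contexts, information):
        """Supplier side: file the information under each declared context."""
        for context in contexts:
            self._by_context[context].append(information)

    def query(self, context):
        """Buyer side: a decision maker sends a context, gets matching information."""
        return list(self._by_context.get(context, []))

market = InformationMarketplace()
market.publish(["pricing", "competition"], {"competitor_price": 9.99})
market.publish(["pricing"], {"own_margin": 0.31})
print(market.query("pricing"))  # both published items match the 'pricing' context
print(market.query("hiring"))   # [] -- no supplier has published in this context
```

Note how neither side knows about the other: the supplier never predicts which decision will use the item, which is exactly the de-coupling argued for earlier.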
3. Insight Creation
At the heart of decision making is converting information to insight, represented by layer 4 in the model. This process is completely manual and is often what people mean when they talk about "out-of-the-box" thinking. This layer needs bright, experienced people who understand the context. But the quality and coverage of the incoming information are critical: a missed indicator or a false positive can throw the whole process out of kilter.
The ability of these "experts", therefore, lies in framing the correct contexts. What they then depend on is the quality of the information they get back when they send a context into the Information Delivery system, the aforementioned cloud.
It seems to me quite imperative that organizations look at what can be done to improve the velocity of information and make decision making more effective. De-coupling, as above, probably makes for a smoother implementation, technically as well as organizationally.
However, the opportunity here is not necessarily limited to internal implementations. The power of this can be stretched a little. How many data providers do we have in the world today? Could they improve their offerings by converting data to information before they publish? The analytics could be done on their own initiative or to a client's specification. Further, they could publish the output (the non-client-specific information) to somebody who runs an information mart! The information mart sets up the contextual language and provides fully baked information to clients who need it; the information could be a mix of the client's internal analytics and external open-market feeds. The possibilities here are extremely intriguing…
So, how many of the 9-to-whatevers would be willing to take the plunge into something like this? Comments? Thoughts?