The Velocity Of Information (Part 1)

Decisions are made every day. In fact, most individuals make several a day. The same is true within a corporate setting.

So, what are these decisions based upon? When we talk of making “informed” decisions, we mean decisions based on information. Crisis management apart, decisions are typically taken after careful deliberation of the available information. The information may not provide a complete picture, but the more coverage you have, the better the decision.

How does information get to the decision makers? In yesteryear, careful paths were laid down to elicit this information. In a top-down approach, the data to be analyzed was identified, the framework for analysis was agreed, and the data flowed from the point of data availability to the point of decision making. On this journey, it was collated, checked, summarized, categorized, sliced and diced, ad nauseam, until it reached its destination. The time it took to reach the decision maker often made the information irrelevant, or at least resulted in lost opportunity. For information to be useful in decision making, it has to be timely.

Then, the technique advanced. Vast stores of data were made available close to the decision makers, who could apply the necessary frameworks to elicit the information in a more timely manner. However, the volume of data grew, and incorrect and irrelevant information crept into decisions. For information to be useful in decision making, it has to be relevant.

Also, in today’s world there are multiple sources of information and multiple decision makers consuming it, making pre-defined flows difficult to identify and implement. The question is: what can be done to get relevant information to the decision makers in a timely manner, even when they might not know they want it? This is what I would like to discuss and build on. My initial thoughts follow…

I am not big on industry jargon, but three things which are big today might contribute to the solution (though not necessarily in the way they are understood currently!). In my proposal, I create four layers; a rough sketch of how they might hang together follows the list. A lot of thought still needs to be put into how this will function. Early days yet…

1. The information creation layer represents the “big data” gang. However, this can be decentralized. This layer extracts information from the data, and the rules and frameworks used to extract it are in the hands of those who are closest to the data.

2. The information curation layer. These are the people/systems that validate and categorize the information for accuracy and relevance, e.g. eliminating coincidences and non-causal correlations.

3. The “cloud” layer. This allows for multi-point-to-multi-point flow of the information.

4. The “decision maker curation” layer. These are the people/systems who subscribe to the information from the cloud, establish relationships between items, and curate it for the decision makers.
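To make the flow concrete, here is a minimal sketch in Python of how the four layers might fit together, using an in-memory publish/subscribe stand-in for the “cloud”. Every name here (the Cloud class, the “sales” topic, the sample records) is hypothetical and chosen only for illustration; a real platform would sit on a proper messaging backbone.

```python
from collections import defaultdict

class Cloud:
    """Layer 3: multi-point-to-multi-point flow via publish/subscribe."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, item):
        for handler in self.subscribers[topic]:
            handler(item)

def create_information(raw_records):
    """Layer 1: extract information from data, close to the source."""
    return [{"metric": r["name"], "value": r["value"]} for r in raw_records]

def curate_information(items):
    """Layer 2: validate for accuracy/relevance; drop items that fail basic checks."""
    return [i for i in items if i["value"] is not None]

def decision_curator(item):
    """Layer 4: relate and present information for a decision maker."""
    print(f"For review: {item['metric']} = {item['value']}")

# Wire the layers together: the decision curator subscribes to a topic,
# while the creation and curation layers publish without knowing who listens.
cloud = Cloud()
cloud.subscribe("sales", decision_curator)

raw = [{"name": "weekly_revenue", "value": 120_000},
       {"name": "weekly_returns", "value": None}]
for item in curate_information(create_information(raw)):
    cloud.publish("sales", item)
```

The point of the sketch is the decoupling: layers 1 and 2 never know who will consume an item, and layer 4 subscribes without knowing which source produced it, which is exactly what lets multiple sources and multiple decision makers plug in independently.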

Will this allow us to increase the Velocity of Information and, with it, the quality of our decisions? What do the 9-to-whatevers think?


6 thoughts on “The Velocity Of Information (Part 1)”

  1. The challenge, as always, will be to improve the interfaces between the different layers. Having four layers will increase the propensity for delay at each stage. There should be flexibility in the system to circumvent a layer or two depending upon the complexity of the analysis. A feedback path also needs to be established: for instance, if the decision curator identifies an issue in the data, will the rectification/clarification go back through the cloud, or can it go directly to the information layer? Ideally, there could be a governing group that not only establishes a standard set of processes across the platform, to prevent the various layers from working in silos, but also controls the flow of information (both data and otherwise) across the system.

    • Satyakam – Thanks for the thought. The response may be longer than the article 🙂

      The framework is designed for information (post analysis) to flow upwards, not data. The idea is to take action closer to the source and not the destination. The information curator will know the data, not the decision curator. Hence, no issues from a data perspective are expected to enter the cloud.

      The desire here is to de-couple the various layers and make them work independently. This allows information from multiple data sources to flow, each at its own pace, into a context. This should reduce delays, since various existing layers can be avoided.

      A platform implementation of this framework will definitely need a governing body, but it should dictate the boundaries and interfaces rather than attempt to couple the layers tightly.

      The thoughts should be more evident in part 2, which you should see shortly.

  2. Dear Aviral,
    You pose interesting questions. I totally agree with you when you say “decisions are typically taken after careful deliberation of the available information. The information may not provide a complete picture, but the more coverage you have, the better the decision”.

    I think that in today’s complex world the only approach that works for solving problems and catching opportunities is experimentation. The variables and elements are so many, and they interact in such an ever-changing way, that as decision makers all we can do is experiment.

    Have a look at our blog site and watch our two-minute video:
    http://straight2globalchances.wordpress.com/

    Feel free to comment.

    Kind regards,
    Marco Ippolito
