The Velocity of Information (Part 2)

There have been a few interesting offline discussions on this topic – thanks to those who helped shape the thoughts below.

Here is a visual representation of the framework.

The Layers

So, what are the thoughts driving the framework at this point?

  • The output desired here is a framework, not a product roadmap. Hence, the thoughts are limited to principles, with an attempt at completeness
  • The solution is not a BI tool. While these tools (including Big Data tools) are sufficiently advanced today, they still represent only the bottom layer of this framework
  • Work should be done as close to the source as possible. Hence, ownership of the data, the analysis and their accuracy needs to sit at the information curation layer; these concerns need to be satisfied before the information hits the cloud
  • The two processes of generating information and using it for decisions need to be de-coupled. The impact of this will be shown below, and a minimal sketch of the decoupling follows this list
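
To make the decoupling concrete, here is a minimal sketch in Python. All names here (InformationCloud, publish, subscribe) are my own inventions for illustration, not a reference implementation: producers push curated information into the cloud without knowing who will consume it, and consumers register interest without knowing who produced it.

```python
# A minimal, hypothetical sketch of decoupling information generation
# from information use via a publish/subscribe "cloud".

from collections import defaultdict
from typing import Callable, Dict, List

class InformationCloud:
    """Central store: silos publish curated items; decision makers subscribe."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable]] = defaultdict(list)

    def publish(self, topic: str, payload: dict, source: str) -> None:
        # Ownership of accuracy stays with the curating silo; the cloud
        # only distributes what has already been curated.
        item = {"topic": topic, "payload": payload, "source": source}
        for handler in self._subscribers[topic]:
            handler(item)  # consumers pick the item up as it arrives

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

# The producer does not know the consumer, and vice versa.
cloud = InformationCloud()
cloud.subscribe("sales", lambda item: print("decision input:", item["payload"]))
cloud.publish("sales", {"region": "APAC", "units": 1200}, source="marketing")
```

In a real system the handlers would be queued rather than called inline, but the essential property holds: neither side needs to know the other a priori.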

If we look at the Analytics layer, we see several disparate data sets. This indicates that we could be creating information within disparate silos. As an example, the operations department could provide the information on production data and the marketing department on sales data, while the macro-economic data for the geography and any relevant news tidbits could come from completely different data sets.

The silo generating the information should not limit itself to the consumers it knows about a priori. Anybody within the organization should be allowed to access the information.

Similarly, the decision curators should not limit themselves to the range of inputs they know they want a priori. A context search within the cloud should give them everything relevant to the context; their function is then to filter what is relevant from the results available. As an example, the team deciding on a product mix for a particular geography would not normally approach the legal department to ask whether any new laws are expected; this structure would allow that information to be assimilated automatically. Don’t we all want to increase the serendipity within our organizations? A minimal sketch of such a context search follows.
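
Purely as an illustration (the function name context_search and the tags field are my inventions, not part of any particular tool): each published item carries free-form context tags supplied by the curating silo, and a decision curator retrieves everything whose tags overlap the decision context, then filters for relevance.

```python
# Hypothetical sketch: a context search over published information items.

def context_search(items, context_terms):
    """Return items whose tags overlap the decision context."""
    terms = {t.lower() for t in context_terms}
    return [item for item in items
            if terms & {t.lower() for t in item["tags"]}]

# Illustrative data only.
items = [
    {"source": "operations", "tags": ["production", "APAC"],
     "text": "Line 3 capacity up 8%"},
    {"source": "legal", "tags": ["regulation", "APAC"],
     "text": "New import duty expected next quarter"},
    {"source": "marketing", "tags": ["sales", "EMEA"],
     "text": "EMEA demand flat quarter-on-quarter"},
]

# A product-mix decision for APAC surfaces the legal item without anyone
# having thought to ask the legal department: the serendipity noted above.
for hit in context_search(items, ["APAC", "product-mix"]):
    print(hit["source"], "->", hit["text"])
```

The point of the sketch is that the legal department attached its tags without knowing this particular decision existed, yet its information still reaches the decision curator.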

The basic underlying idea is to increase the velocity of information by getting it to the right place at the right time. This will also improve the return on investment in the creation of information; reduce duplication of data marts and analysis across silos; improve the speed at which information gets incorporated; and increase collaboration across the organization.

Do the 9-to-whatevers think that the objectives here are desirable? Do you think that this framework will move us in that direction? The time has now come to start thinking beyond the basics into some level of detail on the layers and the interactions between them. Do you have any thoughts you would like to share?


4 thoughts on “The Velocity of Information (Part 2)”

  1. “The silo generating the information should not limit itself to the consumers it knows about a priori” – interesting point…but isn’t Big Data trying to mine this data, which is traditionally branded as useless? With the falling cost of data generation/storage, corporations will probably capture more data at these silo stages

  2. I’m getting slightly confused by the word “velocity” here. Should we consider velocity to be infinite if we are providing seamless access to raw data to everyone, once we give omni-access as recommended in the paper?

  3. Sorry…you mentioned on FB that I shouldn’t bother too much about the word velocity in the context of the article above…but just to share my thoughts

    Velocity will have two aspects: 1) availability of data, and 2) faster transmission of data

    Additionally, if I use an analogy from economics (that’s another place where we use the word velocity: in Friedman’s monetary theory)…the acceleration of money velocity was achieved 1) via e-commerce and online transactions and 2) via the unique concept of credit cards, where you use money even before you earn it…just trying to think aloud about what the learnings from that sphere are, if any, if we have to increase the velocity of data usage…not sure if it makes any sense

    • Sandeep – you bring up an interesting comparison to the velocity of money. The velocity of money is about money changing hands multiple times in a year. This allows the size of the economy to grow without the need for increasing the money supply. The methods you mention above certainly contribute to it.

      If we try to equate it with the velocity of information, some interesting parallels come into the picture. The ultimate organization would be one where information is created only once and used throughout, which would increase the efficiency of the organization. While this is not necessarily achievable (look at code re-use in IT shops today), it is a good objective to have. As we move towards it, “golden sources” of information will proliferate, leading to greater accuracy. Also, as adoption of this framework increases within an organization, the use of these golden sources will increase, since they will start to flow into decision matrices asynchronously. That addresses the “direction” part of the velocity, to my mind. As the efficiency of this process improves, the speed, or timeliness, will also improve.

      So, in essence, while the definitions, or objectives, of these velocities may differ, they certainly have some properties in common. Thanks for the thoughts.
