Is Globalization Tenable?


Globalization is here! Over the last couple of decades, the barriers to exploring and exploiting remote markets have steadily eroded. Aided by the web and other factors, even small firms and individuals are able to find and serve clients the world over.

However, will this continue to flourish, or even remain tenable, in the future? That, in my opinion, has become a fair question over the last few years. The momentum is good, but the forces arresting it have become quite active.

Regulators! They are the ones playing spoilsport. Local regulation has always been as much about populism as protection. Jingoism and parochialism have ruled the day. The topics have also run the gamut – one of the newer entrants being “data”.

Regulations around data now abound. Under the labels of privacy and protection (worthy goals in themselves), various countries around the world have created a lopsided situation. Essentially, the story is becoming “we welcome all your data and we will process it, but our data is not allowed to leave our borders”. In some cases, this applies not only to data that can identify individuals, but also to transactional data. For highly regulated industries like banking and pharmaceuticals, this can create havoc. Can other data-intensive industries, even e-tailing, be far behind?

What is the impact?

  • If data cannot be transferred across borders for processing, global systems become a no-no, and running local systems across the globe is neither the most cost-effective nor the most effective solution
  • Home countries wish to/need to regulate organizations as a whole – evaluate risks as a whole, monitor for not-nice things, etc. If the data cannot leave individual country boundaries, this is simply not possible. A case of regulations ensuring violation of other regulations
  • The cost of conforming to different regulations in different countries reduces the motivation to globalize in the first place

So, what is going to happen? At some point, the global view will need to be restored. For this, the regulators might wake up and realize the cost that their parochialism is exacting (fat chance?). Another possibility is that enough organizations from an industry decide to move out of a country that becomes too heavily regulated, threatening a paucity of services (not nice, but it has happened before). I am not sure which it will be, but I am convinced that the globalization train will get over this bump.

Sure, 9-to-whatevers, feel free to call me an optimist…


Do We Want to Know?

Knowledge Management is everywhere. In discussions, blogs, corporate strategies, individual minds, etc. If one is unaware of or not convinced about the benefits, there is an army of consultants and vendors who can change that. I am a convert without needing any more help.

For evidence, one needs only to look at traditions passed from one generation to the next. The artisan/farmer/xyz made sure that the next generation understood and learnt the sum of their knowledge so that it could be built upon and improved. This was actually necessary for survival. Today, organizations are fighting for survival/success in a way they have probably never fought before. Every asset is being analyzed in order to increase the efficiency of its usage. Knowledge is one such asset which is underutilized and can provide significant returns. The question then is, why is knowledge underutilized? To use any asset efficiently, the nature of the asset needs to be understood; the asset transformed to be usable in the manner desired; the asset used in an optimal manner; the asset maintained in a usable/relevant state; and the benefits measured. Let us apply this to knowledge.

The nature of “knowledge” has been well studied and classified and is constantly being refined. Most of the literature I read today relates to the transformation of knowledge into a usable state. Tools to capture explicit knowledge are widely available. There is also good direction on how to start capturing implicit knowledge, with direct interaction and collaboration between the haves and the have-nots being used to speed up this process. Curation and maintenance of this “library” is also an oft-touched-upon topic.

But what about the users of this knowledge? When there is a need for context-based answers (typically quick problem-solving type things), people do approach other people. However, a large part of the problem is around re-inventing the wheel and re-learning lessons. My experience has shown me that the not-invented-here syndrome continues to thrive in this space. Large swathes of the organization (including, and especially, managers) do not believe that solutions created and lessons learnt by other people apply to them. Their problem is always different. (Code re-use & service re-use, anybody?) What is done to change this attitude will decide the pay-off from any KM strategy. Another issue is training. While internal corporate providers can play a just-in-time game with knowledge, vendor organizations and service providers need to be on the bleeding edge. They need to prepare people with knowledge in expectation of its use, not after they develop a need.

I have seen multiple organizations repeat mistakes or re-invent things because people do not want to talk to the people with the knowledge. I have also seen different groups (within and across organizations) at very different levels of preparedness with knowledge they know will be needed. Unfortunately, this depends on the attitude of individuals. We need to work on the culture to spread the “correct” version of the attitude.

We know that Po’s father would confide the secret ingredient to him at some point. But, we need the whole organizational kitchen to know it. What can be done to make it happen? Any thoughts from the 9-to-whatevers?

Getting from Data to Decisions

This blog is the outcome of thoughts, discussions and interactions which began with the last two posts on The Velocity of Information, Part 1 and Part 2. Given that information needs to flow into decision matrices and that speed is important, the thought has been given this title.

The insights obtained from these discussions resulted in the chain of thought below.

  • Data as an end state is of no use. It needs to be converted to a usable format, i.e. information. This conversion is best done by the people who are close to and understand the data
  • Information as an end state is of no use. It needs to be analyzed and insights created. This analysis is motivated by the results it has to drive. In other words, the analysis and insights need to be drawn (or at least vetted) close to the decision makers
  • Insight as an end state is of no use. It becomes an academic exercise if it does not flow into any decision matrix. Remember, a decision to do nothing is also a decision
  • It is difficult to predict a priori which piece of information will be relevant to which decision. Therefore, the appropriate approach is to de-couple the information creation and information use stages and connect them through context (a minimal sketch of this de-coupling follows this list)
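
To make that de-coupling concrete, here is a minimal, hypothetical Python sketch. Every name below is my own illustration, not a reference to any specific tool: producers publish information tagged with contexts, and decision makers later pull whatever matches their context, with neither side needing to know about the other in advance.

```python
from collections import defaultdict

# A hypothetical, in-memory "context store" that de-couples information
# creation from information use. Producers publish items tagged with
# contexts; decision makers later query by context.
class ContextStore:
    def __init__(self):
        self._by_context = defaultdict(list)

    def publish(self, item, contexts):
        """Called by the people close to the data, after converting it to information."""
        for context in contexts:
            self._by_context[context].append(item)

    def query(self, context):
        """Called close to the decision makers, when a decision needs inputs."""
        return list(self._by_context.get(context, []))


store = ContextStore()

# Information creation: done close to the data, with no knowledge of who will use it.
store.publish("Q3 production down 4% at plant A", contexts=["capacity", "apac"])
store.publish("New import tariff expected in market X", contexts=["pricing", "apac"])

# Information use: the decision maker pulls by context and draws the insight,
# which must then feed a decision (even if the decision is to do nothing).
for item in store.query("apac"):
    print(item)
```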

As we increase the velocity, the idea is that decision making will be more effective. The context of the decision will drive what information/insights get into the matrix and the increased velocity will improve the coverage and recency aspects.

So, the question is: what does this mean? We need to dig deeper into three areas.

1. Information Creation

There is a plethora of tools available for this step, which comprises the first two layers of the proposed model. The “Business Intelligence” world is exactly about extracting information from data sets (which are getting ‘bigger’ all the time). The majority of this world revolves around structured data and structured analysis, but unstructured data analysis is beginning to come into its own. There are strengths and weaknesses in this area which need to be addressed, but there are already several threads on that.

In my mind, the important thing here is to ensure that the information extracted is rational, meaning accurate, timely and correctly categorized. Categorization relates to defining the context(s) within which a particular piece of information could be useful, along with the standard tagging/metadata pieces.
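
As one illustration of what “rational” could look like in practice, here is a hypothetical sketch of an information item that carries accuracy, timeliness and categorization alongside the content. The field names and values are my own assumptions for illustration, not a standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical shape of a single piece of extracted information.
# "Rational" here means: accurate (with a source and confidence), timely
# (timestamped, with a shelf life), and correctly categorized (contexts + tags).
@dataclass
class InformationItem:
    content: str          # the extracted information itself
    source: str           # where it came from, for accuracy and auditability
    confidence: float     # how much we trust the extraction (0..1)
    created_at: datetime  # when it was extracted (timeliness)
    shelf_life: timedelta # after this, treat it as stale
    contexts: list = field(default_factory=list)  # decision contexts it may serve
    tags: dict = field(default_factory=dict)      # standard metadata

    def is_timely(self, now: datetime) -> bool:
        return now <= self.created_at + self.shelf_life


item = InformationItem(
    content="Churn in segment B rose 2 points quarter-on-quarter",
    source="crm_datamart",
    confidence=0.9,
    created_at=datetime.now() - timedelta(days=10),
    shelf_life=timedelta(days=90),
    contexts=["retention", "segment-b"],
    tags={"region": "emea", "granularity": "quarterly"},
)
print(item.is_timely(datetime.now()))  # True: still within its shelf life
```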

2. Information Delivery

This is also a critical stage of the process, represented by the cloud (layer 3) in the previous posts. This layer defines the contextual language, connects the suppliers of information to its buyers, and is the plumbing in the scheme of things. In essence, it is a marketplace that makes decision-making more effective.
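
Here is a hypothetical sketch of what such a marketplace layer might look like: suppliers publish information against a shared contextual vocabulary, and buyers ask for a context and get back everything relevant along with who supplied it. The matching below is a deliberately naive tag overlap; the real value of this layer would lie in a much richer contextual language.

```python
# A naive sketch of the "marketplace" / cloud layer: suppliers publish
# information against a shared contextual vocabulary, buyers query by context.
class InformationMarketplace:
    def __init__(self):
        self._listings = []  # (supplier, contexts, payload)

    def publish(self, supplier, contexts, payload):
        self._listings.append((supplier, set(contexts), payload))

    def request(self, wanted_contexts):
        wanted = set(wanted_contexts)
        # Return every listing that shares at least one context with the request,
        # along with who supplied it (the "plumbing" connects the two sides).
        return [
            (supplier, payload)
            for supplier, contexts, payload in self._listings
            if contexts & wanted
        ]


market = InformationMarketplace()
market.publish("operations", ["capacity", "plant-a"], "Utilisation at 92%, near ceiling")
market.publish("marketing", ["demand", "plant-a"], "Pre-orders up 15% for next quarter")

# A decision curator sends a context into the cloud and gets both sides of the story.
for supplier, payload in market.request(["plant-a"]):
    print(supplier, "->", payload)
```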

3. Insight Creation

At the heart of decision making is converting information to insight, represented by layer 4 in the model. This process is still completely manual and is often the basis of what people mean when they say “out-of-the-box”. This layer will need bright, experienced people who understand the context. But the quality and coverage of the information coming into it is very important. A missed indicator or a false positive can throw the whole process out of kilter.

Therefore, the skill of these “experts” lies in creating the correct contexts. What they then depend on is the quality of information they get back when they send a context into the Information Delivery system, the aforementioned cloud.

It seems to me quite imperative that organizations look at what can be done to improve the velocity of information and make decision making more effective. De-coupling, as above, probably makes for a smoother implementation, technically as well as organizationally.

However, the opportunity here is not necessarily limited to internal implementations. The power of this can be stretched a little bit. How many data providers do we have in the world today? Can they improve their offerings by converting data to information before they publish? The analytics could be done of their own accord or to the specification of a client. Further, they could publish the output (non-client-specific information) to somebody who runs an information mart! The information mart sets up the contextual language and provides fully baked information to clients who need it; the information could be a mix of the client's internal analytics and external open-market feeds. The possibilities here are extremely intriguing…

So, how many of the 9-to-whatevers would be OK to take a plunge into something like this? Comments? Thoughts?

The Velocity of Information (Part 2)

There have been a few interesting offline discussions on this topic – thanks to those who helped shape the below thoughts.

Here is a visual representation of the framework.

The Layers

So, what are the thoughts driving the framework at this point?

  • The output desired here is a framework and not a product roadmap. Hence, the thoughts are limited to principles with an attempt towards completeness
  • The solution is not a BI tool. While these tools (including Big Data tools) are sufficiently advanced today, they still represent only the bottom layer of this framework
  • Work should be done as close to the source as possible. Hence the ownership of the data, analysis and accuracy needs to be at the information curation layer. These concerns need to be satisfied before hitting the cloud
  • The two processes of generating the information and using the information for decisions need to be de-coupled. The impact of this will be shown below

If we look at the Analytics layer, we see several disparate data sets. This indicates that we could be creating information within disparate silos. As an example, the operations department could provide information on production data and the marketing department on sales data, while the macro-economic data for the geography and any relevant news tidbits could come from completely different data sets.

The silo generating the information should not limit itself by assuming it knows a priori who needs the data. Anybody within the organization should be allowed to access the information.

Similarly, the decision curators should not limit themselves to knowing a priori the range of information they would like as inputs. A context search within the cloud should give them everything relevant within that context. Their function would then be to filter what is relevant from the results available. As an example, those deciding on a product mix within a particular geography would not necessarily think to approach the legal department to ask whether any new laws are expected. This structure would allow that information to be assimilated automatically. Don’t we all want to increase the serendipity within our organizations?
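
To illustrate that serendipity with a toy example (the departments, contexts and tidbits below are all invented): the product-mix decision never asks the legal silo anything directly; it only searches its own geography context, and the legal update surfaces because legal happened to tag it with the same geography.

```python
from collections import defaultdict

# Toy illustration of context-driven serendipity. Each silo publishes into a
# shared index keyed by context; nobody routes information to anybody.
index = defaultdict(list)

def publish(silo, contexts, info):
    for context in contexts:
        index[context].append((silo, info))

# Silos publish what they know, tagged with the contexts they think apply.
publish("operations", ["geo:brazil", "capacity"], "Line 2 retooling finishes in August")
publish("marketing", ["geo:brazil", "product-mix"], "Premium SKUs outselling forecast by 10%")
publish("legal", ["geo:brazil", "regulation"], "Draft labelling law expected next quarter")

# The product-mix decision for Brazil searches by geography context only...
results = index["geo:brazil"]

# ...and the legal tidbit arrives without anyone having asked the legal department.
for silo, info in results:
    print(f"[{silo}] {info}")
```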

The basic underlying idea is to increase the velocity of information by getting it to the right place at the right time. This will also improve the return on investment in the creation of information; reduce duplication of datamarts and analysis across silos; improve the speed at which information gets incorporated as well as increase collaboration across the organization.

Do the 9-to-whatevers think that the objectives here are desirable? Do you think that this framework will move us in that direction? The time has now come to start thinking beyond the basics into some level of detail on the layers and the interactions between them. Do you have any thoughts you would like to share?

The Velocity Of Information (Part 1)

Decisions are made every day. Actually, they are made several times a day by most individuals. This is also true within a corporate setting.

So, what are these decisions based upon? When we talk of taking “informed” decisions, they are typically based on information. Crisis management apart, decisions are typically taken after careful deliberation of the available information. The information may not provide a complete picture, but the more coverage you have, the better the decision.

How does information get to the decision makers? In yesteryear, careful paths were laid down to elicit this information. In a top-down approach, the data to be analyzed was identified, the framework for analysis was agreed, and the data flowed from the point of data availability to the point of decision making. In this journey, it was collated, checked, summarized, categorized, sliced and diced, ad nauseam, until it reached its destination. The time it took to reach the decision maker often made the information irrelevant, or at least resulted in lost opportunity. For information to be useful in decision making, it has to be timely.

Then, the technique advanced. Vast stores of data were made available close to the decision makers. They could apply the frameworks necessary to elicit the information in a more timely manner. However, the volume of data grew, resulting in incorrect and irrelevant information being included in decisions. For information to be useful in decision making, it has to be relevant.

Also, in today's world, there are multiple sources of information and multiple decision makers to consume it, making pre-defined flows difficult to identify and implement. The question is: what can be done to get relevant information to the decision makers in a timely manner, even when they might not know they want it? This is what I would like to discuss and build on. My initial thoughts follow…

I am not big on industry jargon, but three things which are big today might contribute to the solution (not necessarily in the way they are currently understood, though!). In my proposal, I create four layers; a rough sketch of how they might fit together follows the list. However, a lot of thought still needs to be put into how this will function. Early days yet…

1. The information creation layer represents the “big data” gang. However, this can be decentralized. This layer extracts information from the data. The rules and frameworks used to extract information are in the hands of those who are closest to the data.

2. The information curation layer. These are the people/systems that validate and categorize the data for accuracy and relevance, e.g. eliminating coincidences and non-causal correlations.

3. The “cloud” layer. This allows for multi-point to multi-point flow of the information.

4. The “decision maker curation” layer. These are people/systems who subscribe to the information from the cloud, establish relationships and curate it for the decision makers.
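
Purely to make the four layers concrete, here is a rough, hypothetical end-to-end sketch of how they might chain together. Every function name, rule and data point below is invented for illustration; it is a sketch of the idea, not a design.

```python
# A rough end-to-end sketch of the four proposed layers.

# 1. Information creation: those closest to the data turn it into statements.
def create_information(raw_rows):
    total = sum(row["units"] for row in raw_rows)
    return [{"statement": f"Total units sold: {total}", "confidence": 0.95,
             "contexts": ["sales", "geo:north"]}]

# 2. Information curation: validate and keep only what passes muster
#    (e.g. drop low-confidence items that may be coincidence, not causation).
def curate_information(items, min_confidence=0.8):
    return [item for item in items if item["confidence"] >= min_confidence]

# 3. The "cloud": multi-point to multi-point flow, keyed by context.
cloud = {}
def publish_to_cloud(items):
    for item in items:
        for context in item["contexts"]:
            cloud.setdefault(context, []).append(item["statement"])

# 4. Decision-maker curation: subscribe by context and assemble the inputs.
def curate_for_decision(context):
    return cloud.get(context, [])


raw = [{"units": 120}, {"units": 80}]
publish_to_cloud(curate_information(create_information(raw)))
print(curate_for_decision("sales"))  # ['Total units sold: 200']
```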

Will this allow us to increase the Velocity of Information, allowing for better quality decisions? What do the 9-to-whatevers think?