Conquering Randomness: The Science of achieving consensus & alignment


The variables around us rarely conform to a trend, a pattern or any predictability; the curves we try to fit are rough approximations of what they actually represent, while business is predicated not on probabilistic distributions but on deterministic traditions.

To take an analogy from the world of science, businesses thrive on Newton's laws of motion, which interpret nature as a deterministic model, while quantum mechanics, the picture of God playing dice, rests on the belief that nature is far more probabilistic in its outcomes: take the position of a particle, and the chance of it being at a specific location is never certain, only probabilistic.

Take economic or financial variables and this becomes far clearer. Income statistics can hardly be deterministic; the best you can do is take snapshots at various moments and build a probability distribution, and even then you run the risk that your data sets missed important pools of information. How can income be treated as firm when the value of your portfolio is transient at every moment, whether you are Warren Buffett or the poorest of the poor? How much change can the models tolerate, when swings could alter anything between 20% and 30% of your future holdings every five years? Income mobility itself is a difficult subject, with large numbers of people moving from one quartile to another over a five-year period; how, then, can one say that the income model is deterministic?

But the world still believes in certainties, and so we have a single GDP number for a population of one billion, or even seven billion when world GDP is computed, with no probabilities attached.

Take the random walk of stocks: they have a clear, unique value at every moment in time, the net present value of future expectations arrived at by millions of people placing their bets. But is there a way to predict any future value of any stock? It would be futile to try, such is the randomness of the trajectory along which stock prices move. Thanks to Bayesian statistics and conditional probabilities, however, you can do better by factoring in new information and forming a sharper view of what could follow, looking at the probabilities of stock positions given the arrival of new information.
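As a minimal sketch of this kind of Bayesian updating (the binary up/down framing, the "earnings beat" signal and all the numbers below are my own illustrative assumptions, not anything stated in the article or drawn from market data), one can fold a new piece of information into a prior belief about a stock's direction:

```python
# A minimal sketch of Bayesian updating: revising the belief that a stock
# will move up after observing a new piece of information (e.g. an earnings
# beat). All probabilities here are invented for illustration only.

def bayes_update(prior_up, p_signal_given_up, p_signal_given_down):
    """Return P(up | signal) via Bayes' rule."""
    p_signal = (p_signal_given_up * prior_up
                + p_signal_given_down * (1 - prior_up))
    return p_signal_given_up * prior_up / p_signal

prior_up = 0.50          # before the news: no better than a coin toss
p_beat_if_up = 0.70      # assumed: earnings beats are common before up-moves
p_beat_if_down = 0.30    # assumed: beats are rarer before down-moves

posterior_up = bayes_update(prior_up, p_beat_if_up, p_beat_if_down)
print(f"P(up | earnings beat) = {posterior_up:.2f}")   # -> 0.70
```

The point is not the particular numbers but the mechanism: each new piece of information shifts the probability distribution rather than delivering a certainty.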

So one would be inclined to look at the change in expectations given the arrival of new information; this is where randomness meets a clear and sound opponent.

Randomness is opposed by forces that can crystallize divergence, or noise, into patterns, or signals: sometimes through potent forces of communication, as in advertising or promotion; sometimes through the strength or willingness to unite and align, as in political campaigns; and sometimes by removing the assignable causes of variation, as in processes brought under statistical control.

If pure randomness is at play, the process behaves at best like smoke coming out of a chimney: unpredictable in its trajectory and highly undesirable. We can tolerate the random nature of chimney smoke, but we cannot tolerate it where our production processes are concerned, as it would make the output that much more unpredictable. So the simplest way forward is to find those elements in the process that oppose the random nature of the outcomes, for example identifying the most dominant factors and running a design of experiments to see how each factor and its interactions influence the outcomes.
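As a rough sketch of what such an analysis might look like (the two factors, the coded levels and the yield figures below are hypothetical, chosen only to show how main and interaction effects are separated), a simple two-factor full-factorial experiment could be analysed like this:

```python
# A toy 2x2 full-factorial design of experiments: two factors (temperature
# and pressure, each at a low/high level coded as -1/+1) and a measured
# yield. The data are invented purely to illustrate effect estimation.

runs = [
    # (temperature, pressure, observed yield)
    (-1, -1, 62.0),
    (+1, -1, 70.0),
    (-1, +1, 65.0),
    (+1, +1, 85.0),
]

n = len(runs)
# Main effect of a factor = mean yield at its high level minus mean at its low level.
temp_effect = sum(y * t for t, p, y in runs) / (n / 2)
pres_effect = sum(y * p for t, p, y in runs) / (n / 2)
# Interaction effect uses the product of the coded levels as the contrast.
interaction = sum(y * t * p for t, p, y in runs) / (n / 2)

print(f"Temperature main effect: {temp_effect:+.1f}")   # +14.0
print(f"Pressure main effect:    {pres_effect:+.1f}")   # +9.0
print(f"Interaction effect:      {interaction:+.1f}")   # +6.0
```

Once the dominant factors and their interactions are quantified in this way, the "random" scatter in the output shrinks to the residual variation that no assignable cause explains.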

In the area of social beliefs, this is a formidable challenge, as finding the dominant opinions and factors that could create a theme of alignment can be extremely difficult. To sway a crowd with no dominant belief requires powerful interventions: symbols, contracts of association, inter-dependencies, and above all faith and trust, built over several rounds of mutual exchange of economic or non-economic interests.

But today's world of social media makes this far easier, and there are three important reasons for it.

First and foremost, the small-group effect is striking. The chance of finding affinity for a theme increases sharply when it is taken to a small group rather than thrown to a large crowd. The Root-N hypothesis says that if there are N people in a crowd and you form roughly √N groups, the chances of consensus increase when the topics are discussed within these smaller groups.
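As a playful illustration of this idea (the opinion model, the 70% consensus threshold and every number below are my own assumptions, not part of the Root-N hypothesis as stated), one can simulate how often a supermajority emerges in one crowd of N people versus within √N smaller groups:

```python
# A toy Monte Carlo illustration of the small-group effect: with mildly
# favourable but noisy individual opinions, a 70% "consensus" almost never
# appears in one crowd of N people, yet it does appear in some of the
# sqrt(N) smaller groups. All parameters are illustrative assumptions.

import math
import random

random.seed(42)

N = 900                     # crowd size
GROUPS = int(math.sqrt(N))  # ~30 groups of ~30 people each
P_FAVOUR = 0.55             # each person's independent chance of favouring the theme
THRESHOLD = 0.70            # share of agreement we call "consensus"
TRIALS = 2000

def share_in_favour(size):
    return sum(random.random() < P_FAVOUR for _ in range(size)) / size

crowd_hits = 0
group_hits = 0
for _ in range(TRIALS):
    if share_in_favour(N) >= THRESHOLD:
        crowd_hits += 1
    # count how many of the sqrt(N) groups reach consensus in this trial
    group_hits += sum(share_in_favour(N // GROUPS) >= THRESHOLD
                      for _ in range(GROUPS))

print(f"P(consensus in the full crowd of {N}): {crowd_hits / TRIALS:.4f}")
print(f"Average share of small groups reaching consensus: "
      f"{group_hits / (TRIALS * GROUPS):.4f}")
```

Under these assumptions the full crowd essentially never crosses the 70% mark, while a noticeable fraction of the small groups do, which is the intuition behind working a theme through smaller groups first.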

Secondly, once a hypothesis or consensus is achieved, its transfer to others in the network can happen far more speedily, thanks to the network effects that the current social media environment allows.

Thirdly, the added power of reaching millions in a second, and the multiplicative effects that follow, are so profound that an idea can travel from one corner of the globe to another in very large numbers in a very short time. Followership on Twitter, Instagram and the like makes it possible for an idea to reach millions at no added cost.

When you are part of a network where communication of all kinds can reach you through notifications, you are, willy-nilly, part of the network in any case, whether you accept or reject the hypothesis that reaches you. In the course of time, what reaches you becomes part of your living memory; you could eventually accept a null hypothesis even though it is false, and thereby reject the opposite hypothesis which is actually true.

Directing an idea that is already part of a network is an interesting area of study; no matter how powerful the idea, it can move onto autopilot once it floats on the network. Dominant ideas therefore need reinforcement to be able to sway large crowds instead of being thrown out of orbit. This is where social platforms bear a responsibility for bias.
