The World's Unsexiest Business
adding a little sizzle to the convenience store industry
WARNING: I'm about to get super nerdy and technical on you in this post.
Rather than taking inventory readings of every item in the store, why not make the process a more manageable exercise by leveraging the power of a best estimator? That is, which items are most likely to experience a variance from the norm? Better yet, which are most likely to erode the bottom line?
Sure, you could end up neck-deep in academic prose that lends much credibility to a highly robust quantitative method but leaves you scratching your head over where or how to start, given the heavy usage of double sigmas and all too many Greek characters.
So let’s start slowly and introduce Bayes’ Theorem:
P(A | B) = P(B | A)P(A) / P(B)
It's scary but not entirely unapproachable. And when presented in the following context, it becomes even less daunting.
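Before going any further, it's worth seeing that the theorem is a single line of arithmetic. A minimal Python sketch, with made-up placeholder probabilities purely for illustration:

```python
def bayes_posterior(p_b_given_a, p_a, p_b):
    """P(A | B) = P(B | A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Placeholder numbers, not drawn from any real data:
# P(B|A) = 0.9, P(A) = 0.2, P(B) = 0.3
print(bayes_posterior(0.9, 0.2, 0.3))  # roughly 0.6
```

Three inputs, one division. That's the whole machine.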
In terms of realizing my far-fetched dream of being a comedian, I’ve many times pondered the following:
P(I become a comedian | I’m not much of a funny guy) =
    P(I’m not much of a funny guy | I’m really a comedian at heart)
    × P(I make a living in comedy)
    / P(I’m not much of a funny guy)
Parlayed into convenience retailing, how do we measure the probability of a product experiencing unexplained inventory variability given each of the following individual, unrelated scenarios (using 2015 National Retail Federation estimates)?
At first, we’d have to assign a probability to each of these scenarios, which feels more like throwing darts than anything rooted in precision. But since I’m making a rather huge leap of faith and assuming your team has its finger on the pulse of the business, subjective probability assessments measured on a numerical scale wouldn’t be entirely ruled out.
Because what we’re trying to determine is a conditional probability estimate, we need to use what we already know or can reasonably estimate. Then, using those composite pieces, we can simplify (or at least shorten) a task that would otherwise be completed by nothing short of brute force.
Back in 2009, when I worked as a software consultant in New York City, one of the more interesting assignments I was part of was building a risk management platform for a $20 billion hedge fund. Essentially, the system needed to inform executive management how bad losses would become under various tiers of disastrous outcomes, measured by various financial risk management metrics (VaR, Gain to Pain Ratio, Drawdown).
As for determining the upside, I don’t think that particular fund manager would have gotten into that particular vocation if he didn’t unilaterally believe his own connection with divinity would lead him down the path of limitless profits.
The fund had survived the 2008 crash reasonably well, having done so by betting against the housing market (among other well-placed bets). From what I could infer, the forward success of one of its largest capital-allocated strategies (without going into any specific detail) hinged on the default of individual corporate debt alongside that of sovereign bonds.
With the rise in sell-side financial institutions’ credit default swap offerings, buy-side investors could easily participate in a prisoner’s-dilemma version of what I understood to be Wall Street’s perception of the likelihood of default (as opposed to what individual investors thought). It became less about the default itself and more about what trader x at fund y thought about when and how that default would occur. The intrinsic ‘alpha’-generating component relied heavily on the opacity of this particular financial instrument. Since the price of a credit default swap was calculated from a composite of default probabilities (among other nearly as important inputs) ranging from tomorrow out to a predefined time horizon (usually 5 to 10 years), much of it was rooted in either ‘quant’-heavy, simulation-based number crunching or just touchy-feely guesswork.
I’m not going to rule out the overt obtaining of insider information, but the majority of it was chatter gleaned from bathroom-stall conversations and supposedly ‘clandestine’ hotel-lobby discussions about default-curve predilections.
Over the following couple of years, I heard about small groups of experienced traders with finely honed ‘gut’ instincts for recognizing ill-conceived default probabilities, who were rewarded rather handsomely for their unique ‘ear to the street’ insight. The moral of this story: as an experienced professional, never downplay your own instinct.
Given the five aforementioned scenarios, apply the following rating to each:
1 - Without a doubt (100%)
2 - Extremely likely (80%)
3 - Likely (50%)
4 - Neutral (20%)
5 - I’m not a conspiracy theorist (10%)
6 - Not a chance, buddy (0%)
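If it helps to operationalize that scale, the ratings map naturally to a lookup table. A quick Python sketch using the exact values above:

```python
# The subjective rating scale from the post, as a lookup table.
RATING_TO_PROB = {
    1: 1.00,  # Without a doubt
    2: 0.80,  # Extremely likely
    3: 0.50,  # Likely
    4: 0.20,  # Neutral
    5: 0.10,  # I'm not a conspiracy theorist
    6: 0.00,  # Not a chance, buddy
}

def prob_from_rating(rating: int) -> float:
    """Translate a team member's gut rating into a probability."""
    return RATING_TO_PROB[rating]
```

Have each team member score the scenarios, and you've turned gut feel into numbers you can feed into the theorem.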
Now let’s take a test drive and plug this into Bayes’ theorem, applied to the nightmare-inducing scenario of customer pilferage:
A = Customer pilferage
B = Marlboro Lights shrink
How do we calculate something we might already know? This may be more self-evident when phrased as a question: what is the probability of Marlboro Lights shrink given that customer pilferage is being experienced? It’s an all-or-nothing measure here. Since you know (or rather suspect) that customer pilferage is affecting your business, the probability of any product in the store contributing to it is high. Let’s say on the order of 100% high!
P(B|A) = 1
P(A) = 50% (let’s assume we have a likely suspicion)
This one is a little trickier. Statisticians (and my former academic self) may burn me at the stake for the following: let’s leverage a 12-month lag correlation statistic of sales and purchases.
Correlation = Covariance(Purchases, Sales) / (SQRT(Variance(Purchases)) × SQRT(Variance(Sales)))
In an ideal environment, we would expect parity — we only purchase as much as we need to sell. We’ll have to ignore that starting inventory levels factored into purchasing behavior, but with 12 data points, some averaging out would reasonably occur.
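Putting the pieces together, here’s a hedged sketch in plain Python. It computes the correlation of 12 months of purchases and sales (unlagged, for simplicity), then treats the shortfall from perfect parity as a crude stand-in for P(B) — that mapping is my assumption, not a statistical identity — and feeds it into Bayes alongside P(B|A) = 1 and P(A) = 0.5 from above. The monthly figures are invented, and the result is clipped at 1.0 because subjective inputs won't always be internally coherent:

```python
import math

def correlation(xs, ys):
    """Pearson correlation: Cov(X, Y) / (sd(X) * sd(Y))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sdx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sdy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sdx * sdy)

# Invented 12 months of Marlboro Lights cartons purchased vs. sold.
purchases = [120, 115, 130, 125, 110, 140, 135, 120, 125, 130, 115, 128]
sales     = [118, 110, 122, 120, 100, 131, 126, 112, 115, 121, 104, 119]

r = correlation(purchases, sales)

# My assumption: the further correlation falls below perfect parity,
# the more plausible unexplained shrink becomes, so use (1 - r),
# floored at a small baseline, as a crude stand-in for P(B).
p_b = max(1 - r, 0.05)

p_b_given_a = 1.0   # any product can contribute when pilferage exists
p_a = 0.5           # "likely suspicion" of customer pilferage
p_a_given_b = min(p_b_given_a * p_a / p_b, 1.0)  # clip incoherent inputs
print(f"correlation={r:.3f}  P(A|B)={p_a_given_b:.2f}")
```

The clip is worth dwelling on: if your subjective P(B) comes out smaller than P(B|A) × P(A), the inputs contradict each other and the gut-feel estimates deserve another look before you trust the posterior.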
While technology that leverages everything from video feeds down to granular point-of-sale transaction data can help identify shrink, it's prone to idiosyncratic analysis paralysis, not to mention flagging what may turn out to be false positives. It's going to take artificial intelligence and machine learning initiatives to identify systemic shrink with high precision.