Even after arriving at a single, agreed-upon version of the truth, data initiatives commonly fail anyway. Decision makers go back to using the same familiar sources … the ones that disagree with each other, take hours to assemble, and don’t provide much insight.
Why? A common explanation: “We lack a data-driven culture!”
That’s bogus. A cop-out. In our experience, people WANT to deliver better results and they’d LOVE to have data to help them.
So what’s wrong?
By asking end users, we’ve found that the dashboards and reports provided to them
don’t fit their decision process and
don’t get to “why?” in order to diagnose and solve problems.
The fault is the content and design of the dashboards and reports. Blaming the users is unfair.
As an example, consider the Sales Team performance page in the Monthly Business Review (MBR) of a company we’ve worked with. It shows
A bar chart of total sales by month over the past 13 months
A bar chart of # of deals in pipeline by month over the past 13 months
A list of the top 10 salespeople
A pie chart of products sold last month
There is also additional detail showing
Sales by each region over the past 13 months
Pipeline by each region over the past 13 months
In the MBR, the VP of Sales went over the numbers and offered reasons why the team was over- or under-performing, and there was some discussion with Marketing about the quantity and quality of leads. Yet the sales managers didn’t consult this source. Why? It was the same essential information they’d always worked from, and it was easier to keep getting it where they’d gotten it before.
When we asked the sales managers how they manage their team and process, they told us their decision and action points centered on how much attention they were getting from Marketing, which salespeople to coach and which to replace, and which products and customers to focus on.
The original dashboard only told them where they stood; it was just “reporting the news.” They needed to know WHY pipeline and sales were ahead or behind. And they needed a look forward.
That depended on
Quantity of leads
Quality of leads
Lead survival rate through the funnel (contact, pitch, contract, close) … sketched in code just after this list
Product (different products close at different rates)
Salesperson ability and tenure (it was hard to compare teams when one team had more new salespeople than another … was team performance a matter of talent, or did the new salespeople simply need a few months to get up to speed?)
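To make the funnel survival rate concrete, here is a minimal sketch of how it could be computed from a list of lead records. This is our illustration, not the client’s actual pipeline: the field names, the stage list, and the 90-day “aged” cutoff are assumptions.

```python
from datetime import date

STAGES = ["contact", "pitch", "contract", "close"]
AGED_CUTOFF_DAYS = 90  # assumption: leads idle longer than this are treated as "aged"

def survival_rates(leads, today=None):
    """leads: iterable of dicts like
    {"stage_reached": "pitch", "last_activity": date(2024, 5, 1)}.
    Returns {stage: fraction of non-aged leads that reached that stage}."""
    today = today or date.today()
    # Exclude "aged" leads so stale records don't drag the rates down.
    active = [l for l in leads
              if (today - l["last_activity"]).days <= AGED_CUTOFF_DAYS]
    if not active:
        return {stage: 0.0 for stage in STAGES}
    rates = {}
    for i, stage in enumerate(STAGES):
        reached = sum(1 for l in active
                      if STAGES.index(l["stage_reached"]) >= i)
        rates[stage] = reached / len(active)
    return rates
```

A manager comparing teams can then look at where the rates drop off: a weak contact-to-pitch rate suggests a lead-quality problem, while a weak contract-to-close rate points back at the sales process itself.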
The new dashboards for each sales manager were organized by decision & action (with drilldown to individual salespeople for coaching):
Action: Manage leads with Marketing
Quantity of leads
Response rate to first attempted contact, to indicate quality of lead
Action: Manage performance of salespeople on my team
Lead survival rate through the funnel (contact, pitch, contract, close), excluding “aged” leads that were cluttering the data
Profile of products at each stage vs benchmark (the annual goal for the company)
Projected sales for the rest of the year based on pipeline and the team’s close rate, as a leading indicator (see the sketch after this list)
Action: Decide on coaching vs replacing salespeople on my team
Tenure-adjusted # of pitches, contracts, and closes (cutting some slack for new salespeople over their first 6 months)
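Two of the derived numbers above, the pipeline-based projection and the tenure-adjusted counts, reduce to simple arithmetic. The sketch below is ours, not the client’s exact formulas: the flat historical close rate and the linear six-month ramp are illustrative assumptions.

```python
def projected_sales(open_pipeline_value, historical_close_rate):
    """Leading indicator: expected additional sales if the open pipeline
    closes at the team's historical rate (e.g. 0.25 = one deal in four closes)."""
    return open_pipeline_value * historical_close_rate

def tenure_adjusted(raw_count, months_of_tenure, ramp_months=6):
    """Scale a rep's raw # of pitches/contracts/closes to a full-ramp
    equivalent, so a rep three months in isn't compared 1:1 with veterans.
    Assumes a simple linear ramp over the first `ramp_months` months."""
    ramp = min(months_of_tenure, ramp_months) / ramp_months
    return raw_count / ramp if ramp > 0 else 0.0

# Example: a $2.0M open pipeline at a 25% close rate projects $500K more in sales;
# a rep 3 months into the ramp with 4 closes is credited with 8 full-ramp closes.
print(projected_sales(2_000_000, 0.25))  # 500000.0
print(tenure_adjusted(4, 3))             # 8.0
```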
These were the items each sales manager needed to consider, and they became the basis of reviews with the VP of Sales. The new dashboards were readily adopted, and a rollup of them became the Sales Team content for the MBR.
We data people love to dig into the data without paying sufficient attention to the end users and their decision & action processes. Software developers and UI/UX designers learned this lesson years ago. Pivot Point Analytics.io starts with user interviews, then iterates designs with the users at three points in the development of a data pipeline. The outcome is dashboards people love and use.