Why I Don’t Put Recommendations on Dashboards

Published by Tim Wilson on September 10, 2013

WARNING: Gilligan contrarianism alert! The following post posits a thesis that runs contrary to popular opinion in the analytics community.

Many companies these days rely on some form of internal dashboard(s). That’s a good thing. Even better is when those companies have actually automated these dashboards – pulling data from multiple data sources, structuring it in a way that directly maps to business objectives, and delivering the information in a clean, easy-to-digest format. That’s nirvana.


Reality, often, is that the dashboards can only be partially automated. They wind up being something an analyst needs to at least lightly touch to bridge inevitable API gaps before delivering them on some sort of recurring schedule: through email, through an intranet, or even in person in a regularly scheduled meeting.

So, what is the purpose of these dashboards? Here’s where a lack of clarity — of purpose, clearly communicated — becomes a slippery slope faster than Miley Cyrus can trigger a TV viewer’s gag reflex. Dashboards are, first and foremost, performance measurement tools. They are a mechanism for quickly (at a glance!) answering a single question:

“Are we achieving the goals we set out to achieve?”

They can provide some minimal context around performance, but everything beyond answering that question is a distant second purpose-wise.

It’s easy enough to wax sophomoric on this. It doesn’t change the fact, though, that one of the top complaints dashboard-delivering analysts hear is: “I get the [weekly/monthly/quarterly] dashboard from the analyst, but it doesn’t have recommendations on it. It’s just data!”

I get it. And, my response? When that complaint is leveled, it’s a failure on the part of the analyst to educate (communicate), and a failure of process — a failure to have mechanisms in place to deliver actionable analytical results in a timely and effective manner.

But…here…I’m just going to lay out the various reasons that dashboards are not the place to expect to deliver recommendations, because, in my experience, analysts hear that complaint and respond by trying to introduce recommendations to their dashboards. Why shouldn’t they? I can give four reasons!

Reason No. 1: Dashboards Can’t Wait

Another complaint analysts often hear is that dashboards aren’t delivered quickly enough at the end of the reporting period. Well, no one, as far as I know, has found a way to stop time. It marches on inexorably, with every second taking exactly one second, every minute having a duration of 60 seconds, and every hour having a duration of 60 minutes (crappy Adam Sandler movies – pardon the adjectival redundancy — notwithstanding).

Given that, let’s step back and plot out a timeline for what it takes in an “insights and recommendations delivered with the dashboard” scenario for a dashboard that gets delivered monthly:

  1. Pull data (can’t happen until the end of the previous month)
  2. Consolidate data to get it into the dashboard
  3. Review the data — look at KPIs that missed targets and supporting metrics that moved unexpectedly
  4. Dig in to do analysis to try to figure out why those anomalies appeared
  5. IF the root cause is determined, assess whether this is something that needs “fixing” and posit ways that it might be fixable
  6. Summarize the results — the explanation for why those anomalies appeared and what might be done to remedy them going forward (if the root cause was something that requires a near-term change)
  7. Add the results to the dashboard
  8. Deliver the dashboard
  9. [Recipient] Review the dashboard and the results
  10. [Recipient] Decide whether to take action
  11. [Recipient] If action will be taken, then take the action

Seems like a long list, right? I didn’t write it trying to split out separate steps and make it needlessly long. What’s interesting is that steps 1 and 2 can (and should!) be shortened through automation. Aside from systems that are delayed in making their data available, there is no reason that steps 1 and 2 can’t be done within hours (or a day) of the end of the reporting period.
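
To make the automatable front end of that list concrete, here is a minimal Python sketch of steps 1 through 3. Every source name, metric key, and target value below is invented for illustration; a real pipeline would pull from actual APIs rather than inline dicts:

```python
# Hypothetical sketch of steps 1-3: consolidate per-source metrics into
# one dashboard row, then flag the KPIs that missed their targets.
# All source names, metrics, and targets below are made up.

def consolidate(sources):
    """Flatten per-source metric dicts into a single dashboard row."""
    row = {}
    for source_name, metrics in sources.items():
        for metric, value in metrics.items():
            row[f"{source_name}.{metric}"] = value
    return row

def flag_misses(row, targets):
    """Return the KPIs that came in below target (a step 3 starter)."""
    return [kpi for kpi, target in targets.items()
            if row.get(kpi, 0) < target]

sources = {
    "web_analytics": {"visits": 120_000, "conversion_rate": 0.021},
    "email": {"opens": 45_000, "clicks": 5_200},
}
targets = {"web_analytics.visits": 150_000, "email.clicks": 5_000}

print(flag_misses(consolidate(sources), targets))  # ['web_analytics.visits']
```

Everything past this point (the why, the fix, the action) is where the human time goes, which is exactly the part that scripting can’t compress.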

Steps 3 through 7, though, are time-consuming. And, often, they require conversations and discussion — not to mention time to actually conduct analysis. Despite vendor-perpetuated myths that “the tool” can generate recommendations… tools really suck at doing so (outside of highly operationalized processes).

Here’s the other kicker, though: steps 9 through 11 take time, too! So, realistically, let’s say that steps 1 and 2 take a day, steps 3 through 8 take a week, steps 9 and 10 take 3 days (because the recipient doesn’t drop everything to review the dashboard when it arrives), and then step 11 takes a week (because “action” actually requires marshalling resources and getting something done). That means — best case — we’re 2.5 weeks into the month before action gets taken.

So, what happens at the end of the month? The process repeats, but there was only 1.5 weeks of the change actually being in place… which could easily get dwarfed by the 2.5 weeks of the status quo!!!

Let’s look at how a “dashboard without insights” process can work:

  1. Pull data (can’t happen until the end of the previous month)
  2. Consolidate data to get it into the dashboard
  3. Deliver the dashboard (possibly calling out any anomalies or missed targets)
  4. [Recipient] Reviews the dashboard and homes in on anything that looks troubling that she cannot immediately explain (more on that in the next section)
  5. The analyst and the recipient identify what, if any, trouble spots require deeper analysis and jointly develop actionable hypotheses to dig in
  6. The analyst conducts a very focused analysis (or, in some cases, proposes an A/B test) and delivers the results.
  7. [Recipient] If action is warranted, takes action

Time doesn’t stop for this process, either. But, it gets the information into the business’s hands inside of 2 days. The analyst doesn’t waste time discovering root causes that the business owner already knows (see the next section). The analysis that gets conducted is focused and actionable, and the business owner is already primed to take action, because she participated in determining what analyses made the most sense.

Reason No. 2: Analysts Aren’t Omniscient

I alluded to this twice in the prior paragraph. Let’s look at a generic and simplistic (but based on oft-observed real-world experience) example:

  1. The analyst compiles the dashboard and sees that traffic is down
  2. The analyst digs into the traffic sources and sees that paid search traffic is down dramatically
  3. The analyst digs in further and sees that paid search traffic went to zero on the 14th of the month and stayed there
  4. The analyst fires off an urgent email to the business that paid search traffic went to zero mid-month and that something must be wrong with the site’s SEM!
  5. The business responds that SEM was halted mid-month due to budget adjustments, and they’ve been meaning to ask what impact that has had

What’s wrong with this picture? Steps 2 through 4 are largely wasted time and effort! There is very real analysis to be done… but it doesn’t come until step 5, when the business provides some context and is ready for a discussion.
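
That said, the mechanical part of step 3 (noticing that a series dropped to zero mid-month and stayed there) is cheap to automate as a data sanity check before anyone digs in. A minimal sketch, with made-up daily session counts:

```python
# Hypothetical sketch: flag a daily metric that drops to zero and never
# recovers, the pattern the analyst spotted in step 3. The numbers are
# invented for illustration.

def find_flatline_at_zero(daily_values, min_run=3):
    """Return the 1-based day a metric hit zero and stayed there,
    or None. min_run guards against a single-day reporting gap."""
    for day, value in enumerate(daily_values, start=1):
        rest = daily_values[day - 1:]
        if value == 0 and len(rest) >= min_run and all(v == 0 for v in rest):
            return day
    return None

# Paid search sessions: healthy for 13 days, zero from the 14th onward.
paid_search = [900, 950, 870, 920, 880, 910, 940, 905, 930, 890,
               915, 925, 935] + [0] * 17

print(find_flatline_at_zero(paid_search))  # 14
```

A check like this can tell the analyst *that* something flatlined; it cannot tell her *why*, which is precisely where the conversation with the business has to happen.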

This happens all the time. It’s one of the reasons that it is imperative that analysts build strong relationships with their marketing stakeholders, and one of the reasons that a sign of a strong analytics organization is one where members of the team are embedded – literally or virtually – in the teams they support.

But, even with a strong relationship, co-location with the supported team, regular attendance at the team’s recurring meetings, and a spot on the team’s email distribution list, analysts are seldom aware of every activity that might result in an explainable anomaly in the results delivered in a dashboard.

This gets to a data source that gets ignored all too often: the minds and memories of the marketing team. There is nothing at all wrong with an analyst making the statement: “Something unexpected happened here, and, after I did some cursory digging, I’m not sure why. Do you have any ideas as to what might have caused this?” There are three possible responses from the marketer who is asked this question:

  • “I know exactly what’s going on. It’s almost certainly the result of X.”
  • “I’m not sure what might have caused that, but it’s something that we should get to the bottom of. Can you do some more digging to see if you can figure it out?”
  • “I’m not sure what might have caused that, but I don’t really care, either. It’s not important.”

These are quick answers to an easy question that can direct the analyst’s next steps. And, two of the three possible answers lead to a next step of moving on to a value-adding analysis — not pursuing a root cause that will lead to no action! Powerful stuff!

Reason No. 3: Insights Don’t Have a Predictable and Consistent Length

I see it all the time: a standard dashboard format that, appropriately, has a consistent set of KPIs and supporting metrics carefully laid out in a very tightly designed structure. Somewhere in that design is a small box – at the top of the dashboard, at the bottom right of the dashboard, somewhere – that has room for a handful of bullet points or a short paragraph. This area of the dashboard often has an ambitious heading: “Insights,” “Recommendations,” “Executive Summary.”

The idea – conceived either on a whiteboard with the initial design of the dashboard, or, more likely, added the first time the dashboard was produced – is that this is where the analyst’s real value will be manifested. THIS is where the analyst will place the Golden Nuggets of Wisdom that have been gleaned from the data.

Here’s the problem: some of these nuggets are a flake of dust, and some are full-on gold bars. Expecting insights to fit into a consistent, finite space week in and week out or month in and month out is naïve. Sometimes, the analyst has half a tweet’s worth of prose-worthy material to include, which makes for a largely empty box, leaving the analyst and the recipient to wonder if the analyst is slacking. At other times, the analyst has a handful of useful nuggets to impart…but then has to figure out how to distill a WordPress-sized set of information into a few tweet-sized statements.

Now, if you buy into my first two reasons as to why recommendations shouldn’t be included with the dashboard in the first place, then this whole section becomes moot. But, if not — if you or your stakeholders still insist that performance measurement include recommendations — then don’t constrain that information to a fixed box on the dashboard.

Reason No. 4: Insights Can’t Be Scheduled

A scene from The Marketer and the Analyst (it’s a gripping — if entirely fictitious — play):

Marketer: “This monthly dashboard is good. It’s showing me how we’re doing. But, it doesn’t include any insights based on the performance for the month. I need insights to take action!”

Analyst: “Well, what did you do differently this month from previous months?”

Marketer: “What do you mean?”

Analyst: “Did you make any changes to the site?”

Marketer: “Not really.”

Analyst: “Did you change your SEM investment or strategy?”

Marketer: “No.”

Analyst: “Did you launch any new campaigns?”

Marketer: “No.”

Analyst: “Were there any specific questions you were trying to answer about the site this month?”

Marketer: “No.”

Analyst: ???!

Raise your hand if this approximates an exchange you’ve had. It’s symptomatic of a completely ass-backward perception of analytics: that the data is a vast reserve of dirt and rock with various veins of golden insights threaded throughout. And, that the analyst merely needs to find one or more of those veins, tap into it, and then produce a monthly basket of new and valuable ingots from the effort.

The fact is, insights come from analyses, and analyses come from hypotheses. Some analyses are small and quick. Some are large and require gathering data – through an A/B or multivariate test, for instance, or through a new custom question on a site survey. Confusing “regularly scheduled performance measurement” with “hypothesis-driven analysis” has become the norm, and that is a mistake.

While it is absolutely fine to measure the volume and value of analyses completed, it is a recipe for failure to expect a fixed number of insights to be driven from and delivered with a scheduled dashboard.

A Final Word: Dashboards vs. Reports

Throughout this post, I’ve discussed “dashboards.” I’ve steered clear of the word “report,” because it’s a word that has become pretty ambiguous. Should a report include insights? It depends on how you define a report:

  • If the report is the means by which, on a regularly scheduled basis, the performance of a [site/campaign/channel/initiative] is communicated, then my answer is: “No.” Reasons 1, 2, and 4 explain why.
  • If the report is the term used to deliver the results of a hypothesis-driven analysis (or set of hypothesis-driven analyses), then my answer is, “Perhaps.” But…why not call it “Analysis Results” to remove the ambiguity in what it is?
  • If the report is intended to be a combination of both of the above, then you will likely be delivering a rambling deck of 25+ slides that — despite your adoration for the content within — is going to struggle to hold your audience’s attention and is going to do a poor job both of measuring performance and of delivering clearly actionable analysis results.

We live in a real-time world. Consumers — all marketers have come to accept — have short attention spans and consume content in bite-sized chunks. An effective analyst delivers information that is super-timely and is easily digestible.

So. Please. Don’t spend 3 weeks developing insights and recommendations to include on a 20-page document labeled “dashboard.”

Categorized under Analysis, Metrics

  • J. Neal Cornett

    I’m reading this and practically screaming, “Yes, preach on!” Stop the insanity of expecting actionable insights without hypothesis or testing.

    Still, it is our responsibility to educate our customers/executives, not just get annoyed at these ceaseless expectations for ‘gold’ actionable take-aways. This article went a long way toward helping me plan that journey with mine, and for that I am grateful.

    • http://www.gilliganondata.com Tim Wilson

      I see your “preach on!” and raise you a “damn straight!” I *absolutely* believe analysts need to own that education. And, the education can only be 3 parts preaching to 7 parts showing — analysts figuring out ways to tease out hypotheses, play them back as such (without using the word “hypothesis,” necessarily) and then delivering results where the business can take action… totally separate from the dashboard generation.

      Build up a few of those examples, and it starts to come into focus for everyone that the dashboard isn’t where the insights come from — it’s the ad hoc, fluid response to questions and assumptions (hypotheses).

  • Jonathon Frampton

    I like the shift from the existing line of thinking, but do feel (in many cases) that a well-executed dashboard lends itself to actions.

    I agree that KPI dashboards with an array of moving parts would require a month’s (2.5 weeks, to be exact) worth of analysis to dig into, but what about breaking that dashboard up into smaller, more manageable chunks? So if, in your example, I have a dashboard that is focused on what is driving traffic to the site, then adding some nuggets and insights would seem more do-able in terms of time-spent ROI.

    I could go on, but this is a comment, not my own post…

    Either way, great article!


    • http://www.gilliganondata.com Tim Wilson

      My intention wasn’t to imply that dashboards are not inherently actionable. Rather, the action a dashboard drives can be one of two things: 1) the recipient sees a problem and immediately knows what to do (which is why it’s imperative to get them the information with minimal latency), or 2) the recipient sees a problem and does *not* know what to do…but willingly engages in a dialogue to generate hypotheses with the analyst.

      In your example, what if traffic to the site is…pretty much what is expected? Do you find yourself needing to include supporting data — traffic sources, new vs. returning, etc. — anyway? Even though things look fine?

      • Jonathon Frampton

        I had a large write-up of a comment keyed up but realized that I agree: those are the results/actions that it drives, and obviously the latter is to be encouraged. I apologize if my comment came off wrong; perhaps it should have been posed as more of a question.

        In my example, I am trying to get the best of both worlds by creating inherent actions within the dashboards in an automated fashion because (as is often the case) I am a resource-strapped analyst. I was just curious if creating more dashboards with this type of action is a better solution, as the business still needs the actions to be called out…

        And in my (little) experience it is very rare that “nothing” changes, either on the front or back end, so that was my miss.

        Thank you for the reply, it is much appreciated.

        • http://tim.webanalyticsdemystified.com/ Tim Wilson

          Hmmm… Tell me more about “creating inherent actions…in an automated fashion.” Google, I think, is trying to move towards this on a media mix front, and GA has long had automated “intelligence”…but those aren’t “actions” so much as “oddities that you might want to know about.”

          Can you give me an example of an action that would be automatically recommended (without an analyst involved)? I’m just trying to think that through.

          “Manageable chunks” is definitely key (as long as that’s not: “This spreadsheet has 25 tabs…each one is a manageable chunk of information.”). The analyst’s passion for breadth and depth of data is seldom matched by their stakeholders.

  • julian barnes

    My initial thoughts are this: I’d mostly detach the “analyst” from the dashboard. They need to be aware of business drivers and available to answer questions if something has changed that can’t be explained by the business, but leave the (most likely well-paid) “analyst” to analyse: to get deep into the data and answer those business questions which a KPI dashboard hardly asks but the business wants to know.

    That deeper, richer analysis then has recommendations or insight.

    Someone who produces a report is not necessarily an analyst and a good analyst that is producing reports wishes he wasn’t.

    Completely agree on a lot of what you say though, having seen it first hand

    • http://tim.webanalyticsdemystified.com/ Tim Wilson

      Yup. I agree. I feel like my next post may need to be “why I avoid the word ‘report!’”

      “The business user asked a question, to which the analyst is providing a ‘report’ that answers it.” That’s analysis — digging in, answering the point-in-time question, and, hopefully, providing insight that can lead to action.

      “The business user wants to see the monthly report on how the web site is performing.” THAT, in my mind, totally falls under this post — expecting the analyst to find actionable nuggets by trying to figure out which data fluctuations matter and where the business actually cares about performance — is pretty inefficient.

      Many times, a “richer analysis” may have a deeper insight — explain the *root cause* of something bad that is happening. But, I don’t think it can always have a 100%-from-the-analyst recommendation. “I’ve figured out that visitors are steadily gravitating to the lower-margin products” may be valuable. I’m wayyy more comfortable facilitating and contributing to a discussion as to how that might be rectified rather than just rattling off a recommendation — there are quite possibly external factors at play that the marcom, product marketer, or product manager may be aware of off the top of their heads that need to be factored into any recommendation.

  • Sergio Maldonado

    Very good post, Tim. It crowns you as one of the few bloggers in this industry who is actually trying to make analytics useful, and not just something people do for a living.

    As you know, I preach Digital Insight Management (i.e. making analytics useful is what I tell my wife I do for a living :) ). This basically means two things: a) I automate dashboards with an obsession for actionability; and b) I provide a framework for “Analysis Results” to be delivered or for hypotheses to be acted upon and tracked.

    Now, can these two coexist? I believe yes. Dashboards should indeed be automated and cannot wait for insights or recommendations. But they can very well become the starting point for a conversation (as opposed to an exchange of emails followed by a few meetings). And this could happen in both directions: the analyst decides after a while that some of the results call for an explanation/recommendation, or the business user decides to provide some context and request some insights.

    All in all a great discussion, anyhow.

    • http://tim.webanalyticsdemystified.com/ Tim Wilson

      Thanks, Sergio!

      Scenario that I’m cool with:
      1. Dashboard automatically delivered (still as a “push” rather than just passively waiting for the marketer to log in and view it) — to the business and to the analyst.

      2. The analyst looks at it and sees a couple of anomalies. She knows the business well enough to know that both anomalies might really be troubling.

      3. She spends 10 minutes digging in to: 1) confirm there isn’t a data issue, and 2) see if there is a quickly available explanation. Let’s say she confirms the first and doesn’t find that explanation.

      4. She pings the business (specific communication channel varies): “Hey… did you see these results? Any idea what might be causing them? I assume we want to get to the bottom of it.”

      5. Collaboratively, they develop and prioritize some hypotheses — doesn’t take more than 5 minutes because they are well aligned and have a good working relationship. The analyst is sure to confirm, with each one, that they’d be actually able to *do* something if each hypothesis is confirmed.

      6. The analyst returns to her office and starts analyzing. She doesn’t wait to compile a detailed report with the results of every hypothesis. She gets what can be done fairly quickly and then pushes out the results to the marketer. The marketer weighs in — some of that analysis actually triggers some additional hypotheses.

      7. The marketer starts to act as each conclusive result comes in.

      It’s fluid and collaborative…and the dashboard is just one potential jumping off point for that collaboration.

      • julian barnes

        Collaboration between the relevant parties is the key here, something that technology can only help facilitate but never substitute for: internal stakeholder relationships, mutual respect for one another’s opinions. If the relationships are strong enough, then the desired end result, which is beneficial for the business, will work itself out.

        @disqus_ECvuRMSGnH:disqus: Hi, hope you’re well. Small world :)