This past weekend, data visualization, and specifically the visualization of uncertainty, got a bit of national attention. As someone who designs with data and uncertainty, this was very exciting to me (not to mention it gave me a way to talk to my family about what I do!).

On Wednesday President Trump spoke in front of reporters while holding up a map of the approaching Hurricane Dorian. The reason this became a news item on its own is that the map had been altered from its original form. A map of the Gulf Coast that originally had a cone-shaped graphic mapping the potential path of the hurricane now had a new addition — an extra bulbous bump.

The Bulbous Lump

The new addition was immediately suspicious, and there are a couple of obvious reasons why. Its shape is a little bent, the black stroke doesn’t seem to match the rest of the map, and its style isn’t referenced in the key below. Adding to this general gestalt of jankiness is that the extra bump makes the whole shape look, well, weird. Even if someone is not an expert on data visualization or hurricanes, they can recognize that the shape is not right.

Weather services often show hurricane paths with a graphic known as a “cone of uncertainty.” This shape is familiar to people who have seen storm maps on the news year after year, and the same idea is used for a number of other purposes. Now, with the suspicious bump, the shape is no longer really a cone.

Now, I’m not saying Trump drew it on there, or had an aide draw it on there, or anything like that. He said later that he doesn’t know who altered the map and for the purposes of this post I will take his word on that. But. Whoever added that extra little pot belly onto the map clearly did not have a good understanding of how a probability cone works.

And as it turns out, that’s not altogether uncommon.

The cone map is intended to show a range of possibilities for the path of the hurricane, but many people misinterpret this to mean that the hurricane gets bigger over time, or that they are safe if they are on the outside of its edges.

In fact, what the widening end of the cone actually represents is an increase in uncertainty as time progresses!

AI Cone of Uncertainty

The circles get larger as the hurricane moves forward not because the hurricane is getting bigger but because meteorologists have less confidence in where the hurricane will be the further into the future they are predicting. According to a really great NY Times interactive piece, “The uncertainty circles grow over time because it’s easier to predict what will happen one day from now than five days from now.”

This makes intuitive sense. It’s easier to predict where a thing will be in an hour than where it will be in a week. Heck, do you know where you’re going to be at this time next week? And thus, the shape gets wider (possible area of storm) as it moves forward (over time).
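
To make the mechanics concrete, here is a minimal sketch of the idea in Python with matplotlib. The forecast track coordinates and the rate at which the radius grows are made-up illustration values, not NOAA data or methodology; the point is only that the cone comes from sweeping a circle whose radius increases with lead time.

```python
# A minimal sketch (not NOAA's actual method): build a "cone of uncertainty"
# by sweeping a circle whose radius grows with forecast lead time.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical forecast track: longitude/latitude of the storm center per day.
lead_days = np.array([0, 1, 2, 3, 4, 5])
track_lon = np.array([-75.0, -76.5, -78.0, -79.0, -79.5, -79.8])
track_lat = np.array([24.0, 25.5, 27.0, 28.8, 30.8, 33.0])

# Uncertainty radius (in degrees) grows with lead time:
# a wider cone, not a bigger storm.
radius = 0.3 + 0.45 * lead_days

fig, ax = plt.subplots(figsize=(6, 6))
theta = np.linspace(0, 2 * np.pi, 100)
for lon, lat, r in zip(track_lon, track_lat, radius):
    ax.fill(lon + r * np.cos(theta), lat + r * np.sin(theta),
            color="tab:blue", alpha=0.15, linewidth=0)

ax.plot(track_lon, track_lat, "k.-", label="forecast center")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_title("Cone = circles of growing uncertainty, not a growing storm")
ax.legend()
plt.show()
```

The exact growth rate doesn’t matter; what matters is that a wider end means less confidence in the center’s position, never a bigger storm.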

When you consider the mechanics of the cone, that extra little bump makes even less sense. It suggests that forecast models are more confident of where the center of the hurricane will be in three days than in two, but it just doesn’t work that way.

The ramifications of this kind of understanding are important.

In the case of Hurricane Dorian and the NOAA map (minus the bonus bump), there is a 60-70% chance of the hurricane crossing within that geographical area, which of course also means there is roughly a one in three chance that the hurricane will go outside of the cone. According to the Times story, “The cone graphic is deceptively simple. That becomes a liability if people believe they’re out of harm’s way when they aren’t.” The misunderstanding matters; it could lead to people not taking important safety precautions and putting themselves in real danger.

But just as the cone of uncertainty can oversimplify the hurricane’s path to the point of hiding uncertainty, other map types can complicate it to the point of confusion. Take a look at another map that had been shared with President Trump:

Hurricane of Uncertainty

I’ll call this one “The spaghetti nightmare.” Where the first map was clear but oversimplified in a way that hid uncertainty, this one is a complex mess that is hard to get a quick read on. People only have so much mental energy to spend on your map.

Being clear — while also exposing uncertainty — is a balancing act.

At Noodle.ai, and in the field of artificial intelligence more generally, we work a lot with uncertainty. When our mathematical predictions can improve certainty by a few percentage points, that is a win. As a designer at Noodle, I create a lot of visual representations of machine learning model outputs that are steeped in uncertainty. It’s not easy to both show a range of potential outcomes and make clear that they are possibilities rather than a sure bet, but that is always the goal. One good resource for designing for uncertainty comes from Scientific American.

A good example of a graphic that nails this balance is Moritz Stefaner’s Project Ukko.

Process Detail as visualization

In Project Ukko, Stefaner shows a cone that is the accumulation of 51 equally likely predictions, drawn as continuous lines alongside past observations. The shape of the cone is formed by the individual lines, so a natural “clumping” of probability is quickly visible without being forced. He also presents the probability data in a “deeper dive” so that it doesn’t clutter the global view.
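
As a rough illustration of that ensemble-of-lines approach (this is not Stefaner’s code, and the data below is randomly generated), you can get a similar effect by drawing many equally likely traces with low opacity and letting the clumping emerge on its own:

```python
# Sketch of the ensemble-of-lines idea: draw many equally likely forecast
# traces with low opacity so the "cone" and its clumping emerge from the lines.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
days = np.arange(0, 11)                      # forecast horizon in days
n_members = 51                               # ensemble size, as in Project Ukko

# Hypothetical ensemble: each member is a random walk whose spread grows over time.
steps = rng.normal(loc=0.2, scale=0.6, size=(n_members, len(days)))
members = np.cumsum(steps, axis=1)

fig, ax = plt.subplots(figsize=(7, 4))
for member in members:
    ax.plot(days, member, color="tab:orange", alpha=0.2, linewidth=1)

# The median trace gives a quick read without hiding the spread.
ax.plot(days, np.median(members, axis=0), color="black", linewidth=2, label="median")
ax.set_xlabel("Days ahead")
ax.set_ylabel("Predicted value (arbitrary units)")
ax.legend()
plt.show()
```

Because every line has equal weight, the density of overlapping strokes, rather than a filled shape, carries the probability information.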

A big part of the work is finding the right design language for a given problem.

According to Moritz, that language should be: “capable of providing a good mental model for the dataset, allow us to see aggregate patterns but also fine details, and is recognizable, and attractive.”

Bringing it back home, at Noodle.ai we express uncertainty in a variety of ways. In our Transportation Network AI application, we reveal to trucking industry operators where there is hidden capacity within their networks in order to create more efficient routes, and ultimately, value. Insight into where network imbalances will occur — that is, where there are more trucks than loads or more loads than trucks — in the next few days can be a game changer. But no prediction is a certainty, and for humans to make good decisions based on those predictions, there has to be a clear understanding of confidence and risk. The challenge is to present the advanced predictions of our data scientists in a way that doesn’t sweep uncertainty under the rug.

We opted to show imbalance at a network-wide level, but allow a deeper dive to view uncertainty at a regional level. Confidence levels are shown in shades, but grouped into distinct bands (separate shades rather than continuous gradients).

Noodle.ai Transportation Networking Loads Chart
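
Here is a simplified sketch of the banding idea, with hypothetical regions and numbers rather than our actual implementation: continuous confidence scores are bucketed into a few bands, and each band gets its own shade.

```python
# Simplified sketch (hypothetical data): bucket continuous confidence scores
# into a few bands and map each band to its own shade, instead of a gradient.
import numpy as np
import matplotlib.pyplot as plt

regions = ["Southeast", "Gulf", "Midwest", "Northeast", "West"]
imbalance = np.array([12, -8, 5, -15, 3])           # loads minus trucks (hypothetical)
confidence = np.array([0.92, 0.71, 0.55, 0.83, 0.38])

# Three bands instead of a gradient: easier to read at a glance.
band_edges = [0.5, 0.8]                             # low < 0.5 <= medium < 0.8 <= high
band_shades = ["#c6dbef", "#6baed6", "#2171b5"]     # light -> dark blue
band_index = np.digitize(confidence, band_edges)

fig, ax = plt.subplots(figsize=(7, 4))
ax.bar(regions, imbalance, color=[band_shades[i] for i in band_index])
ax.axhline(0, color="black", linewidth=0.8)
ax.set_ylabel("Predicted imbalance (loads - trucks)")
ax.set_title("Imbalance by region, shaded by confidence band")
plt.show()
```

Discrete bands trade precision for legibility: an operator can tell high confidence from low confidence at a glance without decoding a gradient.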

In designing data visualization, it is our goal to create clarity out of large volumes of data. When designing for machine learning and AI — as with weather maps — we need to make sure that clarity does not suggest a certainty that does not exist.