#WorkoutWednesday – Fiscal Years + Running Sums

As a big advocate of #WorkoutWednesday, I am excited to see that it is continuing in 2018.  I champion the initiative because it offers people a constructive way to problem-solve, learn, and grow using Tableau.

I was listening to this lecture yesterday and caught a great snippet: “context is required to spark curiosity.”  We see this over and over again in our domain – everyone wants to solve problems, but unless a problem is presented, it can be hard to channel energy and approach something constructively.

One last bit of context before getting into the weeds of the build: I enjoy the exercise of explaining how something works.  It helps cement in my mind the concepts and techniques used during the build.  Being able to explain the “why” and the “how” is crucial.

Let’s get started.

High-level problem statement: build a dashboard that shows the running total of sales and allows the user to dynamically change the start of a fiscal year.  The date axis should begin at the first month of the fiscal year.  (The challenge tweet is embedded in the original post.)

Okay – so the challenge is set.  Now on to the build.  I like to start by tackling the small problems I immediately know how to solve.  Once I have a framework to build from, I can work through adjustments to get to the end goal.

  • numeric parameter 1 to 12 for the fiscal year start
  • calculation to push a date to the right fiscal year
  • dimension for the fiscal year
  • transformation of date to the correct year
  • running sum of sales

I’ll spare you the parameter build and go directly into the calculations that deal with the fiscal year components.

Logic as follows – if the month of the order date is less than the fiscal year start, then subtract one from the year, otherwise it’s the year.  I can immediately use this as a dimension on color to break apart the data.
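
A minimal sketch of that logic, assuming the 1-to-12 integer parameter is named [Fiscal Year Start] (field names are my guess – the original was a screenshot):

    // Fiscal Year
    IF MONTH([Order Date]) < [Fiscal Year Start]
    THEN YEAR([Order Date]) - 1
    ELSE YEAR([Order Date])
    END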

The next step would be to use that newly defined year with other elements of the original date to complete the “fiscal year” transformation.  Based on the tooltips – the year for each order date should be the fiscal year.
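
Something along these lines completes it, reusing the hypothetical [Fiscal Year] field from above (MAKEDATE is my choice here; the original may differ):

    // Fiscal Order Date – the order date with its year swapped for the fiscal year
    MAKEDATE([Fiscal Year], MONTH([Order Date]), DAY([Order Date]))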

Now that the foundation is in place, the harder part is building a continuous axis – particularly a continuous axis of time.  Dates span from 2013 to 2017 (depending on how you’ve got your FY set up), so if we plotted all the native dates we’d expect the axis to run from 2013 to 2017.  But that’s not really what we want.  We want a timeline that spans a single year (or, more appropriately, 365 days).

So my first step was to build out a dummy date that had the SAME year for all the dates.  The dates are already broken out with the FY, so as long as the year isn’t shown on the continuous axis, it will display correctly.  Here’s my first pass at the date calculation:
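
Reconstructed as a sketch (the original was a screenshot; 2000 is the year I used, as noted below):

    // Dummy Date, first pass – every order pinned to the same year
    MAKEDATE(2000, MONTH([Order Date]), DAY([Order Date]))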

That gets me the ability to produce this chart – which is SO CLOSE!

The part that isn’t working: my continuous axis of time always starts at 1/1.  And that makes sense, because all the dates are for the same year and there’s always 1/1 data in there.  The takeaway: ordering the dates is what I need to figure out.

The workaround?  The timeline needs to span more than one year, and specifically the start of the axis should be “earlier” in time.  To achieve this – instead of hard coding a single year (2000 in my case) – I changed the dummy year to be dependent on the parameter.
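
A sketch of that parameter-dependent version, keeping the hypothetical names from above (leap days left aside as an edge case):

    // Dummy Date, second pass – months before the FY start roll into the next year
    IF MONTH([Order Date]) < [Fiscal Year Start]
    THEN MAKEDATE(2001, MONTH([Order Date]), DAY([Order Date]))
    ELSE MAKEDATE(2000, MONTH([Order Date]), DAY([Order Date]))
    END

Here’s the result: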

Basically, offset everything that’s less than the chosen FY start to the next year (put it at the end).  (Remember that we’re already slicing the data into the correct FY using the previous calculations.)  The combination of these two calculations then changes the chart to this guy, which is now mere formatting steps away from completion.
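
For completeness, the running total itself is a standard table calculation computed along the dummy date:

    // Running Total of Sales
    RUNNING_SUM(SUM([Sales]))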

The full workbook is available to download here.

The Remaining 25 Weeks of #WorkoutWednesday

Back in July I wrote the first half of this blog post – it was about the first 27 weeks of #WorkoutWednesday.  The important part to remember (if the read is too long) is that I made a commitment to follow through and complete every #MakeoverMonday and #WorkoutWednesday in 2017.  The reason was pretty straightforward – I wanted a constructive way to challenge myself, with tangible, realistic goals.

Now that we’re 3 days into 2018 – it’s the perfect opportunity to go through the same process of sharing the impact each workout (the remaining 25) has had on me.

Week 28 | Insights & Annotations
The focus of this workout was adding context/insights/summarization to existing visualizations – something that is often asked of those creating dashboards or presenting data.  I enjoyed this workout tremendously because it was a great example of using a feature within Tableau in a way I hadn’t thought about.  The premise is pretty simple – allow users the ability to input insights/findings and customize the summary output.  A very clever approach, and a great way to provide a dashboard to someone who has little time or is skeptical of self-service analytics.

Week 29 | Who sits where at the Data School?
I hated this workout – because of donut charts.  Donut charts in Tableau are the level-one “look how cool I am” or “I think I’m pretty awesome with the tool” things that people make.  Yes – they are cuter than pie charts (which I don’t mind) – but I strongly hate how they are implemented in Tableau.  Pushing aside my dissatisfaction with donuts – I veered off requirements for this dashboard.  In particular, I deviated from the original results because of how seat favorites were computed – more specifically, I ended up with more “No Favorite” results than the original.  The great point about this workout: the sophistication required to calculate several of the numbers shown.

Week 30 | Loads of LODs
As described, this workout had a few LODs.  The most distinct thing I remember about this build relates to the region filter.  You need to decide early on how you’re going to implement it.  I believe Emma used a parameter, whereas I used a filter.  The choice made here will have consequences on the visualizations and is a great way to start understanding the order of operations within Tableau.

Week 31 | The Timing of Baby Making
Ah – a favorite visualization of mine, the step chart/plot.  The two major gotcha moments here are: implementing the step chart and the dropdowns/filters to set highlighting.  This one is solvable in multiple ways and I went the Joe Mako route of unioning my data set.  During the build this seemed like the easier solution to me, but it does have implications later on.  I believe it’s worth the effort to go this route – you will learn a lot about how unioning your data on itself can be useful.

Week 32 | Continuous Dates are Tricky
This is classic Emma style – particularly down to the requirement of a dynamic title that would update based on newer data availability.  The premise itself is pretty straightforward – how can you plot a continuous month line, but have years broken out.  By definition that concept should break your brain a bit, because continuous means it’s the entire date, so how are you plotting different years on a continuous axis?  And hence the challenge of the workout!

Week 33 | How Have Home Prices Changed?
Andy mentioned that this visualization was inspired by Curtis Harris.  And as I write this – it is probably my top visualization in terms of design from the 2017 #WorkoutWednesday collection.  Something about the strategic use of capitalization in conjunction with the color choices resonated with me and has left a lasting impact on my viz style.  This is a stunningly beautiful example of making Tableau a very self-service analytics tool: dense data, but still deceptively simple and clean from a design perspective.  Plus you’re practicing dynamic titles again – which I find to be a requirement for most serious builds.

Week 34 | Disney’s Domination
This workout was my first waffle chart.  I’d successfully avoided the waffle (as a general rule I don’t like high-carbohydrate visualizations) up to this point.  Bigger than the waffle, though, was the requirement to data blend.  I’m not a big data blending fan because it is very easy for things to go sideways.  However – the icky feeling you get from data blending is exactly why this is a great exercise to work through.  I believe I built this entire visualization (up to a point) using LODs and then had to switch to table calculations.  I learned how to turn an LOD into a table calculation (probably the reverse practice for more tenured Tableau folks).

Week 35 | Average Latitude of Solar Eclipses by Century
This is another visualization with design that I find very pleasing – particularly the use of reference lines.  I distinctly remember learning so much about the Path shelf this week – specifically, how to use path to your advantage.  You don’t often see people create something that would otherwise be a line chart but instead uses vertical bars/Gantts/lines to focus the eye vertically and then across.  A great exercise and thought-starter on additional visualizations to make.

Week 36 | Which UK Airport Should You Fly From?
This workout is the perfect hands-on exercise for continuous sizing on bar charts, offered up as a new feature in the past year.  Beyond knowing that you CAN do something, knowing HOW to build that something is the key (at least for me) to being able to iterate and ideate.  This one is more complex than it seems at first glance.

Week 37 | Killings of Blacks by Whites Are Far More Likely to Be Ruled ‘Justifiable’
A viz I don’t want to remember – this one is 100% about formatting.  It took me a considerable chunk of time to complete.  And probably more maddening – this viz needs to be built in one session, otherwise you’ll forget all the intricate details required to make it look “just so.”  I was cursing my PC the entire build – and worse than that, I think I restarted it at a certain point because things weren’t lining up how I wanted.  Only complete this if you enjoy being tormented.  The upside?  Going through this workout will make you intimately aware of all the gaps and limitations Tableau has as it relates to design.  Also – this was done before changing padding was a feature.  Thanks guys.

Week 38 | (It Takes) All Sorts
This is another “looks simple” but has “some tricks” workout.  I remember someone at our user group asking about this over the summer and whether I knew how to build it.  I didn’t have an answer readily available within 30 seconds, so I knew there was more going on.  I highly encourage this build because it demonstrates how sorting works and how multiple sorts interact with each other.  Also – I think whatever sorting I ended up with was some sort of mathematical manipulation on my part.

Week 39 | Are the contributions of top sellers increasing throughout the year?
Another trellis chart!   More than the trellis – check out what is being analyzed.  This isn’t superficial or first-pass reading of data.  This is second and third level thought on finding deeper insights and answers to questions within a data set.  So naturally it requires more layers of calculations to resolve.  And of course – the “just so” placement of the far right label.  This is a perfect example of taking a question and turning it into a visualization that shows the answer.

Week 40 | All Sorts Part 2
As advertised and named – the second half of Emma’s sorting workout.  This may actually have been the dashboard where I did some mathematical magic to force the selected item into first position while retaining any additional sorting.  Also – the devil is in the details.  When you change the sort order, notice that the bottom visualization always changes to the subcategory chosen.  Sounds easy, but it takes some thought to implement.

Week 41 | State to City Drill Down
As I look back on my tracker – I realize that I did weeks 38 through 41 in the same day.  And naturally I approach a backlog in the fashion of oldest gets built first.  So this was the 4th on a particular day – but I championed this guy hardcore.  I will say it again – YOU NEED TO DO THIS WORKOUT.  The concepts on execution here are next level.  I know it sounds kind of trivial – but it will help unlock your mind to the possibilities of using filters and the art of the possible.  Plus this is a question that ALWAYS gets asked: “Can I click a state and have it automatically change the view to cities?”  This does that.  Also – this build took me 30 minutes tops.

Week 42 | Market Basket Analysis
I won’t forget this workout anytime soon because it required the legacy JET connector and thinking about how data gets joined back on itself.  This type of analysis is something people often want to have done – so knowing the steps on creation using an Excel data source (or other sources for that matter) makes this guy worth the build.  Follow Emma’s advice closely.

Week 43 | The Seasonality of Superstore
A great viz once again demonstrating how powerful parameters can be – how you can use them in multiple places – and also things you can do to make visualizations more user/reader friendly.  You’re definitely using a table calculation somewhere in here – and you definitely will get angry when trying to recreate the smoothing (particularly dealing with endpoints of the chosen time range).

Week 44 | Customer Cohorts
When dealing with cohort analysis you’re very likely to encounter LODs – that’s par for the course for this workout.  But again – Emma is so clever at taking something that seems straightforward and challenging you to implement it.  If you look closely you’ll see you have to dynamically change the bottom visualization based on where a user clicks.  I remember spending the majority of my time on the dynamic title.

Week 45 | Stock Portfolio
This is one sheet.  Just remember that – everything is on one sheet.  And more than that – think about how this is implemented from a numerical perspective – there’s some serious normalization going on to make things show up in context with one another.  If you’re not a math lover – this will be a great way to play with numbers and have them bend to your advantage.  Also – I remember being annoyed because one of the stocks had a maximum value greater than the recorded max (which is its own measure) – and that irritated me.

Week 46 | Top N Customers
Think of this as a different way of implementing sets.  It has a lot of similar functionality between IN/OUT and showing members of a set.  And also there are some key takeaways in terms of aggregating dimensions.  Not super flashy on design, but very useful in terms of implementation.

Week 47 | Fun with Formatting
Another visualization where you’re required to do everything in a single sheet.  This will put all that table calculation sweat to action.  I really enjoyed this one.  There is something very satisfying about ranking/indexing things multiple ways in one view.  Also – it uses the Caption, guys.

Week 48 | Treemap Drilldown
Same concept as week 41, but executed as a treemap.  I think I even opened up week 41 to use as influence on where to go.  Same concepts are repeated, but in a different format.  The automagic of this one doesn’t get old – also carefully look at how things are sorted.

Week 49 | Position of Letter Occurrences in Baby Names
When you say out loud what you’re trying to do – particularly “find the nth occurrence” of a specific letter (we can generalize as substring) in a specific string – it sounds really really hard.  But guess what – there’s a built in function!  The fact that it’s built in made this visualization super straightforward to complete.  You should build this to introduce yourself to a function you’ve probably never used before.
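
The post doesn’t name it, but the built-in function that fits the description is FINDNTH, which returns the position of the nth occurrence of a substring (field name hypothetical):

    // Position of the 2nd 'a' in a baby name
    FINDNTH([Name], 'a', 2)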

Week 50 | Rocket ship Chart
I very much enjoy this type of chart from an analytical perspective.  It’s a great way to normalize things that are bound to time.  You see immediate inferred rank and results.  Emma put in some requirements to ensure that as data changed this chart would stay accurate.

Week 51 | State by State Profit Ratio
If you want several of the lessons Andy built into multiple workouts all in one place – this workout is for you.  It’s got so many classic Kriebel “gotcha” moments in it.  As I was building this I really felt like it was a specially designed final exam to test what I’d learned.  Also – this is probably my first tilemap (unless we made one in another workout).  I don’t use them often – so it’s a great refresher on how to implement one.  And also – you get to use a join calculation.

Week 52 | UK’s Favourite Christmas Chocolates
When I was building this one someone asked me why I was making it – specifically, where was the challenge.  I explained that it was all in one sheet as opposed to 4 different sheets.  A natural next question occurred, which was “why would you want to do it in one sheet?”  I thought that was a very interesting question, and one that I answered by saying that for me personally, knowing multiple ways to do things is important.  And more specifically, as I know to be true of these types of builds – if you can do it in one sheet, it shows a level of mastery in making Tableau do exactly what you want (which is LOTS of things).

And that wraps up 2017 beautifully.  Comparing the retrospective of this half of the year vs. the first half – there are stark differences from my perspective.  I can honestly say that each build got easier as time went on.  Once I got to the last few challenges – I was timing completion to be about 20 minutes.  Contrast that with the first few weeks where I spent hours (over multiple sessions) making my way through each build.

Beyond building out my portfolio, having concrete examples of specific types of analysis, and fulfilling my own goals – #WorkoutWednesday has given me such depth of knowledge in Tableau that it’s ridiculous.  You know how people say things like “that’s just funny Tableau behavior” – well, I can (for the most part) now verbally articulate what that funny behavior is and why it happens.  And more than that – I know how to maximize the behavior (which is really working as designed) and use it to my own advantage.

The last part of this blog series is going to be a ranking of each workout – aimed at helping those who are interested in completing the challenges approach them without getting too discouraged or burnt out on some of the builds that are still (to this day) hard.  Be on the lookout.

#WorkoutWednesday Week 24 – Math Musings

The Workout Wednesday for week 24 is a great way to represent where a result for a particular value falls with respect to a broader collection.  I’ve used a spine chart recently on a project where most data was centered around certain points and I wanted to show the range.  Propagating maximums, minimums, averages, quartiles, and (when appropriate) medians can help to profile data very effectively.

So I started off really enjoying where this visualization was going – partly because the spine chart I made on that recent project came before I even knew the thing I’d developed already had a name.  (Sad on my part, I should read more!)

My enjoyment turned into caution really quickly once I saw the data set.  There are several ratios in the data set and very few counts/sums of things.  My math brain screams trap!  Especially when we start tiptoeing into the world of what we semantically call “average of all” or “overall average” or something that somehow represents a larger collective (“everybody”).  There is a lot of open-ended interpretation that goes into this particular calculation and when you’re working with pre-computed ratios it gets really tricky really quickly.

Here’s a picture of the underlying data set:

Some things to notice right away – the ratios for each response are pre-computed.  The number of responses is different for each institution.  (To simplify this view, I’m on one year and one question).

So the heart of the initial question is this: if I want to compare my results to the overall results, how would I do that?  Now there are probably 2 distinct camps here.  Option 1: take the average of one of the columns and use that to represent the “overall average.”  Let’s be clear on what that is: it is the average pre-computed ratio of a survey.  It is NOT the observed percentage of all individuals surveyed.  That would be option 2: the weighted average.  For the weighted average – that is, to calculate a representation of all respondents – we could add up all the qualifying respondents answering ‘agree’ and divide by the total respondents.

Now we all know this concept of average-of-an-average vs. weighted average can cause issues.  Specifically, we’d feel the friction immediately if there were several low-response entities commingled with entities capturing many more responses.  EX: Place A: 2 people out of 2 answered ‘yes’ (100%); Place B: 5 out of 100 answered ‘yes’ (5%).  If we average 100% and 5% we get 52.5%.  But if we take 7 out of 102, that’s 6.86% – a very different number.  (Intentionally extreme example.)

So my math brain was convinced that the “overall average” or “ratio for all” should be inclusive of the weights of each Institution.  That was fairly easy to compensate for: take each ratio and multiply it by the number of respondents to get raw counts and then add those all back up together.
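
In calculation form that’s simply (field names hypothetical):

    // Weighted overall ratio – back to raw counts, then divide
    SUM([Agree Ratio] * [Respondents]) / SUM([Respondents])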

The next sort of messy thing to deal with was finding the minimums and maximums of these values.  It seems straightforward, but when reviewing the data set and the specifications of what is being displayed, caution is warranted around the level of aggregation and how the data is filtered.  As an example, depending on how the ratios are leveraged, you could end up finding the minimum of 3 differently weighted subjects within a subject group.  You could also probably find the minimum Institution + subject result at the subject level across all the subjects within a group.  Again, I think the best bet here is to tread cautiously around the ratios and get into raw counts as quickly as possible.

So what does this all mean?  To me it means tread carefully and ask clear questions about what people are trying to measure.  This is also where I will go the distance and include calculations in tooltips to help demonstrate what the values I am calculating represent.  Ratios are tricky, and averaging them is even trickier.  There likely isn’t a perfect way to deal with them, and it’s something we all witness consistently throughout our professional lives (how many of us have averaged a pre-computed average handle time?).

Beyond the math tangent – I want to reiterate how great a visualization I think this is.  I also want to highlight that, because I went off the deep end on the math, I decided to take the development in a different direction too.

The main difference from the development perspective?  Instead of using reference bands, I used a Gantt bar as the IQR.  I really like using the bar because it gives users an easier target to hover over.  It also reduces some of the noise of the default labeling that occurs with reference lines.  To create the Gantt bar – simply compute the IQR as a calculated field and use it as the size.  You can select one of the percentile points to be the start of the mark.
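
A minimal sketch of that sizing field, assuming a measure called [Value]:

    // IQR – drop this on Size for the Gantt mark
    PERCENTILE([Value], 0.75) - PERCENTILE([Value], 0.25)

The Gantt mark itself would then sit at PERCENTILE([Value], 0.25), so the bar spans the interquartile range.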

#WorkoutWednesday Week 23 – American National Parks

I’m now back in full force from an amazing analytics experience at the Alteryx Inspire conference in Las Vegas.  The week was packed with learning, inspiration, and community – things I adore and am honored to be a part of.  Despite the awesome nature of the event, I have to admit I’m happy to be home and keeping up with my workout routine.

So here goes the “how” of Workout Wednesday week 23.  Specifications and backstory can be found on Andy’s blog here.

Here’s a picture of my final product and my general assessment of what the approach would require:

Things you can see from the static image that will be required –

  • Y-axis grid lines are on specific demarcations with ordinal indicators
  • X-axis also has specific years marked
  • Colors are for specific parks
  • Bump chart of parks is fairly straightforward; will require an INDEX() calculation
  • Labels are only on colored lines – tricky

Now here’s the animated version showing how the interactivity works:

  • Highlight box has specific actions
    • When ‘none’ is selected, defaults to static image
    • When park of specific color is selected, only that park has different coloration and it is labeled
    • When park of unspecified color is selected, only that park has different coloration (black) and it is labeled

Getting started is the easy part here – building the bump chart.  Based on the data set and instructions it’s important to recognize that this is limited to parks of type ‘National Historical Park’ and ‘National Park.’  Here’s the basic bump chart setup:

and the custom sort for the table calculation:

Describing this is pretty straightforward – index (rank) each park by the descending sum of recreation visitors every year.  Once you’ve got that set up, flipping the Y-axis to reversed will get you to the basic layout you’re trying to achieve.
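
In formula terms there’s barely anything to it:

    // Rank – a discrete table calculation placed on Rows
    INDEX()

The work happens in the table calculation settings: compute using the park, restarting every year, with the custom sort set to SUM([Recreation Visitors]) descending.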

Now – the grid lines and the y-axis header.  Perhaps I’ve been at this game too long, but anytime I notice custom grid lines I immediately think of reference lines.  Adding constant reference lines gives ultimate flexibility in how they’re labeled and displayed.  So each of the rank grid lines is a reference line.  You can add the ‘Rank’ header to the axis by creating an ad-hoc calculation of a text string called ‘Rank.’  A quick note on this: if you add dimensions and measures to your sheet, be prepared to double-check and modify your table calculations.  Sometimes dimensions get incorporated when it wasn’t intended.

Now on to the most challenging part of this visualization: the coloration and labels.  I’ll start by saying there are probably several ways to complete this task and this represents my approach (not necessarily the most efficient one):

First up: making colors for specific parks called out:
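
The original is a screenshot; a hypothetical stand-in with placeholder park names looks like:

    // Park Colors
    IF [Park Name] = 'Park A'
    OR [Park Name] = 'Park B'
    OR [Park Name] = 'Park C'
    THEN [Park Name]
    ELSE 'Other'
    END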

(probably should have just used the Grouping functionality, but I’m a fast typer)

Then making a parameter to allow for highlighting:

(you’ll notice here that I had the right subset of parks; this is because I made the Park Type a data source filter and later an extract filter – thus removing the other types from the domain)

Once the parameter is made, build in functionality for that:
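
A sketch of that logic, building on the hypothetical [Park Colors] field and a parameter I’ll call [Select Park]:

    // Highlight – parameter-aware coloring
    IF [Select Park] = '(None)' THEN [Park Colors]
    ELSEIF [Park Name] = [Select Park] THEN [Park Name]
    ELSE 'Other'
    END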

And then I set a calculation to dynamically flip between the two calculations depending on what the parameter was set to.

Looking back on this: I didn’t need the third calculation, it’s exactly the same functionality as the second one.  In fact as I write this, I tested it using the second calculation only and it functions just fine.  I think the over-build speaks to my thought process.

  1. First let’s isolate and color the specific parks
  2. Let’s make all the others a certain color
  3. Adding in the parameter functionality, I need the colors to be there if it is set to ‘(None)’
  4. Otherwise I need it to be black
  5. And just for kicks, let’s ensure that when the parameter is set to ‘(None)’ that I really want it to be the colors I’ve specified in the first calc
  6. Otherwise I want the functionality to follow calc 2

Here’s the last bit of logic to get the labels on the lines.  Essentially I know we’re going to want to label the end point, and because of how labeling works I’m going to have to allow all labels to be visible and determine which marks actually have values for the label.  PS: I’m really happy to use that match color functionality on this viz.
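
My reading of that logic, as a sketch reusing the hypothetical fields above (ATTR keeps the table calculation happy):

    // Line Label – only the right-most point, and only for highlighted parks
    IF LAST() = 0 AND ATTR([Highlight]) <> 'Other'
    THEN ATTR([Park Name])
    END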

And the label setting:

That wraps up the build for this week’s workout, with the last steps being to add additional components to the tooltip and to stylize.  A great workout that demonstrates the compelling nature of interactive visualization and the ever-popular bump chart.

Interact with the full visualization here on my Tableau Public.

Workout Wednesday Week 21 – Part 1 (My approach to existing structure)

This week’s Workout Wednesday had us taking NCAA data and developing a single chart that showed the cumulative progression of a basketball game.  More specifically, a line chart where the X-axis is a countdown of time and the Y-axis is the current score.  There’s some additional detail in the form of the size of each dot representing 1, 2, or 3 points.  (see cover photo)

Here’s what the underlying data set looks like:

Comparing the data structure to the image of what needs to be produced, my brain started to hurt.  Some things I noticed right away:

  • Teams are in separate columns
  • Score is consolidated into one column and only displayed when it changes
  • Time amount is in 20 minute increments and resets each half
  • Flavor text (detail) is in separate columns (the team columns)
  • Event ID restarts each half, seriously.

My mind doesn’t like that there’s a team dimension that’s not in a single column.  It doesn’t like the restarting time either.  It really doesn’t like the way the score is done.  These aren’t numbers I can aggregate together; they are raw outputs in a string format.

Nonetheless, my goal for the Workout was to take what I had in that structure and see if I could make the viz.  What I don’t know is this: did Andy do it the same way?

My approach:

First I needed to get the X-axis working.  I’ve done a good bit of work with time, so I knew a few things needed to happen.  The first part was to convert what was in MM:SS to seconds.  I did this to turn the data into a continuous axis that I could then format back into MM:SS.  Here’s the calculation:

I cheated and didn’t write my calculated field for longevity.  I saw that there was a dropped digit in the data and compensated by breaking the calculation into two parts.  Here are the describe results showing the domain.  A more holistic way to do this would be to say: if the string is of length 4, append a 0 to it, and then go about the same process – as sketched below.
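
That more holistic version would look something like this (field name hypothetical):

    // Seconds Remaining – pad M:SS to MM:SS, then convert to total seconds
    INT(LEFT(IF LEN([Time]) = 4 THEN '0' + [Time] ELSE [Time] END, 2)) * 60
    + INT(RIGHT([Time], 2))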

Validation check: the time goes from 0 to 20 minutes (0 to 20*60 seconds aka 1200 seconds).  We’re good.

Next I needed to format that time into a continuous MM:SS format.  I took that calculation from Jonathan Drummey.  I’ve used this more than once, so my Google search is appropriately ‘Jonathan Drummey time formatting.’  The resultant time ‘measure’ was almost there, but I wasn’t taking into consideration the +20 minutes for the first half, and that the time axis spans the full game duration.  So here are the two calculations that I made (first is +20 mins, then the formatting):
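
The first of the two might look like this, assuming a numeric [Half] field and the [Seconds Remaining] calc above (the formatting calc is Jonathan’s, so I won’t reproduce it here):

    // Game Seconds Remaining – push first-half times out by 20 minutes
    IF [Half] = 1
    THEN [Seconds Remaining] + 20 * 60
    ELSE [Seconds Remaining]
    END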

At this point I felt like I was kind of getting somewhere – almost to the point of making the line chart, but I needed to break apart the teams.  For that bit I leveraged the fact that the individual team fields only have details in them when that team scores.  Here’s the calc:
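
Roughly this, with team and field names as placeholders:

    // Team – the detail column is only populated when that team scores
    IF NOT ISNULL([Team A Detail]) THEN 'Team A' ELSE 'Team B' END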

I still don’t have a lot going on – at best I have a dot plot where I can draw out the event ID and start plotting the individual points.

Getting the score was relatively easy.  I also did this in a custom-to-the-data-set kind of way, with 3 calculations – find the left score, find the right score, then tag the scores to the teams.
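
A sketch of those three, assuming the score sits in an ‘NN-NN’ style string (null handling omitted):

    // Left Score
    INT(SPLIT([Score], '-', 1))

    // Right Score
    INT(SPLIT([Score], '-', 2))

    // Team Score – tag the right number to the right team
    IF [Team] = 'Team A' THEN [Left Score] ELSE [Right Score] END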

Throwing that on rows, here’s the viz:

All the events are out of order and this is really difficult to understand.  To get closer to the view I did a few things all at once:

  • Reverse the time axis
  • Add Sum of the Team Score to the path
  • Put a combined half + event field on detail (since event restarts per half)

Also – I tried Event & Half separately and my lines weren’t connected (broken at halftime), so creating a derived combined field proved useful for connecting the line.

Here’s that viz:

It’s looking really good.  Next steps are to get the dots to represent the ball sizes.

One of my last calculations:
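
Here’s one plausible shape for it – a table calculation along each team’s events, where the jump in score is the points for that basket (a hypothetical reconstruction, not necessarily the original):

    // Ball Size – points scored on this event
    SUM([Team Score]) - LOOKUP(SUM([Team Score]), -1)

The first event per team returns null here – which lines up with the pesky null mentioned below.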

That got dropped on Size on a duplicated and synchronized “Team Score.”  Getting the pesky null to not display in the legend was a simple right-click and ‘hide.’  I also had to sort the Ball Size dimension to align with the perceived sizing.  And the line size was made super skinny.

Now some cool things happened because of how I did this: I could leverage the right and left scores for tooltips.  I could also leverage them in the titling of the overall scores, e.g. UNC = {MAX([LeftScore])}.

Probably the last component was counting the number of baskets (within the scope of making it a single returned value in a title per the specs of the ask).  Those were repeated LODs:
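
A sketch of one of them, assuming a row-level [Points Scored] field (the table-calc version of Ball Size above wouldn’t be usable inside an LOD):

    // Team A 3-pointers – returns a single value, usable in a title
    { SUM(IF [Team] = 'Team A' AND [Points Scored] = 3 THEN 1 ELSE 0 END) }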

And thankfully the final component – the oversized scores on the last marks – could be accomplished with the ‘Always Show’ option.

Now I profess this may not be the most efficient way to develop the result, heck here’s what my final sheet looks like:

All that being said: I definitely accomplished the task.

In Part 2 of this series, I’ll be dissecting how Andy approached it.  We obviously did something different, because it seems like he may have used the Attribute function (I saw some * in tooltips).  My final viz has all data points and no asterisks (ex: 22:03 remaining, UNC).  Looking at that part: mine has each individual point and the score at each instantaneous spot; his drops the score.  Could it be that he tiptoed around the data structure in a very different way?

I encourage you to download the workbook and review what I did via Tableau Public.

Synergy through Action

This has been an amazing week for me.  On the personal side of things my ship is sailing in the right direction.  It’s amazing what the new year can do to clarify values and vision.

Getting to the specifics of why I’m calling this post “Synergy through Action” – that’s the best way for me to describe how my participation in this week’s Tableau and data visualization community offerings has influenced me.

It all actually started on Saturday.  I woke up and spent the morning working on a VizforSocialGood project, specifically a map to represent the multiple locations connected to the February 2017 Women in Data Science conference.  I’d been called out on Twitter (thanks Chloe) and felt compelled to participate.  The kick of passion I received after submitting my viz propelled me into the right mind space to tackle 2 papers toward my MBA.

Things continued to hold steady on Sunday where I took on the #MakeoverMonday task of Donald Trump’s tweets.  I have to imagine that the joy from accomplishment was the huge motivator here.  Otherwise I can easily imagine myself hitting a wall.  Or perhaps it gets easier as time goes on?  Who knows, but I finished that viz feeling really great about where the week was headed.

Monday – Alberto Cairo and Heather Krause’s MOOC was finally open!  Thankfully I had the day off to soak it all in.  This kept my brain churning.  And by Wednesday I was ready for a workout!

So now that I’ve described my week – what’s the synergy through action part?  Well, I took all the thoughts from the social good project, Workout Wednesday, and the sage wisdom from the MOOC this week to hit on something much closer to home.

I wound up creating a visualization in the vein of the #WorkoutWednesday redo offered up.  What’s it of?  Graduation rates of specific demographics for every county in Arizona for the past 10-ish years.  Stylized into small multiples using a smattering of the slick tricks I was required to use to complete the workout.

Here’s the viz – although admittedly it is designed more as a static view (not quite an infographic).

And to sum it all up: this could be the start of yet another spectacular thing – bringing my passion to the local community I live in, but on a more widespread level (in the words of Dan Murray, user groups are for “Tableau zealots”).

#WorkoutWednesday Week 1

Another great community activity is Workout Wednesday, hosted by Andy Kriebel and Emma Whyte.  According to Andy it’s “designed to test your knowledge of Tableau and help you kick on in your development.”  They’re alternating odd vs. even weeks.

Here’s the first task in a visual nutshell (using Superstore data set):

I’m happy to say that I was able to complete the task.  What was the most interesting part?  To get the dots on the single lines, I ended up redoing a field that had a secondary table calculation and using some built-in functions – RUNNING_SUM() and TOTAL().  The dots continued to be tricky, but I resolved it using AND logic within my IF statement and leveraging LOOKUP().
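
For flavor, the pattern of combining those functions looks roughly like this – an illustrative sketch, not the actual challenge logic:

    // Illustrative only – flag the point where the running total
    // crosses half of the total
    IF RUNNING_SUM(SUM([Sales])) >= TOTAL(SUM([Sales])) / 2
    AND LOOKUP(RUNNING_SUM(SUM([Sales])), -1) < TOTAL(SUM([Sales])) / 2
    THEN RUNNING_SUM(SUM([Sales]))
    END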

I also did a micro upgrade.  The instructions indicated that the red should highlight the “most current year.”  When interacting with the viz on the original blog, I noticed that only 2015 was red and the title was static.  So I added in logic to highlight the most recent year and added the dynamic change to the title as well.

Full viz on my Tableau Public page.