#MakeoverMonday Week 21 – Are Britons Drinking Less?

After some botched attempts at reestablishing routine, #MakeoverMonday week 21 got made within the time-boxed week!  I have one pending makeover and an in-progress blog post to talk about Viz Club and the four vizzes developed during that special time.  But for now, a quick recap of the how and why behind this week’s viz.

This week’s data set was straightforward – aggregated measures sliced by a few dimensions.  And in what I believe is now becoming an obvious trend in how data is published, it included both aggregated and lower-level members within the same field (read this as “men,” “women,” “all people”).  The structured side of me doesn’t like it and screams for me to exclude the aggregates from any visualizations, but this week I figured I’d take a different approach.

The key questions asked related to alcohol consumption frequency by different age and gender combinations (plus those aggregates) – so there was lots of opportunity to compare within those dimensions.  More to the point, the original question and how the data was presented begged to be rephrased into what became the more direct title (Are Britons Drinking Less?).

The question really informed the visualizations – and more to that point, the phrasing of the original article seemed to dictate to me that this was a “falling measure.”  Meaning it has been declining for years or year-to-year, or now compared to then – you get the idea.

With it being a falling measure already expressed in percentages, a “difference from first” table calculation was a natural progression.  With that calculation the first year of the measure is anchored at zero and every subsequent year is compared to it – essentially asking and answering, for every year, “was it more or less than the first year we asked?”  Here’s the beautiful small multiple:

Here the demographics are set to color, lightest blue being youngest to darkest blue being oldest; red is the ‘all.’  I actually really enjoyed being able to toss the red on there for a comparison and it is really nice to see the natural over/under of the age groups (which mathematically follows if they’re aggregates of the different groups).
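If it helps to see the “difference from first” idea outside of Tableau, here’s a minimal pandas sketch of the same logic – the file and column names are made up for illustration:

```python
import pandas as pd

# Hypothetical extract: one row per demographic group per survey year
df = pd.read_csv("drinking_survey.csv")  # columns assumed: Demographic, Year, PctDrinkers
df = df.sort_values(["Demographic", "Year"])

# Anchor each demographic's first surveyed year at zero and express every
# later year as a delta from that starting point
first = df.groupby("Demographic")["PctDrinkers"].transform("first")
df["DiffFromFirst"] = df["PctDrinkers"] - first
```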

One thing I did to add further emphasis was to put positive deltas on size – that is to say, to overemphasize (in a very subdued way whose humor probably only Ann appreciates) when a point runs against the trend.  Or more directly stated: draw the reader’s attention to the points where the percentage response has increased.

Here’s the result:

So older demographics are drinking more than they used to, and that’s fueled by women.  This ties back to the point of the original article and becomes even more obvious when looking at the Teetotal groups, where there are many more fat lines.

Here’s the calculation to create the line sizing:
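In rough terms the sizing is just a flag on that delta – a sketch continuing the hypothetical columns above, not the exact calc:

```python
# Fatten the line wherever the delta runs against the falling trend
df["LineSize"] = df["DiffFromFirst"].apply(lambda d: 2 if d > 0 else 1)
```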

Last up was to make one more view to help sell the message.  I figured a dot plot would mimic champagne bubbles in a very abstract way.  And I also thought open/closed circles in combination with the color encoding would be pleasant for the readers.  The last custom change was to reverse the vertical axis of time.  Time is read top down, and you can see it start to push down and to the left in some of the different groupings.

If you go the full distance and interact with the dashboard, the last thing I hope you’ll notice and appreciate is the color legend/filter bar at the top.  I hate color legends because they lack utility.  Adding in a treemap version of a legend that does double duty as highlight buttons is my happy medium (and only when I feel like color encoding is not actively communicated enough).

#MakeoverMonday Week 18

{witty intro}  This week’s makeover challenge was to take Sydney ferry data for 7 ferry lines and 8 months.  What’s even better is there was another dimension with a domain of 9 members.  This is a dream data set.  I say it’s a dream from the perspective of having two dimensions that can be manipulated and managed (no deciding HOW they have to be reduced or further grouped) and there’s decent data volume with each one.

In the world of visualization, I think this is a great starter data set.  And it was fun for me because I could focus on some of the design rather than deciding on a deep analytical angle.  Plus in the spirit of the original, my approach was to redo the output of “who’s riding the ferries” and make it more accessible.

So the lowdown: first decision made was the color palette.  The ferry route map had a lot of greens in it.  And obviously a lot of blues because of water.

So I wanted to take that idea one step further.  That landed me in a world of deep blues and greens – using the darkest blue/green throughout to typically represent the “most” of something.

These colors informed most decisions that came afterward.  I really wanted to stick to small multiples on this one, just because of how neatly the two medium/small-domained dimensions line up.  Unfortunately – nothing of that nature turned out very interesting.  Here’s an example:

It’s okay and somewhat interesting – especially giving each row the opportunity to have a different axis range.  But you can see the “problem” immediately: there are a few routes that are pretty flat, and beyond that, end users are likely going to be frustrated by the independent axes when they dive deeper to compare.

Pivoting from that point led me to the conclusion that the dimensions shouldn’t necessarily be shown together, but instead one shown within the other.  Worth noting – in the small multiple above you can see that the ‘Adult’ fare is just the most everywhere, all the time.  Which led to this guy:

Where the bars are overall and the dots are Adult fares.  I felt that representing them in this context could free up the other dwarfed fare types to play with the data.

Last step from my end was to highlight those fare types and add a little whimsy.  I knew switching to % of total would be ideal because of how different the trip volumes are for each route.  Interpret this as: normalizing to proportions gave the opportunity to compare the routes.
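As a sketch of that normalization (again with made-up column names), the percent-of-total step is just a group-level division:

```python
import pandas as pd

# Hypothetical extract: one row per route, fare type, and month
ferries = pd.read_csv("sydney_ferries.csv")  # columns assumed: Route, FareType, Month, Trips

# Convert raw trip counts to each fare type's share of its route,
# so big and small routes can sit on the same 0-100% scale
route_totals = ferries.groupby("Route")["Trips"].transform("sum")
ferries["PctOfRoute"] = ferries["Trips"] / route_totals
```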

I actually landed on the area chart by accident – I was stuck with lines, did my typical CTRL + drag of the same pill to try some fun dual-axis work… and Tableau decided to automatically build me an area chart.

The original view of this was obviously not as attractive and I’ve done a few things to enhance how this displays.  The main thing was to eliminate the adult fare from the view visually.  We KNOW it’s the most, let’s move on.  Next was to stretch out the data a bit to see what’s going on in the remaining 30%-ish of rides. (Nerd moment: look at what I titled the sheet.)

Finishing up – there’s some label magic to show only those that are non-adult.  I also RETAINED the axis labels – I am hoping this helps to demonstrate and draw attention to the tagged axis at 50%.  What’s probably the most fun about this viz – you can hover over that same blue space and see the adult contribution – no data lost.

Overall I’m happy with the final effect.  A visually attractive display of data that hopefully invites users into deeper exploration.  Smaller dimension members given a chance to shine, and some straightforward questions asked and answered.

#MakeoverMonday Week 17

After a bit of life prioritization, I’m back in full force on a mission to contribute to Makeover Monday.  To that end, I’m super thrilled to share that I’ve completed my MBA.  I’ve always been an individual destined not to settle for one higher education degree, so having that box checked has felt amazing.

Now on to the Makeover!  This week’s data set was extra special because it was published on the Tableau blog – essentially more incentive to participate and contribute (there’s plenty of innate incentive IMO).

The data was courtesy of LinkedIn and represented 3 years’ worth of “top skills.”  Here’s my best snapshot of the data:

 

This almost perfectly describes the data set, minus the added bonus of there also being a ‘Global’ member in the Country dimension.  At the mixing of aggregations – or of concepts people believe can be aggregated – I sighed just a little bit.  I also sighed at seeing that some countries are missing 2014 skills and that 2016 is truncated to 10 skills each.

So the limitations of the data set meant that there had to be some clever dealing to get around this.  My approach was to take it from a 2016 perspective.  And furthermore to “look back” to 2014 whenever there was any sort of comparison.    I made the decision to eliminate “Global” and any countries without 2014 from the data set.  I find that the data lends itself best to comparison within a given country (my perspective) – so eliminating countries was something I could rationalize.

Probably the only visualization I really cared about was a slope chart.  I thought this would be a good representation of how a skill has gotten hotter (or not).  Here’s that:

Some things I did to jazz it up a bit: added a simple Boolean expression on color to denote whether the rank has improved since 2014, and added reference lines for the years to anchor the lines.  I’ve done slope charts different ways, but this one somehow evolved into this approach.  Here’s what the sheet looks like:

Walking through it, starting with the filter shelf: I’ve got an Action filter on country (based on action filter buttons elsewhere on the dashboard).  Year has been added to context and 2015 eliminated.  A data source filter removes the countries without 2014 data and Global.  Skill is filtered via an LOD for 2016 Rank <> 0, which ensures I’m only using 2016 skills.  The context filters keep everything looking pretty for the countries.
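If it helps to see that filtering stack outside of Tableau, here’s roughly what it amounts to in pandas – the field names are assumptions, not the actual extract:

```python
import pandas as pd

# Hypothetical extract: Country, Skill, Year, Rank
skills = pd.read_csv("linkedin_top_skills.csv")

# Data source filters: drop 'Global' and any country with no 2014 data
countries_with_2014 = skills.loc[skills["Year"] == 2014, "Country"].unique()
skills = skills[(skills["Country"] != "Global") & (skills["Country"].isin(countries_with_2014))]

# Context-style filter: 2015 is eliminated entirely
skills = skills[skills["Year"] != 2015]

# LOD-style filter: keep only country/skill pairs that actually have a 2016 rank
ranked_2016 = skills.loc[skills["Year"] == 2016, ["Country", "Skill"]].drop_duplicates()
skills = skills.merge(ranked_2016, on=["Country", "Skill"], how="inner")
```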

The year lines are reference lines – all headers are hidden.  There’s a dual axis on rows to have line chart & circle chart.  The second Year in columns is redundant and leftover from an abandoned labeling attempt (but adds nice dual labels automatically to my reference lines).

Just as a note – I combined the 2016 LOD with a 2014 LOD to do some cute math for line size; I didn’t like it, so I abandoned it.

Last steps were to add additional context to the “value” of 2016 skills.  So a quick unit chart and word cloud.  One thing I like to do on my word clouds these days is square the values on size.  I find that this makes the visual indicator for size easier to understand.  What’s great about this is that smaller rank is better, so instead of “^2” it became this:

Sometimes math just does you a real solid.
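Since a smaller rank means a hotter skill, the trick is to invert before squaring so rank 1 gets the biggest word.  One plausible form of that calc, purely as a hypothetical sketch:

```python
def word_size(rank: int) -> float:
    # Invert so rank 1 is largest, then square to exaggerate the separation
    # (hypothetical; the actual calculated field may differ)
    return (1 / rank) ** 2
```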

The kicker of this entire data set for me, and the knowledge gained: Statistical Analysis and Data Mining are hot!  Super hot!  I also really like that User Interface Design and Algorithm Design made it to the top 10 for the United States.  I would tell anyone that a huge component of my job is designing analytical outputs for all types of users, and that requires an amount of UX design.  And coincidentally I’m making an algorithm (a basic linear equation) to determine how to eliminate a backlog, all in Tableau.

Makeover Monday Week 8 – Potatoes in the EU

I’ll say this first – I don’t eat potatoes.  Although potatoes are super tasty, I refuse to have them as part of my diet.  So I was less than thrilled about approaching a week that was pure potato (especially coming off the joy of Valentine’s Day).  Nonetheless – it presented a perfect opportunity for growth and skill testing.  Essentially, if I could make a viz I loved about a vegetable I hate, that would speak to my ability to interpret varying data sets and build out displays.

I’m very pleased with the end results.  I think it has a very Stephen Few-esque approach.  Several small multiples with high and low denoted, color playing throughout as a dual encoder.  And there’s even visual interest in how the data was sorted for data shape.

So how did I arrive there?  It started with the bar chart of annual yield.  I had an idea on color scheme and knew that I wanted to make it more than gray.

This gave the perfect opportunity to highlight the minimum and maximum yields – to see in which years different countries’ production was affected by things like weather and climate.  It’s actually very interesting to see that not too many of the dark bars (max value) are in more recent times.  Seems like agricultural innovation is keeping pace with climate issues.

After that I was hooked on this idea of sets of 3.  So I knew I wanted to replicate a small multiple in a different way using the same sort order.  That’s where Total Yield came in.  I’ve been pondering this one in the shower – the legitimacy of adding up annual ratios for an overall yield.  My brain says it’s fine because the size of the country doesn’t change.  But my vulnerable brain part says that someone may take issue with it.  I’d love for a potato farming expert to come along and tell me if that’s a silly thing to add up.  I see the value in doing a straight total comparison of the years: although there’s fluctuation in the yield annually, we get a normalized way to show how much each country produces irrespective of total land size.

Next was the dot plot of the current year.  This actually started out its life as a KPI indicator of up or down from previous, but it was too much for the visual.  I felt the idea of the dot plot of current year would do more justice to “right now” understanding.  Especially because you can do some additional visual comparison to its flanks and see more insight.

And then rinse/repeat for the right side.  This is really where things get super interesting.  The amount of variability in pricing for each country, both by average and current year.  Also – 2013 was a great year for potatoes.

And so it begins – Adventures in Python

Tableau 10.2 is on the horizon and with it comes several new features – one that is of particular interest to me is their new Python integration.  Here’s the Beta program beauty shot:

Essentially what this will mean is that more advanced programming languages aimed at doing more sophisticated analysis will become an easy-to-use extension of Tableau.  As you can see from the picture, it’ll work similarly to the R integration, with the end user using the SCRIPT_STR() function to pass through native Python code and return output.
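As a rough illustration of the pattern (with a made-up field and deliberately trivial Python – nothing from the Beta itself), a TabPy-style calculated field looks something like this:

```
// _arg1 is the aggregated field passed in; the Python list comes back one value per partition row
SCRIPT_STR(
    "return [t.upper() for t in _arg1]",
    ATTR([Tweet Text])
)
```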

I have to admit that I’m pretty excited by this.  For me I see this propelling some data science concepts more into the mainstream and making it much easier to communicate and understand the purpose behind them.

In preparation I wanted to spend some time setting up a Linux Virtual Machine to start getting a ‘feel’ for Python.

(Detour) My computer science backstory: my intro to programming was C++ and Java.  They both came easy to me.  I tried to take a mathematics class based in UNIX later on that was probably the precursor to some of the modern languages we’re seeing, but I couldn’t get on board with the “terminal” level entry.  Very off putting coming from a world where you have a better feedback loop in terms of what you’re coding.  Since that time (~9 years ago) I haven’t had the opportunity to encounter or use these types of languages.  In my professional world everything is built on SQL.

Anyway, back to the main heart – getting a box set up for Python.  I’m a very independent person and like to take the knowledge I’ve learned over time and troubleshoot my way to results.  The process of failing and learning on the spot with minimal guidance helps me solidify my knowledge.

Here are the steps I went through – mind you, I have a PC and I am intentionally running Windows 7.  (This is a big reason why I made a Linux VM.)

  1. Download and install VirtualBox by Oracle
  2. Download x86 ISO of Ubuntu
  3. Build out Linux VM
  4. Install Ubuntu

These first four steps are pretty straightforward in my mind.  Typical Windows installer for VirtualBox.  Getting the image is very easy as is the build (just pick a few settings).

Next came the Python part.  I figured I’d have to install something on my Ubuntu machine, but I was pleasantly surprised to learn that Ubuntu already comes with Python 2.7 and 3.5.  A step I don’t have to do, yay!

Now came the part where I hit my first real challenge.  I had this idea of getting to a point where I could go through steps of doing sentiment analysis outlined by Brit Cava on the Tableau blog.  I’d reviewed the code and could follow the logic decently well.  And I think this is a very extensible starting point.

So based on the blog post I knew there would be some Python modules I’d be in need of.  Googling led me to believe that installing Anaconda would be the best path forward, since it contains several of the most popular Python modules.  Thus installing it would eliminate the need to individually add in modules.

I downloaded the file just fine, but the instructions on “installing” were less than stellar.  Here are the instructions:

Directions on installing Anaconda on Linux

So as someone who takes instructions very literally (and again – doesn’t know UNIX very well) I was unfortunately greeted with a nasty error message lacking any help.  Feelings from years ago were creeping in quickly.  Alas, I Googled my way through this (and had a pretty good inkling that it just couldn’t ‘find’ the file).

What they said (also notice I already dropped the _64, since mine isn’t 64-bit).

 

And that was all that was needed to get the file to install!

So installing Anaconda turned out to be pretty easy – once I got the right command in the prompt.  Then came the fun part: trying to do sentiment analysis.  I knew enough based on reading that Anaconda came with the three modules mentioned: pandas, nltk, and time.  So I felt like this was going to be pretty easy to try and test out – coding directly from the terminal.

Well – I hit my second major challenge.  The lexicon required to do the sentiment analysis wasn’t included.  So I had no way of actually doing the sentiment analysis and was left to figure it out on my own.  This part was actually not that bad; Python gave me a good prompt to fix the issue – essentially to call the nltk downloader and get the lexicon.  And the nltk downloader has a cute little GUI to find the right lexicon (vader).  I got this installed pretty quickly.
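For reference, the downloader call that pulls the VADER lexicon is a one-liner – this is standard NLTK, nothing specific to the blog post:

```python
import nltk

# Fetch the VADER lexicon that SentimentIntensityAnalyzer depends on
nltk.download('vader_lexicon')
```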

Finally – I was confident that I could input the code and come up with some results.  And this is where I hit my last obstacle and probably the most frustrating of the night.  When pasting in the code (raw form from blog post) I kept running into errors.  The message wasn’t very helpful and I started cutting out lines of code that I didn’t need.

What’s the deal with line 5?

Eventually I figured out the problem – there were weird spaces in the raw code snippet, and whitespace is significant in Python.  After some additional googling (this time by my husband), he kindly said “apparently spaces matter according to this forum.”  No big deal – lesson learned!

Yes! Success!

So what did I get at the end of the day?  A wonderful CSV output of sentiment scores for all the words in the original data set.
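The rough shape of that script – not Brit Cava’s exact code, just a sketch with assumed file and column names – looks something like this:

```python
import pandas as pd
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# Hypothetical input: one word per row in a 'word' column
words = pd.read_csv("words.csv")

# Score each word with VADER's compound sentiment value
sid = SentimentIntensityAnalyzer()
words["sentiment"] = words["word"].apply(lambda w: sid.polarity_scores(w)["compound"])

# Write the scored words out as a CSV
words.to_csv("word_sentiment.csv", index=False)
```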

Looking good, there’s words and scores!
Back to my comfort zone – a CSV

Now for the final step – validating that my results aligned with expectations.  And they did – yay!

0.3182 = 0.3182

Next steps: viz the data (obviously).  And I’m hoping to extend this to an additional sentiment analysis, maybe even something from Twitter.  Oh and I also ended up running (you guessed it, already installed) a Jupyter notebook to get over the pain of typing directly in the Terminal.

Synergy through Action

This has been an amazing week for me.  On the personal side of things my ship is sailing in the right direction.  It’s amazing what the new year can do to clarify values and vision.

Getting to the specifics of why I’m calling this post “Synergy through Action.”  That’s the best way for me to describe how my participation in this week’s Tableau and data visualization community offerings have influenced me.

It all actually started on Saturday.  I woke up and spent the morning working on a VizforSocialGood project, specifically a map to represent the multiple locations connected to the February 2017 Women in Data Science conference.  I’d been called out on Twitter (thanks Chloe) and felt compelled to participate.  The kick of passion I received after submitting my viz propelled me into the right mind space to tackle 2 papers toward my MBA.

Things continued to hold steady on Sunday where I took on the #MakeoverMonday task of Donald Trump’s tweets.  I have to imagine that the joy from accomplishment was the huge motivator here.  Otherwise I can easily imagine myself hitting a wall.  Or perhaps it gets easier as time goes on?  Who knows, but I finished that viz feeling really great about where the week was headed.

Monday – Alberto Cairo and Heather Krause’s MOOC was finally open!  Thankfully I had the day off to soak it all in.  This kept my brain churning.  And by Wednesday I was ready for a workout!

So now that I’ve described my week – what’s the synergy in action part?  Well I took all the thoughts from the social good project, workout Wednesday, and the sage wisdom from the MOOC this week to hit on something much closer to home.

I wound up creating a visualization in the vein of the #WorkoutWednesday redo offered up.  What’s it of?  Graduation rates of specific demographics for every county in Arizona for the past 10-ish years.  Stylized into small multiples using a smattering of the slick tricks I was required to use to complete the workout.

Here’s the viz – although admittedly it is designed more as a static view (not quite an infographic).

 

And to sum it all up: this could be the start of yet another spectacular thing.  Bringing my passion to the local community that I live in – but more on a widespread level (in the words of Dan Murray, user groups are for “Tableau zealots”).

Makeover Monday 2017 – Week 3 Trump Tweets

**Update (1/20/17):** The original data set had a date formatting snafu that caused 1,307 tweets in the 12:00-12:59 PM (UTC) hour to be displayed as 00:00-00:59 (aka the 12 AM hour).  This affected 4.3% of the original data set visualization and has been corrected.  I have also added a footnote denoting that the visualization is in EST.  This affects the shape of the data in both the 4 AM – 8 AM and 4 PM – 8 PM sections.

Rolling right along into week 3’s Makeover Monday.  The data set this week: Donald Trump’s tweets.  The original Buzzfeed viz and article accompanying this analyzed Trump’s retweet activity since his announcement of running for president.  The final viz ended up being what I would best describe as bubble charts of the top users he retweeted during this time:

What’s interesting is that the actual article goes into significant depth on how their team systematically reviewed the tweets.  It’s a bummer that the additional analysis couldn’t be synthesized into visual form.

My take on the makeover this week was driven completely by the underlying data available.  The TDE provided had the following fields:

Two things stuck out to me with the data.  First: the username being retweeted wasn’t included; second: the entire tweet text was included.  Having full text available just screams for some sort of text analysis.  I got committed at that point to doing something with the text.

My initial idea was to do some sort of sentiment analysis.  Recently I had installed both R-Studio and Python on my PC to try integration with Tableau.  I’d had success with R-Studio (mind you after watching a brief YouTube video), but I hadn’t gotten Python to cooperate (my effort in assisting in this cooperation = 2 out of 10).  I figured since I had both available maybe I should make an attempt.  After marinating on the concept I didn’t feel comfortable adding more sentiment analysis to the fire of American politics.  (On a personal note: I have been politically checked out since the early primaries.)

So instead of doing sentiment analysis, I decided to turn the data more into text mining for mentions and hashtags.  I had done some fiddling with the time component and was digging how the cycle plot/horizon chart were playing out visually.  So it seemed natural to continue on a progression of getting more details out of the bars and times of day.

Note on the time: the time comes graciously parsed into the correct format with the data.  Looking at the original time, I am under the impression it was represented in GMT (+0000).  To adjust for this, I added -5 hours to all of the parsed dates to put them in EST, aka Trump time.
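The adjustment itself is just a five-hour subtraction – here’s a quick pandas sketch with an assumed column name:

```python
import pandas as pd

# Hypothetical extract with tweet timestamps already parsed as GMT/UTC datetimes
tweets = pd.read_csv("trump_tweets.csv", parse_dates=["created_at"])

# Shift everything back five hours to land in EST, aka Trump time
tweets["created_est"] = tweets["created_at"] - pd.Timedelta(hours=5)
```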

So back to text mining.  Post-#data16 conference, a colleague of mine was recounting how to use regex to scrub through text.  I walked away from his talk thinking I need to use that the next time I have the opportunity.  And what I love about it: it’s a NATIVE FUNCTION IN TABLEAU!!  So this was making me sing.  Now I don’t know a ton about regex (lots of notation I have yet to memorize), so I decided to quickly google my way to getting the user handles and hashtags.  These handy results really made this analysis zip along: regexr & regex+twitter.
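The patterns themselves end up being pretty small – in Tableau they feed REGEXP_EXTRACT, but here’s the same idea sketched in Python so the expressions are easy to see:

```python
import re

tweet = "RT @realDonaldTrump: Make America Great Again! #MAGA"

# Rough patterns for handles and hashtags; good enough for mining,
# not a full specification of Twitter's allowed characters
handles = re.findall(r"@(\w+)", tweet)   # ['realDonaldTrump']
hashtags = re.findall(r"#(\w+)", tweet)  # ['MAGA']
```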

Everything else came to life pretty quickly.  I knew I wanted to include at least one or two tweets to read through, but I wanted to keep it curated.  I think this was accomplished well and I spent a good deal of time trying out different time combinations just to see what would bubble to the surface.

A final note on aesthetics this week: I’m reading Alberto Cairo’s The Functional Art, and as I mentioned in an earlier post, I’m also participating in his MOOC that starts tomorrow.  I am only 4 chapters in, but Alberto has me taking a few things to heart.  I don’t think it is by coincidence that I decided to push the beauty side of things.  I always strive for elegance, but I strive for it through white space and keeping that “data ink ratio” at a certain point.  But I’m not blind to the different visualizations out there that attract people.  So for once I used a non-white background (yay!).  And I also went for a font that’s well outside of the look of my usual vizzing font.

More important than the aesthetics is, of course, the function of the viz.  I tried to spend more time thinking about the audience and what they were going to “get” out of it.  I hope that the final product is less of a “visual aid” to my analysis and more of an interactive tool to explore the tweets of the soon-to-be President.

Full viz available on my Tableau public page.

#DataResolutions – More than a hashtag

This gem of a blog post appeared on Tableau Public and within my twitter feed earlier this week asking what my #DataResolutions are.  Here was my lofty response:

 


Sound like a ton of goals and setting myself up for failure?  Think again.  At the heart of most of my work with data visualization are 2 concepts: growth and community.  I’ve had the amazing opportunity to co-lead and grow the Phoenix Tableau user group over the past 5+ months.  And one thing I’ve learned along the way: to be a good leader you have to show up.  Regardless of skill level, technical background, formal education, we’re all bound together by our passion for data visualization and data analytics.

To ensure that I communicate my passion, I feel that it’s critical to demonstrate it.  It grows me as a person and stretches me outside of my comfort zone to an extreme.  And it opens up opportunities and doors for me to grow in ways I didn’t know existed.  A great example of this is enrolling in Alberto Cairo and Heather Krause’s MOOC Data Exploration and Storytelling: Finding Stories in Data with Exploratory Analysis and Visualization.  I see drama and storytelling as a development area for me personally.  Quite often I get so wrapped up in the development of data stories that the final product is a single component being used as my own visual aid.  I’d like to learn how to communicate the entire process within a visualization and guide a reader through it.  I also want to be surrounded by 4k peers who have their own passion and opinions.

Moving on to collaborations.  There are 2 collaborations I mentioned above, one surrounding data+women and the other data mashup.  My intention behind developing these is to once again grow out of my comfort zone.  Data Mashup is also a great way for me to enforce accountability to Makeover Monday and to develop my visualization interpretation skills.  The data+women project is still in an incubation phase, but my goal there is to spread some social good.  In our very cerebral world, sometimes it takes a jolt from someone new to serve as fuel for validation and action.  I’m hoping to create some of this magic and get some of the goodness of it from others.

More to come, but one thing is for sure: I can’t fail if I don’t write down what I want to achieve.  The same is true for achievement: unless it’s written down, how can I measure it?

Makeover Monday 2017 – Week 2

It’s time for Makeover Monday – Week 2.  This week’s data set was the quarterly sales (by units) of Apple iPhones for the past 10ish years.  The original article accompanying the data indicated that the golden years of Apple may be over.

So let me start by saying – I broke the rules (or rather, the guidelines).  Makeover Monday guidelines indicate that the goal is to improve upon the original visualization and stick to the original data fields.  I may have overlooked that guideline this week in favor of adding a little more context.

When I first approached the data set and dropped it into Tableau, the first thing I immediately noticed was that Q4 always has a dip compared to the other quarters of the year.

This view contradicted all of my existing knowledge of how iPhone releases work.  Typically every year Apple holds a conference around the middle/end of September announcing the “new” iPhone – either the incremental update (the off year, aka the S) or the new generation.  It lines up such that pre-sales and sales come in the weeks shortly following.  And in addition to that, I would suspect that sales stay heightened throughout the holiday season.

This is where I immediately went back to the data to challenge it, and I noticed that Apple defines its fiscal year differently.  Specifically, October to December (of the previous calendar year) counts as Q1 of the current fiscal year.  Essentially Q1 of 2017 is actually 10/1/16 to 12/31/16.  Meaning that, when thinking in normalized calendar quarters, everything should be shifted.
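Written out, the adjustment is a simple mapping from Apple’s fiscal calendar back to ordinary calendar quarters – a quick sketch:

```python
def fiscal_to_calendar(fiscal_year: int, fiscal_quarter: int) -> tuple[int, int]:
    """Apple's fiscal Q1 is Oct-Dec of the *previous* calendar year."""
    if fiscal_quarter == 1:
        return fiscal_year - 1, 4           # fiscal Q1 2017 -> calendar Q4 2016
    return fiscal_year, fiscal_quarter - 1  # fiscal Q2-Q4 -> calendar Q1-Q3

print(fiscal_to_calendar(2017, 1))  # (2016, 4)
```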

Now I was starting to feel much better about how things were looking.  It aligned with my real world expectations.

I still couldn’t help but feel that a significant portion of the story was missing.  In my mind it wasn’t fair to only look at iPhone sales over time without understanding more data points of the smartphone market.  I narrowed it down to overall sales of smartphones and number of smartphone users.  The idea I had was this: have we reached a point where the number of smartphone users is now a majority?  Essentially the Adoption Curve came to my mind – maybe we’ve hit that sweet spot where the Late Majority is now getting in on smartphones.

To validate the theory and keep things simple, I did quick searches for data sets I could bring into the view.  As if through serendipity, the two additional sources I stumbled upon came from the same source as the original (statistica.com).  I went ahead and added them into my data set and got to work.

My initial idea was this: line plot of iPhone sales vs. overall smartphone sales, to see if the directionality was the same.  Place a smaller graph of smartphone users to the side (mainly because it was US only; I couldn’t find a free global data set).  And the last viz was going to be a combination of the 3 showing basic “growth” change.  That, in my mind, would display an answer to my question in a very basic way.

I went through a couple of iterations and finally landed on the view below as my final.

I think it sums up the thought process and answers the question I originally asked myself when I approached the data set.  And hopefully I can be pardoned (if that’s even necessary), since the data I added merely enhanced the information at hand and kept with the simplicity of the data points available (units and time).

Makeover Monday 2017 – Week 1

It’s officially 2017 – the start of a new year.  As such, this is a great time for anyone in the Tableau universe to make a fresh commitment to participate in the community challenge known as Makeover Monday.

As I jump into this challenge, I’ve made the conscious decision to start with the things I already like doing and to add on each time.  This to me is the way that I’ll be able to stay actively involved and enthusiastic.  Essentially: keep it simple.

For this week’s data set it was obvious that something of a comparative nature needed to be applied.  I started off with a basic dot plot and went from there.

What I ended up with: a slope chart with the slope representing the delta in rank of income by gender, the size of the line representing the annual monetary difference in income, and 3 colors representing categorized multipliers on the wage gap.

I wanted this to be for a phone, so I held to the idea of a single viz.  Interactivity is really limited to tooltips, most other nuance comes from the presentation of the visualization itself.

And I pushed myself to add a little journalistic flair this week.  Not really my style, but I figured I would see where it took me.