Tag Archives: Data

Anatomy of an Analysis (Part 2) – The Enrichening

In the first part of this analysis, I turned a short list of movies into a database that could be used to answer basic questions about the list’s contents. Now I’d like to broaden this analysis by combining the original list with additional outside information — a process called data enrichment.

First, I needed to find and process a new set of data. In this case, I chose a list of the Best Movies of All Time compiled by the popular film review aggregator Rotten Tomatoes, because I thought it might include movies that were more popular with a general audience. The RT list ranks movies by their adjusted Tomatometer rating (as of mid-August 2015) and pulls out the top 100. I copied this list over to a spreadsheet and created fields for Rank, Film, Year, and Decade.

Once this information was ready, I used the name of the movie itself to join the RT list to the original BBC list. This approach, while perfectly reasonable, carries some risk because the two sources do not always agree exactly; when they don’t, you have to match the entries by hand. Can you spot the problems associated with each pair of names below?

Best Movies (RT) | Greatest American Films (BBC)
Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb | Dr Strangelove
E.T. the Extra-Terrestrial | ET: The Extra-Terrestrial
It’s a Wonderful Life | It’s a Wonderful Life
One Flew Over the Cuckoo’s Nest | One Flew Over the Cuckoo’s Nest
Schindler’s List | Schindler’s List
The Godfather Part II | The Godfather Part II

The first mismatch is pretty obvious because Rotten Tomatoes includes the full tagline for the movie Dr. Strangelove in the title while the BBC does not. However, there are also some subtle differences in punctuation (such as the period after the abbreviation of “doctor” in the first column) that would still cause problems during a join. These punctuation issues show up more clearly with the second item, which differs in both the abbreviation of “E.T.” and the inclusion of a colon in the BBC version of the title. It gets more subtle from there!

The next three movie titles all contain a contraction or a possessive noun, but one source uses an apostrophe while the other uses a single quotation mark. (To make this problem even harder to spot, some web browsers render them both the same; check the page source.) Finally, the last pair looks identical … except that the first listing of The Godfather Part II includes a trailing space. Pretty esoteric, I know, but that is life in the data world.
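In practice, most of these mechanical mismatches can be fixed programmatically by normalizing both title columns before the join. Here is a minimal Python sketch; the normalization rules are my own illustration, not the exact cleanup I used:

```python
import re

def normalize_title(title):
    """Reduce a movie title to a canonical form for joining."""
    # Unify curly apostrophes/quotes with their straight ASCII equivalents
    title = title.replace("\u2019", "'").replace("\u2018", "'")
    # Collapse internal whitespace and strip leading/trailing spaces
    title = re.sub(r"\s+", " ", title).strip()
    # Drop punctuation that varies between sources (periods, colons)
    title = re.sub(r"[.:]", "", title)
    return title.lower()

# The trailing space, curly apostrophe, and abbreviation no longer block the match
assert normalize_title("The Godfather Part II ") == normalize_title("The Godfather Part II")
assert normalize_title("It\u2019s a Wonderful Life") == normalize_title("It's a Wonderful Life")
assert normalize_title("E.T. the Extra-Terrestrial") == normalize_title("ET: The Extra-Terrestrial")
```

Note that the Dr. Strangelove tagline still requires a hand edit; normalization only handles the mechanical differences.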

With the two data sources aligned, I then created my final enhanced database and explored the information using another pivot table.

The first thing I noticed when I compared the BBC list to the Rotten Tomatoes list is that they had only 22 films in common. This surprised me a little at first, but it makes sense once you realize that the RT list is not limited to American films. It also seemed to support my initial instinct that the RT database would contain many more recent films due to its online format.

TOP DECADES IN FILM (BBC vs. Rotten Tomatoes)
Top_Film_Decades_2

A quick look at the films by source and decade (above) shows a huge number of recent films in the RT listing (including one, Mad Max: Fury Road, that was still in theaters when I first downloaded the data). It is also interesting to note that the spike in “best” movies for Rotten Tomatoes occurs in the 1950s instead of the 1970s; the large number of foreign films on the RT list for that decade quickly leads to discussions of Japan’s “Golden Age” of cinema during the period.

RANK COMPARISONS OF AMERICAN FILM (BBC vs. Rotten Tomatoes)
Rank_Comparisons_1

Another interesting view of this information can be seen when you compare the two ranked lists side-by-side. The chart above shows the 22 films that appear on both lists with a line connecting their two ranks. This makes it easy to see where the sources agree and where they disagree. Several of the critical darlings (Citizen Kane, The Godfather, Singin’ in the Rain, and North by Northwest) also rank high on the RT list while others (many of them from the American New Wave period of the 1970s) show a drop in popularity. Meanwhile, other classically popular films like The Wizard of Oz and ET: The Extra-Terrestrial float upward.

Anatomy of an Analysis (Part 1)

A few weeks ago, the BBC News produced a list of the top 100 greatest American films based on input from critics from around the world.

Here are the top ten films presented in rank order:

  1. Citizen Kane (Orson Welles, 1941)
  2. The Godfather (Francis Ford Coppola, 1972)
  3. Vertigo (Alfred Hitchcock, 1958)
  4. 2001: A Space Odyssey (Stanley Kubrick, 1968)
  5. The Searchers (John Ford, 1956)
  6. Sunrise (FW Murnau, 1927)
  7. Singin’ in the Rain (Stanley Donen and Gene Kelly, 1952)
  8. Psycho (Alfred Hitchcock, 1960)
  9. Casablanca (Michael Curtiz, 1942)
  10. The Godfather Part II (Francis Ford Coppola, 1974)

There is really nothing too surprising here. Perennial favorite Citizen Kane tops the list followed by The Godfather and Vertigo — two of the most famous films (by two of the most famous directors) ever produced. Perusing the full list, you might recognize a few other titles and maybe think about adding some of them to your Netflix queue. But that’s about it. Aside from a handful of ancillary stories, there was little additional commentary to draw you deeper into the story. Sensing an opportunity, I decided to use this list to demonstrate the steps involved in a quick and simple analysis of data found “in the wild.”

Here follows a demonstration of my 5-step program for data analysis:

Source

The BBC asked each critic to submit a list of the ten films they felt were the greatest in American cinema (“… not necessarily the most important, just the best.”). For the project, an “American film” was defined as any movie that received funding from a U.S. source, a definition that included many films by foreign directors as well as films shot outside the country. The highest-ranking film on each list received ten points, the next film down received nine points, and so on; the tenth pick received one point. All the points were then tallied to produce the final list.
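The tallying scheme above is easy to sketch in a few lines of Python; the short ballots here are invented for illustration:

```python
from collections import Counter

def tally(ballots):
    """Score ranked ballots: a first pick earns 10 points, a tenth pick earns 1."""
    points = Counter()
    for ballot in ballots:
        for position, film in enumerate(ballot):
            points[film] += 10 - position  # position 0 -> 10 points, position 9 -> 1
    return points.most_common()

# Two invented three-film ballots
ballots = [
    ["Citizen Kane", "Vertigo", "The Godfather"],
    ["The Godfather", "Citizen Kane", "Casablanca"],
]
assert tally(ballots)[0] == ("Citizen Kane", 19)  # 10 + 9 points
```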

Processing

Even though the resulting “listicle” is fairly simple, it contains a lot of interesting information just waiting to be freed from its icy confines. I pulled the list into Excel and used some very basic string (text) manipulation to create four basic fields from each row of information:
List_String_Manipulation_1

Additional manipulation of the “Year” field yields a useful grouping category:
List_String_Manipulation_2

With the creation of these five fields, I now have a flexible database instead of a rigid list.
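For readers who prefer code to Excel formulas, the same string manipulation can be sketched with a regular expression. The row format is taken from the BBC list shown earlier; the field names are mine:

```python
import re

# Matches rows like "1. Citizen Kane (Orson Welles, 1941)"
ROW = re.compile(r"(?P<rank>\d+)\.\s+(?P<film>.+)\s+\((?P<director>.+),\s*(?P<year>\d{4})\)")

def parse_row(row):
    """Split one list entry into Rank, Film, Director, Year, and Decade fields."""
    m = ROW.match(row.strip())
    year = int(m.group("year"))
    return {
        "rank": int(m.group("rank")),
        "film": m.group("film"),
        "director": m.group("director"),
        "year": year,
        "decade": (year // 10) * 10,  # the grouping field derived from Year
    }

record = parse_row("7. Singin’ in the Rain (Stanley Donen and Gene Kelly, 1952)")
assert record["film"] == "Singin’ in the Rain"
assert record["decade"] == 1950
```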

Organization

The final data set was “stored” as a table in a simple spreadsheet. Although I have many problems using Excel for data storage (more on that in a future post), it is a quick and easy way to organize small sets of data.

Transformation

Once the data was in the format I wanted, I created a pivot table that allowed me to manipulate information in different ways. I was particularly interested in answering questions like “Who are the top directors?”, “When were most of these films made?”, and “Was there ever a ‘Golden Age’ of modern cinema?” Most of these questions can be answered through simple grouping and summarization.
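Outside of a spreadsheet, the same grouping and summarization takes only a few lines. A sketch using a handful of records from the top-ten list:

```python
from collections import Counter

films = [
    {"film": "Citizen Kane", "director": "Orson Welles", "decade": 1940},
    {"film": "Vertigo", "director": "Alfred Hitchcock", "decade": 1950},
    {"film": "Psycho", "director": "Alfred Hitchcock", "decade": 1960},
    {"film": "The Godfather", "director": "Francis Ford Coppola", "decade": 1970},
]

# "Who are the top directors?" -- count films per director
by_director = Counter(f["director"] for f in films)
assert by_director["Alfred Hitchcock"] == 2

# "When were most of these films made?" -- count films per decade
by_decade = Counter(f["decade"] for f in films)
assert by_decade[1950] == 1
```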

Serve

After all that work, it’s time to pull together the results and display them in some way. For this exercise, that means a few simple tables and charts:

TOP 10 DIRECTORS IN AMERICAN FILM

Director | # of Films in the Top 100
Stanley Kubrick | 5
Steven Spielberg | 5
Alfred Hitchcock | 5
Billy Wilder | 5
Francis Ford Coppola | 4
Howard Hawks | 4
Martin Scorsese | 4
John Ford | 3
Orson Welles | 3
Charlie Chaplin | 3

TOP YEARS IN AMERICAN FILM

Year | # of Films in the Top 100
1975 | 5
1980 | 4
1974 | 4
1959 | 4
1939 | 3
1941 | 3
1977 | 3
1946 | 3
1994 | 3

TOP DECADES IN AMERICAN FILM
Top_Film_Decades_1

These simple presentation tools start to tell some interesting stories and — like all good analysis tools — start to hint at additional avenues of exploration. For example, while two of the directors with five films in the Top 100 (Kubrick, Hitchcock) also made it into the Top 10, the other two (Spielberg and Wilder) did not … why? The year with the most films on the list was 1975 … what were the films? The 1970s account for over 20% of the films on the list … what was going on in the culture that led to this flowering of expression?

It would have been really great if the BBC article had included some sort of interactive tool that allowed readers to explore the database themselves. I will see what I can do to tackle this in an upcoming post.

Data Literacy 101: What is Data?

Whenever the topic of data comes up at meetings or informal conversations it doesn’t take long for people’s eyes to glaze over. The subject is usually considered so complex and esoteric that only a few technically-minded geeks find value in the details. This easy dismissal of data is a real problem in the modern business world because so much of what we know about customers and products is codified as information and stored in corporate databases. Without a high level of data literacy this information sits idle and unused.

One way I try to get people more interested in data is to make a distinction between data management and data content. In its broadest sense, data management consists of all the technical equipment, expertise, security procedures, and quality control measures that go into riding herd on large volumes of data. Data content, on the other hand, is all the fun stuff that is housed and made accessible by this infrastructure. To put it another way, think of data management as a twisty, mountain road built by skilled engineers and laborers while data content is the Ferrari you get to drive on it.

Okay, maybe that’s taking it a bit too far. Stick with me.

At its most basic, data is simply something you want to remember (a concept I borrowed from an article by Rob Karel). Examples might include:

  • Your home address
  • Your mom’s birthday
  • Your computer password
  • A friend’s phone number
  • Your daughter’s favorite color

You could simply memorize this information, of course, but human memory is fragile, so we often collect personally meaningful information and store it in “tools” like calendars, address books, spreadsheets, databases, or even paper lists. Although this last item might not seem like a robust data storage method, it is a good introduction to some basic data concepts. (I’ve talked about the appeal of “Top 10” lists as a communication tool in a previous post but I didn’t really address their specific structure.)

Let’s start with a simple grocery list:

Data101_List_1

Believe it or not, this is data. A list like this has a very loose data structure consisting of related items separated by some sort of “delimiter” like a comma or — in this case — a new line or row on our fake note pad. You can add or subtract items from the list, count the total number of items, group items into categories (like “dairy” or “bakery”), or sort items by some sequence. Many of you will have created similar lists because they are great external memory aids.
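These basic list operations are easy to demonstrate in code; a newline is just another delimiter (the items below are illustrative):

```python
groceries = "milk\neggs\nbread\npeanut butter"

items = groceries.split("\n")       # split the list on its delimiter
assert len(items) == 4              # count the total number of items

items.append("coffee")              # add an item
items.remove("eggs")                # subtract an item
assert sorted(items)[0] == "bread"  # sort items into some sequence
```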

The problem with this list is that it is very generalized. You could give this grocery list to ten different people and get ten different results. How many eggs do you want? Do you want whole milk, 2%, or fat free? What type of bread do you want? What brand of peanut butter do you like?

This list really only works for you because a memory aid works in concert with your own personal circumstances. If someone doesn’t share that context then the content itself doesn’t translate very well. That’s okay for “to do” lists or solo trips to the grocery store but doesn’t work for a system that will be used by multiple people (like a business). In order to overcome this barrier you have to add specificity to your initial list.

Data101_List_2

This is a grocery list that I might hand over to my teenage son. It is more specific than the first list and has exact amounts and other additional details that he will need to get the order right. Notice, however, that there is a cost for this increased level of specificity, with the second list containing over four times as many characters as the first one. At the same time, this list still lacks key attributes that would help clarify the request for non-family members.

If we are going to make this list more useful to others, we need to continue to improve its specificity while making it more versatile. One way to do this is to start thinking about how we would merge several grocery lists together.

Data101_List_3

Here is our original list stacked on top of a second list of similar items. I’ve added brand names to both of them and included a heading above each list with the name of the list’s owner. The data itself is still “unstructured”, however, meaning it is not organized in any particular way. This lack of structure doesn’t necessarily interfere with our goal of buying groceries but it does limit our ability to organize items or find meaningful patterns in the data. As our list grows this problem is compounded. Eventually, we’ll need to find some way of introducing structure to our lists.

Data101_List_4

One step we can take is to break up our list entries and put the individual pieces into a table. A table is an arrangement of rows and columns where each row represents a unique item and each column or “field” contains elements of the same data “type.” For this first example, I’ve created three columns: the “customer” name (text identifying the list’s owner), an item count (a number), and the item itself (more text). Notice that the two lists are now truly merged, allowing us to sort items if we want.
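In code, this first table is just a set of records with named, typed fields; here is a small sketch of the merged structure (the names and items are illustrative):

```python
# Each row is a unique item; each column ("field") holds one data type
merged = [
    {"customer": "Dad", "count": 1,  "item": "milk"},
    {"customer": "Dad", "count": 12, "item": "eggs"},
    {"customer": "Son", "count": 1,  "item": "milk"},
    {"customer": "Son", "count": 2,  "item": "bread"},
]

# Because the two lists are truly merged, we can sort across both of them
merged.sort(key=lambda row: row["item"])
assert merged[0]["item"] == "bread"

# Similar items now sit next to each other, regardless of whose list they came from
assert [r["customer"] for r in merged if r["item"] == "milk"] == ["Dad", "Son"]
```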

Data101_List_4_sorted

Sorting makes it a bit easier to pick out similar items, which will help a little on our fictitious shopping trip. However, we still have a problem. Some of the items (like the milk, butter, and peanut butter) are sorted by the size criteria listed in the unstructured text, which makes it harder to see that some of these things can be found in the same aisle. Adding new fields will help with this.

Data101_List_5_sorted

By adding separate columns for brand name and size, the data in the “item” column ends up pretty close to our first list. All the additional details are included in new fields that are clearly defined and contain similar data. We’ve had to clean up a few labeling issues (such as “skim milk” vs. “fat free milk”) but these are relatively minor data governance issues. Our final, summarized list is ready for prime time.

Data101_List_6_Summary

And that, my friend, is how data is made.

A 12-Year-Old’s Take on Data and Analysis

I asked my daughter what she thought data was last night. Her text to me at 11:45 pm:

Okay, so you asked me what data was at the motorcycle thing today and at first, I didn’t really think about that much I just said an answer if what I thought it was. I was rethinking that and the answer I have didn’t sound right now. So, I do what most people would do and that was look it up in the dictionary, what the definition was was this:

Data_definition_Anna

And it was really boring. Even though it was true it sounded like the color gray or beige and we both know that those are the boring colors. So, if you now start to think of data as a color or colors, the colors I come up with are the primary colors and the three colors that those primary colors make. Data is colors at this point, and what you’re trying to achieve is a painting. For an example, if you asked 5 different people in 5 different stages of their life what their favorite cereal was, the answers those people would give you are the pastels or paints that they have given you and you are the artist at this point. You have to create the painting. Now, I would infer that the people in the later stages of their life they might say the boring cereals made of cardboard and unhappiness, while people in early stages of their life would most likely say one of the sugary cereals. The picture you would make would say to the people looking at it that old people are boring and kids have better breakfast cereals. This may have been a reasonable explanation or it might’ve been crap but what the fluff. It’s almost 11:45 so don’t blame me. P.S. Tell the people to get a tumblr 🙂

I get it now!

How to Build the Perfect Data Science Team

Although the fields of statistics, data analysis, and computer programming have been around for decades, the use of the term “data science” to describe the intersection of these disciplines has only become popular within the last few years.

The rise of this new specialty — which the Data Science Association defines as “the scientific study of the creation, validation and transformation of data to create meaning” — has been accompanied by a number of heated debates, including discussions about its role in business, the validity of specific tools and techniques, and whether or not it should even be considered a science. For those convinced of its significance, however, the most important deliberations revolve around finding people with the right skills to do the job.

On one side of this debate are purists who insist that data scientists are nothing more than statisticians with fancy new job titles. These folks are concerned that people are trying to horn in on a rather lucrative gig without getting the proper statistics training. Their solution is to simply ignore the data science buzzword and hire a proper statistician.

At the other end of the spectrum are people who are convinced that making sense out of large data sets requires more than just number-crunching skills; it also requires the ability to manipulate the data and communicate insights to others. This view is perhaps best represented by Drew Conway’s data science Venn diagram and Mike Driscoll’s blog post on the three “sexy skills” of the data scientist. In Conway’s case, the components are computer programming (hacking), math and statistics, and specific domain expertise. For Driscoll, the key areas are statistics, data transformation — what he calls “data munging” — and data visualization.

The main problem with this multi-pronged approach is that finding a single individual with all of the right skills is nearly impossible. One solution to this dilemma is to create teams of two or three people that can collectively cover all of the necessary areas of expertise. However, this only leads to the next question, which is: What roles provide the best coverage?

In order to address this question, I decided to start with a more detailed definition of the process of finding meaning in data. In his PhD dissertation and later publication, Visualizing Data, Ben Fry broke down the process of understanding data into seven basic steps:

  1. Acquire – Find or obtain the data.
  2. Parse – Provide some structure or meaning to the data (e.g. ordering it into categories).
  3. Filter – Remove extraneous data and focus on key data elements.
  4. Mine – Use statistical methods or data mining techniques to find patterns or place the data in a mathematical context.
  5. Represent – Decide how to display the data effectively.
  6. Refine – Make the basic data representations clearer and more visually engaging.
  7. Interact – Add methods for manipulating the data so users can explore the results.

These steps can be roughly grouped into four broad areas: computer science (acquire and parse); mathematics, statistics, and data mining (filter and mine); graphic design (represent and refine); and information visualization and human-computer interaction (interact).

In order to translate these skills into jobs, I started by selecting a set of occupations from the Occupational Information Network (O*NET) that I thought were strong in at least one or two of the areas in Ben Fry’s outline. I then evaluated a subset of skills and abilities for each of these occupations using the O*NET Content Model, which allows you to compare different jobs based on their key attributes and characteristics. I mapped several O*NET skills to each of Fry’s seven steps (details below).

O*NET Skills, Knowledge, and Abilities Associated with Ben Fry’s Seven Areas of Focus

Acquire (Computer Science)

  • Learning Strategies – Selecting and using training/instructional methods and procedures appropriate for the situation when learning or teaching new things.
  • Active Listening – Giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times.
  • Written Comprehension – The ability to read and understand information and ideas presented in writing.
  • Systems Evaluation – Identifying measures or indicators of system performance and the actions needed to improve or correct performance, relative to the goals of the system.
  • Selective Attention – The ability to concentrate on a task over a period of time without being distracted.
  • Memorization – The ability to remember information such as words, numbers, pictures, and procedures.
  • Oral Comprehension – The ability to listen to and understand information and ideas presented through spoken words and sentences.
  • Technology Design – Generating or adapting equipment and technology to serve user needs.

Parse (Computer Science)

  • Reading Comprehension – Understanding written sentences and paragraphs in work related documents.
  • Category Flexibility – The ability to generate or use different sets of rules for combining or grouping things in different ways.
  • Troubleshooting – Determining causes of operating errors and deciding what to do about it.
  • English Language – Knowledge of the structure and content of the English language including the meaning and spelling of words, rules of composition, and grammar.
  • Programming – Writing computer programs for various purposes.

Filter (Mathematics, Statistics, and Data Mining)

  • Flexibility of Closure – The ability to identify or detect a known pattern (a figure, object, word, or sound) that is hidden in other distracting material.
  • Judgment and Decision Making – Considering the relative costs and benefits of potential actions to choose the most appropriate one.
  • Critical Thinking – Using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions or approaches to problems.
  • Active Learning – Understanding the implications of new information for both current and future problem-solving and decision-making.
  • Problem Sensitivity – The ability to tell when something is wrong or is likely to go wrong. It does not involve solving the problem, only recognizing there is a problem.
  • Deductive Reasoning – The ability to apply general rules to specific problems to produce answers that make sense.
  • Perceptual Speed – The ability to quickly and accurately compare similarities and differences among sets of letters, numbers, objects, pictures, or patterns. The things to be compared may be presented at the same time or one after the other. This ability also includes comparing a presented object with a remembered object.

Mine (Mathematics, Statistics, and Data Mining)

  • Mathematical Reasoning – The ability to choose the right mathematical methods or formulas to solve a problem.
  • Complex Problem Solving – Identifying complex problems and reviewing related information to develop and evaluate options and implement solutions.
  • Mathematics – Using mathematics to solve problems.
  • Inductive Reasoning – The ability to combine pieces of information to form general rules or conclusions (includes finding a relationship among seemingly unrelated events).
  • Science – Using scientific rules and methods to solve problems.
  • Mathematics – Knowledge of arithmetic, algebra, geometry, calculus, statistics, and their applications.

Represent (Graphic Design)

  • Design – Knowledge of design techniques, tools, and principles involved in production of precision technical plans, blueprints, drawings, and models.
  • Visualization – The ability to imagine how something will look after it is moved around or when its parts are moved or rearranged.
  • Visual Color Discrimination – The ability to match or detect differences between colors, including shades of color and brightness.
  • Speed of Closure – The ability to quickly make sense of, combine, and organize information into meaningful patterns.

Refine (Graphic Design)

  • Fluency of Ideas – The ability to come up with a number of ideas about a topic (the number of ideas is important, not their quality, correctness, or creativity).
  • Information Ordering – The ability to arrange things or actions in a certain order or pattern according to a specific rule or set of rules (e.g., patterns of numbers, letters, words, pictures, mathematical operations).
  • Communications and Media – Knowledge of media production, communication, and dissemination techniques and methods. This includes alternative ways to inform and entertain via written, oral, and visual media.
  • Originality – The ability to come up with unusual or clever ideas about a given topic or situation, or to develop creative ways to solve a problem.

Interact (Information Visualization and Human-Computer Interaction)

  • Engineering and Technology – Knowledge of the practical application of engineering science and technology. This includes applying principles, techniques, procedures, and equipment to the design and production of various goods and services.
  • Education and Training – Knowledge of principles and methods for curriculum and training design, teaching and instruction for individuals and groups, and the measurement of training effects.
  • Operations Analysis – Analyzing needs and product requirements to create a design.
  • Psychology – Knowledge of human behavior and performance; individual differences in ability, personality, and interests; learning and motivation; psychological research methods; and the assessment and treatment of behavioral and affective disorders.

Using occupational scores for these individual O*NET skills and abilities, I was able to assign a weighted value to each of Ben Fry’s categories for several sample occupations. Visualizing these skills in a radar graph shows how different jobs (identified using standard SOC or O*NET codes) place different emphasis on the various skills. The three jobs below have strengths that could be cultivated and combined to meet the needs of a data science team.
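The weighting step itself is straightforward; here is a sketch with invented skill scores (the real values come from the O*NET database, and the skill-to-category mapping is abbreviated from the lists above):

```python
# Abbreviated mapping of O*NET skills to two of Ben Fry's steps
CATEGORY_SKILLS = {
    "Mine": ["Mathematical Reasoning", "Mathematics", "Inductive Reasoning"],
    "Represent": ["Design", "Visualization"],
}

def category_scores(occupation_scores):
    """Average an occupation's O*NET skill scores within each Fry category."""
    return {
        category: sum(occupation_scores[skill] for skill in skills) / len(skills)
        for category, skills in CATEGORY_SKILLS.items()
    }

# Hypothetical scores for a statistician-like occupation
statistician = {
    "Mathematical Reasoning": 85, "Mathematics": 90, "Inductive Reasoning": 80,
    "Design": 30, "Visualization": 40,
}
scores = category_scores(statistician)
assert scores["Mine"] == 85.0       # strong on the mining step
assert scores["Represent"] == 35.0  # weak on the design steps
```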

Another example includes occupations that fall outside of the usual sources of data science talent. You can see how — taken together — these non-traditional jobs can combine to address each of Fry’s steps.

According to a recent study by McKinsey, the U.S. “faces a shortage of 140,000 to 190,000 people with analytical expertise and 1.5 million managers and analysts with the skills to understand and make decisions” based on data. Instead of fighting over these scarce resources, companies would do well to think outside of the box and build their data science teams from unique individuals in other fields. While such teams may require additional training, they bring a set of skills to the table that can boost creativity and spark innovative thinking — just the sort of edge companies need when trying to pull meaning from their data.

Updates:

May 2, 2014 – The folks over at DarkHorse Analytics put together a list of the “five faces” of analytics. Great article.

  1. Data Steward – Manages the data and uses tools like SQL Server, MySQL, Oracle, and maybe some more rarified tools.
  2. Analytic Explorer – Explores the data using math, statistics, and modeling.
  3. Information Artist – Organizes and presents data in order to sell the results of data exploration to decision-makers.
  4. Automator – Puts the work of the Explorer and Visualizer into production.
  5. The Champion – Helps put all of the pieces in place to support an analytics environment.


Spelling and National Security

A former co-worker of mine always used to joke about our company’s customer database by posing the deceptively simple question: “How many ways can you spell ‘IBM’?” In fact, the number of unique entries for that particular client was in the dozens. Here is a sample of possible iterations, with abbreviations alone accounting for several of them:

  • IBM
  • I B M
  • I.B.M.
  • I. B. M.
  • IBM CORP
  • IBM CORPORATION
  • INTL BUS MACHINES
  • INTERNATION BUSINESS MACHINES
  • INTERNATIONAL BUSINESS MACHINES
  • INTERNATIONAL BUSINESS MA

I thought of this anecdote recently while I was reading an article about the government’s Terrorist Identities Datamart Environment list (TIDE), an attempt to consolidate the terrorist watch lists of various intelligence organizations (CIA, FBI, NSA, etc.) into a single, centralized database. TIDE was coming under scrutiny because it had failed to flag Tamerlan Tsarnaev (the elder suspect in the Boston Marathon bombings) as a threat when he re-entered the country in July 2012 after a six-month trip to Russia. It turns out that Tsarnaev’s TIDE entry didn’t register with U.S. customs officials because his name was misspelled and his date of birth was incorrect.

These types of data entry errors are incredibly common. I keep a running list of direct marketers’ misspellings of my own last name, and it currently stands at 22 variations. In the data world, these variations can be described by their “edit distance” or Levenshtein distance: the number of single-character substitutions, deletions, or insertions required to correct the entry.

Actual Name: Kinde
Phonetic Misspellings: Kindy, Kindee, Kindle, Kindde, Kindke, Kindl, Kinds, Kinge, Kinele, Winde, Kinae, Kincius, Jindy
Dropped Letters: Kine, Inde
Inserted Letters: Kiinde, Kinder
Converted Letters: Kinoe, Kimbe, Kimde, Isinde, Pindy
Many of these typographical mistakes are the result of my own poor handwriting, which I admit can be difficult to transcribe. However, if marketers have this much trouble with a basic, five-letter last name, you can imagine the problems the feds might have with a longer foreign name with extra vowels, umlauts, accents, and other flourishes thrown in for good measure. Add in a first name and a middle initial and the list of possible permutations grows quite large … and this doesn’t even begin to address the issue of people with the same or similar names. (My own sister gets pulled out of airport security lines on a regular basis because her name doppelgänger has caught the attention of the feds.)

The standard solutions for these types of problems typically involve techniques like fuzzy matching algorithms and other programmatic methods for eliminating duplicates and automatically merging associated records. The problem with this approach is that it either ignores or downplays the human element in developing and maintaining such databases.

My personal experience suggests that most people view data and databases as an advanced technological domain that is the exclusive purview of programmers, developers, and other IT professionals. In reality, the “high tech” aspect of data is limited to its storage and manipulation. The actual content of databases — the data itself — is most decidedly low tech … words and numbers. By focusing popular attention almost exclusively on the machinery and software involved in data processing, we miss the points in the data life-cycle where most errors start to creep in: the people who enter information and the people who interpret it.

I once worked at a company where we introduced a crude quality check to a manual double-entry process. If two pieces of information didn’t match, the program paused to let the person correct their mistake. The data entry folk were incensed! The automatic checks were bogging down the process and hurting their productivity. Never mind that the quality of the data had improved … what really mattered was speed!

On the other hand, I’ve also seen situations where perfectly capable people had difficulty pulling basic trends from their Business Intelligence (BI) software. The reporting deployments were so intimidating that people would often end up moving their data over to a copy of Microsoft Excel so they could work with a more familiar tool.

In both cases, the problem wasn’t the technology per se, but the way in which humans interacted with the technology. People make mistakes and take shortcuts … it is a natural part of our creativity and self-expression. We’re just not cut out to follow the exacting standards of some of these computerized environments.

In the case of databases like TIDE, as long as the focus remains on finding technical solutions to data problems, we miss out on what I think is the real opportunity — human solutions that focus on usability, making intuitive connections, and the ease of interpretation.

Update:

  • July 7, 2013 – In a similar database failure, Interpol refused to issue a worldwide “Red Notice” for Edward Snowden because the U.S. paperwork didn’t include his passport number and listed his middle name incorrectly.
  • January 2, 2014 – For a great article on fuzzy matching, check out the following: http://marketing.profisee.com/acton/attachment/2329/f-015e/1/-/-/-/-/file.pdf.

Wisconsin Voters Banished to NULL Island

The top headline in my local paper this morning was “Glitch puts some Wisconsin voters in Africa” … an interesting thing to ponder over a bowl of Quaker Oatmeal Squares. I suppose this problem merits at least some attention given the heated political climate surrounding the state’s voter redistricting process. But headline news? Above the fold? Sounds like a slow news day to me.

Online, of course, the debate has already devolved into the standard round of mudslinging and name-calling, so good luck trying to find out what’s going on from that crowd. The reporters themselves focused on the political fallout of the issue rather than an explanation, so no help there either. I guess it’s up to the humble folks at Ideas Illustrated to offer up some insight!

The first clue to the problem can be found in the article’s pullout quote, which describes the voters’ location as the “coast of Africa” rather than a specific African country. The second clue can be found deep within the article, where it is mentioned that clerks have recently changed the way voters are entered into the voter registration database:

” … voters are [now] being entered into different districts by the physical location of their address in computerized maps. Previously, they were entered into different districts in the state voter database according to where their address fell in certain address ranges.”

These two hints point to a very common problem associated with geocoding, which is the process of converting a postal address to a set of map coordinates. Let’s backtrack. An online mapping tool like Google Maps uses specific geographic coordinates (latitude and longitude) to place a location on a map. However, because the coordinates of most individual addresses are not actually stored in a database anywhere, the tool needs to interpolate them from a vector database of the road network (i.e. a mathematically represented set of lines).

For example, if you look up the address for Trump Tower, you find that it is located at 725 Fifth Avenue in Midtown Manhattan. When you enter this address into Google Maps, the tool finds Fifth Avenue on the underlying road grid and then uses an algorithm to determine that the “725” address falls somewhere between 56th and 57th streets. It also determines which side of the street the address sits on based on stored knowledge of the “odd” and “even” numbering pattern. In other words, it’s guessing.
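The interpolation itself is just proportional placement along the block. A sketch is below; the 701–751 address range and the corner coordinates are rough illustrative values, not data pulled from any real geocoder:

```python
def interpolate_address(number, low, high, start, end):
    """Estimate (lat, lon) for a street number by placing it
    proportionally between a segment's low and high addresses."""
    fraction = (number - low) / (high - low)
    return (start[0] + fraction * (end[0] - start[0]),
            start[1] + fraction * (end[1] - start[1]))

# Fifth Avenue between 56th and 57th streets, with a made-up
# 701-751 address range for the block:
lat, lon = interpolate_address(725, 701, 751,
                               (40.7624, -73.9738),   # ~56th St corner
                               (40.7630, -73.9734))   # ~57th St corner
```

Note that the result is only as good as the assumption that house numbers are spread evenly along the block — which, of course, they rarely are.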

Google Map detail of the area around Trump Tower

 

TIGER/Line® Shapefile detail of the same area

These guesstimates work pretty well in dense urban environments where there are a lot of cross streets to serve as reference points. In rural areas, the curvilinear streets and widely-spaced buildings make things a little more difficult. When the situation gets really muddled, some mapping tools essentially “punt” and enter a default set of coordinates. In the case of the Wisconsin voter addresses, these default coordinates are 0.00 degrees latitude and 0.00 degrees longitude. Where is this exactly? It is the intersection of the Prime Meridian and the Equator … which occurs just off the coast of Africa.

Geographers have actually given this place a rather fanciful name: NULL Island (it is not, in fact, a real island). It even has its own web site and unofficial flag (below right).

So there are no nefarious schemes behind this situation … just normal, everyday data problems. The state clerks need to tell their IT guys to flag the errant voter addresses and then they can assign them to the appropriate districts by hand. Problem solved. However, they should be aware that interpolation is an imperfect process and, in addition to assigning blocks of voters to NULL island, the geocoding process may also assign voters to the wrong districts. This could be particularly true for people who live close to a district boundary. It might actually make sense to keep the old method around for backup.
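The flagging step is about as simple as data cleaning gets. A minimal sketch, with hypothetical field names and made-up records standing in for the real voter file:

```python
NULL_ISLAND = (0.0, 0.0)

def flag_failed_geocodes(voters, tolerance=1e-6):
    """Return records whose coordinates defaulted to NULL Island."""
    return [v for v in voters
            if abs(v["lat"] - NULL_ISLAND[0]) < tolerance
            and abs(v["lon"] - NULL_ISLAND[1]) < tolerance]

voters = [
    {"name": "Voter A", "lat": 43.0731, "lon": -89.4012},  # Madison, WI
    {"name": "Voter B", "lat": 0.0, "lon": 0.0},           # failed geocode
]
flagged = flag_failed_geocodes(voters)   # -> just the "Voter B" record
```

Catching the voters who were merely nudged into a neighboring district is the harder problem — those coordinates look perfectly plausible, which is why keeping the old address-range method around as a cross-check makes sense.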