Tag Archives: D3

A Force Node Diagram of the U.S. Interstate System

There’s nothing too complicated about this post. I’ve been interested in creating an illustration of the U.S. Interstate system for a while, but my initial concept of a “subway-style” diagram of the network had already been done. After some recent experimentation with the D3 JavaScript library, I decided it might be interesting to try a simple force node display using the Interstate system’s control cities as the nodes. Control cities are major destinations used to provide navigational guidance at key decision points along a particular route. It should be noted that not all control cities are actually cities, and not all cities qualify as control cities. My starting list can be found here.

After my initial data collection, I found that I had to modify my approach to improve the network. First, I had to add nodes for certain highway-to-highway connections, especially those that occurred in remote areas. I also had to include some cities with multiple Interstate highways passing through them, since they weren’t always listed as control cities on each of those routes. Finally, I added a few non-Interstate roads where it made sense, including Alaska (which doesn’t actually have any Interstate highways) and eastern Canada, where Ontario Highway 401 (also known as the King’s Highway) links Toronto and Montreal to key American cities.

Here is the result … click on the picture to get to a fully interactive version.

[Figure: Interstate force node diagram (v2)]

The size of each node reflects the estimated population of the city/destination and the color represents its Census division (plus a category for Canada). You can see a rough outline of the U.S., with the Midwest roughly in the center of the diagram (in orange) and the two coasts wrapping around on either side. Hawaii and Alaska float alone at the edges, and the Florida peninsula (part of the South Atlantic division, in red) protrudes toward the bottom of the chart.
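
For anyone curious about the mechanics, here is a bare-bones sketch of the kind of D3 force layout that produces a diagram like this (v3-era syntax). The nodes/links arrays and the population and division fields are illustrative assumptions, not the code behind the interactive version above:

```javascript
// Minimal force layout sketch (D3 v3 API). Assumes `nodes` entries like
// { name: "Madison", population: 258000, division: "East North Central" }
// and `links` entries like { source: 0, target: 1 } -- illustrative fields only.
var width = 960, height = 600;

var svg = d3.select("body").append("svg")
    .attr("width", width)
    .attr("height", height);

// Node radius scales with population; color keys off Census division.
var radius = d3.scale.sqrt().domain([0, 9000000]).range([3, 25]);
var color  = d3.scale.category10();

var force = d3.layout.force()
    .nodes(nodes)
    .links(links)
    .size([width, height])
    .linkDistance(60)
    .charge(-200)
    .on("tick", tick)
    .start();

var link = svg.selectAll(".link")
    .data(links)
  .enter().append("line")
    .attr("class", "link")
    .style("stroke", "#999");

var node = svg.selectAll(".node")
    .data(nodes)
  .enter().append("circle")
    .attr("class", "node")
    .attr("r", function(d) { return radius(d.population); })
    .style("fill", function(d) { return color(d.division); })
    .call(force.drag);

// Reposition every line and circle on each simulation tick.
function tick() {
  link.attr("x1", function(d) { return d.source.x; })
      .attr("y1", function(d) { return d.source.y; })
      .attr("x2", function(d) { return d.target.x; })
      .attr("y2", function(d) { return d.target.y; });
  node.attr("cx", function(d) { return d.x; })
      .attr("cy", function(d) { return d.y; });
}
```

Using a square-root scale for the radius keeps each node’s area roughly proportional to population, so the big metros don’t completely swamp the smaller control cities.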

Most Popular Word Roots in U.S. Place Names

My family visited Washington D.C. last year for Spring Break and, during our 12-hour drive, I remember noticing a subtle change in the names of the cities and towns we were passing through. In the beginning, the place names had a familiar Midwestern flavor, one that mixed Native American origins (e.g. Milwaukee, Chicago) with bits of French missionary and 19th-century European settler influence. The names slowly took on a more Anglo-Saxon bent as we moved east, traveling through spots like Wexford, PA; Pittsburgh, PA; Gaithersburg, MD; Boonsboro, MD; Hagerstown, MD; and Reston, VA.

We have English-sounding place names in Wisconsin, of course, including highfalutin towns like Brighton, Kingston, and New London, but they seem to get overwhelmed by the sheer number of places with syllables like “wau”, “kee”, and “sha” (or all three combined). Many of these town names can be difficult for “outsiders” to pronounce and the spelling is all over the place since they were often coined by non-native speakers who’d misheard the original words. (The Native American word for “firefly”, for example, is linked to variations like Wauwatosa (WI), Wawatasso (MN), and Wahwahtaysee Way (a street in MI).)

I thought it would be interesting to see if there were any patterns to these U.S. place names, or toponyms, so I pulled a list of Census Places and extracted the most frequent letter combinations from the names of the country’s cities, towns, and villages. I tried to isolate true prefixes and suffixes by removing any letter pairings that were simply common to the English language, then counted the number of times each word root appeared and ranked them by state.
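
The actual cleanup was messier than this, but here is a rough Node.js sketch of the counting step (staying in the JavaScript spirit of these posts). The file name, the root lengths, and the tiny stop-list are illustrative assumptions rather than the real pipeline:

```javascript
// Rough sketch of the word-root counting step (not the original analysis).
// Assumes "census_places.txt" holds one Census Place name per line.
var fs = require("fs");

var names = fs.readFileSync("census_places.txt", "utf8")
  .split("\n")
  .map(function(n) { return n.trim().toLowerCase(); })
  .filter(function(n) { return n.length > 0; });

// Letter pairings so common in ordinary English that they say nothing
// about toponyms -- a tiny illustrative stop-list.
var stopRoots = { ing: true, ter: true, ent: true, and: true };

var counts = {};
names.forEach(function(name) {
  [3, 4, 5].forEach(function(len) {
    if (name.length <= len) return;
    var root = name.slice(-len);          // trailing chunk, e.g. "ton", "ville"
    if (stopRoots[root]) return;
    counts[root] = (counts[root] || 0) + 1;
  });
});

// Rank the roots by frequency and print the top 10.
Object.keys(counts)
  .sort(function(a, b) { return counts[b] - counts[a]; })
  .slice(0, 10)
  .forEach(function(root) { console.log(root + "\t" + counts[root]); });
```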

Top 10 Word Roots by State

After looking over the top word roots by state, I wanted to see more detail, so I calculated a location quotient for some of the most common word roots and plotted the results by county. Click on any of the maps below for a larger, interactive D3 version.
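
For reference, a location quotient simply compares a county’s share of places containing a given word root to the national share; values above 1 mean the root is over-represented in that county. A minimal sketch with made-up inputs:

```javascript
// Location quotient for a word root in one county (illustrative inputs only).
function locationQuotient(countyRootCount, countyTotal, nationalRootCount, nationalTotal) {
  var localShare    = countyRootCount / countyTotal;       // share of county places with the root
  var nationalShare = nationalRootCount / nationalTotal;   // share of all U.S. places with the root
  return localShare / nationalShare;
}

// Hypothetical example: 12 of a county's 40 places end in "-ton",
// versus 9,000 of 29,000 places nationally:
// locationQuotient(12, 40, 9000, 29000) is about 0.97, i.e. roughly the national average.
```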

Location Quotient for “ton”
The word town derives from the Germanic word for “fence” or “fenced settlement.” In the U.S., the use of -ton/-town to honor important landowners or political leaders began before the American Revolution (think Jamestown, VA or Charleston, SC) and continued throughout the settlement of the country. (Interestingly, my hometown of Appleton, WI was named for philanthropist Samuel Appleton, so its ending is not a true town-based word root.)

Location Quotient for “boro/borough”
The word borough originates from the Germanic word for “fort” and has many common variations, including suffixes like -borough/-boro, and -burgh/-burg. Like -ton/-town, these place name suffixes became popular in the 18th century and were used extensively throughout New England and the Atlantic coastal colonies. You can see how dominant the -boro/-borough suffix is in the upper Northeast.

Location Quotient for “ville”
The suffix “ville” comes from the French word for “farm” and is the basis for common words like “villa” and “village”. The use of -ville in the names of U.S. cities and towns didn’t really begin until after the Revolution, when pro-French sentiment spread throughout the country, particularly in the South and western Appalachian regions. The popularity of this suffix began to decline in the middle of the 19th century, but you can still see its strong influence in the southern states.

Location Quotient for “san/santa”
The Spanish colonial period in the Americas left a large legacy of Spanish place names, particularly in the American West and Southwest. Many of the Californian coastal cities were named after saints by early Spanish explorers, while other cities in New Spain simply included the definite article (“la”, “el”, “las”, and “los”) in what was often a very long description (e.g. “El Pueblo de Nuestra Señora la Reina de los Ángeles del Río de Porciúncula” … now known simply as Los Angeles or LA). The map shows the pattern for the San/Santa prefix, which is strong on the West Coast and weaker inland, where it may actually be an artifact of some Native American word roots.

Location Quotient for “Lake/Lakes”
The practice of associating a town with a nearby body of water puts a wrinkle into the process of tracking place names (the history of “hydronyms” being an entirely different area of study), but it was common in parts of the country that were mapped by explorers first and settled later. This can be seen in the prevalence of town names with word roots like Spring, Lake, Bay, River, and Creek.

Location Quotient for “Beach”
There is a similar process for other prominent features of the landscape such as fields, woods, hills, mountains, and — in Florida’s case — beaches.

Location Quotient for “wau”
Here is the word root that started this whole line of inquiry. It is apparently a very iconic Wisconsin toponym, and even some of the outlying place names have Wisconsin roots (the city of Milwaukie in Clackamas County, Oregon was named after Milwaukee, Wisconsin in the 1840s).

How to Build the Perfect Data Science Team

Although the fields of statistics, data analysis, and computer programming have been around for decades, the use of the term “data science” to describe the intersection of these disciplines has only become popular within the last few years.

The rise of this new specialty — which the Data Science Association defines as “the scientific study of the creation, validation and transformation of data to create meaning” — has been accompanied by a number of heated debates, including discussions about its role in business, the validity of specific tools and techniques, and whether or not it should even be considered a science. For those convinced of its significance, however, the most important deliberations revolve around finding people with the right skills to do the job.

On one side of this debate are the purists who insist that data scientists are nothing more than statisticians with fancy new job titles. These folks are concerned that people without formal statistics training are trying to horn in on a rather lucrative gig. Their solution is to simply ignore the data science buzzword and hire a proper statistician.

At the other end of the spectrum are people who are convinced that making sense of large data sets requires more than just number-crunching skills; it also requires the ability to manipulate the data and communicate insights to others. This view is perhaps best represented by Drew Conway’s data science Venn diagram and Mike Driscoll’s blog post on the three “sexy skills” of the data scientist. In Conway’s case, the components are computer programming (hacking), math and statistics, and specific domain expertise. For Driscoll, the key areas are statistics, data transformation (what he calls “data munging”), and data visualization.

The main problem with this multi-pronged approach is that finding a single individual with all of the right skills is nearly impossible. One solution to this dilemma is to create teams of two or three people that can collectively cover all of the necessary areas of expertise. However, this only leads to the next question, which is: What roles provide the best coverage?

In order to address this question, I decided to start with a more detailed definition of the process of finding meaning in data. In his PhD dissertation and later publication, Visualizing Data, Ben Fry broke down the process of understanding data into seven basic steps:

  1. Acquire – Find or obtain the data.
  2. Parse – Provide some structure or meaning to the data (e.g. ordering it into categories).
  3. Filter – Remove extraneous data and focus on key data elements.
  4. Mine – Use statistical methods or data mining techniques to find patterns or place the data in a mathematical context.
  5. Represent – Decide how to display the data effectively.
  6. Refine – Make the basic data representations clearer and more visually engaging.
  7. Interact – Add methods for manipulating the data so users can explore the results.

These steps can be roughly grouped into four broad areas: computer science (acquire and parse); mathematics, statistics, and data mining (filter and mine); graphic design (represent and refine); and information visualization and human-computer interaction (interact).

In order to translate these skills into jobs, I started by selecting a set of occupations from the Occupational Information Network (O*NET) that I thought were strong in at least one or two of the areas in Ben Fry’s outline. I then evaluated a subset of skills and abilities for each of these occupations using the O*NET Content Model, which allows you to compare different jobs based on their key attributes and characteristics. I mapped several O*NET skills to each of Fry’s seven steps (details below).

O*NET Skills, Knowledge, and Abilities Associated with Ben Fry’s 7 Areas of Focus

Acquire (Computer Science)

  • Learning Strategies – Selecting and using training/instructional methods and procedures appropriate for the situation when learning or teaching new things.
  • Active Listening – Giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times.
  • Written Comprehension – The ability to read and understand information and ideas presented in writing.
  • Systems Evaluation – Identifying measures or indicators of system performance and the actions needed to improve or correct performance, relative to the goals of the system.
  • Selective Attention – The ability to concentrate on a task over a period of time without being distracted.
  • Memorization – The ability to remember information such as words, numbers, pictures, and procedures.
  • Oral Comprehension – The ability to listen to and understand information and ideas presented through spoken words and sentences.
  • Technology Design – Generating or adapting equipment and technology to serve user needs.

Parse (Computer Science)

  • Reading Comprehension – Understanding written sentences and paragraphs in work related documents.
  • Category Flexibility – The ability to generate or use different sets of rules for combining or grouping things in different ways.
  • Troubleshooting – Determining causes of operating errors and deciding what to do about it.
  • English Language – Knowledge of the structure and content of the English language including the meaning and spelling of words, rules of composition, and grammar.
  • Programming – Writing computer programs for various purposes.

Filter (Mathematics, Statistics, and Data Mining)

  • Flexibility of Closure – The ability to identify or detect a known pattern (a figure, object, word, or sound) that is hidden in other distracting material.
  • Judgment and Decision Making – Considering the relative costs and benefits of potential actions to choose the most appropriate one.
  • Critical Thinking – Using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions or approaches to problems.
  • Active Learning – Understanding the implications of new information for both current and future problem-solving and decision-making.
  • Problem Sensitivity – The ability to tell when something is wrong or is likely to go wrong. It does not involve solving the problem, only recognizing there is a problem.
  • Deductive Reasoning – The ability to apply general rules to specific problems to produce answers that make sense.
  • Perceptual Speed – The ability to quickly and accurately compare similarities and differences among sets of letters, numbers, objects, pictures, or patterns. The things to be compared may be presented at the same time or one after the other. This ability also includes comparing a presented object with a remembered object.

Mine (Mathematics, Statistics, and Data Mining)

  • Mathematical Reasoning – The ability to choose the right mathematical methods or formulas to solve a problem.
  • Complex Problem Solving – Identifying complex problems and reviewing related information to develop and evaluate options and implement solutions.
  • Mathematics – Using mathematics to solve problems.
  • Inductive Reasoning – The ability to combine pieces of information to form general rules or conclusions (includes finding a relationship among seemingly unrelated events).
  • Science – Using scientific rules and methods to solve problems.
  • Mathematics – Knowledge of arithmetic, algebra, geometry, calculus, statistics, and their applications.

Represent (Graphic Design)

  • Design – Knowledge of design techniques, tools, and principles involved in production of precision technical plans, blueprints, drawings, and models.
  • Visualization – The ability to imagine how something will look after it is moved around or when its parts are moved or rearranged.
  • Visual Color Discrimination – The ability to match or detect differences between colors, including shades of color and brightness.
  • Speed of Closure – The ability to quickly make sense of, combine, and organize information into meaningful patterns.

Refine (Graphic Design)

  • Fluency of Ideas – The ability to come up with a number of ideas about a topic (the number of ideas is important, not their quality, correctness, or creativity).
  • Information Ordering – The ability to arrange things or actions in a certain order or pattern according to a specific rule or set of rules (e.g., patterns of numbers, letters, words, pictures, mathematical operations).
  • Communications and Media – Knowledge of media production, communication, and dissemination techniques and methods. This includes alternative ways to inform and entertain via written, oral, and visual media.
  • Originality – The ability to come up with unusual or clever ideas about a given topic or situation, or to develop creative ways to solve a problem.

Interact (Information Visualization and Human-Computer Interaction)

  • Engineering and Technology – Knowledge of the practical application of engineering science and technology. This includes applying principles, techniques, procedures, and equipment to the design and production of various goods and services.
  • Education and Training – Knowledge of principles and methods for curriculum and training design, teaching and instruction for individuals and groups, and the measurement of training effects.
  • Operations Analysis – Analyzing needs and product requirements to create a design.
  • Psychology – Knowledge of human behavior and performance; individual differences in ability, personality, and interests; learning and motivation; psychological research methods; and the assessment and treatment of behavioral and affective disorders.

Using occupational scores for these individual O*NET skills and abilities, I was able to assign a weighted value to each of Ben Fry’s categories for several sample occupations. Visualizing these skills in a radar graph shows how different jobs (identified using standard SOC or O*NET codes) place different emphasis on the various skills. The three jobs below have strengths that could be cultivated and combined to meet the needs of a data science team.
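
I haven’t reproduced the exact weighting here, but the roll-up behind each radar spoke looks something like the sketch below: average an occupation’s O*NET ratings across the skills mapped to a given step, weighting each rating by its O*NET importance score. The data structure and field names are illustrative assumptions.

```javascript
// Weighted category score for one occupation and one of Fry's steps.
// `onetScores` is assumed to look like:
//   { "Mathematical Reasoning": { level: 4.5, importance: 4.0 }, ... }
function categoryScore(onetScores, mappedSkills) {
  var weightedSum = 0, weightTotal = 0;
  mappedSkills.forEach(function(skill) {
    var s = onetScores[skill];
    if (!s) return;                         // skill not rated for this occupation
    weightedSum += s.level * s.importance;
    weightTotal += s.importance;
  });
  return weightTotal > 0 ? weightedSum / weightTotal : 0;
}

// e.g. the "Mine" spoke of the radar graph for a hypothetical occupation record:
// categoryScore(statistician, ["Mathematical Reasoning", "Mathematics", "Science"]);
```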

Another example includes occupations that fall outside of the usual sources of data science talent. You can see how — taken together — these non-traditional jobs can combine to address each of Fry’s steps.

According to a recent study by McKinsey, the U.S. “faces a shortage of 140,000 to 190,000 people with analytical expertise and 1.5 million managers and analysts with the skills to understand and make decisions” based on data. Instead of fighting over these scarce resources, companies would do well to think outside of the box and build their data science teams from unique individuals in other fields. While such teams may require additional training, they bring a set of skills to the table that can boost creativity and spark innovative thinking — just the sort of edge companies need when trying to pull meaning from their data.

Updates:

May 2, 2014 – The folks over at DarkHorse Analytics put together a list of the “five faces” of analytics. Great article.

  1. Data Steward – Manages the data and uses tools like SQL Server, MySQL, Oracle, and maybe some more rarified tools.
  2. Analytic Explorer – Explores the data using math, statistics, and modeling.
  3. Information Artist – Organizes and presents data in order to sell the results of data exploration to decision-makers.
  4. Automator – Puts the work of the Explorer and Visualizer into production.
  5. The Champion – Helps put all of the pieces in place to support an analytics environment.

Trends in NFL Football Scores (Part 1)

One of the goals I set for myself this summer was to learn a bit about D3, a visualization toolkit that can be used to manipulate and display data on the web. Considering that the trees are bare and we’ve already had our first frost here in Wisconsin, you can safely assume that I am behind schedule. Nevertheless, I feel that I’ve finally reached a point where I have something to publish, so here goes.

First of all, a little background. D3 is a JavaScript library that allows you to bind data to any of the elements (text, lines, and shapes) you might normally find on a web page. These elements can be styled using CSS and animated using simple dynamic functions. These features make D3 a perfect tool for creating interactive charts and graphs without having to depend on third-party programs like Google Charts, Many Eyes, or Tableau.
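
To give a taste of what that data binding looks like, here is a tiny, self-contained example (not taken from the charts below): an array of numbers drives the position, size, and animation of a handful of SVG circles.

```javascript
// Toy data: five arbitrary values, each bound to one circle.
var scores = [41, 35, 38, 44, 46];

var svg = d3.select("body").append("svg")
    .attr("width", 320)
    .attr("height", 100);

svg.selectAll("circle")
    .data(scores)                      // bind the array to a (currently empty) selection
  .enter().append("circle")            // create one circle per datum
    .attr("cx", function(d, i) { return 30 + i * 60; })
    .attr("cy", 50)
    .attr("r", 0)
    .style("fill", "steelblue")
  .transition()                        // animate each radius up from zero
    .duration(750)
    .attr("r", function(d) { return d / 3; });
```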

I wanted to start out with something simple, so I elected to go with a basic line chart using data I pulled from Pro-Football-Reference.com. This site contains a ton of great information and statistics from the past 90+ years of the National Football League but, for now, I just looked at the final scores of all the games played from 1920 to 2011. My first D3-powered chart is below. It shows the average combined scores of winning and losing teams for each year of the NFL’s existence.

Although this chart looks pretty simple, every element (including titles, subtitles, axes, labels, grids, and data lines) has been created manually with D3 code. The payoff is pretty nice: all of the elements can be reused and you have tremendous control over what is shown onscreen. To demonstrate some of these capabilities, I’ve added interactive overlays that show a few of the major eras in NFL football (derived from the work of David Neft and this discussion thread). If you move your mouse over the graph, you will see these different eras highlighted (a bare-bones sketch of the chart and overlay code follows the era descriptions below):

Early NFL (1920-1933) – The formation of the American Professional Football Association (APFA) in 1920 marked the official start of what was to become the National Football League. This era saw the rapid formation (and dissolution) of small-town franchises, vast differences in team capabilities, and a focus on a relatively low-scoring running game. At this time, the pass was considered more of an emergency option than a reliable standard. The NFL’s rapid growth in popularity during this era culminated in the introduction of a championship game in 1932.

Introduction of the Forward Pass (1933-1945) – The NFL discontinued the use of collegiate football rules in 1933 and began to develop its own set of rules designed around a faster-paced, higher-scoring style of play. These innovations included the legalization of the forward pass from anywhere behind the line of scrimmage — a change that is often called the “Bronko Nagurski Rule” after his controversial touchdown in the 1932 NFL Playoff Game.

Post-War Era (1945-1959) – The end of WWII saw the expansion of the NFL beyond its East Coast and Midwestern roots with the move of the Cleveland Rams to Los Angeles, the first big-league sports franchise on the West Coast. This period also saw the end of the racial segregation that had been in place since the 1930s, as well as the start of nationally televised games.

Introduction of the AFL (1959-1966) – Professional football’s surge in popularity led to the formation of a rival organization — the American Football League — in 1960. The growth of the flashy AFL was balanced by a more conservative style of play in the NFL. This style was epitomized by coach Vince Lombardi and the Green Bay Packers, who would win five championships in the 1960s. In 1966, the two leagues agreed to merge as of the 1970 season.

Dead Ball Era (1966-1977) – Driven in part by stringent restrictions on the offensive line, this period is marked by low scores and tough defensive play. Teams that thrived in this environment include some of the most famous defenses in modern NFL history: Pittsburgh’s Steel Curtain, Dallas’ Doomsday Defense, Minnesota’s Purple People Eaters and the Rams’ Fearsome Foursome.

Live Ball Era (1978-present) – Frustrated by the decreasing ability of offenses to score points in the 1970s, the NFL began to add rules and make other changes to the structure of the game in an attempt to boost scoring. The most famous of these initiatives was the so-called “Mel Blount Rule” (introduced in 1978), which severely restricted the defense’s ability to interfere with passing routes. With the subsequent arrival of the West Coast Offense in 1979 (an offense built around precise, short passes), this period became marked by a major focus on the passing game.
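
As mentioned above, here is a rough skeleton of how those chart pieces fit together in D3 (v3-era syntax). The svg, width, height, seasons, and eras variables are assumed to already exist, and the field names are placeholders rather than the actual code behind the published charts.

```javascript
// Scales and a line generator for the yearly scoring averages.
var x = d3.scale.linear().domain([1920, 2011]).range([0, width]);
var y = d3.scale.linear().domain([0, 50]).range([height, 0]);

var line = d3.svg.line()
    .x(function(d) { return x(d.year); })
    .y(function(d) { return y(d.avgScore); });

// Axis drawn manually, like every other element on the chart.
svg.append("g")
    .attr("class", "x axis")
    .attr("transform", "translate(0," + height + ")")
    .call(d3.svg.axis().scale(x).orient("bottom").tickFormat(d3.format("d")));

// The data line itself: seasons = [{ year: 1920, avgScore: ... }, ...]
svg.append("path")
    .datum(seasons)
    .attr("class", "data-line")
    .attr("d", line);

// Era overlays: eras = [{ name: "Dead Ball Era", start: 1966, end: 1977 }, ...]
// Each era is an invisible rect that lights up on mouseover.
svg.selectAll(".era")
    .data(eras)
  .enter().append("rect")
    .attr("class", "era")
    .attr("x", function(d) { return x(d.start); })
    .attr("width", function(d) { return x(d.end) - x(d.start); })
    .attr("y", 0)
    .attr("height", height)
    .style("fill", "orange")
    .style("opacity", 0)
    .on("mouseover", function() { d3.select(this).style("opacity", 0.15); })
    .on("mouseout",  function() { d3.select(this).style("opacity", 0); });
```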

Having created this first chart, I decided to build a second chart based on the ratio of average winning scores to average losing scores to see if there were any patterns.
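
The data transformation behind the ratio chart is straightforward; a sketch of the roll-up is below, assuming each game record carries the season plus the winning and losing scores (hypothetical field names):

```javascript
// Per-season ratio of average winning score to average losing score.
// Assumes game records like { season: 1978, winnerPts: 24, loserPts: 17 }.
function ratioBySeason(games) {
  var bySeason = {};
  games.forEach(function(g) {
    var s = bySeason[g.season] || (bySeason[g.season] = { win: 0, lose: 0, n: 0 });
    s.win  += g.winnerPts;
    s.lose += g.loserPts;
    s.n    += 1;
  });
  return Object.keys(bySeason).map(function(season) {
    var s = bySeason[season];
    return { season: +season, ratio: (s.win / s.n) / (s.lose / s.n) };
  });
}
```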

The chart above shows how, after a period of incredibly lopsided victories, the average scoring differential settled into a very steady pattern by the late 1940s and stayed at that level (roughly 2:1) for the next 30 years. Despite many changes in rules, coaching techniques, technology, and other factors, only the pass interference rules of the late 1970s seem to have had any significant effect on this ratio, shifting it to just under 1.8:1 for the next 30 years.

While I had the data available, I also decided to look at the differences in average scores between home teams and away teams. The chart below plots this data along with the same overlay I used in the first chart.

A look at the ratio of average home team scores to average away team scores follows:

What’s fascinating about this chart is how quickly a form of parity was achieved among the NFL teams. By the mid-1930s, a measurable home field advantage can be seen at roughly 15%, a rate that has remained essentially constant for over 70 years. Factors behind this boost could include the psychological support of fans, familiar weather conditions, unique features of local facilities, lack of travel fatigue, referee bias, and/or increased motivation among home team players.

Thanks to Charles Martin Reid for his solution to getting D3 and WordPress to play nice.