Tuesday, October 27, 2015

Cluster Analysis

Updated November 2, 2015

Cluster analysis is a descriptive tool to find interesting subgroups of participants in your data. It's somewhat analogous to sorting one's disposables at a recycling center: glass items go in one bin, clear plastic in another, opaque plastic in another, etc. The items in any one bin are similar to each other, but the contents of one bin are different from those in another bin.

Typically, the researcher will have a domain of interest, such as political attitudes, dining preferences, or goals in life. Several items (perhaps 5-10) in the relevant domain will be submitted to cluster analysis. Participants who answer similarly to the items will be grouped into the same cluster, so that each cluster will be internally homogeneous, but different clusters will be different from each other.

Here's a very conceptual illustration, using the music-preference items from the 1993 General Social Survey...


The color-coding shows the beginning stage of dividing the respondents into clusters:
  • People whose responses are shaded orange tend to like big band, blues, Broadway musicals, and jazz, and dislike rap and heavy metal. 
  • Those in yellow really like jazz, and are moderately favorable toward country, blues, and rap.
  • Those in green appear to dislike music generally!
Unlike the other techniques we've learned thus far, there are no significance tests. However, after conducting a cluster analysis, the groups you derive can be compared on other variables via MANOVA, discriminant analysis, or log-linear modeling.

Though the general approach of cluster analysis is relatively straightforward, actual implementation is fairly technical. There are two main approaches -- k-means/iterative and hierarchical -- which will be discussed below. Key to both methods is determining the similarity (or, conversely, distance) between cases. The more similar cases are to each other, the more likely they are to end up in the same cluster.

k-means/Iterative -- This approach is spatial, very much like discriminant analysis. One must specify the number (k) of clusters one seeks in an analysis, each one having a centroid (again like discriminant analysis). Cases are sorted into groups (clusters), based on which centroid they're closest to. The analysis goes through an iterative (repetitive) process of relocating centroids and determining data-points' distance from them, until the solution doesn't change anymore, as I illustrate in the following graphic.


Methods for locating initial centroids are discussed here. Naftali Harris has an excellent interactive webpage called "Visualizing K-Means Clustering," which illustrates many of the above steps.
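If you'd like to tinker with k-means outside of SPSS, here is a minimal sketch in Python using scikit-learn. The data frame and item names below are hypothetical stand-ins for the music-preference ratings, not the actual GSS data; the n_init argument controls how many different random starting centroids are tried, with the best-fitting solution kept.

```python
# Minimal k-means sketch (scikit-learn); data and item names are hypothetical
import pandas as pd
from sklearn.cluster import KMeans

# Each row is a respondent, each column a 1-5 preference rating (made-up values)
df = pd.DataFrame({
    "bigband": [1, 2, 5, 4, 1, 2],
    "blues":   [2, 1, 5, 5, 1, 1],
    "jazz":    [1, 2, 4, 5, 2, 1],
    "rap":     [5, 4, 1, 2, 1, 1],
})

# Ask for k = 3 clusters; n_init restarts the algorithm from different
# initial centroids and keeps the best solution
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(df)

print(km.labels_)           # cluster membership for each respondent
print(km.cluster_centers_)  # final centroid (mean profile) of each cluster
```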

There are different criteria for defining the distance between clusters, such as single-linkage, average-linkage, etc. (See slide 17 of this slideshow.)

Hierarchical -- This approach uses a dendrogram (tree diagram), which looks like the sports-tournament brackets that college-basketball fans fill out every March with their predictions. As Rapkin and Luke (1993) describe:

Agglomerative hierarchical algorithms start with all cases as separate entities. Cases are combined (agglomerated) in sequence, so that those closest together are placed into the same cluster early in the hierarchy. As the analysis proceeds, small clusters of cases combine to form continually larger and more heterogeneous clusters, until all cases are joined into a single cluster (p. 267).

A particular kind of hierarchical clustering technique is Ward's method, which is said to be conducive to balanced cluster sizes (i.e., each cluster having a roughly equal number of cases, rather than some huge and some tiny clusters).
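Again as a rough sketch rather than the course's SPSS procedure, here is how an agglomerative analysis with Ward's method might look in Python using SciPy; the ratings matrix below is hypothetical.

```python
# Hierarchical (agglomerative) clustering with Ward's method (SciPy);
# the respondent-by-item ratings are made up for illustration
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

ratings = np.array([
    [1, 2, 1, 5],
    [2, 1, 2, 4],
    [5, 5, 4, 1],
    [4, 5, 5, 2],
    [1, 1, 2, 1],
    [2, 1, 1, 1],
])

Z = linkage(ratings, method="ward")   # agglomeration history behind the dendrogram
dendrogram(Z)                         # draw the tree (each leaf is one case)
plt.show()

labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters
print(labels)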

An "oldie, but goody" introductory article on cluster analysis is the following. The software descriptions are obviously out-of-date, but the general overview is excellent. Of particular value, in my view, is the set of recommendations for determining the number of clusters to retain (pp. 268-270).

Rapkin, B., & Luke, D. (1993). Cluster analysis in community research: Epistemology and practice. American Journal of Community Psychology, 21, 247-277.

An article that illustrates the use of cluster analysis, including how to characterize and name the clusters, is:

Schrick, B., Sharp, E. A., Zvonkovic, A., & Reifman, A. (2012). Never let them see you sweat: Silencing and striving to appear perfect among US college women. Sex Roles, 67, 591-604.

One final issue is the stability of cluster solutions. Even within k-means/iterative methods alone, or hierarchical methods alone, there are many ways to implement cluster analysis. To ensure your cluster solution is not merely the peculiar result of one method, you can use more than one method with the same dataset (e.g., one k-means/iterative method and one hierarchical method). You can save the assigned memberships in SPSS for both methods and then run a cross-tab of these memberships to verify that (for the most part) the same people end up grouped together across the two solutions.
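Here is a hedged sketch of that stability check in Python, continuing the hypothetical ratings data frame (df) from the k-means sketch above: one k-means solution, one Ward solution, and a cross-tab of the two membership vectors.

```python
# Cross-tabulating two cluster solutions on the same (hypothetical) data
import pandas as pd
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(df)
ward_labels = fcluster(linkage(df, method="ward"), t=3, criterion="maxclust")

# Rows are k-means clusters, columns are Ward clusters
print(pd.crosstab(kmeans_labels, ward_labels,
                  rownames=["k-means"], colnames=["Ward"]))
```

If the solution is stable, most of the people in any one k-means cluster should land in a single Ward cluster, so the cross-tab will show a few large counts and many near-zero cells.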

As an analogy, think of the "Sorting Hat" in the Harry Potter series, which assigns new students at Hogwarts School into one of the four houses (clusters). Imagine that Headmaster Dumbledore decides to run a quality check on the Sorting Hat, bringing in another hat to independently conduct a second sorting of the kids into houses, so it can be seen if the two hats arrive at similar solutions. In the following hypothetical set of results, the two hats indeed arrive at largely similar solutions, although there are a few disagreements.


And, of course, we have a song...

Run Me Off a Cluster
Lyrics by Alan Reifman
May be sung to the tune of “Hey, Soul Sister” (Monahan/Bjørklund/Lind)

O-K, what we’ll learn, today,
What we’ll learn, today,

Are there, groupings?
Of participants, with the same things?
We use, formulas, for distance,
Compactness, is our insistence,
Within each cluster,

But different sets,
Should be as, far apart as,
Distance gets, oh yeah,
So pick an operation,
There are two main, realizations,
Of clustering techniques,

Hey there, Buster,
Run me off, a cluster...ing, analysis,
A synthesis, to help group our participants,

Hey there, Buster,
Run me off, a cluster…ing, analysis,
Tonight,

Hey, hey,
Hey, hey, hey, hey, hey,
Hey, hey, hey, hey, hey,

A dendrogram,
Is at the heart of,
Hierarchy-based plans, oh yeah,
Each case starts out, as a cluster,
Into pairs, they all will muster,
Then form, larger groups,

In number space,
Clusters and k-mean spots,
Will take their place, oh yeah,
Testing maps by, iteration,
Till it hits, the final station,
Then we’ll all take, a vacation,

Hey there, Buster,
Run me off, a cluster...ing, analysis,
A synthesis, to help group our participants,

Hey there, Buster,
Run me off, a cluster…ing, analysis,
Tonight,

There’s no official way,
To decide, how many clusters stay,
There are some criteria,
To take consideration of,

Your clusters, need interpretation,
On items, from their derivation,
I hope you find, some cool combinations,

Hey there, Buster,
Run me off, a cluster...ing, analysis,
A synthesis, to help group our participants,

Hey there, Buster,
Run me off, a cluster…ing, analysis,
Tonight,

Hey, hey,
Hey, hey, hey, hey, hey,
Hey, hey, hey, hey, hey,

Tonight,

Hey, hey,
Hey, hey, hey, hey, hey,
Hey, hey, hey, hey, hey,

Tonight...

Wednesday, October 14, 2015

Log-Linear Modeling

Log-linear modeling, which is used when one has a set of entirely nominal/categorical variables, is an extension of the chi-square analysis for two nominal variables. As you'll recall, with chi-square analyses, we compare the actual/observed frequencies of people within each cell with the expected frequencies. With log-linear modeling (LLM), one can analyze three, four, or more nominal variables in relationship to each other. The name comes from the fact that LLM uses logarithms in the calculations (along with odds and odds ratios, like logistic regression).

Many sources state that with LLM there is no distinction between independent and dependent variables. I think it's still OK to think of IVs (predictors) in relation to a DV, however. In the example below (from the 1993 General Social Survey), the DV is political party identification (collapsed into Democratic [strong, weak], Republican [strong, weak], and Independent ["pure" Independent, Ind. Lean Dem., and Ind. Lean Repub.]). Predictors are religious affiliation (collapsing everyone other than Protestant and Catholic into "All Other & None"), college degree (yes/no), and gender.

Variables are typically represented by their initials (P, R, C, and G in our example). Further, putting two or more letters together (with no comma) signifies relationships among the respective variables. By convention, one's starting (or baseline) model posits that the DV (in this case, party identification) is unrelated to the three predictors, but the predictors are allowed to relate to each other. The symbolism [P, RCG] describes the hypothesis that one's decision to self-identify as a Democrat, Republican, or Independent (and make one's voting decisions accordingly) is not influenced in any way by one's religious affiliation, attainment (or not) of a college degree, or gender. However, any relationships in the data between predictors are taken into account. Putting the three predictors together (RCG) also allows for three-way relationships or interactions, such as Catholic females having a high rate of earning bachelor's degrees (I have no idea whether that's actually true). A three-way interaction term (e.g., Religion X College X Gender) also includes all the two-way interactions (RC, RG, CG) contained within it.

The orange column in the chart below shows us how many respondents actually appeared in each of the 36 cells representing combinations of political party (3) X religion (3) X college-degree (2) X gender (2).

The next column to the right shows us the expected frequencies generated by the [P, RCG] baseline model. We would not expect this model to do a great job of predicting cell frequencies, because it does not allow party ID to be predicted by religion, college, or gender. Indeed, the expected frequencies under this model do not match the actual frequencies very well. I have highlighted in purple any cell in which the expected frequency comes within +/- 3 people of the actual frequency (the +/- 3 criterion is arbitrary; I just thought it would give a good feel for how well a given model does). The [P, RCG] model produces only 7 purple cells out of a possible 36. Each model also generates a chi-square value (use the likelihood-ratio version). As a reminder from previous stat classes, chi-square represents discrepancy (O-E) or "badness of fit," so a highly significant chi-square value for a given model signifies a poor match to the actual frequencies. Significance levels for each model are indicated in the respective red boxes atop each column (***p < .001, **p < .01, *p < .05).

After running the baseline model and obtaining its chi-square value, we then move on to more complex models that add relationships or linkages between the predictors and DV. The second red column shows expected frequencies for the model [PR, RCG]. This model keeps the previous RCG combination, but now adds a relationship between party (P) and religious (R) affiliation. If there is some relationship between party and religion, such as Protestants being more likely than other religious groups to identify as a Republican, the addition of the PR term will result in a substantial improvement in the match between expected frequencies for this model and the actual frequencies. Indeed, the [PR, RCG] model produces 16 well-fitting (purple) cells, a much better performance than the previous model. (Adding linkages such as PR instead of just P will either improve the fit or leave it the same; it cannot harm fit.)
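To make the model notation concrete, here is a rough sketch of how the [P, RCG] and [PR, RCG] models could be fit as Poisson log-linear models in Python (statsmodels). The post's analyses were run in SPSS, so this is just an equivalent formulation, and the cell counts below are made up for illustration rather than taken from the actual GSS table.

```python
# Fitting log-linear models as Poisson GLMs on cell counts (statsmodels);
# the counts here are hypothetical placeholders
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per cell of the Party (3) x Religion (3) x College (2) x Gender (2)
# table: 36 rows in all
levels = {"P": ["Dem", "Rep", "Ind"],
          "R": ["Prot", "Cath", "Other"],
          "C": ["Degree", "NoDegree"],
          "G": ["Female", "Male"]}
cells = pd.DataFrame(list(itertools.product(*levels.values())),
                     columns=list(levels))
cells["count"] = np.random.default_rng(0).integers(5, 60, size=len(cells))

# [P, RCG]: party ID unrelated to the predictors; predictors free to interrelate
baseline = smf.glm("count ~ P + R*C*G", data=cells,
                   family=sm.families.Poisson()).fit()

# [PR, RCG]: adds the party-by-religion linkage
pr_model = smf.glm("count ~ P*R + R*C*G", data=cells,
                   family=sm.families.Poisson()).fit()

# The deviance is each model's likelihood-ratio chi-square ("badness of fit")
print(baseline.deviance, baseline.df_resid)
print(pr_model.deviance, pr_model.df_resid)

# Model-implied (expected) cell frequencies, to set beside the observed counts
print(pr_model.fittedvalues)
```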


Let's step back a minute and consider all the elements in the [P, RCG] and [PR, RCG] models:

[P, RCG]: P, RCG, RC, RG, CG, R, C, G

[PR, RCG]: PR, P, RCG, RC, RG, CG, R, C, G

Notice that all the terms in the first model are included within the second model, but the second model has one additional term (PR). The technical term is that the first model is nested within the second. Nestedness is required to conduct some of the statistical comparisons we will discuss later.

If we look at the model [PC, RCG], we see that it contains:

PC, P, RCG, RC, RG, CG, R, C, G

These two models, [PR, RCG] and [PC, RCG], are not nested. To go from [PR, RCG] to [PC, RCG], you would have to delete the PR term (because the latter model doesn't have PR) and add the PC term. When you have to both add and subtract terms, the two models are not nested.

Let's return to discussing models that allow R, C, and/or G to relate to P. As noted above, adding more linkages will improve the fit between actual and expected frequencies. However, we want to add as few linkages as possible in order to keep the model as simple or parsimonious as possible.

The next model in the above chart is [PC, RCG], which allows college-degree status (but no other variables) to predict party ID. There's not much extra bang (9 purple cells) for the buck (using PC instead of just P). The next model, [PG, RCG], which specifies gender as the sole predictor of party ID, yields 11 purple cells. If you could have only one predictor relate to party ID, the choice would be religion (16 purple cells).

We're not so limited, however. We can allow two or even all three predictors to relate to party ID. The fifth red column presents [PRC, RCG], which allows religion, college degree, and the two combined to predict party ID. Perhaps being a college-educated Catholic is disproportionately associated with identifying as a Democrat (again, I don't know if this is actually true). As with all the previous models, the RCG term allows all the predictors to relate to each other. As it turns out, [PRC, RCG] is the best model of all the ones tested, yielding 18 purple cells. The other two-predictor models, [PRG, RCG] and [PCG, RCG], don't do quite as well.

The final model, on the far right (spatially, not politically), is known as [PRCG]. It allows religion, college degree, and gender -- individually and in combination -- to predict party ID. In this sense, it's a four-way interaction. As noted, a given interaction includes all lower-order terms, so [PRCG] also includes PRC, PRG, PCG, RCG, PR, PC, PG, RC, RG, CG, P, R, C, and G. Inclusion of all possible terms, as is the case here, is known as a saturated model. A saturated model will yield estimated frequencies that match the actual frequencies perfectly. It's no great accomplishment; it's a mathematical necessity. (Saturation and perfect fit also feature prominently in the next course in our statistical sequence, Structural Equation Modeling.)

Ideally, among the models tested, at least one non-saturated model will show a non-significant chi-square (badness of fit) on its own. That didn't happen in the present set of models, but the model I characterized above as the best, [PRC, RCG], is "only" significant at p < .05, compared to p < .001 for all the other non-saturated models. Also, as shown in the following table, [PRC, RCG] fits significantly better than the baseline [P, RCG] by what is known as the delta chi-square test. Models must be nested within each other for such a test to be permissible. (For computing degrees of freedom, see Knoke & Burke, 1980, Log-Linear Models, Sage, pp. 36-37.)
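Continuing the hypothetical statsmodels sketch above, the delta chi-square test amounts to taking the difference between the two nested models' likelihood-ratio chi-squares (deviances) and testing it against a chi-square distribution with df equal to the difference in the models' residual df.

```python
# Delta chi-square comparison of nested models, continuing the earlier sketch
from scipy import stats

# [PRC, RCG]: religion, college degree, and their combination predict party ID
prc_model = smf.glm("count ~ P*R*C + R*C*G", data=cells,
                    family=sm.families.Poisson()).fit()

delta_chisq = baseline.deviance - prc_model.deviance   # improvement in fit
delta_df = baseline.df_resid - prc_model.df_resid      # df spent on the new terms
p_value = stats.chi2.sf(delta_chisq, delta_df)
print(delta_chisq, delta_df, p_value)
```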


When you tell SPSS to run the saturated model, it automatically gives you a supplemental backward-elimination analysis, which is described here. This is another way to help decide which model best approximates the actual frequencies.

My colleagues and I used log-linear modeling in one of our articles:

Fitzpatrick, J., Sharp, E. A., & Reifman, A. (2009). Midlife singles' willingness to date partners with heterogeneous characteristics. Family Relations, 58, 121-133.

Finally, we have a song:

Log-Linear Models 
Lyrics by Alan Reifman
May be sung to the tune of “I Think We’re Alone Now” (Ritchie Cordell; performed by Tommy James and others)

Below, Dr. Reifman chats with Tommy James, who performed at the 2013 South Plains Fair and was kind enough to stick around and visit with fans and sign autographs. Dr. Reifman tells Tommy about how he (Dr. Reifman) has written statistical lyrics to Tommy's songs for teaching purposes.  




Chi-square, two-way, is what we're used, to analyzing,
But, what if you've, say, three or four nominal variables?

Reading all the stat books that you can, seeking out what you can understand,
Trying to find techniques, specifically for, multi-way categorical data,
And you finally find a page, and there it says:

Log-linear models,
You try to re-create, the known frequencies,
Log-linear models,
You try to use as few, hypothesized links,

Each step of the way, you let it use associations,
You build an array, until the point of saturation,

Reading all the stat books that you can, seeking out what you can understand,
Trying to find techniques, specifically for, multi-way categorical data,
And you finally find a page, and there it says:

Log-linear models,
You try to re-create, the known frequencies,
Log-linear models,
You try to use as few, hypothesized links,

Log-linear models,
You try to re-create, the known frequencies,
Log-linear models,
You try to use as few, hypothesized links,

Log-linear models,
You try to re-create, the known frequencies,
Log-linear models,
You try to use as few, hypothesized links,

Log-linear models,
You try to re-create, the known frequencies,
Log-linear models,
You try to use as few, hypothesized links,