Clustergram
Clustergram class mimicking the interface of a clustering class (e.g. KMeans).
Clustergram is a graph used to examine how cluster members are assigned to clusters as the number of clusters increases. This graph is useful in exploratory analysis for nonhierarchical clustering algorithms such as k-means and for hierarchical clustering algorithms when the number of observations is large enough to make dendrograms impractical.
Clustergram offers three backends for the computation: scikit-learn and scipy, which use the CPU, and RAPIDS.AI cuML, which uses the GPU. Note that all of them are optional dependencies, but you will need at least one of them to generate a clustergram.
Alternatively, you can create a clustergram using the from_data or from_centers methods based on alternative clustering algorithms.
iterable of integer values to be tested as k (number of clusters or components). Not required for hierarchical clustering but will be applied if given. It is recommended to always use a limited range for hierarchical methods, as an unlimited clustergram can take a while to compute and is not legible for a large number of observations.
Specify the computational backend. Defaults to sklearn for the 'kmeans', 'gmm', and 'minibatchkmeans' methods and to 'scipy' for any of the hierarchical clustering methods. 'scipy' uses sklearn for the PCA computation if that is required. sklearn does the computation on the CPU, cuml on the GPU.
Clustering method.
kmeans uses K-Means clustering, either as sklearn.cluster.KMeans or cuml.KMeans.
gmm uses Gaussian Mixture Model as sklearn.mixture.GaussianMixture.
minibatchkmeans uses Mini Batch K-Means as sklearn.cluster.MiniBatchKMeans.
hierarchical uses hierarchical/agglomerative clustering as scipy.cluster.hierarchy.linkage. See the scipy documentation for details.
Note that gmm and minibatchkmeans are currently supported only with the sklearn backend.
Print progress and time of individual steps.
Additional arguments passed to the model (e.g. KMeans), such as random_state. Pass linkage to specify the linkage method in the case of hierarchical clustering (e.g. linkage='ward'). See the scipy documentation for details.
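The hierarchical backend builds a single scipy linkage tree and then derives labels for each tested k from it. A minimal scipy-only sketch of that idea (an illustration under stated assumptions, not the library's actual code):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Two tight pairs of points; ward linkage builds the merge tree once.
data = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.2]])
Z = linkage(data, method="ward")

# Cut the same tree at successive k values to get one labelling per k.
labels_per_k = {k: fcluster(Z, t=k, criterion="maxclust") for k in (1, 2, 3)}
print(labels_per_k[2])  # the two pairs fall into separate clusters
```

Because the tree is computed once, cutting it at many values of k is cheap; the cost of an unlimited k range comes from bookkeeping and plotting, not from re-clustering.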
References
The clustergram: A graph for visualizing hierarchical and nonhierarchical cluster analyses: https://journals.sagepub.com/doi/10.1177/1536867X0200200405
Tal Galili’s R implementation: https://www.r-statistics.com/2010/06/clustergram-visualization-and-diagnostics-for-cluster-analysis-r-code/
Examples
>>> c_gram = clustergram.Clustergram(range(1, 9))
>>> c_gram.fit(data)
>>> c_gram.plot()
Specifying parameters:
>>> c_gram2 = clustergram.Clustergram(
...     range(1, 9), backend="cuML", random_state=0
... )
>>> c_gram2.fit(cudf_data)
>>> c_gram2.plot(figsize=(12, 12))
DataFrame with cluster labels for each iteration.
Dictionary with cluster centers for each iteration.
Linkage object for hierarchical methods.
Methods
bokeh([fig, size, line_width, …])
Generate interactive clustergram plot based on cluster centre mean values using Bokeh.
calinski_harabasz_score()
Compute the Calinski and Harabasz score.
davies_bouldin_score()
Compute the Davies-Bouldin score.
fit(data, **kwargs)
Compute clustering for each k within set range.
from_centers(cluster_centers, labels[, data])
Create a clustergram based on a cluster centers dictionary and a labels DataFrame.
from_data(data, labels[, method])
Create a clustergram based on data and a labels DataFrame.
plot([ax, size, linewidth, cluster_style, …])
Generate clustergram plot based on cluster centre mean values.
silhouette_score(**kwargs)
Compute the mean Silhouette Coefficient of all samples.
Requires bokeh.
bokeh figure on which to draw the plot
multiplier of the size of a cluster centre indication. Size is determined as 50 / count of observations in a cluster multiplied by size.
multiplier of the linewidth of a branch. Line width is determined as 50 / count of observations in a branch multiplied by line_width.
Style options to be passed on to the cluster centre plot, such as color, line_width, line_color or alpha.
Style options to be passed on to branches, such as color, line_width, line_color or alpha.
Size of the resulting bokeh.plotting.figure.Figure. If the argument figure is given explicitly, figsize is ignored.
Whether to use the PCA-weighted mean of clusters or the standard mean of clusters on the y-axis.
Additional arguments passed to the PCA object, e.g. svd_solver. Applies only if pca_weighted=True.
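The PCA-weighted option can be illustrated with a rough numpy-only sketch: each cluster mean is weighted by the loadings of the first principal component, so the y-axis reflects the dominant axis of variation rather than a plain average. This is a simplified illustration; the library's exact computation may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=(100, 3))
data[:, 0] *= 5  # make the first variable dominate the first component
labels = (data[:, 0] > 0).astype(int)  # a toy 2-cluster labelling

# First principal component via SVD of the centred data.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
weights = vt[0]  # loadings of the first principal component

# PCA-weighted mean of each cluster: project its mean onto the loadings.
weighted_means = [data[labels == c].mean(axis=0) @ weights for c in (0, 1)]
print(weighted_means)
```

Because the clusters here differ mainly along the dominant variable, their PCA-weighted means land on opposite sides of zero, which is exactly the kind of separation the weighted y-axis is meant to expose.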
Notes
Before plotting, Clustergram needs to compute the summary values. Those are computed on the first call of each option (pca_weighted=True/False).
>>> from bokeh.plotting import show
>>> c_gram = clustergram.Clustergram(range(1, 9))
>>> c_gram.fit(data)
>>> f = c_gram.bokeh()
>>> show(f)
For the best experience in Jupyter notebooks, specify bokeh output first:
>>> from bokeh.io import output_notebook
>>> from bokeh.plotting import show
>>> output_notebook()
>>> c_gram = clustergram.Clustergram(range(1, 9))
>>> c_gram.fit(data)
>>> f = c_gram.bokeh()
>>> show(f)
See the documentation of sklearn.metrics.calinski_harabasz_score for details.
Once computed, the resulting Series is available as Clustergram.calinski_harabasz. Calling the original method will recompute the score from scratch.
The algorithm uses sklearn. With the cuML backend, data are converted on the fly.
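As a standalone illustration of the underlying metric (assuming scikit-learn is installed, and using sklearn.metrics.calinski_harabasz_score directly rather than through Clustergram): well-separated, compact clusters yield a high score.

```python
import numpy as np
from sklearn.metrics import calinski_harabasz_score

# Two compact, well-separated blobs with a correct 2-cluster labelling.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 0.5, (25, 2)), rng.normal(3, 0.5, (25, 2))])
labels = np.repeat([0, 1], 25)

score = calinski_harabasz_score(X, labels)
print(score)  # large value: between-cluster variance dwarfs within-cluster
```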
>>> c_gram = clustergram.Clustergram(range(1, 9))
>>> c_gram.fit(data)
>>> c_gram.calinski_harabasz_score()
2      23.176629
3      30.643018
4      55.223336
5    3116.435184
6    3899.068689
7    4439.306049
Name: calinski_harabasz_score, dtype: float64
Once computed:
>>> c_gram.calinski_harabasz
2      23.176629
3      30.643018
4      55.223336
5    3116.435184
6    3899.068689
7    4439.306049
Name: calinski_harabasz_score, dtype: float64
See the documentation of sklearn.metrics.davies_bouldin_score for details.
Once computed, the resulting Series is available as Clustergram.davies_bouldin. Calling the original method will recompute the score.
>>> c_gram = clustergram.Clustergram(range(1, 9))
>>> c_gram.fit(data)
>>> c_gram.davies_bouldin_score()
2    0.249366
3    0.351812
4    0.347580
5    0.055679
6    0.030516
7    0.025207
Name: davies_bouldin_score, dtype: float64
>>> c_gram.davies_bouldin
2    0.249366
3    0.351812
4    0.347580
5    0.055679
6    0.030516
7    0.025207
Name: davies_bouldin_score, dtype: float64
Input data to be clustered. It is expected that data are scaled. Can be numpy.array, pandas.DataFrame or their RAPIDS counterparts.
Additional arguments passed to the .fit() method of the model, e.g. sample_weight.
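With the sklearn backend, for instance, sample_weight is forwarded to the model's fit, where heavier observations pull their cluster centre towards them. A small sketch using sklearn.cluster.KMeans directly (not through Clustergram):

```python
import numpy as np
from sklearn.cluster import KMeans

# Four 1-D points forming two obvious clusters: {0, 1} and {10, 11}.
X = np.array([[0.0], [1.0], [10.0], [11.0]])

km = KMeans(n_clusters=2, n_init=10, random_state=0)
# Give the point at 11 five times the weight of the others.
km.fit(X, sample_weight=[1, 1, 1, 5])

centers = sorted(km.cluster_centers_.ravel())
print(centers)  # the right-hand centre is the weighted mean (10 + 5*11)/6
```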
Fitted clustergram.
>>> c_gram = clustergram.Clustergram(range(1, 9))
>>> c_gram.fit(data)
dictionary of cluster centers with keys encoding the number of clusters and values being M x N arrays, where M == key and N == number of variables in the original dataset. Entries should be ordered based on keys.
DataFrame with columns representing cluster labels and rows representing observations. Columns must be equal to cluster_centers keys.
array used as an input of the clustering algorithm with N columns. Required for plot(pca_weighted=True) plotting option. Otherwise only plot(pca_weighted=False) is available.
The algorithm uses sklearn and pandas to generate the clustergram. The GPU option is not implemented.
>>> import pandas as pd
>>> import numpy as np
>>> labels = pd.DataFrame({1: [0, 0, 0], 2: [0, 0, 1], 3: [0, 2, 1]})
>>> labels
   1  2  3
0  0  0  0
1  0  0  2
2  0  1  1
>>> centers = {
...     1: np.array([[0, 0]]),
...     2: np.array([[-1, -1], [1, 1]]),
...     3: np.array([[-1, -1], [1, 1], [0, 0]]),
... }
>>> cgram = Clustergram.from_centers(centers, labels)
>>> cgram.plot(pca_weighted=False)
>>> data = np.array([[-1, -1], [1, 1], [0, 0]])
>>> cgram = Clustergram.from_centers(centers, labels, data=data)
>>> cgram.plot()
Cluster centers are created as mean or median values via a groupby aggregation over data using the individual labels.
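That groupby aggregation can be sketched in plain pandas (an illustration of the idea, not the library's actual implementation):

```python
import numpy as np
import pandas as pd

# Same toy inputs as the example below: 3 observations, 4 variables,
# and one labelling column per tested number of clusters.
data = pd.DataFrame([[-1, -1, 0, 10], [1, 1, 10, 2], [0, 0, 20, 4]])
labels = pd.DataFrame({1: [0, 0, 0], 2: [0, 0, 1], 3: [0, 2, 1]})

# For each k, cluster centres are the per-label means of the data.
centers = {
    k: data.groupby(labels[k]).mean().to_numpy() for k in labels.columns
}
print(centers[2])  # rows 0 and 1 averaged; row 2 forms its own centre
```

Swapping `.mean()` for `.median()` would correspond to the median-based option.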
array used as an input of the clustering algorithm in the (M, N) shape, where M == number of observations and N == number of variables.
Method of computation of cluster centres.
>>> import pandas as pd
>>> import numpy as np
>>> data = np.array([[-1, -1, 0, 10], [1, 1, 10, 2], [0, 0, 20, 4]])
>>> data
array([[-1, -1,  0, 10],
       [ 1,  1, 10,  2],
       [ 0,  0, 20,  4]])
>>> labels = pd.DataFrame({1: [0, 0, 0], 2: [0, 0, 1], 3: [0, 2, 1]})
>>> labels
   1  2  3
0  0  0  0
1  0  0  2
2  0  1  1
>>> cgram = Clustergram.from_data(data, labels)
>>> cgram.plot()
matplotlib axis on which to draw the plot
multiplier of the size of a cluster centre indication. Size is determined as 500 / count of observations in a cluster multiplied by size.
multiplier of the linewidth of a branch. Line width is determined as 50 / count of observations in a branch multiplied by linewidth.
Style options to be passed on to the cluster centre plot, such as color, linewidth, edgecolor or alpha.
Style options to be passed on to branches, such as color, linewidth, edgecolor or alpha.
Size of the resulting matplotlib.figure.Figure. If the argument ax is given explicitly, figsize is ignored.
iterable of integer values to be plotted. If None, Clustergram.k_range will be used. Has to be a subset of Clustergram.k_range.
See the documentation of sklearn.metrics.silhouette_score for details.
Once computed, the resulting Series is available as Clustergram.silhouette. Calling the original method will recompute the score from scratch.
Additional arguments passed to the silhouette_score function, e.g. sample_size.
>>> c_gram = clustergram.Clustergram(range(1, 9))
>>> c_gram.fit(data)
>>> c_gram.silhouette_score()
2    0.702450
3    0.644272
4    0.767728
5    0.948991
6    0.769985
7    0.575644
Name: silhouette_score, dtype: float64
>>> c_gram.silhouette
2    0.702450
3    0.644272
4    0.767728
5    0.948991
6    0.769985
7    0.575644
Name: silhouette_score, dtype: float64