

Hierarchical Clustering

  • July 15, 2023

Meet the Author : Mr. Bharani Kumar

Bharani Kumar Depuru is a well-known IT personality from Hyderabad. He is the Founder and Director of Innodatatics Pvt Ltd and 360DigiTMG. An IIT and ISB alumnus with more than 18 years of experience, he has held prominent positions at IT majors such as HSBC, ITC Infotech, Infosys, and Deloitte. He is a sought-after IT consultant specializing in Industrial Revolution 4.0 implementation, Data Analytics practice setup, Artificial Intelligence, Big Data Analytics, Industrial IoT, Business Intelligence, and Business Management. Bharani Kumar is also the chief trainer at 360DigiTMG, with more than ten years of training experience, and has been making the IT transition journey easy for his students. 360DigiTMG is at the forefront of delivering quality education, thereby bridging the gap between academia and industry.


Hierarchical clustering is performed with either the Agglomerative technique (which builds the hierarchy of clusters bottom-up) or the Divisive technique (which builds the hierarchy of clusters top-down).

Agglomerative:

Start by treating each data point as a separate cluster, and keep merging records and clusters until all records have been combined into one single large cluster.

Steps:

  • Start with 'n' clusters, where 'n' is the number of data points
  • At each step, merge two records, a record and a cluster, or two clusters, based on the distance criterion and the chosen linkage function (see the sketch below).
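A minimal sketch of the agglomerative approach, assuming scikit-learn is available (the toy data and the choice of three clusters are illustrative only):

# Agglomerative clustering with different linkage functions
# (assumes scikit-learn; data and n_clusters are illustrative).
import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.1],
              [5.2, 4.9], [9.0, 0.5], [9.1, 0.7]])

# 'linkage' controls how the distance between two clusters is measured
# at every merge step: 'ward', 'complete', 'average' or 'single'.
for linkage in ["ward", "complete", "average", "single"]:
    labels = AgglomerativeClustering(n_clusters=3, linkage=linkage).fit_predict(X)
    print(linkage, labels)

Ward linkage merges the pair of clusters that gives the smallest increase in total within-cluster variance, which is why it is the default in many libraries.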

Divisive:

  • Start by considering that all data points belong to one single cluster, and keep splitting into two groups at each step until every data point ends up as its own cluster.
  • Divisive clustering can be more efficient than agglomerative clustering.
  • At each step, split the cluster with the largest SSE value (a simplified sketch follows this list).
  • The splitting criterion can be Ward's criterion, or the Gini index in the case of categorical data.
  • A stopping criterion is used to decide when to terminate the splitting.
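A simplified divisive sketch, under the assumption that each split is performed with 2-means and that the cluster with the largest SSE is split next (the function name and data below are illustrative, not a standard API):

# Divisive (top-down) clustering sketch: repeatedly split the cluster with
# the largest SSE using 2-means until k clusters remain.
# Assumes NumPy and scikit-learn; names are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def divisive_clustering(X, k):
    clusters = [np.arange(len(X))]                 # start: one cluster with all points
    while len(clusters) < k:
        # pick the cluster with the largest SSE around its own centroid
        sse = [((X[idx] - X[idx].mean(axis=0)) ** 2).sum() for idx in clusters]
        target = clusters.pop(int(np.argmax(sse)))
        # split it into two groups with 2-means
        split = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[target])
        clusters.append(target[split == 0])
        clusters.append(target[split == 1])
    labels = np.empty(len(X), dtype=int)
    for label, idx in enumerate(clusters):
        labels[idx] = label
    return labels

rng = np.random.RandomState(0)
X = np.vstack([rng.rand(50, 2), rng.rand(50, 2) + 3.0])
print(divisive_clustering(X, k=2))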

After the algorithm has run, the number of clusters is chosen by examining the dendrogram. A dendrogram is a tree diagram that shows how the data points are grouped into a multi-level, nested hierarchy of clusters.
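A dendrogram can be drawn with SciPy's hierarchical-clustering utilities, as in this minimal sketch (the data and the cut threshold are illustrative assumptions):

# Build the agglomerative merge history, draw the dendrogram, and cut it
# at a chosen distance (assumes NumPy, SciPy and matplotlib).
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

X = np.random.RandomState(0).rand(20, 2)

Z = linkage(X, method="ward")        # full record of every merge step
dendrogram(Z)                        # the nested tree of clusters
plt.show()

# Cutting the tree at a distance threshold yields flat cluster labels.
labels = fcluster(Z, t=0.8, criterion="distance")
print(labels)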




Disadvantages of Hierarchical Clustering

Merges (or splits) made in earlier steps cannot be undone, and the algorithm does not work well on large datasets.

Types of Hierarchical Clustering

  • BIRCH - Balanced Iterative Reducing and Clustering using Hierarchies
  • CURE - Clustering Using REpresentatives
  • CHAMELEON - Hierarchical Clustering using Dynamic Modeling. This is a graph partitioning approach used in clustering complex structures.
  • Probabilistic Hierarchical Clustering
  • Generative Clustering Model
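Of these, BIRCH is available directly in scikit-learn; here is a minimal sketch (the parameter values and data are illustrative):

# BIRCH sketch (assumes scikit-learn; parameters and data are illustrative).
import numpy as np
from sklearn.cluster import Birch

X = np.random.RandomState(0).rand(200, 2)

# 'threshold' bounds the radius of each CF-subcluster and
# 'branching_factor' bounds the size of each CF-tree node.
model = Birch(threshold=0.1, branching_factor=50, n_clusters=3)
labels = model.fit_predict(X)
print(labels[:10])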



Density-Based Clustering: DBSCAN

  • Clustering based on a local cluster criterion
  • Can discover clusters of arbitrary shape and can handle outliers
  • Density parameters must be provided as the stopping condition

DBSCAN - Density-Based Spatial Clustering of Applications with Noise

Works on the basis of two parameters:

Eps - Maximum Radius of the neighbourhood

MinPts - Minimum number of points in the Eps-neighbourhood of a point

It works on the principle of density
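In scikit-learn, Eps and MinPts map to the eps and min_samples parameters; the values below are illustrative:

# DBSCAN sketch (assumes scikit-learn; eps and min_samples are illustrative).
import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.RandomState(0).rand(200, 2)

labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(X)   # eps = Eps, min_samples = MinPts

# Points labelled -1 are treated as noise/outliers.
print("clusters:", len(set(labels)) - (1 if -1 in labels else 0))
print("noise points:", int((labels == -1).sum()))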

(Figure: DBSCAN clustering illustration)



OPTICS

Ordering Points To Identify the Clustering Structure

Works on the principle of varying density of clusters

Two key aspects of OPTICS are the core distance of a point and its reachability distance.

(Figure: OPTICS core distance and reachability distance)

(Figure: the clusters obtained when the data above is subjected to OPTICS clustering)
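A minimal OPTICS sketch using scikit-learn (the parameter values are illustrative); the reachability plot it produces shows valleys that correspond to clusters of different densities:

# OPTICS sketch (assumes scikit-learn and matplotlib; parameters are illustrative).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import OPTICS

X = np.random.RandomState(0).rand(300, 2)

model = OPTICS(min_samples=10, xi=0.05, min_cluster_size=0.05).fit(X)

# Reachability distances in the cluster ordering: valleys = clusters.
plt.plot(model.reachability_[model.ordering_])
plt.ylabel("reachability distance")
plt.show()

print("cluster labels found:", set(model.labels_))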



Grid-Based Clustering Methods

Create a grid structure by dividing the data space into a fixed number of cells.

From the grid's cells, identify clusters.
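A toy sketch of the grid idea (not STING or CLIQUE themselves): bin the data into a fixed grid of cells, keep the dense cells, and merge neighbouring dense cells into clusters. The grid size and density threshold below are illustrative assumptions.

# Toy grid-based clustering sketch (assumes NumPy and SciPy; illustrative only).
import numpy as np
from scipy.ndimage import label

X = np.random.RandomState(0).rand(500, 2)

# Divide the 2-D data space into a 10 x 10 grid and count points per cell.
counts, xedges, yedges = np.histogram2d(X[:, 0], X[:, 1], bins=10)
dense = counts >= 8                      # density threshold per cell (illustrative)

# Connected components of dense cells form the grid-level clusters.
cell_labels, n_clusters = label(dense)
print("grid clusters found:", n_clusters)

# Map each point to the cluster of its cell (0 = sparse/noise cells).
ix = np.clip(np.digitize(X[:, 0], xedges) - 1, 0, 9)
iy = np.clip(np.digitize(X[:, 1], yedges) - 1, 0, 9)
point_labels = cell_labels[ix, iy]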

Challenges:

Uneven data distributions are difficult to handle.

The approach suffers from the curse of dimensionality, which makes it difficult to cluster high-dimensional data.

(Figure: grid-based clustering illustration)


Methods:

STING - STatistical INformation Grid approach.

CLIQUE - CLustering In QUEst - This is both a density-based and a grid-based subspace clustering algorithm.

Three broad categories of measurement in clustering:

(Figure: the three categories of clustering measures - External, Internal, and Relative)


External

Used to compare the clustering output against subject matter expertise (ground truth)

Four criteria for External Methods are:

  • Cluster Homogeneity - The higher the purity, the better the cluster formation.

  • Cluster Completeness - Objects that belong to the same cluster in the ground truth should be assigned to the same cluster in the output.

  • Ragbag better than Alien - Assigning a heterogeneous object (one that is very different from the remaining points of a cluster) to that cluster should be penalized more than assigning it to a rag bag / miscellaneous / "other" category.

  • Small cluster preservation - Splitting a large cluster into smaller clusters is much better than splitting a small cluster into smaller clusters.


Most Common External Measures

  • Matching-based measures
    • Purity
    • Maximum Matching
    • F-measure (Precision & Recall)
  • Entropy-based measures
    • Entropy of Clustering
    • Entropy of Partitioning
    • Conditional Entropy
    • Mutual Information
    • Normalized Mutual Information (NMI)
  • Pairwise measures
    • True Positive
    • False Negative
    • False Positive
    • True Negative
    • Jaccard Coefficient
    • Rand Statistic
    • Fowlkes-Mallows Measure
  • Correlation measures
    • Discretized Hubert Statistic
    • Normalized Discretized Hubert Statistic
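Several of these external measures are available in scikit-learn's metrics module; the ground-truth and predicted label vectors below are illustrative (the adjusted Rand index is used here as the readily available variant of the Rand statistic):

# External cluster evaluation sketch (assumes scikit-learn; labels are illustrative).
from sklearn.metrics import (normalized_mutual_info_score,
                             fowlkes_mallows_score,
                             adjusted_rand_score,
                             homogeneity_score,
                             completeness_score)

truth = [0, 0, 0, 1, 1, 1, 2, 2, 2]    # ground-truth partition
pred  = [0, 0, 1, 1, 1, 1, 2, 2, 0]    # clustering output

print("NMI:            ", normalized_mutual_info_score(truth, pred))
print("Fowlkes-Mallows:", fowlkes_mallows_score(truth, pred))
print("Adjusted Rand:  ", adjusted_rand_score(truth, pred))
print("Homogeneity:    ", homogeneity_score(truth, pred))
print("Completeness:   ", completeness_score(truth, pred))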

Internal

Internal measures evaluate the goodness of a clustering using only the data itself; the Silhouette coefficient is a common example.

Most common internal measures:

  • Beta-CV measure
  • Normalized Cut
  • Modularity
  • Relative measure - Silhouette Coefficient
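The Silhouette coefficient is easy to compute with scikit-learn; the data and the choice of K below are illustrative:

# Silhouette coefficient sketch (assumes scikit-learn; data and K are illustrative).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.RandomState(0).rand(200, 2)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Ranges from -1 (poor) to +1 (compact, well-separated clusters).
print("silhouette:", silhouette_score(X, labels))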


Relative

Compare the results of clustering obtained by different parameter settings of the same algorithm.

Clustering Assessment Methods

  • Spatial Histogram
  • Distance Distribution
  • Hopkins Statistic
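The Hopkins statistic measures clustering tendency; below is a minimal sketch under the common formulation H = Σu / (Σu + Σw), where u are nearest-neighbour distances from uniformly sampled points to the data and w are nearest-neighbour distances between sampled data points (the function name and sample size are illustrative):

# Hopkins statistic sketch (assumes NumPy and scikit-learn; m is illustrative).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def hopkins(X, m=50, seed=0):
    rng = np.random.RandomState(seed)
    n, d = X.shape
    nn = NearestNeighbors(n_neighbors=2).fit(X)

    # u: distance from m uniform random points to their nearest data point.
    U = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, d))
    u = nn.kneighbors(U, n_neighbors=1)[0].ravel()

    # w: distance from m sampled data points to their nearest other data point
    # (second neighbour, because the first neighbour is the point itself).
    idx = rng.choice(n, m, replace=False)
    w = nn.kneighbors(X[idx], n_neighbors=2)[0][:, 1]

    # Near 0.5 -> uniformly random data; near 1 -> strong clustering tendency.
    return u.sum() / (u.sum() + w.sum())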

(Figure: clustering assessment methods)

Finding K value in clustering

  • Bootstrapping Approach
  • Empirical Method
  • Elbow Method
  • Cross-Validation Method
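A minimal sketch of the Elbow Method, assuming scikit-learn and matplotlib (the range of K values and the data are illustrative):

# Elbow-method sketch (assumes scikit-learn and matplotlib; illustrative data).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

X = np.random.RandomState(0).rand(300, 2)

ks = range(1, 11)
inertia = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

# The 'elbow', where the within-cluster SSE stops dropping sharply,
# is taken as a reasonable value of K.
plt.plot(list(ks), inertia, marker="o")
plt.xlabel("K")
plt.ylabel("within-cluster SSE (inertia)")
plt.show()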

(Figure: methods for finding the K value)

