[Paper Translation] Clustering by Passing Messages Between Data Points

Paper title: Clustering by Passing Messages Between Data Points
Paper source: Science (2007)
Translator: [email protected] lab

Clustering by Passing Messages Between Data Points

Brendan J. Frey*, Delbert Dueck


Abstract

Clustering data by identifying a subset of representative examples is important for processing sensory signals and detecting patterns in data. Such “exemplars” can be found by randomly choosing an initial subset of data points and then iteratively refining it, but this works well only if that initial choice is close to a good solution. We devised a method called “affinity propagation,” which takes as input measures of similarity between pairs of data points. Real-valued messages are exchanged between data points until a high-quality set of exemplars and corresponding clusters gradually emerges. We used affinity propagation to cluster images of faces, detect genes in microarray data, identify representative sentences in this manuscript, and identify cities that are efficiently accessed by airline travel. Affinity propagation found clusters with much lower error than other methods, and it did so in less than one-hundredth the amount of time.


Main Text

Clustering data based on a measure of similarity is a critical step in scientific data analysis and in engineering systems. A common approach is to use data to learn a set of centers such that the sum of squared errors between data points and their nearest centers is small. When the centers are selected from actual data points, they are called “exemplars.” The popular k-centers clustering technique (1) begins with an initial set of randomly selected exemplars and iteratively refines this set so as to decrease the sum of squared errors. k-centers clustering is quite sensitive to the initial selection of exemplars, so it is usually rerun many times with different initializations in an attempt to find a good solution. However, this works well only when the number of clusters is small and chances are good that at least one random initialization is close to a good solution. We take a quite different approach and introduce a method that simultaneously considers all data points as potential exemplars. By viewing each data point as a node in a network, we devised a method that recursively transmits real-valued messages along edges of the network until a good set of exemplars and corresponding clusters emerges. As described later, messages are updated on the basis of simple formulas that search for minima of an appropriately chosen energy function. At any point in time, the magnitude of each message reflects the current affinity that one data point has for choosing another data point as its exemplar, so we call our method “affinity propagation.” Figure 1A illustrates how clusters gradually emerge during the message-passing procedure.
Fig. 1. How affinity propagation works. (A) Affinity propagation is illustrated for two-dimensional data points, where negative Euclidean distance (squared error) was used to measure similarity. Each point is colored according to the current evidence that it is a cluster center (exemplar). The darkness of the arrow directed from point i to point k corresponds to the strength of the transmitted message that point i belongs to exemplar point k. (B) “Responsibilities” r(i, k) are sent from data points to candidate exemplars and indicate how strongly each data point favors the candidate exemplar over other candidate exemplars. (C) “Availabilities” a(i, k) are sent from candidate exemplars to data points and indicate to what degree each candidate exemplar is available as a cluster center for the data point. (D) The effect of the value of the input preference (common for all data points) on the number of identified exemplars (number of clusters) is shown. The value that was used in (A) is also shown, which was computed from the median of the pairwise similarities.


Affinity propagation takes as input a collection of real-valued similarities between data points, where the similarity s(i, k) indicates how well the data point with index k is suited to be the exemplar for data point i. When the goal is to minimize squared error, each similarity is set to a negative squared error (Euclidean distance): For points x_i and x_k, s(i, k) = −∥x_i − x_k∥². Indeed, the method described here can be applied when the optimization criterion is much more general. Later, we describe tasks where similarities are derived for pairs of images, pairs of microarray measurements, pairs of English sentences, and pairs of cities. When an exemplar-dependent probability model is available, s(i, k) can be set to the log-likelihood of data point i given that its exemplar is point k. Alternatively, when appropriate, similarities may be set by hand.
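For the squared-error case, the similarity matrix can be built directly from the data. A minimal NumPy sketch (the function name is ours, not from the paper) computes s(i, k) = −∥x_i − x_k∥² for all pairs:

```python
import numpy as np

def negative_squared_distances(X):
    """Pairwise similarities s(i, k) = -||x_i - x_k||^2 for the rows of X."""
    diff = X[:, None, :] - X[None, :, :]   # shape (n, n, d)
    return -np.sum(diff ** 2, axis=-1)     # shape (n, n)

X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
S = negative_squared_distances(X)          # S[0, 1] == -1.0, S[0, 2] == -50.0
```

With this choice the matrix is symmetric, but nothing in the method requires that; s(i, k) may be filled in from any similarity measure.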

Rather than requiring that the number of clusters be prespecified, affinity propagation takes as input a real number s(k, k) for each data point k so that data points with larger values of s(k, k) are more likely to be chosen as exemplars. These values are referred to as “preferences.” The number of identified exemplars (number of clusters) is influenced by the values of the input preferences, but also emerges from the message-passing procedure. If a priori, all data points are equally suitable as exemplars, the preferences should be set to a common value—this value can be varied to produce different numbers of clusters. The shared value could be the median of the input similarities (resulting in a moderate number of clusters) or their minimum (resulting in a small number of clusters).
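Placing a shared preference on the diagonal of the similarity matrix can be sketched as follows; the helper name is illustrative, not from the paper, and the default follows the median rule described above:

```python
import numpy as np

def set_shared_preference(S, value=None):
    """Fill the diagonal s(k, k) with a common preference value.
    Default: the median of the off-diagonal similarities, which the
    authors suggest yields a moderate number of clusters."""
    S = S.astype(float).copy()
    if value is None:
        off_diag = S[~np.eye(S.shape[0], dtype=bool)]
        value = np.median(off_diag)
    np.fill_diagonal(S, value)
    return S

S = set_shared_preference(np.array([[0.0, -1.0, -4.0],
                                    [-1.0, 0.0, -9.0],
                                    [-4.0, -9.0, 0.0]]))
# diagonal becomes the median of {-1, -4, -1, -9, -4, -9}, i.e. -4
```

Passing a smaller `value` (e.g., the minimum similarity) biases the procedure toward fewer clusters.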


There are two kinds of message exchanged between data points, and each takes into account a different kind of competition. Messages can be combined at any stage to decide which points are exemplars and, for every other point, which exemplar it belongs to. The “responsibility” r(i, k), sent from data point i to candidate exemplar point k, reflects the accumulated evidence for how well-suited point k is to serve as the exemplar for point i, taking into account other potential exemplars for point i (Fig. 1B). The “availability” a(i, k), sent from candidate exemplar point k to point i, reflects the accumulated evidence for how appropriate it would be for point i to choose point k as its exemplar, taking into account the support from other points that point k should be an exemplar (Fig. 1C). r(i, k) and a(i, k) can be viewed as log-probability ratios. To begin with, the availabilities are initialized to zero: a(i, k) = 0. Then, the responsibilities are computed using the rule
r(i,k) \leftarrow s(i,k) - \max_{k' \neq k} \left\{ a(i,k') + s(i,k') \right\}
In the first iteration, because the availabilities are zero, r(i, k) is set to the input similarity between point i and point k as its exemplar, minus the largest of the similarities between point i and other candidate exemplars. This competitive update is data-driven and does not take into account how many other points favor each candidate exemplar. In later iterations, when some points are effectively assigned to other exemplars, their availabilities will drop below zero as prescribed by the update rule below. These negative availabilities will decrease the effective values of some of the input similarities s(i, k′) in the above rule, removing the corresponding candidate exemplars from competition. For k = i, the responsibility r(k, k) is set to the input preference that point k be chosen as an exemplar, s(k, k), minus the largest of the similarities between point i and all other candidate exemplars. This “self-responsibility” reflects accumulated evidence that point k is an exemplar, based on its input preference tempered by how ill-suited it is to be assigned to another exemplar.
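The responsibility update can be vectorized by noting that, for each row i, only the largest and second-largest values of a(i, k′) + s(i, k′) are ever needed. The following is our own NumPy sketch of the published rule:

```python
import numpy as np

def update_responsibilities(S, A):
    """r(i,k) <- s(i,k) - max_{k' != k} [ a(i,k') + s(i,k') ]."""
    AS = A + S
    n = S.shape[0]
    rows = np.arange(n)
    top = np.argmax(AS, axis=1)
    first = AS[rows, top]
    AS[rows, top] = -np.inf              # mask the max to expose the runner-up
    second = AS.max(axis=1)
    R = S - first[:, None]               # subtract the row max for every k ...
    R[rows, top] = S[rows, top] - second # ... except where k is the argmax itself
    return R

S = np.array([[-5.0, -1.0], [-1.0, -5.0]])  # two points, shared preference -5
R = update_responsibilities(S, np.zeros((2, 2)))
# with zero availabilities, each point sends positive responsibility
# to the other point and negative responsibility to itself
```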


Whereas the above responsibility update lets all candidate exemplars compete for ownership of a data point, the following availability update gathers evidence from data points as to whether each candidate exemplar would make a good exemplar:

a(i,k) \leftarrow \min \left\{ 0,\; r(k,k) + \sum_{i' \notin \{i,k\}} \max(0, r(i',k)) \right\}

The availability a(i, k) is set to the self-responsibility r(k, k) plus the sum of the positive responsibilities candidate exemplar k receives from other points. Only the positive portions of incoming responsibilities are added, because it is only necessary for a good exemplar to explain some data points well (positive responsibilities), regardless of how poorly it explains other data points (negative responsibilities). If the self-responsibility r(k, k) is negative (indicating that point k is currently better suited as belonging to another exemplar rather than being an exemplar itself), the availability of point k as an exemplar can be increased if some other points have positive responsibilities for point k being their exemplar. To limit the influence of strong incoming positive responsibilities, the total sum is thresholded so that it cannot go above zero. The “self-availability” a(k, k) is updated differently:
a(k,k) \leftarrow \sum_{i' \neq k} \max\{0, r(i',k)\}
This message reflects accumulated evidence that point k is an exemplar, based on the positive responsibilities sent to candidate exemplar k from other points.
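Both availability rules reduce to column sums over the positive parts of the responsibilities. A sketch (our implementation of the stated rules; only the off-diagonal entries are capped at zero):

```python
import numpy as np

def update_availabilities(R):
    """a(i,k) <- min{0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k))};
    a(k,k) <- sum_{i' != k} max(0, r(i',k))."""
    Rp = np.maximum(R, 0.0)
    np.fill_diagonal(Rp, R.diagonal())       # keep r(k,k) itself, not its positive part
    col = Rp.sum(axis=0)                     # r(k,k) + all positive incoming responsibilities
    A = np.minimum(0.0, col[None, :] - Rp)   # drop point i's own contribution, cap at zero
    np.fill_diagonal(A, col - R.diagonal())  # self-availability: no zero cap
    return A

R = np.array([[-4.0, 4.0], [4.0, -4.0]])
A = update_availabilities(R)
# each point's strong responsibility for the other raises that
# point's self-availability, while cross-availabilities go negative
```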


The above update rules require only simple, local computations that are easily implemented (2), and messages need only be exchanged between pairs of points with known similarities. At any point during affinity propagation, availabilities and responsibilities can be combined to identify exemplars. For point i, the value of k that maximizes a(i, k) + r(i, k) either identifies point i as an exemplar if k = i, or identifies the data point that is the exemplar for point i. The message-passing procedure may be terminated after a fixed number of iterations, after changes in the messages fall below a threshold, or after the local decisions stay constant for some number of iterations. When updating the messages, it is important that they be damped to avoid numerical oscillations that arise in some circumstances. Each message is set to λ times its value from the previous iteration plus 1 – λ times its prescribed updated value, where the damping factor λ is between 0 and 1. In all of our experiments (3), we used a default damping factor of λ = 0.5, and each iteration of affinity propagation consisted of (i) updating all responsibilities given the availabilities, (ii) updating all availabilities given the responsibilities, and (iii) combining availabilities and responsibilities to monitor the exemplar decisions and terminate the algorithm when these decisions did not change for 10 iterations.
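Putting the pieces together, one damped iteration plus the stopping rule described here might look as follows. This is a minimal self-contained sketch of the published procedure, not the authors' reference code, and the parameter names are ours:

```python
import numpy as np

def affinity_propagation(S, damping=0.5, max_iter=500, stable_iter=10):
    """Damped message passing; stop once the exemplar decisions
    argmax_k [a(i,k) + r(i,k)] are unchanged for `stable_iter` iterations."""
    n = S.shape[0]
    rows = np.arange(n)
    A = np.zeros((n, n))
    R = np.zeros((n, n))
    labels, run = None, 0
    for _ in range(max_iter):
        # (i) responsibilities given availabilities
        AS = A + S
        top = np.argmax(AS, axis=1)
        first = AS[rows, top]
        AS[rows, top] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[rows, top] = S[rows, top] - second
        R = damping * R + (1 - damping) * R_new   # lambda*old + (1-lambda)*new
        # (ii) availabilities given responsibilities
        Rp = np.maximum(R, 0.0)
        np.fill_diagonal(Rp, R.diagonal())
        col = Rp.sum(axis=0)
        A_new = np.minimum(0.0, col[None, :] - Rp)
        np.fill_diagonal(A_new, col - R.diagonal())
        A = damping * A + (1 - damping) * A_new
        # (iii) combine messages and monitor the exemplar decisions
        new_labels = np.argmax(A + R, axis=1)
        run = run + 1 if labels is not None and np.array_equal(new_labels, labels) else 0
        labels = new_labels
        if run >= stable_iter:
            break
    return labels

# six 1-D points forming two obvious groups; similarity = -squared distance
x = np.array([0.0, 1.0, 2.0, 10.0, 11.0, 12.0])
S = -(x[:, None] - x[None, :]) ** 2
S[np.diag_indices(6)] = np.median(S[~np.eye(6, dtype=bool)])  # shared preference
labels = affinity_propagation(S)  # the two groups end up with distinct exemplars
```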


Figure 1A shows the dynamics of affinity propagation applied to 25 two-dimensional data points (3), using negative squared error as the similarity. One advantage of affinity propagation is that the number of exemplars need not be specified beforehand. Instead, the appropriate number of exemplars emerges from the message-passing method and depends on the input exemplar preferences. This enables automatic model selection, based on a prior specification of how preferable each point is as an exemplar. Figure 1D shows the effect of the value of the common input preference on the number of clusters. This relation is nearly identical to the relation found by exactly minimizing the squared error (2).


We next studied the problem of clustering images of faces using the standard optimization criterion of squared error. We used both affinity propagation and k-centers clustering to identify exemplars among 900 grayscale images extracted from the Olivetti face database (3). Affinity propagation found exemplars with much lower squared error than the best of 100 runs of k-centers clustering (Fig. 2A), which took about the same amount of computer time. We asked whether a huge number of random restarts of k-centers clustering could achieve the same squared error. Figure 2B shows the error achieved by one run of affinity propagation and the distribution of errors achieved by 10,000 runs of k-centers clustering, plotted against the number of clusters. Affinity propagation uniformly achieved much lower error in more than two orders of magnitude less time. Another popular optimization criterion is the sum of absolute pixel differences (which better tolerates outlying pixel intensities), so we repeated the above procedure using this error measure. Affinity propagation again uniformly achieved lower error (Fig. 2C).
Fig. 2. Clustering faces. Exemplars minimizing the standard squared error measure of similarity were identified from 900 normalized face images (3). For a common preference of –600, affinity propagation found 62 clusters, and the average squared error was 108. For comparison, the best of 100 runs of k-centers clustering with different random initializations achieved a worse average squared error of 119. (A) The 15 images with highest squared error under either affinity propagation or k-centers clustering are shown in the top row. The middle and bottom rows show the exemplars assigned by the two methods, and the boxes show which of the two methods performed better for that image, in terms of squared error. Affinity propagation found higher-quality exemplars. (B) The average squared error achieved by a single run of affinity propagation and 10,000 runs of k-centers clustering, versus the number of clusters. The colored bands show different percentiles of squared error, and the number of exemplars corresponding to the result from (A) is indicated. (C) The above procedure was repeated using the sum of absolute errors as the measure of similarity, which is also a popular optimization criterion.


Many tasks require the identification of exemplars among sparsely related data, i.e., where most similarities are either unknown or large and negative. To examine affinity propagation in this context, we addressed the task of clustering putative exons to find genes, using the sparse similarity matrix derived from microarray data and reported in (4). In that work, 75,066 segments of DNA (60 bases long) corresponding to putative exons were mined from the genome of mouse chromosome 1. Their transcription levels were measured across 12 tissue samples, and the similarity between every pair of putative exons (data points) was computed. The measure of similarity between putative exons was based on their proximity in the genome and the degree of coordination of their transcription levels across the 12 tissues. To account for putative exons that are not exons (e.g., introns), we included an additional artificial exemplar and determined the similarity of each other data point to this “nonexon exemplar” using statistics taken over the entire data set. The resulting 75,067 × 75,067 similarity matrix (3) consisted of 99.73% similarities with values of –∞, corresponding to distant DNA segments that could not possibly be part of the same gene. We applied affinity propagation to this similarity matrix, but because messages need not be exchanged between points i and k if s(i, k) = –∞, each iteration of affinity propagation required exchanging messages between only a tiny subset (0.27% or 15 million) of data point pairs.


Figure 3A illustrates the identification of gene clusters and the assignment of some data points to the nonexon exemplar. The reconstruction errors for affinity propagation and k-centers clustering are compared in Fig. 3B. For each number of clusters, affinity propagation was run once and took 6 min, whereas k-centers clustering was run 10,000 times and took 208 hours. To address the question of how well these methods perform in detecting bona fide gene segments, Fig. 3C plots the true-positive (TP) rate against the false-positive (FP) rate, using the labels provided in the RefSeq database (5). Affinity propagation achieved significantly higher TP rates, especially at low FP rates, which are most important to biologists. At a FP rate of 3%, affinity propagation achieved a TP rate of 39%, whereas the best k-centers clustering result was 17%. For comparison, at the same FP rate, the best TP rate for hierarchical agglomerative clustering (2) was 19%, and the engineering tool described in (4), which accounts for additional biological knowledge, achieved a TP rate of 43%.
Fig. 3. Detecting genes. Affinity propagation was used to detect putative exons (data points) comprising genes from mouse chromosome 1. Here, squared error is not appropriate as a measure of similarity, but instead similarity values were derived from a cost function measuring proximity of the putative exons in the genome and co-expression of the putative exons across 12 tissue samples (3). (A) A small portion of the data and the emergence of clusters during each iteration of affinity propagation are shown. In each picture, the 100 boxes outlined in black correspond to 100 data points (from a total of 75,066 putative exons), and the 12 colored blocks in each box indicate the transcription levels of the corresponding DNA segment in 12 tissue samples. The box on the far left corresponds to an artificial data point with infinite preference that is used to account for nonexon regions (e.g., introns). Lines connecting data points indicate potential assignments, where gray lines indicate assignments that currently have weak evidence and solid lines indicate assignments that currently have strong evidence. (B) Performance on minimizing the reconstruction error of genes, for different numbers of detected clusters. For each number of clusters, affinity propagation took 6 min, whereas 10,000 runs of k-centers clustering took 208 hours on the same computer. In each case, affinity propagation achieved a significantly lower reconstruction error than k-centers clustering. (C) A plot of true-positive rate versus false-positive rate for detecting exons [using labels from RefSeq (5)] shows that affinity propagation also performs better at detecting biologically verified exons than k-centers clustering.


Affinity propagation’s ability to operate on the basis of nonstandard optimization criteria makes it suitable for exploratory data analysis using unusual measures of similarity. Unlike metric-space clustering techniques such as k-means clustering (1), affinity propagation can be applied to problems where the data do not lie in a continuous space. Indeed, it can be applied to problems where the similarities are not symmetric [i.e., s(i, k) ≠ s(k, i)] and to problems where the similarities do not satisfy the triangle inequality [i.e., s(i, k) < s(i, j) + s(j, k)]. To identify a small number of sentences in a draft of this manuscript that summarize other sentences, we treated each sentence as a “bag of words” (6) and computed the similarity of sentence i to sentence k based on the cost of encoding the words in sentence i using the words in sentence k. We found that 97% of the resulting similarities (2, 3) were not symmetric. The preferences were adjusted to identify (using λ = 0.8) different numbers of representative exemplar sentences (2), and the solution with four sentences is shown in Fig. 4A.
Fig. 4. Identifying key sentences and air-travel routing. Affinity propagation can be used to explore the identification of exemplars on the basis of nonstandard optimization criteria. (A) Similarities between pairs of sentences in a draft of this manuscript were constructed by matching words. Four exemplar sentences were identified by affinity propagation and are shown. (B) Affinity propagation was applied to similarities derived from air-travel efficiency (measured by estimated travel time) between the 456 busiest commercial airports in Canada and the United States—the travel times for both direct flights (shown in blue) and indirect flights (not shown), including the mean transfer time of up to a maximum of one stopover, were used as negative similarities (3). (C) Seven exemplars identified by affinity propagation are color-coded, and the assignments of other cities to these exemplars is shown. Cities located quite near to exemplar cities may be members of other more distant exemplars due to the lack of direct flights between them (e.g., Atlantic City is 100 km from Philadelphia, but is closer in flight time to Atlanta). (D) The inset shows that the Canada-USA border roughly divides the Toronto and Philadelphia clusters, due to a larger availability of domestic flights compared to international flights. However, this is not the case on the west coast as shown in (E), because extraordinarily frequent airline service between Vancouver and Seattle connects Canadian cities in the northwest to Seattle.


We also applied affinity propagation to explore the problem of identifying a restricted number of Canadian and American cities that are most easily accessible by large subsets of other cities, in terms of estimated commercial airline travel time. Each data point was a city, and the similarity s(i, k) was set to the negative time it takes to travel from city i to city k by airline, including estimated stopover delays (3). Due to headwinds, the transit time was in many cases different depending on the direction of travel, so that 36% of the similarities were asymmetric. Further, for 97% of city pairs i and k, there was a third city j such that the triangle inequality was violated, because the trip from i to k included a long stopover delay in city j so it took longer than the sum of the durations of the trips from i to j and j to k. When the number of “most accessible cities” was constrained to be seven (by adjusting the input preference appropriately), the cities shown in Fig. 4, B to E, were identified. It is interesting that several major cities were not selected, either because heavy international travel makes them inappropriate as easily accessible domestic destinations (e.g., New York City, Los Angeles) or because their neighborhoods can be more efficiently accessed through other destinations (e.g., Atlanta, Philadelphia, and Minneapolis account for Chicago’s destinations, while avoiding potential airport delays).


Affinity propagation can be viewed as a method that searches for minima of an energy function (7) that depends on a set of N hidden labels, c1,…, cN, corresponding to the N data points. Each label indicates the exemplar to which the point belongs, so that s(i, ci) is the similarity of data point i to its exemplar. ci = i is a special case indicating that point i is itself an exemplar, so that s(i, ci) is the input preference for point i. Not all configurations of the labels are valid; a configuration c is valid when for every point i, if some other point i′ has chosen i as its exemplar (i.e., ci′ = i), then i must be an exemplar (i.e., ci = i). The energy of a valid configuration is E(c) = −∑_{i=1}^{N} s(i, c_i). Exactly minimizing the energy is computationally intractable, because a special case of this minimization problem is the NP-hard k-median problem (8). However, the update rules for affinity propagation correspond to fixed-point recursions for minimizing a Bethe free-energy (9) approximation. Affinity propagation is most easily derived as an instance of the max-sum algorithm in a factor graph (10) describing the constraints on the labels and the energy function (2).
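The energy and the validity constraint can be checked directly; the following is a small sketch (the function name is ours) that raises an error for invalid configurations:

```python
import numpy as np

def configuration_energy(S, c):
    """E(c) = -sum_i s(i, c_i), defined only for *valid* configurations:
    whenever any point chooses k as its exemplar, c_k must equal k."""
    c = np.asarray(c)
    for k in set(c.tolist()):
        if c[k] != k:
            raise ValueError("invalid configuration: exemplar %d does not choose itself" % k)
    return -S[np.arange(len(c)), c].sum()

S = np.array([[-2.0, -1.0], [-1.0, -2.0]])  # preferences -2, pairwise similarity -1
e = configuration_energy(S, [0, 0])         # point 1 joins exemplar 0: E = -(-2 + -1) = 3
```

The configuration [1, 0] would be rejected, since point 0 names point 1 as its exemplar while point 1 does not name itself.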

In some degenerate cases, the energy function may have multiple minima with corresponding multiple fixed points of the update rules, and these may prevent convergence. For example, if s(1, 2) = s(2, 1) and s(1, 1) = s(2, 2), then the solutions c1 = c2 =1 and c1 = c2 = 2 both achieve the same energy. In this case, affinity propagation may oscillate, with both data points alternating between being exemplars and nonexemplars. In practice, we found that oscillations could always be avoided by adding a tiny amount of noise to the similarities to prevent degenerate situations, or by increasing the damping factor.


Affinity propagation has several advantages over related techniques. Methods such as k-centers clustering (1), k-means clustering (1), and the expectation maximization (EM) algorithm (11) store a relatively small set of estimated cluster centers at each step. These techniques are improved upon by methods that begin with a large number of clusters and then prune them (12), but they still rely on random sampling and make hard pruning decisions that cannot be recovered from. In contrast, by simultaneously considering all data points as candidate centers and gradually identifying clusters, affinity propagation is able to avoid many of the poor solutions caused by unlucky initializations and hard decisions. Markov chain Monte Carlo techniques (13) randomly search for good solutions, but do not share affinity propagation’s advantage of considering many possible solutions all at once.

Hierarchical agglomerative clustering (14) and spectral clustering (15) solve the quite different problem of recursively comparing pairs of points to find partitions of the data. These techniques do not require that all points within a cluster be similar to a single center and are thus not well-suited to many tasks. In particular, two points that should not be in the same cluster may be grouped together by an unfortunate sequence of pairwise groupings.


In (8), it was shown that the related metric k-median problem could be relaxed to form a linear program with a constant factor approximation. There, the input was assumed to be metric, i.e., nonnegative, symmetric, and satisfying the triangle inequality. In contrast, affinity propagation can take as input general nonmetric similarities. Affinity propagation also provides a conceptually new approach that works well in practice. Whereas the linear programming relaxation is hard to solve and sophisticated software packages need to be applied (e.g., CPLEX), affinity propagation makes use of intuitive message updates that can be implemented in a few lines of code (2).


Affinity propagation is related in spirit to techniques recently used to obtain record-breaking results in quite different disciplines (16). The approach of recursively propagating messages (17) in a “loopy graph” has been used to approach Shannon’s limit in error-correcting decoding (18, 19), solve random satisfiability problems with an order-of-magnitude increase in size (20), solve instances of the NP-hard two-dimensional phase-unwrapping problem (21), and efficiently estimate depth from pairs of stereo images (22). Yet, to our knowledge, affinity propagation is the first method to make use of this idea to solve the age-old, fundamental problem of clustering data. Because of its simplicity, general applicability, and performance, we believe affinity propagation will prove to be of broad value in science and engineering.
