This vignette introduces the Wasserstein and bottleneck distances between persistence diagrams and their implementations in {phutil}, adapted from Hera, by way of two tasks: validating the distances on small, tractable examples and benchmarking the implementations against those in {TDA}.
In addition to {phutil}, we use {ggplot2} to visualize the benchmark results. We also use the {tdaunif} package to generate larger point clouds and the {microbenchmark} package to perform benchmark tests.
library(phutil)
library(ggplot2)
Persistence diagrams are multisets (sets with multiplicity) of points in the plane that encode the interval decompositions of persistence modules obtained from filtrations of data (e.g. Vietoris–Rips filtrations of point clouds and cubical filtrations of numerical arrays). Most applications consider only ordinary persistent homology, so that all points live in the upper-half plane; and most involve non-negative-valued filtrations, so that all points live in the first quadrant. The examples in this vignette are no exception.
We’ll distinguish between persistence diagrams, which encode one degree of a persistence module, and persistence data, which comprises persistent pairs of many degrees (and annotated as such). Whereas a diagram is typically represented as a 2-column matrix with columns for birth and death values, data are typically represented as a 3-column matrix with an additional column for (whole number) degrees.
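To make the two representations concrete, here is a minimal sketch; the values are invented for display, and the dimension-first column layout mirrors the one used with {TDA} later in this vignette:

```r
# a persistence diagram: a 2-column matrix of (birth, death) pairs
pd <- rbind(
  c(0.0, 1.2),
  c(0.3, 0.8)
)
colnames(pd) <- c("birth", "death")
# persistence data: an extra column annotates each pair with its degree
pdata <- cbind(dimension = c(0, 1), pd)
```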
The most common distance metrics between persistence diagrams exploit the family of Minkowski distances Dp between points in ℝn defined, for 1 ≤ p < ∞, as follows:
$$ D_p(x,y) = \left(\sum_{i=1}^{n}{\lvert x_i - y_i \rvert^p}\right)^{1/p}. $$
In the limit p → ∞, this expression approaches the following auxiliary definition:
$$ D_\infty(x,y) = \max_{i=1}^{n}{\lvert x_i - y_i \rvert}. $$
As the parameter p ranges between 1 and ∞, three of its values yield familiar distance metrics: the taxicab distance D1, the Euclidean distance D2, and the Chebyshev distance D∞.
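These three metrics can be computed with a small helper function; minkowski() here is purely illustrative and not part of {phutil}:

```r
# Minkowski distance of order p; p = Inf yields the Chebyshev distance
minkowski <- function(x, y, p = 2) {
  if (is.infinite(p)) return(max(abs(x - y)))
  sum(abs(x - y)^p)^(1 / p)
}

minkowski(c(0, 0), c(3, 4), p = 1)    # taxicab: 7
minkowski(c(0, 0), c(3, 4), p = 2)    # Euclidean: 5
minkowski(c(0, 0), c(3, 4), p = Inf)  # Chebyshev: 4
```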
The Kantorovich or Wasserstein metric derives from the problem of optimal transport: What is the minimum cost of relocating one distribution to another? We restrict ourselves to persistence diagrams with finitely many off-diagonal point masses, though each diagram is taken to include every point on the diagonal. So the cost of relocating one diagram X to another Y amounts to (a) the cost of relocating some off-diagonal points to other off-diagonal points plus (b) the cost of relocating the remaining off-diagonal points to the diagonal, and vice-versa.
Because the diagonal points are dense, this cost depends entirely on how the off-diagonal points of both diagrams are matched—either to each other or to the diagonal, with each point matched exactly once. For this purpose, define a matching to be any bijective map φ : X → Y, though in practice we assume that almost all diagonal points are matched to themselves and incur no cost.
The cost D(x, φ(x)) of relocating a point x to its matched point φ(x) is typically taken to be a Minkowski distance Dq(x, φ(x)) = ‖x − φ(x)‖q, defined by the Lq norm on ℝ2. (While simple, this geometric treatment elides that the points in the plane encode the collection of interval modules into which the persistence module decomposes. Other metrics have been proposed for this space, but we restrict to this family here.)
The total cost of the relocation is canonically taken to be the Minkowski norm $\left(\sum_{x \in X} D_q(x, \varphi(x))^p\right)^{1/p}$ of the vector of matched-point distances. The Wasserstein distance is defined to be the infimum of this value over all possible matchings. This yields the formulae
$$ W_p^q(X, Y) = \inf_{\varphi : X \to Y} \left(\sum_{x \in X} \lVert x - \varphi(x) \rVert_q^p\right)^{1/p} $$
for p < ∞ and
$$ W_\infty^q(X, Y) = \inf_{\varphi : X \to Y} \max_{x \in X} \lVert x - \varphi(x) \rVert_q $$
for p = ∞.
See Cohen-Steiner et al. (2010) and Bubenik, Scott, and Stanley (2023) for detailed treatments and stability results on these families of metrics.
The following persistence diagrams provide a tractable example:
$$ X = \left[ \begin{array}{cc} 1 & 3 \\ 3 & 5 \end{array} \right], \phantom{X = Y} Y = \left[ \begin{array}{cc} 3 & 4 \end{array} \right]. $$
For convenience in the code, we omit dimensionality and focus only on the matrix representations.
X <- rbind(
  c(1, 3),
  c(3, 5)
)
Y <- rbind(
  c(3, 4)
)
We overlay both diagrams in Figure 1. Note that the vector between the off-diagonal points (1, 3) of X and (3, 4) of Y is (2, 1), while the vector from (1, 3) to its nearest diagonal point (2, 2) is (1, −1). That one coordinate is the same size while the other is smaller implies that an optimal matching will always match (1, 3) with the diagonal, whatever the choices of p and q. A similar argument necessitates that (3, 4) of Y must match with (3, 5) of X.
oldpar <- par(mar = c(4, 4, 1, 1) + .1)
plot(
  NA_real_,
  xlim = c(0, 6), ylim = c(0, 6), asp = 1, xlab = "birth", ylab = "death"
)
abline(a = 0, b = 1)
points(X, pch = 1)
points(Y, pch = 5)
segments(X[, 1], X[, 2], c(2, Y[, 1]), c(2, Y[, 2]), lty = 2)
par(oldpar)
Based on these observations, we get this expression for the Wasserstein distance using the q-norm half-plane metric and the p-norm “matched space” metric:
$$ W_p^q(X, Y) = \left( \lVert a \rVert_q^p + \lVert b \rVert_q^p \right)^{1/p}, $$
where a = (1, −1) and b = (0, −1) are the vectors between matched points. We can now calculate Wasserstein distances “by hand”; we’ll consider those using the half-plane Minkowski metrics with q = 1, 2, ∞ and the “matched space” metrics with p = 1, 2, ∞.
First, with q = 1, we get $\lVert a \rVert_1 = 1 + 1 = 2$ and $\lVert b \rVert_1 = 0 + 1 = 1$. So the (1, p)-Wasserstein distance will be the p-Minkowski norm of the vector (2, 1), given by $W_p^1(X, Y) = (2^p + 1^p)^{1/p}$. This nets us the values $W_1^1(X, Y) = 3$ and $W_2^1(X,Y) = \sqrt{5}$. And then $W_\infty^1(X, Y) = \max(2, 1) = 2$. The reader is invited to complete the rest of Table 1.
| Metric | ‖a‖ | ‖b‖ | W1 | W2 | W∞ |
|---|---|---|---|---|---|
| L1 | 2 | 1 | 3 | $\sqrt{5}$ | 2 |
| L2 | $\sqrt{2}$ | 1 | $1+\sqrt{2}$ | $\sqrt{3}$ | $\sqrt{2}$ |
| L∞ | 1 | 1 | 2 | $\sqrt{2}$ | 1 |
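The q = 1 row (and, by the same recipe, the others) can be checked numerically from the matched vectors a and b. This snippet is a by-hand check in base R, not a {phutil} call:

```r
# matched vectors from the optimal matching identified above
a <- c(1, -1)  # (1, 3) of X to its nearest diagonal point (2, 2)
b <- c(0, -1)  # (3, 5) of X to (3, 4) of Y
# q = 1 norms of the matched vectors
norms <- c(sum(abs(a)), sum(abs(b)))  # 2 and 1
# p-Minkowski norms of the vector of q-norms
c(W1 = sum(norms), W2 = sqrt(sum(norms^2)), Winf = max(norms))
# W1 = 3, W2 = sqrt(5), Winf = 2
```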
The results make intuitive sense; for example, the values change monotonically along each row and column. Let us now validate the bottom row, which uses the L∞ distance on the half-plane and whose p = ∞ entry is the popular bottleneck distance, using both Hera, as exposed through {phutil}, and Dionysus, as exposed through {TDA}:
wasserstein_distance(X, Y, p = 1)
#> [1] 2
wasserstein_distance(X, Y, p = 2)
#> [1] 1.414214
bottleneck_distance(X, Y)
#> [1] 1
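For diagrams this small, the distances can also be cross-checked by brute force. The helper below is an illustrative sketch, not part of {phutil} (production implementations such as Hera use auction or flow algorithms rather than enumerating matchings): it pads each diagram with diagonal slots for the other's points and minimizes the matching cost over all permutations.

```r
# the example diagrams from above
X <- rbind(c(1, 3), c(3, 5))
Y <- rbind(c(3, 4))

# brute-force (p, q)-Wasserstein distance between tiny persistence diagrams
wasserstein_brute <- function(X, Y, p = 1, q = Inf) {
  norm_q <- function(v) if (is.infinite(q)) max(abs(v)) else sum(abs(v)^q)^(1 / q)
  # q-norm distance from a point (b, d) to the diagonal {(t, t)}
  diag_q <- function(pt) {
    if (is.infinite(q)) (pt[2] - pt[1]) / 2 else (pt[2] - pt[1]) / 2^(1 - 1 / q)
  }
  m <- nrow(X); n <- nrow(Y); N <- m + n
  # cost matrix: real-to-real, real-to-diagonal, diagonal-to-diagonal (free)
  cost <- matrix(0, N, N)
  for (i in seq_len(N)) for (j in seq_len(N)) {
    cost[i, j] <- if (i <= m && j <= n) {
      norm_q(X[i, ] - Y[j, ])
    } else if (i <= m) {
      diag_q(X[i, ])
    } else if (j <= n) {
      diag_q(Y[j, ])
    } else {
      0
    }
  }
  # enumerate all matchings (permutations) and keep the cheapest
  perms <- function(v) {
    if (length(v) <= 1) return(list(v))
    do.call(c, lapply(seq_along(v), function(k) {
      lapply(perms(v[-k]), function(rest) c(v[k], rest))
    }))
  }
  best <- Inf
  for (s in perms(seq_len(N))) {
    d <- vapply(seq_len(N), function(i) cost[i, s[i]], numeric(1))
    best <- min(best, if (is.infinite(p)) max(d) else sum(d^p)^(1 / p))
  }
  best
}

wasserstein_brute(X, Y, p = 1)    # 2
wasserstein_brute(X, Y, p = 2)    # sqrt(2)
wasserstein_brute(X, Y, p = Inf)  # 1 (bottleneck)
```

The values agree with the L∞ row of Table 1, since q defaults to ∞ here.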
In order to compute distances with {TDA}, we must restructure the PDs to include a "dimension" column. Note also that TDA::wasserstein() does not take the 1/pth power after computing the sum of pth powers; we do this manually to get comparable results:
TDA::wasserstein(cbind(0, X), cbind(0, Y), p = 1, dimension = 0)
#> [1] 2
sqrt(TDA::wasserstein(cbind(0, X), cbind(0, Y), p = 2, dimension = 0))
#> [1] 1.414214
TDA::bottleneck(cbind(0, X), cbind(0, Y), dimension = 0)
#> [1] 1
An important edge case is when one persistence diagram is trivial, i.e. contains only the diagonal and so is “empty” of off-diagonal points. This can occur unexpectedly in comparisons of persistence data, as the data may be large yet have higher-degree features that are present in one set but absent in the other. To validate the distances in this case, we create an empty diagram E and use the same code to compare it to X. The point (3, 5) of X will be matched to the diagonal point (4, 4), which yields the same ∞-distance 1, so the L∞ Wasserstein distances will be the same as before.
# empty PD
E <- matrix(NA_real_, nrow = 0, ncol = 2)
# with dimension column
E_ <- cbind(matrix(NA_real_, nrow = 0, ncol = 1), E)
# distance from empty using phutil/Hera
wasserstein_distance(E, X, p = 1)
#> [1] 2
wasserstein_distance(E, X, p = 2)
#> [1] 1.414214
bottleneck_distance(E, X)
#> [1] 1
# distance from empty using TDA/Dionysus
TDA::wasserstein(E_, cbind(0, X), p = 1, dimension = 0)
#> [1] 2
sqrt(TDA::wasserstein(E_, cbind(0, X), p = 2, dimension = 0))
#> [1] 1.414214
TDA::bottleneck(E_, cbind(0, X), dimension = 0)
#> [1] 1
For a straightforward benchmark test, we compute PDs from point clouds sampled with noise from two one-dimensional manifolds embedded in ℝ3: the circle as a trefoil knot and the segment as a two-armed Archimedean spiral. To prevent the results from being sensitive to the accident of a single sample, we generate lists of 24 samples and benchmark only one iteration of each function on each.
set.seed(28415)
n <- 24
PDs1 <- lapply(seq(n), function(i) {
  S1 <- tdaunif::sample_trefoil(n = 120, sd = .05)
  as_persistence(TDA::ripsDiag(S1, maxdimension = 2, maxscale = 6))
})
PDs2 <- lapply(seq(n), function(i) {
  S2 <- cbind(tdaunif::sample_arch_spiral(n = 120, arms = 2), 0)
  S2 <- tdaunif::add_noise(S2, sd = .05)
  as_persistence(TDA::ripsDiag(S2, maxdimension = 2, maxscale = 6))
})
Both implementations are used to compute distances between successive pairs of diagrams. The computations are annotated by homological degree and Wasserstein power so that these results can be compared separately.
PDs1_ <- lapply(lapply(PDs1, as.data.frame), as.matrix)
PDs2_ <- lapply(lapply(PDs2, as.data.frame), as.matrix)
# iterate over homological degrees and Wasserstein powers
bm_all <- list()
PDs_i <- seq_along(PDs1)
for (dimension in seq(0, 2)) {
  # compute
  bm_1 <- do.call(rbind, lapply(seq_along(PDs1), function(i) {
    as.data.frame(microbenchmark::microbenchmark(
      TDA = TDA::wasserstein(
        PDs1_[[i]], PDs2_[[i]],
        dimension = dimension, p = 1
      ),
      phutil = wasserstein_distance(
        PDs1[[i]], PDs2[[i]],
        dimension = dimension, p = 1
      ),
      times = 1, unit = "ns"
    ))
  }))
  bm_2 <- do.call(rbind, lapply(seq_along(PDs1), function(i) {
    as.data.frame(microbenchmark::microbenchmark(
      TDA = sqrt(TDA::wasserstein(
        PDs1_[[i]], PDs2_[[i]],
        dimension = dimension, p = 2
      )),
      phutil = wasserstein_distance(
        PDs1[[i]], PDs2[[i]],
        dimension = dimension, p = 2
      ),
      times = 1, unit = "ns"
    ))
  }))
  bm_inf <- do.call(rbind, lapply(seq_along(PDs1), function(i) {
    as.data.frame(microbenchmark::microbenchmark(
      TDA = TDA::bottleneck(
        PDs1_[[i]], PDs2_[[i]],
        dimension = dimension
      ),
      phutil = bottleneck_distance(
        PDs1[[i]], PDs2[[i]],
        dimension = dimension
      ),
      times = 1, unit = "ns"
    ))
  }))
  # annotate and combine
  bm_1$power <- 1; bm_2$power <- 2; bm_inf$power <- Inf
  bm_res <- rbind(bm_1, bm_2, bm_inf)
  bm_res$degree <- dimension
  bm_all <- c(bm_all, list(bm_res))
}
bm_all <- do.call(rbind, bm_all)
Figure 2 compares the distributions of runtimes by homological degree (column) and Wasserstein power (row). We record times in nanoseconds with {microbenchmark} to avoid potential integer overflow, so we convert the results to seconds before formatting the axis.
bm_all <- transform(bm_all, expr = as.character(expr), time = unlist(time))
bm_all <- subset(bm_all, select = c(expr, degree, power, time))
ggplot(bm_all, aes(x = time * 1e-9, y = expr)) +
  facet_grid(
    rows = vars(power), cols = vars(degree),
    labeller = label_both
  ) +
  geom_violin() +
  scale_x_continuous(
    transform = "log10",
    labels = scales::label_timespan(units = "secs")
  ) +
  labs(x = NULL, y = NULL)
We note that Dionysus via {TDA} clearly outperforms Hera via {phutil} on degree-1 PDs, which in these cases have many fewer features. However, the tables are turned in degree 0, in which the PDs have many more features—which, when present, dominate the total computational cost. (The implementations are more evenly matched on the degree-2 PDs, which may have to do with many of them being empty.) While by no means exhaustive and not necessarily representative, these results suggest that Hera via {phutil} scales more efficiently than Dionysus via {TDA} and should therefore be preferred for projects involving more feature-rich data sets.