aid (string) | mid (string) | abstract (string) | related_work (string) | ref_abstract (dict) | title (string) | text_except_rw (string) | total_words (int64)
---|---|---|---|---|---|---|---|
1001.0279
|
2952610888
|
We consider the problem of reconstructing a low rank matrix from noisy observations of a subset of its entries. This task has applications in statistical learning, computer vision, and signal processing. In these contexts, "noise" generically refers to any contribution to the data that is not captured by the low-rank model. In most applications, the noise level is large compared to the underlying signal and it is important to avoid overfitting. In order to tackle this problem, we define a regularized cost function well suited for spectral reconstruction methods. Within a random noise model, and in the large system limit, we prove that the resulting accuracy undergoes a phase transition depending on the noise level and on the fraction of observed entries. The cost function can be minimized using OPTSPACE (a manifold gradient descent algorithm). Numerical simulations show that this approach is competitive with state-of-the-art alternatives.
|
The importance of regularization in matrix completion is well known to practitioners. For instance, one important component of many algorithms competing for the Netflix challenge @cite_2 , consisted in minimizing the cost function @math (this is also known as @cite_0 @cite_9 ). Here the minimization variables are @math , @math . Unlike in , these matrices are not constrained to be orthogonal, and as a consequence the problem becomes significantly more degenerate. Notice that, in our approach, the orthogonality constraint fixes the norms @math , @math . This motivates the use of @math as a regularization term.
|
{
"abstract": [
"We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.",
"Maximum Margin Matrix Factorization (MMMF) was recently suggested (, 2005) as a convex, infinite dimensional alternative to low-rank approximations and standard factor models. MMMF can be formulated as a semi-definite programming (SDP) and learned using standard SDP solvers. However, current SDP solvers can only handle MMMF problems on matrices of dimensionality up to a few hundred. Here, we investigate a direct gradient-based optimization method for MMMF and demonstrate it on large collaborative prediction problems. We compare against results obtained by Marlin (2004) and find that MMMF substantially outperforms all nine methods he tested.",
""
],
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_2"
],
"mid": [
"2122090912",
"1976618413",
""
]
}
|
Regularization for Matrix Completion
|
Let N be an m × n matrix which is 'approximately' low rank, that is
$$N = M + W = U\Sigma V^T + W. \qquad (1)$$
where $U$ has dimensions $m \times r$, $V$ has dimensions $n \times r$, and $\Sigma$ is a diagonal $r \times r$ matrix. Thus $M$ has rank $r$ and $W$ can be thought of as noise, or 'unexplained contributions' to $N$. Throughout the paper we assume the normalization $U^T U = m\, I_{r\times r}$ and $V^T V = n\, I_{r\times r}$ ($I_{d\times d}$ being the $d \times d$ identity). Out of the $m \times n$ entries of $N$, a subset $E \subseteq [m]\times[n]$ is observed. We let $P_E(N)$ be the $m \times n$ matrix that contains the observed entries of $N$, and is filled with 0's in the other positions:
$$P_E(N)_{ij} = \begin{cases} N_{ij} & \text{if } (i,j)\in E,\\ 0 & \text{otherwise.}\end{cases} \qquad (2)$$
The noisy matrix completion problem requires reconstructing the low rank matrix M from the observations P_E(N). In the following we will also write N^E = P_E(N) for the sparsified matrix. Over the last year, matrix completion has attracted significant attention because of its relevance, among other applications, to collaborative filtering. In this case, the matrix N contains evaluations of a group of customers on a group of products, and one is interested in exploiting a sparsely filled matrix to provide personalized recommendations [1]. In such applications, the noise W is not a small perturbation and it is crucial to avoid overfitting. For instance, in the limit M → 0, the estimate of M risks being a low-rank approximation of the noise W, which would be grossly incorrect.
In order to overcome this problem, we propose in this paper an algorithm based on minimizing the following cost function
$$F_E(X, Y; S) \equiv \frac{1}{2}\,\|P_E(N - XSY^T)\|_F^2 + \frac{1}{2}\,\lambda\,\|S\|_F^2. \qquad (3)$$
Here the minimization variables are $S \in \mathbb{R}^{r\times r}$, $X \in \mathbb{R}^{m\times r}$, and $Y \in \mathbb{R}^{n\times r}$, with $X^T X = Y^T Y = I_{r\times r}$. Finally, $\lambda > 0$ is a regularization parameter.
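To make the cost function concrete, here is a minimal NumPy sketch (entirely our own; all variable names and parameter values are illustrative) of the observation model and of evaluating the regularized cost (3):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, p, lam = 60, 40, 3, 0.5, 0.1

# Noisy low-rank observation model: N = M + W, observed on a random set E.
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
N = M + 0.5 * rng.standard_normal((m, n))
mask = rng.random((m, n)) < p                 # the observed set E; P_E(A) is `mask * A`

def cost_F_E(X, S, Y, N, mask, lam):
    """Regularized cost (3): 0.5*||P_E(N - X S Y^T)||_F^2 + 0.5*lam*||S||_F^2."""
    residual = mask * (N - X @ S @ Y.T)
    return 0.5 * np.sum(residual ** 2) + 0.5 * lam * np.sum(S ** 2)
```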
A. Algorithm and main results
The algorithm is an adaptation of the OPTSPACE algorithm developed in [2]. A key observation is that the following modified cost function can be minimized by singular value decomposition (see Section I.1):
$$\widetilde F_E(X, Y; S) \equiv \frac{1}{2}\,\|P_E(N) - XSY^T\|_F^2 + \frac{1}{2}\,\lambda\,\|S\|_F^2. \qquad (4)$$
As emphasized in [2], [3], which analyzed the case λ = 0, this minimization can yield poor results unless the set of observations E is 'well balanced'. This problem can be bypassed by 'trimming' the set E, thus constructing a balanced set Ẽ. The OPTSPACE algorithm is given as follows.
OPTSPACE( set E, matrix N^E )
1: Trim E, and let Ẽ be the output;
2: Minimize F̃_Ẽ(X, Y; S) via SVD; let X_0, Y_0, S_0 be the output;
3: Minimize F_Ẽ(X, Y; S) by gradient descent, using X_0, Y_0, S_0 as the initial condition.
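A minimal sketch (ours) of step 2 for the case studied below, in which trimming is skipped so that Ẽ = E. The closed form S = X^T N^E Y / (1 + λ) used here follows from first-order optimality of the modified cost (4); it is our own derivation, included only for illustration:

```python
import numpy as np

def optspace_step2(N_E, r, lam):
    """Spectral step (step 2): minimize the modified cost (4) by truncated SVD.

    For fixed orthonormal X, Y, first-order optimality of (4) in S gives
    S = X^T N_E Y / (1 + lam); X and Y are then the top-r singular vectors
    of N_E.  (This closed form is our own derivation, not quoted from the paper.)
    """
    U, s, Vt = np.linalg.svd(N_E, full_matrices=False)
    X0, Y0 = U[:, :r], Vt[:r, :].T
    S0 = np.diag(s[:r]) / (1.0 + lam)
    return X0, S0, Y0

# Step 3 would refine (X0, Y0, S0) by gradient descent on the cost (3),
# e.g. with a manifold-aware update for X and Y; we omit that here.
```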
In this paper we will study this algorithm under a model for which step 1 (trimming) is never called, i.e. Ẽ = E with high probability. We will therefore not discuss it any further. Section II compares the behavior of the present approach with alternative schemes. Our main analytical result is a sharp characterization of the mean square error after step 2. Here and below the limit n → ∞ is understood to be taken with m/n → α ∈ (0, ∞).
Theorem I.1. Assume that the entries of W are i.i.d. with zero mean and variance σ²√(mn), and that, for each (i, j), N_ij is observed (i.e. (i, j) ∈ E) independently with probability p. Finally, let M̂ = X_0 S_0 Y_0^T be the rank-r matrix reconstructed by step 2 of OPTSPACE, for the optimal choice of λ. Then, almost surely for n → ∞,
$$\frac{\|\hat M - M\|_F^2}{\|M\|_F^2} = 1 - \frac{\left[\sum_{k=1}^{r}\Sigma_k^2\left(1-\frac{\sigma^4}{p^2\Sigma_k^4}\right)_+\right]^2}{\|\Sigma\|_F^2\;\sum_{k=1}^{r}\Sigma_k^2\left(1+\frac{\sqrt{\alpha}\,\sigma^2}{p\Sigma_k^2}\right)\left(1+\frac{\sigma^2}{p\Sigma_k^2\sqrt{\alpha}}\right)} + o_n(1)\,.$$
This theorem focuses on a high-noise regime, and predicts a sharp phase transition: if σ²/p < Σ₁², we can successfully extract information on M from the observations N^E. If, on the other hand, σ²/p ≥ Σ₁², the observations are essentially useless for reconstructing M. It is possible to prove [4] that the resulting tradeoff between noise and observed entries is tight: no algorithm can obtain relative mean square error smaller than one for σ²/p ≥ Σ₁², under a simple random model for M. To the best of our knowledge, this is the first sharp phase transition result for low rank matrix completion.
For the proof of Theorem I.1, we refer to Section III. An important byproduct of the proof is that it provides a rule for choosing the regularization parameter λ, in the large system limit.
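As a quick illustration of this phase transition, the following Monte Carlo sketch (entirely ours, not an experiment from the paper) estimates the relative error of the unregularized rank-one spectral estimate below and above the threshold σ²/p = Σ₁²; above the threshold the unregularized estimate mostly fits noise, so its error can exceed one rather than saturate at one:

```python
import numpy as np

def relative_error(Sigma1, sigma, p, n=400, alpha=1.0, seed=0):
    """Relative MSE of the rank-1 spectral estimate (step 2 with lam = 0)."""
    rng = np.random.default_rng(seed)
    m = int(alpha * n)
    u = rng.standard_normal(m); u *= np.sqrt(m) / np.linalg.norm(u)   # U^T U = m
    v = rng.standard_normal(n); v *= np.sqrt(n) / np.linalg.norm(v)   # V^T V = n
    M = Sigma1 * np.outer(u, v)
    W = sigma * (m * n) ** 0.25 * rng.standard_normal((m, n))         # Var(W_ij) = sigma^2*sqrt(mn)
    mask = rng.random((m, n)) < p
    N_E = mask * (M + W)
    U1, s, Vt = np.linalg.svd(N_E, full_matrices=False)
    M_hat = s[0] * np.outer(U1[:, 0], Vt[0, :])
    return np.sum((M_hat - M) ** 2) / np.sum(M ** 2)

print(relative_error(Sigma1=1.0, sigma=0.4, p=0.5))  # sigma^2/p < Sigma_1^2: error well below 1
print(relative_error(Sigma1=1.0, sigma=1.2, p=0.5))  # sigma^2/p > Sigma_1^2: no useful signal recovered
```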
II. NUMERICAL SIMULATIONS
In this section, we present the results of numerical simulations on synthetically generated matrices. The data are generated following the recipe of [9]: sample U ∈ R^{m×r} and V ∈ R^{n×r} by choosing U_ij and V_ij independently and identically distributed as N(0, 1).
Sample W ∈ R^{m×n} independently by choosing W_ij i.i.d. with distribution N(0, σ²√(mn)). Set N = UV^T + W.
We also use the parameters chosen in [9] and define
$$\mathrm{SNR} = \frac{\mathrm{Var}((UV^T)_{ij})}{\mathrm{Var}(W_{ij})}, \qquad \mathrm{TestError} = \frac{\|P_E^{\perp}(UV^T - \hat M)\|_F^2}{\|P_E^{\perp}(UV^T)\|_F^2}, \qquad \mathrm{TrainError} = \frac{\|P_E(N - \hat M)\|_F^2}{\|P_E(N)\|_F^2},$$
where $P_E^{\perp}(A) \equiv A - P_E(A)$ and $\hat M$ denotes the estimate returned by the algorithm. In Figure 1, we plot the train error and test error for the OPTSPACE algorithm on matrices generated as above with n = 100, r = 10, SNR = 1 and p = 0.5. For comparison, we also plot the corresponding curves for SOFT-IMPUTE, HARD-IMPUTE and SVT taken from [9]. In Figures 2 and 3, we plot the same curves for different values of r, ε, and SNR. In these plots, OPTSPACE(λ) corresponds to the algorithm that minimizes the cost (3). In particular, OPTSPACE(0) corresponds to the algorithm described in [2]. Further, λ* = λ*(ρ) is the value of the regularization parameter that minimizes the test error while using rank ρ (this can be estimated on a subset of the data, not used for training). It is clear that regularization greatly improves the performance of OPTSPACE and makes it competitive with the best alternative methods.
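A compact sketch (our own code, not the authors'; the noise is parameterized directly through the target SNR) of this data-generation recipe and of the error metrics defined above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r, p, SNR_target = 100, 100, 10, 1.0, 0.5
p, SNR_target = 0.5, 1.0

U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
sigma2 = r / SNR_target                      # Var((UV^T)_ij) = r, so SNR = r / Var(W_ij)
N = U @ V.T + np.sqrt(sigma2) * rng.standard_normal((m, n))
mask = rng.random((m, n)) < p                # observed set E; P_E^perp keeps the complement

def errors(M_hat):
    """Test/train error as defined above, for a candidate reconstruction M_hat."""
    test = np.sum((~mask) * (U @ V.T - M_hat) ** 2) / np.sum((~mask) * (U @ V.T) ** 2)
    train = np.sum(mask * (N - M_hat) ** 2) / np.sum(mask * N ** 2)
    return test, train
```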
III. PROOF OF THEOREM I.1
The proof of Theorem I.1 is based on the following three steps: (i) Obtain an explicit expression for the root mean square error in terms of right and left singular vectors of N ; (ii) Estimate the effect of the noise W on the right and left singular vectors; (iii) Estimate the effect of missing entries.
Step (ii) builds on recent estimates on the eigenvectors of large covariance matrices [12]. In step (iii) we use the results of [2].
Step (i) is based on the following linear algebra calculation, whose proof we omit due to space constraints (here and below $\langle A, B\rangle \equiv \mathrm{Tr}(AB^T)$).

Proposition III.1. Let $X_0 \in \mathbb{R}^{m\times r}$ and $Y_0 \in \mathbb{R}^{n\times r}$ be the matrices whose columns are the first $r$ left and right singular vectors of $N^E$. Then the rank-$r$ matrix reconstructed by step 2 of OPTSPACE, with regularization parameter $\lambda$, has the form $\hat M(\lambda) = X_0 S_0(\lambda) Y_0^T$. Further, there exists $\lambda^* > 0$ such that
$$\frac{1}{mn}\,\|M - \hat M(\lambda^*)\|_F^2 = \|\Sigma\|_F^2 - \left(\frac{\langle X_0^T M Y_0,\; X_0^T N^E Y_0\rangle}{\sqrt{mn}\,\|X_0^T N^E Y_0\|_F}\right)^2. \qquad (5)$$
A. The effect of noise
In order to isolate the effect of the noise, we consider the matrix Ñ = p UΣV^T + W^E, where W^E = P_E(W). Throughout this section we assume that the hypotheses of Theorem I.1 hold.
Lemma III.2. Let $(nz_{1,n}, \ldots, nz_{r,n})$ be the $r$ largest singular values of $\widetilde N$. Then, as $n \to \infty$, $z_{i,n} \to z_i$ almost surely, where, for $\Sigma_i^2 > \sigma^2/p$,
$$z_i = p\,\Sigma_i\left[\alpha\left(\frac{\sigma^2}{p\Sigma_i^2} + \frac{1}{\sqrt{\alpha}}\right)\left(\frac{\sigma^2}{p\Sigma_i^2} + \sqrt{\alpha}\right)\right]^{1/2}, \qquad (6)$$
and $z_i = \sigma\sqrt{p}\,\alpha^{1/4}(1+\sqrt{\alpha})$ for $\Sigma_i^2 \le \sigma^2/p$. Further, let $X \in \mathbb{R}^{m\times r}$ and $Y \in \mathbb{R}^{n\times r}$ be the matrices whose columns are the first $r$ left and right singular vectors of $\widetilde N$. Then there exists a sequence of orthogonal matrices $Q_n \in \mathbb{R}^{r\times r}$ such that
$$\left\|\frac{1}{\sqrt{m}}\,U^T X - A Q_n\right\|_F \to 0, \qquad \left\|\frac{1}{\sqrt{n}}\,V^T Y - B Q_n\right\|_F \to 0,$$
with $A = \mathrm{diag}(a_1, \ldots, a_r)$, $B = \mathrm{diag}(b_1, \ldots, b_r)$, and
$$a_i^2 = \left(1 - \frac{\sigma^4}{p^2\Sigma_i^4}\right)\left(1 + \frac{\sqrt{\alpha}\,\sigma^2}{p\Sigma_i^2}\right)^{-1}, \qquad b_i^2 = \left(1 - \frac{\sigma^4}{p^2\Sigma_i^4}\right)\left(1 + \frac{\sigma^2}{p\sqrt{\alpha}\,\Sigma_i^2}\right)^{-1}, \qquad (7)$$
for $\Sigma_i^2 > \sigma^2/p$, while $a_i = b_i = 0$ otherwise.

Proof: Due to space limitations, we will focus here on the case $\Sigma_1^2, \ldots, \Sigma_r^2 > \sigma^2/p$. The general proof proceeds along the same lines, and we defer it to [4].
Notice that W E is an m × n matrix with i.i.d. entries with variance √ mnσ 2 p and fourth moment bounded by Cn 2 . It is therefore sufficient to prove our claim for p = 1 and then rescale Σ by p and σ by √ p. We will also assume that, without loss of generality, m ≥ n.
Let Ẑ be the r×r diagonal matrix containing the values (nz_{1,n}, . . . , nz_{r,n}). The eigenvalue equations read
$$U\hat\beta_y + W Y - X\hat Z = 0, \qquad (8)$$
$$V\hat\beta_x + W^T X - Y\hat Z = 0, \qquad (9)$$
where we defined $\hat\beta_x \equiv \Sigma\, U^T X$ and $\hat\beta_y \equiv \Sigma\, V^T Y \in \mathbb{R}^{r\times r}$. By singular value decomposition we can write $W = L\,\mathrm{diag}(w_1, w_2, \ldots, w_n)\,R^T$, with $L^T L = I_{m\times m}$, $R^T R = I_{n\times n}$. Let $u_i^T, x_i^T, v_i^T, y_i^T \in \mathbb{R}^r$ be the $i$-th rows of, respectively, $L^T U$, $L^T X$, $R^T V$, $R^T Y$.
In this basis equations (8) and (9) read
$$u_i^T\hat\beta_y + w_i y_i^T - x_i^T\hat Z = 0, \;\; i \in [n], \qquad
u_i^T\hat\beta_y - x_i^T\hat Z = 0, \;\; i \in [m]\setminus[n], \qquad
v_i^T\hat\beta_x + w_i x_i^T - y_i^T\hat Z = 0, \;\; i \in [n].$$
These can be solved to get
$$x_i^T = (u_i^T\hat\beta_y\hat Z + w_i v_i^T\hat\beta_x)(\hat Z^2 - w_i^2)^{-1}, \; i\in[n], \qquad
x_i^T = u_i^T\hat\beta_y\hat Z^{-1}, \; i\in[m]\setminus[n], \qquad
y_i^T = (v_i^T\hat\beta_x\hat Z + w_i u_i^T\hat\beta_y)(\hat Z^2 - w_i^2)^{-1}, \; i\in[n]. \qquad (10)$$
By definition $\Sigma^{-1}\hat\beta_x = \sum_{i=1}^m u_i x_i^T$ and $\Sigma^{-1}\hat\beta_y = \sum_{i=1}^n v_i y_i^T$, whence
$$\Sigma^{-1}\hat\beta_x = \sum_{i=1}^n u_i(u_i^T\hat\beta_y\hat Z + w_i v_i^T\hat\beta_x)(\hat Z^2 - w_i^2)^{-1} + \sum_{i=n+1}^m u_i u_i^T\hat\beta_y\hat Z^{-1}, \qquad (11)$$
$$\Sigma^{-1}\hat\beta_y = \sum_{i=1}^n v_i(v_i^T\hat\beta_x\hat Z + w_i u_i^T\hat\beta_y)(\hat Z^2 - w_i^2)^{-1}. \qquad (12)$$
Let $\lambda_i = w_i^2\alpha^{1/2}/(m^2\sigma^2)$.
Then, it is a well-known fact [13] that, as n → ∞, the empirical law of the λ_i's converges weakly almost surely to the Marchenko-Pastur law, with density
$$\rho(\lambda) = \frac{\alpha\sqrt{(\lambda - c_-^2)(c_+^2 - \lambda)}}{2\pi\lambda}, \qquad c_\pm = 1 \pm \alpha^{-1/2}.$$
Let $\beta_x = \hat\beta_x/\sqrt{m}$, $\beta_y = \hat\beta_y/\sqrt{n}$, $Z = \hat Z/n$. A priori, it is not clear that the sequence $(\beta_x, \beta_y, Z)$, which depends on $n$, converges. However, it is immediate to show that the sequence is tight, and hence we can restrict ourselves to a subsequence $\Xi \equiv \{n_i\}_{i\in\mathbb{N}}$ along which a limit exists. Eventually we will show that the limit does not depend on the subsequence, apart, possibly, from the rotation $Q_n$. Hence we shall denote the subsequential limit, by an abuse of notation, as $(\beta_x, \beta_y, Z)$.
Consider now such a convergent subsequence. It is possible to show that Σ_i² > σ²/p implies Z_{ii}² > α^{3/2}σ²c_+(α)² + δ for some positive δ. Since, almost surely as n → ∞, w_i² < α^{3/2}σ²c_+(α)² + δ/2 for all i, for all purposes the summands on the rhs of Eqs. (11), (12) can be replaced by uniformly continuous, bounded functions of the limiting eigenvalues λ_i. Further, each entry of u_i (resp. v_i) is just a single coordinate of the left (right) singular vectors of the random matrix W. Using Theorem 1 in [12], it follows that any subsequential limit satisfies the equations
$$\beta_x = \Sigma\,\beta_y\, Z\!\int\!\big(Z^2 - \alpha^{3/2}\sigma^2\lambda\big)^{-1}\rho(\lambda)\,d\lambda \;+\; (\alpha-1)\,Z^{-1}, \qquad (13)$$
$$\beta_y = \Sigma\,\beta_x\, Z\!\int\!\big(Z^2 - \alpha^{3/2}\sigma^2\lambda\big)^{-1}\rho(\lambda)\,d\lambda. \qquad (14)$$
Solving for β y , we get an equation of the form
$$\Sigma^{-2}\beta_y = \beta_y\, f(Z), \qquad (15)$$
where $f(\cdot)$ is a function that can be given explicitly using the Stieltjes transform of the measure $\rho(\lambda)\,d\lambda$. Equation (15) implies that $\beta_y$ is block diagonal according to the degeneracy pattern of $\Sigma$. Considering each block, either $\beta_y$ vanishes in the block (a case that can be excluded using $\Sigma_i^2 > \sigma^2/p$) or $\Sigma_i^{-2} = f(Z_{ii})$ in the block. Solving for $Z_{ii}$ shows that the eigenvalues are uniquely determined (independent of the subsequence) and given by Eq. (6).
In order to determine $\beta_x$ and $\beta_y$, first observe that, since $I_{r\times r} = Y^T Y = \sum_{i=1}^n y_i y_i^T$, we have, using Eq. (10),
$$I_{r\times r} = \sum_{i=1}^n (\hat Z^2 - w_i^2)^{-1}\big(\hat Z\hat\beta_x^T v_i + w_i\hat\beta_y^T u_i\big)\big(v_i^T\hat\beta_x\hat Z + w_i u_i^T\hat\beta_y\big)(\hat Z^2 - w_i^2)^{-1}.$$
In the limit $n \to \infty$, and assuming a convergent subsequence for $(Z, \beta_x, \beta_y)$, this sum can be computed as above. After normalizing, we get
$$I_{r\times r} = \left[\int\frac{Z^2}{(Z^2 - \alpha^{3/2}\sigma^2\lambda)^2}\,\rho(\lambda)\,d\lambda\right] C_x \;+\; \left[\int\frac{\alpha^{3/2}\sigma^2\lambda}{(Z^2 - \alpha^{3/2}\sigma^2\lambda)^2}\,\rho(\lambda)\,d\lambda\right] C_y,$$
where $C_x = \beta_x^T\beta_x$, $C_y = \beta_y^T\beta_y$, and the functions of $Z$ on the rhs are defined as standard analytic functions of matrices. Using Eqs. (13), (14) and solving the above, we get $C_x = \mathrm{diag}(\Sigma_1^2 a_1^2, \ldots, \Sigma_r^2 a_r^2)$ and $C_y = \mathrm{diag}(\Sigma_1^2 b_1^2, \ldots, \Sigma_r^2 b_r^2)$. We already concluded that $\beta_x$ and $\beta_y$ are block diagonal, with blocks in correspondence with the degeneracy pattern of $\Sigma$. Since $\beta_x^T\beta_x = C_x$ and $\beta_y^T\beta_y = C_y$ are diagonal, with the same degeneracy pattern, it follows that, inside each block of size $d$, each of $\beta_x$ and $\beta_y$ is proportional to a $d\times d$ orthogonal matrix. Therefore $\beta_x = \Sigma A Q_s$ and $\beta_y = \Sigma B Q'_s$, for some orthogonal matrices $Q_s, Q'_s$. Also, using equation (13), one can prove that $Q_s = Q'_s$. Notice that, by the above argument, $A$ and $B$ are uniquely fixed by our construction. On the other hand, $Q_s$ might depend on the subsequence $\Xi$. Since our statement allows for a sequence of rotations $Q_n$ that depend on $n$, the eventual subsequence dependence of $Q_s$ can be factored out.
It is useful to point out a straightforward consequence of the above. Corollary III.3. There exists a sequence of orthogonal matrices Q n ∈ R r×r such that, almost surely,
$$\lim_{n\to\infty}\left\|\frac{1}{\sqrt{mn}}\,X^T U\Sigma V^T Y \;-\; Q_n D Q_n^T\right\|_F = 0, \qquad (16)$$
with $D = \mathrm{diag}(\Sigma_1 a_1 b_1, \ldots, \Sigma_r a_r b_r)$.
B. The effect of missing entries
The proof of Theorem I.1 is completed by establishing a relation between the singular vectors $X_0, Y_0$ of $N^E$ and the singular vectors $X$ and $Y$ of $\widetilde N$.

Lemma III.4. Let $k \le r$ be the largest integer such that $\Sigma_1^2 \ge \cdots \ge \Sigma_k^2 > \sigma^2/p$, and denote by $X_0^{(k)}$, $Y_0^{(k)}$, $X^{(k)}$, and $Y^{(k)}$ the matrices containing the first $k$ columns of $X_0$, $Y_0$, $X$, and $Y$, respectively. Let
$$X_0^{(k)} = X^{(k)} S_x + X_\perp^{(k)}, \qquad Y_0^{(k)} = Y^{(k)} S_y + Y_\perp^{(k)},$$
where $(X_\perp^{(k)})^T X^{(k)} = 0$, $(Y_\perp^{(k)})^T Y^{(k)} = 0$, and $S_x, S_y \in \mathbb{R}^{k\times k}$. Then there exists a numerical constant $C = C(\Sigma_i, \sigma^2, \alpha, M_{\max})$ such that
$$\|X_\perp^{(k)}\|_F^2,\; \|Y_\perp^{(k)}\|_F^2 \;\le\; \frac{C\,r}{n}, \qquad (17)$$
with probability approaching 1 as n → ∞.
Proof: We will prove our claim for the right singular vector Y , since the left case is completely analogous. Further we will drop the superscript k to lighten the notation.
We start by noticing that $\|N^E Y_0\|_F^2 = \sum_{a=1}^k (nz_{a,n})^2$, where $nz_{a,n}$ are the singular values of $N^E$. Using Lemma 3.2 in [2], which bounds $\|M^E - pM\|_2 = \|N^E - \widetilde N\|_2$, we get
$$\|N^E Y_0\|_F^2 \ge \sum_{a=1}^k \big(nz_{a,n} - CM_{\max}\sqrt{pn}\big)^2. \qquad (18)$$
On the other hand, $\|N^E Y_0\|_F \le \|\widetilde N Y_0\|_F + \|N^E - \widetilde N\|_2\,\|Y_0\|_F$. Further, by letting $S_y = L_y\Theta_y R_y^T$, for $L_y, R_y$ orthogonal matrices, we get $\|\widetilde N Y_0\|_F^2 = \|\widetilde N Y L_y\Theta_y\|_F^2 + \|\widetilde N Y_\perp\|_F^2$. Since $Y_0^T Y_0 = I_{k\times k}$, we have $I_{k\times k} = R_y\Theta_y^T\Theta_y R_y^T + Y_\perp^T Y_\perp$, and therefore
$$\|\widetilde N Y_0\|_F^2 = \|\widetilde N Y L_y\|_F^2 - \|\widetilde N Y L_y R_y^T Y_\perp^T\|_F^2 + \|\widetilde N Y_\perp\|_F^2
\le n^2\sum_{a=1}^k z_{a,n}^2 - n^2 z_{k,n}^2\,\|Y_\perp\|_F^2 + n^2\big(p\sigma^2\alpha^{3/2}c_+(\alpha)^2 + \delta\big)\|Y_\perp\|_F^2
= n^2\sum_{a=1}^k z_{a,n}^2 - n^2 e_y\,\|Y_\perp\|_F^2,$$
where $e_y \equiv z_{k,n}^2 - \big(p\sigma^2\alpha^{3/2}c_+(\alpha)^2 + \delta\big)$, and we used the inequality $\|\widetilde N Y_\perp\|_F^2 \le n^2\big(p\sigma^2\alpha^{3/2}c_+(\alpha)^2 + \delta\big)\|Y_\perp\|_F^2$, which holds for all $\delta > 0$ asymptotically almost surely as $n \to \infty$ (by an immediate generalization of Lemma III.2). It is simple to check that $\Sigma_k^2 > \sigma^2/p$ implies $e_y > 0$.
Using the triangle inequality together with Lemma 3.2 of [2] and the bounds above, the claim (17) follows. We now turn to upper bounding the right-hand side of Eq. (5). Let $k$ be defined as in the last lemma. Notice that, by Lemma III.2, $X^T(U\Sigma V^T)Y$ is well approximated by $(X^{(k)})^T(U\Sigma V^T)Y^{(k)}$. Analogously, it can be proved that $X_0^T(U\Sigma V^T)Y_0$ is well approximated by $(X_0^{(k)})^T(U\Sigma V^T)Y_0^{(k)}$. Due to space limitations, we will omit this technical step and thus focus here on the case $k = r$ (equivalently, neglect the error incurred by this approximation).
Using Lemma III.4 to bound the contribution of $X_\perp$, $Y_\perp$, we have
$$\langle X_0^T(U\Sigma V^T)Y_0,\; X_0^T N^E Y_0\rangle = \langle S_x^T X^T(U\Sigma V^T)Y S_y,\; X_0^T N^E Y_0\rangle\,(1 + o_n(1)) = \langle X^T(U\Sigma V^T)Y,\; S_x^T X_0^T N^E Y_0 S_y\rangle\,(1 + o_n(1)). \qquad (19)$$
Further, $X_0^T N^E Y_0 = X_0^T\widetilde N Y_0 + X_0^T(N^E - \widetilde N)Y_0$ and, using once more the bound in Lemma 3.2 of [2], which implies $|X_0^T(N^E - \widetilde N)Y_0| \le Cr\sqrt{nrp}$, we get
$$S_x^T X_0^T N^E Y_0 S_y = L_x\Theta_x^2 L_x^T\; X^T\widetilde N Y\; R_y\Theta_y^2 R_y^T + E_1 = \hat Z + E_2,$$
where we recall that $\hat Z$ is the diagonal matrix with entries given by the singular values of $\widetilde N$, and $\|E_1\|_F^2, \|E_2\|_F^2 \le C(p, r)\sqrt{n}$. Using this estimate in Eq. (19), together with the result in Lemma III.2, we finally get
$$\frac{\langle X_0^T(U\Sigma V^T)Y_0,\; X_0^T N^E Y_0\rangle}{\sqrt{mn}\,\|X_0^T N^E Y_0\|_F} \;\ge\; \frac{\sum_{k=1}^r \Sigma_k a_k b_k z_k}{\sqrt{\alpha}\,\|z\|} - o_n(1),$$
which implies the thesis after simple algebraic manipulations.
| 4,082 |
1001.0018
|
2951469503
|
We study the power of nonadaptive quantum query algorithms, which are algorithms whose queries to the input do not depend on the result of previous queries. First, we show that any bounded-error nonadaptive quantum query algorithm that computes some total boolean function depending on n variables must make Omega(n) queries to the input in total. Second, we show that, if there exists a quantum algorithm that uses k nonadaptive oracle queries to learn which one of a set of m boolean functions it has been given, there exists a nonadaptive classical algorithm using O(k log m) queries to solve the same problem. Thus, in the nonadaptive setting, quantum algorithms can achieve at most a very limited speed-up over classical query algorithms.
|
We note that the question of putting lower bounds on nonadaptive quantum query algorithms has been studied previously. First, Zalka has obtained a tight lower bound on the nonadaptive quantum query complexity of the unordered search problem, which is a particular learning problem @cite_2 . Second, in @cite_13 , Nishimura and Yamakami give lower bounds on the nonadaptive quantum query complexity of a multiple-block variant of the ordered search problem. Finally, @cite_14 develop the weighted adversary argument of Ambainis @cite_12 to obtain lower bounds that are specific to the nonadaptive setting. Unlike the situation considered here, their bounds also apply to quantum algorithms for computing partial functions.
|
{
"abstract": [
"We present two general methods for proving lower bounds on the query complexity of nonadaptive quantum algorithms. Both methods are based on the adversary method of Ambainis. We show that they yield optimal lower bounds for several natural problems, and we challenge the reader to determine the nonadaptive quantum query complexity of the ''1-to-1 versus 2-to-1'' problem and of Hidden Translation. In addition to the results presented at Wollic 2008 in the conference version of this paper, we show that the lower bound given by the second method is always at least as good (and sometimes better) as the lower bound given by the first method. We also compare these two quantum lower bounds to probabilistic lower bounds.",
"This paper employs a powerful argument, called an algorithmic argument, to prove lower bounds of the quantum query complexity of a multiple-block ordered search problem, which is a natural generalization of the ordered search problem. Apart from much studied polynomial and adversary methods for quantum query complexity lower bounds, our argument shows that the multiple-block ordered search needs a large number of nonadaptive oracle queries on a black-box model of quantum computation that is also supplemented with advice. Our argument is also applied to the notions of computational complexity theory: quantum truth-table reducibility and quantum truth-table autoreducibility.",
"The degree of a polynomial representing (or approximating) a function f is a lower bound for the quantum query complexity of f. This observation has been a source of many lower bounds on quantum algorithms. It has been an open problem whether this lower bound is tight. We exhibit a function with polynomial degree M and quantum query complexity @W(M^1^.^3^2^1^...). This is the first superlinear separation between polynomial degree and quantum query complexity. The lower bound is shown by a generalized version of the quantum adversary method.",
"I show that for any number of oracle lookups up to about pi 4thinsp radical (N) , Grover close_quote s quantum searching algorithm gives the maximal possible probability of finding the desired element. I explain why this is also true for quantum algorithms which use measurements during the computation. I also show that unfortunately quantum searching cannot be parallelized better than by assigning different parts of the search space to independent quantum computers. copyright ital 1999 ital The American Physical Society"
],
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_12",
"@cite_2"
],
"mid": [
"1888573181",
"2137637395",
"2770560274",
"2126338609"
]
}
|
Nonadaptive quantum query complexity
|
Many of the best-known results showing that quantum computers outperform their classical counterparts are proven in the query complexity model. This model studies the number of queries to the input x which are required to compute some function f (x). In this work, we study two broad classes of problem that fit into this model.
In the first class of problems, computational problems, one wishes to compute some boolean function f(x_1, . . . , x_n) using a small number of queries to the bits of the input x ∈ {0, 1}^n. The query complexity of f is the minimum number of queries required for any algorithm to compute f, with some requirement on the success probability. The deterministic query complexity of f, D(f), is the minimum number of queries that a deterministic classical algorithm requires to compute f with certainty. D(f) is also known as the decision tree complexity of f. Similarly, the randomised query complexity R_2(f) is the minimum number of queries required for a randomised classical algorithm to compute f with success probability at least 2/3. The choice of 2/3 is arbitrary; any constant strictly between 1/2 and 1 would give the same complexity, up to constant factors. Finally, the bounded-error quantum query complexity Q_2(f) is the minimum number of queries required for a quantum algorithm to compute f with success probability at least 2/3, where the algorithm accesses the input x through a unitary oracle operator O_x. Many of the best-known quantum speed-ups can be understood in the query complexity model. Indeed, it is known that, for certain partial functions f (i.e. functions where there is a promise on the input), Q_2(f) may be exponentially smaller than R_2(f) [14]. However, if f is a total function, D(f) = O(Q_2(f)^6) [4]. See [6, 10] for good reviews of quantum and classical query complexity.
In the second class of problems, learning problems, one is given as an oracle an unknown function f ? (x 1 , . . . , x n ), which is picked from a known set C of m boolean functions f : {0, 1} n → {0, 1}. These functions can be identified with n-bit strings or subsets of [n], the integers between 1 and n. The goal is to determine which of the functions in C the oracle f ? is, with some requirement on the success probability, using the minimum number of queries to f ? . Note that the success probability required should be strictly greater than 1/2 for this model to make sense.
Borrowing terminology from the machine learning literature, each function in C is known as a concept, and C is known as a concept class [13]. We say that an algorithm that can identify any f ∈ C with worst-case success probability p learns C with success probability p. This problem is known classically as exact learning from membership queries [3,13], and also in the literature on quantum computation as the oracle identification problem [2]. Many interesting results in quantum algorithmics fit into this framework, a straightforward example being Grover's quantum search algorithm [9]. It has been shown by Servedio and Gortler that the speed-up that may be obtained by quantum query algorithms in this model is at most polynomial [13].
Nonadaptive query algorithms
This paper considers query algorithms of a highly restrictive form, where oracle queries are not allowed to depend on previous queries. In other words, the queries must all be made at the start of the algorithm. We call such algorithms nonadaptive, but one could also call them parallel, in contrast to the usual serial model of query complexity, where one query follows another. It is easy to see that, classically, a deterministic nonadaptive algorithm that computes a function f : {0, 1} n → {0, 1} which depends on all n input variables must query all n variables (x 1 , . . . , x n ). Indeed, for any 1 ≤ i ≤ n, consider an input x for which f (x) = 0, but f (x ⊕ e i ) = 1, where e i is the bit string which has a 1 at position i, and is 0 elsewhere. Then, if the i'th variable were not queried, changing the input from x to x ⊕ e i would change the output of the function, but the algorithm would not notice.
In the case of learning, the exact number of queries required by a nonadaptive deterministic classical algorithm to learn any concept class C can also be calculated. Identify each concept in C with an n-bit string, and imagine an algorithm A that queries some subset S ⊆ [n] of the input bits. If there are two or more concepts in C that do not differ on any of the bits in S, then A cannot distinguish between these two concepts, and so cannot succeed with certainty. On the other hand, if every concept x ∈ C is unique when restricted to S, then x can be identified exactly by A. Thus the number of queries required is the minimum size of a subset S ⊆ [n] such that every pair of concepts in C differs on at least one bit in S.
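The following brute-force sketch (our own illustration, exponential in n and meant only for tiny concept classes) makes this criterion concrete:

```python
from itertools import combinations

def min_distinguishing_subset(concepts):
    """Smallest set of bit positions S such that all concepts are pairwise distinct on S.

    `concepts` is a list of distinct equal-length bit strings. Brute force over
    subsets, so only suitable for small examples.
    """
    n = len(concepts[0])
    for size in range(n + 1):
        for S in combinations(range(n), size):
            restricted = {tuple(c[i] for i in S) for c in concepts}
            if len(restricted) == len(concepts):   # every concept is unique on S
                return set(S)
    return set(range(n))

# Example: these three 4-bit concepts are already distinguished by positions {0, 1}.
print(min_distinguishing_subset(["0011", "0101", "1001"]))
```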
We will be concerned with the speed-up over classical query algorithms that can be achieved by nonadaptive quantum query algorithms. Interestingly, it is known that speedups can indeed be found in this model. In the case of computing partial functions, the speed-up can be dramatic; Simon's algorithm for the hidden subgroup problem over Z n 2 , for example, is nonadaptive and gives an exponential speed-up over the best possible classical algorithm [14]. There are also known speed-ups for computing total functions. For example, the parity of n bits can be computed exactly using only ⌈n/2⌉ nonadaptive quantum queries [8]. More generally, any function of n bits can be computed with bounded error using only n/2+O( √ n) nonadaptive queries, by a remarkable algorithm of van Dam [7]. This algorithm in fact retrieves all the bits of the input x successfully with constant probability, so can also be seen as an algorithm that learns the concept class consisting of all boolean functions on n bits using n/2 + O( √ n) nonadaptive queries.
Finally, one of the earliest results in quantum computation can be understood as a nonadaptive learning algorithm. The quantum algorithm solving the Bernstein-Vazirani parity problem [5] uses one query to learn a concept class of size 2^n, for which any classical learning algorithm requires n queries, showing that there can be an asymptotic quantum-classical separation for learning problems.
New results
We show here that these results are essentially the best possible. First, any nonadaptive quantum query algorithm that computes a total boolean function with a constant probability of success greater than 1/2 can only obtain a constant factor reduction in the number of queries used. In particular, if we restrict to nonadaptive query algorithms, then Q 2 (f ) = Θ(D(f )). In the case of exact nonadaptive algorithms, we show that the factor of 2 speed-up obtained for computing parity is tight. More formally, our result is the following theorem.
Theorem 1. Let f : {0, 1} n → {0, 1} be a total function that depends on all n variables, and let A be a nonadaptive quantum query algorithm that uses k queries to the input to compute f , and succeeds with probability at least 1 − ǫ on every input. Then
$$k \ge \frac{n}{2}\Big(1 - 2\sqrt{\epsilon(1-\epsilon)}\Big).$$
In the case of learning, we show that the speed-up obtained by the Bernstein-Vazirani algorithm [5] is asymptotically tight. That is, the query complexities of quantum and classical nonadaptive learning are equivalent, up to a logarithmic term. This is formalised as the following theorem.
Theorem 2. Let C be a concept class containing m concepts, and let A be a nonadaptive quantum query algorithm that uses k queries to the input to learn C, and succeeds with probability at least 1 − ǫ on every input, for some ǫ < 1/2. Then there exists a classical nonadaptive query algorithm that learns C with certainty using at most
$$\frac{4k\log_2 m}{1 - 2\sqrt{\epsilon(1-\epsilon)}}$$
queries to the input.
Nonadaptive quantum query complexity of computation
Let A be a nonadaptive quantum query algorithm. We will use what is essentially the standard model of quantum query complexity [10]. A is given access to the input x = x_1 . . . x_n via an oracle O_x which acts on an (n + 1)-dimensional space indexed by basis states |0⟩, . . . , |n⟩, and performs the operation O_x|i⟩ = (−1)^{x_i}|i⟩. We define O_x|0⟩ = |0⟩ for technical reasons (otherwise, A could not distinguish between x and its complement x̄). Assume that A makes k queries to O_x. As the queries are nonadaptive, we may assume they are made in parallel. Therefore, the existence of a nonadaptive quantum query algorithm that computes f and fails with probability ε is equivalent to the existence of an input state |ψ⟩ and a measurement which, applied to O_x^{⊗k}|ψ⟩, outputs f(x) with probability at least 1 − ε.

The intuition behind the proof of Theorem 1 is much the same as that behind "adversary" arguments lower bounding quantum query complexity [10]. As in Section 1.1, let e_j denote the n-bit string which contains a single 1, at position j. In order to distinguish two inputs x, x ⊕ e_j where f(x) ≠ f(x ⊕ e_j), the algorithm must invest amplitude of |ψ⟩ in components where the oracle gives information about j. But, unless k is large, it is not possible to invest in many variables simultaneously.
We will use the following well-known fact from [5].

Fact 3 (Bernstein and Vazirani [5]). Imagine there exists a positive operator M ≤ I and states |ψ_1⟩, |ψ_2⟩ such that ⟨ψ_1|M|ψ_1⟩ ≤ ε, but ⟨ψ_2|M|ψ_2⟩ ≥ 1 − ε. Then |⟨ψ_1|ψ_2⟩|^2 ≤ 4ε(1 − ε).
We now turn to the proof itself. Write the input state |ψ⟩ as
$$|\psi\rangle = \sum_{i_1,\ldots,i_k}\alpha_{i_1,\ldots,i_k}\,|i_1,\ldots,i_k\rangle,$$
where, for each m, 0 ≤ i_m ≤ n. It is straightforward to compute that
$$O_x^{\otimes k}\,|i_1,\ldots,i_k\rangle = (-1)^{x_{i_1}+\cdots+x_{i_k}}\,|i_1,\ldots,i_k\rangle.$$
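A tiny NumPy sketch (ours) of this oracle model: O_x is the diagonal ±1 operator on the (n+1)-dimensional space, with O_x|0⟩ = |0⟩, and its k-fold Kronecker power produces exactly the sign (−1)^{x_{i_1}+···+x_{i_k}}:

```python
import numpy as np

n, k = 3, 2
x = np.array([1, 0, 1])                       # the hidden input x_1 ... x_n

# O_x |i> = (-1)^{x_i} |i> for i >= 1, and O_x |0> = |0>.
signs = np.concatenate(([1], (-1) ** x))      # length n+1, indexed by |0>, |1>, ..., |n>
O_x = np.diag(signs.astype(float))

# k nonadaptive queries in parallel = one application of the k-fold Kronecker power.
O_x_k = O_x
for _ in range(k - 1):
    O_x_k = np.kron(O_x_k, O_x)

# Check the phase on a basis state |i_1, i_2>, e.g. |1, 3>.
i_first, i_second = 1, 3
index = i_first * (n + 1) + i_second          # position of |1,3> in the Kronecker ordering
assert O_x_k[index, index] == (-1) ** (x[i_first - 1] + x[i_second - 1])
```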
As f depends on all n inputs, for any j there exists a bit string x^j such that f(x^j) ≠ f(x^j ⊕ e_j). Then
$$(O_{x^j}O_{x^j\oplus e_j})^{\otimes k}\,|i_1,\ldots,i_k\rangle = (-1)^{|\{m\,:\,i_m=j\}|}\,|i_1,\ldots,i_k\rangle;$$
in other words, $(O_{x^j}O_{x^j\oplus e_j})^{\otimes k}$ negates those basis states that correspond to strings $i_1,\ldots,i_k$ in which $j$ occurs an odd number of times. Therefore, we have
$$\big|\langle\psi|(O_{x^j}O_{x^j\oplus e_j})^{\otimes k}|\psi\rangle\big|^2 = \Big(\sum_{i_1,\ldots,i_k}|\alpha_{i_1,\ldots,i_k}|^2(-1)^{|\{m:\,i_m=j\}|}\Big)^2 = \Big(1 - 2\sum_{i_1,\ldots,i_k}|\alpha_{i_1,\ldots,i_k}|^2\,\big[\,|\{m:\,i_m=j\}|\ \text{odd}\,\big]\Big)^2 =: (1-2W_j)^2.$$
Now, by Fact 3, $(1-2W_j)^2 \le 4\epsilon(1-\epsilon)$ for all $j$, so
$$W_j \ge \frac{1}{2}\Big(1 - 2\sqrt{\epsilon(1-\epsilon)}\Big).$$
On the other hand, each basis state $|i_1,\ldots,i_k\rangle$ can contribute to $W_j$ for at most $k$ distinct values of $j$, so $\sum_{j=1}^n W_j \le k$. Combining these two inequalities, we have
$$k \ge \frac{n}{2}\Big(1 - 2\sqrt{\epsilon(1-\epsilon)}\Big).$$
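A quick numerical check (our own) of the counting step used above: for any normalized input state, W_j is the squared-amplitude mass on basis states in which j appears an odd number of times, and the W_j sum to at most k:

```python
import numpy as np
from itertools import product

n, k = 4, 2
rng = np.random.default_rng(0)

# Random input state over basis states |i_1,...,i_k>, 0 <= i_m <= n.
basis = list(product(range(n + 1), repeat=k))
amp = rng.standard_normal(len(basis)) + 1j * rng.standard_normal(len(basis))
amp /= np.linalg.norm(amp)

W = np.zeros(n + 1)
for a, idx in zip(amp, basis):
    for j in range(1, n + 1):
        if idx.count(j) % 2 == 1:             # j is queried an odd number of times
            W[j] += abs(a) ** 2

print(W[1:], W[1:].sum(), "<= k =", k)        # the sum of the W_j never exceeds k
assert W[1:].sum() <= k + 1e-9
```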
Nonadaptive quantum query complexity of learning
In the case of learning, we use a very similar model to the previous section. Let A be a nonadaptive quantum query algorithm. A is given access to an oracle O x , which corresponds to a bit-string x picked from a concept class C. O x acts on an n+1 dimensional space indexed by basis states |0 , . . . , |n , and performs the operation
O_x|i⟩ = (−1)^{x_i}|i⟩, with O_x|0⟩ = |0⟩.
Assume that A makes k queries to O x and outputs x with probability strictly greater than 1/2 for all x ∈ C.
We will prove limitations on nonadaptive quantum algorithms in this model as follows. First, we show that a nonadaptive quantum query algorithm that uses k queries to learn C is equivalent to an algorithm using one query to learn a related concept class C ′ . We then show that existence of a quantum algorithm using one query that learns C ′ with constant success probability greater than 1/2 implies existence of a deterministic classical algorithm using O(log |C ′ |) queries. Combining these two results gives Theorem 2.
Lemma 4. Let C be a concept class over n-bit strings, and let C ⊗k be the concept class defined by
C^{⊗k} = {x^{⊗k} : x ∈ C},
where x^{⊗k} denotes the (n + 1)^k-bit string indexed by tuples 0 ≤ i_1, . . . , i_k ≤ n, with x^{⊗k}_{i_1,...,i_k} = x_{i_1} ⊕ · · · ⊕ x_{i_k}, and we define x_0 = 0. Then, if there exists a classical nonadaptive query algorithm that learns C^{⊗k} with success probability p and uses q queries, there exists a classical nonadaptive query algorithm that learns C with success probability p and uses at most kq queries.
Proof. Given access to x, an algorithm A can simulate a query to position (i_1, . . . , i_k) of x^{⊗k} by using at most k queries to compute x_{i_1} ⊕ · · · ⊕ x_{i_k}. Hence, by simulating the algorithm for learning C^{⊗k}, A can learn C^{⊗k} with success probability p using at most kq nonadaptive queries. Learning C^{⊗k} suffices to learn C, because each concept in C^{⊗k} uniquely corresponds to a concept in C (to see this, note that the first n bits of x^{⊗k} are equal to x).
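A small sketch (ours) of the x^{⊗k} construction and of the classical simulation used in this proof, with the convention x_0 = 0:

```python
from itertools import product

def x_tensor_k(x, k):
    """Build x^{⊗k}: entry (i_1,...,i_k) is x_{i_1} XOR ... XOR x_{i_k}, with x_0 = 0."""
    bits = [0] + list(x)                       # index 0 maps to the padding bit x_0 = 0
    return {idx: sum(bits[i] for i in idx) % 2
            for idx in product(range(len(bits)), repeat=k)}

def simulate_query(x, idx):
    """Classically simulate one query to x^{⊗k} using at most k = len(idx) queries to x."""
    parity = 0
    for i in idx:
        if i != 0:                             # querying position 0 costs nothing
            parity ^= x[i - 1]
    return parity

x = [1, 0, 1, 1]
assert all(simulate_query(x, idx) == v for idx, v in x_tensor_k(x, 2).items())
```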
Lemma 5. Let C be a concept class containing m concepts. Assume that C can be learned using one quantum query by an algorithm that fails with probability at most ε, for some ε < 1/2. Then there exists a classical algorithm that uses at most $(4\log_2 m)/(1 - 2\sqrt{\epsilon(1-\epsilon)})$ queries and learns C with certainty.
Proof. Associate each concept with an n-bit string, for some n, and suppose there exists a quantum algorithm that uses one query to learn C and fails with probability ε < 1/2. Then by Fact 3 there exists an input state $|\psi\rangle = \sum_{i=0}^n\alpha_i|i\rangle$ such that, for all $x \ne y \in C$,
$$|\langle\psi|O_x O_y|\psi\rangle|^2 \le 4\epsilon(1-\epsilon), \quad\text{or in other words}\quad \Big(\sum_{i=0}^n|\alpha_i|^2(-1)^{x_i+y_i}\Big)^2 \le 4\epsilon(1-\epsilon). \qquad (1)$$
We now show that, if this constraint holds, there must exist a subset of the inputs S ⊆ [n] such that every pair of concepts in C differs on at least one input in S, and |S| = O(log m). By the argument of Section 1.1, this implies that there is a nonadaptive classical algorithm that learns C with certainty using O(log m) queries.
We will use the probabilistic method to show the existence of S. For any k, form a subset S of at most k inputs between 1 and n by a process of k random, independent choices of input, where at each stage input i is picked to add to S with probability |α_i|^2. Now consider an arbitrary pair of concepts x ≠ y, and let S^+, S^− be the sets of inputs on which the concepts are equal and differ, respectively. By the constraint (1), we have
$$4\epsilon(1-\epsilon) \ge \Big(\sum_{i=0}^n|\alpha_i|^2(-1)^{x_i+y_i}\Big)^2 = \Big(\sum_{i\in S^+}|\alpha_i|^2 - \sum_{i\in S^-}|\alpha_i|^2\Big)^2 = \Big(1 - 2\sum_{i\in S^-}|\alpha_i|^2\Big)^2,$$
so
$$\sum_{i\in S^-}|\alpha_i|^2 \ge \frac{1}{2} - \sqrt{\epsilon(1-\epsilon)}.$$
Therefore, at each stage of adding an input to S, the probability that an input in S^− is added is at least $\frac{1}{2} - \sqrt{\epsilon(1-\epsilon)}$. So, after k stages of doing so, the probability that none of these inputs has been added is at most
$$\Big(\frac{1}{2} + \sqrt{\epsilon(1-\epsilon)}\Big)^k.$$
As there are $\binom{m}{2}$ pairs of concepts $x \ne y$, by a union bound the probability that some pair of concepts does not differ on any of the inputs in S is upper bounded by
$$\binom{m}{2}\Big(\frac{1}{2} + \sqrt{\epsilon(1-\epsilon)}\Big)^k \le m^2\Big(\frac{1}{2} + \sqrt{\epsilon(1-\epsilon)}\Big)^k.$$
For any k greater than
$$\frac{2\log_2 m}{\log_2\!\big(2/(1 + 2\sqrt{\epsilon(1-\epsilon)})\big)} < \frac{4\log_2 m}{1 - 2\sqrt{\epsilon(1-\epsilon)}}\,,$$
this probability is strictly less than 1, implying that there exists some choice of S ⊆ [n] with |S| ≤ k such that every pair of concepts differs on at least one of the inputs in S. This completes the proof.
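A toy simulation (ours) of the probabilistic construction in this proof: positions are sampled with probability |α_i|², and we count how many draws are needed before the sampled set distinguishes every pair of concepts. The concept class and the amplitude vector below are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(1)
concepts = ["0011", "0101", "1001", "1110"]        # a toy concept class, m = 4
n = len(concepts[0])

# A toy amplitude vector over positions 0..n (position 0 is the padding index).
alpha2 = np.ones(n + 1) / (n + 1)                  # |alpha_i|^2, uniform for illustration

def distinguishes(S):
    restricted = {tuple(c[i - 1] for i in S if i > 0) for c in concepts}
    return len(restricted) == len(concepts)

k = 0
S = set()
while not distinguishes(S):
    k += 1
    S.add(rng.choice(n + 1, p=alpha2))             # add input i with probability |alpha_i|^2
print(f"distinguishing set {S} found after k = {k} random draws")
```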
We are finally ready to prove Theorem 2, which we restate for clarity.
Theorem. Let C be a concept class containing m concepts, and let A be a nonadaptive quantum query algorithm that uses k queries to the input to learn C, and succeeds with probability at least 1 − ε on every input, for some ε < 1/2. Then there exists a classical nonadaptive query algorithm that learns C with certainty using at most
$$\frac{4k\log_2 m}{1 - 2\sqrt{\epsilon(1-\epsilon)}}$$
queries to the input.
Proof. Let O_x be the oracle operator corresponding to the concept x. Then a nonadaptive quantum algorithm A that learns x using k queries to O_x is equivalent to a quantum algorithm that uses one query to O_x^{⊗k} to learn x. It is easy to see that this is equivalent to A in fact using one query to learn the concept class C^{⊗k}. By Lemma 5, this implies that there exists a classical algorithm that uses at most $(4k\log_2 m)/(1 - 2\sqrt{\epsilon(1-\epsilon)})$ queries to learn C^{⊗k} with certainty. Finally, by Lemma 4, this implies in turn that there exists a classical algorithm that uses the same number of queries and learns C with certainty.
| 3,197 |
0910.4704
|
2129293544
|
We present an analytical framework to assess the link layer throughput of multichannel Opportunistic Spectrum Access (OSA) ad hoc networks. Specifically, we focus on analyzing various combinations of collaborative spectrum sensing and Medium Access Control (MAC) protocol abstractions. We decompose collaborative spectrum sensing into layers, parametrize each layer, classify existing solutions, and propose a new protocol called Truncated Time Division Multiple Access (TTDMA) that supports efficient distribution of sensing results in “K out of N” fusion rule. In case of multichannel MAC protocols, we evaluate two main approaches of control channel design with 1) dedicated and 2) hopping channel. We propose to augment these protocols with options of handling secondary user (SU) connections preempted by primary user (PU) by 1) connection buffering until PU departure and 2) connection switching to a vacant PU channel. By comparing and optimizing different design combinations, we show that 1) it is generally better to buffer preempted SU connections than to switch them to PU vacant channels and 2) TTDMA is a promising design option for collaborative spectrum sensing process when K does not change over time.
|
One of the first works that gained insight into the general performance of OSA networks, considering the impact of PU activity on blocking and throughput of the SU network, was @cite_29 , where the capacity of a multichannel OSA system was assessed by comparing centrally coordinated versus random SU channel assignment. A spectrum sensing process was not considered. A similar problem was investigated in @cite_11 where the spectrum sharing gains for PU and SU networks were obtained for a distributed and multichannel ad hoc OSA network. Unfortunately, a zero delay spectrum sensing process was assumed with genie-aided channel selection, i.e. in every time slot the receiver knew the exact channel the transmitter would use to send data [srinivasa_twc_2008, Sec. III-C1].
|
{
"abstract": [
"Static spectrum allocation prohibits radio devices from using spectral bands designated for others. As a result, some bands are under-utilized while other bands are over-populated with radio devices. To remedy this problem, the concept of spectrum agility has been considered so as to enable devices to opportunistically utilize others' spectral bands. In order to help realize this concept, we establish an analytical model to derive performance metrics, including spectrum utilization and spectrum-access blocking time in spectral-agile communication systems. We then propose three basic building blocks for spectral-agile systems, namely spectrum opportunity discovery, spectrum opportunity management, and spectrum usage coordination, and develop protocols for each blocks. These protocols are integrated with the IEEE 802.11 protocol, and simulated using ns-2 to evaluate the protocol overhead. The simulation results show that our proposed protocols can improve the throughput of an IEEE 802.11 wireless LAN by 90 for the simulated scenarios, and the improvements matched well our analytical model. These results demonstrate the great potential of using spectrum agility for improving spectral utilization in an efficient, distributed, and autonomous manner",
"We explore the performance tradeoff between opportunistic and regulated access inherent in the design of multiuser cognitive radio networks. We consider a multichannel cognitive radio system with sensing limits at the secondary users and interference tolerance limits at the primary and secondary users. Our objective is to determine the optimal amount of spectrum sharing, i.e., the number of secondary users that maximizes the total deliverable throughput in the network.We begin with the case of perfect primary user detection and zero interference tolerance at each of the primary and secondary nodes. With identical primary and secondary traffic statistics, we find that the optimal fraction of licensed users lies between the two extremes of fully opportunistic and fully licensed operation and is equal to the traffic duty cycle. When the secondary users can vary their transmission probabilities based on the number of active primary users, we find that the optimal number of opportunistic users is equal to the average number of unoccupied channels. We then consider the more involved case of imperfect sensing and non-zero interference tolerance constraints. We provide numerical simulation results to study the tradeoff between licensing and autonomy and the impact of primary user sensing and interference tolerance on the deliverable throughput for two different subchannel selection strategies at the secondary users."
],
"cite_N": [
"@cite_29",
"@cite_11"
],
"mid": [
"2164931063",
"2105744856"
]
}
|
Performance of Joint Spectrum Sensing and MAC Algorithms for Multichannel Opportunistic Spectrum Access Ad Hoc Networks
|
It is believed that Opportunistic Spectrum Access (OSA) networks will be one of the primary forces in combating spectrum scarcity [2] in the upcoming years [3], [4]. Therefore, OSA networks [5], [6] have become the topic of rigorous investigation by the communications theory community. Specifically, the assessment of spectrum sensing overhead on OSA medium access control (MAC) performance has recently gained significant attention.
A. Research Objective
In the OSA network performance analysis, a description of the relation between the primary (spectrum) user (PU) network and the secondary (spectrum) user (SU) network can be split into two general models: macroscopic and microscopic. In the macroscopic OSA model [7], [8], [9] it is assumed that the time limit to detect a PU and vacate its channel is very long compared to the SU time slot, frame or packet length duration. Such a time limit is assumed to be given by a radio spectrum regulatory organization.
For example, the timing requirements for signal detection of TV transmissions and low power licensed devices operating in TV bands by IEEE 802.22 networks [10] (including transmission termination and channel vacancy time, i.e. a time it takes the SU to stop transmitting from the moment of detecting PU) must be equal to or smaller than 4.1 s [11,Tab. 15.5], while the frame and superframe duration of IEEE 802.22 are equal to 10 ms and 160 ms, respectively [11]. Also, in the macroscopic model it is assumed that the PU channel holding time, i.e. the time in which the PU is seen by the SU as actively transmitting, is much longer than the delay incurred by the detection process performed at the SU. As a result it can be assumed in the analysis that, given high PU detection accuracy (which is a necessity), OSA network performance is determined by the traffic pattern of the SUs. That is, it depends on the total amount of data to be transmitted by the SU network, the duration of individual SU data packets and the number of SU nodes. In other words the PU bandwidth resource utilization by the SU is independent of PU detection efficiency.
In the microscopic OSA model, more popular than its macroscopic counterpart due to analytic challenges, the detection time is short in relation to the shortest transmission unit of the OSA system.
Detection is also performed much more frequently than in the macroscopic model, i.e. for every SU packet [12], [13] or in every time slot [14], [15], [16], [17], [18]. Also, the microscopic model assumes much higher PU activity than the macroscopic model, which justifies frequent detection cycles. Since the detection overhead is much larger than in the macroscopic model, the analysis of the utilization of resources (temporarily unoccupied by the PU) by the OSA network cannot be decoupled from the analysis of the PU signal detection phase. Therefore, while the distinction between the macroscopic and microscopic models is somewhat fluid, it is important to partition the two cases and compare them in a systematic manner. More importantly, the comparison should be based on a detailed OSA multichannel and multiuser ad hoc network model [19, Sec. 7.4], which does not ignore the overhead from both the physical layer (PHY) and the MAC layer of different cooperative and distributed spectrum sensing strategies [19, Tab. 7.1] and, in the case of the microscopic model, accounts for different channel access procedures and connection management strategies for the SUs upon PU detection, like buffering or switching to a vacant channel. Finally, the comparison should be realized using tractable analytical tools.
C. Our Contribution
In this paper, we present a unified analytical framework to design the spectrum sensing and the OSA data MAC jointly, for the macroscopic and microscopic cases. This design framework provides the (i) means of comparing different spectrum sensing techniques plus MAC architectures for OSA networks and (ii) spectrum sensing parameters such as observation time and detection rate for given design options. As a metric for optimization and comparison, we consider the average link layer OSA network throughput.
Our model will account for the combined effects of the cooperative spectrum sensing and the underlying MAC protocol. For spectrum sensing, we will consider several architectures parametrized by sensing radio bandwidth, the parameters of the sensing PHY, and the parameters of the sensing MAC needed to exchange sensing data between individual OSA nodes. Along with classifying most of the well known sensing MAC protocols, we introduce a novel protocol called Truncated Time Division Multiple Access (TTDMA) that supports efficient exchange of individual sensing decisions in "κ out of N " fusion rule.
For the data MAC we will consider two protocol abstractions, (i) Dedicated Control Channel (DCC) and
(ii) Hopping Control Channel (HCC), as analyzed in [15], [34], with novel extensions. That is, given the designs of [25], [26], [27], [30], we will analyze MAC protocols that (i) allow (or forbid) buffering of existing SU connections on the event of PU arrival, and (ii) allow (or forbid) switching of the SU connections preempted by the PU to empty channels. Please note that, in the case of the analytical model proposed in [15] for the SU connection-buffering OSA MAC schemes, we present an exact solution. Finally, using our framework, we compute the maximum link layer throughput for the most relevant combinations of spectrum sensing and MAC, optimizing the parameters of the model jointly, both for the microscopic and macroscopic models.
The rest of the paper is organized as follows. System model and a formal problem description is presented in Section II. Description of spectrum sensing techniques and their analysis is presented in Section III. Analysis of MAC strategies are presented in Section IV. Numerical results for spectrum sensing process, MAC and joint design framework are presented in Section V. Finally the conclusions are presented in Section VI.
II. SYSTEM MODEL AND FORMAL PROBLEM DESCRIPTION
The aim of this work is to analyze link layer throughput accounting for different combinations of MAC, spectrum sensing protocols and regulatory constraints. The model can later be used to optimize the network parameters jointly to maximize the throughput, subject to regulatory constraints. Before formalizing the problem, we need to introduce the system model, distinguishing between the microscopic and macroscopic approaches.
A. System Model
1) Microscopic Model: For the two multichannel MAC abstractions considered, i.e. DCC and HCC, we distinguish between the following cases: (i) when SU data transfer interrupted by the PU is being buffered (or not) for further transmission and (ii) when an existing SU connection can switch (or not) to a free channel on the event of PU arrival (both for the buffering and non-buffering SU connection cases). Finally, we will distinguish two cases for DCC: (i) when there is a separate control channel not used by the PU and (ii) when the control channel is also used by the PU for communication. All these protocols will be explained in detail in Section IV.
We assume slotted transmission within the SU and PU networks, where PU and SU time slots are equal and synchronized with each other. The assumptions on slotted and synchronous transmission between PU and SU are commonly made in the literature, either while analyzing theoretical aspects of OSA (see, e.g., [12]) or practical OSA scenarios (see [16, Fig. 2] in the context of secondary utilization of GSM spectrum, or [38] in the context of secondary IEEE 802.16 resource usage). Our model can be generalized to the case where PU slots are offset in time from SU slots; however, it would require additional analysis of optimal channel access policies, see for example [36], [39], [40], which is beyond the scope of this paper. We also note that the synchrony assumption allows one to obtain upper bounds on the throughput when transmitting on a slot-asynchronous interface [41].
The total slot duration is t_t µs. It is divided into three parts: (i) the detection part of length t_q µs, denoted as quiet time, (ii) the data part of length t_u µs, and, if the communication protocol requires channel switching, (iii) the switching part of length t_p µs. The data part of the SU time slot is long enough to execute one request-to-send and clear-to-send exchange [15], [34]. For the PU, the entire slot of t_t µs is used for data transfer, see Fig. 1(a).
Our model assumes that there are M channels having fixed capacity C Mbps that are randomly and independently occupied by the PU in each slot with probability q p . There are N nodes in the SU network, each one communicating directly with another SU on one of the available PU channels in one hop fashion.
Also, we assume no merging of the channels, i.e. only one channel can be used by a communicating pair of SUs at a time. SUs send packets with geometrically distributed length with an average of 1/q = d/(C t_u) slots for DCC, and 1/q = d/(C(t_u + t_p)) slots for HCC [15], [34, Sec. 3.2.3], where d is the average packet size given in bits. The difference between the average packet length for DCC and HCC is a result of the switching time overhead for HCC, because during channel switching SUs do not transfer any data, even though they occupy the channel. We therefore virtually prolong the data packet by t_p for HCC to keep the comparison fair.
Every time a node tries to communicate with another node, it accesses the control channel and transmits a control packet with probability p to a randomly selected and non-occupied receiver. A connection is successful when only one node transmits a control packet in a particular time slot. The reasons for selecting a variant of S-ALOHA as the contention resolution strategy are manifold. First, in reality each real-life OSA multichannel MAC protocol belonging to each of the considered classes, i.e. HCC or DCC, will use its own contention resolution strategy. Implementing each and every approach in our analysis (i) would significantly complicate the analysis and, most importantly, (ii) would jeopardize the fairness of the comparison. Therefore a single protocol was needed for the analytical model. Since S-ALOHA is a widespread and well understood protocol in wireless networks and is a foundation of many other collision resolution strategies, including CSMA/CA, it has been selected for the system model herein.
In each quiet phase every SU node performs PU signal detection based on signal energy observation.
Since we assume that the OSA nodes are fully connected in a one-hop network, each node observes on average the same signal realization in each time slot [13], [18], [42]. PU channels detected by the SU are modeled as Additive White Gaussian Noise channels with Rayleigh fading. Therefore, to increase PU detectability by the OSA network, we consider collaborative detection with hard decision combining in the detection process, based on the "κ out of N" rule, as in [43], [44]. Hence we divide the quiet phase into (i) the sensing phase of length t_s µs and (ii) the reporting phase of length t_r µs. The sensing phase is of the same length for all nodes. For simplicity we do not consider in this study sensing methods that adapt the sensing time to propagation conditions as in [45]. In the sensing phase, nodes perform their local measurements. Then, during the reporting phase, nodes exchange their sensing results and make a decision individually by combining the individual sensing results. We will analyze different PHY and MAC approaches to collaborative spectrum sensing, especially (i) methods to assign sensing frequencies to users, (ii) rules for combining the sensing results, and (iii) multiple access schemes for measurement reporting. In this paper we do not consider sensing strategies applicable to single channel OSA networks [46], two stage spectrum sensing [8], or sensing MAC protocols based on random access [47], due to their excessive delay. We will explain our spectrum sensing approaches in more detail in Section III. Further, we assume an error-prone channel for the sensing layer as well as for the data layer, where the probability of error during transmission is denoted as p_e.
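As an illustration of the "κ out of N" hard-combining rule mentioned above, a short sketch (ours; the per-node probabilities and N are assumed example values) computing the cooperative detection and false alarm probabilities from the local probabilities p_11 and p_10:

```python
from math import comb

def k_out_of_n(p_local, kappa, N):
    """Probability that at least kappa of N independent nodes report 'PU present'."""
    return sum(comb(N, i) * p_local**i * (1 - p_local)**(N - i) for i in range(kappa, N + 1))

p11, p10, N = 0.7, 0.1, 8          # per-node detection / false alarm probabilities (assumed)
for kappa in (1, 4, 8):            # OR rule, majority-like rule, AND rule
    print(kappa, k_out_of_n(p11, kappa, N), k_out_of_n(p10, kappa, N))
```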
Finally, we consider two regulatory constraints under which the OSA network is allowed to utilize the PU spectrum provided the channel is idle: (i) a maximum detection delay t_{d,max}, i.e. a time limit within which a SU must detect a PU, and (ii) a minimum detection probability p_{d,min}, i.e. the probability with which an OSA system has to detect a PU signal with minimum signal to noise ratio γ. Note that in the event of mis-detection and subsequent SU transmission in a channel occupied by the PU, a packet fragment is considered successfully transmitted, since in our model the transmission power of the SU is much higher than the interference from the PU, and the regulatory requirements considered here do not constrain SU transmission power¹ (refer for example to the IEEE 802.22 draft, where Urgent Coexistence Situation packets are transmitted on the same channel as an active PU [10], [11]). Moreover, maximum transmission power is a metric specific to overlay OSA systems [19, Sec. 2.2.5 and 8.2.1], where typically no spectrum sensing is considered. Also, we do not consider a metric based on a maximum allowable level of collisions between PU and SU. Note that the parameters of the introduced model are summarized in Table I and the abbreviations are summarized in Table II.
¹The opposite case is to assume that a packet fragment is considered lost and retransmitted. This approach, however, requires an acknowledgement mechanism for lost packet fragments, see for example [17], [41, Sec. II], which contradicts the model assumption on the geometric distribution of SU packet lengths.
2) Macroscopic Model: We assume the same system model as for the microscopic case, except for the following differences. The OSA network performs detection rarely, and the PU is stable for the duration of OSA network operation, i.e. it is either transmitting constantly on a channel or stays idle. In other words, a quiet period occurs over multiple time slots, see Fig. 1(b). Also, since the PU is considered stable on every channel, we do not consider all types of OSA MAC protocols introduced for the microscopic model.
Instead we use the classical DCC and HCC models proposed in [34], with the corrections of [15] accounting for the incomplete transition probability calculations in the case when the OSA network occupies all PU channels and a new connection is established on the control channel.
B. Formal Problem Description
To compute the maximum throughput for different combinations of protocols and models, we define an optimization problem. The objective is the OSA network link layer throughput R t . Therefore, considering the regulatory constraints given above we need to
maximize R_t = ξR subject to p_d = p_{d,min}, t_d ≤ t_{d,max},   (1)
where t_d is the detection time, i.e. the time needed to complete the whole detection operation as described in Section III-D, R is the steady state link layer throughput without sensing and switching overhead, which will be computed in Section IV, and
ξ = (t_t − t_q − t_p)/t_t   (2)
accounts for the sensing and switching overhead within each SU time slot. Note that R in (1) is itself affected by p_f, as it will be shown in Section IV. Also note that t_p is removed from (2) in the macroscopic case, since the switching time is negligible in comparison to the inter-sensing time.
III. LAYERED MODEL OF SPECTRUM SENSING ANALYSIS
To design the spectrum sensing, we follow the approach of [7] in which the spectrum sensing process is handled jointly by (i) the sensing radio, (ii) the sensing PHY, and (iii) the sensing MAC. Using this layered model we can compare existing approaches to spectrum sensing and choose the best sensing architecture in a systematic way. Since the parameters of the design framework in (1) are determined by the choices of individual layers, we describe and parametrize each layer of the spectrum sensing, later describing cross-layer parameters.
A. Sensing Radio
The sensing radio scans the PU spectrum and passes the spectrum sensing result to the sensing PHY for analysis. The sensing radio bandwidth is given as αMb, where α is the ratio of the bandwidth of the sensing radio to the total PU bandwidth and b MHz is the bandwidth of each PU channel. With α > 1/M a node can sense multiple channels at once; however, the cost of such a wideband sensing radio increases.
B. Sensing PHY
The sensing PHY analyzes the measurements from the sensing radio to determine if a PU is present in a channel. Independent of the sensing algorithm, such as energy detection, matched filter detection or feature detection [48], [49], there exists a common set of parameters for the sensing PHY: (i) the time to observe the channel by one node, t_e µs, (ii) the PU signal to noise ratio detection threshold θ, and (iii) the transmit time of one bit of sensing information, t_a = 1/C µs. We denote the conditional probability of a sensing result as p_ij, i, j ∈ {0, 1}, where j = 1 denotes PU presence and j = 0 otherwise, and i = 1 indicates a detection result of the PU being busy and i = 0 otherwise. Observe that p_10 = 1 − p_00 and p_11 = 1 − p_01.
As noted in Section II-A, we consider energy detection as the PU detection algorithm since it does not require a priori information about the PU signal. For this detection method, in a Rayleigh fading plus Additive White Gaussian Noise channel, p_10 is given as [15, Eq. (1)]
p_{10} = \Gamma(\epsilon, \theta/2) / \Gamma(\epsilon),   (3)
and p_11 as [15, Eq. (3)]
p_{11} = e^{-\theta/2} \left[ \sum_{h=0}^{\epsilon-2} \frac{\theta^{h}}{h!\,2^{h}} + \left(\frac{1+\gamma}{\gamma}\right)^{\epsilon-1} \left( e^{\frac{\theta\gamma}{2+2\gamma}} - \sum_{h=0}^{\epsilon-2} \frac{(\theta\gamma)^{h}}{h!\,(2+2\gamma)^{h}} \right) \right],   (4)
where Γ(·) and Γ(·, ·) are the complete and incomplete Gamma functions, respectively, and ε = ⌊t_e αMb⌋ is the time-bandwidth product. By defining G_ε(θ) = p_10, i.e. θ = G_ε^{-1}(p_10), we can express p_11 as a function of p_10 and t_e.
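As a purely illustrative numerical check of (3) and (4) (not part of the framework itself), the following sketch evaluates p_10 and p_11 for assumed example values of θ, ε and γ; it relies on SciPy's regularized upper incomplete gamma function.

```python
# Minimal sketch of the energy-detector probabilities (3) and (4).
# eps = time-bandwidth product, theta = detection threshold, gamma_bar = average PU SNR.
from math import exp, factorial
from scipy.special import gammaincc  # regularized upper incomplete gamma: Gamma(a, x) / Gamma(a)

def p_false_alarm(theta, eps):
    """Eq. (3): p_10 = Gamma(eps, theta/2) / Gamma(eps)."""
    return gammaincc(eps, theta / 2.0)

def p_detection(theta, eps, gamma_bar):
    """Eq. (4): p_11 over a Rayleigh fading channel."""
    s1 = sum(theta ** h / (factorial(h) * 2.0 ** h) for h in range(eps - 1))
    s2 = sum((theta * gamma_bar) ** h / (factorial(h) * (2.0 + 2.0 * gamma_bar) ** h)
             for h in range(eps - 1))
    a = ((1.0 + gamma_bar) / gamma_bar) ** (eps - 1)
    return exp(-theta / 2.0) * (s1 + a * (exp(theta * gamma_bar / (2.0 + 2.0 * gamma_bar)) - s2))

eps = 10                              # example: eps = floor(t_e * alpha * M * b)
gamma_bar = 10 ** (-5 / 10)           # gamma = -5 dB, as used later in Section V
theta = 20.0                          # example threshold (assumed value)
print(p_false_alarm(theta, eps), p_detection(theta, eps, gamma_bar))
```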
C. Sensing MAC
The sensing MAC is the process responsible for sensing multiple channels, sharing sensing results with other users, and making a final decision on the PU presence. Because of the vast number of possibilities for sensing MAC algorithms it is hard to find a general set of parameters. Instead, we derive cross-layer parameters for a specific option of the sensing MAC. This methodology can be applied to any new sensing MAC scheme. We now introduce classifications which will be used in the derivation of cross-layer parameters.
1) Sensing Strategy for Grouping Channels and Users:
Each SU has to determine which channel should be sensed among the M channels. To reduce sensing and reporting overhead, the OSA system can divide users and channels into n_g sub-groups [50]. Sub-group i ∈ {1, · · · , n_g} is formed by n_u,i users who should sense m_s,i channels to make a final decision cooperatively. Assuming that all users are divided equally into groups, m_s,i ∈ {⌊M/n_g⌋, ⌈M/n_g⌉} and n_u,i ∈ {⌊N/n_g⌋, ⌈N/n_g⌉}. Note that for M/n_g ∈ ℕ and N/n_g ∈ ℕ all sub-groups have the same n_u,i = N/n_g and m_s,i = M/n_g for all i. Given N and M, if n_g is small, more users are in a group and the collaboration gain increases, but at the same time more channels must be sensed, which results in more time overhead for sensing. For large n_g, this relation is reversed.
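For illustration only, the sketch below shows one way to split N users and M channels into n_g sensing sub-groups so that group sizes differ by at most one; the helper name assign_groups is ours, not the paper's.

```python
# Illustrative sketch: splitting N users and M channels into n_g sub-groups.
def assign_groups(N, M, n_g):
    users_per_group = [N // n_g + (1 if i < N % n_g else 0) for i in range(n_g)]
    chans_per_group = [M // n_g + (1 if i < M % n_g else 0) for i in range(n_g)]
    return users_per_group, chans_per_group

# Large-scale network example used later in Section V: N = 40, M = 12
print(assign_groups(N=40, M=12, n_g=4))   # -> ([10, 10, 10, 10], [3, 3, 3, 3])
```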
2) Combining Scheme: By combining the sensing results of other users, an OSA network makes a more reliable decision on the PU state. As considered in [13], [51], we will take κ as a design parameter for the sensing MAC and find the optimum value that maximizes the performance. Note that for the case of N-user cooperation, if κ = 1 the combining logic becomes the "or" rule [19, Sec. 3.2], [42, Sec. III-C], and if κ = N it becomes the "and" rule.
3) Multiple Access for Measurement Reporting:
To transmit sensing results of multiple users through the shared media, a multiple access scheme is needed. Note that this multiple access scheme is only for the reporting process, different from the multiple access for data transfer. We consider the following approaches.
a) Time Division Multiple Access (TDMA):
This is a static and well-organized multiple access scheme in which a designated one-bit slot for sensing report transmission is assigned to each user [43], [50]. b) TTDMA: In TDMA, the SU makes a final decision on the presence of a PU on the channel only after it receives all the reporting bits from the other users. However, in an OSA network using TTDMA, SUs may not need to wait until receiving the last reporting bit, because under the "κ out of N" rule the reporting operation can stop as soon as κ one-bits denoting PU presence are received. This sensing MAC aims at reducing the reporting overhead; to the best of our knowledge, however, TTDMA has not been proposed or discussed in earlier work.
c) Single Slot Multiple Access (SSMA):
For this scheme, known also as the boosting protocol [52], only one bit slot is assigned for reporting and all SUs use this slot as a common reporting period. Any SU that detects a PU transmits one bit in the common designated slot; otherwise, a user does not transmit any bit in the designated slot. Then, the reporting bits from SUs who detect a PU overlap and, as a result, their power in the slot is summed up. By measuring the power in the designated slot, a SU can determine whether the primary user exists or not. We assume perfect power control and perfect synchronization.
Even though this may not be practical, because carrier frequency and phase offsets cannot be avoided in real systems, this scheme serves as an upper bound for sensing MAC performance. For the analysis of SSMA in isolation, but under more realistic physical layer conditions, the reader is referred to [53], [54].
D. Cross-Layer Parameters
Considering the combined impact of the individual layers, we derive the cross-layer parameters of the framework described in (1). More specifically, these are t_q and t_d, derived as functions of the individual layer parameters, and p_f and p_d, denoting the final network-wide probabilities of false alarm and detection, respectively.
1) Detection Time t d and Quiet Time t q : Detection time t d is defined as the time duration from the point that a SU starts to sense, to the point that a SU makes a final decision on PU presence. Regardless of the data transfer and spectrum sensing time overlap, the final detection decision is made only after combining the sensing group's reported information [55]. Thus t d is the time from the start of the sensing phase to the end of the reporting phase, i.e. t d = t s + t r .
Since the data transfer may not be possible during sensing or reporting phases t q ≤ t d , depending on the approach. When spectrum sensing and data transfer are divided in time division manner t q = t s + t r .
Note that three other ways of organizing sensing, reporting and data transfer are possible (they will not be considered in the remainder of the paper): (i) simultaneous reporting and data, which can be implemented by using a separate channel as in [56], for which t_q = t_s, (ii) simultaneous sensing and data, implemented by using the frequency hopping method of [57], for which t_q = t_r, and (iii) simultaneous sensing, reporting, and data, for which t_q = 0. Conceptually, simultaneous sensing, reporting, and data transfer is possible and seems most efficient, but we have not found any implementation of it in the literature. Note that in order to implement simultaneous sensing and transmission at least two radio front ends are needed, which increases the total cost of the device.
Define m̄_s as the number of individual sensing events needed to complete the sensing operation and m̄_r as the average number of bits to report. Then the sensing time and the reporting time can be calculated as t_s = m̄_s t_e and t_r = m̄_r t_a. Note that m̄_s is affected by the bandwidth of the sensing radio, because multiple channels can be scanned at once if the bandwidth of the sensing radio is wide. For the case that the sensing radio is narrower than the bandwidth to sense, i.e. α < max{m_s,1, · · · , m_s,n_g}/M, we assume that a SU monitors all channels by sequential sensing [33], because the reporting phase should be synchronized after all SUs finish the sensing phase. With this assumption m̄_s = max{m_s,1, · · · , m_s,n_g}/(αM), because even when the bandwidth to sense is less than that of the sensing radio, one sensing cycle is still needed to obtain the information. For m̄_r, because there are n_g groups in an OSA system, m̄_r = Σ_{i=1}^{n_g} m̄_r,i, where m̄_r,i depends on the multiple access scheme used for reporting, which we compute below. a) TDMA: All n_u,i users should transmit the sensing results of m_s,i channels. Thus, m̄_r,i = n_u,i m_s,i. b) TTDMA: For κ < n_u,i/2, if κ ones are received, the reporting process ends. We introduce a variable δ which is the number of bits transmitted when the reporting process finishes. Thus there should be κ − 1 ones within the first δ − 1 bits and the δ-th bit should be one. Because the range of δ is from κ to n_u,i, the average number of bits for this condition is derived as
m_{1,i} = \sum_{\delta=\kappa}^{n_{u,i}} \binom{\delta-1}{\kappa-1} \left[ (1 - q_p)\,\delta\, p_{00}^{\delta-\kappa} p_{10}^{\kappa} + q_p\,\delta\, p_{01}^{\delta-\kappa} p_{11}^{\kappa} \right].   (5)
Moreover, if the number of received zeros, denoting PU absence, equals to n u,i − κ + 1, the reporting process will stop because even if the remaining bits are all one, the number of ones must be less than κ. Then the reporting process stops at δ-th bit if δ − n u,i + κ − 1 bits of one are received within δ − 1 bits and zero is received at δ-th bit. The range of δ is from n u,i − κ + 1 to n u,i , and thus the average number of bits for this condition is
m_{2,i} = \sum_{\delta=\nu_i}^{n_{u,i}} \binom{\delta-1}{\delta-\nu_i} \left[ (1 - q_p)\,\delta\, p_{00}^{\nu_i} p_{10}^{\delta-\nu_i} + q_p\,\delta\, p_{01}^{\nu_i} p_{11}^{\delta-\nu_i} \right],   (6)
where
ν_i = n_u,i − κ + 1. Therefore, because there are m_s,i channels to sense in group i, m̄_r,i = m_s,i(m_1,i + m_2,i).
For the case κ ≥ n_u,i/2, m_1,i is calculated by counting zeros and m_2,i by counting ones. Thus, we use m̄_r,i = m_s,i(m_1,i + m_2,i) again, after replacing κ with n_u,i − κ + 1, p_00 with p_10, and p_01 with p_11.
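To make the computation of (5)-(6) concrete, the sketch below evaluates the average number of TTDMA reporting bits of one sub-group; the treatment of the κ ≥ n_u,i/2 case follows the substitution described above, and all numeric inputs are example values, not results from the paper.

```python
# Sketch of the expected TTDMA reporting bits per sub-group, Eqs. (5)-(6).
from math import comb

def ttdma_report_bits(n_u, m_s, kappa, q_p, p10, p00, p11, p01):
    if 2 * kappa >= n_u:               # kappa >= n_u/2: count zeros instead of ones
        kappa = n_u - kappa + 1
        p10, p00 = p00, p10
        p11, p01 = p01, p11
    nu = n_u - kappa + 1
    m1 = sum(comb(d - 1, kappa - 1) *
             ((1 - q_p) * d * p00 ** (d - kappa) * p10 ** kappa
              + q_p * d * p01 ** (d - kappa) * p11 ** kappa)
             for d in range(kappa, n_u + 1))                      # Eq. (5)
    m2 = sum(comb(d - 1, d - nu) *
             ((1 - q_p) * d * p00 ** nu * p10 ** (d - nu)
              + q_p * d * p01 ** nu * p11 ** (d - nu))
             for d in range(nu, n_u + 1))                         # Eq. (6)
    return m_s * (m1 + m2)             # average number of reporting bits of sub-group i

# Example sub-group: 10 users, 3 channels, "2 out of 10" rule (values assumed)
print(ttdma_report_bits(n_u=10, m_s=3, kappa=2, q_p=0.1,
                        p10=0.1, p00=0.9, p11=0.95, p01=0.05))
```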
Because we assumed so far that κ is known to each node in the network, OSA nodes know when to stop reporting measurements and start data communication without being instructed by external parties.
For comparison we analyze another type of TTDMA, denoted as κTTDMA, where a cluster head node makes the decision to stop the reporting phase in the OSA network. For example, this approach may be necessary if the κ value is updated in real time. In the worst case scenario this approach requires two bits to be reported by the SU, i.e. one for sending the sensing data and one for an acknowledgment from the cluster head. Then (5) and (6) are modified accordingly.
2) Probability of False Alarm p_f and Detection p_d: Since the channels are sensed by n_g sub-groups, the network-wide probability of false alarm is
p_f = \frac{1}{n_g} \sum_{i=1}^{n_g} p_{f,i},   (7)
where p_f,i is the probability of false alarm of sub-group i. Using (7) we can also derive p_d by substituting p_f,i with p_d,i, the detection probability of sub-group i.
a) TDMA: Since the final decision is made only after all n_u,i reporting bits of sub-group i are received, under the "κ out of N" rule
p_{f,i} = \sum_{\delta=\kappa}^{n_{u,i}} \binom{n_{u,i}}{\delta} \tilde{p}_{10}^{\delta}\, \tilde{p}_{00}^{n_{u,i}-\delta},   (8)
where \tilde{p}_x = (1 − p_e) p_x + p_e (1 − p_x) for p_x ∈ {p_10, p_00}, while p_d,i is obtained from (8) by substituting \tilde{p}_10 with \tilde{p}_11 and \tilde{p}_00 with \tilde{p}_01 (defined analogously).
b) TTDMA: In this case a SU does not need to receive n_u,i bits to make a final decision, because the reporting phase ends as soon as the number of ones reaches κ. To derive p_f,i for this case, we introduce a variable β denoting the number of zeros. The total number of reporting bits is then κ + β if the last bit is one, because otherwise the reporting phase would have ended at fewer than κ + β bits. Therefore, there should be β zeros within the first κ + β − 1 bits and the (κ + β)-th bit should be one. Because β can vary from 0 to n_u,i − κ,
p_{f,i} = \sum_{\beta=0}^{n_{u,i}-\kappa} \binom{\kappa+\beta-1}{\beta} \tilde{p}_{10}^{\kappa}\, \tilde{p}_{00}^{\beta}.   (9)
Finally p_d,i is obtained from (9) by substituting \tilde{p}_10 with \tilde{p}_11 and \tilde{p}_00 with \tilde{p}_01.
c) SSMA: Obviously, the information collected during the reporting process for SSMA is the same as for TDMA.
Therefore p f,i and p d,i are defined the same as for TDMA.
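As a numerical illustration of (7)-(9) (with assumed values of p_10, p_00 and p_e), the sketch below computes the sub-group and network-wide false alarm probabilities for the TDMA and TTDMA reporting schemes; feeding p_11, p_01 instead yields the detection probabilities.

```python
# Sketch of the cross-layer detection probabilities, Eqs. (7)-(9).
from math import comb

def noisy(p, p_e):
    """p~ = (1 - p_e) p + p_e (1 - p)."""
    return (1 - p_e) * p + p_e * (1 - p)

def group_tdma(n_u, kappa, p1, p0, p_e):
    """Eq. (8): decision after receiving all n_u reporting bits."""
    p1t, p0t = noisy(p1, p_e), noisy(p0, p_e)
    return sum(comb(n_u, d) * p1t ** d * p0t ** (n_u - d) for d in range(kappa, n_u + 1))

def group_ttdma(n_u, kappa, p1, p0, p_e):
    """Eq. (9): decision as soon as kappa 'one' bits are received."""
    p1t, p0t = noisy(p1, p_e), noisy(p0, p_e)
    return sum(comb(kappa + b - 1, b) * p1t ** kappa * p0t ** b
               for b in range(n_u - kappa + 1))

def network_prob(groups, kappa, p1, p0, p_e, scheme):
    """Eq. (7): average over the n_g sub-groups."""
    return sum(scheme(n_u, kappa, p1, p0, p_e) for n_u in groups) / len(groups)

groups = [10, 10, 10, 10]                                        # n_u,i per sub-group
print(network_prob(groups, 2, 0.10, 0.90, 0.01, group_tdma))     # network p_f
print(network_prob(groups, 2, 0.95, 0.05, 0.01, group_ttdma))    # network p_d
```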
IV. MULTICHANNEL OSA MAC PROTOCOL ANALYSIS
In this section we present the analysis of throughput R for all considered combinations of MAC protocol architectures. As noted in Section I-C, we propose a set of new multichannel MAC protocols for OSA. We will first describe their operation, later presenting the analysis framework.
A. Description of New Multichannel MAC protocols for OSA
We consider two major groups of MAC protocols for OSA: (i) those enabling buffering of the SU connections preempted by the PU arrival, and (ii) those enabling switching of the SU connections to a vacant channel when preempted. In the former group, when the PU arrives the existing SU connection will pause at the time of preemption and resume on the same channel as soon as the PU goes idle. We assume that the SU always waits for the PU to finish its transmission. The case where the buffered SU connection expires after a predefined time, not analyzed here, is presented in [22] for the centralized network. We do not consider any channel reservation schemes for potential SU connections to be buffered [25]. When buffering is not possible, the preempted SU connection is considered as lost and a new connection must be established on the control channel. In the latter group, when the PU arrives the existing SU connection will look for a new empty channel, to continue transmission. If such a channel cannot be found the connection is lost. Without channel switching, the exiting SU connection is lost as soon as the PU preempts the channel.
Obviously we can have four combinations of these groups for OSA MAC, all of which have been considered in the analysis: (i) with no buffering and no channel switching [30], denoted as B_0S_0, where SU connections preempted by the PU are lost; (ii) with no buffering and channel switching [24], [25], [26], denoted as B_0S_1, where SU connections preempted by the PU switch to a free channel and connections that cannot find a free channel are blocked; (iii) with buffering and no channel switching [15], [22], [23], denoted as B_1S_0, where SU connections preempted by the PU are buffered and resumed on the same channel once the PU departs (in contrast to models where buffered SU connections were also considered to be utilizing the PU channels); and (iv) with buffering and channel switching, denoted as B_1S_1, which combines both mechanisms.
We propose a three dimensional Markov chain whose state vector is given as (X_t, Y_t, Z_t), where X_t denotes the number of channels utilized by SU connections actively transmitting data, Y_t the number of channels occupied by the PU, and Z_t the total number of SU connections (including buffered ones) at time slot t.
Considering a real OSA system, there are conditions that qualify valid states. With SU connection buffering-enabled MAC protocols for OSA, the number of connections cannot be less than the number of channels utilized by SUs, i.e. X t ≤ Z t . Additionally, SUs do not pause transmissions over unoccupied channels. Therefore, the number of SU connections not utilizing a channel cannot exceed the number of channels occupied by PUs, i.e. Z t − X t ≤ Y t or Z t ≤ X t + Y t . Finally, the sum of the channels utilized by PUs and the SUs cannot be greater than M D , i.e. X t + Y t ≤ M D . By combining these conditions we can compactly write them as
0 ≤ X t ≤ Z t ≤ X t + Y t ≤ M D .(10)
When connection buffering is disabled the number of SU connections must be the same as the number of channels utilized by SUs, i.e. X t = Z t . Therefore, for non-buffering SU connection OSA MAC
protocols (X t , Y t , Z t = X t ) ⇒ (X t , Y t ).
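As a small illustration of condition (10), the sketch below enumerates the valid states of the chain for example values M_D = 3 and s_m = 3 (where s_m is the maximum number of SU connections); the non-buffering reduction to (X_t, Y_t) is also shown.

```python
# Sketch: enumerating the valid Markov states implied by condition (10).
def valid_states(M_D, s_m, buffering=True):
    states = []
    for x in range(s_m + 1):                 # channels utilized by SU connections
        for y in range(M_D + 1):             # channels occupied by the PU
            if buffering:
                for z in range(s_m + 1):     # total SU connections (incl. buffered)
                    if x <= z <= x + y and x + y <= M_D:
                        states.append((x, y, z))
            elif x + y <= M_D:               # non-buffering case: Z_t = X_t
                states.append((x, y))
    return states

print(len(valid_states(M_D=3, s_m=3)), len(valid_states(M_D=3, s_m=3, buffering=False)))
```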
For the microscopic case the average channel throughput, excluding switching and sensing overhead, is computed as
R = C \sum_{x=0}^{s_m} \sum_{y=0}^{M_D} \sum_{z=0}^{s_m} x\, \pi_{xyz},   (11)
where s m = max{S} and the steady-state probability π xyz is given by
π xyz = lim t→∞ Pr(X t = x, Y t = y, Z t = z),(12)
and the state transition probabilities to compute (12) will be derived in the subsequent section, uniquely for each OSA multichannel MAC protocol.
Finally, for the macroscopic case the average channel throughput, excluding switching and sensing overhead, is computed as
R = {q p (1 − p d ) + (1 − q p )(1 − p f )}R c C,(13)
where R_c = \sum_{i=1}^{s_m} i\,\pi_i and π_i is the solution of the steady state Markov chain given by [15, Eq. (13)]. Since the macroscopic model assumes no PU activity in each time slot, SU connection buffering and switching are not needed. Note that, contrary to the incorrect assumptions of [15, Eq. (12)], [34, Eq. (7) and (9)], we compute R in (11) and (13) taking all the channels into account, irrespective of the type of OSA MAC. This is because the models of [15], [34] considered only data channels for the throughput investigation of DCC in the final calculation stage, assuming that no data traffic is transmitted on the control channel. However, the utilization must be computed over all channels, irrespective of whether one channel carries only control data or not.
C. Derivation of State Transition Probabilities for the Microscopic Model
We denote the state transition probability as
p xyz|klm = P r(X t = x, Y t = y, Z t = z|X t−1 = k, Y t−1 = l, Z t−1 = m).(14)
Note that changes in X t and Z t depend on the detection of the PU. In addition, changes in Z t depend on
OSA traffic characteristics such as the packet generation probability p and the average packet length 1/q.
Also, note that the steady state probability vector π, containing all possible steady state probabilities π_xyz, is derived by solving π = πP, where the entries of the right stochastic matrix P are defined in (14), together with the normalization \sum_{x,y,z} \pi_{xyz} = 1.
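For completeness, the sketch below shows one standard way to solve π = πP numerically and to evaluate (11) from the resulting distribution; the transition matrix P and the state ordering are inputs produced by the protocol-specific derivations that follow, and the 2-state matrix used here is only a toy example.

```python
# Sketch: stationary distribution of a right stochastic matrix P, then Eq. (11).
import numpy as np

def stationary(P):
    """Solve pi = pi P with sum(pi) = 1 via a least-squares formulation."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def throughput(P, states, C):
    """Eq. (11): R = C * sum over states of x * pi_xyz (states ordered as P's rows)."""
    pi = stationary(P)
    return C * sum(x * pi[i] for i, (x, *_rest) in enumerate(states))

P_toy = np.array([[0.9, 0.1], [0.5, 0.5]])     # toy chain, not an OSA chain
print(stationary(P_toy))                       # ~ [0.833, 0.167]
```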
As a parameter modeling the PU state, p_c denotes the probability that the OSA network collectively detects a PU channel as occupied, i.e.
p c = q p p d + (1 − q p )p f .(15)
We introduce two supporting functions. First, we denote by T^{(j)}_k the probability that j out of k ongoing SU connections terminate in the current time slot, i.e.
T^{(j)}_k = \binom{k}{j} q^{j} (1-q)^{k-j} for k ≥ j ≥ 0, and T^{(j)}_k = 0 otherwise.   (16)
Note that k in T^{(j)}_k corresponds to the number of SU connections in the previous time slot. Second, we denote by S^{(j)}_m the probability that j new SU connections are arranged on the control channel given m existing connections, modified from the arrangement probabilities of [34] (cf. Eq. (5) and (8) therein) by considering PU detection on the control channel. If a PU is detected on a control channel, an SU connection cannot be generated because there is no chance to acquire a data channel. We then have [15, Eq. (17)]
S^{(j)}_m =
  S̄^{(1)}_m,  j = 1 (DCC),
  S̄^{(1)}_m \frac{N - 2m - 1}{N - 1} \frac{M_D - m}{M},  j = 1 (HCC),
  1 − S^{(1)}_m,  j = 0,
  0,  otherwise,   (17)
where
S̄^{(1)}_m =
  Ŝ^{(1)}_m,  PU-free control channel (DCC only),
  (1 − p_c) Ŝ^{(1)}_m,  otherwise,   (18)
and Ŝ^{(1)}_m is the probability of a successful new SU connection arrangement on the control channel given m existing connections. Note also that a SU that has a connection but pauses data transmission due to the PU presence is assumed not to try to make another connection. We can now derive the transition probabilities individually for all four different OSA MAC protocols.
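A minimal sketch of the supporting probabilities (16)-(18) is given below; Ŝ^{(1)}_m is treated as an externally supplied input (its closed form comes from the baseline multichannel MAC analysis and is not reproduced here), so all numeric values in the example are assumptions.

```python
# Sketch of the supporting probabilities (16)-(18).
from math import comb

def T(j, k, q):
    """Eq. (16): probability that j of k ongoing SU connections terminate."""
    return comb(k, j) * q ** j * (1 - q) ** (k - j) if 0 <= j <= k else 0.0

def S_bar(S_hat, p_c, pu_free_control):
    """Eq. (18): arrangement probability as seen on the control channel."""
    return S_hat if pu_free_control else (1 - p_c) * S_hat

def S(j, m, S_hat, p_c, N, M, M_D, protocol="DCC", pu_free_control=True):
    """Eq. (17): probability of j new SU connection arrangements given m connections."""
    s1 = S_bar(S_hat, p_c, pu_free_control)
    if protocol == "HCC":
        s1 *= (N - 2 * m - 1) / (N - 1) * (M_D - m) / M
    if j == 1:
        return s1
    if j == 0:
        return 1.0 - s1
    return 0.0

print(T(1, 3, q=0.2), S(1, 2, S_hat=0.3, p_c=0.19, N=12, M=3, M_D=2))
```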
1) Case B_0S_0: Recall that for non-buffering OSA MAC protocols Z_t = X_t, so the transition probabilities reduce to p_{xy|kl}, which we define below in (20). First, consider the case x < k + 1. When a SU data connection is terminated, there can be two possible reasons: (i) a SU completes its transmission, or (ii) a PU is detected on a channel that was assigned to a SU for data transmission before sensing. The former was analyzed in [34, Sec. 3]. To model the latter, we introduce a variable i denoting the number of channels that were reserved for SU data transmission before sensing but cannot be utilized due to PU detection. In addition, we need to discuss the edge state, which considers two cases: (i) no more channels are available, being utilized either by SUs or PUs, and (ii) all possible SU connections are established, which we denote as the "full connection state". For the transition from the full connection state to the edge state, we have to consider the case in which one new connection is generated while no existing connection is terminated, which means that the trial for the new connection by the free SU fails because all possible connections already exist.
Writing all conditions compactly, denote the indicator for the edge state
1_{x,y} = 1 if x + y = M_D or x = s_m, and 1_{x,y} = 0 otherwise,   (19)
and define P^{(i)}_{x,y} as the total PU arrival probability for the non-buffering case, i.e. the probability that y channels are occupied by the PU, i of which were assigned to SU connections before sensing (see Table I). The state transition probability is then
p_{xy|kl} =
  0,  x > k + 1,
  T^{(0)}_k S^{(1)}_k P^{(0)}_{x,y},  x = k + 1,
  \sum_{i=0}^{i_m} \left[ T^{(k-x-i)}_k S^{(0)}_k + T^{(k-x-i+1)}_k S^{(1)}_k \right] P^{(i)}_{x,y},  x < k + 1, and (k < s_m or 1_{x,y} = 0),
  \sum_{i=0}^{i_m} \left[ T^{(k-x-i)}_k S^{(0)}_k + T^{(k-x-i+1)}_k S^{(1)}_k \right] P^{(i)}_{x,y} + T^{(0)}_k S^{(1)}_k P^{(0)}_{0,y},  x < k + 1, k = s_m, 1_{x,y} = 1,   (20)
where i_m = min(s_m − x, y).
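To make the structure of (20) explicit, the sketch below assembles p_{xy|kl} from generic callables for T^{(j)}_k, S^{(j)}_k and P^{(i)}_{x,y}; the signatures, the placeholder probabilities and the uniform PU arrival model used in the example are illustrative assumptions only.

```python
# Sketch of the B0S0 transition probability (20); T_, S_, P_ are callables.
from math import comb

def edge(x, y, M_D, s_m):
    """Indicator (19)."""
    return 1 if (x + y == M_D or x == s_m) else 0

def p_b0s0(x, y, k, M_D, s_m, T_, S_, P_):
    if x > k + 1:
        return 0.0
    if x == k + 1:
        return T_(0, k) * S_(1, k) * P_(0, x, y)
    i_m = min(s_m - x, y)
    total = sum((T_(k - x - i, k) * S_(0, k) + T_(k - x - i + 1, k) * S_(1, k)) * P_(i, x, y)
                for i in range(i_m + 1))
    if k == s_m and edge(x, y, M_D, s_m) == 1:
        total += T_(0, k) * S_(1, k) * P_(0, 0, y)
    return total

# Toy placeholders, just to exercise the function (not values from the paper):
q, M_D, s_m = 0.2, 3, 3
T_ = lambda j, k: comb(k, j) * q ** j * (1 - q) ** (k - j) if 0 <= j <= k else 0.0
S_ = lambda j, k: 0.3 if j == 1 else 0.7
P_ = lambda i, x, y: 1.0 / (M_D + 1)
print(p_b0s0(x=1, y=1, k=2, M_D=M_D, s_m=s_m, T_=T_, S_=S_, P_=P_))
```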
2) Case B_0S_1: Although in the non-switching case both DCC and HCC can be considered, only DCC is able to perform switching without any additional control data exchange, which we prove formally below.
Before going into the details of the derivation, note that for the class of OSA MAC protocols with a dedicated control channel every node can follow the connection arrangement of the entire network. In contrast, for the class of protocols with a hopping control channel [34] it is impossible for a single node to learn the whole network connection arrangement, since each sender-receiver pair cannot listen to the others while following its own hopping sequence. We now present the following proof.
Theorem 1: Channel switching in DCC can be performed without any additional control message exchange.
Proof: We prove this by showing a possible distributed channel switching process. Following earlier observation, in DCC each node can trace the connection arrangement of others, i.e. which channel has been reserved by a sender receiver pair. To distribute the switching events equally among SUs each SU computes the priority level as
Π i,t = Π i,t−1 + 1 p ,(21)
where
1 p = 1, preemption by PU, 0, otherwise,(22)
and Π_i,t is the priority level of SU i at time t, with the initial priority Π_i,0 set to the MAC address of the SU. At every preemption event, the connections to be switched are mapped to the currently free channels, in order of decreasing priority, as
{I_{a,t} → U_{1,t}, I_{b,t} → U_{2,t}, · · · , I_{c,t} → U_{|U|,t}},   (23)
where |I| = |U| = M_D − X_t − Y_t, → is the mapping operator denoting the process of switching active SU connection i to free channel j, I_{i,t} denotes the index of a communicating SU (transmitter) at time t, with Π_{a,t} > Π_{b,t} > · · · > Π_{c,t}, and U_{j,t} denotes the free channel with index j at time t.
Note that existing connections that have not been mapped to a channel are considered blocked. Also note that under the algorithm given in Theorem 1 connections are preempted by the PU randomly with equal probability.
Since new SU connections are also assumed to use new channels randomly with equal probability, each SU connection is blocked with uniform probability.
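The following sketch illustrates one possible reading of the distributed rule used in the proof of Theorem 1: priorities are updated per (21)-(22) and the preempted connections are mapped to free channels in decreasing priority order, as in (23); the data structures and example values are ours.

```python
# Sketch of the distributed switching rule of Theorem 1.
def update_priority(priority, preempted):
    """Eqs. (21)-(22): increment the counter of every preempted SU."""
    return {su: p + (1 if su in preempted else 0) for su, p in priority.items()}

def switch(preempted, free_channels, priority):
    """Eq. (23): map preempted connections to free channels by decreasing priority."""
    ranked = sorted(preempted, key=lambda su: priority[su], reverse=True)
    mapping = dict(zip(ranked, free_channels))
    blocked = ranked[len(free_channels):]          # connections left without a channel
    return mapping, blocked

priority = {"a": 2, "b": 0, "c": 1}                # initialised e.g. from MAC addresses
priority = update_priority(priority, preempted={"a", "c"})
print(switch(preempted=["a", "b", "c"], free_channels=[5, 7], priority=priority))
```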
To enable SU connection switching in HCC one way is to augment it with a separate radio front end which would follow the hopping sequences and control data exchange of the OSA network. Obviously this increases the cost of hardware and contradicts the idea of HCC, where all channels should be used for data communication. Therefore while evaluating OSA MAC protocols in Section V-B, we will not consider SU connection switching for HCC.
We now define the state transition probability p_{xy|kl} for the considered OSA MAC protocol. Because x > k + 1 is infeasible, the state transition probability for x > k + 1 equals zero. For x = k + 1, the y PUs can appear on any of the M_D channels, because even though a PU is detected the SUs can still transmit data by switching to the idle channels, so the number of possible PU appearance patterns is \binom{M_D}{y}. Note that the number of possible PU appearance patterns in the case B_0S_1 is always \binom{M_D}{y}, even for the edge state, because the data channel can be changed by switching to a vacant channel after the PU detection. Because it is impossible to create more than one new connection at a time, the OSA connection creation probability for x = k + 1 is the same as in (20), i.e. T^{(0)}_k S^{(1)}_k. We then have
p_{xy|kl} =
  0,  x > k + 1,
  T^{(0)}_k S^{(1)}_k P^{(0)}_{0,y},  x = k + 1,
  \left[ T^{(k-x)}_k S^{(0)}_k + T^{(k-x+1)}_k S^{(1)}_k \right] P^{(0)}_{0,y},  x < k + 1, 1_{x,y} = 0,
  \sum_{i=0}^{i_m} \left[ T^{(k-x-i)}_k S^{(0)}_k + T^{(k-x-i+1)}_k S^{(1)}_k \right] P^{(0)}_{0,y},  x < k + 1, k < s_m, 1_{x,y} = 1,
  \sum_{i=0}^{i_m} \left[ T^{(k-x-i)}_k S^{(0)}_k + T^{(k-x-i+1)}_k S^{(1)}_k \right] P^{(0)}_{0,y} + T^{(0)}_k S^{(1)}_k P^{(0)}_{0,y},  x < k + 1, k = s_m, 1_{x,y} = 1.   (24)
3) Case B_1S_0: Before we discuss this case we present the following observation, which affects the design of the simulation models and the derivation of p_{xyz|klm} for SU connection buffering MAC protocols.
Observation 2: For all SU connection buffering OSA MAC protocols the same average link level throughput results from creating a brand new connection or resuming a previously preempted and buffered connection on the arrival of PU on a channel.
Proof: Due to the memoryless property of the geometric distribution
Pr(1/q_i > 1/q_{t_1} + 1/q_{t_2} | 1/q_i > 1/q_{t_1}) = Pr(1/q_i > 1/q_{t_2}),   (25)
where 1/q i is the duration of connection i, 1/q t1 is the connection length until time t 1 when it has been preempted by PU, and 1/q t2 is the remaining length of the connection after SU resumes connection at time t 2 . Since either a newly generated SU connection after resumption, or the remaining part of a preempted connection needs a new connection arrangement on the control channel, the number of slots occupied by each connection type is the same.
Having Observation 2 we can derive transition probabilities. Because packet generation is affected by the number of connections, we use Z t to classify conditions to derive the state transition probabilities.
Due to the assumption of a maximum number of one connection generation in one time slot, the state transition probability of the case of z > m + 1 is zero.
p_{xyz|klm} =
  0,  z > m + 1,
  T^{(0)}_k S^{(1)}_m R^{(z)}_{x,y},  z = m + 1,
  \left[ T^{(m-z)}_k S^{(0)}_m + T^{(m-z+1)}_k S^{(1)}_m \right] R^{(z)}_{x,y},  z < m + 1, and (m < s_m or z < s_m),
  \left[ T^{(0)}_k S^{(0)}_m + T^{(1)}_k S^{(1)}_m + T^{(0)}_k S^{(1)}_m \right] R^{(z)}_{x,y},  z = m = s_m,   (26)
where R^{(z)}_{x,y} denotes the total PU arrival probability for the buffering case (see Table I).
Note that this OSA MAC has been previously analyzed in [15]. As has been pointed out, the model proposed there did not work well for the full range of parameters. This is due to the following. A Markov model was derived for {X_t, Y_t} (using the unmodified transition probabilities of [34, Eq. 6], originally used to calculate the average throughput of networks based on non-OSA multichannel MAC protocols). With this limitation, the termination probability in [15, Eq. (14)], analogous to (16), included an aggregated stream of PU and SU traffic, where the PU traffic q_p was later subtracted from the steady state channel utilization in [15, Eq. (10)], analogous to (11). The approximation of [15], although Markovian, worked reasonably well only for moderate values of the PU activity q_p.
4) Case B_1S_1: Combining the derivations for B_0S_1 and B_1S_0, the state transition probability becomes
p_{xyz|klm} =
  0,  z > m + 1, or (z = x, x + y < M_D), or (m = k, k + l < M_D),
  T^{(0)}_k S^{(1)}_m R^{(0)}_{0,y},  z = m + 1,
  \left[ T^{(m-z)}_k S^{(0)}_m + T^{(m-z+1)}_k S^{(1)}_m \right] R^{(0)}_{0,y},  z < m + 1, and (m < s_m or z < s_m),
  \left[ T^{(0)}_k S^{(0)}_m + T^{(1)}_k S^{(1)}_m + T^{(0)}_k S^{(1)}_m \right] R^{(0)}_{0,y},  z = m = s_m.   (27)
5) Impact of Channel Error on the Throughput Calculations: All previous analyses were carried out under the assumption of an error-free channel. In this section we briefly discuss the impact of channel errors on the throughput calculations.
Channel error impacts the throughput in two ways. First, an error affects the throughput when the SU involved in a connection setup fails to receive a control message from the transmitter; as a result no connection is established. Second, an error affects the throughput when a SU not associated with the current connection setup fails to overhear the control message exchange. For HCC, the control channel is selected as one of the data channels by a hopping method. Thus, if we assume errors on the control channel, it is reasonable to consider errors on the data channel as well.
For the control channel, if an error occurs, a connection fails to be established. This is modeled by multiplying Ŝ_m by 1 − p_e, where p_e is the probability of error in the current time slot. For the data channel, different error handling strategies can be considered. We focus on the two following situations: i) case E_1, denoting packets punctured by unrecovered errors, and ii) case E_2, denoting transmission termination on error.
a) Case E 1 : It can be assumed that when an error occurs on a time slot, the SU simply discards that time slot and resumes transmitting the remaining packet fragment from the next correct time slot. This is modeled by replacing the capacity C with C(1 − p e ).
b) Case E_2: It can also be assumed that the connection terminates when an error occurs. Thus the probability that the packet finishes transmitting, q, should be replaced by q + (1 − q)p_e. In addition, if the control channel hops to a channel which is being utilized for data transmission and an error occurs, a new connection cannot be established. This is modeled by multiplying Ŝ_m by (1 − p_e)^2.
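The error-model adjustments above can be summarized in a few lines of code; the sketch below is one possible reading (the (1 − p_e)^2 factor applies to the HCC case of E_2), with example numbers chosen arbitrarily.

```python
# Sketch of the channel-error adjustments of Section IV, cases E1 and E2.
def apply_error_model(C, q, S_hat, p_e, model):
    if model == "E1":                   # punctured slots: capacity scales down
        return C * (1 - p_e), q, S_hat * (1 - p_e)
    if model == "E2":                   # termination on error: q grows, S_hat shrinks
        return C, q + (1 - q) * p_e, S_hat * (1 - p_e) ** 2
    return C, q, S_hat                  # E0: error-free baseline

print(apply_error_model(C=1e6, q=0.2, S_hat=0.3, p_e=0.01, model="E2"))
```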
V. NUMERICAL RESULTS
We now present numerical results for our model. First, we present results independently for spectrum sensing and OSA MAC performance, in Section V-A and Section V-B, respectively, for the microscopic case. Then in Section V-C we present the results of the joint optimization of these two layers in the microscopic and macroscopic context. Moreover, due to the vast number of parameter combinations to consider, we have decided to follow the convention of [15], [34] and focus on two general network setups (unless stated otherwise): a small scale and a large scale network, whose parameters are given in the respective subsections. In this section we will also compare the analytical model of the sensing layer and OSA MAC protocols to simulation results. The simulations were developed in Matlab and reflect exactly the sensing models and MAC protocols presented in this paper. Simulation results for each system were obtained using the method of batch means with a 90% confidence interval. To evaluate the sensing protocols each batch contained 100 events and the whole simulation run was divided into 10 batches with no warm up phase.
When simulating the OSA MAC protocols, each batch contained 1000 events while the whole simulation was divided into 100 batches with the warm up period equal of 100 events.
A. Spectrum Sensing Architecture Performance
For all possible combinations of sensing architectures we compute the probability of false alarm for a wide range of t_q. For the two networks considered we select a common set of parameters, with t_t = t_d; the results are presented in Fig. 2. The advantage of TTDMA and SSMA can be shown more clearly if we compare the results for different p_d = p_d,min requirements. We can observe that a high detection requirement such as p_d = 0.99 makes the performance worse, as generally known. However, if TTDMA or SSMA is applied, the performance for p_d = 0.99 can be higher than that of TDMA for p_d = 0.9. For example, in the range t_q < 50 µs in Fig. 2(a), SSMA for p_d = 0.99 outperforms TDMA for p_d = 0.9. Moreover, in Fig. 2(b), for t_q > 550 µs, SSMA and TTDMA for p_d = 0.99 outperform TDMA for p_d = 0.9.
It is important to note that κTTDMA performs worse than the rest of the protocols. This is due to the excessive delay caused by the instant acknowledgment of each reporting result by the cluster head node. Note that κTTDMA is a lower bound for the operation of TTDMA. Also note that if TDMA were equipped with an acknowledgment function, as in κTTDMA, its performance would be degraded in the same way as TTDMA's. Since we analyze a static network with pre-set parameter values, i.e. κ does not change over time, in the following sections we proceed with unmodified TTDMA only.
2) Impact of Channel Errors during Reporting on PU Detection Performance:
The results are presented in Fig. 3. For the small and large scale networks, with the same parameters as used in Section V-A1, we have observed the probability of false alarm, keeping the detection probability p_d constant, for varying quiet time t_q. First, when comparing Fig. 2 (no channel error) and Fig. 3 (channel error), the impact of error is clearly visible, i.e. p_f increases for every protocol. However, the relation between the individual protocols is the same, since error affects all protocols equally. Second, the effect of error on the small scale network is smaller than for the large scale network, compare Fig. 3(a) and Fig. 3(b), since the probability that some SU will send a wrong report is larger for a network with a large number of nodes. Lastly, for small values of κ the probability of false alarm stabilizes and never reaches zero. However, large values of κ significantly reduce the effect of channel errors. This is because with high κ the probability of making an error decreases rapidly. With 20% of the nodes participating in the cooperative agreement on the PU state, i.e. κ = 2 for the small network and κ = 8 for the large scale network, the effect of error is reduced almost to zero.
3) Impact of Cooperation Level on PU Detection Performance:
The results are presented in Fig. 4.
We have selected TTDMA as the protocol for further investigation and set p_d = p_d,min = 0.99. We observe that for the small scale network, see Fig. 4(a), the performance for κ = 2 is the best, while for the large scale network, see Fig. 4(b), the best performance can be achieved when κ = 8 or 16 if p_f < 0.1.
Based on this observation, we conclude that for given detection requirements, a high detection rate of the PU is obtained when κ is well below the total number of SUs in the network. While for the considered setup the optimal κ ≈ 20% of N, this value might be different for other network configurations.
4) Impact of the Number of Sensing Groups: The results are presented in Fig. 5. An interesting observation is that the number of groups achieving the best performance becomes larger as the number of users N increases. For the small scale network, see Fig. 5(a), the best performance is observed for n_g = 2 or n_g = 3, while for the large scale network, Fig. 5(b), n_g = 6 is the best. This is because for the large scale network the reporting overhead caused by the large number of users offsets the performance improvement achieved by a larger cooperation scale.
5) Impact of κ on PU Detection: The results are presented in Fig. 6. The performance of TDMA and SSMA is independent of κ, which distinguishes them from TTDMA, whose operation strictly depends on the value of κ considered. And again, when comparing Fig. 6(c) and Fig. 6(d), the optimal value of t_q for TTDMA is in the same range as that of p_f, which confirms the optimality of the design.
B. OSA MAC Protocol Performance
To evaluate the effectiveness of all proposed and analyzed MAC protocols we have fixed C = 1 Mbps and p = e^{-1}/N, and decoupled the MAC layer from the sensing process by fixing the quality of detection (the joint optimization of both layers is deferred to Section V-C), assuming that the spectrum sensing layer is able to obtain such quality of detection. Again, as in Section V-A, results are presented separately for the error-free and error channel.
1) Impact of PU Activity Level on OSA MAC Protocols:
The results are presented in Fig. 7. We observe that PU activity degrades DCC and HCC for B 0 S 0 , irrespective of other network parameters.
Their performances are comparable in this case. DCC and HCC perform best with B_1S_0. The results
show that the non-buffering OSA MAC protocols are very sensitive to q p where the greatest throughput decrease is visible at low ranges of PU activity. On the other hand, with connection buffering we observe a linear relation between q p and R t .
2) Impact of SU Packet Size on OSA MAC Protocols:
The results are presented in Fig. 8. Obviously, for a larger SU packet size the OSA network is able to grab more capacity. However, when packets become excessively large the throughput saturates. It remains the case that the protocols with no buffering and no channel switching obtain the lowest throughput, no matter which network setup is chosen. Interestingly, although intuitively B_1S_1 should obtain the highest channel utilization, it does not perform better than B_1S_0 due to the large switching time. With t_p approaching zero, DCC B_1S_1 would perform best, irrespective of the network setup, as we discuss below.
3) Impact of Switching Time on OSA MAC Protocols:
The results are presented in Fig. 9. In this experiment, we verify that for small t_p DCC B_1S_1 outperforms DCC B_1S_0. However, there is no huge difference between their performances even at t_p = 10 µs. This is because connection switching events do not occur often enough to make a significant difference. Comparing the channel switching and buffering options, we conclude that much more channel utilization is obtained by connection buffering than by channel switching alone when N/M > 1.
4) Relation Between the Number of SUs and the Number of PU Channels:
Note that for all cases described in this section the simulation results agree with our analytical model.
Comparing our model with the analytical results of [15] for DCC B_1S_0, see Fig. 10(b), we observe that the prior analysis overestimated the performance, resulting in more than 2 Mbps difference at N/M = 1.
Interestingly, if we consider the same set of parameters as in Section V-B1, then the model of [15] almost agrees with the model of our paper. Since the set of parameters chosen in Section V-B1 is similar to that of [15], we remark that the observations on the performance of this OSA MAC in [15] were reflecting reality. (Caption of Fig. 11: parameters as in Fig. 7, except for q_p = 0.1; E_1 and E_2 denote the error models described in Section IV-C5, and E_0 denotes the system with p_e = 0.)
5) Impact of Channel Errors on the OSA Multichannel MAC Performance:
To observe the impact of channel errors on the MAC protocol throughput we have set up the following experiment. For HCC and both network sizes, small and large, we have observed the average throughput for different SU packet lengths and channel error probabilities. The results are presented in Fig. 11. For comparison in Fig. 11 we present the system with no errors, denoted as E 0 . We kept values of p e realistic, not exceeding 1%.
Obviously the system with punctured errors, E_1, obtains a much higher throughput than system E_2, since more data can potentially be sent after one control packet exchange. Again, buffering allows to obtain a higher throughput in comparison to the non-buffered case, even with data channel errors present. Note that system E_2 is more prone to errors than E_1, observe Fig. 11(a) and Fig. 11(b).
We have additionally verified the robustness of the model against the assumed PU activity distribution by simulating "on" and "off" times drawn from i) a discretized uniform distribution (denoted symbolically as U), ii) a discretized log-normal distribution (denoted symbolically as L), and for comparison iii) the geometric distribution (denoted symbolically as E) used in the analysis. We have tested the protocol performance for different combinations of "on" and "off" times of PU activity. These were EE, LE, EL, LL (all possible combinations of "on" and "off" times obtained in [60, Tab. 3 and Tab. 4]) and additionally EU, UU, where the first and second letters denote the selected distribution for the "on" and "off" times, respectively. Due to the complexity of the analysis we show only simulation results, using the same simulation method of batch means, with the same parameters as described at the beginning of Section V.
The parameters of each distribution were selected such that the mean value was equal to 1/p_c for the "on" time and 1/(1 − p_c) for the "off" time. The discretized uniform distribution has a non-continuous set of mean values, (a_b + a_n)/2, where a_b, a_n ∈ ℕ denote the lower and upper limits of the distribution, respectively, which precludes the existence of every mean "on" or "off" value for p_c ∈ (0, 1). To solve that problem, a continuous uniform distribution with the required mean was used and its samples were rounded up to the nearest integer. This resulted in a slightly lower last peak of the probability mass function at a_n when the required mean is not an integer. For the log-normal distribution, the underlying parameters were chosen by moment matching, where c_l = 1/p_c and v_l = (1 − p_c)/p_c^2 are the mean and variance of the resulting discretized log-normal distribution. Note that the variance of the used discretized log-normal distribution is equal to the variance of the geometric distribution with the same mean value. The variance of the resulting discretized continuous uniform distribution could not be made equal to the variance of the geometric distribution, for the reasons described earlier.
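As an illustration of this construction, the sketch below draws moment-matched discretized log-normal slot counts with mean c_l = 1/p_c and variance v_l = (1 − p_c)/p_c^2; the moment-matching formulas for (µ, σ) are standard, and the rounding rule is an assumption of the sketch.

```python
# Sketch: moment-matched discretized log-normal "on"/"off" durations (in slots).
import math, random

def lognormal_slots(p_c, n_samples, seed=0):
    c_l = 1.0 / p_c                          # target mean
    v_l = (1.0 - p_c) / p_c ** 2             # target variance (same as geometric)
    sigma2 = math.log(1.0 + v_l / c_l ** 2)  # standard log-normal moment matching
    mu = math.log(c_l) - sigma2 / 2.0
    rng = random.Random(seed)
    return [max(1, round(rng.lognormvariate(mu, math.sqrt(sigma2))))
            for _ in range(n_samples)]

samples = lognormal_slots(p_c=0.1, n_samples=100000)
print(sum(samples) / len(samples))           # close to 1/p_c = 10 (up to rounding)
```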
The results are presented in Fig. 12. We focus on two network types, as indicated earlier: (i) large scale and (ii) small scale, with the assumed parameters as in Fig. 7. We select four values of q_p for the clarity of the presentation. The most important observation is that, irrespective of the considered distribution, DCC obtains approximately the same throughput and the same relation between the different protocol options exists as was shown analytically in Fig. 7. If one wants to select the distribution combinations with the highest throughput, these would be LE and LL, with the obtained throughput being almost equal to the one obtained via analysis for the geometric distribution. The distributions with the lowest throughput are UU and EU, due to the difference in the second moment of the "on" time with respect to the other distributions. The difference in throughput between UU, EU and the remaining distributions is more visible for higher PU activity levels q_p.
C. Performance of Joint Spectrum Sensing and OSA MAC Protocols
Having results for spectrum sensing protocol and OSA MAC we join these two layers to form a complete OSA network stack. By means of exhaustive search we solve the optimization problem of (1).
We will also investigate the set of parameters that maximize R t for small and large scale network.
We divide our analysis in macroscopic and microscopic case observing R t for small scale network with M = 3, N = 12, d = 5 kB, and large scale network with M = 12, N = 40, d = 20 kB. For each case we select a set of spectrum sensing and OSA MAC protocols that are possible and, as we believe, most important to the research community. For a fixed set of parameters C = 1 Mbps, b = 1 MHz, p = e −1 /N , t d,max = 1 ms (microscopic case), t d,max = 2 s (macroscopic case), α = 1/M , t t = 1 ms, p d,min = 0.99, γ = −5 dB, q p = 0.1, and t p = 100 µs we leave κ, t e , n g , and p f as optimization variables.
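Conceptually, the exhaustive search used to solve (1) can be sketched as the grid sweep below; evaluate() stands in for the full sensing-plus-MAC model of Sections III and IV (here replaced by a dummy function), the p_d constraint is treated as a feasibility check, and all grids are example values.

```python
# Sketch of the exhaustive search over the optimization variables of (1).
import itertools

def optimise(evaluate, kappas, t_es, n_gs, p_fs, p_d_min, t_d_max):
    best = (float("-inf"), None)
    for kappa, t_e, n_g, p_f in itertools.product(kappas, t_es, n_gs, p_fs):
        R_t, p_d, t_d = evaluate(kappa, t_e, n_g, p_f)
        if p_d >= p_d_min and t_d <= t_d_max and R_t > best[0]:
            best = (R_t, (kappa, t_e, n_g, p_f))
    return best

# Dummy model only showing the calling convention (not the paper's model):
dummy = lambda kappa, t_e, n_g, p_f: (p_f * (1 - p_f) / n_g, 0.99, 900e-6)
print(optimise(dummy, kappas=[1, 2, 3], t_es=[10e-6, 20e-6], n_gs=[1, 2],
               p_fs=[0.01, 0.1], p_d_min=0.99, t_d_max=1e-3))
```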
1) Microscopic Model:
Here we focus only on DCC protocol, since collaborative spectrum sensing is only possible via a PU free control channel, which is inefficient to accomplish with HCC. Also, for sensing measurement dissemination we do not consider SSMA, which would be most difficult to implement in practice. The results are presented in Fig. 13. DCC B 1 S 0 with TTDMA is the best option, both for small scale and large scale network, see Fig. 13(a) and Fig. 13(b), respectively. Because of relatively high switching time B 1 S 1 performs slightly worse than B 1 S 0 , for small and large scale network. DCC B 0 S 0 with TDMA is the worst protocol combination, which confirms earlier results from Section V-A and Section V-B. Irrespective of network size it is always better to buffer SU connections preempted by PU than to look for vacant channels, compare again B 1 S 0 and B 0 S 1 in Fig. 13(a) and Fig. 13(b). The difference between B 0 S 0 and B 0 S 1 is mostly visible for a large network scenario, see Fig. 13(b), since with a large number of channels there are more possibilities to look for empty channels.
For all protocol combinations and both network sizes κ = 2 maximizes throughput performance, see Fig. 13(a). Interestingly, network size dictates the size of a sensing group. For small scale network, n g = 1 is the optimal value, see Fig. 13(a), but for a large network R t is maximized when n g = 3 (for B 0 S 0 ) and n g = 4 (for the rest). We can conclude that with a small network it is better to involve all nodes in sensing, while for larger networks it is better to divide them into groups, which agrees with the observation from Section V-A4. Moreover, we observe that the performance difference between TTDMA and TDMA is not as big as in Fig. 2 when parameters are optimized.
The most interesting result is observed for p_f. With the increase of protocol complexity the false alarm probability increases as well. Also, with an increase of p_f the quiet time decreases. Because buffering and switching improve the performance, there is more margin in the design of the spectrum sensing.
2) Macroscopic Model: DCC obtains a higher throughput than HCC for the small scale network, and vice versa, compare Fig. 14(a) and Fig. 14(b), respectively. This confirms the observations of [15, Fig. 3], [34, Fig. 3]. Just like in Fig. 13(a), for the small scale network κ = 2 and n_g = 2 maximize R_t. For the large scale network, however, κ = 3 and n_g = 3 are optimal for TDMA, and κ = 4 and n_g = 4 for TTDMA.
This means that for large networks it is beneficial to split the network into smaller groups. Again, this confirms our findings from Section V-C1. For both network scenarios, p_f and t_e are relatively the same for all protocols considered.
Note that for the large scale network in the macroscopic model, a SU takes more time to detect a PU than in the microscopic model, because the large t_d,max relaxes the time restriction. The relaxation of the time restriction impacts the large scale network by requiring a greater value of κ to achieve the maximum throughput.
VI. CONCLUSION
We have presented a comprehensive framework enabling assessment of the performance of joint spectrum sensing and MAC protocol operation for OSA networks. In the model we have proposed we focused on the link layer throughput as the fundamental metric to assess performance. We have parameterized spectrum sensing architectures for energy detection based systems with collaborative measurements combining. We have proposed a novel spectrum sensing MAC denoted Truncated Time Division Multiple Access. We have also categorized multichannel MAC protocols for OSA networks based on their ability to buffer and switch existing SU connections on the arrival of a PU. Our analysis is supported by simulations which prove the accuracy of the obtained expressions.
Some of the design guidelines that need to be noted are as follows. For spectrum sensing, introducing TTDMA gives an improvement in the obtained performance compared to TDMA. Large networks, i.e. those having many channels and users, benefit from clustering, while for small networks it is better to create a small number of clusters such that the sensing time is optimized. When considering MAC protocol design for OSA, it is clear that more benefit comes from introducing SU connection buffering than channel switching for those SU connections that have been preempted by the PU. Interestingly, although intuition would suggest that MAC protocols combining SU connection buffering and channel switching would outperform all other protocols, due to the switching overhead this combination is usually inferior to protocols that involve only SU connection buffering.
Our future task will be to investigate the delay experienced when using any of the proposed OSA MAC protocols. We plan to develop comprehensive simulation software which will implement features not covered by our model, such as a queue per SU.
| 11,421 |
0910.4704
|
2129293544
|
We present an analytical framework to assess the link layer throughput of multichannel Opportunistic Spectrum Access (OSA) ad hoc networks. Specifically, we focus on analyzing various combinations of collaborative spectrum sensing and Medium Access Control (MAC) protocol abstractions. We decompose collaborative spectrum sensing into layers, parametrize each layer, classify existing solutions, and propose a new protocol called Truncated Time Division Multiple Access (TTDMA) that supports efficient distribution of sensing results in “K out of N” fusion rule. In case of multichannel MAC protocols, we evaluate two main approaches of control channel design with 1) dedicated and 2) hopping channel. We propose to augment these protocols with options of handling secondary user (SU) connections preempted by primary user (PU) by 1) connection buffering until PU departure and 2) connection switching to a vacant PU channel. By comparing and optimizing different design combinations, we show that 1) it is generally better to buffer preempted SU connections than to switch them to PU vacant channels and 2) TTDMA is a promising design option for collaborative spectrum sensing process when K does not change over time.
|
In later works, assumptions on the OSA network model became more realistic. Specifically, Markovian analysis of SU traffic buffering on the event of PU arrival was presented for a SU exponential service time @cite_48 and for a SU phase-type service time @cite_55 . Unfortunately the impact of spectrum sensing detection time overhead on the OSA network performance was not investigated and the connection arrangement process for new SU arrivals, i.e. method to select and access a channel for a new sender-receiver pair, was assumed to be performed by a centralized entity. A different option of the above model has been analyzed in @cite_26 , with only PU channels dedicated to the OSA network and with a mixture of PU and SU exclusive channels. SU connection buffering was not allowed, however, SU connections were able to switch to an empty SU exclusive channel on the event of channel preemption by the PU.
|
{
"abstract": [
"We analyze the performance of a wireless system consisting of a set of secondary users opportunistically sharing bandwidth with a set of primary users over a coverage area. The secondary users employ spectrum sensing to detect channels that are unused by the primary users and hence make use of the idle channels. If an active secondary user detects the presence of a primary user on a given channel, it releases the channel and switches to another idle channel, if one is available. In the event that no channel is available, the call waits in a buffer until either a channel becomes available or a maximum waiting time is reached. Spectrum sensing errors on the part of a secondary user cause false alarm and mis-detection events, which can potentially degrade the quality-of-service experienced by primary users. We derive system performance metrics of interest such as blocking probabilities. Our results suggest that opportunistic spectrum sharing can significantly improve spectrum efficiency and system capacity, even under unreliable spectrum detection. The proposed model and analysis method can be used to evaluate the performance of future opportunistic spectrum sharing systems.",
"We develop a general framework for analyzing the performance of an opportunistic spectrum sharing (OSS) wireless system at the session level with Markovian arrivals and phasetype service times. The OSS system consists of primary or licensed users of the spectrum and secondary users that sense the channel status and opportunistically share the spectrum resources with the primary users in a coverage area. When a secondary user with an active session detects an arrival of a primary session in its current channel, the secondary user leaves the channel quickly and switches to an idle channel, if one is available, to continue the session. Otherwise, the secondary session is preempted and moved to a preemption queue. The OSS system is modeled by a multi-dimensional Markov process. We derive explicit expressions for the related transition rate matrices using matrix-analytic methods. We also obtain expressions for several performance measures of interest, and present both analytic and simulation results in terms of these performance measures. The proposed OSS model encompasses a large class of specific models as special cases, and should be useful for modeling and performance evaluation of future opportunistic spectrum sharing systems.",
"Cognitive radio (CR) is a promising technology for increasing the spectrum capacity for ad hoc networks. Based on CR, the unlicensed users will utilize the unused spectrum of the licensed users in an opportunistic manner. Therefore, the average spectrum usage will be increased. However, the sudden appearance of the licensed users forces the unlicensed user to vacate its operating channel and handoff to another free one. Spectrum handoff is one of the main challenges in cognitive ad hoc networks. In this paper, we aim to reduce the effect of consecutive spectrum handoff for cognitive ad hoc users. To achieve that, the licensed channels will be used as operating channels and the unlicensed channels will be used as backup channels when the primary user appears. Therefore, the number of spectrum handoff will be reduced, since unlicensed bands are primary user free bands. A Markov chain model is presented to evaluate the proposed scheme. Performance metrics such as blocking probability and dropping probabilities are obtained. The results show that the proposed scheme reduces all the aforementioned performance metrics."
],
"cite_N": [
"@cite_48",
"@cite_55",
"@cite_26"
],
"mid": [
"2163965981",
"2171528576",
"117948830"
]
}
|
Performance of Joint Spectrum Sensing and MAC Algorithms for Multichannel Opportunistic Spectrum Access Ad Hoc Networks
|
It is believed that Opportunistic Spectrum Access (OSA) networks will be one of the primary forces in combating spectrum scarcity [2] in the upcoming years [3], [4]. Therefore, OSA networks [5], [6] have become the topic of rigorous investigation by the communications theory community. Specifically, the assessment of spectrum sensing overhead on OSA medium access control (MAC) performance recently gained a significant attention.
A. Research Objective
In the OSA network performance analysis, a description of the relation between the primary (spectrum) user (PU) network and the secondary (spectrum) user (SU) network can be split into two general models: macroscopic and microscopic. In the macroscopic OSA model [7], [8], [9] it is assumed that the time limit to detect a PU and vacate its channel is very long compared to the SU time slot, frame or packet length duration. Such a time limit is assumed to be given by a radio spectrum regulatory organization.
For example, the timing requirements for signal detection of TV transmissions and low power licensed devices operating in TV bands by IEEE 802.22 networks [10] (including transmission termination and channel vacancy time, i.e. a time it takes the SU to stop transmitting from the moment of detecting PU) must be equal to or smaller than 4.1 s [11,Tab. 15.5], while the frame and superframe duration of IEEE 802.22 are equal to 10 ms and 160 ms, respectively [11]. Also, in the macroscopic model it is assumed that the PU channel holding time, i.e. the time in which the PU is seen by the SU as actively transmitting, is much longer than the delay incurred by the detection process performed at the SU. As a result it can be assumed in the analysis that, given high PU detection accuracy (which is a necessity), OSA network performance is determined by the traffic pattern of the SUs. That is, it depends on the total amount of data to be transmitted by the SU network, the duration of individual SU data packets and the number of SU nodes. In other words the PU bandwidth resource utilization by the SU is independent of PU detection efficiency.
In the microscopic OSA model, more popular than its macroscopic counterpart due to analytic challenges, the detection time is short in relation to the shortest transmission unit of the OSA system.
Detection is also performed much more frequently than in the macroscopic model, i.e. for every SU packet [12], [13] or in every time slot [14], [15], [16], [17], [18]. Also, the microscopic model assumes much higher PU activity than the macroscopic model, which justifies frequent detection cycles. Since the detection overhead is much larger than in the macroscopic model, the analysis of the utilization of resources (temporarily unoccupied by the PU) by the OSA network cannot be decoupled from the analysis of the PU signal detection phase. Therefore, while the distinction between the macroscopic and microscopic models is somewhat fluid, it is important to partition the two cases and compare them in a systematic manner. More importantly, the comparison should be based on a detailed OSA multichannel and multiuser ad hoc network model [19, Sec. 7.4], which would not ignore the overhead of both the physical layer (PHY) and MAC layer of different cooperative and distributed spectrum sensing strategies [19, Tab. 7.1] and, in the case of the microscopic model, would account for different channel access procedures and connection management strategies for the SUs upon PU detection, like buffering or switching to a vacant channel. Finally, the comparison should be realized using tractable analytical tools.
C. Our Contribution
In this paper, we present a unified analytical framework to design the spectrum sensing and the OSA data MAC jointly, for the macroscopic and microscopic cases. This design framework provides the (i) means of comparing different spectrum sensing techniques plus MAC architectures for OSA networks and (ii) spectrum sensing parameters such as observation time and detection rate for given design options. As a metric for optimization and comparison, we consider the average link layer OSA network throughput.
Our model will account for the combined effects of the cooperative spectrum sensing and the underlying MAC protocol. For spectrum sensing, we will consider several architectures parametrized by sensing radio bandwidth, the parameters of the sensing PHY, and the parameters of the sensing MAC needed to exchange sensing data between individual OSA nodes. Along with classifying most of the well known sensing MAC protocols, we introduce a novel protocol called Truncated Time Division Multiple Access (TTDMA) that supports efficient exchange of individual sensing decisions in "κ out of N " fusion rule.
For the data MAC we will consider two protocol abstractions, (i) Dedicated Control Channel (DCC) and
(ii) Hopping Control Channel (HCC), as analyzed in [15], [34] with novel extensions. That is, given the designs of [25], [26], [27], [30], we will analyze MAC protocols that (i) allow (or forbid) to buffer existing SU connections on the event of PU arrival, and (ii) allow (or forbid) to switch the SU connections preempted by the PU to the empty channels. Please note that in the case of the analytical model proposed in [15] for the SU connection buffering OSA MAC schemes we present an exact solution. Finally, using our framework, we compute the maximum link layer throughput for most relevant combinations of spectrum sensing and MAC, optimizing parameters of the model jointly, both for the microscopic and macroscopic models.
The rest of the paper is organized as follows. System model and a formal problem description is presented in Section II. Description of spectrum sensing techniques and their analysis is presented in Section III. Analysis of MAC strategies are presented in Section IV. Numerical results for spectrum sensing process, MAC and joint design framework are presented in Section V. Finally the conclusions are presented in Section VI.
II. SYSTEM MODEL AND FORMAL PROBLEM DESCRIPTION
The aim of this work is to analyze link layer throughput accounting for different combinations of MAC, spectrum sensing protocols and regulatory constraints. The model can later be used to optimize the network parameters jointly to maximize the throughput, subject to regulatory constraints. Before formalizing the problem, we need to introduce the system model, distinguishing between the microscopic and macroscopic approaches.
A. System Model 1) Microscopic Model: For two multichannel MAC abstractions considered, i.e. DCC and HCC, we distinguish between the following cases: (i) when SU data transfer interrupted by the PU is being buffered (or not) for further transmission and (ii) when existing SU connection can switch (or not) to a free channel on the event of PU arrival (both for buffering and non-buffering SU connection cases). Finally, we will distinguish two cases for DCC where (i) there is a separate control channel not used by the PU and (ii) when control channel is also used by the PU for communication. All these protocols will be explained in detail in Section IV.
We assume slotted transmission within the SU and PU networks, where PU and SU time slots are equal and synchronized with each other. The assumptions of slotted and synchronous transmission between PU and SU are commonly made in the literature, either while analyzing theoretical aspects of OSA (see [12]) or practical OSA scenarios (see [16, Fig. 2] in the context of secondary utilization of GSM spectrum, or [38] in the context of secondary IEEE 802.16 resource usage). Our model can be generalized to the case where PU slots are offset in time from SU slots; however, this would require additional analysis of optimal channel access policies, see for example [36], [39], [40], which is beyond the scope of this paper. We also note that the synchrony assumption allows one to obtain upper bounds on the throughput when transmitting on a slot-asynchronous interface [41].
The total slot duration is t t µs. It is divided in three parts: (i) the detection part of length t q µs, denoted as quiet time, (ii) the data part of length t u µs, and if communication protocol requires channel switching (iii) switching part of length t p µs. The data part of the SU time slot is long enough to execute one request to send and clear to send exchange [15], [34]. For the PU the entire slot of t t µs is used for data transfer, see Fig. 1(a).
Our model assumes that there are M channels having fixed capacity C Mbps that are randomly and independently occupied by the PU in each slot with probability q p . There are N nodes in the SU network, each one communicating directly with another SU on one of the available PU channels in one hop fashion.
Also, we assume no merging of the channels, i.e. only one channel can be used by a communicating pair of SUs at a time. SUs send packets with geometrically distributed length with an average of 1/q = d/(C t_u) slots for DCC and 1/q = d/(C(t_u + t_p)) slots for HCC [15], [34, Sec. 3.2.3], where d is the average packet size given in bits. The difference between the average packet lengths for DCC and HCC is a result of the switching time overhead of HCC: during channel switching SUs do not transfer any data, even though they occupy the channel. We therefore virtually prolong the data packet by t_p for HCC to keep the comparison fair.
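To make the packet-length model concrete, the sketch below evaluates the geometric parameter q for DCC and HCC from the relations just given; the numerical values of d, t_u and t_p are illustrative assumptions, not parameters prescribed by the model.

```python
# Average SU packet length in slots and the geometric parameter q, following
# 1/q = d/(C*t_u) for DCC and 1/q = d/(C*(t_u + t_p)) for HCC.
# The concrete numbers below are illustrative assumptions only.
C = 1e6            # channel capacity [bit/s]
d = 5 * 8 * 1e3    # average packet size [bit] (5 kB, assumed)
t_u = 900e-6       # data part of the slot [s] (assumed)
t_p = 100e-6       # switching part of the slot [s] (assumed)

mean_slots_dcc = d / (C * t_u)            # 1/q for DCC
mean_slots_hcc = d / (C * (t_u + t_p))    # 1/q for HCC (packet virtually prolonged by t_p)
q_dcc, q_hcc = 1 / mean_slots_dcc, 1 / mean_slots_hcc
print(mean_slots_dcc, q_dcc, mean_slots_hcc, q_hcc)
```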
Every time a node tries to communicate with another node it accesses the control channel and transmits a control packet with probability p to a randomly selected and non-occupied receiver. A connection is successful when only one node transmits a control packet in a particular time slot. The reasons for selecting a variant of S-ALOHA as the contention resolution strategy are manifold. In reality, each real-life OSA multichannel MAC protocol belonging to each of the considered classes, i.e. HCC or DCC, uses its own contention resolution strategy. Implementing each and every approach in our analysis (i) would significantly complicate the analysis and, most importantly, (ii) would jeopardize the fairness of the comparison. Therefore a single protocol was needed for the analytical model. Since S-ALOHA is a widespread and well understood protocol in wireless networks and is a foundation of many other collision resolution strategies, including CSMA/CA, it has been selected for the system model herein.
In each quiet phase every SU node performs PU signal detection based on signal energy observation.
Since we assume that OSA nodes are fully connected in a one hop network, each node observes on average the same signal realization in each time slot [13], [18], [42]. The PU channels observed by the SUs are modeled as Additive White Gaussian Noise channels with Rayleigh fading. Therefore, to increase the PU detectability of the OSA network we consider collaborative detection with hard decision combining based on the "κ out of N" rule, as in [43], [44]. Hence we divide the quiet phase into (i) the sensing phase of length t_s µs and (ii) the reporting phase of length t_r µs. The sensing phase is of the same length for all nodes. For simplicity we do not consider in this study sensing methods that adapt the sensing time to propagation conditions as in [45]. In the sensing phase, nodes perform their local measurements. Then, during the reporting phase, nodes exchange their sensing results and make a decision individually by combining the individual sensing results. We will analyze different PHY and MAC approaches to collaborative spectrum sensing, especially (i) methods to assign sensing frequencies to users, (ii) rules for combining the sensing results, and (iii) multiple access schemes for measurement reporting. In this paper we do not consider sensing strategies applicable to single channel OSA networks [46], two stage spectrum sensing [8], or sensing MAC protocols based on random access [47], due to their excessive delay. We will explain our spectrum sensing approaches in more detail in Section III. Further, we assume an error-prone channel for the sensing layer as well as for the data layer, where the probability of error during transmission is denoted as p_e.
Finally, we consider two regulatory constraints under which the OSA network is allowed to utilize the PU spectrum provided the channel is idle: (i) a maximum detection delay t_d,max, i.e. a time limit within which an SU must detect a PU, and (ii) a minimum detection probability p_d,min, i.e. the probability with which an OSA system has to detect a PU signal with minimum signal to noise ratio γ. Note that in the event of mis-detection and subsequent SU transmission in a channel occupied by the PU, a packet fragment is considered successfully transmitted, since in our model the transmission power of the SU is much higher than the interference from the PU, and the regulatory requirements considered here do not constrain SU transmission power (refer for example to the IEEE 802.22 draft, where Urgent Coexistent Situation packets are transmitted on the same channel as an active PU [10], [11]). The opposite case would be to assume that a packet fragment is considered lost and retransmitted; this approach, however, requires an acknowledgement mechanism for a lost packet fragment, see for example [17], [41, Sec. II], which contradicts the model assumption on the geometric distribution of SU packets. Moreover, maximum transmission power is a metric specific to overlay OSA systems [19, Sec. 2.2.5 and 8.2.1] where typically no spectrum sensing is considered. Also, we do not consider a metric based on a maximum allowable level of collisions between PU and SU. Note that the parameters of the introduced model are summarized in Table I and the abbreviations are summarized in Table II.

2) Macroscopic Model: We assume the same system model as for the microscopic case, except for the following differences. The OSA network performs detection rarely, and the PU is stable for the duration of OSA network operation, i.e. it is either transmitting constantly on a channel or stays idle. In other words, a quiet period spans multiple time slots, see Fig. 1(b). Also, since the PU is considered stable on every channel, we do not consider all types of OSA MAC protocols introduced for the microscopic model.
Instead we use classical DCC and HCC models proposed in [34] with the corrections of [15] accounting for the incomplete transition probability calculations whenever OSA network occupied all PU channels and new connection was established on the control channel.
B. Formal Problem Description
To compute the maximum throughput for different combinations of protocols and models, we define an optimization problem. The objective is the OSA network link layer throughput R t . Therefore, considering the regulatory constraints given above we need to
\text{maximize} \quad R_t = \xi R, \qquad (1)

\text{subject to} \quad p_d = p_{d,\min}, \quad t_d \le t_{d,\max}, \qquad (2)
where t_d is the detection time, i.e. the time needed to complete the whole detection operation as described in Section III-D, R is the steady state link layer throughput without sensing and switching overhead, which will be computed in Section IV, and
\xi = \frac{t_t - t_q - t_p}{t_t}

is the fraction of each slot remaining for SU data transmission after the sensing (quiet time) and switching overhead. Note that the objective in (1) is itself affected by p_f, as will be shown in Section IV. Also note that t_p is removed from the second condition of (2) since the switching time is negligible in comparison to the inter-sensing time.
III. LAYERED MODEL OF SPECTRUM SENSING ANALYSIS
To design the spectrum sensing, we follow the approach of [7] in which the spectrum sensing process is handled jointly by (i) the sensing radio, (ii) the sensing PHY, and (iii) the sensing MAC. Using this layered model we can compare existing approaches to spectrum sensing and choose the best sensing architecture in a systematic way. Since the parameters of the design framework in (1) are determined by the choices of individual layers, we describe and parametrize each layer of the spectrum sensing, later describing cross-layer parameters.
A. Sensing Radio
The sensing radio scans the PU spectrum and passes the spectrum sensing result to the sensing PHY for analysis. The sensing radio bandwidth is given as αMb, where α is the ratio of the bandwidth of the sensing radio to the total PU bandwidth and b MHz is the bandwidth of each PU channel. With α > 1/M a node can sense multiple channels at once; however, the cost of such a wideband sensing radio increases.
B. Sensing PHY
The sensing PHY analyzes the measurements from the sensing radio to determine if a PU is present in a channel. Independent of the sensing algorithm, such as energy detection, matched filter detection or feature detection [48], [49], there exists a common set of parameters for the sensing PHY: (i) the time to observe the channel by one node, t_e µs, (ii) the PU signal to noise ratio detection threshold θ, and (iii) the transmit time of one bit of sensing information, t_a = 1/C µs. We denote the conditional probability of the sensing result as p_{ij}, i, j ∈ {0, 1}, where j = 1 denotes PU presence and j = 0 otherwise, while i = 1 indicates that the detector declares the PU busy and i = 0 otherwise. Observe that p_{10} = 1 − p_{00} and p_{11} = 1 − p_{01}.
As noted in Section II-A, we consider energy detection as the PU detection algorithm since it does not require a priori information of the PU signal. For this detection method in Rayleigh plus Additive
White Gaussian Noise channel p 10 is given as [15, Eq. (1)]
p_{10} = \frac{\Gamma(\epsilon, \theta/2)}{\Gamma(\epsilon)}, \qquad (3)
and p_{11} as [15, Eq. (3)]

p_{11} = e^{-\frac{\theta}{2}} \left[ \sum_{h=0}^{\epsilon-2} \frac{\theta^h}{h!\,2^h} + \left( \frac{1+\gamma}{\gamma} \right)^{\epsilon-1} \left( e^{\frac{\theta\gamma}{2+2\gamma}} - \sum_{h=0}^{\epsilon-2} \frac{(\theta\gamma)^h}{h!\,(2+2\gamma)^h} \right) \right], \qquad (4)
where Γ(·) and Γ(·, ·) are complete and incomplete Gamma functions, respectively, and ǫ = ⌊t e αM b⌋ is a time-bandwidth product. By defining G ǫ (θ) = p 10 and θ = G −1 ǫ (p 10 ), we can derive p 11 as a function of p 10 and t e .
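As a worked illustration of (3)-(4) in the reconstructed form above, the following sketch evaluates the per-node probabilities with SciPy and inverts G_ε numerically to obtain the threshold θ for a target p_10; the chosen ε and γ are example values, not parameters taken from this paper.

```python
# Sketch of the energy-detection probabilities (3)-(4) as reconstructed above
# (Rayleigh fading plus AWGN), using SciPy's regularized incomplete gamma function.
from math import exp, factorial
from scipy.special import gammaincc, gammainccinv

def p10(theta, eps):
    """Per-node false-alarm probability, Eq. (3)."""
    return gammaincc(eps, theta / 2.0)          # Gamma(eps, theta/2) / Gamma(eps)

def p11(theta, eps, gamma):
    """Per-node detection probability, Eq. (4) (as reconstructed)."""
    s1 = sum((theta / 2.0) ** h / factorial(h) for h in range(eps - 1))
    a = theta * gamma / (2.0 + 2.0 * gamma)
    s2 = sum(a ** h / factorial(h) for h in range(eps - 1))
    return exp(-theta / 2.0) * (s1 + ((1.0 + gamma) / gamma) ** (eps - 1) * (exp(a) - s2))

def theta_from_p10(target_p10, eps):
    """Invert G_eps to obtain the detection threshold for a target per-node p10."""
    return 2.0 * gammainccinv(eps, target_p10)

# Example (illustrative numbers): eps = time-bandwidth product, gamma = SNR (linear).
eps, gamma = 5, 10 ** (-5 / 10.0)
theta = theta_from_p10(0.1, eps)
print(p10(theta, eps), p11(theta, eps, gamma))
```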
C. Sensing MAC
The sensing MAC is a process responsible for sensing multiple channels, sharing sensing results with other users, and making a final decision on the PU presence. Because of the vast number of possibilities for sensing MAC algorithms it is hard to find a general set of parameters. Instead, we derive crosslayer parameters for a specific option of the sensing MAC. This methodology can be applied to any new sensing MAC scheme. We now introduce classifications which will be used in the derivation of cross-layer parameters.
1) Sensing Strategy for Grouping Channels and Users:
Each SU has to determine which channel should be sensed among the M channels. To reduce sensing and reporting overhead, OSA system can divide users and channels into n g sub-groups [50]. Sub-group i ∈ {1, · · · , n g } is formed by n u,i users who should sense m s,i channels to make a final decision cooperatively. Assume that all users are equally divided into groups then m s,i ∈ {⌊M/n g ⌋, ⌈M/n g ⌉} and n u,i ∈ {⌊N/n g ⌋, ⌈N/n g ⌉}. Note that for M/n g ∈ N and N/n g ∈ N all sub-groups have the same n u,i = N/n g and m s,i = M/n g for all i. Given N and M , if n g is small, more users are in a group and the collaboration gain increases, but at the same time more channels must be sensed, which results in more time overhead for sensing. For large n g , this relation is opposite.
2) Combining Scheme: By combining sensing results of other users, a OSA network makes a more reliable decision on PU state. As considered in [13], [51], we will take κ as a design parameter for the sensing MAC and find an optimum value to maximize the performance. Note that for the case of N user cooperation if κ = 1, the combining logic becomes the "or" rule [19,Sec. 3.2], [42, Sec. III-C] and if κ = N , it becomes the "and" rule.
3) Multiple Access for Measurement Reporting:
To transmit sensing results of multiple users through the shared media, a multiple access scheme is needed. Note that this multiple access scheme is only for the reporting process, different from the multiple access for data transfer. We consider the following approaches.
a) Time Division Multiple Access (TDMA):
This is a static and well-organized multiple access scheme for which a designated one bit slot for sensing report transmission is assigned to each user [43], [50]. b) TTDMA: In TDMA, when the SU receives all the reporting bits from other users the SU makes a final decision of presence of PU on the channel. However, in OSA network using TTDMA SUs may not need to wait until receiving the last reporting bit, because for the "κ out of N " rule, a reporting operation can stop as soon as κ one bits denoting PU presence are received. This sensing MAC aims at reducing the reporting overhead, but unfortunately we have not seen any paper proposing and discussing TTDMA.
c) Single Slot Multiple Access (SSMA):
For this scheme, known also as the boosting protocol [52], only one bit slot is assigned for reporting and all SUs use this slot as a common reporting period. Any SU that detects a PU transmits one bit in the common designated slot. Otherwise, a user does not transmit any bit in the designated slot. Then, reporting bits from SUs who detect a PU are overlapped and as a result all power of the slot is summed up. By measuring the power in the designated slot, a SU can determine whether the primary user exists or not. We assume perfect power control and perfect synchronization.
Even though this may not be practical, because carrier frequency or the phase offset cannot be avoided in real systems, this scheme serves as an upper bound for sensing MAC performance. For the analysis of SSMA in isolation but in a more realistic physical layer conditions the reader is referred to [53], [54].
D. Cross-Layer Parameters
Considering the combined impact of the individual layers, we derive cross-layer parameters in the framework as described in (1). More specifically these are t q and t d , derived as a function of individual parameters and p f , and p d , denoting final network-wide probabilities of false alarm and detection, respectively.
1) Detection Time t d and Quiet Time t q : Detection time t d is defined as the time duration from the point that a SU starts to sense, to the point that a SU makes a final decision on PU presence. Regardless of the data transfer and spectrum sensing time overlap, the final detection decision is made only after combining the sensing group's reported information [55]. Thus t d is the time from the start of the sensing phase to the end of the reporting phase, i.e. t d = t s + t r .
Since the data transfer may not be possible during sensing or reporting phases t q ≤ t d , depending on the approach. When spectrum sensing and data transfer are divided in time division manner t q = t s + t r .
Note that three other ways of combining the sensing, reporting, and data phases are possible (they will not be considered in the remainder of the paper): (i) simultaneous reporting and data, which can be implemented by using a separate channel as in [56], for which t_q = t_s, (ii) simultaneous sensing and data, implemented by using the frequency hopping method as in [57], for which t_q = t_r, and (iii) simultaneous sensing, reporting, and data, for which t_q = 0. Conceptually, simultaneous sensing, reporting, and data transfer is possible and seems most efficient, but we have not found any implementation of it in the literature. Note that in order to implement simultaneous sensing and transmission at least two radio front ends are needed, which increases the total cost of the device.
Define \bar{m}_s as the number of individual sensing events needed to complete the sensing operation and \bar{m}_r as the average number of bits to report. Then the sensing time and the reporting time can be calculated as t_s = \bar{m}_s t_e and t_r = \bar{m}_r t_a. Note that \bar{m}_s is affected by the bandwidth of the sensing radio because it can scan multiple channels at once if the bandwidth of the sensing radio is wide. For the case that the sensing radio is narrower than the bandwidth to sense, i.e. α < max{m_{s,1}, · · · , m_{s,n_g}}/M, we assume that an SU monitors all channels by sequential sensing [33], because the reporting phase should be synchronized after all SUs finish the sensing phase. With this assumption \bar{m}_s = max{m_{s,1}, · · · , m_{s,n_g}}/(αM), because even though the bandwidth to sense is less than that of the sensing radio it still needs one sensing cycle to get the information. For \bar{m}_r, because there are n_g groups in an OSA system, \bar{m}_r = \sum_{i=1}^{n_g} \bar{m}_{r,i}, where \bar{m}_{r,i} depends on the multiple access scheme used for reporting, which we compute below. a) TDMA: All n_{u,i} users should transmit the sensing results of m_{s,i} channels. Thus \bar{m}_{r,i} = n_{u,i} m_{s,i}. b) TTDMA: For κ < n_{u,i}/2, if κ ones are received the reporting process ends. We introduce a variable δ denoting the number of bits at which the reporting process finishes. Thus there should be κ − 1 ones within the first δ − 1 bits and the δ-th bit should be one. Because the range of δ is from κ to n_{u,i}, the average number of bits for this condition is derived as
\bar{m}_{1,i} = \sum_{\delta=\kappa}^{n_{u,i}} \binom{\delta-1}{\kappa-1} \left[ (1-q_p)\,\delta\, p_{00}^{\delta-\kappa} p_{10}^{\kappa} + q_p\,\delta\, p_{01}^{\delta-\kappa} p_{11}^{\kappa} \right]. \qquad (5)
Moreover, if the number of received zeros, denoting PU absence, equals n_{u,i} − κ + 1, the reporting process will stop, because even if the remaining bits were all ones the number of ones would be less than κ. The reporting process then stops at the δ-th bit if δ − n_{u,i} + κ − 1 ones are received within the first δ − 1 bits and a zero is received at the δ-th bit. The range of δ is from n_{u,i} − κ + 1 to n_{u,i}, and thus the average number of bits for this condition is
\bar{m}_{2,i} = \sum_{\delta=\nu_i}^{n_{u,i}} \binom{\delta-1}{\delta-\nu_i} \left[ (1-q_p)\,\delta\, p_{00}^{\nu_i} p_{10}^{\delta-\nu_i} + q_p\,\delta\, p_{01}^{\nu_i} p_{11}^{\delta-\nu_i} \right], \qquad (6)
where ν_i = n_{u,i} − κ + 1. Therefore, because there are m_{s,i} channels to sense in group i, \bar{m}_{r,i} = m_{s,i}(\bar{m}_{1,i} + \bar{m}_{2,i}).
For the case κ ≥ n_{u,i}/2, \bar{m}_{1,i} is calculated by counting zeros and \bar{m}_{2,i} by counting ones. Thus we use \bar{m}_{r,i} = m_{s,i}(\bar{m}_{1,i} + \bar{m}_{2,i}) again, replacing κ with n_{u,i} − κ + 1, p_{00} with p_{10} and p_{01} with p_{11}.
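A small sketch of (5)-(6) for the case κ < n_{u,i}/2 is given below (for κ ≥ n_{u,i}/2 the roles of ones and zeros are swapped as described above); all numerical inputs are illustrative.

```python
# Sketch of the average TTDMA reporting length per sub-group, Eqs. (5)-(6),
# for kappa < n_u/2. All numerical arguments are illustrative assumptions.
from math import comb

def m1(n_u, kappa, q_p, p00, p01, p10, p11):
    """Term (5): reporting stops once kappa 'busy' bits are collected."""
    return sum(comb(d - 1, kappa - 1) *
               ((1 - q_p) * d * p00 ** (d - kappa) * p10 ** kappa +
                q_p       * d * p01 ** (d - kappa) * p11 ** kappa)
               for d in range(kappa, n_u + 1))

def m2(n_u, kappa, q_p, p00, p01, p10, p11):
    """Term (6): reporting stops once n_u - kappa + 1 'idle' bits are collected."""
    nu = n_u - kappa + 1
    return sum(comb(d - 1, d - nu) *
               ((1 - q_p) * d * p00 ** nu * p10 ** (d - nu) +
                q_p       * d * p01 ** nu * p11 ** (d - nu))
               for d in range(nu, n_u + 1))

def m_r_i(m_s, n_u, kappa, q_p, p00, p01, p10, p11):
    """Average reporting bits for a sub-group with m_s channels to sense."""
    return m_s * (m1(n_u, kappa, q_p, p00, p01, p10, p11) +
                  m2(n_u, kappa, q_p, p00, p01, p10, p11))

print(m_r_i(3, 12, 2, 0.1, p00=0.9, p01=0.05, p10=0.1, p11=0.95))
```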
Because we assumed so far that κ is known to each node in the network, OSA nodes know when to stop reporting measurements and start data communication without being instructed by external parties.
For comparison we analyze another type of TTDMA, denoted as κTTDMA, where a cluster head node makes the decision to stop the reporting phase in the OSA network. For example, this approach may be necessary if the κ value is updated in real time. In the worst case scenario this approach requires two bits to be reported by the SU, i.e. one for sending the sensing data and one for an acknowledgment from the cluster head; the reporting lengths given by (5) and (6) then increase accordingly.

2) Probability of False Alarm p_f and Probability of Detection p_d: Since the OSA network is divided into n_g sub-groups, the network-wide probability of false alarm is

p_f = \frac{1}{n_g} \sum_{i=1}^{n_g} p_{f,i}, \qquad (7)

where p_{f,i} is the probability of false alarm of sub-group i. Using (7) we can also derive p_d by substituting p_{f,i} with p_{d,i}.

a) TDMA: Since each SU combines all n_{u,i} reported bits using the "κ out of N" rule,

p_{f,i} = \sum_{\delta=\kappa}^{n_{u,i}} \binom{n_{u,i}}{\delta} \tilde{p}_{10}^{\,\delta}\, \tilde{p}_{00}^{\,n_{u,i}-\delta}, \qquad (8)

where \tilde{p}_x = (1 − p_e) p_x + p_e (1 − p_x) for p_x ∈ {p_{10}, p_{00}}, while p_{d,i} is derived from (8) by substituting \tilde{p}_{10} with \tilde{p}_{11} and \tilde{p}_{00} with \tilde{p}_{01}.

b) TTDMA: In this case the SU does not need to receive n_{u,i} bits to make a final decision, because the reporting phase ends when the number of ones reaches κ. To derive p_{f,i} for this case, we introduce a variable β denoting the number of zeros. The total number of reporting bits is κ + β if the last bit is one, because otherwise the reporting phase would end at fewer than κ + β bits. Therefore, there should be β zeros in the first κ + β − 1 bits and the (κ + β)-th bit should be one. Because β can vary from 0 to n_{u,i} − κ,

p_{f,i} = \sum_{\beta=0}^{n_{u,i}-\kappa} \binom{\kappa+\beta-1}{\beta} \tilde{p}_{10}^{\,\kappa}\, \tilde{p}_{00}^{\,\beta}. \qquad (9)

Finally, p_{d,i} is obtained from (9) by substituting \tilde{p}_{10} with \tilde{p}_{11} and \tilde{p}_{00} with \tilde{p}_{01}.
c) SSMA: Obviously, the process of the reporting information for SSMA is the same as for TDMA.
Therefore p f,i and p d,i are defined the same as for TDMA.
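The sub-group decision probabilities (7)-(9) can be evaluated directly, as in the sketch below; p_{d,i} follows from the same functions with p_10 → p_11 and p_00 → p_01, and the example arguments are arbitrary.

```python
# Sketch of the sub-group false-alarm probabilities, Eqs. (8)-(9), with the
# reporting-error adjustment p~_x = (1 - p_e) p_x + p_e (1 - p_x).
from math import comb

def tilde(p_x, p_e):
    return (1 - p_e) * p_x + p_e * (1 - p_x)

def pf_tdma(n_u, kappa, p10, p00, p_e=0.0):
    """Eq. (8): at least kappa of the n_u received bits indicate a busy channel."""
    a, b = tilde(p10, p_e), tilde(p00, p_e)
    return sum(comb(n_u, d) * a ** d * b ** (n_u - d) for d in range(kappa, n_u + 1))

def pf_ttdma(n_u, kappa, p10, p00, p_e=0.0):
    """Eq. (9): the kappa-th busy bit arrives before n_u - kappa + 1 idle bits."""
    a, b = tilde(p10, p_e), tilde(p00, p_e)
    return sum(comb(kappa + beta - 1, beta) * a ** kappa * b ** beta
               for beta in range(0, n_u - kappa + 1))

def pf_network(pf_groups):
    """Eq. (7): network-wide false alarm as the average over the n_g sub-groups."""
    return sum(pf_groups) / len(pf_groups)

# Example with arbitrary per-node probabilities (p_d,i: swap p10->p11, p00->p01).
print(pf_tdma(6, 2, 0.1, 0.9, p_e=0.01), pf_ttdma(6, 2, 0.1, 0.9, p_e=0.01))
```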
IV. MULTICHANNEL OSA MAC PROTOCOL ANALYSIS
In this section we present the analysis of throughput R for all considered combinations of MAC protocol architectures. As noted in Section I-C, we propose a set of new multichannel MAC protocols for OSA. We will first describe their operation, later presenting the analysis framework.
A. Description of New Multichannel MAC protocols for OSA
We consider two major groups of MAC protocols for OSA: (i) those enabling buffering of the SU connections preempted by the PU arrival, and (ii) those enabling switching of the SU connections to a vacant channel when preempted. In the former group, when the PU arrives the existing SU connection pauses at the time of preemption and resumes on the same channel as soon as the PU goes idle. We assume that the SU always waits for the PU to finish its transmission. The case where the buffered SU connection expires after a predefined time, not analyzed here, is presented in [22] for the centralized network. We do not consider any channel reservation schemes for potential SU connections to be buffered [25]. When buffering is not possible, the preempted SU connection is considered lost and a new connection must be established on the control channel. In the latter group, when the PU arrives the existing SU connection looks for a new empty channel to continue transmission. If such a channel cannot be found the connection is lost. Without channel switching, the existing SU connection is lost as soon as the PU preempts the channel.
Obviously we can have four combinations of these groups for OSA MAC, all of which have been considered in the analysis: (i) with no buffering and no channel switching [30], denoted as B_0S_0, where SU connections preempted by the PU are lost; (ii) with no buffering and channel switching [24], [25], [26], denoted as B_0S_1, where SU connections preempted by the PU switch to a free channel and connections that cannot find a free channel are blocked; (iii) with buffering and no channel switching [15], [22], [23], denoted as B_1S_0, where preempted SU connections are buffered until the PU releases the channel; and (iv) with both buffering and channel switching, denoted as B_1S_1.

We propose a three dimensional Markov chain whose state vector is given as (X_t, Y_t, Z_t), where X_t is the number of channels utilized by SU connections, Y_t is the number of channels occupied by the PU, and Z_t is the total number of SU connections (including buffered ones) at time t; note that this differs from [15, Sec. III], where buffered SU connections were also considered to be utilizing the PU channels.
Considering a real OSA system, there are conditions that qualify valid states. With SU connection buffering-enabled MAC protocols for OSA, the number of connections cannot be less than the number of channels utilized by SUs, i.e. X t ≤ Z t . Additionally, SUs do not pause transmissions over unoccupied channels. Therefore, the number of SU connections not utilizing a channel cannot exceed the number of channels occupied by PUs, i.e. Z t − X t ≤ Y t or Z t ≤ X t + Y t . Finally, the sum of the channels utilized by PUs and the SUs cannot be greater than M D , i.e. X t + Y t ≤ M D . By combining these conditions we can compactly write them as
0 \le X_t \le Z_t \le X_t + Y_t \le M_D. \qquad (10)
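A minimal sketch enumerating the valid states allowed by (10) is shown below; M_D and s_m are treated as given inputs.

```python
# Enumerate the valid states (x, y, z) of the Markov chain for the buffering
# case according to condition (10); M_D is the number of data channels and
# s_m the maximum number of simultaneous SU connections (both assumed given).
def valid_states(M_D, s_m):
    states = []
    for x in range(s_m + 1):                 # channels utilized by SUs
        for y in range(M_D + 1):             # channels occupied by PUs
            for z in range(s_m + 1):         # SU connections (incl. buffered)
                if 0 <= x <= z <= x + y <= M_D:
                    states.append((x, y, z))
    return states

print(len(valid_states(M_D=3, s_m=3)))
```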
When connection buffering is disabled the number of SU connections must be the same as the number of channels utilized by SUs, i.e. X t = Z t . Therefore, for non-buffering SU connection OSA MAC
protocols (X t , Y t , Z t = X t ) ⇒ (X t , Y t ).
For the microscopic case the average channel throughput, excluding switching and sensing overhead, is computed as
R = C \sum_{x=0}^{s_m} \sum_{y=0}^{M_D} \sum_{z=0}^{s_m} x\, \pi_{xyz}, \qquad (11)
where s m = max{S} and the steady-state probability π xyz is given by
\pi_{xyz} = \lim_{t\to\infty} \Pr(X_t = x, Y_t = y, Z_t = z), \qquad (12)
and the state transition probabilities to compute (12) will be derived in the subsequent section, uniquely for each OSA multichannel MAC protocol.
Finally, for the macroscopic case the average channel throughput, excluding switching and sensing overhead, is computed as
R = \left\{ q_p (1 - p_d) + (1 - q_p)(1 - p_f) \right\} R_c\, C, \qquad (13)
where R_c = \sum_{i=1}^{s_m} i\,\pi_i and \pi_i is the solution of the steady state Markov chain given by [15, Eq. (13)]. Since in the macroscopic model the PU state does not change from slot to slot, SU connection buffering and switching are not needed. Note that, contrary to the incorrect assumptions of [15, Eq. (12)], [34, Eq. (7) and (9)], we compute R in (11) and (13) taking all the channels into account, irrespective of the type of OSA MAC. This is because the models of [15], [34] considered only data channels for the throughput investigation in DCC in the final calculation stage, assuming that no data traffic is transmitted on the control channel. However, the utilization must be computed over all channels, irrespective of whether one channel carries only control data or not.
C. Derivation of State Transition Probabilities for the Microscopic Model
We denote the state transition probability as
p_{xyz|klm} = \Pr(X_t = x, Y_t = y, Z_t = z \mid X_{t-1} = k, Y_{t-1} = l, Z_{t-1} = m). \qquad (14)
Note that changes in X t and Z t depend on the detection of the PU. In addition, changes in Z t depend on
OSA traffic characteristics such as the packet generation probability p and the average packet length 1/q.
Also, note that the steady state probability vector π, containing all possible steady state probabilities π_{xyz}, is derived by solving π = πP, where the entries of the right stochastic matrix P are defined by (14), knowing that \sum_{x,y,z} \pi_{xyz} = 1.
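A generic way to obtain the steady-state vector π from an already assembled right stochastic matrix P is sketched below; it is not specific to any of the four protocols and simply solves π = πP together with the normalization constraint.

```python
# Generic sketch: solve pi = pi P for a right stochastic matrix P assembled
# from the transition probabilities (14), with the normalization sum(pi) = 1.
import numpy as np

def stationary_distribution(P):
    n = P.shape[0]
    # Stack the balance equations (P^T - I) pi = 0 with the normalization row.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 2-state example (illustrative only).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(stationary_distribution(P))   # -> approx [0.8333, 0.1667]
```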
As a parameter to model the PU state, p_c denotes the probability that the OSA network collectively detects a PU channel as occupied, i.e.

p_c = q_p p_d + (1 - q_p) p_f. \qquad (15)
We introduce two supporting functions. First, we denote by T_k^{(j)} the probability that j out of k ongoing SU connections terminate their transmissions in a given time slot:
T_k^{(j)} = \begin{cases} \binom{k}{j}\, q^{j} (1-q)^{k-j}, & 0 \le j \le k,\\ 0, & \text{otherwise}. \end{cases} \qquad (16)
Note that k in T_k^{(j)} refers to the number of ongoing SU connections. Second, we denote by S_m^{(j)} the probability that j new SU connections are arranged on the control channel given m existing connections; it is obtained from the arrangement probabilities of [34, Eq. (5) and (8)] by additionally considering PU detection on the control channel. If a PU is detected on the control channel, an SU connection cannot be generated because there is no chance to acquire a data channel. We then have [15, Eq. (17)]
S_m^{(j)} = \begin{cases}
\tilde{S}_m^{(1)}, & j = 1 \ (\text{DCC}),\\[2pt]
\tilde{S}_m^{(1)} \dfrac{N-2m-1}{N-1} \cdot \dfrac{M_D-m}{M}, & j = 1 \ (\text{HCC}),\\[2pt]
1 - \tilde{S}_m^{(1)}, & j = 0,\\[2pt]
0, & \text{otherwise},
\end{cases} \qquad (17)

where

\tilde{S}_m^{(1)} = \begin{cases}
\hat{S}_m^{(1)}, & \text{PU-free control channel (DCC only)},\\[2pt]
(1 - p_c)\,\hat{S}_m^{(1)}, & \text{otherwise},
\end{cases} \qquad (18)

and \hat{S}_m^{(1)} is the probability that exactly one new SU connection is arranged on the control channel given m existing connections, as in [34]. Note that the arrangement probability is conditioned on the total number of existing SU connections m rather than only on the active ones. This is because we assume that an SU that has a connection but pauses data transmission due to the PU presence does not try to make another connection. We can now derive the transition probabilities individually for all four different OSA MAC protocols.
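The two supporting quantities (15)-(16) are straightforward to evaluate, as sketched below; the arrangement probability S_m^{(j)} of (17)-(18) additionally requires the protocol-specific quantity Ŝ_m^{(1)} of the underlying multichannel MAC and is therefore not reproduced here.

```python
# Helper functions for the supporting quantities (15)-(16). The arrangement
# probability S_m^(j) of (17)-(18) is omitted because it depends on the
# protocol-specific one-connection arrangement probability \hat{S}_m^(1).
from math import comb

def p_c(q_p, p_d, p_f):
    """Probability that the OSA network marks a channel as occupied, Eq. (15)."""
    return q_p * p_d + (1 - q_p) * p_f

def T(k, j, q):
    """Probability that j out of k ongoing SU connections terminate, Eq. (16)."""
    if 0 <= j <= k:
        return comb(k, j) * q ** j * (1 - q) ** (k - j)
    return 0.0

print(p_c(0.1, 0.99, 0.05), T(4, 1, 0.2))
```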
1) Case B_0S_0: Recall that for non-buffering OSA MAC protocols Z_t = X_t, so the transition probability reduces to p_{xy|kl}. The case x > k + 1 is infeasible, since at most one new connection can be created per slot. Now, consider the case x < k + 1. When an SU data connection is terminated, there can be two possible reasons: (i) the SU completes its transmission, or (ii) a PU is detected on a channel that was assigned to an SU for data transmission before sensing. The former was analyzed in [34, Sec. 3]. To model the latter, we introduce the variable i denoting the number of channels that were reserved for SU data transmission before sensing but cannot be utilized due to PU detection; the corresponding total PU arrival probability is denoted P^{(i)}_{x,y} (its counterpart for the buffering case, R^{(z)}_{x,y}, will be used below). In addition, we need to discuss the edge state, which covers two cases: (i) no more channels are available, either utilized by SUs or PUs, and (ii) all possible SU connections are established, which we denote as the "full connection state". For the transition from the full connection state to the edge state, we have to consider the case that one new connection is generated while no existing connection is terminated, which means the trial for the new connection by the free SU is not established because all possible connections already exist.
Writing all conditions compactly, denote the indicator for the edge state
\mathbb{1}_{x,y} = \begin{cases} 1, & x + y = M_D \ \text{or}\ x = s_m,\\ 0, & \text{otherwise}, \end{cases} \qquad (19)
and let P^{(i)}_{x,y} denote the total PU arrival probability introduced above. Then
p_{xy|kl} = \begin{cases}
0, & x > k+1,\\[2pt]
T_k^{(0)} S_k^{(1)} P^{(0)}_{x,y}, & x = k+1,\\[2pt]
\sum_{i=0}^{i_m} \left[ T_k^{(k-x-i)} S_k^{(0)} + T_k^{(k-x-i+1)} S_k^{(1)} \right] P^{(i)}_{x,y}, & x < k+1,\ (k < s_m \ \text{or}\ \mathbb{1}_{x,y} = 0),\\[2pt]
\sum_{i=0}^{i_m} \left[ T_k^{(k-x-i)} S_k^{(0)} + T_k^{(k-x-i+1)} S_k^{(1)} \right] P^{(i)}_{x,y} + T_k^{(0)} S_k^{(1)} P^{(0)}_{0,y}, & x < k+1,\ k = s_m,\ \mathbb{1}_{x,y} = 1,
\end{cases} \qquad (20)
where i m = min(s m − x, y).
2) Case B 0 S 1 : Although in the SU connection non-switching case both DCC and HCC can be considered, only DCC will be able to perform switching without any additional control data exchange, which we prove formally.
Before going into the details of the derivation, note that for the class of OSA MAC protocols with a dedicated control channel every node can follow the connection arrangement of the entire network. In HCC [34], on the other hand, it is impossible for a single node to learn the whole network connection arrangement, since each sender–receiver pair cannot listen to others while following its own hopping sequence. We now present the following proof.
Theorem 1: Channel switching in DCC can be performed without any additional control message exchange.
Proof: We prove this by showing a possible distributed channel switching process. Following earlier observation, in DCC each node can trace the connection arrangement of others, i.e. which channel has been reserved by a sender receiver pair. To distribute the switching events equally among SUs each SU computes the priority level as
\Pi_{i,t} = \Pi_{i,t-1} + 1_p, \qquad (21)
where

1_p = \begin{cases} 1, & \text{preemption by the PU},\\ 0, & \text{otherwise}, \end{cases} \qquad (22)
and \Pi_{i,t} is the priority level of SU i at time t, with the initial priority \Pi_{i,0} given by the MAC address of the SU. Preempted connections are then remapped to free channels in order of decreasing priority,

\{I_{a,t}, I_{b,t}, \ldots, I_{c,t}\} \rightarrow \{U_{1,t}, U_{2,t}, \ldots\}, \qquad (23)

where |I| = |U| = M_D - X_t - Y_t, \rightarrow is the mapping operator denoting the process of switching active SU connection i to free channel j, I_{i,t} denotes the index of a communicating SU (transmitter) at time t, with \Pi_{a,t} > \Pi_{b,t} > \cdots > \Pi_{c,t}, and U_{j,t} denotes the free channel with index j at time t.
Note that existing connections that have not been mapped to a channel are considered blocked. Also note that in the algorithm given in Theorem 1 connections are preempted randomly with equal probability by the PU.
Since new SU connections are also assumed to use new channels randomly with equal probability, each SU connection is blocked with uniform probability.
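The distributed switching rule of Theorem 1 can be sketched as below; the data structures (a priority dictionary and plain lists of transmitter and channel indices) are illustrative choices, not part of the protocol specification.

```python
# Illustrative sketch of the switching rule of Theorem 1: preempted connections
# are mapped to free channels in decreasing order of priority, and priorities
# are incremented on preemption as in (21)-(22).
def switch_connections(preempted, free_channels, priority):
    """preempted: transmitter ids whose channel was taken by a PU;
       free_channels: currently idle channel indices;
       priority: dict transmitter id -> priority level Pi_{i,t-1}."""
    for i in preempted:                       # Eq. (21)-(22): 1_p = 1 on preemption
        priority[i] = priority.get(i, 0) + 1
    # Highest-priority connections are switched first; the rest are blocked.
    ordered = sorted(preempted, key=lambda i: priority[i], reverse=True)
    mapping = dict(zip(ordered, free_channels))
    blocked = [i for i in ordered if i not in mapping]
    return mapping, blocked, priority

mapping, blocked, prio = switch_connections(
    preempted=[3, 7, 9], free_channels=[1], priority={3: 2.0, 7: 5.0, 9: 1.0})
print(mapping, blocked)   # -> {7: 1} [3, 9]
```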
To enable SU connection switching in HCC one way is to augment it with a separate radio front end which would follow the hopping sequences and control data exchange of the OSA network. Obviously this increases the cost of hardware and contradicts the idea of HCC, where all channels should be used for data communication. Therefore while evaluating OSA MAC protocols in Section V-B, we will not consider SU connection switching for HCC.
We now define the state transition probability p_{xy|kl} for the considered OSA MAC protocol. Because x > k + 1 is infeasible, the state transition probability for x > k + 1 equals zero. For x = k + 1, y PUs can appear on any of the M_D channels, because even though a PU is detected the SUs can still transmit data by switching to the idle channels, and the possible number of PU appearance patterns is \binom{M_D}{y}. Note that the possible number of PU appearance patterns in the case B_0S_1 is always \binom{M_D}{y}, even for the edge state, because the data channel can be changed by switching to a vacant channel after the PU detection. Because it is impossible to create more than one new connection at a time, the OSA connection creation probabilities for x = k + 1 are the same as in (20), i.e. T_k^{(0)} S_k^{(1)}. Hence
p_{xy|kl} = \begin{cases}
0, & x > k+1,\\[2pt]
T_k^{(0)} S_k^{(1)} P^{(0)}_{0,y}, & x = k+1,\\[2pt]
\left[ T_k^{(k-x)} S_k^{(0)} + T_k^{(k-x+1)} S_k^{(1)} \right] P^{(0)}_{0,y}, & x < k+1,\ \mathbb{1}_{x,y} = 0,\\[2pt]
\sum_{i=0}^{i_m} \left[ T_k^{(k-x-i)} S_k^{(0)} + T_k^{(k-x-i+1)} S_k^{(1)} \right] P^{(0)}_{0,y}, & x < k+1,\ k < s_m,\ \mathbb{1}_{x,y} = 1,\\[2pt]
\sum_{i=0}^{i_m} \left[ T_k^{(k-x-i)} S_k^{(0)} + T_k^{(k-x-i+1)} S_k^{(1)} \right] P^{(0)}_{0,y} + T_k^{(0)} S_k^{(1)} P^{(0)}_{0,y}, & x < k+1,\ k = s_m,\ \mathbb{1}_{x,y} = 1.
\end{cases} \qquad (24)
3) Case B_1S_0: Before we discuss this case, we present the following observation, which has implications for the design of the simulation models and the derivation of p_{xyz|klm} for SU connection buffering MAC protocols.
Observation 2: For all SU connection buffering OSA MAC protocols the same average link level throughput results from creating a brand new connection or resuming a previously preempted and buffered connection on the arrival of PU on a channel.
Proof: Due to the memoryless property of the geometric distribution
\Pr\!\left(1/q_i > 1/q_{t_1} + 1/q_{t_2} \,\middle|\, 1/q_i > 1/q_{t_1}\right) = \Pr\!\left(1/q_i > 1/q_{t_2}\right), \qquad (25)
where 1/q i is the duration of connection i, 1/q t1 is the connection length until time t 1 when it has been preempted by PU, and 1/q t2 is the remaining length of the connection after SU resumes connection at time t 2 . Since either a newly generated SU connection after resumption, or the remaining part of a preempted connection needs a new connection arrangement on the control channel, the number of slots occupied by each connection type is the same.
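The memoryless property used above is easy to verify numerically for a geometric packet length, as in the following quick check (q, a and b are arbitrary).

```python
# Quick numerical check of the memoryless property (25) for a geometric
# packet length: P(L > a + b | L > a) = P(L > b).
q = 0.2                     # per-slot completion probability (arbitrary)
a, b = 3, 5
tail = lambda n: (1 - q) ** n          # P(L > n) for a geometric length L
print(tail(a + b) / tail(a), tail(b))  # both equal (1 - q)**b
```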
Having Observation 2 we can derive transition probabilities. Because packet generation is affected by the number of connections, we use Z t to classify conditions to derive the state transition probabilities.
Due to the assumption of a maximum number of one connection generation in one time slot, the state transition probability of the case of z > m + 1 is zero.
p_{xyz|klm} = \begin{cases}
0, & z > m+1,\\[2pt]
T_k^{(0)} S_m^{(1)} R^{(z)}_{x,y}, & z = m+1,\\[2pt]
\left[ T_k^{(m-z)} S_m^{(0)} + T_k^{(m-z+1)} S_m^{(1)} \right] R^{(z)}_{x,y}, & z < m+1,\ (m < s_m \ \text{or}\ z < s_m),\\[2pt]
\left[ T_k^{(0)} S_m^{(0)} + T_k^{(1)} S_m^{(1)} + T_k^{(0)} S_m^{(1)} \right] R^{(z)}_{x,y}, & z = m = s_m.
\end{cases} \qquad (26)
Note that this OSA MAC has been previously analyzed in [15]. As has been pointed out, the model proposed there did not work well for the full range of parameters. This is due to the following. A Markov model was derived for {X_t, Y_t} (using the unmodified transition probabilities of [34, Eq. 6], originally used to calculate the average throughput of networks based on non-OSA multichannel MAC protocols). With this limitation, the termination probability in [15, Eq. (14)], analogous to (16), included an aggregated stream of PU and SU traffic, where the PU traffic q_p was later subtracted from the steady state channel utilization in [15, Eq. (10)], analogous to (11). The approximation of [15], although Markovian, worked reasonably well only for moderate values of the PU activity q_p.
4) Case B_1S_1: Combining SU connection buffering with channel switching, and noting that a connection cannot remain paused while a data channel is free, the state transition probability becomes

p_{xyz|klm} = \begin{cases}
0, & z > m+1,\ \text{or}\ z \neq x \ \text{with}\ x+y < M_D,\ \text{or}\ m \neq k \ \text{with}\ k+l < M_D,\\[2pt]
T_k^{(0)} S_m^{(1)} R^{(0)}_{0,y}, & z = m+1,\\[2pt]
\left[ T_k^{(m-z)} S_m^{(0)} + T_k^{(m-z+1)} S_m^{(1)} \right] R^{(0)}_{0,y}, & z < m+1,\ (m < s_m \ \text{or}\ z < s_m),\\[2pt]
\left[ T_k^{(0)} S_m^{(0)} + T_k^{(1)} S_m^{(1)} + T_k^{(0)} S_m^{(1)} \right] R^{(0)}_{0,y}, & z = m = s_m.
\end{cases} \qquad (27)
5) Impact of Channel Error on the Throughput Calculations: All previous analyses were done under
the assumption of the error-free channel. In this section we will briefly discuss the impact of channel error on the throughput calculations.
Channel error impacts the throughput in two ways. First, an error affects the throughput when the SU involved in a connection setup fails to receive a control message from the transmitter; as a result no connection is established. Second, an error affects the throughput when an SU not associated with the current connection setup fails to overhear the control message exchange. For HCC, the control channel is selected as one of the data channels by a hopping method. Thus, if we assume an error on the control channel, it is reasonable to consider errors on the data channel as well.
For the control channel, if an error occurs, a connection fails to be established. This is modeled by multiplying Ŝ_m by 1 − p_e, where p_e is the probability of error in the current time slot. For the data channel, different error handling strategies can be considered. We focus on the following two situations: i) case E_1, denoting a packet punctured by unrecovered errors, and ii) case E_2, denoting transmission termination on error.
a) Case E 1 : It can be assumed that when an error occurs on a time slot, the SU simply discards that time slot and resumes transmitting the remaining packet fragment from the next correct time slot. This is modeled by replacing the capacity C with C(1 − p e ).
b) Case E_2: It can also be assumed that the connection terminates when an error occurs. Thus the probability that the packet finishes transmitting, q, should be replaced by q + (1 − q)p_e. In addition, if the control channel hops to a channel which is being utilized for data transmission and an error occurs, a new connection cannot be established. This is modeled by multiplying Ŝ_m by (1 − p_e)^2.
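The error-model adjustments described in this subsection amount to simple parameter substitutions, summarized in the sketch below for HCC; the example arguments are arbitrary.

```python
# Sketch of the channel-error adjustments described above for HCC:
# E1 punctures errored slots (effective capacity reduction),
# E2 terminates the connection on error (and penalizes connection setup twice).
def apply_error_model(model, C, q, S_hat, p_e):
    if model == "E1":                       # punctured slots
        return C * (1 - p_e), q, S_hat * (1 - p_e)
    if model == "E2":                       # termination on error
        return C, q + (1 - q) * p_e, S_hat * (1 - p_e) ** 2
    return C, q, S_hat                      # E0: error-free reference

print(apply_error_model("E2", C=1e6, q=0.02, S_hat=0.3, p_e=0.01))
```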
V. NUMERICAL RESULTS
We now present numerical results for our model. First, we present results independently for spectrum sensing and OSA MAC performance, in Section V-A and Section V-B, respectively, for the microscopic case. Then in Section V-C we present the results of the joint optimization of these two layers in the microscopic and macroscopic context. Moreover, due to the vast combination of parameters to consider, we have decided to follow the convention of [15], [34] and focus on two general network setups (unless stated otherwise): a small scale network and a large scale network. In this section we will also compare the analytical model of the sensing layer and OSA MAC protocols to simulation results. The simulations were developed with Matlab and reflect exactly the sensing models and MAC protocols presented in this paper. Simulation results for each system were obtained using the method of batch means for a 90% confidence interval. To evaluate the sensing protocols each batch contained 100 events and the whole simulation run was divided into 10 batches with no warm up phase.
When simulating the OSA MAC protocols, each batch contained 1000 events, while the whole simulation was divided into 100 batches with a warm up period equal to 100 events.
A. Spectrum Sensing Architecture Performance
1) Impact of the Reporting Scheme on PU Detection Performance: For all possible combinations of sensing architectures we compute the probability of false alarm for a wide range of t_q. For the two networks considered we select a common set of parameters, with t_t = t_d. The results are presented in Fig. 2.

The advantage of TTDMA and SSMA can be shown more clearly if we compare the results for different p_d = p_{d,min} requirements. We can observe that a high detection requirement such as p_d = 0.99 makes the performance worse, as generally known. However, if TTDMA or SSMA is applied, the performance for p_d = 0.99 can be higher than that of TDMA for p_d = 0.9. For example, in the range t_q < 50 µs in Fig. 2(a), SSMA for p_d = 0.99 outperforms TDMA for p_d = 0.9. Moreover, in Fig. 2(b), for t_q ≳ 550 µs, SSMA and TTDMA for p_d = 0.99 outperform TDMA for p_d = 0.9.
It is important to note that κTTDMA performs worse than the rest of the protocols. This is due to the excessive delay caused by the instant acknowledgment of the reporting result by the cluster head node. Note that κTTDMA is a lower bound for the operation of TTDMA. Also note that if TDMA were to be equipped with an acknowledgment function, as in κTTDMA, its performance would be degraded in the same way as TTDMA's. Since we analyze a static network with pre-set parameter values, e.g. κ does not change over time, in the following sections we proceed with unmodified TTDMA only.
2) Impact of Channel Errors during Reporting on PU Detection Performance:
The results are presented in Fig. 3. For the small and large scale networks, with the same parameters as used in Section V-A1, we have observed the probability of false alarm, keeping the detection probability p_d constant, for varying quiet time t_q. First, when comparing Fig. 2 (no channel error) and Fig. 3 (channel error), the impact of errors is clearly visible, i.e. p_f increases for every protocol. However, the relation between the individual protocols remains the same, since errors affect all protocols equally. Second, the effect of errors on the small scale network is smaller than on the large scale network, compare Fig. 3(a) and Fig. 3(b), since the probability that an SU will send a wrong report grows with the number of nodes in the network. Lastly, for small values of κ the probability of false alarm stabilizes and never reaches zero. However, large values of κ reduce the effect of channel errors significantly, because with high κ the probability of making an error decreases rapidly. With 20% of the nodes participating in the cooperative agreement on the PU state, i.e. κ = 2 for the small and κ = 8 for the large scale network, the effect of errors is reduced almost to zero.
3) Impact of Cooperation Level on PU Detection Performance:
The results are presented in Fig. 4.
We have selected TTDMA and set p d = p d,min = 0.99 as a protocol for further investigation. We observe that for the small scale network, see Fig. 4(a), the performance for κ = 2 is the best, while for the large scale network, see Fig. 4(b), the best performance can be achieved when κ = 8 or 16 if p f < 0.1.
Based on this observation, we conclude that for given detection requirements a high PU detection rate is obtained when κ is well below the total number of SUs in the network. While for the considered setup the optimal κ ≈ 20% of the nodes, this value might be different for other network configurations.

4) Impact of the Number of Sensing Groups on PU Detection Performance: The results are presented in Fig. 5. An interesting observation is that the number of groups achieving the best performance becomes larger as the number of users N increases. For the small scale network, see Fig. 5(a), the best performance is observed for n_g = 2 or n_g = 3, while for the large scale network, Fig. 5(b), n_g = 6 is the best. This is because for the large scale network the reporting overhead caused by the large number of users offsets the performance improvement achieved by a larger cooperation scale.

5) Impact of κ on PU Detection Performance: The results are presented in Fig. 6. The reporting overhead of TDMA and SSMA is independent of κ, which differentiates them from TTDMA, whose operation strictly depends on the value of κ considered. Again, when comparing Fig. 6(c) and Fig. 6(d), the optimal value of t_q for TTDMA is in the same range as that for p_f, which confirms the optimality of the design.
B. OSA MAC Protocol Performance
To evaluate the effectiveness of all proposed and analyzed MAC protocols we have fixed C = 1 Mbps and p = e^{−1}/N, and we decouple the MAC evaluation from the sensing process (which is optimized jointly with the MAC in Section V-C), assuming that the spectrum sensing layer is able to obtain the required quality of detection. Again, as in Section V-A, results are presented separately for the error-free and error channel cases.
1) Impact of PU Activity Level on OSA MAC Protocols:
The results are presented in Fig. 7. We observe that PU activity degrades DCC and HCC for B 0 S 0 , irrespective of other network parameters.
Their performances are comparable in this case. DCC and HCC perform best with B_1S_0. The results
show that the non-buffering OSA MAC protocols are very sensitive to q p where the greatest throughput decrease is visible at low ranges of PU activity. On the other hand, with connection buffering we observe a linear relation between q p and R t .
2) Impact of SU Packet Size on OSA MAC Protocols:
The results are presented in Fig. 8. Obviously, for a larger SU packet size the OSA network is able to grab more capacity. However, when packets become excessively large the throughput saturates. The protocols with no buffering and no channel switching still obtain the lowest throughput, no matter what network setup is chosen. Interestingly, although intuitively B_1S_1 should obtain the highest channel utilization, it does not perform better than B_1S_0 due to the large switching time. With t_p approaching zero, DCC B_1S_1 would perform best, irrespective of the network setup, as we discuss below.
3) Impact of Switching Time on OSA MAC Protocols:
The results are presented in Fig. 9. In this experiment, we verify that for small t_p DCC B_1S_1 outperforms DCC B_1S_0. However, there is no huge difference between their performances even at t_p = 10 µs. This is because connection switching contributes relatively little additional channel utilization. Comparing the channel switching and buffering options, we conclude that much more channel utilization is obtained by connection buffering than by channel switching alone when N/M > 1.
4) Relation Between the Number of SUs and PU Channels:
Note that for all cases described in this section the simulation results agree with our analytical model.
Comparing our model and analytical results of [15] for DCC B 1 S 0 , see Fig. 10(b), we observe that prior analysis overestimated the performance resulting in more than 2 Mbps difference at N/M = 1.
Interestingly, if we consider the same set of parameters as in Section V-B1, then the model of [15] almost agrees with the model of our paper. Since the set of parameters chosen in Section V-B1 is similar to that of [15], we remark that the observations on the performance of this OSA MAC in [15] reflected the reality. (Figure caption: parameters as in Fig. 7, except for q_p = 0.1; E_1 and E_2 denote the error models described in Section IV-C5, and E_0 denotes the system with p_e = 0.)
5) Impact of Channel Errors on the OSA Multichannel MAC Performance:
To observe the impact of channel errors on the MAC protocol throughput we have set up the following experiment. For HCC and both network sizes, small and large, we have observed the average throughput for different SU packet lengths and channel error probabilities. The results are presented in Fig. 11. For comparison in Fig. 11 we present the system with no errors, denoted as E 0 . We kept values of p e realistic, not exceeding 1%.
Obviously the system with punctured errors, E_1, obtains a much higher throughput than system E_2, since more data can potentially be sent after one control packet exchange. Again, buffering allows to obtain a higher throughput in comparison to the non-buffered case, even with data channel errors present. Note that system E_2 is more prone to errors than E_1, observe Fig. 11(a) and Fig. 11(b).

6) Impact of the PU Traffic Distribution on the OSA Multichannel MAC Performance: To verify the robustness of the analysis against the assumed PU traffic model, we have simulated three distributions of the PU activity periods: i) (discretized) uniform (denoted symbolically as U), ii) log-normal (denoted symbolically as L), and for comparison iii) geometric (denoted symbolically as E) used in the analysis. We have tested the protocol performance for different combinations of "on"
and "off" times of PU activity. These were EE, LE, EL, LL (all possible combinations of "on" and "off" times obtained in [60, Tab. 3 and Tab. 4]) and additionally EU, UU, where first and second letter denotes selected distribution for "on" and "off" times, respectively. Due to the complexity of the analysis we show only the simulation results using the same simulation method of batch means, with the same parameters as described at the beginning of Section V.
The parameter of each distribution was selected such that the mean value was equal to 1/p_c for the "on" time and 1/(1 − p_c) for the "off" time. The uniform distribution has a non-continuous set of mean values, (a_b + a_n)/2, where a_b, a_n ∈ ℕ denote the lower and upper limits of the distribution, respectively, which precludes the existence of every mean "on" or "off" value for p_c ∈ (0, 1). To solve that problem a continuous uniform distribution with the required mean was used and rounded up to the nearest integer. This resulted in a slightly lower last peak of the probability mass function at a_n for 1/p_c ∉ ℕ. For the log-normal case, the parameters were selected such that c_l = 1/p_c and v_l = (1 − p_c)/p_c^2 are the mean and variance of the resulting discretized log-normal distribution. Note that the variance of the used discretized log-normal distribution is equal to the variance of the geometric distribution with the same mean value. The variance of the resulting discretized continuous uniform distribution could not be made equal to the variance of the geometric distribution for the reasons described earlier.
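For reproducibility, discretized log-normal periods matching the quoted mean c_l = 1/p_c and variance v_l = (1 − p_c)/p_c² can be drawn as sketched below; the moment matching is standard, while the ceiling-based discretization is an assumption, since the exact rounding used in the simulations is not fully specified here.

```python
# Sketch: draw discretized log-normal PU "on" periods with mean c_l = 1/p_c and
# variance v_l = (1 - p_c)/p_c**2 (i.e. matching the geometric distribution used
# in the analysis). The ceiling-based discretization is an assumption.
import math
import numpy as np

def lognormal_on_times(p_c, size, seed=0):
    rng = np.random.default_rng(seed)
    c_l = 1.0 / p_c
    v_l = (1.0 - p_c) / p_c ** 2
    sigma2 = math.log(1.0 + v_l / c_l ** 2)        # standard moment matching
    mu = math.log(c_l) - sigma2 / 2.0
    samples = rng.lognormal(mean=mu, sigma=math.sqrt(sigma2), size=size)
    return np.ceil(samples).astype(int)            # discretize to whole slots

x = lognormal_on_times(p_c=0.2, size=100000)
print(x.mean(), x.var())   # roughly 5 and 20 (the ceiling adds a small bias)
```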
The results are presented in Fig. 12. We focus on two network types, as indicated earlier: (i) large scale and (ii) small scale, with the parameters assumed as in Fig. 7. We select four values of q_p for clarity of presentation. The most important observation is that, irrespective of the considered distribution, DCC obtains relatively the same throughput, and the same relation between the different protocol options holds as was shown analytically in Fig. 7. If one wants to select the distribution combination with the highest throughput it would be LE and LL, with the obtained throughput being almost equal to the one obtained via analysis for the geometric distribution. The distributions with the lowest throughput are UU and EU, due to the difference in the second moment of the "on" time with respect to the other distributions. The difference in throughput between UU, EU and the remaining distributions is more visible for larger values of q_p.
C. Performance of Joint Spectrum Sensing and OSA MAC Protocols
Having results for spectrum sensing protocol and OSA MAC we join these two layers to form a complete OSA network stack. By means of exhaustive search we solve the optimization problem of (1).
We will also investigate the set of parameters that maximize R t for small and large scale network.
We divide our analysis into the macroscopic and microscopic cases, observing R_t for a small scale network with M = 3, N = 12, d = 5 kB, and a large scale network with M = 12, N = 40, d = 20 kB. For each case we select a set of spectrum sensing and OSA MAC protocol combinations that are feasible and, as we believe, most important to the research community. For a fixed set of parameters C = 1 Mbps, b = 1 MHz, p = e^{−1}/N, t_{d,max} = 1 ms (microscopic case), t_{d,max} = 2 s (macroscopic case), α = 1/M, t_t = 1 ms, p_{d,min} = 0.99, γ = −5 dB, q_p = 0.1, and t_p = 100 µs, we leave κ, t_e, n_g, and p_f as optimization variables.
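The exhaustive search over (κ, t_e, n_g, p_f) can be organized as in the following skeleton; the callables sensing_model and mac_throughput are hypothetical placeholders standing in for the expressions of Sections III and IV, and the constraint handling follows (1)-(2) as reconstructed earlier.

```python
# High-level sketch of the exhaustive search used to solve (1)-(2). The
# callables sensing_model and mac_throughput are placeholders standing in for
# the expressions of Sections III and IV; they are not part of the paper's text.
import itertools

def joint_optimize(kappas, t_es, n_gs, p_fs,
                   t_t, t_p, t_dmax, p_dmin, sensing_model, mac_throughput):
    best_Rt, best_cfg = 0.0, None
    for kappa, t_e, n_g, p_f in itertools.product(kappas, t_es, n_gs, p_fs):
        t_q, t_d, p_d = sensing_model(kappa, t_e, n_g, p_f)   # Section III
        if t_d > t_dmax or p_d < p_dmin:                      # constraints in (2)
            continue
        xi = (t_t - t_q - t_p) / t_t                          # sensing/switching overhead
        R_t = xi * mac_throughput(p_d, p_f)                   # R from Section IV
        if R_t > best_Rt:
            best_Rt, best_cfg = R_t, dict(kappa=kappa, t_e=t_e, n_g=n_g, p_f=p_f)
    return best_Rt, best_cfg
```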
1) Microscopic Model:
Here we focus only on DCC protocol, since collaborative spectrum sensing is only possible via a PU free control channel, which is inefficient to accomplish with HCC. Also, for sensing measurement dissemination we do not consider SSMA, which would be most difficult to implement in practice. The results are presented in Fig. 13. DCC B 1 S 0 with TTDMA is the best option, both for small scale and large scale network, see Fig. 13(a) and Fig. 13(b), respectively. Because of relatively high switching time B 1 S 1 performs slightly worse than B 1 S 0 , for small and large scale network. DCC B 0 S 0 with TDMA is the worst protocol combination, which confirms earlier results from Section V-A and Section V-B. Irrespective of network size it is always better to buffer SU connections preempted by PU than to look for vacant channels, compare again B 1 S 0 and B 0 S 1 in Fig. 13(a) and Fig. 13(b). The difference between B 0 S 0 and B 0 S 1 is mostly visible for a large network scenario, see Fig. 13(b), since with a large number of channels there are more possibilities to look for empty channels.
For all protocol combinations and both network sizes κ = 2 maximizes throughput performance, see Fig. 13(a). Interestingly, network size dictates the size of a sensing group. For small scale network, n g = 1 is the optimal value, see Fig. 13(a), but for a large network R t is maximized when n g = 3 (for B 0 S 0 ) and n g = 4 (for the rest). We can conclude that with a small network it is better to involve all nodes in sensing, while for larger networks it is better to divide them into groups, which agrees with the observation from Section V-A4. Moreover, we observe that the performance difference between TTDMA and TDMA is not as big as in Fig. 2 when parameters are optimized.
The most interesting result is observed for p_f: with the increase of protocol complexity the false alarm probability increases as well, and with an increase of p_f the quiet time decreases. Because buffering and switching improve the performance, there is more margin to design the spectrum sensing.

2) Macroscopic Model: The results are presented in Fig. 14. DCC obtains a higher throughput than HCC for the small scale network, and vice versa, compare Fig. 14(a) and Fig. 14(b), respectively. This confirms the observations of [15, Fig. 3], [34, Fig. 3]. Just like in Fig. 13(a), for the small scale network κ = 2 and n_g = 2 maximize R_t. For the large scale network, however, κ = 3 and n_g = 3 are optimal for TDMA, and κ = 4 and n_g = 4 for TTDMA.
This means that for large networks it is beneficial to split the network into smaller groups. Again, this confirms our findings from Section V-C1. For both network scenarios p_f and t_e are relatively the same for all protocols considered.
Note that for the large scale network in the macroscopic model, an SU takes more time to detect a PU than in the microscopic model, because the large t_{d,max} reduces the relative time overhead. The relaxation of the time restriction impacts the large scale network by requiring a greater value of κ to achieve the maximum throughput.
VI. CONCLUSION
We have presented a comprehensive framework enabling assessment of the performance of joint spectrum sensing and MAC protocol operation for OSA networks. In the model we have proposed we focused on the link layer throughput as the fundamental metric to assess performance. We have parameterized spectrum sensing architectures for energy detection based systems with collaborative measurements combining. We have proposed a novel spectrum sensing MAC denoted Truncated Time Division Multiple Access. We have also categorized multichannel MAC protocols for OSA networks based on their ability to buffer and switch existing SU connections on the arrival of a PU. Our analysis is supported by simulations which prove the accuracy of the obtained expressions.
Some of the design guidelines that need to be noted are as follows. For spectrum sensing, introducing TTDMA gives an improvement in the obtained performance in comparison to TDMA. Large networks, i.e. those having many channels and users, benefit from clustering, while for small networks it is better to create a small number of clusters such that the sensing time is optimized. When considering MAC protocol design for OSA, it is clear that more benefit comes from introducing SU connection buffering than channel switching for those SU connections that have been preempted by the PU. Interestingly, although intuition would suggest that MAC protocols combining SU connection buffering and channel switching would outperform all other protocols, due to the switching overhead this combination is usually inferior to protocols that involve only SU connection buffering.
Our future task will be to investigate the delay experienced when using any of the proposed OSA MAC protocols. We plan to develop comprehensive simulation software which will implement features not covered by our model, such as a queue per SU.
| 11,421 |
0910.4704
|
2129293544
|
We present an analytical framework to assess the link layer throughput of multichannel Opportunistic Spectrum Access (OSA) ad hoc networks. Specifically, we focus on analyzing various combinations of collaborative spectrum sensing and Medium Access Control (MAC) protocol abstractions. We decompose collaborative spectrum sensing into layers, parametrize each layer, classify existing solutions, and propose a new protocol called Truncated Time Division Multiple Access (TTDMA) that supports efficient distribution of sensing results in “K out of N” fusion rule. In case of multichannel MAC protocols, we evaluate two main approaches of control channel design with 1) dedicated and 2) hopping channel. We propose to augment these protocols with options of handling secondary user (SU) connections preempted by primary user (PU) by 1) connection buffering until PU departure and 2) connection switching to a vacant PU channel. By comparing and optimizing different design combinations, we show that 1) it is generally better to buffer preempted SU connections than to switch them to PU vacant channels and 2) TTDMA is a promising design option for collaborative spectrum sensing process when K does not change over time.
|
A similar analysis, but with a different channelization structure, where the PU occupied more than one SU channel (contrary to @cite_48 @cite_55 ) was performed in @cite_6 . The authors addressed the cases of (i) connection blocking, and (ii) channel reservation and switching of SU connections to empty channels on PU arrival. This analysis was later extended to the case of finite SU population and packet queuing @cite_39 , and buffering and switching of SU connections preempted by PU arrivals @cite_15 . Again, in all papers listed above the spectrum sensing process was assumed to have no overhead and perfect reliability. Moreover the connection arrangement process for SUs was not considered.
|
{
"abstract": [
"We analyze the performance of a wireless system consisting of a set of secondary users opportunistically sharing bandwidth with a set of primary users over a coverage area. The secondary users employ spectrum sensing to detect channels that are unused by the primary users and hence make use of the idle channels. If an active secondary user detects the presence of a primary user on a given channel, it releases the channel and switches to another idle channel, if one is available. In the event that no channel is available, the call waits in a buffer until either a channel becomes available or a maximum waiting time is reached. Spectrum sensing errors on the part of a secondary user cause false alarm and mis-detection events, which can potentially degrade the quality-of-service experienced by primary users. We derive system performance metrics of interest such as blocking probabilities. Our results suggest that opportunistic spectrum sharing can significantly improve spectrum efficiency and system capacity, even under unreliable spectrum detection. The proposed model and analysis method can be used to evaluate the performance of future opportunistic spectrum sharing systems.",
"We develop a general framework for analyzing the performance of an opportunistic spectrum sharing (OSS) wireless system at the session level with Markovian arrivals and phasetype service times. The OSS system consists of primary or licensed users of the spectrum and secondary users that sense the channel status and opportunistically share the spectrum resources with the primary users in a coverage area. When a secondary user with an active session detects an arrival of a primary session in its current channel, the secondary user leaves the channel quickly and switches to an idle channel, if one is available, to continue the session. Otherwise, the secondary session is preempted and moved to a preemption queue. The OSS system is modeled by a multi-dimensional Markov process. We derive explicit expressions for the related transition rate matrices using matrix-analytic methods. We also obtain expressions for several performance measures of interest, and present both analytic and simulation results in terms of these performance measures. The proposed OSS model encompasses a large class of specific models as special cases, and should be useful for modeling and performance evaluation of future opportunistic spectrum sharing systems.",
"A Markov chain analysis for spectrum access in licensed bands for cognitive radios is presented and forced termination probability, blocking probability and traffic throughput are derived. In addition, a channel reservation scheme for cognitive radio spectrum handoff is proposed. This scheme allows the tradeoff between forced termination and blocking according to QoS requirements. Numerical results show that the proposed scheme can greatly reduce forced termination probability at a slight increase in blocking probability",
"A new loss model for cognitive radio spectrum access with finite user population are presented, and exact solution for the model and its approximation for computation scalability are given. Our model provides the investigation of the delay performance of a cognitive radio system. We study the delay performance of a cognitive radio system under various primary traffic loads and spectrum band allocations.",
"Cognitive radio wireless networks is an emerging communication paradigm to effectively address spectrum scarcity challenge. Spectrum sharing enables the secondary unlicensed system to dynamically access the licensed frequency bands in the primary system without any modification to the devices, terminals, services and networks in the primary system. In this paper, we propose and analyze new dynamic spectrum access schemes in the absence or presence of buffering mechanism for the cognitive secondary subscriber (SU). A Markov approach is developed to analyze the proposed spectrum sharing policies with generalized bandwidth size in both primary system and secondary system. Performance metrics for SU are developed with respect to blocking probability, interrupted probability, forced termination probability, non-completion probability and waiting time. Numerical examples are presented to explore the impact of key systems parameters like the traffic load on the performance metrics. Comparison results indicate that the buffer is able to significantly reduce the SU blocking probability and non-completion probability with very minor increased forced termination probability. The analytic model has been verified by extensive simulation."
],
"cite_N": [
"@cite_48",
"@cite_55",
"@cite_6",
"@cite_39",
"@cite_15"
],
"mid": [
"2163965981",
"2171528576",
"2101640072",
"2156433293",
"2129559079"
]
}
|
Performance of Joint Spectrum Sensing and MAC Algorithms for Multichannel Opportunistic Spectrum Access Ad Hoc Networks
|
It is believed that Opportunistic Spectrum Access (OSA) networks will be one of the primary forces in combating spectrum scarcity [2] in the upcoming years [3], [4]. Therefore, OSA networks [5], [6] have become the topic of rigorous investigation by the communications theory community. Specifically, the assessment of spectrum sensing overhead on OSA medium access control (MAC) performance recently gained a significant attention.
A. Research Objective
In the OSA network performance analysis, a description of the relation between the primary (spectrum) user (PU) network and the secondary (spectrum) user (SU) network can be split into two general models: macroscopic and microscopic. In the macroscopic OSA model [7], [8], [9] it is assumed that the time limit to detect a PU and vacate its channel is very long compared to the SU time slot, frame or packet length duration. Such a time limit is assumed to be given by a radio spectrum regulatory organization.
For example, the timing requirements for signal detection of TV transmissions and low power licensed devices operating in TV bands by IEEE 802.22 networks [10] (including transmission termination and channel vacancy time, i.e. a time it takes the SU to stop transmitting from the moment of detecting PU) must be equal to or smaller than 4.1 s [11,Tab. 15.5], while the frame and superframe duration of IEEE 802.22 are equal to 10 ms and 160 ms, respectively [11]. Also, in the macroscopic model it is assumed that the PU channel holding time, i.e. the time in which the PU is seen by the SU as actively transmitting, is much longer than the delay incurred by the detection process performed at the SU. As a result it can be assumed in the analysis that, given high PU detection accuracy (which is a necessity), OSA network performance is determined by the traffic pattern of the SUs. That is, it depends on the total amount of data to be transmitted by the SU network, the duration of individual SU data packets and the number of SU nodes. In other words the PU bandwidth resource utilization by the SU is independent of PU detection efficiency.
In the microscopic OSA model, more popular than its macroscopic counterpart due to analytic challenges, the detection time is short in relation to the shortest transmission unit of the OSA system.
Detection is also performed much more frequently than in the macroscopic model, i.e. for every SU packet [12], [13] or in every time slot [14], [15], [16], [17], [18]. Also, the microscopic model assumes much higher PU activity than the macroscopic model, which justifies frequent detection cycles. Since the detection overhead is much larger than in the macroscopic model, the analysis of utilization of resources (temporarily unoccupied by PU) by OSA network cannot be decoupled from the analysis of the PU signal detection phase. Therefore, while the distinction between macroscopic and microscopic models are somehow fluid, it is important to partition the two cases and compare them in a systematic manner. More importantly, the comparison should be based on a detailed OSA multichannel and multiuser ad hoc network model [19,Sec. 7.4], which would not ignore the overhead from both the physical layer (PHY) and MAC layers of different cooperative and distributed spectrum sensing strategies [19,Tab. 7.1] and, in case of microscopic model, account for different channel access procedures and connection management strategies for the SUs upon PU detection, like buffering or switching to a vacant channel. Finally, the comparison should be realized using tractable analytical tools.
C. Our Contribution
In this paper, we present a unified analytical framework to design the spectrum sensing and the OSA data MAC jointly, for the macroscopic and microscopic cases. This design framework provides the (i) means of comparing different spectrum sensing techniques plus MAC architectures for OSA networks and (ii) spectrum sensing parameters such as observation time and detection rate for given design options. As a metric for optimization and comparison, we consider the average link layer OSA network throughput.
Our model will account for the combined effects of the cooperative spectrum sensing and the underlying MAC protocol. For spectrum sensing, we will consider several architectures parametrized by sensing radio bandwidth, the parameters of the sensing PHY, and the parameters of the sensing MAC needed to exchange sensing data between individual OSA nodes. Along with classifying most of the well known sensing MAC protocols, we introduce a novel protocol called Truncated Time Division Multiple Access (TTDMA) that supports efficient exchange of individual sensing decisions in "κ out of N " fusion rule.
For the data MAC we will consider two protocol abstractions, (i) Dedicated Control Channel (DCC) and
(ii) Hopping Control Channel (HCC), as analyzed in [15], [34] with novel extensions. That is, given the designs of [25], [26], [27], [30], we will analyze MAC protocols that (i) allow (or forbid) to buffer existing SU connections on the event of PU arrival, and (ii) allow (or forbid) to switch the SU connections preempted by the PU to the empty channels. Please note that in the case of the analytical model proposed in [15] for the SU connection buffering OSA MAC schemes we present an exact solution. Finally, using our framework, we compute the maximum link layer throughput for most relevant combinations of spectrum sensing and MAC, optimizing parameters of the model jointly, both for the microscopic and macroscopic models.
The rest of the paper is organized as follows. System model and a formal problem description is presented in Section II. Description of spectrum sensing techniques and their analysis is presented in Section III. Analysis of MAC strategies are presented in Section IV. Numerical results for spectrum sensing process, MAC and joint design framework are presented in Section V. Finally the conclusions are presented in Section VI.
II. SYSTEM MODEL AND FORMAL PROBLEM DESCRIPTION
The aim of this work is to analyze link layer throughput accounting for different combinations of MAC, spectrum sensing protocols and regulatory constraints. The model can later be used to optimize the network parameters jointly to maximize the throughput, subject to regulatory constraints. Before formalizing the problem, we need to introduce the system model, distinguishing between the microscopic and macroscopic approaches.
A. System Model 1) Microscopic Model: For two multichannel MAC abstractions considered, i.e. DCC and HCC, we distinguish between the following cases: (i) when SU data transfer interrupted by the PU is being buffered (or not) for further transmission and (ii) when existing SU connection can switch (or not) to a free channel on the event of PU arrival (both for buffering and non-buffering SU connection cases). Finally, we will distinguish two cases for DCC where (i) there is a separate control channel not used by the PU and (ii) when control channel is also used by the PU for communication. All these protocols will be explained in detail in Section IV.
We assume slotted transmission within the SU and PU networks, where PU and SU time slots are equal and synchronized with each other. The assumptions of slotted and synchronous transmission between PU and SU are commonly made in the literature, either while analyzing theoretical aspects of OSA (see [12]) or practical OSA scenarios (see [16, Fig. 2] in the context of secondary utilization of GSM spectrum, or [38] in the context of secondary IEEE 802.16 resource usage). Our model can be generalized to the case where PU slots are offset in time from SU slots; however, this would require additional analysis of optimal channel access policies, see for example [36], [39], [40], which is beyond the scope of this paper. We also note that the synchrony assumption allows one to obtain upper bounds on the throughput when transmitting over a slot-asynchronous interface [41].
The total slot duration is t t µs. It is divided in three parts: (i) the detection part of length t q µs, denoted as quiet time, (ii) the data part of length t u µs, and if communication protocol requires channel switching (iii) switching part of length t p µs. The data part of the SU time slot is long enough to execute one request to send and clear to send exchange [15], [34]. For the PU the entire slot of t t µs is used for data transfer, see Fig. 1(a).
Our model assumes that there are M channels having fixed capacity C Mbps that are randomly and independently occupied by the PU in each slot with probability q p . There are N nodes in the SU network, each one communicating directly with another SU on one of the available PU channels in one hop fashion.
Also, we assume no merging of the channels, i.e. only one channel can be used by a communicating pair of SUs at a time. SUs send packets with geometrically distributed length, with an average of 1/q = d/(C t_u) slots for DCC and 1/q = d/(C (t_u + t_p)) slots for HCC [15], [34, Sec. 3.2.3], where d is the average packet size given in bits. The difference between the average packet lengths for DCC and HCC results from the switching time overhead of HCC, because during channel switching SUs do not transfer any data even though they occupy the channel. We therefore virtually prolong the data packet by t_p for HCC to keep the comparison fair.
Every time a node tries to communicate with another node it accesses the control channel and transmits a control packet with probability p to a randomly selected and non-occupied receiver. A connection is successful when only one node transmits a control packet in a particular time slot. The reasons for selecting a variant of S-ALOHA as the contention resolution strategy are manifold. First, in reality each real-life OSA multichannel MAC protocol belonging to either of the considered classes, i.e. HCC or DCC, will use its own contention resolution strategy. Implementing each and every approach in our analysis (i) would significantly complicate the analysis and, most importantly, (ii) would jeopardize the fairness of the comparison. Therefore a single protocol was needed for the analytical model. Since S-ALOHA is a widespread and well understood protocol in wireless networks and is a foundation of many other collision resolution strategies, including CSMA/CA, it has been selected for the system model herein.
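To make the control-channel contention concrete, the following minimal Python sketch (our illustration, not part of the original model) computes the probability that exactly one of the contending SUs transmits in a slot; this is essentially the role later played by the connection arrangement probability in Section IV.

def aloha_success_prob(n_free, p):
    # Probability that exactly one of n_free contending SUs transmits a control
    # packet with access probability p (the S-ALOHA success condition above).
    if n_free <= 0:
        return 0.0
    return n_free * p * (1 - p) ** (n_free - 1)

print(aloha_success_prob(10, 0.1))  # about 0.387; maximized when p is close to 1/n_free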
In each quiet phase every SU node performs PU signal detection based on signal energy observation.
Since we assume that OSA nodes are fully connected in a one hop network, each node observes on average the same signal realization in each time slot [13], [18], [42]. PU channels detected by the SU are modeled as Additive White Gaussian Noise channels experiencing Rayleigh fading. Therefore, to increase the PU detectability by the OSA network we consider collaborative detection with hard decision combining based on the "κ out of N" rule, as in [43], [44]. Hence we divide the quiet phase into (i) the sensing phase of length t_s µs and (ii) the reporting phase of length t_r µs. The sensing phase is of the same length for all nodes. For simplicity we do not consider in this study sensing methods that adapt the sensing time to propagation conditions as in [45]. In the sensing phase, nodes perform their local measurements. Then, during the reporting phase, nodes exchange their sensing results and each node makes a decision individually by combining the individual sensing results. We will analyze different PHY and MAC approaches to collaborative spectrum sensing, especially (i) methods to assign sensing frequencies to users, (ii) rules for combining the sensing results, and (iii) multiple access schemes for measurement reporting. In this paper we do not consider sensing strategies applicable to single channel OSA networks [46], two stage spectrum sensing [8], or sensing MAC protocols based on random access [47], due to their excessive delay. We will explain our spectrum sensing approaches in more detail in Section III. Further, we assume an error-prone channel for the sensing layer as well as for the data layer, where the probability of error during transmission is denoted as p_e.
Finally, we consider two regulatory constraints under which the OSA network is allowed to utilize the PU spectrum provided the channel is idle: (i) the maximum detection delay t_d,max, i.e. a time limit within which an SU must detect a PU, and (ii) the minimum detection probability p_d,min, i.e. the probability with which an OSA system has to detect a PU signal with minimum signal to noise ratio γ. Note that in the event of mis-detection and subsequent SU transmission in a channel occupied by a PU, a packet fragment is considered successfully transmitted, since in our model the transmission power of the SU is much higher than the interference from the PU, and the regulatory requirements considered here do not constrain the SU transmission power (refer for example to the IEEE 802.22 draft, where Urgent Coexistent Situation packets are transmitted on the same channel as an active PU [10], [11]). The opposite case would be to assume that a packet fragment is considered lost and retransmitted; this approach, however, requires an acknowledgement mechanism for a lost packet fragment, see for example [17], [41, Sec. II], which contradicts the model assumption of geometrically distributed SU packets. Moreover, maximum transmission power is a metric specific to overlay OSA systems [19, Sec. 2.2.5 and 8.2.1] where typically no spectrum sensing is considered. Also, we do not consider a metric based on a maximum allowable level of collisions between PU and SU. Note that the parameters of the introduced model are summarized in Table I and the abbreviations are summarized in Table II.
2) Macroscopic Model: We assume the same system model as for the microscopic case, except for the following differences. The OSA network performs detection rarely, and the PU is stable for the duration of OSA network operation, i.e. it is either transmitting constantly on a channel or stays idle. In other words, the quiet period spans multiple time slots, see Fig. 1(b). Also, since the PU is considered stable on every channel, we do not consider all types of OSA MAC protocols introduced for the microscopic model.
Instead we use classical DCC and HCC models proposed in [34] with the corrections of [15] accounting for the incomplete transition probability calculations whenever OSA network occupied all PU channels and new connection was established on the control channel.
B. Formal Problem Description
To compute the maximum throughput for different combinations of protocols and models, we define an optimization problem. The objective is the OSA network link layer throughput R t . Therefore, considering the regulatory constraints given above we need to
maximize R t = ξR subject to p d = p d,min , t d ≤ t d,max ,(1)
where t d is the detection time, i.e. the time to process whole detection operation as described in Section III-D, R is the steady state link layer throughput without sensing and switching overhead, which will be computed in Section IV, and
ξ = (t_t − t_q − t_p)/t_t (2)

is the fraction of the slot available for SU data transmission in the microscopic model; the second condition of (2) defines ξ analogously for the macroscopic model, over the inter-sensing period. Table I, referred to above, lists the remaining symbols: P^(i)_{x,y} and R^(z)_{x,y}, the total PU arrival probabilities for the no-buffering and the buffering case; T^(j)_k and S^(j)_m, the termination and connection arrangement probabilities; and S̄^(1)_m, Ŝ^(1)_m. Note that the throughput R in (1) is itself affected by p_f, as will be shown in Section IV. Also note that t_p is removed from the second condition of (2), since the switching time is negligible in comparison to the inter-sensing time.
III. LAYERED MODEL OF SPECTRUM SENSING ANALYSIS
To design the spectrum sensing, we follow the approach of [7] in which the spectrum sensing process is handled jointly by (i) the sensing radio, (ii) the sensing PHY, and (iii) the sensing MAC. Using this layered model we can compare existing approaches to spectrum sensing and choose the best sensing architecture in a systematic way. Since the parameters of the design framework in (1) are determined by the choices of individual layers, we describe and parametrize each layer of the spectrum sensing, later describing cross-layer parameters.
A. Sensing Radio
The sensing radio scans the PU spectrum and passes the spectrum sensing result to the sensing PHY for analysis. The sensing radio bandwidth is given as αMb, where α is the ratio of the bandwidth of the sensing radio to the total PU bandwidth and b MHz is the bandwidth of each PU channel. With α > 1/M a node can sense multiple channels at once; however, the cost of such a wideband sensing radio increases.
B. Sensing PHY
The sensing PHY analyzes the measurements from the sensing radio to determine if a PU is present in a channel. Independent of the sensing algorithm, such as energy detection, matched filter detection or feature detection [48], [49], there exists a common set of parameters for the sensing PHY: (i) time to observe the channel by one node t e µs, (ii) the PU signal to noise ratio detection threshold θ, and (iii) a transmit time of one bit of sensing information t a = 1/C µs. We denote conditional probability of sensing result p ij , i, j ∈ {0, 1}, where j = 1 denotes PU presence and j = 0 otherwise, and i = 1
indicates the detection result of PU being busy and i = 0 otherwise. Observe that p 10 = 1 − p 00 and
p 11 = 1 − p 01 .
As noted in Section II-A, we consider energy detection as the PU detection algorithm since it does not require a priori information of the PU signal. For this detection method in Rayleigh plus Additive
White Gaussian Noise channel p 10 is given as [15, Eq. (1)]
p_10 = Γ(ǫ, θ/2) / Γ(ǫ) , (3)
and p_11 is given as [15, Eq. (3)]

p_11 = e^{−θ/2} [ Σ_{h=0}^{ǫ−2} θ^h / (h! 2^h) + ((1 + γ)/γ)^{ǫ−1} ( e^{θγ/(2+2γ)} − Σ_{j=0}^{ǫ−2} (θγ)^j / (j! (2 + 2γ)^j) ) ] , (4)
where Γ(·) and Γ(·, ·) are complete and incomplete Gamma functions, respectively, and ǫ = ⌊t e αM b⌋ is a time-bandwidth product. By defining G ǫ (θ) = p 10 and θ = G −1 ǫ (p 10 ), we can derive p 11 as a function of p 10 and t e .
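As a numerical check of Eqs. (3)-(4) in the reconstructed form above, the following Python sketch (our code; scipy is assumed, and the function names are ours) evaluates the per-node false alarm and detection probabilities of the energy detector and inverts the threshold.

import math
from scipy.special import gammaincc, gammainccinv

def p_false_alarm(theta, eps):
    # Eq. (3): regularized upper incomplete gamma, Gamma(eps, theta/2)/Gamma(eps)
    return gammaincc(eps, theta / 2.0)

def threshold_from_p10(p10, eps):
    # Inverse of Eq. (3): theta = 2 * G_eps^{-1}(p10)
    return 2.0 * gammainccinv(eps, p10)

def p_detection(theta, eps, gamma_snr):
    # Eq. (4): detection probability over a Rayleigh-faded AWGN channel
    # (eps is the integer time-bandwidth product, eps >= 2)
    s1 = sum(theta**h / (math.factorial(h) * 2**h) for h in range(eps - 1))
    s2 = sum((theta * gamma_snr)**j / (math.factorial(j) * (2 + 2 * gamma_snr)**j)
             for j in range(eps - 1))
    bracket = s1 + ((1 + gamma_snr) / gamma_snr)**(eps - 1) * (
        math.exp(theta * gamma_snr / (2 + 2 * gamma_snr)) - s2)
    return math.exp(-theta / 2.0) * bracket

# e.g. fix p10 = 0.1, eps = 5, SNR gamma = 2 (about 3 dB)
theta = threshold_from_p10(0.1, 5)
print(p_false_alarm(theta, 5), p_detection(theta, 5, 2.0))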
C. Sensing MAC
The sensing MAC is a process responsible for sensing multiple channels, sharing sensing results with other users, and making a final decision on the PU presence. Because of the vast number of possibilities for sensing MAC algorithms it is hard to find a general set of parameters. Instead, we derive cross-layer parameters for a specific option of the sensing MAC. This methodology can be applied to any new sensing MAC scheme. We now introduce classifications which will be used in the derivation of cross-layer parameters.
1) Sensing Strategy for Grouping Channels and Users:
Each SU has to determine which channel should be sensed among the M channels. To reduce sensing and reporting overhead, the OSA system can divide users and channels into n_g sub-groups [50]. Sub-group i ∈ {1, · · · , n_g} is formed by n_u,i users who should sense m_s,i channels to make a final decision cooperatively. Assuming that all users are equally divided into groups, m_s,i ∈ {⌊M/n_g⌋, ⌈M/n_g⌉} and n_u,i ∈ {⌊N/n_g⌋, ⌈N/n_g⌉}. Note that for M/n_g ∈ N and N/n_g ∈ N all sub-groups have the same n_u,i = N/n_g and m_s,i = M/n_g. Given N and M, if n_g is small, more users are in a group and the collaboration gain increases, but at the same time more channels must be sensed, which results in more time overhead for sensing. For large n_g, this relation is reversed.
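A small sketch (ours, with example values that are not taken from the paper) of the near-equal grouping described above:

def split_equally(total, n_g):
    # nearly equal split: each part is floor(total/n_g) or ceil(total/n_g)
    base, extra = divmod(total, n_g)
    return [base + 1 if i < extra else base for i in range(n_g)]

M, N, n_g = 15, 30, 4                 # example values only
m_s = split_equally(M, n_g)           # channels per sub-group, m_{s,i}
n_u = split_equally(N, n_g)           # users per sub-group, n_{u,i}
print(m_s, n_u)                       # [4, 4, 4, 3] [8, 8, 7, 7]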
2) Combining Scheme: By combining the sensing results of other users, an OSA network makes a more reliable decision on the PU state. As considered in [13], [51], we will take κ as a design parameter for the sensing MAC and find an optimum value to maximize the performance. Note that for the case of N-user cooperation, if κ = 1 the combining logic becomes the "or" rule [19, Sec. 3.2], [42, Sec. III-C], and if κ = N it becomes the "and" rule.
3) Multiple Access for Measurement Reporting:
To transmit sensing results of multiple users through the shared media, a multiple access scheme is needed. Note that this multiple access scheme is only for the reporting process, different from the multiple access for data transfer. We consider the following approaches.
a) Time Division Multiple Access (TDMA):
This is a static and well-organized multiple access scheme for which a designated one bit slot for sensing report transmission is assigned to each user [43], [50]. b) TTDMA: In TDMA, when the SU receives all the reporting bits from other users the SU makes a final decision of presence of PU on the channel. However, in OSA network using TTDMA SUs may not need to wait until receiving the last reporting bit, because for the "κ out of N " rule, a reporting operation can stop as soon as κ one bits denoting PU presence are received. This sensing MAC aims at reducing the reporting overhead, but unfortunately we have not seen any paper proposing and discussing TTDMA.
c) Single Slot Multiple Access (SSMA):
For this scheme, known also as the boosting protocol [52], only one bit slot is assigned for reporting and all SUs use this slot as a common reporting period. Any SU that detects a PU transmits one bit in the common designated slot. Otherwise, a user does not transmit any bit in the designated slot. Then, reporting bits from SUs who detect a PU are overlapped and as a result all power of the slot is summed up. By measuring the power in the designated slot, a SU can determine whether the primary user exists or not. We assume perfect power control and perfect synchronization.
Even though this may not be practical, because carrier frequency or the phase offset cannot be avoided in real systems, this scheme serves as an upper bound for sensing MAC performance. For the analysis of SSMA in isolation but in a more realistic physical layer conditions the reader is referred to [53], [54].
D. Cross-Layer Parameters
Considering the combined impact of the individual layers, we derive the cross-layer parameters of the framework described in (1). More specifically, these are t_q and t_d, derived as functions of the individual layer parameters, and p_f and p_d, denoting the final network-wide probabilities of false alarm and detection, respectively.
1) Detection Time t d and Quiet Time t q : Detection time t d is defined as the time duration from the point that a SU starts to sense, to the point that a SU makes a final decision on PU presence. Regardless of the data transfer and spectrum sensing time overlap, the final detection decision is made only after combining the sensing group's reported information [55]. Thus t d is the time from the start of the sensing phase to the end of the reporting phase, i.e. t d = t s + t r .
Since the data transfer may not be possible during sensing or reporting phases t q ≤ t d , depending on the approach. When spectrum sensing and data transfer are divided in time division manner t q = t s + t r .
Note that three other methods sharing the same problem are possible (they will not be considered in the remainder of the paper): (i) simultaneous reporting and data, which can be implemented by using the separate channel as in [56], for which t q = t s , (ii) simultaneous sensing and data, implemented by using the frequency hopping method as in [57], for which t q = t r , and (iii) simultaneous sensing, reporting, and data for which t q = 0. Conceptually, simultaneous sensing, reporting, and data transfer is possible and seems most efficient but we have not found any implementation of it in the literature. Note that in order to implement simultaneous sensing and transmission at least two radio front ends are needed, which increases the total cost of the device.
Define m̄_s as the number of individual sensing events needed to complete the sensing operation and m̄_r as the average number of bits to report. Then the sensing time and the reporting time can be calculated as t_s = m̄_s t_e and t_r = m̄_r t_a. Note that m̄_s is affected by the bandwidth of the sensing radio, because a wide sensing radio can scan multiple channels at once. For the case that the sensing radio is narrower than the bandwidth to sense, i.e. α < max{m_s,1, · · · , m_s,n_g}/M, we assume that an SU monitors all channels by sequential sensing [33], because the reporting phase should be synchronized after all SUs finish the sensing phase. With this assumption m̄_s = max{m_s,1, · · · , m_s,n_g}/(αM), because even when the bandwidth to sense is less than that of the sensing radio it still takes one sensing cycle to obtain the information. For m̄_r, because there are n_g groups in an OSA system, m̄_r = Σ_{i=1}^{n_g} m̄_{r,i}, where m̄_{r,i} depends on the multiple access scheme for reporting, which we compute below. a) TDMA: All n_u,i users should transmit the sensing results of the m_s,i channels. Thus, m̄_{r,i} = n_u,i m_s,i. b) TTDMA: For κ < n_u,i/2, the reporting process ends as soon as κ ones are received. We introduce a variable δ which is the number of bits when the reporting process finishes. Thus, there should be κ − 1 ones within the first δ − 1 bits and the δ-th bit should be one. Because the range of δ is from κ to n_u,i, the average number of bits for this condition is derived as
m_{1,i} = Σ_{δ=κ}^{n_u,i} C(δ−1, κ−1) [ (1 − q_p) δ p_00^{δ−κ} p_10^κ + q_p δ p_01^{δ−κ} p_11^κ ] . (5)
Moreover, if the number of received zeros, denoting PU absence, equals to n u,i − κ + 1, the reporting process will stop because even if the remaining bits are all one, the number of ones must be less than κ. Then the reporting process stops at δ-th bit if δ − n u,i + κ − 1 bits of one are received within δ − 1 bits and zero is received at δ-th bit. The range of δ is from n u,i − κ + 1 to n u,i , and thus the average number of bits for this condition is
m_{2,i} = Σ_{δ=ν_i}^{n_u,i} C(δ−1, δ−ν_i) [ (1 − q_p) δ p_00^{ν_i} p_10^{δ−ν_i} + q_p δ p_01^{ν_i} p_11^{δ−ν_i} ] , (6)
where
ν_i = n_u,i − κ + 1. Therefore, because there are m_s,i channels to sense in group i, m̄_{r,i} = m_s,i (m_{1,i} + m_{2,i}).
For the case κ ≥ n_u,i/2, m_{1,i} is calculated by counting zeros and m_{2,i} by counting ones. Thus, we use m̄_{r,i} = m_s,i (m_{1,i} + m_{2,i}) again, replacing κ with n_u,i − κ + 1, p_00 with p_10 and p_01 with p_11.
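The expected reporting length of TTDMA in Eqs. (5)-(6) can be evaluated directly; the sketch below (our Python code, following the reconstruction above) also applies the substitution described for κ ≥ n_u,i/2, interpreted here as swapping the roles of ones and zeros.

from math import comb

def ttdma_report_bits(n_u, m_s, kappa, q_p, p00, p10, p01, p11):
    # Average number of reporting bits for one sub-group, m_s * (m1 + m2)
    if kappa >= n_u / 2:                       # count zeros instead of ones
        kappa = n_u - kappa + 1
        p00, p10 = p10, p00
        p01, p11 = p11, p01
    m1 = sum(comb(d - 1, kappa - 1) *
             ((1 - q_p) * d * p00**(d - kappa) * p10**kappa +
              q_p * d * p01**(d - kappa) * p11**kappa)
             for d in range(kappa, n_u + 1))   # Eq. (5)
    nu = n_u - kappa + 1
    m2 = sum(comb(d - 1, d - nu) *
             ((1 - q_p) * d * p00**nu * p10**(d - nu) +
              q_p * d * p01**nu * p11**(d - nu))
             for d in range(nu, n_u + 1))      # Eq. (6)
    return m_s * (m1 + m2)

print(ttdma_report_bits(n_u=8, m_s=4, kappa=2, q_p=0.3,
                        p00=0.9, p10=0.1, p01=0.1, p11=0.9))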
Because we assumed so far that κ is known to each node in the network, OSA nodes know when to stop reporting measurements and start data communication without being instructed by external parties.
For comparison we analyze another type of TTDMA, denoted as κTTDMA, where a cluster head node makes the decision to stop the reporting phase in the OSA network. For example, this approach may be necessary if the κ value is updated in real time. In the worst case scenario this approach requires two bits to be reported by the SU, i.e. one for sending the sensing data and one for an acknowledgment from the cluster head. Then (5) and (6) are adjusted accordingly to account for the additional acknowledgment bit.
2) Probabilities of False Alarm p_f and Detection p_d: The network-wide probability of false alarm is

p_f = (1/n_g) Σ_{i=1}^{n_g} p_f,i , (7)

where p_f,i is the probability of false alarm of sub-group i. Using (7) we can also derive p_d by substituting p_f,i with p_d,i.
a) TDMA: Since a final decision is made only after all n_u,i reported bits are received,

p_f,i = Σ_{δ=κ}^{n_u,i} C(n_u,i, δ) p̃_10^δ p̃_00^{n_u,i − δ} , (8)

where p̃_x = (1 − p_e) p_x + p_e (1 − p_x) for p_x ∈ {p_10, p_00}, while p_d,i is derived from (8) by substituting p̃_10 with p̃_11 and p̃_00 with p̃_01.
b) TTDMA: In this case the SU does not need to receive n_u,i bits to make a final decision, because the reporting phase ends when the number of ones reaches κ. To derive p_f,i for this case, we introduce a variable β denoting the number of zeros. The total number of reporting bits is then κ + β, provided the last bit is a one, because otherwise the reporting phase would have ended at fewer than κ + β bits. Therefore, there should be β zeros within the first κ + β − 1 bits and the last bit, carrying the κ-th one, should be one. Because β can vary from 0 to n_u,i − κ,

p_f,i = Σ_{β=0}^{n_u,i − κ} C(κ + β − 1, β) p̃_10^κ p̃_00^β . (9)

Finally p_d,i is obtained from (9) by substituting p̃_10 with p̃_11 and p̃_00 with p̃_01.
c) SSMA: Obviously, the reporting process for SSMA is the same as for TDMA.
Therefore p f,i and p d,i are defined the same as for TDMA.
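The "κ out of N" fusion probabilities of Eqs. (8)-(9), including the reporting-channel error adjustment p̃, can be computed as in the following sketch (our Python code; passing p_11/p_01 in place of p_10/p_00 yields p_d,i).

from math import comb

def tilde(p, p_e):
    # reported bit flipped with probability p_e, as in Eq. (8)
    return (1 - p_e) * p + p_e * (1 - p)

def fusion_tdma(p10, p00, n_u, kappa, p_e=0.0):
    # Eq. (8): all n_u bits are reported (TDMA and SSMA)
    a, b = tilde(p10, p_e), tilde(p00, p_e)
    return sum(comb(n_u, d) * a**d * b**(n_u - d) for d in range(kappa, n_u + 1))

def fusion_ttdma(p10, p00, n_u, kappa, p_e=0.0):
    # Eq. (9): reporting stops after the kappa-th 'busy' bit (TTDMA)
    a, b = tilde(p10, p_e), tilde(p00, p_e)
    return sum(comb(kappa + beta - 1, beta) * a**kappa * b**beta
               for beta in range(n_u - kappa + 1))

# the two rules yield the same decision probability, as expected
print(fusion_tdma(0.1, 0.9, 8, 2, 0.01), fusion_ttdma(0.1, 0.9, 8, 2, 0.01))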
IV. MULTICHANNEL OSA MAC PROTOCOL ANALYSIS
In this section we present the analysis of throughput R for all considered combinations of MAC protocol architectures. As noted in Section I-C, we propose a set of new multichannel MAC protocols for OSA. We will first describe their operation, later presenting the analysis framework.
A. Description of New Multichannel MAC protocols for OSA
We consider two major groups of MAC protocols for OSA: (i) those enabling buffering of the SU connections preempted by the PU arrival, and (ii) those enabling switching of the SU connections to a vacant channel when preempted. In the former group, when the PU arrives the existing SU connection will pause at the time of preemption and resume on the same channel as soon as the PU goes idle. We assume that the SU always waits for the PU to finish its transmission. The case where the buffered SU connection expires after a predefined time, not analyzed here, is presented in [22] for the centralized network. We do not consider any channel reservation schemes for potential SU connections to be buffered [25]. When buffering is not possible, the preempted SU connection is considered as lost and a new connection must be established on the control channel. In the latter group, when the PU arrives the existing SU connection will look for a new empty channel, to continue transmission. If such a channel cannot be found the connection is lost. Without channel switching, the exiting SU connection is lost as soon as the PU preempts the channel.
Obviously we can have four combinations of these groups for OSA MAC, all of which have been considered in the analysis: (i) no buffering and no channel switching [30], denoted as B_0S_0, where SU connections preempted by a PU are lost; (ii) no buffering and channel switching [24], [25], [26], denoted as B_0S_1, where SU connections preempted by a PU switch to a free channel and connections that cannot find a free channel are blocked; (iii) buffering and no channel switching [15], [22], [23], denoted as B_1S_0, where preempted SU connections wait on the same channel until the PU departs; and (iv) buffering and channel switching, denoted as B_1S_1, where preempted SU connections switch to a free channel if one exists and are buffered otherwise. We propose a three dimensional Markov chain whose state vector is given as (X_t, Y_t, Z_t), where X_t is the number of channels utilized by SU connections for data transmission, Y_t is the number of channels occupied by PUs, and Z_t is the total number of active SU connections, including the buffered ones (in contrast to the earlier analysis, where buffered SU connections were also considered to be utilizing the PU channels).
Considering a real OSA system, there are conditions that qualify valid states. With SU connection buffering-enabled MAC protocols for OSA, the number of connections cannot be less than the number of channels utilized by SUs, i.e. X t ≤ Z t . Additionally, SUs do not pause transmissions over unoccupied channels. Therefore, the number of SU connections not utilizing a channel cannot exceed the number of channels occupied by PUs, i.e. Z t − X t ≤ Y t or Z t ≤ X t + Y t . Finally, the sum of the channels utilized by PUs and the SUs cannot be greater than M D , i.e. X t + Y t ≤ M D . By combining these conditions we can compactly write them as
0 ≤ X t ≤ Z t ≤ X t + Y t ≤ M D .(10)
When connection buffering is disabled the number of SU connections must be the same as the number of channels utilized by SUs, i.e. X t = Z t . Therefore, for non-buffering SU connection OSA MAC
protocols (X t , Y t , Z t = X t ) ⇒ (X t , Y t ).
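For completeness, a small sketch (ours) enumerates the state space of the chain under condition (10); for non-buffering protocols only the states with z = x remain.

def valid_states(M_D, s_m, buffering=True):
    # all (x, y, z) satisfying 0 <= x <= z <= x + y <= M_D, with at most s_m connections
    states = [(x, y, z)
              for x in range(s_m + 1)
              for y in range(M_D + 1)
              for z in range(s_m + 1)
              if x <= z <= x + y <= M_D]
    if not buffering:                      # non-buffering MACs: Z_t = X_t
        states = [(x, y, z) for (x, y, z) in states if z == x]
    return states

print(len(valid_states(M_D=5, s_m=3)))     # example sizes only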
For the microscopic case the average channel throughput, excluding switching and sensing overhead, is computed as
R = C Σ_{x=0}^{s_m} Σ_{y=0}^{M_D} Σ_{z=0}^{s_m} x π_{xyz} , (11)
where s m = max{S} and the steady-state probability π xyz is given by
π xyz = lim t→∞ Pr(X t = x, Y t = y, Z t = z),(12)
and the state transition probabilities to compute (12) will be derived in the subsequent section, uniquely for each OSA multichannel MAC protocol.
Finally, for the macroscopic case the average channel throughput, excluding switching and sensing overhead, is computed as
R = {q p (1 − p d ) + (1 − q p )(1 − p f )}R c C,(13)
where R_c = Σ_{i=1}^{s_m} i π_i and π_i is the solution of the steady state Markov chain given by [15, Eq. (13)]. Since the macroscopic model assumes that the PU state does not change from slot to slot, SU connection buffering and switching are not needed. Note that, contrary to the incorrect assumptions of [15, Eq. (12)] and [34, Eqs. (7) and (9)], we compute R in (11) and (13) taking all the channels into account, irrespective of the type of OSA MAC. This is because the models of [15], [34] considered only data channels in the final stage of the DCC throughput calculation, assuming that no data traffic is transmitted on the control channel. However, the utilization must be computed over all channels, irrespective of whether one channel carries only control data.
C. Derivation of State Transition Probabilities for the Microscopic Model
We denote the state transition probability as
p_{xyz|klm} = Pr(X_t = x, Y_t = y, Z_t = z | X_{t−1} = k, Y_{t−1} = l, Z_{t−1} = m). (14)
Note that changes in X t and Z t depend on the detection of the PU. In addition, changes in Z t depend on
OSA traffic characteristics such as the packet generation probability p and the average packet length 1/q.
Also, note that the steady state probability vector π, containing all steady state probabilities π_xyz, is derived by solving π = πP, where the entries of the right stochastic matrix P are defined by (14), together with the normalization Σ_{x,y,z} π_xyz = 1.
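In practice π can be obtained numerically from P; a minimal sketch (ours, assuming numpy and that the rows and columns of P follow the ordering of a `states` list) is given below, together with the throughput of Eq. (11).

import numpy as np

def steady_state(P):
    # solve pi = pi P together with the normalization sum(pi) = 1
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def throughput_micro(P, states, C):
    # Eq. (11): R = C * sum over states of x * pi_xyz
    pi = steady_state(P)
    return C * sum(x * pi[i] for i, (x, _y, _z) in enumerate(states))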
As a parameter to model the PU state, p_c denotes the probability that the OSA network collectively detects a PU channel as occupied, i.e.
p c = q p p d + (1 − q p )p f .(15)
We introduce two supporting functions. First, we denote the termination probability
T^(j)_k = C(k, j) q^j (1 − q)^{k−j} for k ≥ j > 0, and 0 otherwise. (16)
Note that k in T^(j)_k is the number of SU connections actively transmitting on a data channel, each of which terminates independently with probability q in a slot. The connection arrangement probability S^(j)_m modifies the arrangement probability of the underlying multichannel MAC analysis by considering PU detection on the control channel: if a PU is detected on the control channel, an SU connection cannot be generated because there is no chance to acquire a data channel. We then have [15, Eq. (17)]

S^(j)_m =
  S̄^(1)_m,   j = 1 (DCC),
  S̄^(1)_m (N − 2m − 1)/(N − 1) · (M_D − m)/M,   j = 1 (HCC),
  1 − S̄^(1)_m,   j = 0,
  0,   otherwise,   (17)

where

S̄^(1)_m =
  Ŝ^(1)_m,   PU-free control channel, DCC only,
  (1 − p_c) Ŝ^(1)_m,   otherwise.   (18)

This is because we assume that an SU that has a connection but pauses data transmission due to the PU presence does not try to make another connection. We can now derive the transition probabilities individually for all four different OSA MAC protocols.
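The termination probability of Eq. (16) is a binomial term; the sketch below (ours) also returns the j = 0 value (1 − q)^k, which appears as T^(0)_k in the transition probabilities that follow.

from math import comb

def termination_prob(j, k, q):
    # probability that exactly j of the k actively transmitting SU connections
    # finish in a slot, each ending independently with probability q
    if 0 <= j <= k:
        return comb(k, j) * q**j * (1 - q)**(k - j)
    return 0.0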
1) Case B_0S_0: Recall that for non-buffering OSA MAC protocols Z_t = X_t, so the transition probability reduces to p_{xy|kl}, defined in (20) below. Now, consider the case x < k + 1. When an SU data connection is terminated, there can be two possible reasons: (i) an SU completes its transmission, or (ii) a PU is detected on a channel that was assigned to an SU for data transmission before sensing. The former was analyzed in [34, Sec. 3]. To model the latter, we introduce a variable i denoting the number of channels that were reserved for SU data transmission before sensing but cannot be utilized due to PU detection. We have the following observation. In addition, we need to discuss the edge state, which covers two cases: (i) no more channels are available, being utilized either by SUs or PUs, and (ii) all possible SU connections are established, which we denote as the "full connection state". For the transition from the full connection state to the edge state, we have to consider the case that one new connection is generated while no existing connection is terminated, which means a trial for a new connection by a free SU is not established because all possible connections already exist.
Writing all conditions compactly, denote the indicator for the edge state
1_{x,y} = 1 if x + y = M_D or x = s_m, and 0 otherwise, (19)
and define P^(i)_{x,y} as the total PU arrival probability (see Table I). Then
p_{xy|kl} =
  0,   x > k + 1,
  T^(0)_k S^(1)_k P^(0)_{x,y},   x = k + 1,
  Σ_{i=0}^{i_m} [ T^(k−x−i)_k S^(0)_k + T^(k−x−i+1)_k S^(1)_k ] P^(i)_{x,y},   x < k + 1, k < s_m or 1_{x,y} = 0,
  Σ_{i=0}^{i_m} [ T^(k−x−i)_k S^(0)_k + T^(k−x−i+1)_k S^(1)_k ] P^(i)_{x,y} + T^(0)_k S^(1)_k P^(0)_{0,y},   x < k + 1, k = s_m, 1_{x,y} = 1,   (20)
where i m = min(s m − x, y).
2) Case B 0 S 1 : Although in the SU connection non-switching case both DCC and HCC can be considered, only DCC will be able to perform switching without any additional control data exchange, which we prove formally.
Before going into the details of the derivation, note that for the class of OSA MAC protocols with a dedicated control channel every node can follow the connection arrangement of the entire network. For HCC [34], on the other hand, it is impossible for a single node to learn the whole network connection arrangement, since each sender-receiver pair cannot listen to others while following its own hopping sequence. We now present the following proof.
Theorem 1: Channel switching in DCC can be performed without any additional control message exchange.
Proof: We prove this by showing a possible distributed channel switching process. Following earlier observation, in DCC each node can trace the connection arrangement of others, i.e. which channel has been reserved by a sender receiver pair. To distribute the switching events equally among SUs each SU computes the priority level as
Π i,t = Π i,t−1 + 1 p ,(21)
where
1 p = 1, preemption by PU, 0, otherwise,(22)
and Π_{i,t} is the priority level of SU i at time t. For Π_{i,0} ∉ N the priority is the MAC address of the SU. The preempted connections are then switched to the vacant channels in decreasing order of priority,

I_{a,t} → U_{1,t}, I_{b,t} → U_{2,t}, · · · , I_{c,t} → U_{|U|,t}, (23)

where |I| = |U| = M_D − X_t − Y_t, → is the mapping operator denoting the process of switching active SU connection i to free channel j, I_{i,t} denotes the index of a communicating SU (transmitter) at time t with Π_{a,t} > Π_{b,t} > · · · > Π_{c,t}, and U_{j,t} denotes the free channel with index j at time t.
Note that existing connections that have not been mapped to a channel are considered blocked. Also note that under the algorithm given in Theorem 1, connections are preempted by the PU randomly with equal probability.
Since new SU connections are also assumed to use new channels randomly with equal probability, each SU connection is blocked with uniform probability.
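A compact sketch of the switching rule proved in Theorem 1 (our Python code; whether the priority update of Eq. (21) is applied before or after the mapping within a slot is an assumption here, and tie-breaking by node id stands in for the MAC address).

def switch_preempted(preempted, free_channels, priority):
    # each preempted SU increments its priority counter, Eq. (21) with 1_p = 1
    for i in preempted:
        priority[i] = priority.get(i, 0) + 1
    # map preempted connections to free channels in decreasing priority order
    ordered = sorted(preempted, key=lambda i: (-priority[i], i))
    mapping = dict(zip(ordered, free_channels))
    blocked = ordered[len(free_channels):]          # leftovers are blocked
    return mapping, blocked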
To enable SU connection switching in HCC one way is to augment it with a separate radio front end which would follow the hopping sequences and control data exchange of the OSA network. Obviously this increases the cost of hardware and contradicts the idea of HCC, where all channels should be used for data communication. Therefore while evaluating OSA MAC protocols in Section V-B, we will not consider SU connection switching for HCC.
We now define the state transition probability p xy|kl for the considered OSA MAC protocol. Because
x > k + 1 is infeasible, the state transition probability for x > k + 1 equals zero. For x = k + 1, y PUs can appear on any of the M_D channels, because even though a PU is detected the SUs can still transmit data by switching to the idle channels, and the number of possible PU appearance patterns is C(M_D, y). Note that the number of possible PU appearance patterns in the case B_0S_1 is always C(M_D, y), even for the edge state, because the data channel can be changed by switching to a vacant channel after the PU detection. Because it is impossible to create more than one new connection at a time, the OSA connection creation probabilities for x = k + 1 are the same as in (20), i.e. T^(0)_k S^(1)_k. We then have

p_{xy|kl} =
  0,   x > k + 1,
  T^(0)_k S^(1)_k P^(0)_{0,y},   x = k + 1,
  [ T^(k−x)_k S^(0)_k + T^(k−x+1)_k S^(1)_k ] P^(0)_{0,y},   x < k + 1, 1_{x,y} = 0,
  Σ_{i=0}^{i_m} [ T^(k−x−i)_k S^(0)_k + T^(k−x−i+1)_k S^(1)_k ] P^(0)_{0,y},   x < k + 1, k < s_m, 1_{x,y} = 1,
  Σ_{i=0}^{i_m} [ T^(k−x−i)_k S^(0)_k + T^(k−x−i+1)_k S^(1)_k ] P^(0)_{0,y} + T^(0)_k S^(1)_k P^(0)_{0,y},   x < k + 1, k = s_m, 1_{x,y} = 1.   (24)
3) Case B_1S_0: Before we discuss this case we present the following observation, which has implications for the design of simulation models and the derivation of p_{xyz|klm} for SU connection buffering MAC protocols.
Observation 2: For all SU connection buffering OSA MAC protocols the same average link level throughput results from creating a brand new connection or resuming a previously preempted and buffered connection on the arrival of PU on a channel.
Proof: Due to the memoryless property of the geometric distribution
Pr(1/q i > 1/q t1 + 1/q t2 |1/q i > 1/q t1 ) = Pr(1/q i > 1/q t2 ),
where 1/q i is the duration of connection i, 1/q t1 is the connection length until time t 1 when it has been preempted by PU, and 1/q t2 is the remaining length of the connection after SU resumes connection at time t 2 . Since either a newly generated SU connection after resumption, or the remaining part of a preempted connection needs a new connection arrangement on the control channel, the number of slots occupied by each connection type is the same.
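The memoryless property used in the proof of Observation 2 is easy to verify numerically (a self-contained check we add here, assuming numpy).

import numpy as np

rng = np.random.default_rng(0)
q, a, b = 0.2, 5, 7
x = rng.geometric(q, size=1_000_000)       # connection lengths, in slots
lhs = np.mean(x[x > a] > a + b)            # Pr(length > a + b | length > a)
rhs = np.mean(x > b)                       # Pr(length > b)
print(lhs, rhs)                            # nearly identical, as the identity above states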
Having Observation 2 we can derive transition probabilities. Because packet generation is affected by the number of connections, we use Z t to classify conditions to derive the state transition probabilities.
Due to the assumption of a maximum number of one connection generation in one time slot, the state transition probability of the case of z > m + 1 is zero.
p_{xyz|klm} =
  0,   z > m + 1,
  T^(0)_k S^(1)_m R^(z)_{x,y},   z = m + 1,
  [ T^(m−z)_k S^(0)_m + T^(m−z+1)_k S^(1)_m ] R^(z)_{x,y},   z < m + 1, m < s_m or z < s_m,
  [ T^(0)_k S^(0)_m + T^(1)_k S^(1)_m + T^(0)_k S^(1)_m ] R^(z)_{x,y},   z = m = s_m.   (26)
Note that this OSA MAC has been previously analyzed in [15]. As has been pointed out, the model proposed there did not work well for the full range of parameters. This is due to the following. A Markov model was derived for {X_t, Y_t}, using the unmodified transition probabilities of [34, Eq. 6], which were originally used to calculate the average throughput of networks based on non-OSA multichannel MAC protocols. With this limitation, the termination probability in [15, Eq. (14)], analogous to (16), included an aggregated stream of PU and SU traffic, and the PU traffic q_p was later subtracted from the steady state channel utilization in [15, Eq. (10)], analogous to (11). The approximation of [15], although Markovian, worked reasonably well only for moderate values of PU activity q_p.
4) Case B_1S_1: In this case a state in which an SU connection is buffered while a free channel exists is not reachable, since such a connection would have switched to the free channel. The state transition probability is

p_{xyz|klm} =
  0,   z > m + 1, or z ≠ x with x + y < M_D, or m ≠ k with k + l < M_D,
  T^(0)_k S^(1)_m R^(0)_{0,y},   z = m + 1,
  [ T^(m−z)_k S^(0)_m + T^(m−z+1)_k S^(1)_m ] R^(0)_{0,y},   z < m + 1, m < s_m or z < s_m,
  [ T^(0)_k S^(0)_m + T^(1)_k S^(1)_m + T^(0)_k S^(1)_m ] R^(0)_{0,y},   z = m = s_m.   (27)
5) Impact of Channel Error on the Throughput Calculations: All of the previous analysis was done under the assumption of an error-free channel. In this section we briefly discuss the impact of channel errors on the throughput calculations.
Channel errors impact the throughput in two ways. First, an error affects the throughput when an SU involved in a connection setup fails to receive a control message from the transmitter; as a result no connection is established. Second, an error affects the throughput when it corrupts the ongoing data transmission of SUs not associated with the current connection setup. For HCC, the control channel is selected as one of the data channels by a hopping method; thus, if we assume errors on the control channel, it is reasonable to consider errors on the data channels as well.
For the control channel, if an error occurs, a connection fails to be established. This is modeled by multiplying Ŝ^(1)_m by 1 − p_e, where p_e is the probability of error in the current time slot. For the data channel, different error handling strategies can be considered. We focus on the following two situations: i) case E_1, denoting a packet punctured by unrecovered errors, and ii) case E_2, denoting transmission termination on error.
a) Case E 1 : It can be assumed that when an error occurs on a time slot, the SU simply discards that time slot and resumes transmitting the remaining packet fragment from the next correct time slot. This is modeled by replacing the capacity C with C(1 − p e ).
b) Case E_2: It can also be assumed that the connection terminates when an error occurs. Thus the probability that the packet finishes transmitting, q, should be replaced by q + (1 − q)p_e. In addition, if the control channel hops to a channel which is being utilized for data transmission and an error occurs, a new connection cannot be established. This is modeled by multiplying Ŝ^(1)_m by (1 − p_e)^2.
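A compact summary of the two error-handling cases for HCC (our sketch; `s_hat` stands for the control-channel arrangement probability Ŝ^(1)_m).

def apply_error_model(C, q, s_hat, p_e, model="E1"):
    # E1: errored slot is discarded, the packet continues on the next correct slot
    if model == "E1":
        return C * (1 - p_e), q, s_hat * (1 - p_e)
    # E2: the connection terminates on error; control setup also needs an error-free hop
    elif model == "E2":
        return C, q + (1 - q) * p_e, s_hat * (1 - p_e) ** 2
    raise ValueError("model must be 'E1' or 'E2'")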
V. NUMERICAL RESULTS
We now present numerical results for our model. First, we present results independently for spectrum sensing and OSA MAC performance, in Section V-A and Section V-B, respectively, for the microscopic case. Then in Section V-C we present the results of the joint optimization of these two layers in the microscopic and macroscopic context. Moreover, due to the vast combination of parameters to consider, we have decided to follow the convention of [15], [34] and focus on two general network setups, a small scale and a large scale network (unless stated otherwise). In this section we will also compare the analytical model of the sensing layer and OSA MAC protocols to simulation results. The simulations were developed with Matlab and reflect exactly the sensing models and MAC protocols presented in this paper. Simulation results for each system were obtained using the method of batch means for a 90% confidence interval. To evaluate the sensing protocols each batch contained 100 events and the whole simulation run was divided into 10 batches with no warm up phase.
When simulating the OSA MAC protocols, each batch contained 1000 events while the whole simulation was divided into 100 batches with a warm up period of 100 events.
A. Spectrum Sensing Architecture Performance
For all possible combinations of sensing architectures we compute the probability of false alarm for a wide range of t_q. For the two networks considered we select a set of common parameters, with t_t = t_d. The advantage of TTDMA and SSMA can be shown more clearly if we compare the results for different p_d = p_d,min requirements. We can observe that a high detection requirement such as p_d = 0.99 makes the performance worse, as is generally known. However, if TTDMA or SSMA is applied, the performance for p_d = 0.99 can be higher than that of TDMA for p_d = 0.9. For example, in the range t_q < 50 µs in Fig. 2(a), SSMA for p_d = 0.99 outperforms TDMA for p_d = 0.9. Moreover, in Fig. 2(b), for t_q ≳ 550 µs, SSMA and TTDMA for p_d = 0.99 outperform TDMA for p_d = 0.9.
It is important to note that κTTDMA performs worse than the rest of the protocols. This is due to the excessive delay caused by the instant acknowledgment of each reporting result by the cluster head node. Note that κTTDMA is a lower bound for the operation of TTDMA. Also note that if TDMA were equipped with an acknowledgment function, as in κTTDMA, its performance would be degraded in the same way as TTDMA's. Since we analyze a static network with pre-set parameter values, e.g. κ does not change over time, in the following sections we proceed with unmodified TTDMA only.
2) Impact of Channel Errors during Reporting on PU Detection Performance:
The results are presented in Fig. 3. For the small and large scale networks, with the same parameters as used in Section V-A1, we have observed the probability of false alarm, keeping the detection probability p_d constant, for varying quiet time t_q. First, when comparing Fig. 2 (no channel error) and Fig. 3 (channel error), the impact of error is clearly visible, i.e. p_f increases for every protocol. However, the relation between individual protocols is the same, since error affects all protocols equally. Second, the effect of error on the small scale network is smaller than for the large scale network, compare Fig. 3(a) and Fig. 3(b), since the probability that an SU will send a wrong report is larger for a network with a large number of nodes. Lastly,
for small values of κ the probability of false alarm stabilizes and never reaches zero. However, large values of κ significantly reduce the effect of channel errors. This is because with high κ the probability of making an error decreases rapidly. With 20% of the nodes participating in the cooperative agreement on the PU state, i.e. κ = 2 for the small network and κ = 8 for the large scale network, the effect of error is reduced almost to zero.
3) Impact of Cooperation Level on PU Detection Performance:
The results are presented in Fig. 4.
We have selected TTDMA and set p d = p d,min = 0.99 as a protocol for further investigation. We observe that for the small scale network, see Fig. 4(a), the performance for κ = 2 is the best, while for the large scale network, see Fig. 4(b), the best performance can be achieved when κ = 8 or 16 if p f < 0.1.
Based on this observation, we conclude that for given detection requirements, a high PU detection rate is obtained when κ is well below the total number of SUs in the network. While for the considered setup the optimal κ ≈ 20% of the nodes, this value might be different for other network configurations.
4) Impact of Grouping on PU Detection Performance: An interesting observation is that the number of groups achieving the best performance becomes larger as the number of users N increases. For the small scale network, see Fig. 5(a), the best performance is observed for n_g = 2 or n_g = 3, while for the large scale network, Fig. 5(b), n_g = 6 is the best. This is because for the large scale network the reporting overhead caused by the large number of users offsets the performance improvement achieved by the larger cooperation scale.
5) Impact of κ on PU Detection and Optimal Quiet Time: TDMA and SSMA are independent of κ, which differentiates them from TTDMA, whose operation strictly depends on the value of κ considered. And again, when comparing Fig. 6(c) and Fig. 6(d), the optimal value of t_q for TTDMA is in the same range as p_f, which proves the optimality of the design.
B. OSA MAC Protocol Performance
To evaluate the effectiveness of all proposed and analyzed MAC protocols we have fixed C = 1 Mbps, p = e^{−1}, and the detection quality (its joint optimization with spectrum sensing is deferred to Section V-C), assuming that the spectrum sensing layer is able to obtain such quality of detection. Again, as in Section V-A, results are presented separately for the error-free and the error channel.
1) Impact of PU Activity Level on OSA MAC Protocols:
The results are presented in Fig. 7. We observe that PU activity degrades DCC and HCC for B 0 S 0 , irrespective of other network parameters.
Their performances are comparable in this case. DCC and HCC perform best with B_1S_0. The results
show that the non-buffering OSA MAC protocols are very sensitive to q p where the greatest throughput decrease is visible at low ranges of PU activity. On the other hand, with connection buffering we observe a linear relation between q p and R t .
2) Impact of SU Packet Size on OSA MAC Protocols:
The results are presented in Fig. 8. Obviously, for a larger SU packet size the OSA network is able to grab more capacity. However, when packets become excessively large the throughput saturates. It remains the case that protocols with no buffering and no channel switching obtain the lowest throughput, no matter what network setup is chosen. Interestingly, although
intuitively B_1S_1 should obtain the highest channel utilization, it does not perform better than B_1S_0 due to the large switching time. With t_p approaching zero, DCC B_1S_1 would perform best irrespective of the network setup, as we discuss below.
3) Impact of Switching Time on OSA MAC Protocols:
The results are presented in Fig. 9. In this experiment, we verify that for small t_p DCC B_1S_1 outperforms DCC B_1S_0. However, there is no huge difference between their performances even at t_p = 10 µs. This is because connection switching does not occur often enough to contribute significant additional throughput. Comparing the channel switching and buffering options, we conclude that much more channel utilization is obtained by connection buffering than by channel switching alone when N/M > 1.
4) Relation Between the Number of SUs and PU Channels:
Note that for all cases described in this section the simulation results agree with our analytical model.
Comparing our model and the analytical results of [15] for DCC B_1S_0, see Fig. 10(b), we observe that the prior analysis overestimated the performance, resulting in a difference of more than 2 Mbps at N/M = 1.
Interestingly, if we consider the same set of parameters as in Section V-B1, then the model of [15] almost agrees with the model of our paper. Since the set of parameters chosen in Section V-B1 is similar to that of [15], we remark that the observations on the performance of this OSA MAC in [15] reflected reality.
[Figure caption: parameters as in Fig. 7, except for q_p = 0.1. E_1 and E_2 denote the error models described in Section IV-C5; E_0 denotes the system with p_e = 0.]
5) Impact of Channel Errors on the OSA Multichannel MAC Performance:
To observe the impact of channel errors on the MAC protocol throughput we have set up the following experiment. For HCC and both network sizes, small and large, we have observed the average throughput for different SU packet lengths and channel error probabilities. The results are presented in Fig. 11. For comparison in Fig. 11 we present the system with no errors, denoted as E 0 . We kept values of p e realistic, not exceeding 1%.
Obviously, the system with punctured errors E_1 obtains much higher throughput than system E_2, since more data can potentially be sent after one control packet exchange. Again, buffering allows higher throughput to be obtained in comparison to the non-buffered case, even with data channel errors present. Note that system E_2 is more prone to errors than E_1, observe Fig. 11(a) and Fig. 11(b).
We have also investigated the impact of the distribution of PU activity on the protocol performance, considering the following "on" and "off" time distributions: i) uniform (denoted symbolically as U), ii) log-normal (denoted symbolically as L), and for comparison iii) geometric (denoted symbolically as E), used in the analysis. We have tested the protocol performance for different combinations of "on" and "off" times of PU activity. These were EE, LE, EL, LL (all possible combinations of "on" and "off" times obtained in [60, Tab. 3 and Tab. 4]) and additionally EU, UU, where the first and second letters denote the selected distributions for the "on" and "off" times, respectively. Due to the complexity of the analysis we show only simulation results, using the same simulation method of batch means with the same parameters as described at the beginning of Section V.
The parameter of each distribution was selected such that the mean value of each distribution was equal to 1/p_c for the "on" time and 1 − 1/p_c for the "off" time. The uniform distribution has a non-continuous set of mean values, (a_b + a_n)/2, where a_b, a_n ∈ ℕ denote the lower and upper limits of the distribution, respectively, which precludes the existence of every mean "on" or "off" value for p_c ∈ (0, 1). To solve this problem a continuous uniform distribution with the required mean was used and rounded up to the nearest integer. This resulted in a slightly lower last peak of the probability mass function at a_n for 1/p_c ∉ ℕ. The parameters of the discretized log-normal distribution were chosen such that c_l = 1/p_c and v_l = (1 − p_c)/p_c² are its mean and variance. Note that the variance of the used discretized log-normal distribution is equal to the variance of the geometric distribution with the same mean value. The variance of the resulting discretized continuous uniform distribution could not be made equal to the variance of the geometric distribution, due to the reasons described earlier.
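As an illustration of the moment matching described above, the Python sketch below (our own construction, not the simulator used in the paper) draws discretized "on" times from a log-normal distribution whose mean c_l = 1/p_c and variance v_l = (1 − p_c)/p_c² match those of the geometric distribution; the function name and the ceiling-based discretization are assumptions.

```python
import numpy as np

def lognormal_on_times(p_c, size, rng=None):
    """Draw discretized 'on' times (in slots) from a log-normal distribution whose
    mean and variance match the geometric distribution with mean 1/p_c."""
    rng = np.random.default_rng() if rng is None else rng
    c_l = 1.0 / p_c                       # target mean
    v_l = (1.0 - p_c) / p_c**2            # target variance (same as geometric)
    sigma2 = np.log(1.0 + v_l / c_l**2)   # log-normal shape parameter
    mu = np.log(c_l) - sigma2 / 2.0       # log-normal location parameter
    samples = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=size)
    return np.maximum(1, np.ceil(samples)).astype(int)  # discretize to whole slots

if __name__ == "__main__":
    on = lognormal_on_times(p_c=0.4, size=100_000, rng=np.random.default_rng(0))
    print("empirical mean:", on.mean(), "target mean:", 1 / 0.4)
```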
The results are presented in Fig. 12. We focus on two network types, as indicated earlier: (i) large scale and (ii) small scale, with the assumed parameters as in Fig. 7. We select four values of q_p for clarity of presentation. The most important observation is that, irrespective of the considered distribution, DCC obtains relatively the same throughput, and the same relation between the different protocol options exists as was shown analytically in Fig. 7. If one wants to select the distribution combinations with the highest throughput, these would be LE and LL, with the obtained throughput being almost equal to the one obtained via analysis for the geometric distribution. The distributions with the lowest throughput are UU and EU, due to the difference in the second moment of the "on" time with respect to the other two distributions. The difference in throughput between UU, EU and the remaining distributions is more visible for
C. Performance of Joint Spectrum Sensing and OSA MAC Protocols
Having the results for the spectrum sensing layer and the OSA MAC, we now join these two layers to form a complete OSA network stack. By means of exhaustive search we solve the optimization problem of (1).
We will also investigate the set of parameters that maximize R t for small and large scale network.
We divide our analysis into the macroscopic and microscopic cases, observing R_t for a small scale network with M = 3, N = 12, d = 5 kB, and a large scale network with M = 12, N = 40, d = 20 kB. For each case we select a set of spectrum sensing and OSA MAC protocols that are feasible and, as we believe, most important to the research community. For a fixed set of parameters C = 1 Mbps, b = 1 MHz, p = e^{-1}/N, t_{d,max} = 1 ms (microscopic case), t_{d,max} = 2 s (macroscopic case), α = 1/M, t_t = 1 ms, p_{d,min} = 0.99, γ = −5 dB, q_p = 0.1, and t_p = 100 µs, we leave κ, t_e, n_g, and p_f as optimization variables.
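The exhaustive search over the remaining variables can be organized as a plain grid search. The sketch below is only a schematic illustration: the callables `throughput`, `p_d` and `t_d`, which should implement R_t = ξR and the constraint quantities of (1), are hypothetical placeholders, not the paper's actual expressions.

```python
import itertools

def exhaustive_search(throughput, p_d, t_d, p_d_min, t_d_max,
                      kappas, t_es, n_gs, p_fs):
    """Grid search over (kappa, t_e, n_g, p_f) maximizing R_t = xi * R,
    subject to p_d >= p_d_min and t_d <= t_d_max (cf. problem (1),
    where the detection constraint is stated as an equality)."""
    best_rt, best_cfg = -1.0, None
    for kappa, t_e, n_g, p_f in itertools.product(kappas, t_es, n_gs, p_fs):
        if p_d(kappa, t_e, n_g, p_f) < p_d_min:    # detection-rate constraint
            continue
        if t_d(kappa, t_e, n_g, p_f) > t_d_max:    # detection-delay constraint
            continue
        r_t = throughput(kappa, t_e, n_g, p_f)
        if r_t > best_rt:
            best_rt, best_cfg = r_t, (kappa, t_e, n_g, p_f)
    return best_rt, best_cfg
```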
1) Microscopic Model:
Here we focus only on DCC protocol, since collaborative spectrum sensing is only possible via a PU free control channel, which is inefficient to accomplish with HCC. Also, for sensing measurement dissemination we do not consider SSMA, which would be most difficult to implement in practice. The results are presented in Fig. 13. DCC B 1 S 0 with TTDMA is the best option, both for small scale and large scale network, see Fig. 13(a) and Fig. 13(b), respectively. Because of relatively high switching time B 1 S 1 performs slightly worse than B 1 S 0 , for small and large scale network. DCC B 0 S 0 with TDMA is the worst protocol combination, which confirms earlier results from Section V-A and Section V-B. Irrespective of network size it is always better to buffer SU connections preempted by PU than to look for vacant channels, compare again B 1 S 0 and B 0 S 1 in Fig. 13(a) and Fig. 13(b). The difference between B 0 S 0 and B 0 S 1 is mostly visible for a large network scenario, see Fig. 13(b), since with a large number of channels there are more possibilities to look for empty channels.
For all protocol combinations and both network sizes κ = 2 maximizes throughput performance, see Fig. 13(a). Interestingly, network size dictates the size of a sensing group. For small scale network, n g = 1 is the optimal value, see Fig. 13(a), but for a large network R t is maximized when n g = 3 (for B 0 S 0 ) and n g = 4 (for the rest). We can conclude that with a small network it is better to involve all nodes in sensing, while for larger networks it is better to divide them into groups, which agrees with the observation from Section V-A4. Moreover, we observe that the performance difference between TTDMA and TDMA is not as big as in Fig. 2 when parameters are optimized.
The most interesting result is observed for p_f. With the increase of protocol complexity the false alarm probability increases as well. Also, with an increase of p_f, the quiet time decreases. Because buffering and switching improve the performance, there is more margin in the spectrum sensing design.
2) Macroscopic Model: DCC obtains higher throughput than HCC for the small scale network, and vice versa, compare Fig. 14(a) and Fig. 14(b), respectively. This confirms the observations of [15, Fig. 3], [34, Fig. 3]. Just like in Fig. 13(a), for the small scale network κ = 2 and n_g = 2 maximize R_t. For the large scale network, however, κ = 3 and n_g = 3 are optimal for TDMA, and κ = 4 and n_g = 4 for TTDMA.
This means that for large networks it is beneficial to split the network into smaller groups. Again, this confirms our findings from Section V-C1. For both network scenarios p_f and t_e are relatively the same for all protocols considered.
Note that for the large scale network in the macroscopic model an SU can take more time to detect a PU than in the microscopic model, because the large t_{d,max} reduces the time overhead. This relaxation of the time restriction impacts the large scale network by requiring a greater value of κ to achieve the maximum throughput.
VI. CONCLUSION
We have presented a comprehensive framework enabling assessment of the performance of joint spectrum sensing and MAC protocol operation for OSA networks. In the proposed model we focus on the link layer throughput as the fundamental metric of performance. We have parameterized spectrum sensing architectures for energy detection based systems with collaborative combining of measurements. We have proposed a novel spectrum sensing MAC denoted Truncated Time Division Multiple Access. We have also categorized multichannel MAC protocols for OSA networks based on their ability to buffer and switch existing SU connections on the arrival of a PU. Our analysis is supported by simulations which confirm the accuracy of the obtained expressions.
Some of the design guidelines that need to be noted are as follows. For spectrum sensing, introducing TTDMA gives an improvement in the obtained performance compared to TDMA. Large networks, i.e. those having many channels and users, benefit from clustering, while for small networks it is better to create a small number of clusters such that the sensing time is optimized. When considering MAC protocol design for OSA, it is clear that more benefit comes from introducing SU connection buffering than channel switching for those SU connections that have been preempted by the PU. Interestingly, although intuition would suggest that MAC protocols combining SU connection buffering and channel switching would outperform all other protocols, due to the switching overhead this combination is usually inferior to protocols that involve only SU connection buffering.
Our future task will be to investigate the delay experienced when using any of the proposed OSA MAC protocols. We plan to develop comprehensive simulation software which will implement features not covered by our model, such as a queue per SU.
| 11,421 |
0910.4704
|
2129293544
|
We present an analytical framework to assess the link layer throughput of multichannel Opportunistic Spectrum Access (OSA) ad hoc networks. Specifically, we focus on analyzing various combinations of collaborative spectrum sensing and Medium Access Control (MAC) protocol abstractions. We decompose collaborative spectrum sensing into layers, parametrize each layer, classify existing solutions, and propose a new protocol called Truncated Time Division Multiple Access (TTDMA) that supports efficient distribution of sensing results in “K out of N” fusion rule. In case of multichannel MAC protocols, we evaluate two main approaches of control channel design with 1) dedicated and 2) hopping channel. We propose to augment these protocols with options of handling secondary user (SU) connections preempted by primary user (PU) by 1) connection buffering until PU departure and 2) connection switching to a vacant PU channel. By comparing and optimizing different design combinations, we show that 1) it is generally better to buffer preempted SU connections than to switch them to PU vacant channels and 2) TTDMA is a promising design option for collaborative spectrum sensing process when K does not change over time.
|
Considering the final group of papers, when coupling spectrum sensing procedures with link layer protocols there is a fundamental tradeoff between sensing time, sensing quality and OSA network throughput. This has been independently found for general OSA network models with a single sensing band @cite_42 , multiple sensing bands @cite_21 with and without cooperative detection and centralized resource allocation, and in the context of a MAC protocol abstraction @cite_24 for the non-cooperative sensing case. See also the recent discussion in hossain_book_2009 [Sec. 2.3.1, 7.3, and 10.2.4]. This tradeoff is especially clear when evaluating microscopic models, since the detection time creates a significant overhead for the data exchange phase. Recently the model of @cite_42 was extended to the case of the "@math out of @math" rule in cooperative sensing @cite_52 , optimizing the parameters of the model to maximize the throughput given detection rate requirements. Unfortunately, the delay caused by exchanging sensing information was not included.
|
{
"abstract": [
"In this paper, different control channel (CC) implementations for multichannel medium access control (MAC) algorithms are compared and analyzed in the context of opportunistic spectrum access (OSA) as a function of spectrum-sensing performance and licensed user activity. The analysis is based on a discrete Markov chain model of a subset of representative multichannel OSA MAC classes that incorporates physical layer effects, such as spectrum sensing and fading. The analysis is complemented with extensive simulations. The major observations are given as follows: 1) When the CC is implemented through a dedicated channel, sharing such dedicated channel with the licensed user does not significantly decrease the throughput achieved by the OSA network when the data packet sizes are sufficiently large or the number of considered data channels is small. 2) Hopping OSA MACs, where the CC is spread over all channels, are less susceptible to licensed user activity than those with a dedicated CC (in terms of both average utilization and on off times). 3) Scanning efficiency has a large impact on the achievable performance of licensed and OSA users for all analyzed protocols. 4) The multiple rendezvous MAC class, which has yet to be proposed in OSA literature, outperforms all the multichannel MAC designs analyzed in this paper.",
"In cognitive radio networks, the performance of the spectrum sensing depends on the sensing time and the fusion scheme that are used when cooperative sensing is applied. In this paper, we consider the case where the secondary users cooperatively sense a channel using the k -out-of-N fusion rule to determine the presence of the primary user. A sensing-throughput tradeoff problem under a cooperative sensing scenario is formulated to find a pair of sensing time and k value that maximize the secondary users' throughput subject to sufficient protection that is provided to the primary user. An iterative algorithm is proposed to obtain the optimal values for these two parameters. Computer simulations show that significant improvement in the throughput of the secondary users is achieved when the parameters for the fusion scheme and the sensing time are jointly optimized.",
"In a cognitive radio network, the secondary users are allowed to utilize the frequency bands of primary users when these bands are not currently being used. To support this spectrum reuse functionality, the secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance in cognitive radio networks. There are two parameters associated with spectrum sensing: probability of detection and probability of false alarm. The higher the probability of detection, the better the primary users are protected. However, from the secondary users' perspective, the lower the probability of false alarm, the more chances the channel can be reused when it is available, thus the higher the achievable throughput for the secondary network. In this paper, we study the problem of designing the sensing duration to maximize the achievable throughput for the secondary network under the constraint that the primary users are sufficiently protected. We formulate the sensing-throughput tradeoff problem mathematically, and use energy detection sensing scheme to prove that the formulated problem indeed has one optimal sensing time which yields the highest throughput for the secondary network. Cooperative sensing using multiple mini-slots or multiple secondary users are also studied using the methodology proposed in this paper. Computer simulations have shown that for a 6 MHz channel, when the frame duration is 100 ms, and the signal-to-noise ratio of primary user at the secondary receiver is -20 dB, the optimal sensing time achieving the highest throughput while maintaining 90 detection probability is 14.2 ms. This optimal sensing time decreases when distributed spectrum sensing is applied.",
"Spectrum sensing is the key enabling technology for cognitive radio networks. The main objective of spectrum sensing is to provide more spectrum access opportunities to cognitive radio users without interfering with the operations of the licensed network. Hence, recent research has been focused on the interference avoidance problem. Moreover, current radio frequency (RF) front-ends cannot perform sensing and transmission at the same time, which inevitably decreases their transmission opportunities, leading to the so-called sensing efficiency problem. In this paper, in order to solve both the interference avoidance and the spectrum efficiency problem, an optimal spectrum sensing framework is developed. More specifically, first a theoretical framework is developed to optimize the sensing parameters in such a way as to maximize the sensing efficiency subject to interference avoidance constraints. Second, in order to exploit multiple spectrum bands, spectrum selection and scheduling methods are proposed where the best spectrum bands for sensing are selected to maximize the sensing capacity. Finally, an adaptive and cooperative spectrum sensing method is proposed where the sensing parameters are optimized adaptively to the number of cooperating users. Simulation results show that the proposed sensing framework can achieve maximum sensing efficiency and opportunities in multi-user multi-spectrum environments, satisfying interference constraints."
],
"cite_N": [
"@cite_24",
"@cite_52",
"@cite_42",
"@cite_21"
],
"mid": [
"2138837863",
"2069292411",
"2084436032",
"2169762591"
]
}
|
Performance of Joint Spectrum Sensing and MAC Algorithms for Multichannel Opportunistic Spectrum Access Ad Hoc Networks
|
It is believed that Opportunistic Spectrum Access (OSA) networks will be one of the primary forces in combating spectrum scarcity [2] in the upcoming years [3], [4]. Therefore, OSA networks [5], [6] have become the topic of rigorous investigation by the communications theory community. Specifically, the assessment of spectrum sensing overhead on OSA medium access control (MAC) performance recently gained a significant attention.
A. Research Objective
In the OSA network performance analysis, a description of the relation between the primary (spectrum) user (PU) network and the secondary (spectrum) user (SU) network can be split into two general models: macroscopic and microscopic. In the macroscopic OSA model [7], [8], [9] it is assumed that the time limit to detect a PU and vacate its channel is very long compared to the SU time slot, frame or packet length duration. Such a time limit is assumed to be given by a radio spectrum regulatory organization.
For example, the timing requirements for signal detection of TV transmissions and low power licensed devices operating in TV bands by IEEE 802.22 networks [10] (including transmission termination and channel vacancy time, i.e. a time it takes the SU to stop transmitting from the moment of detecting PU) must be equal to or smaller than 4.1 s [11,Tab. 15.5], while the frame and superframe duration of IEEE 802.22 are equal to 10 ms and 160 ms, respectively [11]. Also, in the macroscopic model it is assumed that the PU channel holding time, i.e. the time in which the PU is seen by the SU as actively transmitting, is much longer than the delay incurred by the detection process performed at the SU. As a result it can be assumed in the analysis that, given high PU detection accuracy (which is a necessity), OSA network performance is determined by the traffic pattern of the SUs. That is, it depends on the total amount of data to be transmitted by the SU network, the duration of individual SU data packets and the number of SU nodes. In other words the PU bandwidth resource utilization by the SU is independent of PU detection efficiency.
In the microscopic OSA model, more popular than its macroscopic counterpart due to analytic challenges, the detection time is short in relation to the shortest transmission unit of the OSA system.
Detection is also performed much more frequently than in the macroscopic model, i.e. for every SU packet [12], [13] or in every time slot [14], [15], [16], [17], [18]. Also, the microscopic model assumes much higher PU activity than the macroscopic model, which justifies frequent detection cycles. Since the detection overhead is much larger than in the macroscopic model, the analysis of the utilization of resources (temporarily unoccupied by the PU) by the OSA network cannot be decoupled from the analysis of the PU signal detection phase. Therefore, while the distinction between the macroscopic and microscopic models is somewhat fluid, it is important to partition the two cases and compare them in a systematic manner. More importantly, the comparison should be based on a detailed OSA multichannel and multiuser ad hoc network model [19, Sec. 7.4], which would not ignore the overhead from both the physical layer (PHY) and MAC layers of different cooperative and distributed spectrum sensing strategies [19, Tab. 7.1] and, in the case of the microscopic model, would account for different channel access procedures and connection management strategies for the SUs upon PU detection, such as buffering or switching to a vacant channel. Finally, the comparison should be realized using tractable analytical tools.
C. Our Contribution
In this paper, we present a unified analytical framework to design the spectrum sensing and the OSA data MAC jointly, for the macroscopic and microscopic cases. This design framework provides the (i) means of comparing different spectrum sensing techniques plus MAC architectures for OSA networks and (ii) spectrum sensing parameters such as observation time and detection rate for given design options. As a metric for optimization and comparison, we consider the average link layer OSA network throughput.
Our model will account for the combined effects of the cooperative spectrum sensing and the underlying MAC protocol. For spectrum sensing, we will consider several architectures parametrized by sensing radio bandwidth, the parameters of the sensing PHY, and the parameters of the sensing MAC needed to exchange sensing data between individual OSA nodes. Along with classifying most of the well known sensing MAC protocols, we introduce a novel protocol called Truncated Time Division Multiple Access (TTDMA) that supports efficient exchange of individual sensing decisions in "κ out of N " fusion rule.
For the data MAC we will consider two protocol abstractions, (i) Dedicated Control Channel (DCC) and
(ii) Hopping Control Channel (HCC), as analyzed in [15], [34], with novel extensions. That is, given the designs of [25], [26], [27], [30], we will analyze MAC protocols that (i) allow (or forbid) buffering of existing SU connections on the event of PU arrival, and (ii) allow (or forbid) switching of the SU connections preempted by the PU to empty channels. Please note that for the SU connection buffering OSA MAC schemes, for which an approximate analytical model was proposed in [15], we present an exact solution. Finally, using our framework, we compute the maximum link layer throughput for the most relevant combinations of spectrum sensing and MAC, optimizing the parameters of the model jointly, both for the microscopic and macroscopic models.
The rest of the paper is organized as follows. System model and a formal problem description is presented in Section II. Description of spectrum sensing techniques and their analysis is presented in Section III. Analysis of MAC strategies are presented in Section IV. Numerical results for spectrum sensing process, MAC and joint design framework are presented in Section V. Finally the conclusions are presented in Section VI.
II. SYSTEM MODEL AND FORMAL PROBLEM DESCRIPTION
The aim of this work is to analyze link layer throughput accounting for different combinations of MAC, spectrum sensing protocols and regulatory constraints. The model can later be used to optimize the network parameters jointly to maximize the throughput, subject to regulatory constraints. Before formalizing the problem, we need to introduce the system model, distinguishing between the microscopic and macroscopic approaches.
A. System Model 1) Microscopic Model: For two multichannel MAC abstractions considered, i.e. DCC and HCC, we distinguish between the following cases: (i) when SU data transfer interrupted by the PU is being buffered (or not) for further transmission and (ii) when existing SU connection can switch (or not) to a free channel on the event of PU arrival (both for buffering and non-buffering SU connection cases). Finally, we will distinguish two cases for DCC where (i) there is a separate control channel not used by the PU and (ii) when control channel is also used by the PU for communication. All these protocols will be explained in detail in Section IV.
We assume slotted transmission within the SU and PU networks, where PU and SU time slots are equal and synchronized with each other. The assumptions of slotted and synchronous transmission between PU and SU are commonly made in the literature, either while analyzing theoretical aspects of OSA (see [12]) or practical OSA scenarios (see [16, Fig. 2] in the context of secondary utilization of GSM spectrum, or [38] in the context of secondary IEEE 802.16 resource usage). Our model can be generalized to the case where PU slots are offset in time from SU slots; however, this would require additional analysis of optimal channel access policies, see for example [36], [39], [40], which is beyond the scope of this paper. We also note that the synchrony assumption allows one to obtain upper bounds on the throughput when transmitting on a slot-asynchronous interface [41].
The total slot duration is t_t µs. It is divided into three parts: (i) the detection part of length t_q µs, denoted as quiet time, (ii) the data part of length t_u µs, and, if the communication protocol requires channel switching, (iii) the switching part of length t_p µs. The data part of the SU time slot is long enough to execute one request to send and clear to send exchange [15], [34]. For the PU, the entire slot of t_t µs is used for data transfer, see Fig. 1(a).
Our model assumes that there are M channels having fixed capacity C Mbps that are randomly and independently occupied by the PU in each slot with probability q p . There are N nodes in the SU network, each one communicating directly with another SU on one of the available PU channels in one hop fashion.
Also, we assume no merging of the channels, i.e. only one channel can be used by a communicating pair of SUs at a time. SUs send packets with geometrically distributed length with an average of 1/q = d/(C t_u) slots for DCC, and 1/q = d/(C{t_u + t_p}) slots for HCC [15], [34, Sec. 3.2.3], where d is the average packet size given in bits. The difference between the average packet lengths for DCC and HCC is a result of the switching time overhead of HCC, because during channel switching SUs do not transfer any data even though they occupy the channel. We therefore virtually prolong the data packet by t_p for HCC to keep the comparison fair.
Every time a node tries to communicate with another node it accesses the control channel and transmits a control packet with probability p to a randomly selected and non-occupied receiver. A connection is successful when only one node transmits a control packet in a particular time slot. The reasons for selecting a variant of S-ALOHA as the contention resolution strategy were manifold. First, in reality each real-life OSA multichannel MAC protocol belonging to each of the considered classes, i.e. HCC or DCC, will use its own contention resolution strategy. Implementing each and every approach in our analysis (i) would complicate the analysis significantly and, most importantly, (ii) would jeopardize the fairness of the comparison. Therefore a single protocol was needed for the analytical model. Since S-ALOHA is a widespread and well understood protocol in wireless networks and is a foundation of many other collision resolution strategies, including CSMA/CA, it has been selected for the system model herein.
In each quiet phase every SU node performs PU signal detection based on signal energy observation.
Since we assume that OSA nodes are fully connected in a one hop network, each node observes on average the same signal realization in each time slot [13], [18], [42]. The PU channels observed by the SU are modeled as Additive White Gaussian Noise channels with Rayleigh fading. Therefore, to increase the PU detectability of the OSA network we consider collaborative detection with hard decision combining based on the "κ out of N" rule, as in [43], [44]. Hence we divide the quiet phase into (i) the sensing phase of length t_s µs and (ii) the reporting phase of length t_r µs. The sensing phase is of the same length for all nodes. For simplicity we do not consider in this study sensing methods that adapt the sensing time to propagation conditions as in [45]. In the sensing phase, nodes perform their local measurements. Then, during the reporting phase, nodes exchange their sensing results and make a decision individually by combining the individual sensing results. We will analyze different PHY and MAC approaches to collaborative spectrum sensing, especially (i) methods to assign sensing frequencies to users, (ii) rules for combining the sensing results, and (iii) multiple access schemes for measurement reporting. In this paper we do not consider sensing strategies applicable to single channel OSA networks [46], two stage spectrum sensing [8], or sensing MAC protocols based on random access [47], due to their excessive delay. We will explain our spectrum sensing approaches in more detail in Section III. Further, we assume an error-prone channel for the sensing layer as well as for the data layer, where the probability of error during transmission is denoted as p_e.
Finally, we consider two regulatory constraints under which the OSA network is allowed to utilize the PU spectrum provided the channel is idle: (i) the maximum detection delay t_{d,max}, i.e. a time limit within which an SU must detect a PU, and (ii) the minimum detection probability p_{d,min}, i.e. the probability with which an OSA system has to detect a PU signal with minimum signal to noise ratio γ. Note that in the event of mis-detection and subsequent SU transmission in a channel occupied by the PU, a packet fragment is considered successfully transmitted, since in our model the transmission power of the SU is much higher than the interference from the PU, and the regulatory requirements considered here do not constrain SU transmission power¹ (refer for example to the IEEE 802.22 draft where Urgent Coexistence Situation packets are transmitted on the same channel as an active PU [10], [11]). Moreover, maximum transmission power is a metric specific to overlay OSA systems [19, Sec. 2.2.5 and 8.2.1] where typically no spectrum sensing is considered. Also, we do not consider a metric based on a maximum allowable level of collisions between PU and SU. Note that the parameters of the introduced model are summarized in Table I and the abbreviations are summarized in Table II.
¹The opposite case is to assume that a packet fragment is considered as lost and retransmitted. This approach however requires an acknowledgement mechanism for a lost packet fragment, see for example [17], [41, Sec. II], which contradicts the model assumption on the geometric distribution of SU packets.
2) Macroscopic Model: We assume the same system model as for the microscopic case, except for the following differences. The OSA network performs detection rarely, and the PU is stable for the duration of the OSA network operation, i.e. it is either transmitting constantly on a channel or stays idle. In other words, a quiet period occurs once for multiple time slots, see Fig. 1(b). Also, since the PU is considered stable on every channel, we do not consider all types of OSA MAC protocols introduced for the microscopic model.
Instead we use the classical DCC and HCC models proposed in [34] with the corrections of [15], accounting for the incomplete transition probability calculations whenever the OSA network occupied all PU channels and a new connection was established on the control channel.
B. Formal Problem Description
To compute the maximum throughput for different combinations of protocols and models, we define an optimization problem. The objective is the OSA network link layer throughput R t . Therefore, considering the regulatory constraints given above we need to
$$\text{maximize } R_t = \xi R \quad \text{subject to} \quad p_d = p_{d,\min},\; t_d \le t_{d,\max}, \qquad (1)$$
where t_d is the detection time, i.e. the time needed to complete the whole detection operation as described in Section III-D, R is the steady state link layer throughput without sensing and switching overhead, which will be computed in Section IV, and
ξ = (t_t − t_q − t_p)/t_t is the fraction of the time slot that remains for data transfer. Note that R in (1) is itself affected by p_f, as will be shown in Section IV. Also note that t_p is removed from the second condition of (2), since the switching time is negligible in comparison to the inter-sensing time.
(Notation recovered from Table I: P^{(i)}_{x,y} and R^{(z)}_{x,y} denote the total PU arrival probabilities for the no-buffering and buffering cases, respectively; T^{(j)}_k and S^{(j)}_m denote the termination and connection arrangement probabilities; \bar{S}^{(1)}_m and \hat{S}^{(1)}_m are used in (17)–(18).)
III. LAYERED MODEL OF SPECTRUM SENSING ANALYSIS
To design the spectrum sensing, we follow the approach of [7] in which the spectrum sensing process is handled jointly by (i) the sensing radio, (ii) the sensing PHY, and (iii) the sensing MAC. Using this layered model we can compare existing approaches to spectrum sensing and choose the best sensing architecture in a systematic way. Since the parameters of the design framework in (1) are determined by the choices of individual layers, we describe and parametrize each layer of the spectrum sensing, later describing cross-layer parameters.
A. Sensing Radio
The sensing radio scans the PU spectrum and passes the spectrum sensing result to the sensing PHY for analysis. The sensing radio bandwidth is given as αMb, where α is the ratio of the bandwidth of the sensing radio to the total PU bandwidth and b MHz is the bandwidth of each PU channel. With α > 1/M a node can sense multiple channels at once. However, the cost of such a wideband sensing radio increases.
B. Sensing PHY
The sensing PHY analyzes the measurements from the sensing radio to determine if a PU is present in a channel. Independent of the sensing algorithm, such as energy detection, matched filter detection or feature detection [48], [49], there exists a common set of parameters for the sensing PHY: (i) the time to observe the channel by one node, t_e µs, (ii) the PU signal to noise ratio detection threshold θ, and (iii) the transmit time of one bit of sensing information, t_a = 1/C µs. We denote the conditional probability of the sensing result as p_{ij}, i, j ∈ {0, 1}, where j = 1 denotes PU presence and j = 0 otherwise, while i = 1 indicates a detection result of the PU being busy and i = 0 otherwise. Observe that p_{10} = 1 − p_{00} and p_{11} = 1 − p_{01}.
As noted in Section II-A, we consider energy detection as the PU detection algorithm since it does not require a priori information of the PU signal. For this detection method in Rayleigh plus Additive
White Gaussian Noise channel p 10 is given as [15, Eq. (1)]
$$p_{10} = \frac{\Gamma(\epsilon, \theta/2)}{\Gamma(\epsilon)}, \qquad (3)$$
and
and p_{11} is [15, Eq. (3)]
$$p_{11} = e^{-\frac{\theta}{2}}\left[\sum_{h=0}^{\epsilon-2} \frac{\theta^{h}}{h!\,2^{h}} + \left(\frac{1+\gamma}{\gamma}\right)^{\epsilon-1}\left(e^{\frac{\theta\gamma}{2+2\gamma}} - \sum_{h=0}^{\epsilon-2}\frac{(\theta\gamma)^{h}}{h!\,(2+2\gamma)^{h}}\right)\right], \qquad (4)$$
where Γ(·) and Γ(·, ·) are complete and incomplete Gamma functions, respectively, and ǫ = ⌊t e αM b⌋ is a time-bandwidth product. By defining G ǫ (θ) = p 10 and θ = G −1 ǫ (p 10 ), we can derive p 11 as a function of p 10 and t e .
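For concreteness, the following Python sketch evaluates (3) and (4) with SciPy and inverts G_ε(θ) to obtain the threshold for a target false alarm probability; the helper names are ours, and the implementation of (4) follows the reconstruction given above.

```python
import numpy as np
from scipy.special import gammaincc, gammainccinv, factorial

def p_false_alarm(theta, eps):
    """Eq. (3): p_10 = Gamma(eps, theta/2) / Gamma(eps), i.e. the regularized
    upper incomplete gamma function evaluated at theta/2."""
    return gammaincc(eps, theta / 2.0)

def threshold_from_pfa(p10, eps):
    """Invert G_eps to obtain the energy threshold theta for a target p_10."""
    return 2.0 * gammainccinv(eps, p10)

def p_detection(theta, eps, gamma):
    """Eq. (4): detection probability of the energy detector over Rayleigh fading."""
    h = np.arange(eps - 1)                              # h = 0, ..., eps - 2
    s1 = np.sum(theta**h / (factorial(h) * 2.0**h))
    s2 = np.sum((theta * gamma)**h / (factorial(h) * (2.0 + 2.0 * gamma)**h))
    bracket = s1 + ((1.0 + gamma) / gamma)**(eps - 1) * (
        np.exp(theta * gamma / (2.0 + 2.0 * gamma)) - s2)
    return np.exp(-theta / 2.0) * bracket

if __name__ == "__main__":
    eps, gamma = 5, 10 ** (-5 / 10)                     # time-bandwidth product, -5 dB SNR
    theta = threshold_from_pfa(p10=0.1, eps=eps)
    print("p_10 =", p_false_alarm(theta, eps), " p_11 =", p_detection(theta, eps, gamma))
```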
C. Sensing MAC
The sensing MAC is a process responsible for sensing multiple channels, sharing sensing results with other users, and making a final decision on the PU presence. Because of the vast number of possibilities for sensing MAC algorithms it is hard to find a general set of parameters. Instead, we derive crosslayer parameters for a specific option of the sensing MAC. This methodology can be applied to any new sensing MAC scheme. We now introduce classifications which will be used in the derivation of cross-layer parameters.
1) Sensing Strategy for Grouping Channels and Users:
Each SU has to determine which channels should be sensed among the M channels. To reduce the sensing and reporting overhead, the OSA system can divide users and channels into n_g sub-groups [50]. Sub-group i ∈ {1, · · · , n_g} is formed by n_{u,i} users who should sense m_{s,i} channels to make a final decision cooperatively. Assuming that all users are equally divided into groups, m_{s,i} ∈ {⌊M/n_g⌋, ⌈M/n_g⌉} and n_{u,i} ∈ {⌊N/n_g⌋, ⌈N/n_g⌉}. Note that for M/n_g ∈ ℕ and N/n_g ∈ ℕ all sub-groups have the same n_{u,i} = N/n_g and m_{s,i} = M/n_g for all i. Given N and M, if n_g is small, more users are in a group and the collaboration gain increases, but at the same time more channels must be sensed, which results in more time overhead for sensing. For large n_g this relation is the opposite. A simple assignment satisfying these size constraints is sketched below.
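The following Python sketch (our own illustration) produces one possible assignment of users and channels to sub-groups with sizes in {⌊·⌋, ⌈·⌉}, as required above; any other assignment with the same group sizes is equally valid.

```python
def split_into_groups(N, M, n_g):
    """Divide N SUs and M channels into n_g sensing sub-groups as evenly as
    possible; returns the group sizes n_u_i and the channel counts m_s_i."""
    n_u = [N // n_g + (1 if i < N % n_g else 0) for i in range(n_g)]
    m_s = [M // n_g + (1 if i < M % n_g else 0) for i in range(n_g)]
    return n_u, m_s

# Example: 12 SUs sensing 12 channels split into 5 sub-groups.
print(split_into_groups(N=12, M=12, n_g=5))   # ([3, 3, 2, 2, 2], [3, 3, 2, 2, 2])
```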
2) Combining Scheme: By combining sensing results of other users, a OSA network makes a more reliable decision on PU state. As considered in [13], [51], we will take κ as a design parameter for the sensing MAC and find an optimum value to maximize the performance. Note that for the case of N user cooperation if κ = 1, the combining logic becomes the "or" rule [19,Sec. 3.2], [42, Sec. III-C] and if κ = N , it becomes the "and" rule.
3) Multiple Access for Measurement Reporting:
To transmit sensing results of multiple users through the shared media, a multiple access scheme is needed. Note that this multiple access scheme is only for the reporting process, different from the multiple access for data transfer. We consider the following approaches.
a) Time Division Multiple Access (TDMA):
This is a static and well-organized multiple access scheme for which a designated one bit slot for sensing report transmission is assigned to each user [43], [50]. b) TTDMA: In TDMA, when the SU receives all the reporting bits from other users the SU makes a final decision of presence of PU on the channel. However, in OSA network using TTDMA SUs may not need to wait until receiving the last reporting bit, because for the "κ out of N " rule, a reporting operation can stop as soon as κ one bits denoting PU presence are received. This sensing MAC aims at reducing the reporting overhead, but unfortunately we have not seen any paper proposing and discussing TTDMA.
c) Single Slot Multiple Access (SSMA):
For this scheme, known also as the boosting protocol [52], only one bit slot is assigned for reporting and all SUs use this slot as a common reporting period. Any SU that detects a PU transmits one bit in the common designated slot. Otherwise, a user does not transmit any bit in the designated slot. Then, reporting bits from SUs who detect a PU are overlapped and as a result all power of the slot is summed up. By measuring the power in the designated slot, a SU can determine whether the primary user exists or not. We assume perfect power control and perfect synchronization.
Even though this may not be practical, because carrier frequency or the phase offset cannot be avoided in real systems, this scheme serves as an upper bound for sensing MAC performance. For the analysis of SSMA in isolation but in a more realistic physical layer conditions the reader is referred to [53], [54].
D. Cross-Layer Parameters
Considering the combined impact of the individual layers, we derive cross-layer parameters in the framework as described in (1). More specifically these are t q and t d , derived as a function of individual parameters and p f , and p d , denoting final network-wide probabilities of false alarm and detection, respectively.
1) Detection Time t d and Quiet Time t q : Detection time t d is defined as the time duration from the point that a SU starts to sense, to the point that a SU makes a final decision on PU presence. Regardless of the data transfer and spectrum sensing time overlap, the final detection decision is made only after combining the sensing group's reported information [55]. Thus t d is the time from the start of the sensing phase to the end of the reporting phase, i.e. t d = t s + t r .
Since the data transfer may not be possible during sensing or reporting phases t q ≤ t d , depending on the approach. When spectrum sensing and data transfer are divided in time division manner t q = t s + t r .
Note that three other methods sharing the same problem are possible (they will not be considered in the remainder of the paper): (i) simultaneous reporting and data, which can be implemented by using the separate channel as in [56], for which t q = t s , (ii) simultaneous sensing and data, implemented by using the frequency hopping method as in [57], for which t q = t r , and (iii) simultaneous sensing, reporting, and data for which t q = 0. Conceptually, simultaneous sensing, reporting, and data transfer is possible and seems most efficient but we have not found any implementation of it in the literature. Note that in order to implement simultaneous sensing and transmission at least two radio front ends are needed, which increases the total cost of the device.
Define \bar{m}_s as the number of individual sensing events needed to complete the sensing operation and \bar{m}_r as the average number of bits to report. Then the sensing time and the reporting time can be calculated as t_s = \bar{m}_s t_e and t_r = \bar{m}_r t_a. Note that \bar{m}_s is affected by the bandwidth of the sensing radio, because multiple channels can be scanned at once if the bandwidth of the sensing radio is wide. For the case that the sensing radio is narrower than the bandwidth to sense, i.e. α < max{m_{s,1}, · · · , m_{s,n_g}}/M, we assume that an SU monitors all channels by sequential sensing [33], because the reporting phase should be synchronized after all SUs finish the sensing phase. With this assumption \bar{m}_s = max{m_{s,1}, · · · , m_{s,n_g}}/(αM), because even though the bandwidth to sense may be less than that of the sensing radio, one sensing cycle is still needed to obtain the information. For \bar{m}_r, because there are n_g groups in an OSA system, \bar{m}_r = \sum_{i=1}^{n_g} \bar{m}_{r,i}, where \bar{m}_{r,i} depends on the multiple access scheme for reporting, which we compute below. a) TDMA: All n_{u,i} users should transmit the sensing results of m_{s,i} channels. Thus, \bar{m}_{r,i} = n_{u,i} m_{s,i}. b) TTDMA: For κ < n_{u,i}/2, if κ ones are received, the reporting process ends. We introduce a variable δ which is the number of bits at which the reporting process finishes. Thus there should be κ − 1 ones within the first δ − 1 bits and the δ-th bit should be a one. Because the range of δ is from κ to n_{u,i}, the average number of bits for this condition is derived as
$$m_{1,i} = \sum_{\delta=\kappa}^{n_{u,i}} \binom{\delta-1}{\kappa-1}\left[(1-q_p)\,\delta\,p_{00}^{\delta-\kappa}\,p_{10}^{\kappa} + q_p\,\delta\,p_{01}^{\delta-\kappa}\,p_{11}^{\kappa}\right]. \qquad (5)$$
Moreover, if the number of received zeros, denoting PU absence, equals n_{u,i} − κ + 1, the reporting process will stop, because even if the remaining bits were all ones the number of ones would still be less than κ. Then the reporting process stops at the δ-th bit if δ − n_{u,i} + κ − 1 ones are received within the first δ − 1 bits and a zero is received at the δ-th bit. The range of δ is from n_{u,i} − κ + 1 to n_{u,i}, and thus the average number of bits for this condition is
$$m_{2,i} = \sum_{\delta=\nu_i}^{n_{u,i}} \binom{\delta-1}{\delta-\nu_i}\left[(1-q_p)\,\delta\,p_{00}^{\nu_i}\,p_{10}^{\delta-\nu_i} + q_p\,\delta\,p_{01}^{\nu_i}\,p_{11}^{\delta-\nu_i}\right], \qquad (6)$$
where ν_i = n_{u,i} − κ + 1. Therefore, because there are m_{s,i} channels to sense in group i, \bar{m}_{r,i} = m_{s,i}(m_{1,i} + m_{2,i}).
For the case κ ≥ n_{u,i}/2, m_{1,i} is calculated by counting zeros and m_{2,i} by counting ones. Thus, we use \bar{m}_{r,i} = m_{s,i}(m_{1,i} + m_{2,i}) again, replacing κ with n_{u,i} − κ + 1, p_{00} with p_{10} and p_{01} with p_{11}.
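The expected number of reporting bits per sub-group given by (5)–(6) can be evaluated directly, as in the Python sketch below (our own illustration). For κ ≥ n_{u,i}/2 we apply the substitution described above as a symmetric swap of the roles of ones and zeros, which is our interpretation of the text.

```python
from math import comb

def expected_report_bits(n_u, m_s, kappa, q_p, p00, p01, p10, p11):
    """Average number of TTDMA reporting bits per group, m_r_i = m_s * (m_1 + m_2),
    following (5)-(6); for kappa >= n_u/2 the roles of ones and zeros are swapped."""
    if kappa >= n_u / 2:
        kappa = n_u - kappa + 1
        p00, p10 = p10, p00
        p01, p11 = p11, p01
    nu = n_u - kappa + 1
    # Eq. (5): reporting stops because the kappa-th 'one' has been received.
    m1 = sum(comb(d - 1, kappa - 1) *
             ((1 - q_p) * d * p00**(d - kappa) * p10**kappa +
              q_p * d * p01**(d - kappa) * p11**kappa)
             for d in range(kappa, n_u + 1))
    # Eq. (6): reporting stops because n_u - kappa + 1 'zeros' have been received.
    m2 = sum(comb(d - 1, d - nu) *
             ((1 - q_p) * d * p00**nu * p10**(d - nu) +
              q_p * d * p01**nu * p11**(d - nu))
             for d in range(nu, n_u + 1))
    return m_s * (m1 + m2)

# Example: a sub-group of 8 SUs sensing 3 channels with a "2 out of 8" rule.
print(expected_report_bits(n_u=8, m_s=3, kappa=2, q_p=0.2,
                           p00=0.9, p01=0.1, p10=0.1, p11=0.9))
```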
Because we assumed so far that κ is known to each node in the network, OSA nodes know when to stop reporting measurements and start data communication without being instructed by external parties.
For comparison we analyze another type of TTDMA, denoted as κTTDMA, where a cluster head node makes the decision to stop the reporting phase in the OSA network. For example, this approach may be necessary if the κ value is updated in real time. In the worst case scenario this approach requires two bits to be reported by the SU, i.e. one for sending the sensing data and one for an acknowledgment from the cluster head to report. Then (5) and (6) are modified accordingly.
2) Probabilities of False Alarm p_f and Detection p_d: The network-wide probability of false alarm is
$$p_f = \frac{1}{n_g}\sum_{i=1}^{n_g} p_{f,i}, \qquad (7)$$
where p_{f,i} is the probability of false alarm of sub-group i. Using (7) we can also derive p_d by substituting p_{f,i} with p_{d,i}.
a) TDMA: Since a final decision is made only after all n_{u,i} bits are received,
$$p_{f,i} = \sum_{\delta=\kappa}^{n_{u,i}} \binom{n_{u,i}}{\delta}\, \tilde{p}_{10}^{\,\delta}\, \tilde{p}_{00}^{\,n_{u,i}-\delta}, \qquad (8)$$
where \tilde{p}_x = (1 − p_e) p_x + p_e (1 − p_x) for p_x ∈ {p_{10}, p_{00}}, while p_{d,i} is derived from (8) by substituting \tilde{p}_{10} with \tilde{p}_{11} and \tilde{p}_{00} with \tilde{p}_{01}.
b) TTDMA: In this case an SU does not need to receive n_{u,i} bits to make a final decision, because the reporting phase ends when the number of ones reaches κ. To derive p_{f,i} for this case, we introduce a variable β denoting the number of zeros. Then the total number of reporting bits is κ + β if the last bit is a one, because otherwise the reporting phase would end at fewer than κ + β bits. Therefore, there should be β zeros within the first κ + β − 1 bits and the last bit should be the κ-th one. Because β can vary from 0 to n_{u,i} − κ,
$$p_{f,i} = \sum_{\beta=0}^{n_{u,i}-\kappa} \binom{\kappa+\beta-1}{\beta}\, \tilde{p}_{10}^{\,\kappa}\, \tilde{p}_{00}^{\,\beta}. \qquad (9)$$
Finally, p_{d,i} is obtained from (9) by substituting \tilde{p}_{10} with \tilde{p}_{11} and \tilde{p}_{00} with \tilde{p}_{01}.
c) SSMA: Obviously, the process of the reporting information for SSMA is the same as for TDMA.
Therefore p f,i and p d,i are defined the same as for TDMA.
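The sub-group and network-wide decision probabilities in (7)–(9) are straightforward to compute; the Python sketch below (our own illustration) does so for TDMA/SSMA and TTDMA, including the reporting-error model p̃_x. The detection probabilities follow from the same functions by substituting p_10 with p_11, as stated above; since (8) and (9) describe the same "κ out of n_{u,i}" event, the two routines return the same value, which serves as a consistency check.

```python
from math import comb

def noisy(p, p_e):
    """Reported-bit error model: p_tilde = (1 - p_e) * p + p_e * (1 - p)."""
    return (1.0 - p_e) * p + p_e * (1.0 - p)

def group_pfa_tdma(n_u, kappa, p10, p_e):
    """Eq. (8): sub-group false alarm for TDMA/SSMA with the 'kappa out of n_u' rule."""
    p10t, p00t = noisy(p10, p_e), noisy(1.0 - p10, p_e)
    return sum(comb(n_u, d) * p10t**d * p00t**(n_u - d)
               for d in range(kappa, n_u + 1))

def group_pfa_ttdma(n_u, kappa, p10, p_e):
    """Eq. (9): sub-group false alarm for TTDMA (reporting stops at the kappa-th one)."""
    p10t, p00t = noisy(p10, p_e), noisy(1.0 - p10, p_e)
    return sum(comb(kappa + b - 1, b) * p10t**kappa * p00t**b
               for b in range(n_u - kappa + 1))

def network_average(group_values):
    """Eq. (7): network-wide probability as the average over the n_g sub-groups."""
    return sum(group_values) / len(group_values)

# Two identical sub-groups of 6 SUs, "2 out of 6" rule, 1% reporting error.
p_f = network_average([group_pfa_ttdma(6, 2, p10=0.1, p_e=0.01)] * 2)
p_d = network_average([group_pfa_ttdma(6, 2, p10=0.9, p_e=0.01)] * 2)  # p_11 in place of p_10
print(p_f, p_d, group_pfa_tdma(6, 2, 0.1, 0.01))   # the last value equals p_f
```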
IV. MULTICHANNEL OSA MAC PROTOCOL ANALYSIS
In this section we present the analysis of throughput R for all considered combinations of MAC protocol architectures. As noted in Section I-C, we propose a set of new multichannel MAC protocols for OSA. We will first describe their operation, later presenting the analysis framework.
A. Description of New Multichannel MAC protocols for OSA
We consider two major groups of MAC protocols for OSA: (i) those enabling buffering of the SU connections preempted by the PU arrival, and (ii) those enabling switching of the SU connections to a vacant channel when preempted. In the former group, when the PU arrives the existing SU connection will pause at the time of preemption and resume on the same channel as soon as the PU goes idle. We assume that the SU always waits for the PU to finish its transmission. The case where the buffered SU connection expires after a predefined time, not analyzed here, is presented in [22] for the centralized network. We do not consider any channel reservation schemes for potential SU connections to be buffered [25]. When buffering is not possible, the preempted SU connection is considered lost and a new connection must be established on the control channel. In the latter group, when the PU arrives the existing SU connection will look for a new empty channel to continue transmission. If such a channel cannot be found, the connection is lost. Without channel switching, the existing SU connection is lost as soon as the PU preempts the channel.
Obviously we can have four combinations of these groups for OSA MAC, all of which have been considered in the analysis: (i) with no buffering and no channel switching [30], denoted as B_0S_0, where SU connections preempted by the PU are lost; (ii) with no buffering and channel switching [24], [25], [26], denoted as B_0S_1, where SU connections preempted by the PU switch to a free channel and connections that cannot find a free channel are blocked; (iii) with buffering and no channel switching [15], [22], [23], denoted as B_1S_0, where preempted SU connections are buffered until the PU leaves the channel; and (iv) with both buffering and channel switching, denoted as B_1S_1. We propose a three dimensional Markov chain whose state vector is given as (X_t, Y_t, Z_t), where X_t is the number of channels utilized by SUs for data transmission, Y_t is the number of channels occupied by PUs, and Z_t is the number of SU connections at time slot t; in contrast to prior models [Sec. III therein], buffered SU connections are not considered to be utilizing the PU channels.
Considering a real OSA system, there are conditions that qualify valid states. With SU connection buffering-enabled MAC protocols for OSA, the number of connections cannot be less than the number of channels utilized by SUs, i.e. X t ≤ Z t . Additionally, SUs do not pause transmissions over unoccupied channels. Therefore, the number of SU connections not utilizing a channel cannot exceed the number of channels occupied by PUs, i.e. Z t − X t ≤ Y t or Z t ≤ X t + Y t . Finally, the sum of the channels utilized by PUs and the SUs cannot be greater than M D , i.e. X t + Y t ≤ M D . By combining these conditions we can compactly write them as
0 ≤ X t ≤ Z t ≤ X t + Y t ≤ M D .(10)
When connection buffering is disabled the number of SU connections must be the same as the number of channels utilized by SUs, i.e. X t = Z t . Therefore, for non-buffering SU connection OSA MAC
protocols (X t , Y t , Z t = X t ) ⇒ (X t , Y t ).
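As a small illustration of the resulting state space, the Python sketch below (our own code) enumerates the states allowed by (10); we additionally cap the number of connections at s_m = max{S}, the largest number of simultaneous SU connections, which is how the summation limits of (11) are used.

```python
def valid_states(M_D, s_m, buffering=True):
    """Enumerate Markov-chain states (x, y, z) satisfying (10):
    0 <= x <= z <= x + y <= M_D, with at most s_m SU connections.
    Without buffering the chain collapses to (x, y) since z = x."""
    states = []
    for x in range(min(M_D, s_m) + 1):
        for y in range(M_D - x + 1):                       # x + y <= M_D
            zs = range(x, min(x + y, s_m) + 1) if buffering else [x]
            states.extend((x, y, z) for z in zs)
    return states

print(len(valid_states(M_D=3, s_m=3)))                     # buffering-enabled chain
print(len(valid_states(M_D=3, s_m=3, buffering=False)))    # non-buffering chain
```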
For the microscopic case the average channel throughput, excluding switching and sensing overhead, is computed as
$$R = C\sum_{x=0}^{s_m}\sum_{y=0}^{M_D}\sum_{z=0}^{s_m} x\,\pi_{xyz}, \qquad (11)$$
where s m = max{S} and the steady-state probability π xyz is given by
$$\pi_{xyz} = \lim_{t\to\infty}\Pr(X_t = x, Y_t = y, Z_t = z), \qquad (12)$$
and the state transition probabilities to compute (12) will be derived in the subsequent section, uniquely for each OSA multichannel MAC protocol.
Finally, for the macroscopic case the average channel throughput, excluding switching and sensing overhead, is computed as
R = {q p (1 − p d ) + (1 − q p )(1 − p f )}R c C,(13)
where R_c = \sum_{i=1}^{s_m} i\,\pi_i and π_i is the solution of the steady state Markov chain given by [15, Eq. (13)]. Since the macroscopic model assumes no change in PU activity in each time slot, SU connection buffering and switching are not needed. Note that, contrary to the incorrect assumptions of [15, Eq. (12)], [34, Eq. (7) and (9)], we compute R in (11) and (13) taking all the channels into account, irrespective of the type of OSA MAC. This is because the models of [15], [34] considered only data channels in the final stage of the throughput calculation for DCC, assuming that no data traffic is transmitted on the control channel. However, the utilization must be computed over all channels, irrespective of whether one channel carries only control data or not.
C. Derivation of State Transition Probabilities for the Microscopic Model
We denote the state transition probability as
$$p_{xyz|klm} = \Pr(X_t = x, Y_t = y, Z_t = z \mid X_{t-1} = k, Y_{t-1} = l, Z_{t-1} = m). \qquad (14)$$
Note that changes in X t and Z t depend on the detection of the PU. In addition, changes in Z t depend on
OSA traffic characteristics such as the packet generation probability p and the average packet length 1/q.
Also, note that the steady state probability vector π containing all possible steady state probabilities π_{xyz} is derived by solving π = πP, where the entries of the right stochastic matrix P are defined by (14), knowing that \sum_{x,y,z} \pi_{xyz} = 1.
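Numerically, the stationary vector can be obtained by replacing one balance equation of π = πP with the normalization constraint, as in the short Python sketch below (our own code; the toy matrix is not the OSA chain itself).

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P together with sum(pi) = 1 for a right stochastic matrix P."""
    n = P.shape[0]
    A = P.T - np.eye(n)      # balance equations (P^T - I) pi = 0
    A[-1, :] = 1.0           # replace the last equation with the normalization
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(stationary_distribution(P))   # approximately [0.833, 0.167]
```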
As a parameter to model the PU state, p_c denotes the probability that the OSA network collectively detects a PU channel as occupied, i.e.
p c = q p p d + (1 − q p )p f .(15)
We introduce two supporting functions. First, we denote by T^{(j)}_k the probability that j out of k ongoing SU connections terminate in a given time slot,
$$T^{(j)}_k = \begin{cases} \binom{k}{j}\, q^{j} (1-q)^{k-j}, & k \ge j \ge 0,\\ 0, & \text{otherwise}. \end{cases} \qquad (16)$$
Note that k in T^{(j)}_k denotes the number of SU connections existing at the beginning of the time slot. Second, we denote by S^{(j)}_m the connection arrangement probability, i.e. the probability that j new SU connections are established on the control channel when m connections already exist (cf. Eqs. (5) and (8) of the baseline multichannel MAC model), additionally considering PU detection on the control channel. If a PU is detected on a control channel, an SU connection cannot be generated because there is no chance to acquire a data channel. We then have [15, Eq. (17)]
$$S^{(j)}_m = \begin{cases} \bar{S}^{(1)}_m, & j = 1\ \text{(DCC)},\\[2pt] \bar{S}^{(1)}_m\,\dfrac{N-2m-1}{N-1}\cdot\dfrac{M_D-m}{M}, & j = 1\ \text{(HCC)},\\[2pt] 1-\bar{S}^{(1)}_m, & j = 0,\\[2pt] 0, & \text{otherwise}, \end{cases} \qquad (17)$$
where
$$\bar{S}^{(1)}_m = \begin{cases} \hat{S}^{(1)}_m, & \text{PU-free control channel (DCC only)},\\[2pt] (1-p_c)\,\hat{S}^{(1)}_m, & \text{otherwise}. \end{cases} \qquad (18)$$
This is because we assume that an SU that has a connection but pauses data transmission due to the PU presence does not try to make another connection. We can now derive the transition probabilities individually for all four OSA MAC protocols.
1) Case B_0S_0: Recall that for non-buffering OSA MAC protocols Z_t = X_t. Thus p_{xy|kl} is defined as given in (20) below. Now, consider the case x < k + 1. When an SU data connection is terminated, there can be two possible reasons: (i) an SU completes its transmission, or (ii) a PU is detected on a channel that was assigned to an SU for data transmission before sensing. The former was analyzed in [34, Sec. 3]. To model the latter, we introduce a variable i denoting the number of channels that were reserved for SU data transmission before sensing but cannot be utilized due to PU detection. In addition, we need to discuss the edge state, which covers two cases: (i) no more channels are available, either utilized by SUs or PUs, and (ii) all possible SU connections are established, which we denote as the "full connection state". For the transition from the full connection state to the edge state, we have to consider the case that one new connection is generated while no existing connection is terminated, which means the trial for the new connection by the free SU is not successful because all possible connections already exist.
Writing all conditions compactly, denote the indicator for the edge state
$$\mathbb{1}_{x,y} = \begin{cases} 1, & x + y = M_D \text{ or } x = s_m,\\ 0, & \text{otherwise}, \end{cases} \qquad (19)$$
and define P^{(i)}_{x,y} as the total PU arrival probability for the non-buffering case. Then
$$p_{xy|kl} = \begin{cases}
0, & x > k+1,\\[2pt]
T^{(0)}_k S^{(1)}_k P^{(0)}_{x,y}, & x = k+1,\\[2pt]
\sum_{i=0}^{i_m}\left[T^{(k-x-i)}_k S^{(0)}_k + T^{(k-x-i+1)}_k S^{(1)}_k\right] P^{(i)}_{x,y}, & x < k+1,\ k < s_m \text{ or } \mathbb{1}_{x,y}=0,\\[2pt]
\sum_{i=0}^{i_m}\left[T^{(k-x-i)}_k S^{(0)}_k + T^{(k-x-i+1)}_k S^{(1)}_k\right] P^{(i)}_{x,y} + T^{(0)}_k S^{(1)}_k P^{(0)}_{0,y}, & x < k+1,\ k = s_m,\ \mathbb{1}_{x,y}=1,
\end{cases} \qquad (20)$$
where i m = min(s m − x, y).
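To make the structure of (16) and (20) explicit, the Python sketch below (our own illustration) evaluates T^{(j)}_k and assembles one non-edge entry of (20); the connection arrangement probability S(j, k) and the PU arrival probability P(i, x, y) are assumed to be supplied by the caller, since their full expressions are not reproduced here.

```python
from math import comb

def T(j, k, q):
    """Eq. (16): probability that j of the k ongoing SU connections terminate."""
    if 0 <= j <= k:
        return comb(k, j) * q**j * (1.0 - q)**(k - j)
    return 0.0

def p_trans_B0S0(x, y, k, q, S, P, s_m):
    """One entry of (20) for B0S0, away from the special edge-state correction.
    S(j, k) and P(i, x, y) are caller-supplied placeholder callables."""
    if x > k + 1:
        return 0.0
    if x == k + 1:
        return T(0, k, q) * S(1, k) * P(0, x, y)
    i_m = min(s_m - x, y)
    return sum((T(k - x - i, k, q) * S(0, k) +
                T(k - x - i + 1, k, q) * S(1, k)) * P(i, x, y)
               for i in range(i_m + 1))
```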
2) Case B 0 S 1 : Although in the SU connection non-switching case both DCC and HCC can be considered, only DCC will be able to perform switching without any additional control data exchange, which we prove formally.
Before going into the details of the derivation, note that for the class of OSA MAC protocols with a dedicated control channel every node can follow the connection arrangement of the entire network. In contrast, for HCC [34] it is impossible for a single node to learn the whole network connection arrangement, since each sender-receiver pair cannot listen to others while following its own hopping sequence. We now present the following proof.
Theorem 1: Channel switching in DCC can be performed without any additional control message exchange.
Proof: We prove this by showing a possible distributed channel switching process. Following earlier observation, in DCC each node can trace the connection arrangement of others, i.e. which channel has been reserved by a sender receiver pair. To distribute the switching events equally among SUs each SU computes the priority level as
$$\Pi_{i,t} = \Pi_{i,t-1} + \mathbb{1}_p, \qquad (21)$$
where
$$\mathbb{1}_p = \begin{cases} 1, & \text{preemption by the PU},\\ 0, & \text{otherwise}, \end{cases} \qquad (22)$$
and Π_{i,t} is the priority level of SU i at time t, with the initial priority Π_{i,0} given by the MAC address of the SU. Preempted connections are then switched to the vacant channels in order of decreasing priority,
$$I_{a,t} \rightarrow U_{1,t},\quad I_{b,t} \rightarrow U_{2,t},\quad \cdots,\quad I_{c,t} \rightarrow U_{|U|,t}, \qquad (23)$$
where |I| = |U| = M_D − X_t − Y_t, → is the mapping operator denoting the process of switching active SU connection i to free channel j, I_{i,t} denotes the index of a communicating SU (transmitter) at time t, with Π_{a,t} > Π_{b,t} > · · · > Π_{c,t}, and U_{j,t} denotes the free channel with index j at time t.
Note that existing connections that have not been mapped to a channel are considered blocked. Also note that in the algorithm given in Theorem 1 connections are preempted by the PU randomly with equal probability.
Since new SU connections are also assumed to use new channels randomly with equal probability, each SU connection is blocked with uniform probability.
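A minimal Python sketch of the distributed switching rule of Theorem 1 is given below (our own illustration); it maps preempted transmitters to vacant channels in decreasing priority order and updates the priorities according to (21)–(22). The data structures are assumptions made for the example.

```python
def switch_connections(preempted, free_channels, priority):
    """Theorem 1: map preempted SU connections to vacant channels in decreasing
    priority order; connections left without a channel are blocked.
    `priority` maps a transmitter index to its current priority level Pi_{i,t}."""
    ordered = sorted(preempted, key=priority.get, reverse=True)
    mapping = dict(zip(ordered, free_channels))        # I_a -> U_1, I_b -> U_2, ...
    blocked = [i for i in ordered if i not in mapping]
    # Eq. (21)-(22): every preempted connection increases its priority by one.
    new_priority = {i: p + (1 if i in preempted else 0) for i, p in priority.items()}
    return mapping, blocked, new_priority

# Example: three preempted transmitters compete for two vacant channels.
prio = {1: 0.001, 2: 0.007, 3: 0.004}   # initial priorities, e.g. derived from MAC addresses
print(switch_connections({1, 2, 3}, free_channels=[5, 6], priority=prio))
```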
To enable SU connection switching in HCC one way is to augment it with a separate radio front end which would follow the hopping sequences and control data exchange of the OSA network. Obviously this increases the cost of hardware and contradicts the idea of HCC, where all channels should be used for data communication. Therefore while evaluating OSA MAC protocols in Section V-B, we will not consider SU connection switching for HCC.
We now define the state transition probability p_{xy|kl} for the considered OSA MAC protocol. Because x > k + 1 is infeasible, the state transition probability for x > k + 1 equals zero. For x = k + 1, y PUs can appear on any of the M_D channels, because even though a PU is detected the SUs can still transmit data by switching to the idle channels, and the possible number of PU appearance patterns is \binom{M_D}{y}. Note that the possible number of PU appearance patterns in the case B_0S_1 is always \binom{M_D}{y}, even for the edge state, because the data channel can be changed by switching to a vacant channel after the PU detection. Because it is impossible to create more than one new connection at a time, the OSA connection creation probabilities for x = k + 1 are the same as in (20), i.e. T^{(0)}_k S^{(1)}_k. Then
$$p_{xy|kl} = \begin{cases}
0, & x > k+1,\\[2pt]
T^{(0)}_k S^{(1)}_k P^{(0)}_{0,y}, & x = k+1,\\[2pt]
\left[T^{(k-x)}_k S^{(0)}_k + T^{(k-x+1)}_k S^{(1)}_k\right] P^{(0)}_{0,y}, & x < k+1,\ \mathbb{1}_{x,y} = 0,\\[2pt]
\sum_{i=0}^{i_m}\left[T^{(k-x-i)}_k S^{(0)}_k + T^{(k-x-i+1)}_k S^{(1)}_k\right] P^{(0)}_{0,y}, & x < k+1,\ k < s_m,\ \mathbb{1}_{x,y}=1,\\[2pt]
\sum_{i=0}^{i_m}\left[T^{(k-x-i)}_k S^{(0)}_k + T^{(k-x-i+1)}_k S^{(1)}_k\right] P^{(0)}_{0,y} + T^{(0)}_k S^{(1)}_k P^{(0)}_{0,y}, & x < k+1,\ k = s_m,\ \mathbb{1}_{x,y}=1.
\end{cases} \qquad (24)$$
3) Case B 1 S 0 : Before we discuss this case we present the following observation, which implicates the design of simulation models and derivation of p xyz|klm for SU connection buffering MAC protocols.
Observation 2: For all SU connection buffering OSA MAC protocols the same average link level throughput results from creating a brand new connection or resuming a previously preempted and buffered connection on the arrival of PU on a channel.
Proof: Due to the memoryless property of the geometric distribution
\Pr(1/q_i > 1/q_{t_1} + 1/q_{t_2} \mid 1/q_i > 1/q_{t_1}) = \Pr(1/q_i > 1/q_{t_2}),
where 1/q_i is the duration of connection i, 1/q_{t_1} is the length of the connection up to time t_1, when it was preempted by the PU, and 1/q_{t_2} is the remaining length of the connection after the SU resumes it at time t_2. Since either a newly generated SU connection after resumption or the remaining part of a preempted connection needs a new connection arrangement on the control channel, the number of slots occupied by each connection type is the same.
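The memoryless property invoked in the proof can also be checked numerically; the sketch below (with an illustrative packet-completion probability q and cut points t1, t2 chosen by us) compares the conditional tail probability with the unconditional one.

```python
import random

def geometric(q, rng):
    """Number of slots until the packet finishes, P(len = k) = q*(1-q)**(k-1)."""
    k = 1
    while rng.random() >= q:
        k += 1
    return k

rng = random.Random(0)
q, t1, t2 = 0.2, 3, 4
samples = [geometric(q, rng) for _ in range(200_000)]

cond = [s for s in samples if s > t1]                     # connections that survived t1 slots
lhs = sum(s > t1 + t2 for s in cond) / len(cond)          # P(len > t1+t2 | len > t1)
rhs = sum(s > t2 for s in samples) / len(samples)         # P(len > t2)
print(f"conditional {lhs:.4f}  vs  unconditional {rhs:.4f}")   # nearly equal
```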
Having Observation 2, we can derive the transition probabilities. Because packet generation is affected by the number of connections, we use Z_t to classify the conditions under which the state transition probabilities are derived.
Due to the assumption of at most one connection generation per time slot, the state transition probability for z > m + 1 is zero.
p_{xyz|klm} = \begin{cases}
0, & z > m+1, \\
T_k^{(0)} S_m^{(1)} R_{x,y}^{(z)}, & z = m+1, \\
\left[ T_k^{(m-z)} S_m^{(0)} + T_k^{(m-z+1)} S_m^{(1)} \right] R_{x,y}^{(z)}, & z < m+1,\ m < s_m \text{ or } z < s_m, \\
\left[ T_k^{(0)} S_m^{(0)} + T_k^{(1)} S_m^{(1)} + T_k^{(0)} S_m^{(1)} \right] R_{x,y}^{(z)}, & z = m = s_m.
\end{cases}   (26)
Note that this OSA MAC has been previously analyzed in [15]. As has been pointed out, the model proposed there did not work well for the full range of parameters, for the following reason. A Markov model was derived for {X_t, Y_t}, using unmodified transition probabilities from [34, Eq. 6], which were originally used to calculate the average throughput of networks based on non-OSA multichannel MAC protocols. With this limitation, the termination probability in [15, Eq. (14)], analogous to (16), included an aggregated stream of PU and SU traffic, and the PU traffic q_p was later subtracted from the steady-state channel utilization in [15, Eq. (10)], analogous to (11). The approximation of [15], although Markovian, worked reasonably well only for moderate values of PU activity q_p.
4) Case B_1S_1: For this case the state transition probabilities become
p_{xyz|klm} = \begin{cases}
0, & z > m+1, \text{ or } z = x,\ x + y < M_D, \text{ or } m = k,\ k + l < M_D, \\
T_k^{(0)} S_m^{(1)} R_{0,y}^{(0)}, & z = m+1, \\
\left[ T_k^{(m-z)} S_m^{(0)} + T_k^{(m-z+1)} S_m^{(1)} \right] R_{0,y}^{(0)}, & z < m+1,\ m < s_m \text{ or } z < s_m, \\
\left[ T_k^{(0)} S_m^{(0)} + T_k^{(1)} S_m^{(1)} + T_k^{(0)} S_m^{(1)} \right] R_{0,y}^{(0)}, & z = m = s_m.
\end{cases}   (27)
5) Impact of Channel Errors on the Throughput Calculations: All previous analyses were done under
the assumption of an error-free channel. In this section we briefly discuss the impact of channel errors on the throughput calculations.
Channel errors impact the throughput in two ways. First, an error affects throughput when the SU involved in a connection setup fails to receive a control message from the transmitter; as a result, no connection is established. Second, an error affects throughput when an SU that is not associated with the current connection setup misses the control message exchange and thus has an incorrect view of the channel occupancy. For HCC, the control channel is selected as one of the data channels by a hopping method. Thus, if we assume an error on the control channel, it is reasonable to consider an error on the data channel as well.
For the control channel, if an error occurs, a connection fails to be established. This is modeled by multiplying \hat{S}_m by 1 − p_e, where p_e is the probability of error in the current time slot. For the data channel, different error handling strategies can be considered. We focus on the following two situations: i) case E_1, denoting a packet punctured by unrecovered errors, and ii) case E_2, denoting transmission termination on error.
a) Case E_1: It can be assumed that when an error occurs in a time slot, the SU simply discards that time slot and resumes transmitting the remaining packet fragment in the next correct time slot. This is modeled by replacing the capacity C with C(1 − p_e).
b) Case E_2: It can also be assumed that the connection terminates when an error occurs. Thus the probability that the packet finishes transmitting, q, should be replaced by q + (1 − q)p_e. In addition, if the control channel hops to a channel which is being utilized for data transmission and an error occurs, a new connection cannot be established. This is modeled by multiplying \hat{S}_m by (1 − p_e)^2.
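The two error-handling strategies reduce to simple parameter substitutions in the throughput model; a minimal sketch follows (function and variable names are ours, not part of the model).

```python
def apply_channel_errors(C, q, S_hat, p_e, model):
    """Adjust capacity C, packet-completion probability q, and the
    connection-setup term S_hat for a per-slot error probability p_e."""
    if model == "E1":            # punctured slots: erroneous slots are skipped
        return C * (1 - p_e), q, S_hat * (1 - p_e)
    if model == "E2":            # transmission terminates on error
        return C, q + (1 - q) * p_e, S_hat * (1 - p_e) ** 2
    return C, q, S_hat           # E0: error-free reference

print(apply_channel_errors(C=1e6, q=0.1, S_hat=0.3, p_e=0.01, model="E2"))
```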
V. NUMERICAL RESULTS
We now present numerical results for our model. First, we present results independently for spectrum sensing and OSA MAC performance, in Section V-A and Section V-B, respectively, for the microscopic case. Then in Section V-C we present the results of the joint optimization of these two layers in the microscopic and macroscopic context. Moreover, due to the vast number of parameter combinations to consider, we have decided to follow the convention of [15], [34] and focus on two general network setups (unless stated otherwise): a small scale network and a large scale network. In this section we also compare the analytical models of the sensing layer and the OSA MAC protocols with simulation results. The simulations were developed in Matlab and reflect exactly the sensing models and MAC protocols presented in this paper. Simulation results for each system were obtained using the method of batch means with a 90% confidence interval. To evaluate the sensing protocols, each batch contained 100 events and the whole simulation run was divided into 10 batches with no warm-up phase.
When simulating the OSA MAC protocols, each batch contained 1000 events, while the whole simulation was divided into 100 batches with a warm-up period of 100 events.
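For reference, the batch-means procedure used to obtain the confidence intervals can be sketched as follows; this is a generic implementation with a normal-quantile half-width (z = 1.645 for 90%), not the exact code used for the figures.

```python
import math

def batch_means(samples, n_batches, warmup=0, z=1.645):
    """Split the post-warm-up samples into batches and return the grand mean
    together with a z-based confidence half-width."""
    data = samples[warmup:]
    size = len(data) // n_batches
    means = [sum(data[i*size:(i+1)*size]) / size for i in range(n_batches)]
    grand = sum(means) / n_batches
    var = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    half = z * math.sqrt(var / n_batches)
    return grand, half

# e.g. throughput samples collected per event
grand, half = batch_means([0.9, 1.1, 1.0, 1.2, 0.8, 1.0, 1.05, 0.95], n_batches=4)
print(f"{grand:.3f} +/- {half:.3f}")
```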
A. Spectrum Sensing Architecture Performance
For all possible combinations of sensing architectures we compute the probability of false alarm for a wide range of t_q. For the two networks considered we select a set of common parameters, including t_t = t_d. The advantage of TTDMA and SSMA can be shown more clearly if we compare the results for different p_d = p_{d,min} requirements. We can observe that a high detection requirement such as p_d = 0.99 makes the performance worse, as is generally known. However, if TTDMA or SSMA is applied, the performance for p_d = 0.99 can be higher than that of TDMA for p_d = 0.9. For example, in the range t_q < 50 µs in Fig. 2(a), SSMA for p_d = 0.99 outperforms TDMA for p_d = 0.9. Moreover, in Fig. 2(b), for t_q > 550 µs, SSMA and TTDMA for p_d = 0.99 outperform TDMA for p_d = 0.9.
It is important to note that κTTDMA performs worse than the rest of the protocols. This is due to the excessive delay caused by the instant acknowledgment of reporting results to the cluster head node. Note that κTTDMA is a lower bound for the operation of TTDMA. Also note that if TDMA were equipped with an acknowledgment function, as κTTDMA is, its performance would be degraded in the same way as TTDMA's. Since we analyze a static network with pre-set parameter values, e.g. κ does not change over time, in the following sections we proceed with unmodified TTDMA only.
2) Impact of Channel Errors during Reporting on PU Detection Performance:
The results are presented in Fig. 3. For the small and large scale networks, with the same parameters as used in Section V-A1, we have observed the probability of false alarm, keeping the detection probability p_d constant, for varying quiet time t_q. First, comparing Fig. 2 (no channel error) and Fig. 3 (channel error), the impact of errors is clearly visible, i.e. p_f increases for every protocol. However, the relation between the individual protocols stays the same, since errors affect all protocols equally. Second, the effect of errors on the small scale network is smaller than on the large scale network, compare Fig. 3(a) and Fig. 3(b), since the probability that an SU will send a wrong report is larger for a network with a large number of nodes. Lastly,
for small values of κ the probability of false alarm stabilizes and never reaches zero. However, large values of κ significantly reduce the effect of channel errors, because with high κ the probability of making an error decreases rapidly. With 20% of the nodes participating in the cooperative agreement on the PU state, κ = 2 for the small network and κ = 8 for the large scale network, the effect of errors is reduced almost to zero.
3) Impact of Cooperation Level on PU Detection Performance:
The results are presented in Fig. 4.
We have selected TTDMA, with p_d = p_{d,min} = 0.99, as the protocol for further investigation. We observe that for the small scale network, see Fig. 4(a), the performance for κ = 2 is the best, while for the large scale network, see Fig. 4(b), the best performance is achieved for κ = 8 or 16 if p_f < 0.1.
Based on this observation, we conclude that for given detection requirements, a high detection rate of the PU is obtained when κ is well below the total number of SUs in the network. While for the considered setup the optimal κ is roughly 20% of N, this value might be different for other network configurations. An interesting observation is that the number of groups achieving the best performance becomes larger as the number of users N increases. For the small scale network, see Fig. 5(a), the best performance is observed for n_g = 2 or n_g = 3, while for the large scale network, Fig. 5(b), n_g = 6 is the best. This is because for the large scale network the reporting overhead caused by the large number of users offsets the performance improvement achieved by a large cooperation scale. The operation of the remaining protocols is independent of κ, which distinguishes them from TTDMA, whose operation strictly depends on the value of κ considered. Again, comparing Fig. 6(c) and Fig. 6(d), the optimal value of t_q for TTDMA is in the same range as for p_f, which confirms the optimality of the design.
5) Impact of κ on PU
B. OSA MAC Protocol Performance
To evaluate the effectiveness of all proposed and analyzed MAC protocols we have fixed C = 1 Mbps and p = e^{-1}/N, and we assume ideal PU detection in this section (this assumption is relaxed in the joint optimization of Section V-C), i.e. that the spectrum sensing layer is able to obtain such quality of detection. Again, as in Section V-A, results are presented separately for the error-free and the error channel.
1) Impact of PU Activity Level on OSA MAC Protocols:
The results are presented in Fig. 7. We observe that PU activity degrades DCC and HCC with B_0S_0, irrespective of other network parameters.
Their performances are comparable in this case. DCC and HCC perform best with B_1S_0. The results
show that the non-buffering OSA MAC protocols are very sensitive to q_p, with the greatest throughput decrease visible at low levels of PU activity. On the other hand, with connection buffering we observe a linear relation between q_p and R_t.
2) Impact of SU Packet Size on OSA MAC Protocols:
The results are presented in Fig. 8. Obviously, for a larger SU packet size the OSA network is able to grab more capacity. However, when packets become excessively large the throughput saturates. It remains the case that protocols with no buffering and no channel switching obtain the lowest throughput, no matter which network setup is chosen. Interestingly, although
intuitively B_1S_1 should obtain the highest channel utilization, it does not perform better than B_1S_0 due to the large switching time. With t_p approaching zero, DCC B_1S_1 would perform best, irrespective of the network setup, as we discuss below.
3) Impact of Switching Time on OSA MAC Protocols:
The results are presented in Fig. 9. In this experiment we verify that for small t_p DCC B_1S_1 outperforms DCC B_1S_0. However, there is no huge difference between their performances even at t_p = 10 µs, because connection switching alone contributes relatively little in this setting. Comparing the channel switching and buffering options, we conclude that much more channel utilization is obtained by connection buffering than by channel switching alone when N/M > 1.
4) Relation Between Number of SUs and PU
Note that for all cases described in this section the simulation results agree with our analytical model.
Comparing our model with the analytical results of [15] for DCC B_1S_0, see Fig. 10(b), we observe that the prior analysis overestimated the performance, resulting in more than a 2 Mbps difference at N/M = 1.
Interestingly, if we consider the same set of parameters as in Section V-B1, then the model of [15] almost agrees with the model of our paper. Since the set of parameters chosen in Section V-B1 is similar to that of [15], we remark that the observations on the performance of this OSA MAC in [15] reflected reality. (Figure caption: parameters as in Fig. 7, except for q_p = 0.1; E_1 and E_2 denote the error models described in Section IV-C5, and E_0 denotes the system with p_e = 0.)
5) Impact of Channel Errors on the OSA Multichannel MAC Performance:
To observe the impact of channel errors on the MAC protocol throughput we have set up the following experiment. For HCC and both network sizes, small and large, we have observed the average throughput for different SU packet lengths and channel error probabilities. The results are presented in Fig. 11. For comparison, in Fig. 11 we also present the system with no errors, denoted as E_0. We kept the values of p_e realistic, not exceeding 1%.
Obviously the system with punctured errors, E_1, obtains a much higher throughput than system E_2, since more data can potentially be sent after one control packet exchange. Again, buffering allows a higher throughput to be obtained in comparison to the non-buffered case, even with data channel errors present. Note that system E_2 is more prone to errors than E_1; observe Fig. 11(a) and Fig. 11(b). We have also evaluated the protocols for PU activity distributions other than the geometric one: i) uniform (denoted symbolically as U), ii) log-normal (denoted symbolically as L), and, for comparison, iii) geometric (denoted symbolically as E), as used in the analysis. We have tested the protocol performance for different combinations of "on"
and "off" times of PU activity. These were EE, LE, EL, LL (all possible combinations of "on" and "off" times obtained in [60, Tab. 3 and Tab. 4]) and additionally EU and UU, where the first and second letters denote the selected distribution for the "on" and "off" times, respectively. Due to the complexity of the analysis we show only simulation results, using the same simulation method of batch means, with the same parameters as described at the beginning of Section V.
The parameter of each distribution was selected such that its mean value was equal to 1/p_c for the "on" time and 1 − 1/p_c for the "off" time. The uniform distribution has a non-continuous set of mean values, (a_b + a_n)/2, where a_b, a_n ∈ ℕ denote the lower and upper limits of the distribution, respectively, which precludes the existence of every mean "on" or "off" value for p_c ∈ (0, 1). To solve that problem, a continuous uniform distribution with the required mean was used and rounded up to the nearest integer. This resulted in a slightly lower last peak of the probability mass function at a_n for 1/p_c ∉ ℕ. For the log-normal distribution the parameters were chosen such that c_l = 1/p_c and v_l = (1 − p_c)/p_c^2, where c_l and v_l are the mean and variance of the resulting discretized log-normal distribution. Note that the variance of the discretized log-normal distribution used is equal to the variance of the geometric distribution for the same mean value. The variance of the resulting discretized continuous uniform distribution could not be made equal to the variance of the geometric distribution for the reasons described earlier.
The results are presented in Fig. 12. We focus on two network types, as indicated earlier: (i) large scale and (ii) small scale, with the parameters assumed as in Fig. 7. We select four values of q_p for clarity of presentation. The most important observation is that, irrespective of the considered distribution, DCC obtains roughly the same throughput, and the same relation between the different protocol options holds as was shown analytically in Fig. 7. If one wanted to select the distribution combinations with the highest throughput, these would be LE and LL, with the obtained throughput being almost equal to the one obtained via the analysis for the geometric distribution. The distributions with the lowest throughput are UU and EU, due to the difference in the second moment of the "on" time compared with the other two distributions. The difference in throughput between UU, EU and the remaining distributions is more visible for
C. Performance of Joint Spectrum Sensing and OSA MAC Protocols
Having the results for the spectrum sensing protocols and the OSA MAC, we join these two layers to form a complete OSA network stack. By means of exhaustive search we solve the optimization problem of (1).
We also investigate the set of parameters that maximizes R_t for the small and the large scale network.
We divide our analysis into the macroscopic and the microscopic case, observing R_t for a small scale network with M = 3, N = 12, d = 5 kB, and a large scale network with M = 12, N = 40, d = 20 kB. For each case we select a set of spectrum sensing and OSA MAC protocols that are feasible and, as we believe, most important to the research community. For a fixed set of parameters C = 1 Mbps, b = 1 MHz, p = e^{-1}/N, t_{d,max} = 1 ms (microscopic case), t_{d,max} = 2 s (macroscopic case), α = 1/M, t_t = 1 ms, p_{d,min} = 0.99, γ = −5 dB, q_p = 0.1, and t_p = 100 µs, we leave κ, t_e, n_g, and p_f as optimization variables.
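The exhaustive search over the free parameters can be organized as a plain grid search. In the sketch below the `throughput` callable stands in for the evaluation of R_t from the analytical model and is only a placeholder; the toy function at the end merely demonstrates the mechanics.

```python
import itertools

def optimize(throughput, kappas, quiet_times, group_sizes, false_alarms):
    """Exhaustively search (kappa, t_e, n_g, p_f) and return the best R_t."""
    best, best_cfg = -1.0, None
    for cfg in itertools.product(kappas, quiet_times, group_sizes, false_alarms):
        r_t = throughput(*cfg)            # R_t for this parameter combination
        if r_t > best:
            best, best_cfg = r_t, cfg
    return best, best_cfg

# toy placeholder model, only to show how the search is driven
toy = lambda k, t_e, n_g, p_f: (1 - p_f) / (k * t_e * n_g)
print(optimize(toy, kappas=[2, 4], quiet_times=[1e-3, 2e-3],
               group_sizes=[1, 2], false_alarms=[0.01, 0.1]))
```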
1) Microscopic Model:
Here we focus only on the DCC protocol, since collaborative spectrum sensing is only possible via a PU-free control channel, which is inefficient to accomplish with HCC. Also, for the dissemination of sensing measurements we do not consider SSMA, which would be the most difficult to implement in practice. The results are presented in Fig. 13. DCC B_1S_0 with TTDMA is the best option, both for the small scale and the large scale network, see Fig. 13(a) and Fig. 13(b), respectively. Because of the relatively high switching time, B_1S_1 performs slightly worse than B_1S_0 for both the small and the large scale network. DCC B_0S_0 with TDMA is the worst protocol combination, which confirms the earlier results from Section V-A and Section V-B. Irrespective of the network size, it is always better to buffer SU connections preempted by a PU than to look for vacant channels, compare again B_1S_0 and B_0S_1 in Fig. 13(a) and Fig. 13(b). The difference between B_0S_0 and B_0S_1 is mostly visible for the large network scenario, see Fig. 13(b), since with a large number of channels there are more possibilities to look for empty channels.
For all protocol combinations and both network sizes, κ = 2 maximizes the throughput performance, see Fig. 13(a). Interestingly, the network size dictates the size of a sensing group. For the small scale network n_g = 1 is the optimal value, see Fig. 13(a), but for the large network R_t is maximized when n_g = 3 (for B_0S_0) or n_g = 4 (for the rest). We can conclude that for a small network it is better to involve all nodes in sensing, while for larger networks it is better to divide them into groups, which agrees with the observation from Section V-A4. Moreover, we observe that the performance difference between TTDMA and TDMA is not as big as in Fig. 2 when the parameters are optimized.
The most interesting result is observed for p_f: with increasing protocol complexity, the resulting false alarm probability increases as well. Also, with an increase of p_f, the quiet time decreases. Because buffering and switching improve the performance, there is more margin in the design of the spectrum sensing.
2) Macroscopic Model: DCC obtains a higher throughput than HCC for the small scale network, and vice versa, compare Fig. 14(a) and Fig. 14(b), respectively. This confirms the observations of [15, Fig. 3], [34, Fig. 3]. Just as in Fig. 13(a), for the small scale network κ = 2 and n_g = 2 maximize R_t. For the large scale network, however, κ = 3 and n_g = 3 are optimal for TDMA, and κ = 4 and n_g = 4 for TTDMA.
This means that for large networks it is beneficial to split the network into smaller groups. Again, this confirms our findings from Section V-C1. For both network scenarios, p_f and t_e are roughly the same for all protocols considered.
Note that for the large scale network in the macroscopic model, an SU can take more time to detect a PU than in the microscopic model because the large t_{d,max} reduces the time overhead. The release of the time restriction impacts the large scale network by requiring a greater value of κ to achieve the maximum throughput.
VI. CONCLUSION
We have presented a comprehensive framework enabling assessment of the performance of joint spectrum sensing and MAC protocol operation for OSA networks. In the model we have proposed we focused on the link layer throughput as the fundamental metric to assess performance. We have parameterized spectrum sensing architectures for energy detection based systems with collaborative measurements combining. We have proposed a novel spectrum sensing MAC denoted Truncated Time Division Multiple Access. We have also categorized multichannel MAC protocols for OSA networks based on their ability to buffer and switch existing SU connections on the arrival of a PU. Our analysis is supported by simulations which prove the accuracy of the obtained expressions.
Some of the design guidelines that should be noted are as follows. For spectrum sensing, introducing TTDMA gives an improvement in the obtained performance compared to TDMA. Large networks, i.e.
those having many channels and users, benefit from clustering, while for small networks it is better to create a small number of clusters so that the sensing time is optimized. When considering MAC protocol design for OSA, it is clear that more benefit comes from introducing SU connection buffering than channel switching for those SU connections that have been preempted by a PU. Interestingly, although intuition would suggest that MAC protocols combining SU connection buffering and channel switching would outperform all other protocols, due to the switching overhead this combination is usually inferior to protocols that involve only SU connection buffering.
Our future work will be to investigate the delay experienced when using any of the proposed OSA MAC protocols. We plan to develop comprehensive simulation software which will implement features not covered by our model, like a queue per SU.
| 11,421 |
0910.1495
|
1512308376
|
The Shannon entropy is a widely used summary statistic, for example, network traffic measurement, anomaly detection, neural computations, spike trains, etc. This study focuses on estimating Shannon entropy of data streams. It is known that Shannon entropy can be approximated by Renyi entropy or Tsallis entropy, which are both functions of the p-th frequency moments and approach Shannon entropy as p->1. Compressed Counting (CC) is a new method for approximating the p-th frequency moments of data streams. Our contributions include: 1) We prove that Renyi entropy is (much) better than Tsallis entropy for approximating Shannon entropy. 2) We propose the optimal quantile estimator for CC, which considerably improves the previous estimators. 3) Our experiments demonstrate that CC is indeed highly effective in approximating the moments and entropies. We also demonstrate the crucial importance of utilizing the variance-bias trade-off.
|
Because the elements, @math , are time-varying, a naive counting mechanism requires a system of @math counters to compute @math exactly (unless @math ). This is not always realistic. Estimating @math in data streams is heavily studied @cite_27 @cite_26 @cite_25 @cite_4 @cite_9 . We have mentioned that computing @math in the strict-Turnstile model is trivial using a simple counter. One might naturally speculate that when @math , computing (approximating) @math should also be easy. However, before Compressed Counting (CC), none of the prior algorithms could capture this intuition.
|
{
"abstract": [
"We give a space-efficient, one-pass algorithm for approximating the L sup 1 difference spl Sigma sub i |a sub i -b sub i | between two functions, when the function values a sub i and b sub i are given as data streams, and their order is chosen by an adversary. Our main technical innovation is a method of constructing families V sub j of limited independence random variables that are range summable by which we mean that spl Sigma sub j=0 sup c-1 V sub j (s) is computable in time polylog(c), for all seeds s. These random variable families may be of interest outside our current application domain, i.e., massive data streams generated by communication networks. Our L sup 1 -difference algorithm can be viewed as a \"sketching\" algorithm, in the sense of (A. , 1998), and our algorithm performs better than that of , when used to approximate the symmetric difference of two sets with small symmetric difference.",
"In this article, we show several results obtained by combining the use of stable distributions with pseudorandom generators for bounded space. In particular:---We show that, for any p ∈ (0, 2], one can maintain (using only O(log n e2) words of storage) a sketch C(q) of a point q ∈ lnp under dynamic updates of its coordinates. The sketch has the property that, given C(q) and C(s), one can estimate Vq − sVp up to a factor of (1 p e) with large probability. This solves the main open problem of [1999].---We show that the aforementioned sketching approach directly translates into an approximate algorithm that, for a fixed linear mapping A, and given x ∈ ℜn and y ∈ ℜm, estimates VAx − yVp in O(n p m) time, for any p ∈ (0, 2]. This generalizes an earlier algorithm of Wasserman and Blum [1997] which worked for the case p e 2.---We obtain another sketch function C′ which probabilistically embeds ln1 into a normed space lm1. The embedding guarantees that, if we set m e log(1 Δ)O(1 e), then for any pair of points q, s ∈ ln1, the distance between q and s does not increase by more than (1 p e) with constant probability, and it does not decrease by more than (1 − e) with probability 1 − Δ. This is the only known dimensionality reduction theorem for the l1 norm. In fact, stronger theorems of this type (i.e., that guarantee very low probability of expansion as well as of contraction) cannot exist [Brinkman and Charikar 2003].---We give an explicit embedding of ln2 into lnO(log n)1 with distortion (1 p 1 nΘ(1)).",
"The method of stable random projections is popular in data stream computations, data mining, information retrieval, and machine learning, for efficiently computing the lα (0 We propose algorithms based on (1) the geometric mean estimator, for all 0 • The general sample complexity bound for α ≠ 1,2. For α = 1, [27] provided a nice argument based on the inverse of Cauchy density about the median, leading to a sample complexity bound, although they did not provide the constants and their proof restricted e to be \"small enough.\" For general α ≠ 1, 2, however, the task becomes much more difficult. [27] provided the \"conceptual promise\" that the sample complexity bound similar to that for α = 1 should exist for general α, if a \"non-uniform algorithm based on t-quantile\" could be implemented. Such a conceptual algorithm was only for supporting the arguments in [27], not a real implementation. We consider this is one of the main problems left open in [27]. In this study, we propose a practical algorithm based on the geometric mean estimator and derive the sample complexity bound for all 0 • The practical and optimal algorithm for α = 0+ The l0 norm is an important case. Stable random projections can provide an approximation to the l0 norm using α → 0+. We provide an algorithm based on the harmonic mean estimator, which is simple and statistically optimal. Its tail bounds are sharper than the bounds derived based on the geometric mean. We also discover a (possibly surprising) fact: in boolean data, stable random projections using α = 0+ with the harmonic mean estimator will be about twice as accurate as (l2) normal random projections. Because high-dimensional boolean data are common, we expect this fact will be practically quite useful. • The precise theoretical analysis and practical implications We provide the precise constants in the tail bounds for both the geometric mean and harmonic mean estimators. We also provide the variances (either exact or asymptotic) for the proposed estimators. These results can assist practitioners to choose sample sizes accurately.",
"The frequency moments of a sequence containing mi elements of type i, for 1 i n, are the numbers Fk = P n=1 m k . We consider the space complexity of randomized algorithms that approximate the numbers Fk, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it turns out that the numbers F0;F1 and F2 can be approximated in logarithmic space, whereas the approximation of Fk for k 6 requires n (1) space. Applications to data bases are mentioned as well.",
"Space-economical estimation of the pth frequency moments, defined as , for p> 0, are of interest in estimating all-pairs distances in a large data matrix [14], machine learning, and in data stream computation. Random sketches formed by the inner product of the frequency vector f 1 , ..., f n with a suitably chosen random vector were pioneered by Alon, Matias and Szegedy [1], and have since played a central role in estimating F p and for data stream computations in general. The concept of p-stable sketches formed by the inner product of the frequency vector with a random vector whose components are drawn from a p-stable distribution, was proposed by Indyk for estimating F p , for 0 < p< 2, and has been further studied in Li [13]. In this paper, we consider the problem of estimating F p , for 0 p , for 0 sketch [7] and the structure [5]. Our algorithms require space @math to estimate F p to within 1 ±i¾?factors and requires expected time @math to process each update. Thus, our technique trades an @math factor in space for much more efficient processing of stream updates. We also present a stand-alone iterative estimator for F 1 ."
],
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_9",
"@cite_27",
"@cite_25"
],
"mid": [
"2156943642",
"2045533739",
"2014689751",
"2064379477",
"1608038806"
]
}
|
A Very Efficient Scheme for Estimating Entropy of Data Streams Using Compressed Counting
| 0 |
|
0910.1495
|
1512308376
|
The Shannon entropy is a widely used summary statistic, for example, network traffic measurement, anomaly detection, neural computations, spike trains, etc. This study focuses on estimating Shannon entropy of data streams. It is known that Shannon entropy can be approximated by Renyi entropy or Tsallis entropy, which are both functions of the p-th frequency moments and approach Shannon entropy as p->1. Compressed Counting (CC) is a new method for approximating the p-th frequency moments of data streams. Our contributions include: 1) We prove that Renyi entropy is (much) better than Tsallis entropy for approximating Shannon entropy. 2) We propose the optimal quantile estimator for CC, which considerably improves the previous estimators. 3) Our experiments demonstrate that CC is indeed highly effective in approximating the moments and entropies. We also demonstrate the crucial importance of utilizing the variance-bias trade-off.
|
CC improves symmetric stable random projections @cite_4 @cite_9 uniformly for all @math as shown in Figure in Section . However, one can still considerably improve CC around @math , by developing better estimators, as in this study. In addition, no empirical studies on CC were reported.
|
{
"abstract": [
"The method of stable random projections is popular in data stream computations, data mining, information retrieval, and machine learning, for efficiently computing the lα (0 We propose algorithms based on (1) the geometric mean estimator, for all 0 • The general sample complexity bound for α ≠ 1,2. For α = 1, [27] provided a nice argument based on the inverse of Cauchy density about the median, leading to a sample complexity bound, although they did not provide the constants and their proof restricted e to be \"small enough.\" For general α ≠ 1, 2, however, the task becomes much more difficult. [27] provided the \"conceptual promise\" that the sample complexity bound similar to that for α = 1 should exist for general α, if a \"non-uniform algorithm based on t-quantile\" could be implemented. Such a conceptual algorithm was only for supporting the arguments in [27], not a real implementation. We consider this is one of the main problems left open in [27]. In this study, we propose a practical algorithm based on the geometric mean estimator and derive the sample complexity bound for all 0 • The practical and optimal algorithm for α = 0+ The l0 norm is an important case. Stable random projections can provide an approximation to the l0 norm using α → 0+. We provide an algorithm based on the harmonic mean estimator, which is simple and statistically optimal. Its tail bounds are sharper than the bounds derived based on the geometric mean. We also discover a (possibly surprising) fact: in boolean data, stable random projections using α = 0+ with the harmonic mean estimator will be about twice as accurate as (l2) normal random projections. Because high-dimensional boolean data are common, we expect this fact will be practically quite useful. • The precise theoretical analysis and practical implications We provide the precise constants in the tail bounds for both the geometric mean and harmonic mean estimators. We also provide the variances (either exact or asymptotic) for the proposed estimators. These results can assist practitioners to choose sample sizes accurately.",
"In this article, we show several results obtained by combining the use of stable distributions with pseudorandom generators for bounded space. In particular:---We show that, for any p ∈ (0, 2], one can maintain (using only O(log n e2) words of storage) a sketch C(q) of a point q ∈ lnp under dynamic updates of its coordinates. The sketch has the property that, given C(q) and C(s), one can estimate Vq − sVp up to a factor of (1 p e) with large probability. This solves the main open problem of [1999].---We show that the aforementioned sketching approach directly translates into an approximate algorithm that, for a fixed linear mapping A, and given x ∈ ℜn and y ∈ ℜm, estimates VAx − yVp in O(n p m) time, for any p ∈ (0, 2]. This generalizes an earlier algorithm of Wasserman and Blum [1997] which worked for the case p e 2.---We obtain another sketch function C′ which probabilistically embeds ln1 into a normed space lm1. The embedding guarantees that, if we set m e log(1 Δ)O(1 e), then for any pair of points q, s ∈ ln1, the distance between q and s does not increase by more than (1 p e) with constant probability, and it does not decrease by more than (1 − e) with probability 1 − Δ. This is the only known dimensionality reduction theorem for the l1 norm. In fact, stronger theorems of this type (i.e., that guarantee very low probability of expansion as well as of contraction) cannot exist [Brinkman and Charikar 2003].---We give an explicit embedding of ln2 into lnO(log n)1 with distortion (1 p 1 nΘ(1))."
],
"cite_N": [
"@cite_9",
"@cite_4"
],
"mid": [
"2014689751",
"2045533739"
]
}
|
A Very Efficient Scheme for Estimating Entropy of Data Streams Using Compressed Counting
| 0 |
|
0910.1495
|
1512308376
|
The Shannon entropy is a widely used summary statistic, for example, network traffic measurement, anomaly detection, neural computations, spike trains, etc. This study focuses on estimating Shannon entropy of data streams. It is known that Shannon entropy can be approximated by Renyi entropy or Tsallis entropy, which are both functions of the p-th frequency moments and approach Shannon entropy as p->1. Compressed Counting (CC) is a new method for approximating the p-th frequency moments of data streams. Our contributions include: 1) We prove that Renyi entropy is (much) better than Tsallis entropy for approximating Shannon entropy. 2) We propose the optimal quantile estimator for CC, which considerably improves the previous estimators. 3) Our experiments demonstrate that CC is indeed highly effective in approximating the moments and entropies. We also demonstrate the crucial importance of utilizing the variance-bias trade-off.
|
@cite_30 applied symmetric stable random projections to approximate the moments and Shannon entropy. The nice theoretical work @cite_8 @cite_5 provided the criterion to choose the @math so that Shannon entropy can be approximated with a guaranteed accuracy, using the @math th frequency moment.
|
{
"abstract": [
"Entropy has recently gained considerable significance as an important metric for network measurement. Previous research has shown its utility in clustering traffic and detecting traffic anomalies. While measuring the entropy of the traffic observed at a single point has already been studied, an interesting open problem is to measure the entropy of the traffic between every origin-destination pair. In this paper, we propose the first solution to this challenging problem. Our sketch builds upon and extends the Lp sketch of Indyk with significant additional innovations. We present calculations showing that our data streaming algorithm is feasible for high link speeds using commodity CPU memory at a reasonable cost. Our algorithm is shown to be very accurate in practice via simulations, using traffic traces collected at a tier-1 ISP backbone link.",
"We give near-optimal sketching and streaming algorithms for estimating Shannon entropy in the most general streaming model, with arbitrary insertions and deletions. This improves on prior results that obtain suboptimal space bounds in the general model, and near-optimal bounds in the insertion-only model without sketching. Our high-level approach is simple: we give algorithms to estimate Tsallis entropy, and use them to extrapolate an estimate of Shannon entropy. The accuracy of our estimates is proven using approximation theory arguments and extremal properties of Chebyshev polynomials. Our work also yields the best-known and near-optimal additive approximations for entropy, and hence also for conditional entropy and mutual information.",
"We give a method for estimating the empirical Shannon entropy of a distribution in the streaming model of computation. Our approach reduces this problem to the well-studied problem of estimating frequency moments. The analysis of our approach is based on new results which establish quantitative bounds on the rate of convergence of Renyi entropy towards Shannon entropy."
],
"cite_N": [
"@cite_30",
"@cite_5",
"@cite_8"
],
"mid": [
"2099220942",
"2129978406",
"2116541203"
]
}
|
A Very Efficient Scheme for Estimating Entropy of Data Streams Using Compressed Counting
| 0 |
|
0910.1938
|
2952531516
|
In addition to the frequency of terms in a document collection, the distribution of terms plays an important role in determining the relevance of documents. In this paper, a new approach for representing term positions in documents is presented. The approach allows an efficient evaluation of term-positional information at query evaluation time. Three applications are investigated: a function-based ranking optimization representing a user-defined document region, a query expansion technique based on overlapping the term distributions in the top-ranked documents, and cluster analysis of terms in documents. Experimental results demonstrate the effectiveness of the proposed approach.
|
An early approach to apply term-positional data in IR is the work of Attar and Fraenkel @cite_3 . The authors propose different models to generate clusters of terms related to a query (searchonyms) and use these clusters in a local feedback process. In their experiments they confirm that metrical methods based on functions of the distance between terms are superior to methods based merely on weighted co-occurrences of terms. There are several other approaches that use metrical information @cite_7 @cite_4 .
|
{
"abstract": [
"In most existing retrieval models, documents are scored primarily based on various kinds of term statistics such as within-document frequencies, inverse document frequencies, and document lengths. Intuitively, the proximity of matched query terms in a document can also be exploited to promote scores of documents in which the matched query terms are close to each other. Such a proximity heuristic, however, has been largely under-explored in the literature; it is unclear how we can model proximity and incorporate a proximity measure into an existing retrieval model. In this paper,we systematically explore the query term proximity heuristic. Specifically, we propose and study the effectiveness of five different proximity measures, each modeling proximity from a different perspective. We then design two heuristic constraints and use them to guide us in incorporating the proposed proximity measures into an existing retrieval model. Experiments on five standard TREC test collections show that one of the proposed proximity measures is indeed highly correlated with document relevance, and by incorporating it into the KL-divergence language model and the Okapi BM25 model, we can significantly improve retrieval performance.",
"Based on the idea that the closer the query terms in a document are, the more relevant this document is, we propose a mathematical model of information retrieval based on a fuzzy proximity degree of term occurences. Our model is able to deal with Boolean queries, but contrary to the traditional extensions of the basic Boolean information retrieval model, it does not explicitly use a proximity operator. A single parameter allows to control the proximity degree required. With conjunctive queries, setting this parameter to low values requires a proximity at the phrase level and with high values, the required proximity can continuously be relaxed to the sentence or paragraph levels. We conducted some experiments and present the results.",
""
],
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_3"
],
"mid": [
"2162432120",
"2076984834",
"1992113527"
]
}
|
Information Retrieval via Truncated Hilbert-Space Expansions
|
The information retrieval (IR) process has two main stages. The first stage is the indexing stage in which the documents of a collection are processed to generate a database (index) containing the information about the terms of all documents in the collection. The index generally stores only term frequency information, but in some cases positional information of terms is also included, substantially increasing the memory requirements of the system.
In the second stage of the IR process (query evaluation), the user sends a query to the system, and the system responds with a ranked list of relevant documents. The implemented retrieval model determines how the relevant documents are calculated. Standard IR models (e.g. TFIDF, BM25) use the frequency of terms as the main document relevance criterion, providing adequate ranking quality and query processing times.
Other approaches, such as proximity queries or passage retrieval, complement the document relevance evaluation using term positional information. This additional process, normally performed at query time, generally improves the quality of the results but also slows down the response time of the system. Since the response time is a critical issue for the acceptance of an IR system by its users, the use of time-consuming algorithms to evaluate term-positional information at query time is generally inappropriate.
The IR model proposed in this paper shifts the complexity of processing the positional data to the indexing phase, using an abstract representation of the term positions and implementing a simple mathematical tool to operate with this compressed representation at query evaluation time. Thus, although query processing remains simple, the use of term-positional information provides new ways to optimize the IR process. Three applications are investigated: a function-based ranking optimization representing a user-defined document region, a query-expansion technique based on overlapping the term distributions in the top-ranked documents, and cluster analysis of terms in documents. Experimental results demonstrate the effectiveness of the proposed approach for optimizing the retrieval process.
The paper is organized as follows. Section 2 discusses related work. Section 3 presents the proposed approach for representing term positions based on truncated Hilbert space expansions. In Section 4, applications of the approach are described. Section 5 concludes the paper and outlines areas for future work.
Analyzing Term Positions
In this section, a general mathematical model to analyze term positions in documents is presented, making it possible to effectively use the term-positional information at query evaluation time.
Consider a document D of length L and a term t that appears in D. The distribution of the term t within the document is given by the set P_t that contains all positions of t, where the terms are enumerated starting with 1 for the first term and so on. For example, the set P_t = {2, 6} represents a term that is located at the second and sixth position of the document body. A characteristic function
f^{(t)}(x) = 1 if x ∈ [p − 1, p] for some p ∈ P_t, and 0 otherwise,  (1)
defined for x ∈ [0, L], is assigned to P_t. The proposed method consists of approximating this characteristic function by an expansion in terms of certain sets of functions. In order to do so, some concepts of functional analysis are introduced. Details can be found in the book of Yosida [9].
Expansions in Hilbert Spaces
A Hilbert space H is a (possibly infinite-dimensional) vector space that is equipped with a scalar product ⟨·, ·⟩, i.e. two elements f, g ∈ H are mapped to a real or complex number ⟨f, g⟩. We only consider real scalar products here.
An example of a Hilbert space is the space L²([0, L]), defined as the set of all functions f that are square-integrable in the interval [0, L], i.e. functions for which ∫_0^L (f(x))² dx < ∞. In this vector space, the addition of two functions f and g, and the multiplication of a function f by a scalar α ∈ ℝ, are defined point-wise:
(f + g)(x) = f(x) + g(x) ,   (αf)(x) = α f(x) .
The scalar product in L²([0, L]) is defined by
⟨f, g⟩ = ∫_0^L f(x) g(x) dx .  (2)
Two vectors with vanishing scalar product are called orthogonal.
The scalar product induces a norm (an abstract measure of length)
‖f‖ = √⟨f, f⟩ ≥ 0 .  (3)
A function f ∈ H is said to have the expansion
f = Σ_{k=0}^{∞} γ_k ϕ_k ,  (4)
where the γ_k are real numbers, if the sequence f_n = Σ_{k=0}^{n} γ_k ϕ_k of finite sums converges to f. This kind of convergence is called norm convergence.
Of particular importance are so-called complete, orthonormal sets {ϕ_0, ϕ_1, . . .} of functions in H. They have the following properties: (a) The ϕ_i are mutually orthogonal and normalized to unity:
⟨ϕ_n, ϕ_m⟩ = δ_{nm} = 1 for n = m, and 0 for n ≠ m .  (5)
(b) The ϕ_i are complete, which means that every vector of the Hilbert space can be expanded into a convergent sum of them. Important properties of expansions in terms of complete orthonormal sets are: (a) The expansion coefficients γ_k are given by
γ_k = ⟨ϕ_k, f⟩ .  (6)
(b) They fulfill
Σ_{k=0}^{n} γ_k² ≤ ‖f‖² for all n, and Σ_{k=0}^{∞} γ_k² = ‖f‖²  (7)
(Bessel's inequality and Parseval's equation).
Given two expansions f = Σ_{k=0}^{∞} γ_k ϕ_k and g = Σ_{k=0}^{∞} γ'_k ϕ_k, the scalar product can be expressed as
⟨f, g⟩ = Σ_{k=0}^{∞} γ_k γ'_k .  (8)
If the expansion coefficients are combined into coefficient vectors c = (γ_0, γ_1, . . .) and c' = (γ'_0, γ'_1, . . .), the preceding equation takes the form ⟨f, g⟩ = c · c'. The Fourier expansions considered by Galeas et al. [4] are an example of such an expansion. The functions
ϕ^{Fo}_0(x) = 1/√L ,   ϕ^{Fo}_{2k−1}(x) = √(2/L) sin(2πkx/L) ,   ϕ^{Fo}_{2k}(x) = √(2/L) cos(2πkx/L)  (9)
(k > 0) form a complete orthonormal set in L²([0, L]), leading to an expansion
f(x) = a_0/√L + √(2/L) Σ_{k=1}^{∞} [ a_k cos(2πkx/L) + b_k sin(2πkx/L) ] ,  (10)
where a_0 = γ_0 and a_k = γ_{2k}, b_k = γ_{2k−1} for k > 0. Another complete set of orthonormal functions of L²([0, L]) is given by
ϕ^{Le}_k(x) = √((2k+1)/L) P*_k(x/L) ,  k ≥ 0 ,  (11)
where the P*_k(x) are so-called shifted Legendre polynomials [1]. These polynomials are of order k. The first few of them are P*_0(x) = 1, P*_1(x) = 2x − 1, P*_2(x) = 6x² − 6x + 1, P*_3(x) = 20x³ − 30x² + 12x − 1. Fig. 1 (left) shows ϕ^{Le}_k(x) for 0 ≤ k ≤ 4 in the range x ∈ [0, L] for L = 1.
Another example that will be used later is a complete set for the space L²(R_+) (the space of square-integrable functions for 0 ≤ x < ∞):
ϕ^{La}_k(x) = (e^{−x/(2λ)}/√λ) L_k(x/λ) ,  k ≥ 0 .  (12)
Here, λ is a positive scale parameter and the L_k(x) are Laguerre polynomials [1], the first few of which are L_0(x) = 1, L_1(x) = −x + 1, L_2(x) = x²/2 − 2x + 1, L_3(x) = −x³/6 + 3x²/2 − 3x + 1, see Fig. 1 (right).
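As an illustration, the shifted Legendre basis (11) can be built with NumPy and its orthonormality (5) verified numerically. This is only a sketch for checking the definitions; the helper name is ours.

```python
import numpy as np
from numpy.polynomial import legendre

def phi_legendre(k, x, L):
    """Orthonormal shifted Legendre function phi^Le_k on [0, L], eq. (11)."""
    u = 2.0 * x / L - 1.0                     # map [0, L] to [-1, 1]
    Pk = legendre.legval(u, [0] * k + [1])    # Legendre polynomial P_k(u)
    return np.sqrt((2 * k + 1) / L) * Pk

L, n = 1.0, 4
x = np.linspace(0.0, L, 20001)
G = np.empty((n + 1, n + 1))
for a in range(n + 1):
    for b in range(n + 1):
        G[a, b] = np.trapz(phi_legendre(a, x, L) * phi_legendre(b, x, L), x)
print(np.round(G, 3))   # approximately the identity matrix, cf. eq. (5)
```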
Truncated Expansions of Term Distributions
As explained above, the finite sums f_n = Σ_{k=0}^{n} γ_k ϕ_k converge to the function f in the sense of norm convergence. As a consequence of Bessel's inequality (7) they approximate f increasingly better for increasing n. An essential ingredient for the following discussion is to consider a truncated expansion, i.e. the mapping
P_n : f^{(t)} → f^{(t)}_n ,  (13)
which associates to a term distribution f^{(t)} of the form (1) its finite-order approximation f^{(t)}_n in terms of some complete orthonormal set for some order n. Figure 2 shows an example for the Fourier expansion. One can observe the characteristic broadening effect generated by the reduction of the expansion order (truncation).
The L² scalar product of two truncated term distributions f_n and g_n,
⟨f_n, g_n⟩ = ∫ f_n(x) g_n(x) dx ,  (14)
has the meaning of an overlap integral: the integrand is only large in regions in which both functions f_n(x) and g_n(x) are large, so that ⟨f_n, g_n⟩ measures how well both functions overlap in the whole integration range. Given f_n and g_n, two truncated term distributions describing the term positions and their neighborhood in a certain document, we introduce the concept of a semantic interaction range: two terms that are close to each other present a stronger interaction because their truncated distributions have a considerable overlap. This semantic interaction range motivates the following definition of the similarity of two term distributions f and g: for some fixed order n, one sets
sim(f, g) = ⟨f_n, g_n⟩ = ⟨P_n f, P_n g⟩ .  (15)
In this definition, the truncation P_n : f → f_n is essential, because the original term distributions f and g are always orthogonal if they describe two different terms. This is so because different terms are always at different positions within a document, so that their overlap always vanishes. Definition (15) is only one possibility. In fact, any definition based on the scalar product ⟨f_n, g_n⟩ can be utilized. For example, in Galeas et al. [4] a cosine definition cos ϑ = ⟨f_n, g_n⟩/(‖f_n‖ ‖g_n‖) has been used. Another choice is the norm difference
‖f_n − g_n‖ = [ ∫ (f_n(x) − g_n(x))² dx ]^{1/2} = [ ‖f_n‖² + ‖g_n‖² − 2⟨f_n, g_n⟩ ]^{1/2} .  (16)
Using different measures based on ⟨f_n, g_n⟩, we have found no significant differences in the final retrieval results in several experiments. The scalar product of the truncated distributions can easily be calculated using the coefficient vectors: if the original distributions f and g have the infinite-dimensional coefficient vectors c = (γ_0, γ_1, . . .) and c' = (γ'_0, γ'_1, . . .), respectively, then the truncated distributions f_n and g_n have the (n + 1)-dimensional coefficient vectors c_n = (γ_0, γ_1, . . . , γ_n) and c'_n = (γ'_0, γ'_1, . . . , γ'_n), resp., and their scalar product is the finite sum
⟨f_n, g_n⟩ = c_n · c'_n = Σ_{k=0}^{n} γ_k γ'_k .  (17)
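A sketch of the similarity computation (15)/(17): coefficient vectors are computed once (here by numerical integration in the Legendre basis, for brevity, whereas the paper computes them analytically) and compared by a plain dot product. The helper names are ours.

```python
import numpy as np
from numpy.polynomial import legendre

def coeff_vector(positions, L, n, samples=4001):
    """Truncated coefficient vector c_n of the characteristic function (1)
    in the shifted Legendre basis (11), via eq. (6)."""
    x = np.linspace(0.0, L, samples)
    f = np.zeros_like(x)
    for p in positions:                     # the term occupies [p-1, p]
        f[(x >= p - 1) & (x <= p)] = 1.0
    u = 2.0 * x / L - 1.0
    c = np.empty(n + 1)
    for k in range(n + 1):
        phi_k = np.sqrt((2 * k + 1) / L) * legendre.legval(u, [0] * k + [1])
        c[k] = np.trapz(phi_k * f, x)       # gamma_k = <phi_k, f>
    return c

L, n = 100, 6
c_query = coeff_vector([10, 11, 55], L, n)
c_term  = coeff_vector([12, 54],     L, n)
print(float(np.dot(c_query, c_term)))       # sim(f, g) = <f_n, g_n>, eq. (17)
```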
The Semantic Interaction Range
In this section, a precise definition of the semantic interaction range is given. In abstract terms, the truncation P_n : f → f_n is a filtering or a projection: in the expansion f(x) = Σ_{k=0}^{∞} γ_k ϕ_k(x) the components ϕ_k for k > n are filtered out, which amounts to a projection of f onto the components ϕ_0, . . . , ϕ_n. Thus, P_n is a projection operator in the Hilbert space. To derive a closed expression for the operator P_n, one combines (P_n f)(x) = f_n(x) = Σ_{k=0}^{n} γ_k ϕ_k(x) with (6) to obtain
(P_n f)(x) = Σ_{k=0}^{n} [ ∫ ϕ_k(y) f(y) dy ] ϕ_k(x) = ∫ [ Σ_{k=0}^{n} ϕ_k(y) ϕ_k(x) ] f(y) dy .  (18)
One can write the last expression as ∫ p_n(y, x) f(y) dy with the projection kernel
p_n(y, x) = Σ_{k=0}^{n} ϕ_k(y) ϕ_k(x)  (19)
as an integral representation of P_n in the sense of a convolution. It has the advantage that one can study the properties of the truncation independently of the function f. The width of p_n(y, x) as a function of x is a lower bound for the width of a truncated expansion of a term located at y. Therefore, this width will be used as the semantic interaction range for a term at position y.
For the Fourier expansion, p_{2k} is given by
p^{Fo}_{2k}(y, x) = [ cos(4πk(y − x)/L) − cos(2π(2k + 1)(y − x)/L) ] / [ L (1 − cos(2π(y − x)/L)) ] .  (20)
(We consider only even orders n = 2k, because for these orders the expansion consists of an equal number of sine and cosine terms, see (9).) The maximum of p^{Fo}_{2k}(y, x) is at x = y and the two zeros closest to the maximum are at x = y ± L/(2n + 1). Thus, the semantic interaction range for a Fourier expansion of order n may be defined to be proportional to L/(2n + 1). For the Legendre and Laguerre expansions, the projection kernels can be written in closed form as
p^{i}_n(y, x) = α^{i}_n [ ϕ^{i}_{n+1}(y) ϕ^{i}_n(x) − ϕ^{i}_n(y) ϕ^{i}_{n+1}(x) ] / (y − x) ,  (22)
i = Le, La, with α^{Le}_n = (L/2)(n + 1)/(2n + 1) and α^{La}_n = −λ(n + 1). These kernels are no longer functions of y − x alone, meaning that the broadening of a term distribution depends on the position y of the term within the document. Fig. 3 (right) shows the projection kernel p^{La}_6(y, x) for y = 20 and y = 100. One can see that the spatial resolution of the truncated expansion decreases for terms that are far away from the beginning of the document.
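The projection kernel (19) can be inspected directly by summing the basis functions; the short sketch below evaluates p_n(y, x) for the Legendre basis as a numerical check of the definition, not as an efficient implementation.

```python
import numpy as np
from numpy.polynomial import legendre

def kernel(y, x, n, L):
    """Projection kernel p_n(y, x) = sum_{k<=n} phi_k(y) phi_k(x), eq. (19)."""
    total = np.zeros_like(x, dtype=float)
    for k in range(n + 1):
        ck = [0] * k + [1]
        phi_y = np.sqrt((2 * k + 1) / L) * legendre.legval(2 * y / L - 1, ck)
        phi_x = np.sqrt((2 * k + 1) / L) * legendre.legval(2 * x / L - 1, ck)
        total += phi_y * phi_x
    return total

L, n = 100.0, 6
x = np.linspace(0.0, L, 11)
print(np.round(kernel(20.0, x, n, L), 4))   # values are largest near x = 20
print(np.round(kernel(90.0, x, n, L), 4))   # values are largest near x = 90
```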
The goal of our approach is to shift the complexity of processing the positional data from the query evaluation phase to the (not time critical) indexing phase, reducing the ranking optimization via term positions to a simple mathematical operation.
Hence, we propose to calculate the expansion coefficients γ k of the term distributions in the indexing phase and to store this abstract term positional information in the index. This permits a considerably faster query evaluation, compared with methods that use the raw term-positional information.
Thus, the index contains an (n + 1)-dimensional coefficient vector c_n = (γ_0, γ_1, . . . , γ_n) for each term and each document in the collection. The γ_k are calculated analytically via (6). To give an example of the complexity involved,
γ_k = Σ_{p ∈ P_t} Σ_{j=0}^{k} α_j [ (p/L)^{j+1} − ((p − 1)/L)^{j+1} ] ,  (23)
with α_j = √((2k + 1) L) a_j/(j + 1), is the resulting expression for the Legendre expansion (11), where the a_j denote the coefficients of the shifted Legendre polynomial P*_k.
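The analytic indexing step can be sketched as follows. Instead of expanding the polynomial coefficients a_j explicitly, the sketch integrates the basis polynomial exactly over each occupied interval via its antiderivative, which is equivalent to the closed form (23); the result is cross-checked against numerical integration. Names are ours.

```python
import numpy as np

def gamma_analytic(positions, L, k):
    """gamma_k for the Legendre expansion of eq. (1): exact integral of
    phi^Le_k over every interval [p-1, p] (equivalent to eq. (23))."""
    Pk = np.polynomial.Legendre([0] * k + [1], domain=[0, L])  # P*_k(x/L)
    antider = Pk.integ()                                       # exact antiderivative
    scale = np.sqrt((2 * k + 1) / L)
    return scale * sum(antider(p) - antider(p - 1) for p in positions)

def gamma_numeric(positions, L, k, samples=200001):
    x = np.linspace(0.0, L, samples)
    f = np.zeros_like(x)
    for p in positions:
        f[(x >= p - 1) & (x <= p)] = 1.0
    Pk = np.polynomial.Legendre([0] * k + [1], domain=[0, L])
    phi = np.sqrt((2 * k + 1) / L) * Pk(x)
    return np.trapz(phi * f, x)

positions, L, k = [10, 11, 55], 100, 3
print(gamma_analytic(positions, L, k), gamma_numeric(positions, L, k))  # nearly equal
```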
Ranking Optimization
The first scenario states document ranking as an optimization problem that is based on the query term distribution function f_{q,d} and a user-defined objective function f_o representing the optimal query term distribution in the document body:
Maximize { sim(f_{q,d}, f_o) }   ∀ f_{q,d} ∈ A ,  (24)
where A represents the query term distributions in a document set, f_{q,d} is the query term distribution function for query q in document d, and f_o is a user-defined objective function representing the optimal query term distributions for the documents in the document ranking. Experiments based on the TREC-8 collection and the software Terrier [5], carried out to order n = 6, show the accuracy of the term distributions in a ranking based on user-defined objective functions. As depicted in Figure 4, the Fourier and Legendre models present a high accuracy for the distribution of query terms in the top-20 ranked documents, based on two different objective functions: the first function (denoted f_o = 1|3) selects terms located in the first third of the document, and the second (f_o = 3|3) selects terms located in the last third of the document [4].
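A toy sketch of the ranking step (24): two documents are scored by the overlap of their query-term distribution with an objective function that emphasizes the first third of the document. The helper names, the document lengths, and the term positions are illustrative assumptions, not the Terrier-based setup used in the experiments.

```python
import numpy as np
from numpy.polynomial import legendre

def coeffs(indicator, x, L, n):
    """Truncated Legendre coefficient vector of an indicator function."""
    u = 2.0 * x / L - 1.0
    return np.array([np.trapz(np.sqrt((2*k+1)/L) * legendre.legval(u, [0]*k+[1]) * indicator, x)
                     for k in range(n + 1)])

L, n = 120, 6
x = np.linspace(0.0, L, 6001)

f_o = (x <= L / 3.0).astype(float)          # objective f_o = 1|3: first third
c_o = coeffs(f_o, x, L, n)

def term_dist(positions):
    f = np.zeros_like(x)
    for p in positions:
        f[(x >= p - 1) & (x <= p)] = 1.0
    return f

c_a = coeffs(term_dist([5, 9, 20]), x, L, n)       # query terms early in doc A
c_b = coeffs(term_dist([100, 110, 118]), x, L, n)  # query terms late in doc B
print("score A:", float(np.dot(c_a, c_o)), " score B:", float(np.dot(c_b, c_o)))
# document A, whose query terms lie in the first third, scores higher
```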
Query Expansion
The second scenario considers the top-r documents D = {d_1, d_2, . . . , d_r} of an initial ranking process and the functions f_{q,d} with d ∈ D. The set of terms T_q whose elements t maximize the expression sim(f_{q,d}, f_{t,d}) is computed. It contains, for all documents in D, the terms that have a distribution similar to the query's, i.e. terms positioned near the query terms in the top-ranked documents. This set T_q is used to expand q. As depicted in Figure 5, experiments executed on the TREC-8 collection demonstrate that query expansion based on the proposed orthogonal functions (Fourier and Laguerre) outperforms state-of-the-art query expansion models, such as Rocchio and Kullback-Leibler [5]. The term position models (left) differ from the other models (right) in that the former tend to increase the retrieval performance with an increasing number of expansion documents and expansion terms, while for the other models the performance drops beyond roughly the 15th expansion document. Figure 6 (left) shows a fixed query expansion configuration in which the other models show their best performance. Nevertheless, the term distribution models perform better. Any increase in the number of expansion documents or expansion terms makes the superiority of the term distribution models even clearer.
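A sketch of the positional query-expansion step: for each top-ranked document, candidate terms are scored by the overlap of their truncated distribution with the query's, and the best-scoring terms are added to the query. The data layout and the simple summation of scores across documents are our assumptions.

```python
import numpy as np

def expansion_terms(query_coeffs, doc_term_coeffs, top_docs, m_terms=10):
    """query_coeffs[d]    : coefficient vector of the query terms in document d
       doc_term_coeffs[d] : {term: coefficient vector} for document d
       Returns the m_terms terms with the largest summed overlap sim(f_{q,d}, f_{t,d})."""
    scores = {}
    for d in top_docs:
        cq = query_coeffs[d]
        for term, ct in doc_term_coeffs[d].items():
            scores[term] = scores.get(term, 0.0) + float(np.dot(cq, ct))
    return sorted(scores, key=scores.get, reverse=True)[:m_terms]

# toy example with 3-dimensional coefficient vectors (order n = 2)
query_coeffs = {"d1": np.array([0.5, 0.2, -0.1])}
doc_term_coeffs = {"d1": {"ranking": np.array([0.4, 0.25, -0.05]),
                          "banana":  np.array([-0.3, 0.0, 0.6])}}
print(expansion_terms(query_coeffs, doc_term_coeffs, top_docs=["d1"], m_terms=1))
# ['ranking']  -- the term whose distribution overlaps the query's the most
```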
Cluster Analysis of Terms in Documents
Given a document, one may ask whether there are groups (clusters) of terms whose elements all have similar distributions. One may then infer that all terms inside a cluster describe related concepts [2]. In this section, some properties of the proposed method will be explained that may be useful for the analysis of term clusters.
Consider a document of length L. Since at every position within the document a particular term may either be present or not, there are in total N = 2 L possible term distributions. Each of these distributions is mapped to a point in an (n + 1)-dimensional Hilbert space. If the norm difference (16) is used as the similarity criterion, then clusters of similar term distributions are just Euclidean point clusters in the Hilbert space.
We will now investigate the geometrical structure of the set of all possible term distributions. Let us first calculate the center f̄(x) = (1/N) Σ_ν f^(ν)(x) = 1/2 of all these distributions (at each position, exactly half of the N distributions contain the term). The squared distance of a distribution f^(ν) from this center is |c̄ − c^(ν)|² = ‖f̄ − f^(ν)‖² = ∫_0^L (1/2 − f^(ν)(x))² dx = L/4, since f^(ν)(x) is either 0 or 1. Hence |c̄ − c^(ν)| = √L/2 for all ν, and by Bessel's inequality the same bound holds for the coefficient vectors truncated to order n. Thus, the truncated vectors all lie within a sphere of radius
R_0 = √L / 2   (25)
in the (n+1)-dimensional Hilbert space. The center of this sphere is at c̄_n. If, as in the Fourier and Legendre cases, one of the expansion functions, say ϕ_0(x), is constant, the vector c̄ describing the constant center function has only a non-vanishing zero component: c̄ = c̄_n = (√L/2, 0, 0, . . .). Fig. 6 (right) shows this term sphere in n + 1 = 3 dimensions for a document of length L = 9 and the expansion in terms of Legendre polynomials.
The fact that all possible truncated coefficient vectors c (ν) n lie within a sphere whose radius and center are known is very useful for clustering analysis. First of all, it shows where in the Hilbert space to look for clusters. Secondly, assume one has found a cluster K = {k 1 , . . . , k q } of term distributions by some clustering algorithm (for an nth order truncation). The volume of this cluster can be estimated by calculating the standard deviation
R_K = [ (1/q) Σ_{i=1}^{q} (k_i − k̄)² ]^{1/2} = [ (1/(2q²)) Σ_{i,j=1}^{q} (k_i − k_j)² ]^{1/2}
(here k̄ is the center of the cluster) and approximating the cluster by a sphere of radius R_K. Since the volume of a sphere of radius R_K in n+1 dimensions is proportional to R_K^{n+1}, the cluster occupies approximately a fraction ξ = (R_K/R_0)^{n+1} = (2R_K/√L)^{n+1} of the theoretically available space. A cluster would then be considered as significant only if ξ ≪ 1. An analysis of this kind may be useful to generate an ontology of terms based on individual documents.
It has been conjectured that the use of quantum mechanical methods, in particular infinite-dimensional Hilbert spaces and projection operators, may be advantageous in IR [8]. The approach presented here goes into this direction, because constructing appropriate sets of orthogonal functions is a standard technique in quantum mechanics. Still, we emphasize that our approach is essentially classical, not quantum mechanical, since it does not use any of the interpretational subtleties of quantum mechanics.
Conclusions
In this paper, a new approach to improve document relevance evaluation using truncated Hilbert space expansions has been presented. The proposed approach is based on an abstract representation of term positions in a document collection which induces a measure of proximity between terms (semantic interaction range) and permits their direct and simple comparison. Based on this abstract representation, it is possible to shift the complexity of processing term-positional data to the indexing phase, permitting the use of term-positional information at query time without significantly affecting the response time of the system. Three applications for IR were discussed: (a) ranking optimization based on a user-defined term distribution function, (b) query expansion based on term-positional information, and (c) a cluster analysis approach for terms within documents.
There are several areas of future work. For example, (a) quantifying the effect of the abstract term positions representation in the index size, (b) measuring the effectiveness of the proposed clustering approach, and (c) studying objective functions in documents having homogeneous structures (forms) are some of the topics that should be investigated.
| 3,738 |
0910.1938
|
2952531516
|
In addition to the frequency of terms in a document collection, the distribution of terms plays an important role in determining the relevance of documents. In this paper, a new approach for representing term positions in documents is presented. The approach allows an efficient evaluation of term-positional information at query evaluation time. Three applications are investigated: a function-based ranking optimization representing a user-defined document region, a query expansion technique based on overlapping the term distributions in the top-ranked documents, and cluster analysis of terms in documents. Experimental results demonstrate the effectiveness of the proposed approach.
|
One of the first approaches using abstract representations of term distributions in documents is Fourier Domain Scoring (FDS), proposed by @cite_8 . FDS performs a separate magnitude and phase analysis of term position signals to produce an optimized ranking. It creates an index based on page segmentation, storing term frequency and approximated positions in the document. FDS then processes the indexed data with the Fourier transform to perform the corresponding spectral analysis.
|
{
"abstract": [
"Current document retrieval methods use a vector space similarity measure to give scores of relevance to documents when related to a specific query. The central problem with these methods is that they neglect any spatial information within the documents in question. We present a new method, called Fourier Domain Scoring (FDS), which takes advantage of this spatial information, via the Fourier transform, to give a more accurate ordering of relevance to a document set. We show that FDS gives an improvement in precision over the vector space similarity measures for the common case of Web like queries, and it gives similar results to the vector space measures for longer queries."
],
"cite_N": [
"@cite_8"
],
"mid": [
"2105660794"
]
}
|
Information Retrieval via Truncated Hilbert-Space Expansions
|
The information retrieval (IR) process has two main stages. The first stage is the indexing stage in which the documents of a collection are processed to generate a database (index) containing the information about the terms of all documents in the collection. The index generally stores only term frequency information, but in some cases positional information of terms is also included, substantially increasing the memory requirements of the system.
In the second stage of the IR process (query evaluation), the user sends a query to the system, and the system responds with a ranked list of relevant documents. The implemented retrieval model determines how the relevant documents are calculated. Standard IR models (e.g. TFIDF, BM25) use the frequency of terms as the main document relevance criterion, producing adequate quality in the ranking and query processing time.
Other approaches, such as proximity queries or passage retrieval, complement the document relevance evaluation using term positional information. This additional process, normally performed at query time, generally improves the quality of the results but also slows down the response time of the system. Since the response time is a critical issue for the acceptance of an IR system by its users, the use of time-consuming algorithms to evaluate term-positional information at query time is generally inappropriate.
The IR model proposed in this paper shifts the complexity of processing the positional data to the indexing phase, using an abstract representation of the term positions and implementing a simple mathematical tool to operate with this compressed representation at query evaluation time. Thus, although query processing remains simple, the use of term-positional information provides new ways to optimize the IR process. Three applications are investigated: a function-based ranking optimization representing a user-defined document region, a query-expansion technique based on overlapping the term distributions in the top-ranked documents, and cluster analysis of terms in documents. Experimental results demonstrate the effectiveness of the proposed approach for optimizing the retrieval process.
The paper is organized as follows. Section 2 discusses related work. Section 3 presents the proposed approach for representing term positions based on truncated Hilbert space expansions. In Section 4, applications of the approach are described. Section 5 concludes the paper and outlines areas for future work.
Analyzing Term Positions
In this section, a general mathematical model to analyze term positions in documents is presented, making it possible to effectively use the term-positional information at query evaluation time.
Consider a document D of length L and a term t that appears in D. The distribution of the term t within the document is given by the set P_t that contains all positions of t, where the terms are enumerated starting with 1 for the first term and so on. For example, a set P_t = {2, 6} represents a term that is located at the second and sixth position of the document body. A characteristic function
f^(t)(x) = { 1 for x ∈ [p − 1, p] with p ∈ P_t ; 0 otherwise } ,   (1)
defined for x ∈ [0, L], is assigned to P t . The proposed method consists of approximating this characteristic function by an expansion in terms of certain sets of functions. In order to do so, some concepts of functional analysis are introduced. Details can be found in the book of Yosida [9].
Expansions in Hilbert Spaces
A Hilbert space H is a (possibly infinite-dimensional) vector space that is equipped with a scalar product ., . , i. e. two elements f, g ∈ H are mapped to a real or complex number f, g . We only consider real scalar products here.
An example of a Hilbert space is the space L²([0, L]) defined as the set of all functions f that are square-integrable in the interval [0, L], i.e. functions for which ∫_0^L (f(x))² dx < ∞. In this vector space, the addition of two functions f and g, and the multiplication of a function f by a scalar α ∈ R, are defined point-wise:
(f + g)(x) = f(x) + g(x) ,   (αf)(x) = αf(x) .
The scalar product in L²([0, L]) is defined by
⟨f, g⟩ = ∫_0^L f(x) g(x) dx .   (2)
Two vectors with vanishing scalar product are called orthogonal.
The scalar product induces a norm (an abstract measure of length)
‖f‖ = √⟨f, f⟩ ≥ 0 .   (3)
One says that a function f ∈ H is expanded in terms of a set of functions ϕ_0, ϕ_1, . . . as
f = Σ_{k=0}^{∞} γ_k ϕ_k ,   (4)
where the γ_k are real numbers, if the sequence f_n = Σ_{k=0}^{n} γ_k ϕ_k of finite sums converges to f. This kind of convergence is called norm convergence.
Of particular importance are so-called complete, orthonormal sets {ϕ 0 , ϕ 1 , . . .} of functions in H. They have the following properties: (a) The ϕ i are mutually orthogonal and normalized to unity:
⟨ϕ_n, ϕ_m⟩ = δ_{nm} = { 1 for n = m ; 0 for n ≠ m }   (5)
(b)
The ϕ i are complete, which means that every vector of the Hilbert space can be expanded into a convergent sum of them. Important properties of expansions in terms of complete orthonormal sets are: (a) The expansion coefficients γ k are given by
γ_k = ⟨ϕ_k, f⟩ .   (6)
(b) They fulfill Σ_{k=0}^{n} γ_k² ≤ ‖f‖² for all n, and Σ_{k=0}^{∞} γ_k² = ‖f‖²   (7)
(Bessel's inequality and Parseval's equation).
Given two expansions f = Σ_{k=0}^{∞} γ_k ϕ_k and g = Σ_{k=0}^{∞} γ'_k ϕ_k, the scalar product can be expressed as
⟨f, g⟩ = Σ_{k=0}^{∞} γ_k γ'_k .   (8)
If the expansion coefficients are combined into coefficient vectors c = (γ_0, γ_1, . . .) and c' = (γ'_0, γ'_1, . . .), the preceding equation takes the form ⟨f, g⟩ = c · c'. The Fourier expansions considered by Galeas et al. [4] are an example of such an expansion. The functions
ϕ_0^Fo(x) = 1/√L ,   ϕ_{2k−1}^Fo(x) = √(2/L) sin(2πkx/L) ,   ϕ_{2k}^Fo(x) = √(2/L) cos(2πkx/L)   (9)
(k > 0) form a complete orthonormal set in L²([0, L]), leading to an expansion
f(x) = a_0/√L + √(2/L) Σ_{k=1}^{∞} [ a_k cos(2πkx/L) + b_k sin(2πkx/L) ] ,   (10)
where a_0 = γ_0 and a_k = γ_{2k}, b_k = γ_{2k−1} for k > 0. Another complete set of orthonormal functions of L²([0, L]) is given by
ϕ_k^Le(x) = √((2k+1)/L) P*_k(x/L) ,   k ≥ 0 ,   (11)
where the P*_k(x) are so-called shifted Legendre polynomials [1]. These polynomials are of order k. The first few of them are P*_0(x) = 1, P*_1(x) = 2x − 1, P*_2(x) = 6x² − 6x + 1, P*_3(x) = 20x³ − 30x² + 12x − 1. Fig. 1 (left) shows ϕ_k^Le(x) for 0 ≤ k ≤ 4 in the range x ∈ [0, L] for L = 1.
Another example that will be used later is a complete set for the space L 2 (R + ) (the space of square-integrable functions for 0 ≤ x < ∞):
ϕ_k^La(x) = ( e^{−x/(2λ)} / √λ ) L_k(x/λ) ,   k ≥ 0 .   (12)
Here, λ is a positive scale parameter and the L_k(x) are Laguerre polynomials [1], the first few of which are L_0(x) = 1, L_1(x) = −x + 1, L_2(x) = x²/2 − 2x + 1, L_3(x) = −x³/6 + 3x²/2 − 3x + 1, see Fig. 1 (right).
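As a concrete illustration, the following Python sketch builds the characteristic function of Eq. (1) for a hypothetical term occurring at positions 2 and 6 of a document of length L = 9, computes the coefficients γ_k of Eq. (6) for the orthonormal shifted-Legendre basis of Eq. (11) by numerical quadrature, and checks Bessel's inequality (7). The grid size and the use of NumPy's Legendre utilities are illustrative choices, not part of the paper.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

L = 9.0
positions = {2, 6}          # the position set P_t of Eq. (1), a made-up example

def f_term(x):
    """Characteristic function f^(t)(x) of Eq. (1): 1 on [p-1, p] for p in P_t."""
    return np.array([1.0 if any(p - 1 <= xi <= p for p in positions) else 0.0 for xi in x])

def phi_le(k, x):
    """Orthonormal shifted-Legendre functions of Eq. (11): sqrt((2k+1)/L) * P*_k(x/L)."""
    return np.sqrt((2 * k + 1) / L) * Legendre.basis(k)(2 * x / L - 1)   # P*_k(y) = P_k(2y - 1)

def gamma(k, n_grid=18001):
    """Expansion coefficient gamma_k = <phi_k, f>, Eq. (6), via a simple Riemann sum."""
    x = np.linspace(0.0, L, n_grid)
    dx = x[1] - x[0]
    return float(np.sum(phi_le(k, x) * f_term(x)) * dx)

n = 6                                        # truncation order used in the paper's experiments
c_n = np.array([gamma(k) for k in range(n + 1)])
print("c_n =", np.round(c_n, 4))
# Bessel's inequality (7): sum_k gamma_k^2 <= ||f||^2 = |P_t| (each occupied interval has length 1)
print(float(np.sum(c_n ** 2)), "<=", float(len(positions)))
```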
Truncated Expansions of Term Distributions
As explained above, the finite sums f_n = Σ_{k=0}^{n} γ_k ϕ_k converge to the function f in the sense of norm convergence. As a consequence of Bessel's inequality (7) they approximate f increasingly better for increasing n. An essential ingredient for the following discussion is to consider a truncated expansion, i.e. the mapping
P_n : f^(t) → f_n^(t) ,   (13)
which associates to a term distribution f^(t) of the form (1) its finite-order approximation f_n^(t) in terms of some complete orthonormal set for some order n. Figure 2 shows an example for the Fourier expansion. One can observe the characteristic broadening effect generated by the reduction of the expansion order (truncation).
The L² scalar product of two truncated term distributions f_n and g_n,
⟨f_n, g_n⟩ = ∫ f_n(x) g_n(x) dx   (14)
has the meaning of an overlap integral: the integrand is only large in regions in which both functions f_n(x) and g_n(x) are large, so that ⟨f_n, g_n⟩ measures how well both functions overlap in the whole integration range. Given f_n and g_n, two truncated term distributions describing the term positions and their neighborhood in a certain document, we introduce the concept of semantic interaction range: two terms that are close to each other present a stronger interaction because their truncated distributions have a considerable overlap. This semantic interaction range motivates the following definition of the similarity of two term distributions f and g: for some fixed order n, one sets
sim(f, g) = ⟨f_n, g_n⟩ = ⟨P_n f, P_n g⟩ .   (15)
In this definition, the truncation P_n : f → f_n is essential, because the original term distributions f and g are always orthogonal if they describe two different terms. This is so because different terms are always at different positions within a document, so that their overlap always vanishes. Definition (15) is only one possibility; in fact, any definition based on the scalar product ⟨f_n, g_n⟩ can be utilized. For example, in Galeas et al. [4] a cosine definition cos ϑ = ⟨f_n, g_n⟩ / (‖f_n‖ ‖g_n‖) has been used. Another choice is the norm difference
‖f_n − g_n‖ = [ ∫ (f_n(x) − g_n(x))² dx ]^{1/2} = [ ‖f_n‖² + ‖g_n‖² − 2 ⟨f_n, g_n⟩ ]^{1/2} .   (16)
Using different measures based on ⟨f_n, g_n⟩, we have found no significant differences in the final retrieval results in several experiments. The scalar product of the truncated distributions can be easily calculated using the coefficient vectors: if the original distributions f and g have the infinite-dimensional coefficient vectors c = (γ_0, γ_1, . . .) and c' = (γ'_0, γ'_1, . . .), respectively, then the truncated distributions f_n and g_n have the (n+1)-dimensional coefficient vectors c_n = (γ_0, γ_1, . . . , γ_n) and c'_n = (γ'_0, γ'_1, . . . , γ'_n), respectively, and their scalar product is the finite sum
⟨f_n, g_n⟩ = c_n · c'_n = Σ_{k=0}^{n} γ_k γ'_k .   (17)
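Since the index stores only the truncated coefficient vectors, the similarity measures (15)-(17) reduce to elementary vector operations. A minimal sketch follows; the function names are ours, not the paper's.

```python
import numpy as np

def sim(c_f, c_g):
    """Overlap similarity of Eqs. (15)/(17): dot product of the stored coefficient vectors."""
    return float(np.dot(c_f, c_g))

def cos_theta(c_f, c_g):
    """Cosine variant used in Galeas et al. [4]."""
    return sim(c_f, c_g) / (np.linalg.norm(c_f) * np.linalg.norm(c_g))

def norm_difference(c_f, c_g):
    """Norm difference of the truncated distributions, Eq. (16)."""
    return float(np.linalg.norm(np.asarray(c_f, dtype=float) - np.asarray(c_g, dtype=float)))
```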
The Semantic Interaction Range
In this section, a precise definition of the semantic interaction range is given. In abstract terms, the truncation P_n : f → f_n is a filtering or a projection: in the expansion f(x) = Σ_{k=0}^{∞} γ_k ϕ_k(x) the components ϕ_k for k > n are filtered out, which amounts to a projection of f onto the components ϕ_0, . . . , ϕ_n. Thus, P_n is a projection operator in the Hilbert space. To derive a closed expression for the operator P_n, one combines (P_n f)(x) = f_n(x) = Σ_{k=0}^{n} γ_k ϕ_k(x) with (6) to obtain
(P_n f)(x) = Σ_{k=0}^{n} [ ∫ ϕ_k(y) f(y) dy ] ϕ_k(x) = ∫ [ Σ_{k=0}^{n} ϕ_k(y) ϕ_k(x) ] f(y) dy .   (18)
One can write the last expression as ∫ p_n(y, x) f(y) dy with the projection kernel
p_n(y, x) = Σ_{k=0}^{n} ϕ_k(y) ϕ_k(x)   (19)
as an integral representation of P_n in the sense of a convolution. It has the advantage that one can study the properties of the truncation independently of the function f. The width of p_n(y, x) as a function of x is a lower bound for the width of a truncated expansion of a term located at y. Therefore, this width will be used as the semantic interaction range for a term at position y.
For the Fourier expansion, p 2k is given by
p_{2k}^Fo(y, x) = [ cos(4πk(y−x)/L) − cos(2π(2k+1)(y−x)/L) ] / [ L (1 − cos(2π(y−x)/L)) ] .   (20)
(We consider only even orders n = 2k, because for these orders the expansion consists of an equal number of sine and cosine terms, see (9).) The maximum of p_{2k}^Fo(y, x) is at x = y and the two zeros closest to the maximum are at x = y ± L/(2n + 1). Thus, the semantic interaction range for a Fourier expansion of order n may be defined to be the distance L/(2n + 1) between the maximum and the nearest zero. For the Legendre and Laguerre expansions, the corresponding projection kernels take the form
p_n^i(y, x) = α_n^i [ ϕ_{n+1}^i(y) ϕ_n^i(x) − ϕ_n^i(y) ϕ_{n+1}^i(x) ] / (y − x) ,   (22)
i = Le, La, with α_n^Le = (L/2)(n+1)/(2n+1) and α_n^La = −λ(n+1). These kernels are no longer functions of y − x, meaning that the broadening of a term distribution depends on the position y of the term distribution within the document. Fig. 3 (right) shows the projection kernel p_6^La(y, x) for y = 20 and y = 100. One can see that the spatial resolution of the truncated expansion decreases for terms that are far away from the beginning of the document.
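The projection kernel (19) can also be evaluated directly by summing the basis functions; the following sketch does this for the Fourier basis (9) and can be compared against the closed form (20). The parameter values are arbitrary and only serve as an illustration.

```python
import numpy as np

def phi_fo(k, x, L):
    """Orthonormal Fourier functions of Eq. (9)."""
    x = np.asarray(x, dtype=float)
    if k == 0:
        return np.full_like(x, 1.0 / np.sqrt(L))
    m = (k + 1) // 2
    trig = np.sin if k % 2 == 1 else np.cos
    return np.sqrt(2.0 / L) * trig(2.0 * np.pi * m * x / L)

def p_kernel(n, y, x, L):
    """Projection kernel p_n(y, x) of Eq. (19), summed directly over the basis."""
    return sum(phi_fo(k, y, L) * phi_fo(k, x, L) for k in range(n + 1))

# The kernel is peaked at x = y; its width is the semantic interaction range discussed above.
L, n, y = 100.0, 6, 40.0
x = np.linspace(30.0, 50.0, 5)
print(np.round(p_kernel(n, y, x, L), 4))
```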
The goal of our approach is to shift the complexity of processing the positional data from the query evaluation phase to the (not time critical) indexing phase, reducing the ranking optimization via term positions to a simple mathematical operation.
Hence, we propose to calculate the expansion coefficients γ k of the term distributions in the indexing phase and to store this abstract term positional information in the index. This permits a considerably faster query evaluation, compared with methods that use the raw term-positional information.
Thus, the index contains an (n+1)-dimensional coefficient vector c n = (γ 0 , γ 1 , . . . , γ n ) for each term and each document in the collection. The γ k are calculated analytically via (6). To give an example of the complexity involved,
γ_k = Σ_{p∈P_t} Σ_{j=0}^{k} α_j [ (p/L)^{j+1} − ((p−1)/L)^{j+1} ]   (23)
with α_j = √((2k+1)L) · a_j/(j+1) is the coefficient for the shifted-Legendre expansion, where the a_j are the monomial coefficients of P*_k(y) = Σ_j a_j y^j.
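A direct implementation of the analytic coefficients (23) needs only the monomial coefficients a_j of the shifted Legendre polynomials, which are quoted above for k ≤ 3. The sketch below assumes the reconstruction of (23) given here; the position set and document length are made-up values.

```python
import numpy as np

# Monomial coefficients a_j (ascending powers) of the shifted Legendre polynomials
# P*_k(y) quoted in Section 3, for k = 0..3.
SH_LEGENDRE = [
    [1.0],
    [-1.0, 2.0],
    [1.0, -6.0, 6.0],
    [-1.0, 12.0, -30.0, 20.0],
]

def legendre_gammas(positions, L, n=3):
    """Expansion coefficients gamma_k of Eq. (23) for orders k <= 3."""
    gammas = []
    for k in range(n + 1):
        g = 0.0
        for j, a_j in enumerate(SH_LEGENDRE[k]):
            alpha_j = np.sqrt((2 * k + 1) * L) * a_j / (j + 1)
            for p in positions:
                g += alpha_j * ((p / L) ** (j + 1) - ((p - 1) / L) ** (j + 1))
        gammas.append(g)
    return np.array(gammas)

# Term at positions 2 and 6 of a document of length L = 9; gamma_0 = (1/sqrt(L)) * |P_t| = 2/3.
print(np.round(legendre_gammas({2, 6}, L=9), 4))
```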
Ranking Optimization
The first scenario states document ranking as an optimization problem that is based on the query term distribution function f q,d and a user-defined objective function f o representing the optimal query term distribution in the document body:
Maximize { sim(f_{q,d}, f_o) }   ∀ f_{q,d} ∈ A   (24)
where A represents the query term distributions in a document set, f q,d is the query term distribution function for query q in document d, and f o is a user-defined objective function, representing the optimal query term distributions for the documents in the document ranking. Experiments based on the TREC-8 collection and the software Terrier [5], carried out to order n = 6, show the accuracy of the term distributions in a ranking based on user-defined objective functions. As depicted in Figure 4, the Fourier and Legendre models present a high accuracy for the distribution of query terms in the top-20 ranked documents, based on two different objective functions: The first function (denoted f o = 1|3) selects terms located in the first third of the document, and the second (f o = 3|3) selects terms located in the last third of the document [4].
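Operationally, the optimization (24) is just a dot product between each stored query-term coefficient vector and the coefficient vector of the objective function f_o. The sketch below expands an objective covering the first third of a document (the case f_o = 1|3) in the Fourier basis (9) and ranks documents accordingly; all names and the data layout are assumptions for illustration.

```python
import numpy as np

def fourier_coeffs(f_values, x, n):
    """Coefficients gamma_0..gamma_n of a sampled function in the Fourier basis of Eq. (9)."""
    L, dx = x[-1], x[1] - x[0]
    basis = [np.full_like(x, 1.0 / np.sqrt(L))]
    for m in range(1, n // 2 + 1):
        basis.append(np.sqrt(2.0 / L) * np.sin(2 * np.pi * m * x / L))   # phi_{2m-1}
        basis.append(np.sqrt(2.0 / L) * np.cos(2 * np.pi * m * x / L))   # phi_{2m}
    return np.array([np.sum(phi * f_values) * dx for phi in basis[: n + 1]])

def rank_documents(stored_vectors, c_o):
    """Rank documents by sim(f_{q,d}, f_o) = c_{q,d} . c_o, Eq. (24).
    stored_vectors: dict doc_id -> (n+1)-dim coefficient vector of the query terms in that doc."""
    return sorted(stored_vectors, key=lambda d: float(np.dot(stored_vectors[d], c_o)), reverse=True)

# Objective 'f_o = 1|3': query terms should appear in the first third of the document.
L, n = 90.0, 6
x = np.linspace(0.0, L, 9001)
c_o = fourier_coeffs((x <= L / 3).astype(float), x, n)
print(rank_documents({"d1": np.ones(n + 1), "d2": -np.ones(n + 1)}, c_o))
```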
Query Expansion
The second scenario considers the top-r documents D = {d_1, d_2, . . . , d_r} of an initial ranking process and the functions f_{q,d} with d ∈ D. The set of terms T_q whose elements t maximize the expression sim(f_{q,d}, f_{t,d}) is computed. It contains, for all documents in D, the terms whose distribution is similar to that of the query, i.e. terms positioned near the query in the top-ranked documents. This set T_q is used to expand q. As depicted in Figure 5, experiments executed on the TREC-8 collection demonstrate that query expansion based on the proposed orthogonal functions (Fourier and Laguerre) outperforms state-of-the-art query expansion models, such as Rocchio and Kullback-Leibler [5]. The term position models (left) differ from the other models (right) in that the former tend to increase the retrieval performance as the number of expansion documents and expansion terms grows, while for the other models the performance drops beyond roughly the 15th expansion document. Figure 6 (left) shows a fixed query expansion configuration in which the other models show their best performance; nevertheless, the term distribution models perform better. Any increase in the number of expansion documents or expansion terms makes the superiority of the term distribution models even clearer.
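A sketch of the corresponding term-selection step, operating purely on stored coefficient vectors, is given below; the data layout and the cut-off m are illustrative assumptions.

```python
import numpy as np

def expansion_terms(top_docs, query_vecs, term_vecs, m=10):
    """Build the expansion set T_q: terms whose distributions in the top-ranked
    documents are most similar to the query distribution.

    top_docs:   list of document ids D = [d_1, ..., d_r] from the initial ranking
    query_vecs: dict d -> coefficient vector of the query terms in d
    term_vecs:  dict d -> {term: coefficient vector of that term in d}
    """
    scores = {}
    for d in top_docs:
        for t, c_t in term_vecs[d].items():
            scores[t] = scores.get(t, 0.0) + float(np.dot(query_vecs[d], c_t))
    return [t for t, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:m]]
```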
Cluster Analysis of Terms in Documents
Given a document, one may ask whether there are groups (clusters) of terms whose elements all have similar distributions. One may then infer that all terms inside a cluster describe related concepts [2]. In this section, some properties of the proposed method will be explained that may be useful for the analysis of term clusters.
Consider a document of length L. Since at every position within the document a particular term may either be present or not, there are in total N = 2 L possible term distributions. Each of these distributions is mapped to a point in an (n + 1)-dimensional Hilbert space. If the norm difference (16) is used as the similarity criterion, then clusters of similar term distributions are just Euclidean point clusters in the Hilbert space.
We will now investigate the geometrical structure of the set of all possible term distributions. Let us first calculate the center f̄(x) = (1/N) Σ_ν f^(ν)(x) = 1/2 of all these distributions (at each position, exactly half of the N distributions contain the term). The squared distance of a distribution f^(ν) from this center is |c̄ − c^(ν)|² = ‖f̄ − f^(ν)‖² = ∫_0^L (1/2 − f^(ν)(x))² dx = L/4, since f^(ν)(x) is either 0 or 1. Hence |c̄ − c^(ν)| = √L/2 for all ν, and by Bessel's inequality the same bound holds for the coefficient vectors truncated to order n. Thus, the truncated vectors all lie within a sphere of radius
R_0 = √L / 2   (25)
in the (n+1)-dimensional Hilbert space. The center of this sphere is at c̄_n. If, as in the Fourier and Legendre cases, one of the expansion functions, say ϕ_0(x), is constant, the vector c̄ describing the constant center function has only a non-vanishing zero component: c̄ = c̄_n = (√L/2, 0, 0, . . .). Fig. 6 (right) shows this term sphere in n + 1 = 3 dimensions for a document of length L = 9 and the expansion in terms of Legendre polynomials.
The fact that all possible truncated coefficient vectors c (ν) n lie within a sphere whose radius and center are known is very useful for clustering analysis. First of all, it shows where in the Hilbert space to look for clusters. Secondly, assume one has found a cluster K = {k 1 , . . . , k q } of term distributions by some clustering algorithm (for an nth order truncation). The volume of this cluster can be estimated by calculating the standard deviation
R_K = [ (1/q) Σ_{i=1}^{q} (k_i − k̄)² ]^{1/2} = [ (1/(2q²)) Σ_{i,j=1}^{q} (k_i − k_j)² ]^{1/2}
(here k̄ is the center of the cluster) and approximating the cluster by a sphere of radius R_K. Since the volume of a sphere of radius R_K in n+1 dimensions is proportional to R_K^{n+1}, the cluster occupies approximately a fraction ξ = (R_K/R_0)^{n+1} = (2R_K/√L)^{n+1} of the theoretically available space. A cluster would then be considered as significant only if ξ ≪ 1. An analysis of this kind may be useful to generate an ontology of terms based on individual documents.
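The significance test described above is straightforward to apply to a set of truncated coefficient vectors; a minimal sketch follows, with the significance threshold left to the caller.

```python
import numpy as np

def cluster_significance(cluster_vectors, L):
    """Return (R_K, xi): the cluster radius and the occupied fraction
    xi = (2 R_K / sqrt(L))**(n+1) of the term sphere of radius R_0 = sqrt(L)/2."""
    K = np.asarray(cluster_vectors, dtype=float)      # q x (n+1) truncated coefficient vectors
    n_plus_1 = K.shape[1]
    center = K.mean(axis=0)                           # cluster center k-bar
    R_K = float(np.sqrt(np.mean(np.sum((K - center) ** 2, axis=1))))
    xi = (2.0 * R_K / np.sqrt(L)) ** n_plus_1
    return R_K, xi                                    # the cluster is significant only if xi << 1
```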
It has been conjectured that the use of quantum mechanical methods, in particular infinite-dimensional Hilbert spaces and projection operators, may be advantageous in IR [8]. The approach presented here goes into this direction, because constructing appropriate sets of orthogonal functions is a standard technique in quantum mechanics. Still, we emphasize that our approach is essentially classical, not quantum mechanical, since it does not use any of the interpretational subtleties of quantum mechanics.
Conclusions
In this paper, a new approach to improve document relevance evaluation using truncated Hilbert space expansions has been presented. The proposed approach is based on an abstract representation of term positions in a document collection which induces a measure of proximity between terms (semantic interaction range) and permits their direct and simple comparison. Based on this abstract representation, it is possible to shift the complexity of processing term-positional data to the indexing phase, permitting the use of term-positional information at query time without significantly affecting the response time of the system. Three applications for IR were discussed: (a) ranking optimization based on a user-defined term distribution function, (b) query expansion based on term-positional information, and (c) a cluster analysis approach for terms within documents.
There are several areas of future work. For example, (a) quantifying the effect of the abstract term positions representation in the index size, (b) measuring the effectiveness of the proposed clustering approach, and (c) studying objective functions in documents having homogeneous structures (forms) are some of the topics that should be investigated.
| 3,738 |
0910.1938
|
2952531516
|
In addition to the frequency of terms in a document collection, the distribution of terms plays an important role in determining the relevance of documents. In this paper, a new approach for representing term positions in documents is presented. The approach allows an efficient evaluation of term-positional information at query evaluation time. Three applications are investigated: a function-based ranking optimization representing a user-defined document region, a query expansion technique based on overlapping the term distributions in the top-ranked documents, and cluster analysis of terms in documents. Experimental results demonstrate the effectiveness of the proposed approach.
|
A recent approach based on an abstract representation of term position is Fourier Vector Scoring (FVS) @cite_2 . It represents the term information (Fourier coefficients) directly as an @math -dimensional vector using the analytic Fourier transform, permitting an immediate and simple term comparison process.
|
{
"abstract": [
"In addition to the frequency of terms in a document collection, the distribution of terms plays an important role in determining the relevance of documents for a given search query. In this paper, term distribution analysis using Fourier series expansion as a novel approach for calculating an abstract representation of term positions in a document corpus is introduced. Based on this approach, two methods for improving the evaluation of document relevance are proposed: (a) a function-based ranking optimization representing a user defined document region, and (b) a query expansion technique based on overlapping the term distributions in the top-ranked documents. Experimental results demonstrate the effectiveness of the proposed approach in providing new possibilities for optimizing the retrieval process."
],
"cite_N": [
"@cite_2"
],
"mid": [
"2070122157"
]
}
|
Information Retrieval via Truncated Hilbert-Space Expansions
|
The information retrieval (IR) process has two main stages. The first stage is the indexing stage in which the documents of a collection are processed to generate a database (index) containing the information about the terms of all documents in the collection. The index generally stores only term frequency information, but in some cases positional information of terms is also included, substantially increasing the memory requirements of the system.
In the second stage of the IR process (query evaluation), the user sends a query to the system, and the system responds with a ranked list of relevant documents. The implemented retrieval model determines how the relevant documents are calculated. Standard IR models (e.g. TFIDF, BM25) use the frequency of terms as the main document relevance criterion, producing adequate quality in the ranking and query processing time.
Other approaches, such as proximity queries or passage retrieval, complement the document relevance evaluation using term positional information. This additional process, normally performed at query time, generally improves the quality of the results but also slows down the response time of the system. Since the response time is a critical issue for the acceptance of an IR system by its users, the use of time-consuming algorithms to evaluate term-positional information at query time is generally inappropriate.
The IR model proposed in this paper shifts the complexity of processing the positional data to the indexing phase, using an abstract representation of the term positions and implementing a simple mathematical tool to operate with this compressed representation at query evaluation time. Thus, although query processing remains simple, the use of term-positional information provides new ways to optimize the IR process. Three applications are investigated: a function-based ranking optimization representing a user-defined document region, a query-expansion technique based on overlapping the term distributions in the top-ranked documents, and cluster analysis of terms in documents. Experimental results demonstrate the effectiveness of the proposed approach for optimizing the retrieval process.
The paper is organized as follows. Section 2 discusses related work. Section 3 presents the proposed approach for representing term positions based on truncated Hilbert space expansions. In Section 4, applications of the approach are described. Section 5 concludes the paper and outlines areas for future work.
Analyzing Term Positions
In this section, a general mathematical model to analyze term positions in documents is presented, making it possible to effectively use the term-positional information at query evaluation time.
Consider a document D of length L and a term t that appears in D. The distribution of the term t within the document is given by the set P t that contains all positions of t, where all terms are enumerated starting with 1 for the first term and so on. For example, a set P t = {2, 6} represents a tern that is located at the second and sixth position of the document body. A characteristic function
f^(t)(x) = { 1 for x ∈ [p − 1, p] with p ∈ P_t ; 0 otherwise } ,   (1)
defined for x ∈ [0, L], is assigned to P t . The proposed method consists of approximating this characteristic function by an expansion in terms of certain sets of functions. In order to do so, some concepts of functional analysis are introduced. Details can be found in the book of Yosida [9].
Expansions in Hilbert Spaces
A Hilbert space H is a (possibly infinite-dimensional) vector space that is equipped with a scalar product ., . , i. e. two elements f, g ∈ H are mapped to a real or complex number f, g . We only consider real scalar products here.
An example of a Hilbert space is the space L²([0, L]) defined as the set of all functions f that are square-integrable in the interval [0, L], i.e. functions for which ∫_0^L (f(x))² dx < ∞. In this vector space, the addition of two functions f and g, and the multiplication of a function f by a scalar α ∈ R, are defined point-wise:
(f + g)(x) = f(x) + g(x) ,   (αf)(x) = αf(x) .
The scalar product in L²([0, L]) is defined by
⟨f, g⟩ = ∫_0^L f(x) g(x) dx .   (2)
Two vectors with vanishing scalar product are called orthogonal.
The scalar product induces a norm (an abstract measure of length)
‖f‖ = √⟨f, f⟩ ≥ 0 .   (3)
One says that a function f ∈ H is expanded in terms of a set of functions ϕ_0, ϕ_1, . . . as
f = Σ_{k=0}^{∞} γ_k ϕ_k ,   (4)
where the γ_k are real numbers, if the sequence f_n = Σ_{k=0}^{n} γ_k ϕ_k of finite sums converges to f. This kind of convergence is called norm convergence.
Of particular importance are so-called complete, orthonormal sets {ϕ 0 , ϕ 1 , . . .} of functions in H. They have the following properties: (a) The ϕ i are mutually orthogonal and normalized to unity:
⟨ϕ_n, ϕ_m⟩ = δ_{nm} = { 1 for n = m ; 0 for n ≠ m }   (5)
(b)
The ϕ i are complete, which means that every vector of the Hilbert space can be expanded into a convergent sum of them. Important properties of expansions in terms of complete orthonormal sets are: (a) The expansion coefficients γ k are given by
γ_k = ⟨ϕ_k, f⟩ .   (6)
(b) They fulfill Σ_{k=0}^{n} γ_k² ≤ ‖f‖² for all n, and Σ_{k=0}^{∞} γ_k² = ‖f‖²   (7)
(Bessel's inequality and Parseval's equation).
Given two expansions f = Σ_{k=0}^{∞} γ_k ϕ_k and g = Σ_{k=0}^{∞} γ'_k ϕ_k, the scalar product can be expressed as
⟨f, g⟩ = Σ_{k=0}^{∞} γ_k γ'_k .   (8)
If the expansion coefficients are combined into coefficient vectors c = (γ 0 , γ 1 , . . .), c = (γ 0 , γ 1 , . . .), the preceding equation takes the form f, g = c · c . The Fourier expansions considered by Galeas et al. [4] are an example of such an expansion. The functions
ϕ_0^Fo(x) = 1/√L ,   ϕ_{2k−1}^Fo(x) = √(2/L) sin(2πkx/L) ,   ϕ_{2k}^Fo(x) = √(2/L) cos(2πkx/L)   (9)
(k > 0) form a complete orthonormal set in L²([0, L]), leading to an expansion
f(x) = a_0/√L + √(2/L) Σ_{k=1}^{∞} [ a_k cos(2πkx/L) + b_k sin(2πkx/L) ] ,   (10)
where a_0 = γ_0 and a_k = γ_{2k}, b_k = γ_{2k−1} for k > 0. Another complete set of orthonormal functions of L²([0, L]) is given by
ϕ_k^Le(x) = √((2k+1)/L) P*_k(x/L) ,   k ≥ 0 ,   (11)
where the P * k (x) are so-called shifted Legendre polynomials [1]. These polynomials are of order k. The first few of them are P *
0 (x) = 1, P * 1 (x) = 2x−1, P * 2 (x) = 6x 2 −6x+1, P * 3 (x) = 20x 3 − 30x 2 + 12x − 1. Fig. 1 (left) shows ϕ Le k (x) for 0 ≤ k ≤ 4 in the range x ∈ [0, L] for L = 1.
Another example that will be used later is a complete set for the space L 2 (R + ) (the space of square-integrable functions for 0 ≤ x < ∞):
ϕ_k^La(x) = ( e^{−x/(2λ)} / √λ ) L_k(x/λ) ,   k ≥ 0 .   (12)
Here, λ is a positive scale parameter and the L k (x) are Laguerre polynomials [1], the first few of which are L 0 (x) = 1,
L 1 (x) = −x + 1, L 2 (x) = x 2 /2 − 2x + 1, L 3 (x) = −x 3 /6 + 3x 2 /2 − 3x + 1, see Fig. 1 (right).
Truncated Expansions of Term Distributions
As explained above, the finite sums f n = n k=0 γ k ϕ k converge to the function f in the sense of norm convergence. As a consequence of Bessel's inequality (7) they approximate f increasingly better for increasing n. An essential ingredient for the following discussion is to consider a truncated expansion, i. e. the mapping
P n : f (t) → f (t) n ,(13)
which associates to a term distribution f (t) of the form (1) its finite-order approximation f (t) n in terms of some complete orthonormal set for some order n. Figure 2 shows an example for the Fourier expansion. One can observe the characteristic broadening effect generated by the reduction of the expansion order (truncation).
The L 2 scalar product of two truncated term distributions f n and g n ,
⟨f_n, g_n⟩ = ∫ f_n(x) g_n(x) dx   (14)
has the meaning of an overlap integral: The integrand is only large in regions in which both functions f n (x) and g n (x) are large, so that f n , g n measures how well both functions overlap in the whole integration range. Given f n and g n , two truncated term distributions describing the term positions and their neighborhood in a certain document, we introduce the concept of semantic interaction range: Two terms that are close to each other present a stronger interaction because their truncated distributions have a considerable overlap. This semantic interaction range motivates the following definition of the similarity of two term distributions f and g: For some fixed order n, one sets sim(f, g) = f n , g n = P n f, P n g .
In this definition, the truncation P n : f → f n is essential, because the original term distributions f and g are always orthogonal if they describe two different terms. This is so because different terms are always at different positions within a document, so that their overlap always vanishes. Definition (15) is only one possibility. In fact, any definition based on the scalar product f n , g n can be utilized. For example, in Galeas et al. [4] a cosine definition cos ϑ = fn,gn fn gn has been used. Another choice is the norm difference
‖f_n − g_n‖ = [ ∫ (f_n(x) − g_n(x))² dx ]^{1/2} = [ ‖f_n‖² + ‖g_n‖² − 2 ⟨f_n, g_n⟩ ]^{1/2} .   (16)
Using different measures based on ⟨f_n, g_n⟩, we have found no significant differences in the final retrieval results in several experiments. The scalar product of the truncated distributions can be easily calculated using the coefficient vectors: if the original distributions f and g have the infinite-dimensional coefficient vectors c = (γ_0, γ_1, . . .) and c' = (γ'_0, γ'_1, . . .), respectively, then the truncated distributions f_n and g_n have the (n+1)-dimensional coefficient vectors c_n = (γ_0, γ_1, . . . , γ_n) and c'_n = (γ'_0, γ'_1, . . . , γ'_n), respectively, and their scalar product is the finite sum
⟨f_n, g_n⟩ = c_n · c'_n = Σ_{k=0}^{n} γ_k γ'_k .   (17)
The Semantic Interaction Range
In this section, a precise definition of the semantic interaction range is given. In abstract terms, the truncation P_n : f → f_n is a filtering or a projection: in the expansion f(x) = Σ_{k=0}^{∞} γ_k ϕ_k(x) the components ϕ_k for k > n are filtered out, which amounts to a projection of f onto the components ϕ_0, . . . , ϕ_n. Thus, P_n is a projection operator in the Hilbert space. To derive a closed expression for the operator P_n, one combines (P_n f)(x) = f_n(x) = Σ_{k=0}^{n} γ_k ϕ_k(x) with (6) to obtain
(P_n f)(x) = Σ_{k=0}^{n} [ ∫ ϕ_k(y) f(y) dy ] ϕ_k(x) = ∫ [ Σ_{k=0}^{n} ϕ_k(y) ϕ_k(x) ] f(y) dy .   (18)
One can write the last expression as ∫ p_n(y, x) f(y) dy with the projection kernel
p_n(y, x) = Σ_{k=0}^{n} ϕ_k(y) ϕ_k(x)   (19)
as an integral representation of P_n in the sense of a convolution. It has the advantage that one can study the properties of the truncation independently of the function f. The width of p_n(y, x) as a function of x is a lower bound for the width of a truncated expansion of a term located at y. Therefore, this width will be used as the semantic interaction range for a term at position y.
For the Fourier expansion, p 2k is given by
p_{2k}^Fo(y, x) = [ cos(4πk(y−x)/L) − cos(2π(2k+1)(y−x)/L) ] / [ L (1 − cos(2π(y−x)/L)) ] .   (20)
(We consider only even orders n = 2k, because for these orders the expansion consists of an equal number of sine and cosine terms, see (9).) The maximum of p_{2k}^Fo(y, x) is at x = y and the two zeros closest to the maximum are at x = y ± L/(2n + 1). Thus, the semantic interaction range for a Fourier expansion of order n may be defined to be the distance L/(2n + 1) between the maximum and the nearest zero. For the Legendre and Laguerre expansions, the corresponding projection kernels take the form
p_n^i(y, x) = α_n^i [ ϕ_{n+1}^i(y) ϕ_n^i(x) − ϕ_n^i(y) ϕ_{n+1}^i(x) ] / (y − x) ,   (22)
i = Le, La, with α_n^Le = (L/2)(n+1)/(2n+1) and α_n^La = −λ(n+1). These kernels are no longer functions of y − x, meaning that the broadening of a term distribution depends on the position y of the term distribution within the document. Fig. 3 (right) shows the projection kernel p_6^La(y, x) for y = 20 and y = 100. One can see that the spatial resolution of the truncated expansion decreases for terms that are far away from the beginning of the document.
The goal of our approach is to shift the complexity of processing the positional data from the query evaluation phase to the (not time critical) indexing phase, reducing the ranking optimization via term positions to a simple mathematical operation.
Hence, we propose to calculate the expansion coefficients γ k of the term distributions in the indexing phase and to store this abstract term positional information in the index. This permits a considerably faster query evaluation, compared with methods that use the raw term-positional information.
Thus, the index contains an (n+1)-dimensional coefficient vector c n = (γ 0 , γ 1 , . . . , γ n ) for each term and each document in the collection. The γ k are calculated analytically via (6). To give an example of the complexity involved,
γ_k = Σ_{p∈P_t} Σ_{j=0}^{k} α_j [ (p/L)^{j+1} − ((p−1)/L)^{j+1} ]   (23)
with α_j = √((2k+1)L) · a_j/(j+1) is the coefficient for the shifted-Legendre expansion, where the a_j are the monomial coefficients of P*_k(y) = Σ_j a_j y^j.
Ranking Optimization
The first scenario states document ranking as an optimization problem that is based on the query term distribution function f q,d and a user-defined objective function f o representing the optimal query term distribution in the document body:
Maximize { sim(f_{q,d}, f_o) }   ∀ f_{q,d} ∈ A   (24)
where A represents the query term distributions in a document set, f q,d is the query term distribution function for query q in document d, and f o is a user-defined objective function, representing the optimal query term distributions for the documents in the document ranking. Experiments based on the TREC-8 collection and the software Terrier [5], carried out to order n = 6, show the accuracy of the term distributions in a ranking based on user-defined objective functions. As depicted in Figure 4, the Fourier and Legendre models present a high accuracy for the distribution of query terms in the top-20 ranked documents, based on two different objective functions: The first function (denoted f o = 1|3) selects terms located in the first third of the document, and the second (f o = 3|3) selects terms located in the last third of the document [4].
Query Expansion
The second scenario considers the top-r documents D = {d 1 , d 2 , . . . , d r } of an initial ranking process and the functions f q,d with d ∈ D. The set of terms T q whose elements t maximize the expression sim(f q,d , f t,d ) is computed. It contains the terms for all documents in D that have a similar distribution as the query, i.e. terms positioned near the query in the top ranked documents. This set T q is used to expand q. As depicted in Figure 5, experiments executed on the TREC-8 collection demonstrate that query expansion based on the proposed orthogonal functions (Fourier and Laguerre) outperform state-of-the-art query expansion models, such as Rocchio and Kullback-Leibler [5]. The term position models (left) differ from the other models (right) because the former tend to increase the retrieval performance by increasing the number of expansion documents and expansion terms, while for the other models, the performance drops beyond roughly the 15 th expansion document. Figure 6 (left) shows a fixed query expansion configuration in which the other models show their best performance. Nevertheless, the term distribution models perform better. Any increase in the number of expansion documents or expansion terms makes the superiority of the term distribution models even clearer.
Cluster Analysis of Terms in Documents
Given a document, one may ask whether there are groups (clusters) of terms whose elements all have similar distributions. One may then infer that all terms inside a cluster describe related concepts [2]. In this section, some properties of the proposed method will be explained that may be useful for the analysis of term clusters.
Consider a document of length L. Since at every position within the document a particular term may either be present or not, there are in total N = 2 L possible term distributions. Each of these distributions is mapped to a point in an (n + 1)-dimensional Hilbert space. If the norm difference (16) is used as the similarity criterion, then clusters of similar term distributions are just Euclidean point clusters in the Hilbert space.
We will now investigate the geometrical structure of the set of all possible term distributions. Let us first calculate the center f̄(x) = (1/N) Σ_ν f^(ν)(x) = 1/2 of all these distributions (at each position, exactly half of the N distributions contain the term). The squared distance of a distribution f^(ν) from this center is |c̄ − c^(ν)|² = ‖f̄ − f^(ν)‖² = ∫_0^L (1/2 − f^(ν)(x))² dx = L/4, since f^(ν)(x) is either 0 or 1. Hence |c̄ − c^(ν)| = √L/2 for all ν, and by Bessel's inequality the same bound holds for the coefficient vectors truncated to order n. Thus, the truncated vectors all lie within a sphere of radius
R_0 = √L / 2   (25)
in the (n+1)-dimensional Hilbert space. The center of this sphere is at c̄_n. If, as in the Fourier and Legendre cases, one of the expansion functions, say ϕ_0(x), is constant, the vector c̄ describing the constant center function has only a non-vanishing zero component: c̄ = c̄_n = (√L/2, 0, 0, . . .). Fig. 6 (right) shows this term sphere in n + 1 = 3 dimensions for a document of length L = 9 and the expansion in terms of Legendre polynomials.
The fact that all possible truncated coefficient vectors c (ν) n lie within a sphere whose radius and center are known is very useful for clustering analysis. First of all, it shows where in the Hilbert space to look for clusters. Secondly, assume one has found a cluster K = {k 1 , . . . , k q } of term distributions by some clustering algorithm (for an nth order truncation). The volume of this cluster can be estimated by calculating the standard deviation
R_K = [ (1/q) Σ_{i=1}^{q} (k_i − k̄)² ]^{1/2} = [ (1/(2q²)) Σ_{i,j=1}^{q} (k_i − k_j)² ]^{1/2}
(here k̄ is the center of the cluster) and approximating the cluster by a sphere of radius R_K. Since the volume of a sphere of radius R_K in n+1 dimensions is proportional to R_K^{n+1}, the cluster occupies approximately a fraction ξ = (R_K/R_0)^{n+1} = (2R_K/√L)^{n+1} of the theoretically available space. A cluster would then be considered as significant only if ξ ≪ 1. An analysis of this kind may be useful to generate an ontology of terms based on individual documents.
It has been conjectured that the use of quantum mechanical methods, in particular infinite-dimensional Hilbert spaces and projection operators, may be advantageous in IR [8]. The approach presented here goes into this direction, because constructing appropriate sets of orthogonal functions is a standard technique in quantum mechanics. Still, we emphasize that our approach is essentially classical, not quantum mechanical, since it does not use any of the interpretational subtleties of quantum mechanics.
Conclusions
In this paper, a new approach to improve document relevance evaluation using truncated Hilbert space expansions has been presented. The proposed approach is based on an abstract representation of term positions in a document collection which induces a measure of proximity between terms (semantic interaction range) and permits their direct and simple comparison. Based on this abstract representation, it is possible to shift the complexity of processing term-positional data to the indexing phase, permitting the use of term-positional information at query time without significantly affecting the response time of the system. Three applications for IR were discussed: (a) ranking optimization based on a user-defined term distribution function, (b) query expansion based on term-positional information, and (c) a cluster analysis approach for terms within documents.
There are several areas of future work. For example, (a) quantifying the effect of the abstract term positions representation in the index size, (b) measuring the effectiveness of the proposed clustering approach, and (c) studying objective functions in documents having homogeneous structures (forms) are some of the topics that should be investigated.
| 3,738 |
0910.2113
|
2140546287
|
Internet and graphs are very much related. The graphical structure of internet has been studied extensively to provide efficient solutions to routing and other problems. But most of these studies assume a central authority which controls and manages the internet. In the recent years game theoretic models have been proposed which do not require a central authority and the users are assumed to be routing their flows selfishly. The existence of Nash Equilibria, congestion and the amount of inefficiency caused by this selfish routing is a major concern in this field. A type of paradox in the selfish routing networks, Braess' Paradox, first discovered by Braess, is a major contributor to inefficiency. Several pricing mechanisms have also been provided which give a game theoretical model between users(consumers) and ISPs ( Internet Service Providers or sellers) for the internet. We propose a novel pricing mechanism, based on real world Internet network architecture, which reduces the severity of Braess' Paradox in selfish routing game theoretic networks. It's a pricing mechanism between combinatorial users and ISPs. We prove that Nash equilibria exists in this network and provide bounds on inefficiency . We use graphical properties of internet to prove our result. Several interesting extensions and future work have also been discussed.
|
Hayrapetyan et al. @cite_10 analyze along similar lines as we do, but for their cost function they assume a constant term for the per-unit flow price charged by ISPs. In real-world scenarios, however, the per-unit charge generally decreases as the flow required by the user increases, i.e. ISPs offer discounts for larger flows; our model captures this. They also do not analyze the effect of Braess' Paradox in their network. We show that in our model the severity of Braess' Paradox is reduced.
|
{
"abstract": [
"The success of the Internet is remarkable in light of the decentralized manner in which it is designed and operated. Unlike small scale networks, the Internet is built and controlled by a large number of disperate service providers who are not interested in any global optimization. Instead, providers simply seek to maximize their own profit by charging users for access to their service. Users themselves also behave selfishly, optimizing over price and quality of service. Game theory provides a natural framework for the study of such a situation. However, recent work in this area tends to focus on either the service providers or the network users, but not both. This paper introduces a new model for exploring the interaction of these two elements, in which network managers compete for users via prices and the quality of service provided. We study the extent to which competition between service providers hurts the overall social utility of the system."
],
"cite_N": [
"@cite_10"
],
"mid": [
"2144615311"
]
}
|
A real world network pricing game with less severe Braess' Paradox
|
The Internet has grown meteorically in the last few decades, and so has the amount of study associated with it. The graphical and combinatorial properties of the Internet have been studied in great detail, and several interesting graphical properties have been derived with experimental results [1]. The earlier studies assumed a central authority for the Internet for ease of analysis. But today's Internet is more of an autonomous system without any central authority, where users and ISPs act selfishly to maximize their own interests. Because of this behavior there have been growing concerns about QoS (Quality of Service), finding efficient routes, pricing mechanisms, etc. Since each user or ISP is selfish and works in its own interest without any concern for the overall efficiency of the system, a game-theoretic approach to this problem is well suited. Along these lines there has been a growing amount of research literature in theoretical computer science on the inefficiency caused by the selfishness of the agents (users and ISPs) of the system. Several models have been suggested [2,3]. Roughgarden et al. [4] study the inefficiency arising due to Braess' Paradox, first discovered by Braess [5] and later reported by Murchland [6]. To combat the ills arising from selfishness, such as congestion, several pricing mechanisms have been suggested in [7,8,9], but the effectiveness of these relies on the owner of the resources, i.e. the ISP. The ISPs are selfish and their goals may not align with social objectives of efficiency and QoS. There is also a vast amount of research which proposes pricing mechanisms so that the resources of the ISPs can be effectively sold to the users [10,11], keeping the selfish behavior in mind. But all of these either assume a constant cost function for the edges charged by ISPs, or analyze their models for cases where each user routes a negligible amount of flow for ease of analysis, or assume a kind of coordination between different users.
In selfish routing games the main aim of each user is to minimize its latency cost (delay); the user does not worry about the cost charged by the ISPs. Several studies [4,8] have analyzed this kind of game, giving bounds on the Price of Anarchy, which measures the inefficiency arising from selfishness. In network design games the main aim of the users is to obtain a network which lets them route their required flows under some pricing mechanism; congestion delays are not a concern there. This game has also been the focus of several works [2,3]. Here the job of the ISPs is just to provide the network at minimal price, generally a constant cost per edge, without the ISPs themselves having any selfish motives; only the users are selfish. Obviously none of the above-mentioned games captures the full complexity of the network. We provide a model which takes both the ISPs and the users to be selfish, and in which the user cost function has two components: the latency as well as the cost charged by the ISPs, which is not always constant. We give a model which captures a real-world, general network scenario, where the price of the edges charged by the ISPs varies with the amount of flow. The idea behind this assumption is that the per-unit flow cost charged by a seller decreases if the buyer buys more, i.e. buying in bulk lowers the per-unit cost. Our cost function also includes a latency factor, i.e. the delay caused by congestion on a network edge. In this model we assume combinatorial users (buyers) whose flows are not negligible: each user has a significant amount of flow. We also assume that the flow is non-splittable, which complicates the situation considerably. Our model can easily be extended to the negligible and splittable flow cases as well. We prove that the effect of Braess' Paradox is less severe in our model and give a better bound on the worst case of Braess' Paradox. We also prove that Nash equilibria exist in our model.
Model and Notations
Our network is a multicommodity flow network, i.e. it has more than one source-target pair. This model is similar to the model discussed in [12], with an added term in the cost function for the per-unit price charged by ISPs. Let us consider a directed graph G = (V,E), where V is the set of vertices and E is the set of edges. There is a set of k source and target (sink) pairs of vertices (s_1, t_1), (s_2, t_2), . . . , (s_k, t_k) and a set of k users who want to use this network for their flow between an (s_i, t_i) pair. Each (s_i, t_i) has a corresponding user i and vice versa. Each user i has a flow f_i > 0. Different players can have identical source-sink pairs. Π_i is used to denote the set of (s_i, t_i) paths of the network for a given i. Each user for an (s_i, t_i) pair picks a path P from the set Π_i for its flow, and thus f_P^i > 0 on that path and f_P^i = 0 for all other paths. Thus the strategy set for user i is Π_i. The condition that the flow is 0 for all other paths is termed non-splittable flow routing, and since the number of users is combinatorial, this setting is termed Atomic Selfish Routing [12]. A flow f is a feasible flow for this network if it corresponds to a strategy profile: for each player i, f_P^i equals r_i for exactly one s_i-t_i path P, where r_i is the total amount of flow player i has to send. The per-unit cost experienced by user i on edge e is denoted t_e(x_e)^i; note that t_e(x_e)^i will be 0 for users i who do not route their flow through edge e. The first component of the cost function t_e^i is c_e : R^+ → R^+, which depends on the total flow x on the edge. We consider only those c_e which are convex with respect to the flow x; for the time being let us take c_e(x) = x. The second component of the cost function t_e^i is u_e^i : R^+ → R^+. This function is the per-unit flow charge of the ISP for the edge. It is of the form
F(f_i)/f_i , where F(f_i)
is the charge for the total flow f i of the user i by the ISP and thus the per unit cost of routing the flow for the user i will be F (f i )/f i . You can note that the congestion term is independent of user i , because even if the user doesn't contribute to the congestion it confronts the congestion of the edge caused by all the users. Now we define the concept of equilibria in this model. As we know the users are selfish so they will all try to minimize their cost . Let us define the cost of a path P with respect to a flow f in terms of the sum of the costs for constituent
edges: t_P^i(f) = Σ_{e∈P} t_e^i(f_e) = Σ_{e∈P} ( c_e(f_e) + u_e(f_P^i) ). A flow f is an equilibrium flow if, for every player i and every pair P, P̃ of s_i-t_i paths with f_P^i > 0,
t_P^i(f) ≤ t_P̃^i(f̃)   (1)
where f̃ is the flow identical to f except that f̃_P^i = 0 and f̃_P̃^i = r_i. Now we define the social cost in our network. The social cost is the total cost experienced by all the users in the network, i.e.
SC(f) = Σ_{e∈E} c_e(f_e) f_e + Σ_{i=1}^{k} Σ_{e∈P_i} u_e(r_i) r_i   (2)
Informally, when we sum the costs of all k users in the network we arrive at equation (2). Next we define the Price of Anarchy of the network. This term is defined with respect to the social cost of the network and is used to measure the inefficiency arising in the network flow due to selfish users. It was first conceptualized by Papadimitriou et al. [2] and Anshelevich et al. [3]. Our definition is the same as that used by Roughgarden et al. in [12].
Definition 3. (Price of Anarchy) For a network instance (G, r, t) as above, the Price of Anarchy (POA) is the ratio of the social cost of the worst-case equilibrium flow f to that of the optimal flow f*. So
POA = \frac{SC(f)}{SC(f^*)}    (3)
Our Results
1. We show that for our model a pure Nash equilibrium always exists, even in the case of combinatorial users with non-splittable flows in real-world network scenarios.
2. We show that the severity of Braess' Paradox is reduced in our case. The previous bound on Braess' Paradox was 4/3, given by Roughgarden et al. [4]; our bound is 8/7 in the worst case. We also show a simple result: if the unit cost term dominates the congestion term in the cost function, then Braess' Paradox does not even arise in the original graph given by Braess [5], which is also the graph used by Roughgarden for the 4/3 bound.
3. We give a bound on the Price of Anarchy for our model.
Paper Organization
From here on, we first discuss the existence of pure equilibria in our model. In the next section we prove the reduced severity of Braess' Paradox. After that we give bounds on the Price of Anarchy for our model. In the last section we conclude our work with some open problems and opportunities for future work.
Pure Equilibria Existence In the Model
We prove the existence of pure equilibria in our model using a potential function. This method was suggested in [12]; our potential function is different from theirs. A potential function is a function which effectively captures the change in the cost of the network when any user deviates from a network state or flow condition. We use a discrete combinatorial potential function. The idea behind this combinatorial function is that it takes finitely many values, so a minimum among those values must exist. This minimum is what we are interested in. We construct our potential function so that it reflects the change experienced by a user deviating from the equilibrium, and hence we prove that a state of equilibrium is attained. In the following theorem we prove the existence of equilibria.
Theorem 1. (Equilibrium in the network) Let (G, r, t) be a network instance where every user i has an amount r_i to route and the cost function is t as defined in the earlier sections on the given graph G. Then (G, r, t) has at least one equilibrium flow, given that the congestion factor in t is an affine function.
Proof. Note that we take the congestion factor in t to be an affine function, as an affine function is a fair estimate of congestion in a real-world network. We design a potential function that captures our network model. As defined in the previous section, our cost function t has two components. Let us define it formally: the cost function t^i_e is the per-unit flow cost experienced by user i when routing r_i units of flow on edge e, which lies on the path P chosen by i out of Π_i: t^i_e = c_e(f_e) + u_e(r_i).
Here, as defined earlier, f_e = \sum_{i \in S_e} r_i,
where S_e is the set of users whose path has edge e. Now we define our potential function Φ as

\Phi = \sum_{e \in E} \Big( c_e(f_e) f_e + \sum_{i \in S_e} c_e(r_i) r_i + \sum_{i \in S_e} u_e(r_i) r_i \Big)    (4)
Having defined the potential function for our network, we use its features to prove our result. Since the network instance has finitely many users and each user has finitely many strategies, there are only finitely many flows and correspondingly finitely many values of our potential function. Say the potential function attains its minimum value at a flow f. We prove that this corresponds to an equilibrium flow, by contradiction. Suppose this flow f is not an equilibrium flow. That means that by switching its path some user i can strictly decrease its cost. Let P be the previous path of that user and P̃ the new path. The change in the cost of i is:

0 > t^i_{\tilde{P}}(\tilde{f}) − t^i_P(f)    (5)

Writing the affine congestion cost as c_e(x) = a_e x + b_e, Φ can also be written as

\Phi = \sum_{e \in E} \Big( 2 a_e \sum_{i \in S_e} r_i^2 + 2 a_e \sum_{i,j \in S_e, i \neq j} r_i r_j + 2 b_e \sum_{i \in S_e} r_i + \sum_{i \in S_e} u_e(r_i) r_i \Big)
Now the change ∆Φ in the potential function because of the change in user i's strategy is:
\Delta\Phi = \sum_{e \in \tilde{P} \setminus P} \big( 2 a_e r_i^2 + 2 a_e r_i f_e + 2 b_e r_i + u_e(r_i) r_i \big) − \sum_{e \in P \setminus \tilde{P}} \big( 2 a_e r_i^2 + 2 a_e r_i (f_e − r_i) + 2 b_e r_i + u_e(r_i) r_i \big)

= 2 r_i \Big( \sum_{e \in \tilde{P} \setminus P} \big( a_e (f_e + r_i) + b_e + u_e(r_i) \big) − \sum_{e \in P \setminus \tilde{P}} \big( a_e (r_i + f_e − r_i) + b_e + u_e(r_i) \big) \Big)

\Longrightarrow \Delta\Phi = 2 r_i \Big( \sum_{e \in \tilde{P} \setminus P} \big( c_e(f_e + r_i) + u_e(r_i) \big) − \sum_{e \in P \setminus \tilde{P}} \big( c_e(f_e) + u_e(r_i) \big) \Big)    (6)
The bracketed term on the right-hand side of this equation is the same as in equation (5). This means that ΔΦ < 0, which contradicts our assumption that f gives the minimum value of Φ. Thus f is an equilibrium flow for the instance (G, r, t).
⊓ ⊔
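The potential-function argument can also be checked numerically. The sketch below is an illustration only (the two-parallel-edge instance, the coefficients a_e, b_e and the constant per-unit charge u(r) = 1 are assumptions, not the authors' data): it computes Φ from equation (4) for affine latencies c_e(x) = a_e x + b_e and verifies that a unilateral path switch that lowers a user's cost also lowers Φ.

# Potential Phi = sum_e [ c_e(f_e) f_e + sum_{i in S_e} c_e(r_i) r_i + sum_{i in S_e} u_e(r_i) r_i ]
# with affine latencies c_e(x) = a_e * x + b_e (equation (4)).

def c(edge, x, coeff):
    a, b = coeff[edge]
    return a * x + b

def u(r):                       # per-unit ISP charge; a constant placeholder (assumption)
    return 1.0

def phi(paths, demands, coeff):
    """paths[i] is the set of edges used by user i, demands[i] = r_i."""
    edges = {e for p in paths for e in p}
    total = 0.0
    for e in edges:
        users_on_e = [i for i, p in enumerate(paths) if e in p]
        f_e = sum(demands[i] for i in users_on_e)
        total += c(e, f_e, coeff) * f_e
        total += sum(c(e, demands[i], coeff) * demands[i] for i in users_on_e)
        total += sum(u(demands[i]) * demands[i] for i in users_on_e)
    return total

def user_cost(i, paths, demands, coeff):
    flow = {}
    for j, p in enumerate(paths):
        for e in p:
            flow[e] = flow.get(e, 0.0) + demands[j]
    return sum(c(e, flow[e], coeff) + u(demands[i]) for e in paths[i])

# Two parallel edges between the same source and sink; user 0 considers switching.
coeff = {'top': (1.0, 0.0), 'bottom': (0.5, 1.0)}   # c_top(x) = x, c_bottom(x) = 0.5x + 1
demands = [1.0, 1.0]
before = [{'top'}, {'top'}]
after = [{'bottom'}, {'top'}]                        # user 0 deviates to the bottom edge

print(user_cost(0, after, demands, coeff) - user_cost(0, before, demands, coeff))  # negative: the move helps
print(phi(after, demands, coeff) - phi(before, demands, coeff))                    # Phi decreases too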
Braess' Paradox with Reduced Severity
In this section we show that our network model reduces the severity of Braess' Paradox. Braess' Paradox is most conspicuous in the graph of fig(2), which is used by Roughgarden et al. [12] to derive the 4/3 bound. Assume that the total flow of all the users is one unit and that there are N users, where N is very large. In the graph of fig(2a), at equilibrium the flow is evenly distributed between the two paths, i.e. each path carries 1/2 unit of flow, and the cost experienced by each user i is 3/2 per unit flow. Intuitively, the addition of the zero-cost directed edge v → w should decrease the cost; but once the edge is added, every user prefers the path u → v → w → t, because at any stage, if the flow on the path u → v → w → t is x, a user experiences a cost of 2x per unit there, which is less than the 1 + x per unit on the other paths. Since every user is selfish, each one migrates to this path and thus x becomes 1, so every user ends up paying 2 per unit flow instead of 3/2. This is what happens in real-world network pricing. In our model we take the congestion cost term c_e(x) to be a linear function, i.e. c_e(f_e) = f_e. First, let us have a look at the unit cost function u_e(x) = F(x)/x. Since the whole network is normalized (the total flow is 1 and the cost of the edges s → w and v → t is 1), there is no loss of generality in assuming that for a small flow ε, F(ε) is ε, i.e. lim_{ε→0} F(ε) = ε, and that F(ε + δε) < ε + δε, i.e.

\lim_{x \to 0} u(x) = \lim_{x \to 0} \frac{F(x)}{x} = 1.

We have plotted the graphs of some such functions, like sin x and log(1 + x), against the graph y = x in fig(4), to compare how the marginal cost of an additional flow δX decreases as X increases. So we have

x \ge 0 \;\Longrightarrow\; u(x) = \frac{F(x)}{x} \le 1.    (7)

Moreover,

\frac{1}{N} \ge 0 \;\Longrightarrow\; \frac{F_e(1/N)}{1/N} \le 1 \;\Longrightarrow\; \frac{F_e(1/N)}{1/N} + x \le 1 + 1 = 2,

and since x ≤ 1,

2\Big( \frac{F_e(1/N)}{1/N} + x \Big) \le 2 + \frac{F_e(1/N)}{1/N} + x \;\Longrightarrow\; 2 \cdot \frac{x + \frac{F_e(1/N)}{1/N}}{2} \le 1 + \frac{x + \frac{F_e(1/N)}{1/N}}{2}.
That is the result we wanted: user i prefers the path u → v → w → t. But since every user is selfish, each one chooses this path, which increases the cost. Thus at equilibrium the cost to each user is (1 + F_e(1/N)/(1/N)), and the ratio between the new cost and the old cost, i.e. the bound ρ, is
\rho = \frac{1 + \frac{F_e(1/N)}{1/N}}{1 + \frac{\frac{1}{2} + \frac{F_e(1/N)}{1/N}}{2}} = \frac{4 + 4\,\frac{F_e(1/N)}{1/N}}{5 + 2\,\frac{F_e(1/N)}{1/N}}    (8)
Since F_e(1/N)/(1/N) ≤ 1, the worst-case value of ρ is (4 + 4)/(5 + 2) = 8/7. ⊓⊔ Now let us take a normalized generic t_e, that is, t_e = c_1 c_e + c_2 u_e with c_1 + c_2 = 1 and c_1, c_2 ∈ [0, 1]; putting c_1 = c_2 = 1/2 we recover our original cost function. After doing the calculations we find the corresponding ρ for this case as well [12].
⊓ ⊔
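A quick numerical check of the bound in equation (8): the sketch below is illustrative (the particular charge functions F are assumptions; any F with F(x)/x ≤ 1 works) and confirms that ρ is increasing in u = F_e(1/N)/(1/N) and peaks at 8/7 when u = 1.

import math

def rho(u):
    """Equation (8): ratio of equilibrium costs after/before adding the zero-cost edge."""
    return (4.0 + 4.0 * u) / (5.0 + 2.0 * u)

# u = F_e(1/N)/(1/N) for a few admissible charge functions F with F(x)/x <= 1.
N = 10_000
for F in (math.sin, lambda x: math.log(1.0 + x), lambda x: x):
    u = F(1.0 / N) / (1.0 / N)
    print(round(u, 6), round(rho(u), 6))

print(rho(1.0), 8.0 / 7.0)   # worst case: u = 1 gives exactly 8/7 ~ 1.1428, below the classical 4/3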
Price Of Anarchy bound
The Price of Anarchy bound for our network is (3 + √5)/2 ≈ 2.618. It is the same as the bound given for Atomic Selfish Routing in the book [12]. See the Appendix for the proof.
2. The effect of this model on undirected graphs.
3. It will be interesting to see how addition of another term say QOS(Quality of Service) affects the equations.
where r_i is user i's flow. This equation comes from the equilibrium flow condition, definition (1). Now, given the same network instance and flows as for equation (9), combining the above result with equation (10) we arrive at the following inequality:
\frac{SC(f)}{SC(f^*)} − 1 \le \sqrt{\frac{SC(f)}{SC(f^*)}}    (11)
As we know, SC(f)/SC(f*) is our POA as defined in definition (1.2). Thus solving inequality (11) gives us
| 3,226 |
0910.1255
|
2086153481
|
This paper presents a new method and a constraint-based objective function to solve two problems related to the design of optical telecommunication networks, namely the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP). These network topology problems can be represented as a graph partitioning with capacity constraints as shown in previous works. We present here a new objective function and a new local search algorithm to solve these problems. Experiments conducted in C
|
These two problems have been well studied. It has been proven that they are both @math -hard ( @cite_4 , @cite_1 ).
|
{
"abstract": [
"Motivated by a problem arising in the design of telecommunications networks using the SONET standard, we consider the problem of covering all edges of a graph using subgraphs that contain at most k edges with the objective of minimizing the total number of vertices in the subgraphs. We show that the problem is -hard when k ≥ 3 and present a linear-time -approximation algorithm. For even k values, we present an approximation scheme with a reduced ratio but with increased complexity. © 2002 Wiley Periodicals, Inc.",
"We consider the problem of interconnecting a set of customer sites using bidirectional SONET rings of equal capacity. Each site is assigned to exactly one ring and a special ring, called the federal ring, interconnects the other rings together. The objective is to minimize the total cost of the network subject to a ring capacity limit where the capacity of a ring is determined by the total bandwidth required between sites assigned to the same ring plus the total bandwidth request between these sites and sites assigned to other rings.We present exact, integer-programming based solution techniques and fast heuristic algorithms for this problem. We compare the results from applying the heuristic algorithms with those produced by the exact methods for real-world as well as randomly generated problem instances. We show that two of the heuristics find solutions that cost at most twice that of an optimal solution. Empirical evidence indicates that in practice the algorithms perform much better than their theoretical bound and often find optimal solutions."
],
"cite_N": [
"@cite_1",
"@cite_4"
],
"mid": [
"2139721783",
"2002547394"
]
}
|
Sonet Network Design Problems
|
This paper presents a new algorithm and an objective function to solve two real-world combinatorial optimization problems from the field of network design. These two problems, the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP), have been shown NP-hard and have already been solved by combinatorial optimization techniques. This work extends the seminal ideas introduced by R. Aringhieri and M. Dell'Amico in 2005 in [2]. This paper is organized as follows. In the remainder of this section we introduce the two problems we have worked on and the local search techniques which have been used to solve them. We also introduce constrained-optimization models for the two problems. We then present the previous works on SRAP and IDP in Section 2. Section 3 describes the key ingredients necessary to implement the local search algorithms. Finally, the results are shown in Section 4.
Optical network topologies
During the last few years the number of users of internet-based applications has increased exponentially, and so has the demand for bandwidth. To enable fast transmission of large quantities of data, fiber optic technology is the current solution in telecommunications.
The Synchronous Optical NETwork (SONET) in North America and Synchronous Digital Hierarchy (SDH) in Europe and Japan are the standard designs for fiber optics networks. They have a ring-based topology, in other words, they are a collection of rings.
Rings Each customer is connected to one or more rings, and can send, receive and relay messages using an add-drop multiplexer (ADM). Two bidirectional links connect each customer to his neighboring customers on the ring. In a bidirectional ring the traffic between two nodes can be sent clockwise or counterclockwise. This topology allows an enhanced survivability of the network: if a failure occurs on a link, the traffic originally transmitted on this link is sent on the surviving part of the ring. The traffic volume on any ring is limited by the link capacity, called B. The cost of this kind of network is defined by the cost of the different components used in it.
There are different ways to represent a network. In this paper, we consider two network topologies described by R. Aringhieri and M. Dell'Amico in 2005 in [2]. In both topologies the goal is to minimize the cost of the network while guaranteeing that the customers' demands, in terms of bandwidth, are satisfied.
The models associated with these topologies are based on graphs. Given an undirected graph G = (V, E), V = {1, . . . , n}, the set of nodes represents the customers and E, the set of edges, stands for the customers' traffic demands. A communication between two customers u and v corresponds to the weighted edge (u, v) in the graph, where the weight d_uv is the fixed traffic demand. Note that d_uv = d_vu, and that d_uu = 0.
First topology (SRAP)
In the first topology, each customer is connected to exactly one ring. All of these local rings are connected with a device called a digital cross connector (DXC) to a special ring, called the federal ring. The traffic between two rings is transmitted over this special ring. Like the other rings, the federal ring is limited by the capacity B. Because DXCs are much more expensive than ADMs, we want to have the smallest possible number of them. As there is a one-to-one relationship between rings and DXCs, minimizing the number of rings is equivalent to minimizing the number of DXCs. The problem associated with this topology is called the SONET Ring Assignment Problem (SRAP) with capacity constraint. Figure 1 shows an example of this topology. Model This topology is modeled by a decomposition of the set of nodes V into a partition, each subset of the partition representing a particular ring. Assigning a node to a subset of the partition in the model is then equivalent to assigning a customer to a ring.
Formally, let V_1, V_2, . . . , V_k be a partitioning of V into k subsets. Each customer in the subset V_i is assigned to the i-th local ring. As each customer is connected with an ADM to one and only one ring, and each local ring is connected to the federal ring with a DXC, exactly |V| ADMs and k DXCs are used in the corresponding SRAP network.
Hence, minimizing the number of rings is equivalent to minimizing k subject to the following constraints:
\sum_{u \in V_i} \sum_{v \in V, v \neq u} d_{uv} \le B, \quad \forall i = 1, \dots, k    (1)

\sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \sum_{u \in V_i} \sum_{v \in V_j} d_{uv} \le B    (2)
Constraint (1) imposes that the total traffic routed on each ring does not exceed the capacity B. In other words, for a given ring i, it forces the total traffic demand of all the customers connected to this ring to be lower than or equal to the bandwidth. Constraint (2) forces the load of the federal ring to be less than or equal to B; it computes the sum of the traffic demands between all the pairs of customers connected to different rings. Figure 2 illustrates the relation between the node partitioning model and the first topology, SRAP. We can see that, because the nodes 1, 3, 5 and 6 are in the same partition, they are connected to the same ring. Similarly, the nodes 2, 4 and 7 are on the same ring. For this problem we can easily compute a lower bound k_lb, introduced in [6]. We want to know the minimum number of partitions needed to route all the traffic. Reasoning on the total traffic amount, if we sum all the traffic demands of the graph and divide the sum by the bandwidth B, we trivially obtain a minimum for the number of rings, that is, a lower bound on the number of partitions. Moreover, we cannot have a fractional number of partitions, which is why we round this fraction up:
k_{lb} = \left\lceil \frac{\sum_{u=1}^{n-1} \sum_{v=u+1}^{n} d_{uv}}{B} \right\rceil
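A minimal sketch (not from the paper; the demand matrix and capacity are made-up values) of how constraints (1)-(2) and the lower bound k_lb can be evaluated for a candidate node partition.

from math import ceil

def srap_feasible(partition, d, B):
    """partition: list of sets of nodes; d[u][v]: symmetric demand matrix; B: ring capacity."""
    nodes = [u for part in partition for u in part]
    # Constraint (1): total traffic touching each local ring is at most B.
    for part in partition:
        load = sum(d[u][v] for u in part for v in nodes if v != u)
        if load > B:
            return False
    # Constraint (2): inter-ring traffic carried by the federal ring is at most B.
    federal = sum(d[u][v]
                  for i, pi in enumerate(partition)
                  for pj in partition[i + 1:]
                  for u in pi for v in pj)
    return federal <= B

def k_lower_bound(d, B):
    """k_lb = ceil( sum_{u<v} d_uv / B )."""
    n = len(d)
    total = sum(d[u][v] for u in range(n) for v in range(u + 1, n))
    return ceil(total / B)

# Toy instance with 4 customers and capacity B = 12 (illustrative values).
d = [[0, 3, 0, 2],
     [3, 0, 4, 0],
     [0, 4, 0, 1],
     [2, 0, 1, 0]]
B = 12
print(k_lower_bound(d, B))                     # ceil(10/12) = 1
print(srap_feasible([{0, 1, 2, 3}], d, B))     # single ring: load 20 > 12 -> False
print(srap_feasible([{0, 1}, {2, 3}], d, B))   # two rings with loads 12 and 8, federal load 6 -> True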
Second topology (IDP)
In the second topology, customers can be connected to more than one ring. If two customers want to communicate, they have to be connected to the same ring. In this case, the DXCs are no longer needed and neither is the federal ring. However, more ADMs are used than in the first topology. Here the most expensive component is the ADM, although its price has dropped significantly over the past few years, so it is important in this topology to have the smallest number of ADMs. This problem is called the Intra-ring Synchronous Optical Network Design Problem (IDP). Figure 3 illustrates this topology. Model Contrary to the SRAP problem, there is no need to assign each customer to a particular ring, because customers can be connected to several rings. Here the model is based on a partition of the edges of the graph, where a subset of the partition corresponds to a ring.
Formally, let E_1, E_2, . . . , E_k be a partitioning of E into k subsets and Nodes(E_i) be the set of endpoint nodes of the edges in E_i. Each subset of the partition corresponds to a ring; in other words, each customer in Nodes(E_i) is linked to the i-th ring. In the corresponding IDP network, there are

\sum_{i=1}^{k} |Nodes(E_i)| ADMs and no DXC.
Hence, minimizing the number of ADMs is equivalent to minimizing
\sum_{i=1}^{k} |Nodes(E_i)| \quad subject to \quad \sum_{(u,v) \in E_i} d_{uv} \le B, \quad \forall i = 1, \dots, k    (3)
Constraint (3) imposes that the traffic in each ring does not exceed the capacity B. Figure 4 shows the relation between the edge partitioning and the second topology. If all the edges of a node are in the same partition, this node is connected to a single ring. We can see, for example, that node 4 has all its edges in the same partition and is therefore connected to only one ring. In contrast, the edges of node 2 are in two different partitions, so it is connected to two rings. The SRAP problem can thus be seen as a node partitioning problem, whereas IDP is an edge partitioning problem on the graph described above, subject to capacity constraints. These graph partitioning problems were introduced in [6] and [7].
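The IDP objective and constraint (3) are equally easy to evaluate; the following sketch (illustrative, not the authors' code; edge demands are invented) counts ADMs for an edge partition and checks the ring capacity.

def adm_count(edge_partition):
    """Objective: sum_i |Nodes(E_i)|, where Nodes(E_i) are the endpoints of the edges in ring i."""
    return sum(len({v for e in ring for v in e}) for ring in edge_partition)

def idp_feasible(edge_partition, demand, B):
    """Constraint (3): the total demand routed on each ring must not exceed B."""
    return all(sum(demand[e] for e in ring) <= B for ring in edge_partition)

# Toy instance: edges are frozensets of endpoints, demand is per edge (illustrative values).
demand = {frozenset({1, 2}): 4, frozenset({2, 3}): 5, frozenset({1, 4}): 3, frozenset({3, 4}): 6}
B = 10

one_ring_per_edge = [[e] for e in demand]             # always feasible, but uses many ADMs
two_rings = [[frozenset({1, 2}), frozenset({2, 3})],  # ring 1: nodes {1, 2, 3}
             [frozenset({1, 4}), frozenset({3, 4})]]  # ring 2: nodes {1, 3, 4}

print(adm_count(one_ring_per_edge), idp_feasible(one_ring_per_edge, demand, B))  # 8 ADMs, True
print(adm_count(two_rings), idp_feasible(two_rings, demand, B))                  # 6 ADMs, True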
Both of these problems are NP-hard (see O. Goldschmidt, A. Laugier and E. Olinick in 2003, [6], and O. Goldschmidt, D. Hochbaum, A. Levin and E. Olinick in 2003, [7], for details). The principal constraint, the load constraint, is similar to a capacity constraint, yet different: a capacity constraint holds on the variables in the sum, while the load constraint holds on the variables below the sum. The question is how to choose the d_uv (which are data) that count towards the load.
Brief introduction to Local Search
In order to solve these two combinatorial optimization problems efficiently and quickly, we decided to use local search instead of an exact algorithm. Indeed, it allows an efficient search among the candidate solutions by performing steps from one solution to another.
Principles Local search is a metaheuristic based on the iterative improvement of an objective function. It has proved very efficient on many combinatorial optimization problems, such as the Maximum Clique Problem (L. Cavique, C. Rego and I. Themido in 2001, [9]) or the Graph Coloring Problem (J.P. Hansen and J.K. Hao in 2002, [10]). It can be used on problems formulated either as mere optimization problems or as constrained optimization problems, where the goal is to optimize an objective function while respecting some constraints. Local search algorithms perform local moves in the space of candidate solutions, called the search space, trying to improve the objective function, until a solution deemed optimal is found or a time bound is reached. Defining the neighborhood graph and the method to explore it are two of the key ingredients of local search algorithms.
The approach for solving combinatorial optimization problems with local search is very different from the systematic tree search of constraint and integer programming. Local search belongs to the family of metaheuristic algorithms, which are incomplete by nature and cannot prove optimality. However, on many problems it will isolate an optimal or high-quality solution in a very short time: local search sacrifices optimality guarantees for performance. In our case, we can compute the lower bound to either prove that the obtained solution is optimal or estimate its optimality, hence local search is well suited.
Basic algorithm A local search algorithm starts from a candidate solution and then iteratively moves to a neighboring solution. This is only possible if a neighborhood relation is defined on the search space. Typically, for every candidate solution, we define a subset of the search space to be its neighborhood. Moves are performed from neighbors to neighbors, hence the name local search. The basic principle is to choose among the neighbors the one with the best value of the objective function. The problem is then that the algorithm can get stuck in local optima. Metaheuristics, such as Tabu Search, are added to avoid this. In Tabu Search, the last t visited configurations are left out of the search (t being a parameter of the algorithm): this ensures that the algorithm can escape local optima, at least at order t. A pseudo-code is given in Algorithm 1.
Termination of local search can be based on a time bound. Another common choice is to terminate when the best solution found by the algorithm has not been improved in a given number of iterations. Local search algorithms are typically incomplete algorithms, as the search may stop even if the best solution found by the algorithm is not optimal. This can happen even if termination is due to the impossibility of improving the solution, as the optimal solution can lie far from the neighborhood of the solutions crossed by the algorithms.
Choose or construct an initial solution S_0 ;
S ← S_0 ;                             /* S is the current solution */
S* ← S_0 ;                            /* S* is the best solution so far */
bestValue ← objValue(S_0) ;           /* bestValue is the evaluation of S* */
T ← ∅ ;                               /* T is the Tabu list */
while termination criterion not satisfied do
    N(S) ← all the neighboring solutions of S ;        /* Neighborhood exploration */
    S ← a solution in N(S) minimizing the objective ;
    if objValue(S) < bestValue then                    /* The solution found is better than S* */
        S* ← S ;
        bestValue ← objValue(S) ;
    end
    Record a tabu for the current move in T (delete the oldest entry if necessary) ;
end
Algorithm 1: Tabu Search
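As a concrete illustration of Algorithm 1, here is a compact Python sketch of Tabu Search over SRAP-like node assignments. It is an assumption-laden toy, not the COMET implementation: the move is "reassign one node to another ring", the objective is a z_5-style "number of used rings plus capacity violations", and the tabu attribute is the moved node.

import random

def tabu_search(nodes, n_rings, objective, iters=200, tenure=7, seed=0):
    rng = random.Random(seed)
    sol = {v: rng.randrange(n_rings) for v in nodes}      # initial assignment node -> ring
    best, best_val = dict(sol), objective(sol)
    tabu = {}                                             # node -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for v in nodes:                                   # neighborhood: move one node to another ring
            for ring in range(n_rings):
                if ring != sol[v]:
                    neigh = dict(sol); neigh[v] = ring
                    val = objective(neigh)
                    # aspiration: a tabu move is allowed if it improves on the best solution
                    if tabu.get(v, -1) < it or val < best_val:
                        candidates.append((val, v, ring))
        if not candidates:
            break
        val, v, ring = min(candidates)                    # best admissible neighbor
        sol[v] = ring
        tabu[v] = it + tenure                             # forbid touching v again for `tenure` iterations
        if val < best_val:
            best, best_val = dict(sol), val
    return best, best_val

# Toy objective in the spirit of z_5: used rings + capacity violations (all values are assumptions).
demand = {0: 4, 1: 4, 2: 3, 3: 3, 4: 2}
B = 8
def objective(sol):
    loads = {}
    for v, ring in sol.items():
        loads[ring] = loads.get(ring, 0) + demand[v]
    return len(loads) + sum(max(0, load - B) for load in loads.values())

print(tabu_search(list(demand), 3, objective))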
COMET
COMET is an object-oriented language created by Pascal Van Hentenryck and Laurent Michel. It has a constraint-based architecture that makes it easy to use when implementing local search algorithms and, more importantly, constraint-based local search algorithms (see [1] for details).
Moreover, it has a rich modeling language, including invariants, and a rich constraint language featuring numerical, logical and combinatorial constraints. Constraints and objective functions are differentiable objects maintaining the properties used to direct the exploration of the search graph. The constraints maintain their violations and the objectives their evaluation. One of its most important particularities is that differentiable objects can be queried to determine incrementally the impact of local moves on their properties.
As we can see in constraint (1), the sums are over data (the d_uv) and are determined by the variables (u ∈ V_i, v ∈ V, v ≠ u). We rely on COMET's built-in invariants to define a constraint representing the load.
Greedy algorithms for SRAP
In [6] the SRAP problem is considered. The authors propose three greedy algorithms with different heuristics: the edge-based, the cut-based and the node-based. The first two algorithms start by assigning each node to a different ring. At each iteration they reduce the number of rings by merging two rings V_i and V_j if V_i ∪ V_j is a feasible ring for the capacity constraint. In the edge-based heuristic, the two rings joined by the maximum-weight edge are merged, while in the cut-based heuristic, the two rings with the maximum total weight of the edges having one endpoint in each of them are merged. Algorithm 2 shows the pseudo-code for the edge-based heuristic.
F ← E ;                                  /* Initialize the set of edges that have not been used yet */
∀v ∈ V : ring(v) ← v ;                   /* Assign each node to a different ring */
while F ≠ ∅ do                           /* There are still some edges that have not been used */
    Choose a maximum capacity edge (u, v) ∈ F ;
    i ← ring(u), j ← ring(v) ;
    if V_i ∪ V_j is a feasible ring then /* Merging the rings gives a feasible ring */
        ∀v ∈ V_j : ring(v) ← i ;
        F ← F \ {(x, y) | ring(x) = i, ring(y) = j} ;
    else
        F ← F \ {(u, v)} ;
    end
end
Algorithm 2: Edge-Based Heuristic

Given a value k, the node-based heuristic starts by randomly assigning a node to each of the k rings. At each iteration it first chooses the ring V_i with the largest unused capacity, then the unassigned node u with the largest traffic with the nodes in V_i. Finally it adds u to the ring V_i, disregarding the capacity constraint. The pseudo-code for this heuristic is shown in Algorithm 3. The node-based heuristic is run ten times. At each run, if a feasible solution is found, the corresponding value of k is kept and the next run takes k − 1 as input. The idea behind this is to try and improve the objective at each run.
U ← V ;                                  /* Initialize the set of nodes that have not been used yet */
for i = 1 to k do                        /* Assign k random nodes to the k partitions */
    Choose u ∈ U ; V_i ← u ; U ← U \ {u} ;
end
while U ≠ ∅ do                           /* There are some unused nodes */
    Choose a minimum capacity ring V_i ;
    Choose u ∈ U to maximize \sum_{v ∈ V_i} d_{uv} ;
    ring(u) ← V_i ; U ← U \ {u} ;        /* Assign u to V_i */
end
Algorithm 3: Node-Based Heuristic
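To make the merging idea of Algorithm 2 concrete, here is a Python sketch (illustrative; for brevity only constraint (1), the per-ring load, is checked when testing a merge, which is an assumption rather than the authors' exact feasibility test).

def edge_based(d, B):
    """Greedy SRAP heuristic: merge the two rings joined by the heaviest remaining edge when feasible."""
    n = len(d)
    ring = list(range(n))                                  # each node starts in its own ring
    edges = sorted(((d[u][v], u, v) for u in range(n) for v in range(u + 1, n) if d[u][v] > 0),
                   reverse=True)
    for _, u, v in edges:                                  # heaviest edge first
        i, j = ring[u], ring[v]
        if i == j:
            continue
        merged = [a for a in range(n) if ring[a] in (i, j)]
        load = sum(d[a][b] for a in merged for b in range(n) if b != a)
        if load <= B:                                      # constraint (1) for the merged ring
            for a in merged:
                ring[a] = i
    return ring

d = [[0, 3, 0, 2],
     [3, 0, 4, 0],
     [0, 4, 0, 1],
     [2, 0, 1, 0]]
print(edge_based(d, B=12))    # ring labels per node; equal labels mean the same ring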
To test these heuristics, the authors randomly generated 160 instances. The edge-based and the cut-based heuristics are run first. If they find a feasible solution and obtain a value for k, the node-based heuristic is then run with the smallest value obtained for k as input. If they do not, the node-based heuristic takes as input a random value from the range [k_lb, |V|], where k_lb is the lower bound described previously.
MIP and Branch and Cut for IDP
A special case of the IDP problem where all the edges have the same weight is studied in [7]. This special case is called the K-Edge-Partitioning problem: given a simple undirected graph G = (V, E) and a value k < |E|, we want to find a partitioning of E, {E_1, E_2, . . . , E_l}, such that ∀i ∈ {1, . . . , l}, |E_i| ≤ k. The authors present two linear-time approximation algorithms with fixed performance guarantees. Y. Lee, H. Sherali, J. Han and S. Kim in 2000 ([8]) studied the IDP problem with an additional constraint such that for each ring i, |Nodes(E_i)| ≤ R. The authors present a mixed-integer programming model for the problem and develop a branch-and-cut algorithm. They also introduce a heuristic to generate an initial feasible solution, and another one to improve the initial solution. To initialize a ring, the heuristic first adds the node u with the maximum graph degree with respect to unassigned edges, and then adds to the partition the edge [u, v] such that the graph degree of v is maximum. It iteratively grows the partition by choosing a node such that the total traffic does not exceed the limit B. A set of 40 instances was generated to test these heuristics and the branch-and-cut.
Local Search for SRAP and IDP
More recently, these two problems have been studied in [2]. We saw previously that local search requires a neighborhood to be defined in order to choose the next solution. The authors of [2] use the same neighborhood for all of their metaheuristics. It tries to move an item x from a partition P_1 to another partition P_2. The authors also consider the neighborhood obtained by swapping two items x and y from two different partitions P_1 and P_2, but instead of trying all the pairs of items, it only tries to swap the two items if the solution resulting from the assignment of x to the partition P_2 is unfeasible.
In order to compute a starting solution for the IDP problem, the authors describe four different heuristics. The first heuristic introduced in [2] orders the edges by decreasing weight; at each iteration it tries to assign the heaviest edge not already assigned to the ring with the smallest residual capacity with respect to the capacity constraint. If no assignment is possible, the current edge is assigned to a new ring. The second one sorts the edges by increasing weight and tries to assign the current edge to the current ring if the capacity constraint is respected; otherwise the ring is no longer considered and a new ring is initialized with the current edge.
The two other methods described in [2] are based on the idea that, to save ADMs, a good solution should have very dense rings. They are both greedy and rely on a clique algorithm. In graph theory, a clique in an undirected graph G = (V, E) is a subset of the vertex set C ⊆ V such that for every two vertices in C there exists an edge connecting the two. Finding a clique is not that easy; one way to do it is to use a "Union-Find" strategy: find two cliques A and B such that each node in A is adjacent to each node in B, then merge the two cliques (Union). The associated heuristic starts by considering each node to be a clique of size one, and merges two cliques into a larger clique until no more merges are possible.
The third method, Clique-BF, iteratively selects a clique of unassigned edges with total traffic less than or equal to B. It then assigns it to the ring that minimizes the residual capacity and, if possible, preserves feasibility. If both are impossible, it places it in a new ring. Algorithm 4 shows the pseudo-code associated with this heuristic. The last algorithm, Cycle-BF, is like the previous method, but instead of looking for a clique at each iteration it tries to find a cycle with as many chords as possible.
They also introduce four objective functions, one of which depends on the current and the next status of the search. Let z_0 be the basic objective function, counting the number of rings of a solution for SRAP and the total number of ADMs for IDP, and let BN be the highest load of a ring in the current solution.
U ← E ; r ← 0 ;
while U ≠ ∅ do
    Heuristically find a clique C ⊂ U such that weight(C) ≤ B ;
    /* Search for a ring such that the weight of the ring plus the weight of the clique
       does not exceed B and is the biggest possible */
    j ← min{ B − weight(E_i) − weight(C) : i ∈ {1, . . . , k}, B − weight(E_i) − weight(C) ≥ 0 } ;
    if j = null then
        r++ ; j ← r ;
    end
    E_j ← E_j ∪ C ; U ← U \ C ;
end
Algorithm 4: Clique-BF
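The ring-selection step of Algorithm 4 (a best fit over residual capacities) is the part that is easiest to get wrong; a small sketch is given below, with an assumed list of ring loads for illustration.

def best_fit_ring(ring_loads, clique_weight, B):
    """Return the index of the ring with the smallest non-negative residual capacity
    after adding the clique, or None if the clique fits in no existing ring."""
    best, best_residual = None, None
    for i, load in enumerate(ring_loads):
        residual = B - load - clique_weight
        if residual >= 0 and (best_residual is None or residual < best_residual):
            best, best_residual = i, residual
    return best

ring_loads = [30, 42, 15]          # assumed current loads
B = 48
print(best_fit_ring(ring_loads, clique_weight=10, B=B))   # ring 0 (residual 8) beats ring 2 (residual 23)
print(best_fit_ring(ring_loads, clique_weight=40, B=B))   # None: open a new ring, as in Algorithm 4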
z_1 = z_0 + max{0, BN − B}

z_2 = z_1 + α · RingLoad(r) if the last move has created a new ring r, and z_2 = z_1 otherwise

z_3 = z_0 · B + BN

z_4 = z_{4a} = z_0 · B + BN (= z_3)

The first function z_1 minimizes the basic function z_0. When BN > B, it also penalizes the unfeasible solutions by taking into account only one ring, the one with the highest overload. In addition to the penalty for the unfeasible solutions, z_2 penalizes the moves that increase the number of rings. Function z_3 encourages solutions with small z_0, while among all the solutions with the same value of z_0 it prefers the ones in which the rings have the same loads. The last objective function z_4 is an adaptive technique that modifies the evaluation according to the status of the search. It is a variable objective function having different expressions for different transitions from the current status to the next one.
Our work
In this section we present the different ingredients needed to implement the constraint-based local search algorithms for SRAP and IDP. First we introduce the starting solution, then the neighborhoods and the objective functions. Finally we present the different local search algorithms.
Starting solution
Most of the time, local search starts from a random initial solution. However, we have tested other possibilities and two other options proved to be more efficient.
The best initialization method assigns all the items, nodes for SRAP or edges for IDP, to the same partition. This solution is certainly unfeasible, as all the traffic is on only one ring. This biases the search towards solutions with a minimum value for the cost and a very bad value for the capacity constraints' violations. Astonishingly, this is the option that gave us the best results on large instances.
We had good confidence in another one, which first computes the lower bound k_lb (described in Section 2) and randomly assigns all the items to exactly k_lb partitions. The idea was to let the local search reduce the number of violations. This starting solution was good on small instances and not so good on large ones. The same held for a random solution, which corresponds, for these problems, to a solution where all the items are randomly assigned to a partition.
Neighborhoods
In a generic partitioning problem there are usually two basic neighborhoods. From a given solution, we can move an object from a subset to another subset or swap two objects assigned to two different subsets. For SRAP a neighboring solution is produced by moving a node from a ring to another (including a new one) or by swapping two nodes assigned to two different rings. The same kind of neighborhood can be used for IDP: moving an edge from a ring to another or swapping two edges.
In some cases it is more efficient to restrict the neighborhood to the feasible space. We have tested different variants of the basic neighborhood applying this idea, by choosing the item to move from the worst partition (with respect to the capacity constraint) and even by assigning it to the partition with the lowest load. However, this appears to be less efficient than the basic neighborhood. As will be seen later, it seems that on these problems it is necessary to keep the search as broad as possible.
Objective function
We have compared the four objective functions described in [2] (see Section 2) with a new one we have defined: z_5.
z_5 = z_0 + \sum_{p \in partitions} violations(p), where
partitions are all the rings (in the case of the SRAP problem the federal ring is also included),
violations(p) = capacity(p) − B if the load of p exceeds B, and 0 otherwise.
This objective function minimizes the basic function z_0 and penalizes the unfeasible solutions but, contrary to the previous objectives, this penalty is based on all the constraints. We consider that every constraint is violated by a certain amount (its current load minus B). By summing all the violations of the current solution, we obtain the total violation over all the constraints, and we can precisely say how far we are from a feasible solution. If the current solution is feasible,
\sum_{p \in partitions} violations(p) = 0.
This objective also has the nice property that it is purely local, depending only on the current solution and not on the other moves. Notice that a feasible solution with 4 rings will be preferred to an unfeasible solution with 3 rings, as z_0 is much smaller than the load of a ring.
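A sketch of the constraint-based objective z_5 for SRAP (illustrative, with invented data; ring loads and the federal-ring load are recomputed from scratch here, whereas the COMET implementation maintains them incrementally through invariants).

def z5_srap(partition, d, B):
    """z_5 = z_0 + sum of capacity violations over all rings, federal ring included."""
    nodes = [u for part in partition for u in part]
    z0 = len(partition)                              # number of local rings
    violation = 0
    for part in partition:                           # local rings, constraint (1)
        load = sum(d[u][v] for u in part for v in nodes if v != u)
        violation += max(0, load - B)
    federal = sum(d[u][v]                            # federal ring, constraint (2)
                  for i, pi in enumerate(partition)
                  for pj in partition[i + 1:]
                  for u in pi for v in pj)
    violation += max(0, federal - B)
    return z0 + violation

d = [[0, 3, 0, 2],
     [3, 0, 4, 0],
     [0, 4, 0, 1],
     [2, 0, 1, 0]]
print(z5_srap([{0, 1, 2, 3}], d, B=12))      # 1 ring, load 20: z5 = 1 + 8
print(z5_srap([{0, 3}, {1, 2}], d, B=12))    # feasible with 2 rings: z5 = 2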
Local Search
We propose a new algorithm called DMN2, which proved to be efficient on both problems. It is a variant of the Diversification by Multiple Neighborhoods (DMN) proposed in [2]. DMN is based on Tabu Search and adds a mechanism to perform diversification when the search is going round and round without improving the objective (even though it is not at a local minimum). This replaces the classical random restart steps. We refine this particular mechanism by proposing several ways of escaping such areas.
More precisely, on our problems, after a series of consecutive non-improving iterations, the DMN algorithm empties a partition by moving all its items to another partition, disregarding the capacity constraint and locally minimizing the objective function. There is a particular case for our function z_5, because it integrates the capacity constraints: the "z_5" version of DMN we have implemented moves the items to another partition minimizing z_5. The results in [2] show a general trend on SRAP and IDP: the more diversification is performed, the better the results. Following this idea, we propose different ways of performing the DMN step, which gives our algorithm DMN2. In DMN2, when the search needs to be diversified, it randomly chooses among three diversification methods (d_1, d_2, d_3). The first method, d_1, is the diversification used in DMN. The second one, d_2, generates a random solution, in the same way as a classic random restart. Finally, d_3 randomly chooses a number m in the range [1, k], where k is the number of rings, and applies m random moves.
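The DMN2 diversification step can be summarized in a few lines; the sketch below is only a schematic rendering (the function names, the node→ring dict encoding, and the random destination choice in d_1 are assumptions, since the real d_1 picks the destination that locally minimizes the objective).

import random

def d1_empty_partition(sol, n_rings, rng):
    """DMN-style move: empty one ring by reassigning all of its nodes to other rings
    (random destinations here for brevity; DMN minimizes the objective locally)."""
    target = rng.randrange(n_rings)
    return {v: (r if r != target else rng.choice([x for x in range(n_rings) if x != target]))
            for v, r in sol.items()}

def d2_random_restart(sol, n_rings, rng):
    return {v: rng.randrange(n_rings) for v in sol}

def d3_random_walk(sol, n_rings, rng):
    """Apply m random single-node moves, with m drawn from [1, n_rings]."""
    new = dict(sol)
    for _ in range(rng.randint(1, n_rings)):
        v = rng.choice(list(new))
        new[v] = rng.randrange(n_rings)
    return new

def diversify(sol, n_rings, rng=None):
    rng = rng or random.Random(0)
    return rng.choice([d1_empty_partition, d2_random_restart, d3_random_walk])(sol, n_rings, rng)

print(diversify({0: 0, 1: 0, 2: 1, 3: 2}, n_rings=3))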
In the end, our general algorithm starts with a solution where all the items are in the same partition. It then applies one of the local search algorithms described before. If the solution returned by the local search is feasible but its objective value is greater than the lower bound k_lb, it empties one partition by randomly assigning all its items to another, and runs the local search again, until it finds a solution with objective value equal to k_lb or until the time limit is exceeded.
Results
The objective functions and the metaheuristics, respectively described in Section 3.3 and Section 3.4, have been coded in COMET and tested on an Intel-based, dual-core, dual-processor Dell Poweredge 1855 blade server running under Linux. The instances used are from the literature.
Benchmark
To test the algorithms, we used two sets of instances. The first one was introduced in [6]. The authors generated 80 geometric instances, based on the fact that customers tend to communicate more with their close neighbors, and 80 random instances. Both subsets contain 40 low-demand instances, with a ring capacity B = 155 Mbs, and 40 high-demand instances, where B = 622 Mbs. The traffic demand between two customers u and v is determined by a discrete uniform random variable corresponding to the number of T1 lines required for the anticipated volume of traffic between u and v. A T1 line has an approximate capacity of 1.5 Mbs. The number of T1 lines is randomly picked in the interval [3,7] for the low-demand cases, while it is selected from the range [11,17] for the high-demand cases. The generated graphs have |V| ∈ {15, 25, 30, 50}. Of the 160 instances generated by O. Goldschmidt, A. Laugier and E. Olinick in 2003, 42 have been proven to be unfeasible by R. Aringhieri and M. Dell'Amico using CPLEX 8.0 (see [2]).
The second set of instances was presented in [8]. The authors generated 40 instances with a ring capacity B of 48 T1 lines, and the number of T1 lines required for the traffic between two customers was chosen in the interval [1,30]. The considered graphs have |V| ∈ {15, 20, 25} and |E| ∈ {30, 35}. Most of the instances in this set are unfeasible.
Note that all the instances are feasible for the IDP problem: we could always assign each demand to a different partition.
Computational Results
We now describe the results obtained for SRAP and IDP on the above two benchmark sets by the algorithms Basic Tabu Search, eXploring Tabu Search, Scatter Search (SS), DMN and DMN2. For each algorithm we consider the five objective functions of Section 3.3, but for the SS we use the three functions described in Section 3.4.
We gave a time limit of 5 minutes to each run of an algorithm; however, we observed that the average time to find the best solution is less than 1 minute. Obviously, the algorithm terminates if the current best solution found is equal to the lower bound k_lb. In case the lower bound is not reached, we define a high-quality solution as one for which the evaluation of the objective equals k_lb + 1. Recall that objective functions z_2 and z_3 cannot be applied with the Scatter Search. Figure 5 only shows, for each algorithm, the number of optimal solutions found with the objective function z_5. With the other objectives, the number of optimal solutions found is zero, which is why we did not show them on the diagram. However, the other objectives found good solutions. Our conclusion is that the other functions may not discriminate enough between the different solutions. For this problem, we can see that the eXploring Tabu Search does not give good results. This can be due to a too-early "backtracking": after a fixed number of consecutive non-improving iterations, the search goes back to a previous configuration and applies the second best move. In the case of the IDP problem, it can take many more iterations to improve the value of the objective function than for the SRAP problem. Indeed, the value of the objective function depends on the number of partitions to which a customer belongs, while an iteration moves only one edge; to reduce its value by only one, several edges may need to be moved. Figure 6 shows, for each algorithm and each objective function, the number of instances for which the search has found an optimal solution, i.e. a solution with k_lb partitions (in dark gray on the diagram); the number of those for which the best feasible solution found has k_lb + 1 partitions (in gray); and, in light gray, the number of instances for which it has found a feasible solution with more than k_lb + 1 partitions. From the objective-function perspective, we can see that z_4, supposed to be the most improving one, is not that good in the COMET implementation. However, the one we add, z_5, is always better than the other ones.
Against all odds, the Basic Tabu Search, with all the objective functions, is as good as the other search algorithms. Still on the local search algorithms, we can see that the second version of the Diversification by Multiple Neighborhoods is much better than the first one with the objectives z_3 and z_4.
For the details of our results see the report [11].
Conclusion
The purpose of this work was to reproduce with COMET the results obtained for the SONET design problems by R. Aringhieri and M. Dell'Amico in 2005 in ANSI C (see [2] for details).
We have implemented in COMET the algorithms and the objective functions described in this paper. We found it relevant to add a variant of one of their local search algorithms and a new objective function. Unfortunately, we cannot exactly compare our results to theirs because the set of 230 instances they generated is not available. However, for the IDP problem, we obtained better results on 15 instances out of the 160 compared, and similar results on the other instances. Unfortunately, we did not find their results for the SRAP problem. Still, for the SRAP problem, compared to the results obtained by O. Goldschmidt, A. Laugier and E. Olinick in 2003 [6], we obtained better results: we have more instances for which the algorithm reaches the lower bound and fewer unfeasible instances. It would be interesting to have all the instances and results in order to fully compare our results.
In the end we can exhibit two main observations. Firstly, for these two problems, the more an algorithm uses diversification the better it is. Actually, we have tried different intensification methods for the local search algorithms but none of them improved the results, worst, they gave us pretty bad results.
Secondly, based on our results, we can say that our objective function implemented in COMET finds more good solutions than the other ones. It is a constraint-based objective function taking into account the violation of every constraint. Hence it has the advantage of being both more generic and more precise than the dedicated functions, with better results.
| 6,079 |
0910.1255
|
2086153481
|
This paper presents a new method and a constraint-based objective function to solve two problems related to the design of optical telecommunication networks, namely the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP). These network topology problems can be represented as a graph partitioning with capacity constraints as shown in previous works. We present here a new objective function and a new local search algorithm to solve these problems. Experiments conducted in C
|
In @cite_4 the SRAP problem is considered. They propose three greedy algorithms with different heuristics, the edge-based, the cut-based and the node-based. The first two algorithms start by assigning each node to a different ring. At each iteration they reduce the number of rings by merging two rings @math and @math if @math is a feasible ring for the capacity constraint. In the edge-based heuristic, the two rings with the maximum weight edge are merged, while in the cut-based heuristic, the two rings with the maximum total weight of the edges with one endpoint in each of them are merged. Algorithm shows the pseudo code for the edge-based heuristic.
|
{
"abstract": [
"We consider the problem of interconnecting a set of customer sites using bidirectional SONET rings of equal capacity. Each site is assigned to exactly one ring and a special ring, called the federal ring, interconnects the other rings together. The objective is to minimize the total cost of the network subject to a ring capacity limit where the capacity of a ring is determined by the total bandwidth required between sites assigned to the same ring plus the total bandwidth request between these sites and sites assigned to other rings.We present exact, integer-programming based solution techniques and fast heuristic algorithms for this problem. We compare the results from applying the heuristic algorithms with those produced by the exact methods for real-world as well as randomly generated problem instances. We show that two of the heuristics find solutions that cost at most twice that of an optimal solution. Empirical evidence indicates that in practice the algorithms perform much better than their theoretical bound and often find optimal solutions."
],
"cite_N": [
"@cite_4"
],
"mid": [
"2002547394"
]
}
|
Sonet Network Design Problems
|
This paper presents a new algorithm and an objective function to solve two real-world combinatorial optimization problems from the field of network design. These two problems, the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP), have been shown N P-hard and have already been solved by combinatorial optimization techniques. This work extends the seminal ideas introduced by R. Aringhieri and M. Dell'Amico in 2005 in [2]. This paper is organized as follows. In the sequel of this section we introduce the two problems we have worked on, and the local search techniques which have been used to solve them. We will also introduce the models in a constrained optimization format for the two problems. We then present the previous works on SRAP and IDP in section 2. Section 3 describes the key ingredients necessary to implement the local search algorithms. Finally, the results are shown in Section 4.
Optical networks topologies
During the last few years the number of internet based application users has exponentially increased, and so has the demand for bandwidth. To enable fast transmission of large quantities of data, the fiber optic technology in telecommunication is the current solution.
The Synchronous Optical NETwork (SONET) in North America and Synchronous Digital Hierarchy (SDH) in Europe and Japan are the standard designs for fiber optics networks. They have a ring-based topology, in other words, they are a collection of rings.
Rings Each customer is connected to one or more rings, and can send, receive and relay messages using an add-drop-multiplexer (ADM). There are two bidirectional links connecting each customer to his neighboring customers on the ring. In a bidirectional ring the traffic between two nodes can be sent clockwise or counterclockwise. This topology allows an enhanced survivability of the network, specifically if a failure occurs on a link, the traffic originally transmitted on this link will be sent on the surviving part of the ring. The volume traffic on any ring is limited by the link capacity, called B. The cost of this kind of network is defined by the cost of the different components used in it.
There are different ways to represent a network. In this paper, we consider two network topologies described by R. Aringhieri and M. Dell'Amico in 2005 in [2]. In both topologies the goal is to minimize the cost of the network while guaranteeing that the customers' demands, in term of bandwidth, are satisfied.
The models associated with these topologies are based on graphs. Given an undirected graph G = (V, E), V = {1, . . . , n}, the set of nodes represents the customers and E, the set of edges, stands for the customers' traffic demands. A communication between two customers u and v corresponds to the weighted edge (u, v) in the graph, where the weight d_uv is the fixed traffic demand. Note that d_uv = d_vu, and that d_uu = 0.
First topology (SRAP)
In the first topology, each customer is connected to exactly one ring. All of these local rings are connected with a device called digital cross connector (DXC) to a special ring, called the federal ring. The traffic between two rings is transmitted over this special ring. Like the other rings, the federal ring is limited by the capacity B. Because DXCs are so much more expensive than ADMs we want to have the smallest possible number of them. As there is a one-to-one relationship between the ring and the DXC, minimizing the number of rings is equivalent to minimizing the number of DXCs. The problem associated to this topology is called SONET Ring Assignment Problem (SRAP) with capacity constraint. Figure 1 shows an example of this topology. Model This topology is modeled by a decomposition of the set of nodes V into a partition, each subset of the partition representing a particular ring. Assigning a node to a subset of the partition in the model is then equivalent to assigning a customer to a ring.
Formally, let V_1, V_2, . . . , V_k be a partitioning of V into k subsets. Each customer in the subset V_i is assigned to the i-th local ring. As each customer is connected with an ADM to one and only one ring, and each local ring is connected to the federal ring with a DXC, exactly |V| ADMs and k DXCs are used in the corresponding SRAP network.
Hence, minimizing the number of rings is equivalent to minimizing k subject to the following constraints:
\sum_{u \in V_i} \sum_{v \in V, v \neq u} d_{uv} \le B, \quad \forall i = 1, \dots, k    (1)

\sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \sum_{u \in V_i} \sum_{v \in V_j} d_{uv} \le B    (2)
Constraint (1) imposes that the total traffic routed on each ring does not exceed the capacity B. In other words, for a given ring i, it forces the total traffic demand of all the customers connected to this ring to be lower than or equal to the bandwidth. Constraint (2) forces the load of the federal ring to be less than or equal to B; it computes the sum of the traffic demands between all the pairs of customers connected to different rings. Figure 2 illustrates the relation between the node partitioning model and the first topology, SRAP. We can see that, because the nodes 1, 3, 5 and 6 are in the same partition, they are connected to the same ring. Similarly, the nodes 2, 4 and 7 are on the same ring. For this problem we can easily compute a lower bound k_lb, introduced in [6]. We want to know the minimum number of partitions needed to route all the traffic. Reasoning on the total traffic amount, if we sum all the traffic demands of the graph and divide the sum by the bandwidth B, we trivially obtain a minimum for the number of rings, that is, a lower bound on the number of partitions. Moreover, we cannot have a fractional number of partitions, which is why we round this fraction up:
k_{lb} = \left\lceil \frac{\sum_{u=1}^{n-1} \sum_{v=u+1}^{n} d_{uv}}{B} \right\rceil
Second topology (IDP)
In the second topology, customers can be connected to more than one ring. If two customers want to communicate, they have to be connected to the same ring. In this case, the DXC are no longer needed and neither is the federal ring. However there are more ADM used than in the first topology. In this case, the most expensive component is the ADM although its price has significantly dropped over the past few years. It is important, in this topology, to have the smallest numbers of ADMs. This problem is called Intra-ring Synchronous Optical Network Design Problem (IDP). The figure 3 illustrates this topology. Model Contrarily to the SRAP problem, there is no need to assign each customer to a particular ring because customers can be connected to several rings. Here the model is based on a partition of the edges of the graph, where a subset of the partition corresponds to a ring.
Formally, let E_1, E_2, . . . , E_k be a partitioning of E into k subsets and Nodes(E_i) be the set of endpoint nodes of the edges in E_i. Each subset of the partition corresponds to a ring; in other words, each customer in Nodes(E_i) is linked to the i-th ring. In the corresponding IDP network, there are

\sum_{i=1}^{k} |Nodes(E_i)| ADMs and no DXC.
Hence, minimizing the number of ADMs is equivalent to minimizing
\sum_{i=1}^{k} |Nodes(E_i)| \quad subject to \quad \sum_{(u,v) \in E_i} d_{uv} \le B, \quad \forall i = 1, \dots, k    (3)
Constraint (3) imposes that the traffic in each ring does not exceed the capacity B. Figure 4 shows the relation between the edge partitioning and the second topology. If all the edges of a node are in the same partition, this node is connected to a single ring. We can see, for example, that node 4 has all its edges in the same partition and is therefore connected to only one ring. In contrast, the edges of node 2 are in two different partitions, so it is connected to two rings. The SRAP problem can thus be seen as a node partitioning problem, whereas IDP is an edge partitioning problem on the graph described above, subject to capacity constraints. These graph partitioning problems were introduced in [6] and [7].
Both of these problems are NP-hard (see O. Goldschmidt, A. Laugier and E. Olinick in 2003, [6], and O. Goldschmidt, D. Hochbaum, A. Levin and E. Olinick in 2003, [7], for details). The principal constraint, the load constraint, is similar to a capacity constraint, yet different: a capacity constraint holds on the variables in the sum, while the load constraint holds on the variables below the sum. The question is how to choose the d_uv (which are data) that count towards the load.
Brief introduction to Local Search
In order to solve these two combinatorial optimization problems efficiently and quickly, we decided to use local search instead of an exact algorithm. Indeed, it allows an efficient search among the candidate solutions by performing steps from one solution to another.
Principles Local search is a metaheuristic based on the iterative improvement of an objective function. It has proved very efficient on many combinatorial optimization problems, such as the Maximum Clique Problem (L. Cavique, C. Rego and I. Themido in 2001, [9]) or the Graph Coloring Problem (J.P. Hansen and J.K. Hao in 2002, [10]). It can be used on problems formulated either as mere optimization problems or as constrained optimization problems, where the goal is to optimize an objective function while respecting some constraints. Local search algorithms perform local moves in the space of candidate solutions, called the search space, trying to improve the objective function, until a solution deemed optimal is found or a time bound is reached. Defining the neighborhood graph and the method to explore it are two of the key ingredients of local search algorithms.
The approach for solving combinatorial optimization problems with local search is very different from the systematic tree search of constraint and integer programming. Local search belongs to the family of metaheuristic algorithms, which are incomplete by nature and cannot prove optimality. However, on many problems it will isolate an optimal or high-quality solution in a very short time: local search sacrifices optimality guarantees for performance. In our case, we can compute the lower bound to either prove that the obtained solution is optimal or estimate its optimality, hence local search is well suited.
Basic algorithm A local search algorithm starts from a candidate solution and then iteratively moves to a neighboring solution. This is only possible if a neighborhood relation is defined on the search space. Typically, for every candidate solution, we define a subset of the search space to be its neighborhood. Moves are performed from neighbors to neighbors, hence the name local search. The basic principle is to choose among the neighbors the one with the best value of the objective function. The problem is then that the algorithm can get stuck in local optima. Metaheuristics, such as Tabu Search, are added to avoid this. In Tabu Search, the last t visited configurations are left out of the search (t being a parameter of the algorithm): this ensures that the algorithm can escape local optima, at least at order t. A pseudo-code is given in Algorithm 1.
Termination of local search can be based on a time bound. Another common choice is to terminate when the best solution found by the algorithm has not been improved in a given number of iterations. Local search algorithms are typically incomplete algorithms, as the search may stop even if the best solution found by the algorithm is not optimal. This can happen even if termination is due to the impossibility of improving the solution, as the optimal solution can lie far from the neighborhood of the solutions crossed by the algorithms.
Choose or construct an initial solution S_0 ;
S ← S_0 ;                                /* S is the current solution */
S* ← S_0 ;                               /* S* is the best solution so far */
bestValue ← objValue(S_0) ;              /* bestValue is the evaluation of S* */
T ← ∅ ;                                  /* T is the Tabu list */
while Termination criterion not satisfied do
    N(S) ← all the neighboring solutions of S ;    /* Neighborhood exploration */
    S ← a solution in N(S) minimizing the objective ;
    if objValue(S) < bestValue then                /* The solution found is better than S* */
        S* ← S ;
        bestValue ← objValue(S) ;
    end
    Record tabu for the current move in T (delete oldest entry if necessary) ;
end
Algorithm 1: Tabu Search
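To make this generic scheme concrete, here is a minimal Python sketch of the loop of Algorithm 1. It is an illustration only, not the COMET implementation used in this work; neighbors, obj_value and move_key are hypothetical placeholders supplied by the caller, and move_key must return something hashable.

from collections import deque

def tabu_search(s0, neighbors, obj_value, move_key, tabu_len=10, max_iters=1000):
    """Minimal tabu search sketch following Algorithm 1 (illustrative only)."""
    current, best = s0, s0
    best_value = obj_value(s0)
    tabu = deque(maxlen=tabu_len)              # the last tabu_len moves are forbidden
    for _ in range(max_iters):                 # termination criterion: iteration bound
        candidates = [(obj_value(n), n) for n in neighbors(current)
                      if move_key(current, n) not in tabu]
        if not candidates:
            break
        value, chosen = min(candidates, key=lambda t: t[0])   # best non-tabu neighbor
        tabu.append(move_key(current, chosen))                # record the move as tabu
        current = chosen
        if value < best_value:                                # new best solution so far
            best, best_value = chosen, value
    return best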
COMET
COMET is an object-oriented language created by Pascal Van Hentenryck and Laurent Michel. It has a constraint-based architecture that makes it easy to use when implementing local search algorithms and, more importantly, constraint-based local search algorithms (see [1] for details).
Moreover, it has a rich modeling language, including invariants, and a rich constraint language featuring numerical, logical and combinatorial constraints. Constraints and objective functions are differentiable objects maintaining the properties used to direct the graph exploration. The constraints maintain their violations and the objectives their evaluation. One of its most important particularities is that differentiable objects can be queried to determine incrementally the impact of local moves on their properties.
As we can see in constraint (1), the sums are over data (d_uv) and are determined by the variables (u ∈ V_i, v ∈ V, v ≠ u). We will rely on COMET's built-in invariants to define a constraint representing the load.
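COMET code is not reproduced here. As a language-neutral illustration of the invariant idea, the following Python sketch (hypothetical data layout: d is the symmetric demand matrix, assuming d[u][u] = 0 as above) keeps the load of constraint (1) up to date incrementally when a node changes ring, instead of recomputing the sums from scratch.

class RingLoads:
    """Incrementally maintained ring loads for constraint (1) (illustrative sketch)."""
    def __init__(self, d, ring_of, n_rings):
        self.d = d                        # d[u][v]: traffic demand between u and v
        self.ring_of = list(ring_of)      # ring_of[u]: ring currently hosting node u
        # load[i] = sum over nodes u assigned to ring i of u's total demand
        self.load = [0] * n_rings
        for u, r in enumerate(self.ring_of):
            self.load[r] += sum(d[u])
    def move(self, u, new_ring):
        """Update the two affected loads when node u changes ring."""
        old_ring = self.ring_of[u]
        demand_u = sum(self.d[u])         # could itself be cached as an invariant
        self.load[old_ring] -= demand_u
        self.load[new_ring] += demand_u
        self.ring_of[u] = new_ring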
Greedy algorithms for SRAP
In [6] the SRAP problem is considered. The authors propose three greedy algorithms with different heuristics: the edge-based, the cut-based and the node-based. The first two algorithms start by assigning each node to a different ring. At each iteration they reduce the number of rings by merging two rings V_i and V_j if V_i ∪ V_j is a feasible ring for the capacity constraint. In the edge-based heuristic, the two rings joined by the maximum weight edge are merged, while in the cut-based heuristic, the two rings with the maximum total weight of the edges having one endpoint in each of them are merged. Algorithm 2 shows the pseudo-code for the edge-based heuristic.
F ← E ;                                   /* Initialize the set of edges that have not been used yet */
∀v ∈ V : ring(v) ← v ;                    /* Assign each node to a different ring */
while F ≠ ∅ do                            /* There are still some edges that have not been used */
    Choose a maximum capacity edge (u, v) ∈ F ;
    i ← ring(u), j ← ring(v) ;
    if V_i ∪ V_j is a feasible ring then  /* Merging the rings gives a feasible ring */
        ∀v ∈ V_j : ring(v) ← i ;
        F ← F \ {(x, y) | ring(x) = i, ring(y) = j} ;
    else
        F ← F \ {(u, v)} ;
    end
end
Algorithm 2: Edge-Based Heuristic
Given a value k, the node-based heuristic starts by randomly assigning a node to each of the k rings. At each iteration it first chooses the ring V_i with the largest unused capacity, then the unassigned node u with the largest traffic towards the nodes in V_i. Finally it adds u to the ring V_i disregarding the capacity constraint. The pseudo-code for this heuristic is shown in Algorithm 3. The node-based heuristic is run ten times. At each run, if a feasible solution is found, the corresponding value of k is kept and the next run takes k − 1 as input. The idea behind this is to try and improve the objective at each run.
U ← V ;                                  /* Initialize the set of nodes that have not been used yet */
for i = 1 to k do                        /* Assign k random nodes to the k partitions */
    Choose u ∈ U, V_i ← {u}, U ← U \ {u} ;
end
while U ≠ ∅ do                           /* There are still some unused nodes */
    Choose a minimum capacity ring V_i ;
    Choose u ∈ U maximizing ∑_{v ∈ V_i} d_uv ;
    ring(u) ← V_i, U ← U \ {u} ;         /* Assign u to V_i */
end
Algorithm 3: Node-Based Heuristic
To test these heuristics, the authors have randomly generated 160 instances. The edge-based and the cut-based heuristics are run first. If they have found a feasible solution and obtained a value for k, the node-based heuristic is then run with the smallest value obtained for k as input. If they have not, the node-based heuristic takes as input a random value from the range [k_lb, |V|], where k_lb is the lower bound described previously.
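As an illustration of the merging idea of Algorithm 2, a rough Python sketch could look as follows; feasible_merge is a hypothetical helper standing for the capacity check on the union of two rings.

def edge_based(nodes, edges, d, feasible_merge):
    """Greedy sketch of the edge-based heuristic: merge rings along heavy edges while feasible."""
    ring = {v: v for v in nodes}                    # each node starts in its own ring
    remaining = sorted(edges, key=lambda e: d[e[0]][e[1]], reverse=True)
    for (u, v) in remaining:                        # heaviest edges first
        i, j = ring[u], ring[v]
        if i != j and feasible_merge(ring, i, j):   # would the merged ring respect B?
            for w in nodes:                         # merge ring j into ring i
                if ring[w] == j:
                    ring[w] = i
    return ring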
MIP and Branch and Cut for IDP
A special case of the IDP problem where all the edges have the same weight, is studied in [7]. This special case is called the K-Edge-Partitioning problem. Given a simple undirected graph G = (V, E) and a value k < |E|, we want to find a partitioning of E, {E 1 , E 2 , . . . E l } such that ∀i ∈ {1, . . . , l}, |E i | ≤ k. The authors present two linear-time-approximation algorithms with fixed performance guarantee. Y. Lee, H. Sherali, J. Han and S. Kim in 2000 ( [8]), have studied the IDP problem with an additional constraint such that for each ring i, |Nodes(E i )| ≤ R. The authors present a mixed-integer programming model for the problem, and develop a branch-and-cut algorithm. They also introduce a heuristic to generate an initial feasible solution, and another one to improve the initial solution. To initialize a ring, the heuristic first, adds the node u with the maximum graph degree, with respect to unassigned edges, and then adds to the partition the edge [u, v] such that the graph degree of v is maximum. It iteratively increases the partition by choosing a node such that the total traffic does not exceed the limit B. A set of 40 instances is generated to test these heuristics and the branch-and-cut.
Local Search for SRAP and IDP
More recently, in [2], these two problems have been studied. As seen previously, local search requires a neighborhood to be defined in order to choose the next solution. The authors of [2] use the same neighborhood for all of their metaheuristics: it tries to assign an item x from a partition P_1 to another partition P_2. The authors also consider the neighborhood obtained by swapping two items, x and y, from two different partitions P_1 and P_2. But instead of trying all the pairs of items, they only try to swap the two items if the solution resulting from the assignment of x to the partition P_2 is unfeasible.
In order to compute a starting solution for the IDP problem, the authors describe four different heuristics. The first heuristic introduced in [2] orders the edges by decreasing weight; at each iteration it tries to assign the heaviest edge not already assigned to the ring with the smallest residual capacity with respect to the capacity constraint. If no assignment is possible, the current edge is assigned to a new ring. The second one sorts the edges by increasing weight and tries to assign the current edge to the current ring if the capacity constraint is respected; otherwise the ring is no longer considered and a new ring is initialized with the current edge.
The two other methods described in [2] are based on the idea that, to save ADMs, a good solution should have very dense rings. They are both greedy and rely on a clique algorithm. In graph theory, a clique in an undirected graph G = (V, E) is a subset of the vertex set C ⊆ V such that every two vertices in C are connected by an edge. Finding a clique is not that easy; one way to do it is to use a "Union-Find" strategy: find two cliques A and B such that each node in A is adjacent to each node in B, then merge the two cliques (Union). The associated heuristic starts by considering each node as a clique of size one, and merges two cliques into a larger clique until no more merges are possible.
In the third method, Clique-BF, the algorithm iteratively selects a clique of unassigned edges with total traffic less than or equal to B. It then assigns it to the ring that minimizes the residual capacity and, if possible, preserves feasibility. If neither is possible, it places the clique in a new ring. Algorithm 4 shows the pseudo-code associated with this heuristic. The last algorithm, Cycle-BF, is like the previous method, but instead of looking for a clique at each iteration, it tries to find a cycle with as many chords as possible.
They also introduce four objective functions, one of which depends on the current and the next status of the search. Let z 0 be the basic objective function counting the number of rings of a solution for SRAP, and the total number of ADMs for IDP, and let BN be the highest load of a ring in the current solution.
U ← E ; r ← 0 ;
while U ≠ ∅ do
    Heuristically find a clique C ⊂ U such that weight(C) ≤ B ;
    /* Search a ring such that the weight of the ring plus the weight of the clique
       does not exceed B and is the biggest possible */
    j ← min{B − weight(E_i) − weight(C) : i ∈ {1, . . . , k}, B − weight(E_i) − weight(C) ≥ 0} ;
    if j = null then
        r ← r + 1 ; j ← r ;
    end
    E_j ← E_j ∪ C ; U ← U \ C ;
end
Algorithm 4: Clique-BF
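The best-fit choice of the ring j in Algorithm 4 can be illustrated by the following small Python fragment (ring_loads and clique_weight are hypothetical inputs; returning None stands for the "open a new ring" case).

def best_fit_ring(ring_loads, clique_weight, B):
    """Return the index of the ring whose residual capacity fits the clique most tightly."""
    residuals = [(B - load - clique_weight, i)
                 for i, load in enumerate(ring_loads)
                 if B - load - clique_weight >= 0]
    if not residuals:
        return None            # no existing ring can host the clique: open a new ring
    return min(residuals)[1]   # smallest non-negative residual = best fit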
z_1 = z_0 + max{0, BN − B}

z_2 = z_1 + α · RingLoad(r) if the last move has created a new ring r, and z_2 = z_1 otherwise

z_3 = z_0 · B + BN

z_4 is a variable objective function having different expressions for different transitions from the current status of the search to the next one; one of its expressions is z_4a = z_0 · B + BN (= z_3).

The first function z_1 minimizes the basic function z_0. When BN > B, it also penalizes the unfeasible solutions, by taking into account only one ring, the one with the highest overload. In addition to the penalty for the unfeasible solutions, z_2 penalizes the moves that increase the number of rings. Function z_3 encourages solutions with small z_0, while among all the solutions with the same value of z_0, it prefers the ones in which the rings have similar loads. The last objective function z_4 is an adaptive technique that modifies the evaluation according to the status of the search.
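As a quick numerical illustration of z_1 and z_3 (the ring loads below are made up for the example, they are not data from the paper):

def z1(z0, loads, B):
    """z1 = z0 + max(0, BN - B) where BN is the highest ring load."""
    BN = max(loads)
    return z0 + max(0, BN - B)

def z3(z0, loads, B):
    """z3 = z0 * B + BN: compares first on z0, then on the highest load."""
    return z0 * B + max(loads)

# Invented example: 3 rings, capacity B = 155, the third ring is overloaded
loads = [120, 150, 170]
print(z1(z0=3, loads=loads, B=155))   # 3 + 15 = 18
print(z3(z0=3, loads=loads, B=155))   # 3*155 + 170 = 635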
Our work
In this section we present the different tools needed to implement the Constraints Based Local Search algorithms for SRAP and IDP. First we introduce the starting solution, then the neighborhoods and the objective functions. Finally we present the different local search algorithms.
Starting solution
Most of the time, local search starts from a random initial solution. However, we have tested other possibilities and two other options proved to be more efficient.
The best initializing method assigns all the items, nodes for SRAP or edges for IDP, to the same partition. This solution is certainly unfeasible, as all the traffic is on only one ring. This biases the search towards solutions with a minimum value for the cost and a very bad value for the capacity constraints' violations. Astonishingly, this is the option that gave us the best results on large instances.
We had good confidence in another one which first computes the lower bound k lb (described in section 2) and randomly assigns all the items to exactly k lb partitions. The idea was to let the Local Search reduce the number of violations. This starting solution was good on small instances and not so good on large ones. It was the same with a random solution, which corresponds, for these problems, to a solution where all the items are randomly assigned to a partition.
Neighborhoods
In a generic partitioning problem there are usually two basic neighborhoods. From a given solution, we can move an object from a subset to another subset or swap two objects assigned to two different subsets. For SRAP a neighboring solution is produced by moving a node from a ring to another (including a new one) or by swapping two nodes assigned to two different rings. The same kind of neighborhood can be used for IDP: moving an edge from a ring to another or swapping two edges.
In some cases it is more efficient to restrict the neighborhood to the feasible space. We have tested different variants of the basic neighborhood applying this idea, by choosing the moved item in the worst partition (w.r.t. the capacity constraint) and even by assigning it to the partition with the lowest load. However, these variants appear to be less efficient than the basic neighborhood. As will be seen later, it seems that on these problems it is necessary to keep the search as broad as possible.
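For concreteness, the basic move and swap neighborhoods can be sketched in Python as follows, assuming a solution is represented as a list assign where assign[item] is the index of the ring hosting the item (a hypothetical representation, not the COMET model).

def move_neighbors(assign, n_rings):
    """All solutions obtained by moving one item to another ring (including a new one)."""
    for item, ring in enumerate(assign):
        for target in range(n_rings + 1):          # index n_rings stands for a brand-new ring
            if target != ring:
                neighbor = list(assign)
                neighbor[item] = target
                yield neighbor

def swap_neighbors(assign):
    """All solutions obtained by swapping two items assigned to different rings."""
    for a in range(len(assign)):
        for b in range(a + 1, len(assign)):
            if assign[a] != assign[b]:
                neighbor = list(assign)
                neighbor[a], neighbor[b] = assign[b], assign[a]
                yield neighbor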
Objective function
We have compared the four objective functions described in [2] (see Section 2) to a new one we have defined: z 5 .
z 5 = z 0 + ∑ p ∈ partitions violations(p) where
partitions are all the rings (in the case of the SRAP problem the federal ring is also included),
violations(p) = capacity(p) − B if the load of p exceed B 0 otherwise.
This objective function minimizes the basic function z_0 and penalizes the unfeasible solutions, but contrary to the previous objectives, this penalty is based on all the constraints. We consider that every constraint is violated by a certain amount (its current load minus B). By summing all the violations of the current solution, we obtain the total violation over all the constraints, and we can say precisely how far we are from a feasible solution. If the current solution is feasible,

∑_{p ∈ partitions} violations(p) = 0.
This objective also has the nice property of being purely local, depending only on the current solution and not on the other moves. Notice that a feasible solution with 4 rings will be preferred to an unfeasible solution with 3 rings, as z_0 is much smaller than the load of a ring.
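A direct transcription of z_5 outside COMET could look like the following Python sketch (hypothetical inputs: loads contains the load of every ring, including the federal ring for SRAP). The small example reproduces the remark above: a feasible solution with 4 rings is preferred to an unfeasible one with 3 rings.

def z5(z0, loads, B):
    """z5 = z0 + total violation: sum over rings of how much their load exceeds B."""
    total_violation = sum(max(0, load - B) for load in loads)
    return z0 + total_violation

# Invented loads, B = 155:
print(z5(z0=4, loads=[100, 120, 90, 80], B=155))    # feasible, 4 rings -> 4
print(z5(z0=3, loads=[200, 150, 140], B=155))       # unfeasible, 3 rings -> 3 + 45 = 48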
Local Search
We have proposed a new algorithm called DMN2 which proved to be efficient on both problems. It is a variant of the Diversification by Multiple Neighborhoods (DMN) proposed in [2]. DMN is based on Tabu Search, and adds a mechanism to perform diversification when the search is going round and round without improving the objective (even though it is not at a local minimum). This replaces the classical random restart steps. We refine this particular mechanism by proposing several ways of escaping such areas.
More precisely, on our problems, after a series of consecutive non-improving iterations, the DMN algorithm empties a partition by moving all its items to another partition, disregarding the capacity constraint and locally minimizing the objective function. There is a particular case for our function z_5, because it integrates the capacity constraints: the "z_5" version of DMN we have implemented moves the items to another partition minimizing z_5. The results in [2] show a general trend on SRAP and IDP: the more diversification is performed, the better the results. Following this idea, we propose different ways of performing the DMN step, which gives our algorithm DMN2. In DMN2, when the search needs to be diversified, it randomly chooses among three diversification methods (d_1, d_2, d_3). The first method, d_1, is the diversification used in DMN. The second one, d_2, generates a random solution, in the same way as a classic random restart. Finally, d_3 randomly chooses a number m in the range [1, k], where k is the number of rings, and applies m random moves.
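The random choice among the three diversification methods of DMN2 can be sketched as follows; empty_partition, random_solution and random_moves are hypothetical helpers standing for d_1, d_2 and d_3.

import random

def dmn2_diversify(solution, k, empty_partition, random_solution, random_moves):
    """Pick one of the three DMN2 diversification methods at random."""
    choice = random.randint(1, 3)
    if choice == 1:                    # d1: the original DMN diversification
        return empty_partition(solution)
    if choice == 2:                    # d2: classic random restart
        return random_solution()
    m = random.randint(1, k)           # d3: m random moves, m drawn in [1, k]
    return random_moves(solution, m)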
In the end, our general algorithm starts with a solution where all the items are in the same partition. Then it applies one of the local search algorithms described before. If the solution returned by the local search is feasible but its objective value is greater than the lower bound k_lb, it empties one partition by randomly assigning all its items to another one. It then runs the local search again, until it finds a solution whose objective value equals k_lb or until the time limit is exceeded.
Results
The objective functions and the metaheuristics, respectively described in Section 3.3 and Section 3.4, have been coded in COMET and tested on an Intel-based, dual-core, dual-processor Dell PowerEdge 1855 blade server running under Linux. The instances used are from the literature.
Benchmark
To test the algorithms, we used two sets of instances. The first one has been introduced in [6]. They have generated 80 geometric instances, based on the fact that customers tend to communicate more with their close neighbors, and 80 random instances. These subsets have both 40 low-demand instances, with a ring capacity B = 155 Mbs, and 40 high-demand instances, where B = 622 Mbs. The traffic demand between two customers, u and v, is determined by a discrete uniform random variable corresponding to the number of T1 lines required for the anticipated volume of traffic between u and v. A T1 line has an approximate capacity of 1.5 Mbs. The number of T1 lines is randomly picked in the interval [3,7], for low-demand cases, while it is selected from the range [11,17], for the high-demand cases. The generated graphs have |V | ∈ {15, 25, 30, 50}. In the 160 instances, generated by O. Goldschmidt, A. Laugier and E. Olinick in 2003, 42 have been proven to be unfeasible by R. Aringhieri and M. Dell'Amico using CPLEX 8.0 (see [2]).
The second set of instances has been presented in [8]. They have generated 40 instances with a ring capacity B = 48× T1 lines and the number of T1 lines required for the traffic between two customers has been chosen in the interval [1,30]. The considered graphs have |V | ∈ {15, 20, 25} and |E| = {30, 35}. Most of the instances in this set are unfeasible.
Note that all the instances are feasible for the IDP problem, since we can always assign each demand to a different partition.
Computational Results
We now describe the results obtained for SRAP and IDP on the above two benchmark sets by the algorithms Basic Tabu Search, eXploring Tabu Search, DMN, DMN2 and Scatter Search (SS). For each algorithm we consider the five objective functions of Section 3.3, but for the SS only three of them can be used.
We gave a time limit of 5 minutes to each run of an algorithm; however, we observed that the average time to find the best solution is less than 1 minute. Obviously, the algorithm terminates if the current best solution found is equal to the lower bound k_lb. In case the lower bound is not reached, we define as a high-quality solution a solution for which the evaluation of the objective is equal to k_lb + 1. Recall that objective functions z_2 and z_3 cannot be applied with the Scatter Search. Figure 5 shows, for each algorithm, only the number of optimal solutions found with the objective function z_5. With the other objectives, the number of optimal solutions found is zero, which is why they are not shown on the diagram; the other objectives nevertheless found good solutions. Our conclusion is that the other functions may not discriminate enough between the different solutions. For this problem, we can see that the eXploring Tabu Search does not give good results. This can be due to a too early "backtracking": after a fixed number of consecutive non-improving iterations, the search goes back to a previous configuration and applies the second best move. In the case of the IDP problem, it can take many more iterations to improve the value of the objective function than for the SRAP problem. Indeed, the value of the objective function depends on the number of partitions to which a customer belongs, while an iteration moves only one edge; reducing the objective value by only one may require moving several edges. Figure 6 shows, for each algorithm and each objective function, the number of instances for which the search has found an optimal solution, i.e. a solution with k_lb partitions (in dark gray on the diagram); the number of those for which the best feasible solution found has k_lb + 1 partitions (in gray); and, in light gray, the number of instances for which it has found a feasible solution with more than k_lb + 1 partitions. From the objective functions' perspective, we can see that z_4, supposed to be the most improving one, is not that good in the COMET implementation. However, the one we added, z_5, is always better than the other ones.
Against all odds, the Basic Tabu Search on all the objective functions, is as good as the other search algorithms. Still on the local search algorithms, we can see that the second version of the Diversification by Multiple Neighborhoods, is much better than the first one with the objectives z 3 and z 4 .
For the details of our results see the report [11].
Conclusion
The purpose of this work was to reproduce with COMET the results obtained for the SONET Design Problems by R. Aringhieri and M. Dell'Amico in 2005 in ANSI C (see [2] for details).
We have implemented in COMET the algorithms and the objective functions described in this paper. We found it relevant to add a variant of one of their local search algorithms and a new objective function. Unfortunately, we cannot exactly compare our results to theirs because the set of 230 instances they have generated is not available. However, for the IDP problem, we obtained better results on 15 of the 160 instances compared, and similar results on the other instances. Unfortunately, we did not find their results for the SRAP problem. Still, for the SRAP problem, compared to the results obtained by O. Goldschmidt, A. Laugier and E. Olinick in 2003 [6], we obtained better results: we have more instances for which the algorithm reaches the lower bound and fewer unfeasible instances. It would be interesting to have all the instances and the results in order to fully compare our results.
In the end, we can exhibit two main observations. Firstly, for these two problems, the more an algorithm uses diversification, the better it is. Actually, we have tried different intensification methods for the local search algorithms but none of them improved the results; worse, they gave us pretty bad results.
Secondly, based on our results, we can say that our objective function implemented in COMET finds more good solutions than the other ones. It is a constraint-based objective function taking into account the violation of every constraint. Hence it has the asset of being both more generic and precise than the dedicated functions, with better results.
| 6,079 |
0910.1255
|
2086153481
|
This paper presents a new method and a constraint-based objective function to solve two problems related to the design of optical telecommunication networks, namely the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP). These network topology problems can be represented as a graph partitioning with capacity constraints as shown in previous works. We present here a new objective function and a new local search algorithm to solve these problems. Experiments conducted in C
|
A special case of the IDP problem where all the edges have the same weight is studied in @cite_1 . This special case is called the K-Edge-Partitioning problem. Given a simple undirected graph @math and a value @math , we want to find a partitioning of @math , @math such that @math . The authors present two linear-time-approximation algorithms with a fixed performance guarantee.
|
{
"abstract": [
"Motivated by a problem arising in the design of telecommunications networks using the SONET standard, we consider the problem of covering all edges of a graph using subgraphs that contain at most k edges with the objective of minimizing the total number of vertices in the subgraphs. We show that the problem is -hard when k ≥ 3 and present a linear-time -approximation algorithm. For even k values, we present an approximation scheme with a reduced ratio but with increased complexity. © 2002 Wiley Periodicals, Inc."
],
"cite_N": [
"@cite_1"
],
"mid": [
"2139721783"
]
}
|
Sonet Network Design Problems
|
This paper presents a new algorithm and an objective function to solve two real-world combinatorial optimization problems from the field of network design. These two problems, the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP), have been shown N P-hard and have already been solved by combinatorial optimization techniques. This work extends the seminal ideas introduced by R. Aringhieri and M. Dell'Amico in 2005 in [2]. This paper is organized as follows. In the sequel of this section we introduce the two problems we have worked on, and the local search techniques which have been used to solve them. We will also introduce the models in a constrained optimization format for the two problems. We then present the previous works on SRAP and IDP in section 2. Section 3 describes the key ingredients necessary to implement the local search algorithms. Finally, the results are shown in Section 4.
Optical networks topologies
During the last few years the number of users of internet-based applications has increased exponentially, and so has the demand for bandwidth. Fiber optic technology is the current telecommunication solution for transmitting large quantities of data quickly.
The Synchronous Optical NETwork (SONET) in North America and Synchronous Digital Hierarchy (SDH) in Europe and Japan are the standard designs for fiber optics networks. They have a ring-based topology, in other words, they are a collection of rings.
Rings Each customer is connected to one or more rings, and can send, receive and relay messages using an add-drop multiplexer (ADM). There are two bidirectional links connecting each customer to his neighboring customers on the ring. In a bidirectional ring the traffic between two nodes can be sent clockwise or counterclockwise. This topology enhances the survivability of the network: if a failure occurs on a link, the traffic originally transmitted on this link is sent on the surviving part of the ring. The traffic volume on any ring is limited by the link capacity, called B. The cost of this kind of network is defined by the cost of the different components used in it.
There are different ways to represent a network. In this paper, we consider two network topologies described by R. Aringhieri and M. Dell'Amico in 2005 in [2]. In both topologies the goal is to minimize the cost of the network while guaranteeing that the customers' demands, in term of bandwidth, are satisfied.
The model associated to these topologies are based on graphs. Given an undirected graph G = (V, E), V = {1, . . . , n}, the set of nodes represent the customers and E, the set of edges, stand for the customers' traffic demands. A communication between two customers u and v corresponds to the weighted edge (u, v) in the graph, where the weight d uv is the fixed traffic demand. Note that d uv = d vu , and that d uu = 0.
First topology (SRAP)
In the first topology, each customer is connected to exactly one ring. All of these local rings are connected, through a device called a digital cross connector (DXC), to a special ring, called the federal ring. The traffic between two rings is transmitted over this special ring. Like the other rings, the federal ring is limited by the capacity B. Because DXCs are much more expensive than ADMs, we want to have the smallest possible number of them. As there is a one-to-one relationship between a ring and its DXC, minimizing the number of rings is equivalent to minimizing the number of DXCs. The problem associated with this topology is called the SONET Ring Assignment Problem (SRAP) with capacity constraint. Figure 1 shows an example of this topology.
Model This topology is modeled by a decomposition of the set of nodes V into a partition, each subset of the partition representing a particular ring. Assigning a node to a subset of the partition in the model is then equivalent to assigning a customer to a ring.
Formally, let V_1, V_2, . . . , V_k be a partitioning of V into k subsets. Each customer in the subset V_i is assigned to the i-th local ring. As each customer is connected with an ADM to one and only one ring, and each local ring is connected to the federal ring with a DXC, there are exactly |V| ADMs and k DXCs used in the corresponding SRAP network.
Hence, minimizing the number of rings is equivalent to minimizing k subject to the following constraints:
∑_{u ∈ V_i} ∑_{v ∈ V, v ≠ u} d_uv ≤ B,   ∀i = 1, . . . , k   (1)

∑_{i=1}^{k−1} ∑_{j=i+1}^{k} ∑_{u ∈ V_i} ∑_{v ∈ V_j} d_uv ≤ B   (2)
Constraint (1) imposes that the total traffic routed on each ring does not exceed the capacity B. In other words, for a given ring i, it forces the total traffic demand of all the customers connected to this ring to be lower than or equal to the bandwidth. Constraint (2) forces the load of the federal ring to be less than or equal to B. To do so, it computes the sum of the traffic demands between all the pairs of customers connected to different rings. Figure 2 illustrates the relation between the node partitioning model and the first topology, SRAP. We can see that, because the nodes 1, 3, 5 and 6 are in the same partition, they are connected to the same ring. Similarly, the nodes 2, 4 and 7 are on the same ring. For this problem we can easily compute a lower bound k_lb, introduced in [6]. Indeed, we want to know the minimum number of partitions needed to route all the traffic. Reasoning on the total traffic amount, if we sum all the traffic demands of the graph and divide the result by the bandwidth B, we trivially obtain a minimum for the number of rings, that is, a lower bound on the number of partitions. Moreover, we cannot have a fractional number of partitions, which is why we take the ceiling of this fraction.
k_lb = ⌈ ( ∑_{u=1}^{n−1} ∑_{v=u+1}^{n} d_uv ) / B ⌉
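The two capacity checks (1)-(2) and the lower bound k_lb translate directly into code. The sketch below (Python, with a hypothetical data layout: d is the symmetric demand matrix and partition is a list of node sets) is only meant to make the formulas concrete.

import math

def srap_feasible(partition, d, B):
    """Check constraints (1) and (2) for a node partitioning V_1, ..., V_k."""
    for ring in partition:                                # constraint (1): each local ring
        load = sum(d[u][v] for u in ring for v in range(len(d)) if v != u)
        if load > B:
            return False
    federal = sum(d[u][v]                                 # constraint (2): federal ring
                  for i, Vi in enumerate(partition)
                  for Vj in partition[i + 1:]
                  for u in Vi for v in Vj)
    return federal <= B

def srap_lower_bound(d, B):
    """k_lb = ceil( sum of all pairwise demands / B )."""
    n = len(d)
    total = sum(d[u][v] for u in range(n) for v in range(u + 1, n))
    return math.ceil(total / B)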
Second topology (IDP)
In the second topology, customers can be connected to more than one ring. If two customers want to communicate, they have to be connected to the same ring. In this case, DXCs are no longer needed and neither is the federal ring. However, more ADMs are used than in the first topology. Here, the most expensive component is the ADM, although its price has significantly dropped over the past few years. It is important, in this topology, to have the smallest number of ADMs. This problem is called the Intra-ring Synchronous Optical Network Design Problem (IDP). Figure 3 illustrates this topology.
Model Contrary to the SRAP problem, there is no need to assign each customer to a particular ring because customers can be connected to several rings. Here the model is based on a partition of the edges of the graph, where a subset of the partition corresponds to a ring.
Formally, let E 1 , E 2 , . . . , E k be a partitioning of E in k subsets and Nodes(E i ) be the set of endpoint nodes of the edges in E i . Each subset of the partition corresponds to a ring, in other words, each customer in Nodes(E i ) is linked to the i-th ring. In the corresponding IDP network, there are
∑_{i=1}^{k} |Nodes(E_i)| ADMs and no DXC.
Hence, minimizing the number of ADMs is equivalent to minimizing
∑_{i=1}^{k} |Nodes(E_i)|, subject to

∑_{(u,v) ∈ E_i} d_uv ≤ B,   ∀i = 1, . . . , k   (3)
Constraint (3) imposes that the traffic in each ring does not exceed the capacity B. Figure 4 shows the relation between the edge partitioning and the second topology. If all the edges of a node are in the same partition, this node is connected to a single ring. For example, node 4 has all its edges in the same partition, so it is connected to only one ring. Conversely, the edges of node 2 belong to two different partitions, so it is connected to two rings. The SRAP problem can thus be seen as a node partitioning problem, and IDP as an edge partitioning problem on the graph described above, both subject to capacity constraints. These graph partitioning problems have been introduced in [6] and [7].
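To make the IDP objective and constraint (3) concrete, a short Python sketch could be the following (edge_partition is a hypothetical list of edge sets, each edge being a pair (u, v) with demand d[u][v]).

def idp_cost_and_feasibility(edge_partition, d, B):
    """Return (number of ADMs, feasible?) for an edge partitioning E_1, ..., E_k."""
    adms = 0
    feasible = True
    for Ei in edge_partition:
        nodes = {u for (u, v) in Ei} | {v for (u, v) in Ei}   # Nodes(E_i)
        adms += len(nodes)                                    # one ADM per node on the ring
        load = sum(d[u][v] for (u, v) in Ei)                  # constraint (3)
        if load > B:
            feasible = False
    return adms, feasible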
Both of these problems are N P-hard (see O. Goldschmidt, A. Laugier and E. Olinick in 2003, [6], and O. Goldschmidt, D. Hochbaum, A. Levin and E. Olinick in 2003, [7] for details). The principal constraint, the load constraint, is similar to a capacity constraint, yet different: a capacity constraint holds on the variables in the sum, while the load constraint holds on the variables below the sum. The question is how to choose the d uv (which are data) that count for the load.
Brief introduction to Local Search
In order to solve these two combinatorial optimization problems efficiently and quickly, we decided to use Local Search instead of an exact algorithm. Indeed, it allows an efficient exploration of the candidate solutions, by moving step by step from one solution to another.
Principles Local search is a metaheuristic based on iterative improvement of an objective function. It has been proved very efficient on many combinatorial optimization problems like the Maximum Clique Problem (L. Cavique, C. Rego and I. Themido in 2001 in [9]), or the Graph Coloring Problem (J.P. Hansen and J.K. Hao in 2002 in [10]). It can be used on problems formulated either as mere optimization problems, or as constrained optimization problems where the goal is to optimize an objective function while respecting some constraints. Local search algorithms perform local moves in the space of candidate solutions, called the search space, trying to improve the objective function, until a solution deemed optimal is found or a time bound is reached. Defining the neighborhood graph and the method to explore it are two of the key ingredients of local search algorithms.
The approach for solving combinatorial optimization problems with local search is very different from the systematic tree search of constraint and integer programming. Local search belongs to the family of metaheuristic algorithms, which are incomplete by nature and cannot prove optimality. However, on many problems it will isolate an optimal or high-quality solution in a very short time: local search sacrifices optimality guarantees for performance. In our case, we can compute the lower bound to either prove that the obtained solution is optimal, or estimate how far it is from optimality, hence local search is well suited.
Basic algorithm A local search algorithm starts from a candidate solution and then iteratively moves to a neighboring solution. This is only possible if a neighborhood relation is defined on the search space. Typically, for every candidate solution, we define a subset of the search space to be the neighborhood. Moves are performed from neighbors to neighbors, hence the name local search. The basic principle is to choose among the neighbors the one with the best value for the objective function. The problem is then that the algorithm will be stuck in local optima. Metaheuristics, such as Tabu Search, are added to avoid this. In Tabu Search, the last t visited configurations are left out of the search (t being a parameter of the algorithm): this ensures that the algorithm can escape local optima, at least at order t. A pseudo-code is given on figure 1.
Termination of local search can be based on a time bound. Another common choice is to terminate when the best solution found by the algorithm has not been improved in a given number of iterations. Local search algorithms are typically incomplete algorithms, as the search may stop even if the best solution found by the algorithm is not optimal. This can happen even if termination is due to the impossibility of improving the solution, as the optimal solution can lie far from the neighborhood of the solutions crossed by the algorithms.
Choose or construct an initial solution S_0 ;
S ← S_0 ;                                /* S is the current solution */
S* ← S_0 ;                               /* S* is the best solution so far */
bestValue ← objValue(S_0) ;              /* bestValue is the evaluation of S* */
T ← ∅ ;                                  /* T is the Tabu list */
while Termination criterion not satisfied do
    N(S) ← all the neighboring solutions of S ;    /* Neighborhood exploration */
    S ← a solution in N(S) minimizing the objective ;
    if objValue(S) < bestValue then                /* The solution found is better than S* */
        S* ← S ;
        bestValue ← objValue(S) ;
    end
    Record tabu for the current move in T (delete oldest entry if necessary) ;
end
Algorithm 1: Tabu Search
COMET
COMET is an object-oriented language created by Pascal Van Hentenryck and Laurent Michel. It has a constraint-based architecture that makes it easy to use when implementing local search algorithms and, more importantly, constraint-based local search algorithms (see [1] for details).
Moreover, it has a rich modeling language, including invariants, and a rich constraint language featuring numerical, logical and combinatorial constraints. Constraints and objective functions are differentiable objects maintaining the properties used to direct the graph exploration. The constraints maintain their violations and the objectives their evaluation. One of its most important particularities is that differentiable objects can be queried to determine incrementally the impact of local moves on their properties.
As we can see in constraint (1), the sums are over data (d_uv) and are determined by the variables (u ∈ V_i, v ∈ V, v ≠ u). We will rely on COMET's built-in invariants to define a constraint representing the load.
Greedy algorithms for SRAP
In [6] the SRAP problem is considered. The authors propose three greedy algorithms with different heuristics: the edge-based, the cut-based and the node-based. The first two algorithms start by assigning each node to a different ring. At each iteration they reduce the number of rings by merging two rings V_i and V_j if V_i ∪ V_j is a feasible ring for the capacity constraint. In the edge-based heuristic, the two rings joined by the maximum weight edge are merged, while in the cut-based heuristic, the two rings with the maximum total weight of the edges having one endpoint in each of them are merged. Algorithm 2 shows the pseudo-code for the edge-based heuristic.
F ← E ;                                   /* Initialize the set of edges that have not been used yet */
∀v ∈ V : ring(v) ← v ;                    /* Assign each node to a different ring */
while F ≠ ∅ do                            /* There are still some edges that have not been used */
    Choose a maximum capacity edge (u, v) ∈ F ;
    i ← ring(u), j ← ring(v) ;
    if V_i ∪ V_j is a feasible ring then  /* Merging the rings gives a feasible ring */
        ∀v ∈ V_j : ring(v) ← i ;
        F ← F \ {(x, y) | ring(x) = i, ring(y) = j} ;
    else
        F ← F \ {(u, v)} ;
    end
end
Algorithm 2: Edge-Based Heuristic
Given a value k, the node-based heuristic starts by randomly assigning a node to each of the k rings. At each iteration it first chooses the ring V_i with the largest unused capacity, then the unassigned node u with the largest traffic towards the nodes in V_i. Finally it adds u to the ring V_i disregarding the capacity constraint. The pseudo-code for this heuristic is shown in Algorithm 3. The node-based heuristic is run ten times. At each run, if a feasible solution is found, the corresponding value of k is kept and the next run takes k − 1 as input. The idea behind this is to try and improve the objective at each run.
U ← V ;                                  /* Initialize the set of nodes that have not been used yet */
for i = 1 to k do                        /* Assign k random nodes to the k partitions */
    Choose u ∈ U, V_i ← {u}, U ← U \ {u} ;
end
while U ≠ ∅ do                           /* There are still some unused nodes */
    Choose a minimum capacity ring V_i ;
    Choose u ∈ U maximizing ∑_{v ∈ V_i} d_uv ;
    ring(u) ← V_i, U ← U \ {u} ;         /* Assign u to V_i */
end
Algorithm 3: Node-Based Heuristic
To test these heuristics, the authors have randomly generated 160 instances. The edge-based and the cut-based heuristics are run first. If they have found a feasible solution and obtained a value for k, the node-based heuristic is then run with the smallest value obtained for k as input. If they have not, the node-based heuristic takes as input a random value from the range [k_lb, |V|], where k_lb is the lower bound described previously.
MIP and Branch and Cut for IDP
A special case of the IDP problem where all the edges have the same weight, is studied in [7]. This special case is called the K-Edge-Partitioning problem. Given a simple undirected graph G = (V, E) and a value k < |E|, we want to find a partitioning of E, {E 1 , E 2 , . . . E l } such that ∀i ∈ {1, . . . , l}, |E i | ≤ k. The authors present two linear-time-approximation algorithms with fixed performance guarantee. Y. Lee, H. Sherali, J. Han and S. Kim in 2000 ( [8]), have studied the IDP problem with an additional constraint such that for each ring i, |Nodes(E i )| ≤ R. The authors present a mixed-integer programming model for the problem, and develop a branch-and-cut algorithm. They also introduce a heuristic to generate an initial feasible solution, and another one to improve the initial solution. To initialize a ring, the heuristic first, adds the node u with the maximum graph degree, with respect to unassigned edges, and then adds to the partition the edge [u, v] such that the graph degree of v is maximum. It iteratively increases the partition by choosing a node such that the total traffic does not exceed the limit B. A set of 40 instances is generated to test these heuristics and the branch-and-cut.
Local Search for SRAP and IDP
More recently, in [2], these two problems have been studied. As seen previously, local search requires a neighborhood to be defined in order to choose the next solution. The authors of [2] use the same neighborhood for all of their metaheuristics: it tries to assign an item x from a partition P_1 to another partition P_2. The authors also consider the neighborhood obtained by swapping two items, x and y, from two different partitions P_1 and P_2. But instead of trying all the pairs of items, they only try to swap the two items if the solution resulting from the assignment of x to the partition P_2 is unfeasible.
In order to compute a starting solution for the IDP problem, the authors describe four different heuristics. The first heuristic introduced in [2] orders the edges by decreasing weight; at each iteration it tries to assign the heaviest edge not already assigned to the ring with the smallest residual capacity with respect to the capacity constraint. If no assignment is possible, the current edge is assigned to a new ring. The second one sorts the edges by increasing weight and tries to assign the current edge to the current ring if the capacity constraint is respected; otherwise the ring is no longer considered and a new ring is initialized with the current edge.
The two other methods described in [2] are based on the idea that, to save ADMs, a good solution should have very dense rings. They are both greedy and rely on a clique algorithm. In graph theory, a clique in an undirected graph G = (V, E) is a subset of the vertex set C ⊆ V such that every two vertices in C are connected by an edge. Finding a clique is not that easy; one way to do it is to use a "Union-Find" strategy: find two cliques A and B such that each node in A is adjacent to each node in B, then merge the two cliques (Union). The associated heuristic starts by considering each node as a clique of size one, and merges two cliques into a larger clique until no more merges are possible.
In the third method, Clique-BF, the algorithm iteratively selects a clique of unassigned edges with total traffic less than or equal to B. It then assigns it to the ring that minimizes the residual capacity and, if possible, preserves feasibility. If neither is possible, it places the clique in a new ring. Algorithm 4 shows the pseudo-code associated with this heuristic. The last algorithm, Cycle-BF, is like the previous method, but instead of looking for a clique at each iteration, it tries to find a cycle with as many chords as possible.
They also introduce four objective functions, one of which depends on the current and the next status of the search. Let z 0 be the basic objective function counting the number of rings of a solution for SRAP, and the total number of ADMs for IDP, and let BN be the highest load of a ring in the current solution.
U ← E ; r ← 0 ;
while U ≠ ∅ do
    Heuristically find a clique C ⊂ U such that weight(C) ≤ B ;
    /* Search a ring such that the weight of the ring plus the weight of the clique
       does not exceed B and is the biggest possible */
    j ← min{B − weight(E_i) − weight(C) : i ∈ {1, . . . , k}, B − weight(E_i) − weight(C) ≥ 0} ;
    if j = null then
        r ← r + 1 ; j ← r ;
    end
    E_j ← E_j ∪ C ; U ← U \ C ;
end
Algorithm 4: Clique-BF
z_1 = z_0 + max{0, BN − B}

z_2 = z_1 + α · RingLoad(r) if the last move has created a new ring r, and z_2 = z_1 otherwise

z_3 = z_0 · B + BN

z_4 is a variable objective function having different expressions for different transitions from the current status of the search to the next one; one of its expressions is z_4a = z_0 · B + BN (= z_3).

The first function z_1 minimizes the basic function z_0. When BN > B, it also penalizes the unfeasible solutions, by taking into account only one ring, the one with the highest overload. In addition to the penalty for the unfeasible solutions, z_2 penalizes the moves that increase the number of rings. Function z_3 encourages solutions with small z_0, while among all the solutions with the same value of z_0, it prefers the ones in which the rings have similar loads. The last objective function z_4 is an adaptive technique that modifies the evaluation according to the status of the search.
Our work
In this section we present the different tools needed to implement the Constraints Based Local Search algorithms for SRAP and IDP. First we introduce the starting solution, then the neighborhoods and the objective functions. Finally we present the different local search algorithms.
Starting solution
Most of the time, local search starts from a random initial solution. However, we have tested other possibilities and two other options proved to be more efficient.
The best initializing method assigns all the items, nodes for SRAP or edges for IDP, to the same partition. This solution is certainly unfeasible, as all the traffic is on only one ring. This biases the search towards solutions with a minimum value for the cost and a very bad value for the capacity constraints' violations. Astonishingly, this is the option that gave us the best results on large instances.
We had good confidence in another one which first computes the lower bound k lb (described in section 2) and randomly assigns all the items to exactly k lb partitions. The idea was to let the Local Search reduce the number of violations. This starting solution was good on small instances and not so good on large ones. It was the same with a random solution, which corresponds, for these problems, to a solution where all the items are randomly assigned to a partition.
Neighborhoods
In a generic partitioning problem there are usually two basic neighborhoods. From a given solution, we can move an object from a subset to another subset or swap two objects assigned to two different subsets. For SRAP a neighboring solution is produced by moving a node from a ring to another (including a new one) or by swapping two nodes assigned to two different rings. The same kind of neighborhood can be used for IDP: moving an edge from a ring to another or swapping two edges.
In some cases it is more efficient to restrict the neighborhood to the feasible space. We have tested different variants of the basic neighborhood applying this idea, by choosing the moved item in the worst partition (w.r.t. the capacity constraint) and even by assigning it to the partition with the lowest load. However, these variants appear to be less efficient than the basic neighborhood. As will be seen later, it seems that on these problems it is necessary to keep the search as broad as possible.
Objective function
We have compared the four objective functions described in [2] (see Section 2) to a new one we have defined: z 5 .
z 5 = z 0 + ∑ p ∈ partitions violations(p) where
partitions are all the rings (in the case of the SRAP problem the federal ring is also included),
violations(p) = capacity(p) − B if the load of p exceed B 0 otherwise.
This objective function minimizes the basic function z_0 and penalizes the unfeasible solutions, but contrary to the previous objectives, this penalty is based on all the constraints. We consider that every constraint is violated by a certain amount (its current load minus B). By summing all the violations of the current solution, we obtain the total violation over all the constraints, and we can say precisely how far we are from a feasible solution. If the current solution is feasible,

∑_{p ∈ partitions} violations(p) = 0.
This objective also has the nice property of being purely local, depending only on the current solution and not on the other moves. Notice that a feasible solution with 4 rings will be preferred to an unfeasible solution with 3 rings, as z_0 is much smaller than the load of a ring.
Local Search
We have proposed a new algorithm called DMN2 which proved to be efficient on both problems. It is a variant of the Diversification by Multiple Neighborhoods (DMN) proposed in [2]. DMN is based on Tabu Search, and adds a mechanism to perform diversification when the search is going round and round without improving the objective (even though it is not at a local minimum). This replaces the classical random restart steps. We refine this particular mechanism by proposing several ways of escaping such areas.
More precisely, on our problems, after a series of consecutive non-improving iterations, the DMN algorithm empties a partition by moving all its items to another partition, disregarding the capacity constraint and locally minimizing the objective function. There is a particular case for our function z_5, because it integrates the capacity constraints: the "z_5" version of DMN we have implemented moves the items to another partition minimizing z_5. The results in [2] show a general trend on SRAP and IDP: the more diversification is performed, the better the results. Following this idea, we propose different ways of performing the DMN step, which gives our algorithm DMN2. In DMN2, when the search needs to be diversified, it randomly chooses among three diversification methods (d_1, d_2, d_3). The first method, d_1, is the diversification used in DMN. The second one, d_2, generates a random solution, in the same way as a classic random restart. Finally, d_3 randomly chooses a number m in the range [1, k], where k is the number of rings, and applies m random moves.
In the end, our general algorithm starts with a solution where all the items are in the same partition. Then it applies one of the local search algorithms described before. If the solution returned by the local search is feasible but its objective value is greater than the lower bound k_lb, it empties one partition by randomly assigning all its items to another one. It then runs the local search again, until it finds a solution whose objective value equals k_lb or until the time limit is exceeded.
Results
The objective functions and the metaheuristics, respectively described in Section 3.3 and Section 3.4, have been coded in COMET and tested on an Intel-based, dual-core, dual-processor Dell PowerEdge 1855 blade server running under Linux. The instances used are from the literature.
Benchmark
To test the algorithms, we used two sets of instances. The first one has been introduced in [6]. They have generated 80 geometric instances, based on the fact that customers tend to communicate more with their close neighbors, and 80 random instances. These subsets have both 40 low-demand instances, with a ring capacity B = 155 Mbs, and 40 high-demand instances, where B = 622 Mbs. The traffic demand between two customers, u and v, is determined by a discrete uniform random variable corresponding to the number of T1 lines required for the anticipated volume of traffic between u and v. A T1 line has an approximate capacity of 1.5 Mbs. The number of T1 lines is randomly picked in the interval [3,7], for low-demand cases, while it is selected from the range [11,17], for the high-demand cases. The generated graphs have |V | ∈ {15, 25, 30, 50}. In the 160 instances, generated by O. Goldschmidt, A. Laugier and E. Olinick in 2003, 42 have been proven to be unfeasible by R. Aringhieri and M. Dell'Amico using CPLEX 8.0 (see [2]).
The second set of instances has been presented in [8]. They have generated 40 instances with a ring capacity B = 48× T1 lines and the number of T1 lines required for the traffic between two customers has been chosen in the interval [1,30]. The considered graphs have |V | ∈ {15, 20, 25} and |E| = {30, 35}. Most of the instances in this set are unfeasible.
Note that all the instances are feasible for the IDP problem, since we can always assign each demand to a different partition.
Computational Results
We now describe the results obtained for SRAP and IDP on the above two benchmark sets by the algorithms Basic Tabu Search, eXploring Tabu Search, DMN, DMN2 and Scatter Search (SS). For each algorithm we consider the five objective functions of Section 3.3, but for the SS only three of them can be used.
We gave a time limit of 5 minutes to each run of an algorithm; however, we observed that the average time to find the best solution is less than 1 minute. Obviously, the algorithm terminates if the current best solution found is equal to the lower bound k_lb. In case the lower bound is not reached, we define as a high-quality solution a solution for which the evaluation of the objective is equal to k_lb + 1. Recall that objective functions z_2 and z_3 cannot be applied with the Scatter Search. Figure 5 shows, for each algorithm, only the number of optimal solutions found with the objective function z_5. With the other objectives, the number of optimal solutions found is zero, which is why they are not shown on the diagram; the other objectives nevertheless found good solutions. Our conclusion is that the other functions may not discriminate enough between the different solutions. For this problem, we can see that the eXploring Tabu Search does not give good results. This can be due to a too early "backtracking": after a fixed number of consecutive non-improving iterations, the search goes back to a previous configuration and applies the second best move. In the case of the IDP problem, it can take many more iterations to improve the value of the objective function than for the SRAP problem. Indeed, the value of the objective function depends on the number of partitions to which a customer belongs, while an iteration moves only one edge; reducing the objective value by only one may require moving several edges. Figure 6 shows, for each algorithm and each objective function, the number of instances for which the search has found an optimal solution, i.e. a solution with k_lb partitions (in dark gray on the diagram); the number of those for which the best feasible solution found has k_lb + 1 partitions (in gray); and, in light gray, the number of instances for which it has found a feasible solution with more than k_lb + 1 partitions. From the objective functions' perspective, we can see that z_4, supposed to be the most improving one, is not that good in the COMET implementation. However, the one we added, z_5, is always better than the other ones.
Against all odds, the Basic Tabu Search on all the objective functions, is as good as the other search algorithms. Still on the local search algorithms, we can see that the second version of the Diversification by Multiple Neighborhoods, is much better than the first one with the objectives z 3 and z 4 .
For the details of our results see the report [11].
Conclusion
The purpose of this work was to reproduce with COMET the results obtained for the SONET Design Problems by R. Aringhieri and M. Dell'Amico in 2005 in ANSI C (see [2] for details).
We have implemented in COMET the algorithms and the objective functions described in this paper. We found it relevant to add a variant of one of their local search algorithms and a new objective function. Unfortunately, we cannot exactly compare our results to theirs because the set of 230 instances they have generated is not available. However, for the IDP problem, we obtained better results on 15 of the 160 instances compared, and similar results on the other instances. Unfortunately, we did not find their results for the SRAP problem. Still, for the SRAP problem, compared to the results obtained by O. Goldschmidt, A. Laugier and E. Olinick in 2003 [6], we obtained better results: we have more instances for which the algorithm reaches the lower bound and fewer unfeasible instances. It would be interesting to have all the instances and the results in order to fully compare our results.
In the end, we can exhibit two main observations. Firstly, for these two problems, the more an algorithm uses diversification, the better it is. Actually, we have tried different intensification methods for the local search algorithms but none of them improved the results; worse, they gave us pretty bad results.
Secondly, based on our results, our objective function implemented in COMET finds more good solutions than the other ones. It is a constraint-based objective function taking into account the violation of every constraint; hence it has the advantage of being both more generic and more precise than the dedicated functions, while giving better results.
| 6,079 |
0910.1255
|
2086153481
|
This paper presents a new method and a constraint-based objective function to solve two problems related to the design of optical telecommunication networks, namely the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP). These network topology problems can be represented as a graph partitioning with capacity constraints as shown in previous works. We present here a new objective function and a new local search algorithm to solve these problems. Experiments conducted in C
|
Y. Lee, H. Sherali, J. Han and S. Kim in 2000 ( @cite_7 ) studied the IDP problem with an additional constraint such that for each ring @math , @math . The authors present a mixed-integer programming model for the problem, and develop a branch-and-cut algorithm. They also introduce a heuristic to generate an initial feasible solution, and another one to improve it. To initialize a ring, the heuristic first adds the node @math with the maximum graph degree, with respect to unassigned edges, and then adds to the partition the edge @math such that the graph degree of @math is maximum. It iteratively grows the partition by choosing a node such that the total traffic does not exceed the limit @math . A set of 40 instances is generated to test these heuristics and the branch-and-cut.
|
{
"abstract": [
"In this paper, we deal with a network design problem arising from the deployment of synchronous optical networks (SONET), a standard of transmission using optical fiber technology. The problem is to find an optimal clustering of traffic demands in the network such that the total number of node assignments (and, hence, add-drop multiplexer equipment requirements) is minimized, while satisfying the ring capacity and node cardinality constraints. This problem can be conceptualized as an edge-capacitated graph partitioning problem with node cardinality constraints. We formulate the problem as a mixed-integer programming model and develop a new branch-and-cut algorithm along with preprocessing routines for optimally solving the problem. We also prescribe an effective heuristic procedure. Promising computational results are obtained using the proposed method."
],
"cite_N": [
"@cite_7"
],
"mid": [
"2101186575"
]
}
|
Sonet Network Design Problems
|
This paper presents a new algorithm and an objective function to solve two real-world combinatorial optimization problems from the field of network design. These two problems, the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP), have been shown NP-hard and have already been solved by combinatorial optimization techniques. This work extends the seminal ideas introduced by R. Aringhieri and M. Dell'Amico in 2005 in [2]. This paper is organized as follows. In the remainder of this section we introduce the two problems we have worked on, and the local search techniques which have been used to solve them. We also introduce the models of the two problems in a constrained optimization format. We then present the previous works on SRAP and IDP in Section 2. Section 3 describes the key ingredients necessary to implement the local search algorithms. Finally, the results are shown in Section 4.
Optical networks topologies
During the last few years the number of users of internet-based applications has increased exponentially, and so has the demand for bandwidth. To enable fast transmission of large quantities of data, fiber optic technology is the current solution in telecommunications.
The Synchronous Optical NETwork (SONET) in North America and Synchronous Digital Hierarchy (SDH) in Europe and Japan are the standard designs for fiber optics networks. They have a ring-based topology, in other words, they are a collection of rings.
Rings Each customer is connected to one or more rings, and can send, receive and relay messages using an add-drop multiplexer (ADM). There are two bidirectional links connecting each customer to its neighboring customers on the ring. In a bidirectional ring the traffic between two nodes can be sent clockwise or counterclockwise. This topology allows an enhanced survivability of the network: if a failure occurs on a link, the traffic originally transmitted on this link is sent on the surviving part of the ring. The traffic volume on any ring is limited by the link capacity, called B. The cost of this kind of network is defined by the cost of the different components used in it.
There are different ways to represent a network. In this paper, we consider two network topologies described by R. Aringhieri and M. Dell'Amico in 2005 in [2]. In both topologies the goal is to minimize the cost of the network while guaranteeing that the customers' demands, in term of bandwidth, are satisfied.
The models associated with these topologies are based on graphs. Given an undirected graph G = (V, E), V = {1, . . . , n}, the set of nodes represents the customers and E, the set of edges, stands for the customers' traffic demands. A communication between two customers u and v corresponds to the weighted edge (u, v) in the graph, where the weight d_uv is the fixed traffic demand. Note that d_uv = d_vu, and that d_uu = 0.
First topology (SRAP)
In the first topology, each customer is connected to exactly one ring. All of these local rings are connected with a device called a digital cross connector (DXC) to a special ring, called the federal ring. The traffic between two rings is transmitted over this special ring. Like the other rings, the federal ring is limited by the capacity B. Because DXCs are much more expensive than ADMs, we want to have the smallest possible number of them. As there is a one-to-one relationship between a ring and its DXC, minimizing the number of rings is equivalent to minimizing the number of DXCs. The problem associated to this topology is called the SONET Ring Assignment Problem (SRAP) with capacity constraint. Figure 1 shows an example of this topology.
Model This topology is modeled by a decomposition of the set of nodes V into a partition, each subset of the partition representing a particular ring. Assigning a node to a subset of the partition in the model is then equivalent to assigning a customer to a ring.
Formally, let V_1, V_2, . . . , V_k be a partitioning of V into k subsets. Each customer in the subset V_i is assigned to the i-th local ring. As each customer is connected with an ADM to one and only one ring, and each local ring is connected to the federal ring with a DXC, there are exactly |V| ADMs and k DXCs used in the corresponding SRAP network.
Hence, minimizing the number of rings is equivalent to minimizing k subject to the following constraints:
\sum_{u \in V_i} \sum_{v \in V,\, v \neq u} d_{uv} \le B, \quad \forall i = 1, \ldots, k \qquad (1)
\sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \sum_{u \in V_i} \sum_{v \in V_j} d_{uv} \le B \qquad (2)
Constraint (1) imposes that the total traffic routed on each ring does not exceed the capacity B. In other words, for a given ring i, it forces the total traffic demands of all the customers connected to this ring to be lower than or equal to the bandwidth. Constraint (2) forces the load of the federal ring to be less than or equal to B. To do so, it sums the traffic demands between all the pairs of customers connected to different rings. Figure 2 illustrates the relation between the node partitioning model and the first topology SRAP. We can see that, because the nodes 1, 3, 5 and 6 are in the same partition, they are connected to the same ring. Similarly, the nodes 2, 4 and 7 are on the same ring. For this problem we can easily compute a lower bound k_lb, introduced in [6]. In fact, we want to know the minimum number of partitions needed to route all the traffic. Reasoning on the total traffic amount, if we sum all the traffic demands of the graph and divide the result by the bandwidth B, we trivially obtain a minimum for the number of rings, that is, a lower bound on the number of partitions. Moreover, we cannot have a fractional number of partitions, which is why we round this fraction up:
k_{lb} = \left\lceil \frac{\sum_{u=1}^{n-1} \sum_{v=u+1}^{n} d_{uv}}{B} \right\rceil
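To make the model concrete, the following Python sketch (an illustration only, not the COMET model used in our experiments) checks constraints (1) and (2) for a given node partition and computes the lower bound k_lb; the demand matrix and the partition are made-up data.

import math

def srap_check_and_bound(d, partition, B):
    """d[u][v]: symmetric traffic demands; partition: list of sets of nodes."""
    n = len(d)
    # Constraint (1): the total traffic touching each ring's nodes must not exceed B.
    for ring in partition:
        load = sum(d[u][v] for u in ring for v in range(n) if v != u)
        if load > B:
            return False, None
    # Constraint (2): the inter-ring traffic (federal ring load) must not exceed B.
    federal = sum(d[u][v]
                  for i, Vi in enumerate(partition)
                  for Vj in partition[i + 1:]
                  for u in Vi for v in Vj)
    if federal > B:
        return False, None
    # Lower bound k_lb: total demand divided by B, rounded up.
    total = sum(d[u][v] for u in range(n) for v in range(u + 1, n))
    return True, math.ceil(total / B)

# Tiny illustrative instance.
d = [[0, 3, 0], [3, 0, 5], [0, 5, 0]]
print(srap_check_and_bound(d, [{0, 1}, {2}], B=12))   # -> (True, 1)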
Second topology (IDP)
In the second topology, customers can be connected to more than one ring. If two customers want to communicate, they have to be connected to the same ring. In this case, the DXCs are no longer needed and neither is the federal ring. However, more ADMs are used than in the first topology. Here the most expensive component is the ADM, although its price has significantly dropped over the past few years. It is therefore important, in this topology, to have the smallest number of ADMs. This problem is called the Intra-ring Synchronous Optical Network Design Problem (IDP). Figure 3 illustrates this topology.
Model Contrary to the SRAP problem, there is no need to assign each customer to a particular ring because customers can be connected to several rings. Here the model is based on a partition of the edges of the graph, where a subset of the partition corresponds to a ring.
Formally, let E_1, E_2, . . . , E_k be a partitioning of E into k subsets and Nodes(E_i) be the set of endpoint nodes of the edges in E_i. Each subset of the partition corresponds to a ring; in other words, each customer in Nodes(E_i) is linked to the i-th ring. In the corresponding IDP network, there are \sum_{i=1}^{k} |Nodes(E_i)| ADMs and no DXC.
Hence, minimizing the number of ADMs is equivalent to minimizing
\sum_{i=1}^{k} |Nodes(E_i)| \quad \text{subject to} \quad \sum_{(u,v) \in E_i} d_{uv} \le B, \quad \forall i = 1, \ldots, k \qquad (3)
Constraint (3) imposes that the traffic in each ring does not exceed the capacity B. Figure 4 shows the relation between the edge partitioning and the second topology. If all the edges of a node are in the same partition, this node is connected to a single ring. We can see, for example, that the node 4 has all its edges in the same partition and is therefore connected to only one ring. Conversely, the edges of the node 2 are in two different partitions, so it is connected to two rings. The SRAP problem can thus be seen as a node partitioning problem, whereas IDP can be seen as an edge partitioning problem on the graph described above, both subject to capacity constraints. These graph partitioning problems have been introduced in [6] and [7].
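As a small illustration (again Python, not COMET; the demands and the partition below are invented), the cost and the feasibility of an IDP solution can be evaluated directly from the edge partition:

def idp_cost_and_check(edge_partition, d, B):
    """edge_partition: list of sets of edges (u, v); d: dict mapping edges to demands."""
    adms = 0
    for ring_edges in edge_partition:
        # Constraint (3): the total demand routed on the ring must not exceed B.
        if sum(d[e] for e in ring_edges) > B:
            return None                      # unfeasible solution
        # One ADM per distinct endpoint appearing on this ring.
        adms += len({u for e in ring_edges for u in e})
    return adms

d = {(1, 2): 4, (2, 3): 6, (1, 3): 2}
print(idp_cost_and_check([{(1, 2), (1, 3)}, {(2, 3)}], d, B=10))   # -> 5 ADMs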
Both of these problems are NP-hard (see O. Goldschmidt, A. Laugier and E. Olinick in 2003, [6], and O. Goldschmidt, D. Hochbaum, A. Levin and E. Olinick in 2003, [7] for details). The principal constraint, the load constraint, is similar to a capacity constraint, yet different: a capacity constraint holds on the variables in the sum, while the load constraint holds on the variables below the sum. The question is how to choose the d_uv (which are data) that count for the load.
Brief introduction of Local Search
In order to solve these two combinatorial optimization problems efficiently and quickly, we decided to use Local Search instead of an exact algorithm. Indeed, it allows an efficient exploration of the candidate solutions by moving step by step from one solution to another.
Principles Local search is a metaheuristic based on iterative improvement of an objective function. It has proved very efficient on many combinatorial optimization problems such as the Maximum Clique Problem (L. Cavique, C. Rego and I. Themido in 2001 in [9]) or the Graph Coloring Problem (J.P. Hansen and J.K. Hao in 2002 in [10]). It can be used on problems formulated either as mere optimization problems, or as constrained optimization problems where the goal is to optimize an objective function while respecting some constraints. Local search algorithms perform local moves in the space of candidate solutions, called the search space, trying to improve the objective function, until a solution deemed optimal is found or a time bound is reached. Defining the neighborhood graph and the method to explore it are two of the key ingredients of local search algorithms.
The approach for solving combinatorial optimization problems with local search is very different from the systematic tree search of constraint and integer programming. Local search belongs to the family of metaheuristic algorithms, which are incomplete by nature and cannot prove optimality. However, on many problems it will isolate an optimal or high-quality solution in a very short time: local search sacrifices optimality guarantees for performance. In our case, we can compute the lower bound to either prove that the obtained solution is optimal, or estimate its optimality, hence local search is well suited.
Basic algorithm A local search algorithm starts from a candidate solution and then iteratively moves to a neighboring solution. This is only possible if a neighborhood relation is defined on the search space. Typically, for every candidate solution, we define a subset of the search space to be its neighborhood. Moves are performed from neighbors to neighbors, hence the name local search. The basic principle is to choose among the neighbors the one with the best value for the objective function. The problem is then that the algorithm can get stuck in local optima. Metaheuristics, such as Tabu Search, are added to avoid this. In Tabu Search, the last t visited configurations are left out of the search (t being a parameter of the algorithm): this ensures that the algorithm can escape local optima, at least at order t. A pseudo-code is given in Algorithm 1.
Termination of local search can be based on a time bound. Another common choice is to terminate when the best solution found by the algorithm has not been improved in a given number of iterations. Local search algorithms are typically incomplete algorithms, as the search may stop even if the best solution found by the algorithm is not optimal. This can happen even if termination is due to the impossibility of improving the solution, as the optimal solution can lie far from the neighborhood of the solutions crossed by the algorithms.
Choose or construct an initial solution S_0 ;
S ← S_0 ;                    /* S is the current solution */
S* ← S_0 ;                   /* S* is the best solution so far */
bestValue ← objValue(S_0) ;  /* bestValue is the evaluation of S* */
T ← ∅ ;                      /* T is the Tabu list */
while Termination criterion not satisfied do
    N(S) ← all the neighboring solutions of S ;   /* Neighborhood exploration */
    S ← a solution in N(S) minimizing the objective ;
    if objValue(S) < bestValue then               /* The solution found is better than S* */
        S* ← S ;
        bestValue ← objValue(S) ;
    end
    Record tabu for the current move in T (delete oldest entry if necessary) ;
end
Algorithm 1: Tabu Search
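For readers who prefer executable code, here is a minimal generic Python skeleton of the same scheme (a sketch, not our COMET implementation; the callbacks `neighbors` and `obj_value` are placeholders to be supplied by the caller):

from collections import deque

def tabu_search(initial, neighbors, obj_value, max_iters=1000, tenure=20):
    """Generic best-improvement tabu search skeleton.
    neighbors(s) must yield (move, new_solution) pairs; moves must be hashable."""
    current, best = initial, initial
    best_value = obj_value(initial)
    tabu = deque(maxlen=tenure)               # forget moves older than `tenure` iterations
    for _ in range(max_iters):
        candidates = [(obj_value(s), m, s) for m, s in neighbors(current)
                      if m not in tabu or obj_value(s) < best_value]   # aspiration criterion
        if not candidates:
            break
        value, move, current = min(candidates, key=lambda c: c[0])
        tabu.append(move)                     # record the move as tabu
        if value < best_value:                # keep track of the incumbent S*
            best, best_value = current, value
    return best, best_value

# Toy usage on a one-dimensional integer problem (purely illustrative).
neigh = lambda x: [((x, x + d), x + d) for d in (-1, 1)]
print(tabu_search(10, neigh, lambda x: abs(x - 3)))   # -> (3, 0)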
COMET
COMET is an object-oriented language created by Pascal Van Hentenryck and Laurent Michel. It has a constraint-based architecture that makes it easy to use when implementing local search algorithms, and more important, constraint-based local search algorithms (see [1] for details).
Moreover, it has a rich modeling language, including invariants, and a rich constraint language featuring numerical, logical and combinatorial constraints. Constraints and objective functions are differentiable objects maintaining the properties used to direct the graph exploration. The constraints maintain their violations and the objectives their evaluation. One of its most important features is that differentiable objects can be queried to determine incrementally the impact of local moves on their properties.
As we can see in constraint (1), the sums are over data (d_uv) and are determined by the variables (u ∈ V_i, v ∈ V, v ≠ u). We will rely on COMET's built-in invariants to define a constraint representing the load.
Greedy algorithms for SRAP
In [6] the SRAP problem is considered. The authors propose three greedy algorithms with different heuristics: the edge-based, the cut-based and the node-based. The first two algorithms start by assigning each node to a different ring. At each iteration they reduce the number of rings by merging two rings V_i and V_j if V_i ∪ V_j is a feasible ring with respect to the capacity constraint. In the edge-based heuristic, the two rings joined by the maximum weight edge are merged, while in the cut-based heuristic, the two rings with the maximum total weight of the edges having one endpoint in each of them are merged. Algorithm 2 shows the pseudo-code for the edge-based heuristic.
F ← E ;                             /* Initialize the set of edges that have not been used yet */
∀v ∈ V : ring(v) ← v ;              /* Assign each node to a different ring */
while F ≠ ∅ do                      /* There are still edges that have not been used */
    Choose a maximum capacity edge (u, v) ∈ F ;
    i ← ring(u), j ← ring(v) ;
    if V_i ∪ V_j is a feasible ring then    /* Merging the rings gives a feasible ring */
        ∀v ∈ V_j : ring(v) ← i ;
        F ← F \ {(x, y) | ring(x) = i, ring(y) = j} ;
    else
        F ← F \ {(u, v)} ;
    end
end
Algorithm 2: Edge-Based Heuristic

Given a value k, the node-based heuristic starts by randomly assigning a node to each of the k rings. At each iteration it first chooses the ring V_i with the largest unused capacity, then the unassigned node u with the largest traffic with the nodes in V_i. Finally it adds u to the ring V_i, disregarding the capacity constraint. The pseudo-code for this heuristic is shown in Algorithm 3. The node-based heuristic is run ten times. At each run, if a feasible solution is found, the corresponding value of k is kept and the next run takes k − 1 as an input. The idea behind this is to try and improve the objective at each run.
U ← V ;                             /* Initialize the set of nodes that have not been used yet */
for i = 1 to k do                   /* Assign k random nodes to the k partitions */
    Choose u ∈ U ; V_i ← {u} ; U ← U \ {u} ;
end
while U ≠ ∅ do                      /* There are some unused nodes */
    Choose a minimum capacity ring V_i ;
    Choose u ∈ U maximizing ∑_{v ∈ V_i} d_{uv} ;
    ring(u) ← V_i ; U ← U \ {u} ;   /* Assign u to V_i */
end
Algorithm 3: Node-Based Heuristic
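The edge-based greedy of Algorithm 2 above can also be sketched in a few lines of Python (illustrative only; the demand matrix is invented and the ring load is recomputed from scratch rather than maintained incrementally):

def edge_based_srap(d, B):
    """Sketch of the edge-based greedy (Algorithm 2); d is a symmetric demand matrix."""
    n = len(d)
    ring = list(range(n))                          # start with one ring per node
    F = {(u, v) for u in range(n) for v in range(u + 1, n) if d[u][v] > 0}

    def load(members):
        # Constraint (1): all traffic touching the ring's nodes.
        return sum(d[u][v] for u in members for v in range(n) if v != u)

    while F:
        u, v = max(F, key=lambda e: d[e[0]][e[1]])  # heaviest unused edge
        i, j = ring[u], ring[v]
        merged = [w for w in range(n) if ring[w] in (i, j)]
        if i != j and load(merged) <= B:
            for w in merged:                        # merge ring j into ring i
                ring[w] = i
            # Edges now internal to a ring can no longer trigger a merge.
            F = {(x, y) for (x, y) in F if ring[x] != ring[y]}
        else:
            F.discard((u, v))
    return ring

d = [[0, 3, 1], [3, 0, 5], [1, 5, 0]]
print(edge_based_srap(d, B=12))   # -> ring index of each node, e.g. [0, 0, 2]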
To test these heuristics, the authors have randomly generated 160 instances. The edge-based and the cut-based heuristics are run first. If they find a feasible solution and obtain a value for k, the node-based heuristic is then run with the smallest value obtained for k as input. Otherwise, the node-based heuristic takes as input a random value from the range [k_lb, |V|], where k_lb is the lower bound described previously.
MIP and Branch and Cut for IDP
A special case of the IDP problem, where all the edges have the same weight, is studied in [7]. This special case is called the K-Edge-Partitioning problem: given a simple undirected graph G = (V, E) and a value k < |E|, we want to find a partitioning of E, {E_1, E_2, . . . , E_l}, such that ∀i ∈ {1, . . . , l}, |E_i| ≤ k. The authors present two linear-time approximation algorithms with fixed performance guarantee. Y. Lee, H. Sherali, J. Han and S. Kim in 2000 ([8]) have studied the IDP problem with an additional constraint such that for each ring i, |Nodes(E_i)| ≤ R. The authors present a mixed-integer programming model for the problem, and develop a branch-and-cut algorithm. They also introduce a heuristic to generate an initial feasible solution, and another one to improve it. To initialize a ring, the heuristic first adds the node u with the maximum graph degree, with respect to unassigned edges, and then adds to the partition the edge [u, v] such that the graph degree of v is maximum. It iteratively grows the partition by choosing a node such that the total traffic does not exceed the limit B. A set of 40 instances is generated to test these heuristics and the branch-and-cut.
Local Search for SRAP and IDP
More recently, both problems have been studied in [2]. We saw previously that with local search it is necessary to define a neighborhood in order to choose the next solution. The authors of [2] use the same neighborhood for all of their metaheuristics. It tries to assign an item x from a partition P_1 to another partition P_2. The authors also consider the neighborhood obtained by swapping two items, x and y, from two different partitions, P_1 and P_2. But instead of trying all the pairs of items, they only try to swap the two items if the solution resulting from the assignment of x to the partition P_2 is unfeasible.
In order to compute a starting solution for the IDP problem, the authors describe four different heuristics. The first heuristic introduced in [2] orders the edges by decreasing weight; at each iteration it tries to assign the heaviest unassigned edge to the ring with the smallest residual capacity with respect to the capacity constraint. If no assignment is possible, the current edge is assigned to a new ring. The second one sorts the edges by increasing weight and tries to assign the current edge to the current ring if the capacity constraint is respected; otherwise the ring is no longer considered and a new ring is initialized with the current edge.
The two other methods described in [2] are based on the idea that, to save ADMs, a good solution should have very dense rings. They are both greedy and rely on a clique algorithm. In graph theory, a clique in an undirected graph G = (V, E) is a subset of the vertex set C ⊆ V such that every two vertices in C are connected by an edge. Finding a clique is not that easy; one way to do it is to use a "Union-Find" strategy: find two cliques A and B such that each node in A is adjacent to each node in B, then merge the two cliques (Union). The associated heuristic starts by considering each node as a clique of size one, and merges two cliques into a larger clique until no more merges are possible, as sketched below.
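A rough Python sketch of this clique-growing step (the adjacency structure is a made-up example, and the merge order is arbitrary):

def grow_cliques(adj):
    """adj: dict node -> set of neighbours. Greedily merge two cliques (Union)
    whenever every node of one is adjacent to every node of the other (Find)."""
    cliques = [{v} for v in adj]                # start: every node is a clique of size one
    merged = True
    while merged:
        merged = False
        for a in range(len(cliques)):
            for b in range(a + 1, len(cliques)):
                A, B = cliques[a], cliques[b]
                if all(v in adj[u] for u in A for v in B):
                    cliques[a] = A | B          # union of the two cliques
                    del cliques[b]
                    merged = True
                    break
            if merged:
                break
    return cliques

# Hypothetical 4-node graph: {1, 2, 3} form a triangle, node 4 hangs off node 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(grow_cliques(adj))   # -> [{1, 2, 3}, {4}]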
The third method, Clique-BF, iteratively selects a clique of unassigned edges with total traffic less than or equal to B. It then assigns it to the ring that minimizes the residual capacity and, if possible, preserves feasibility. If both are impossible it places it in a new ring. Algorithm 4 shows the pseudo-code associated with this heuristic. The last algorithm, Cycle-BF, is like the previous method, but instead of looking for a clique, at each iteration it tries to find a cycle with as many chords as possible.
They also introduce four objective functions, one of which depends on the current and the next status of the search. Let z 0 be the basic objective function counting the number of rings of a solution for SRAP, and the total number of ADMs for IDP, and let BN be the highest load of a ring in the current solution.
U ← E ; r ← 0 ;
while U ≠ ∅ do
    Heuristically find a clique C ⊂ U such that weight(C) ≤ B ;
    /* Search for a ring such that the weight of the ring plus the weight of the clique
       does not exceed B and is as large as possible */
    j ← arg min_i { B − weight(E_i) − weight(C) : i ∈ {1, . . . , r}, B − weight(E_i) − weight(C) ≥ 0 } ;
    if j = null then
        r ← r + 1 ; j ← r ;
    end
    E_j ← E_j ∪ C ; U ← U \ C ;
end
Algorithm 4: Clique-BF
z_1 = z_0 + max{0, BN − B}
z_2 = z_1 + α · RingLoad(r) if the last move has created a new ring r, and z_2 = z_1 otherwise
z_3 = z_0 · B + BN
z_4: a variable objective whose first expression is z_4a = z_0 · B + BN (= z_3)

The first function z_1 minimizes the basic function z_0; when BN > B it also penalizes unfeasible solutions, taking into account only one ring, the one with the highest overload. In addition to the penalty for unfeasible solutions, z_2 penalizes the moves that increase the number of rings. Function z_3 encourages solutions with small z_0, while among all the solutions with the same value of z_0 it prefers the ones in which the rings have similar loads. The last objective function z_4 is an adaptive technique that modifies the evaluation according to the status of the search: it is a variable objective function having different expressions for different transitions from the current status to the next one.
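For illustration, z_0, BN, z_1 and z_3 are straightforward to evaluate from the ring loads (a Python sketch with invented numbers; z_2 and z_4 are omitted since they depend on the last move and on the search status):

def evaluate(ring_loads, B, z0):
    """ring_loads: load of each ring; z0: number of rings for SRAP, total ADMs for IDP."""
    BN = max(ring_loads)                    # highest ring load in the current solution
    z1 = z0 + max(0, BN - B)                # penalise only the worst overload
    z3 = z0 * B + BN                        # prefer small z0, then balanced loads
    return z1, z3

# Hypothetical solution: 3 rings with the loads below, capacity B = 155.
print(evaluate([140, 150, 180], B=155, z0=3))   # -> (28, 645)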
Our work
In this section we present the different tools needed to implement the Constraints Based Local Search algorithms for SRAP and IDP. First we introduce the starting solution, then the neighborhoods and the objective functions. Finally we present the different local search algorithms.
Starting solution
Most of the time, local search starts from a random initial solution. However, we have tested other possibilities and two other options proved to be more efficient.
The best initializing method assigns all the items, nodes for SRAP or edges for IDP, to the same partition. This solution is certainly unfeasible as all the traffic is on only one ring. This biases the search towards solutions with a minimum value for the cost and a very bad value for the capacity constraints' violations. Astonishingly, this is the option that gave us the best results on large instances.
We had good confidence in another method, which first computes the lower bound k_lb (described in Section 2) and randomly assigns all the items to exactly k_lb partitions. The idea was to let the local search reduce the number of violations. This starting solution was good on small instances but not so good on large ones. The same held for a random solution, which corresponds, for these problems, to a solution where all the items are randomly assigned to a partition.
Neighborhoods
In a generic partitioning problem there are usually two basic neighborhoods. From a given solution, we can move an object from one subset to another, or swap two objects assigned to two different subsets. For SRAP a neighboring solution is produced by moving a node from one ring to another (including a new one) or by swapping two nodes assigned to two different rings. The same kind of neighborhood can be used for IDP: moving an edge from one ring to another or swapping two edges.
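A Python sketch of this basic neighborhood, for a solution encoded as a list mapping each item (node for SRAP, edge for IDP) to its ring index; the encoding is an illustration, not the COMET model:

def neighborhood(assign, k):
    """Yield (description, new_assignment) pairs: move an item to another ring
    (including a new ring with index k) or swap two items in different rings."""
    n = len(assign)
    for x in range(n):                                  # move neighborhood
        for ring in range(k + 1):                       # ring k opens a new ring
            if ring != assign[x]:
                new = list(assign)
                new[x] = ring
                yield ("move", x, ring), new
    for x in range(n):                                  # swap neighborhood
        for y in range(x + 1, n):
            if assign[x] != assign[y]:
                new = list(assign)
                new[x], new[y] = new[y], new[x]
                yield ("swap", x, y), new

# 4 items currently spread over rings 0 and 1: 8 moves + 4 swaps = 12 neighbors.
print(sum(1 for _ in neighborhood([0, 0, 1, 1], k=2)))   # -> 12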
In some cases it is more efficient to restrict the neighborhood to the feasible space. We have tested different variants of the basic neighborhood applying this idea, for example by choosing the worst partition (with respect to the capacity constraint) and even by assigning the moved item to the partition with the lowest load. However, these variants turned out to be less efficient than the basic neighborhood. As will be seen later, it seems that on these problems it is necessary to keep the search as broad as possible.
Objective function
We have compared the four objective functions described in [2] (see Section 2) to a new one we have defined: z 5 .
z_5 = z_0 + \sum_{p \in partitions} violations(p)
where partitions is the set of all the rings (in the case of the SRAP problem the federal ring is also included), and
violations(p) = capacity(p) − B if the load of p exceeds B, and 0 otherwise.
This objective function minimizes the basic function z_0 and penalizes the unfeasible solutions, but contrary to the previous objectives, this penalty is based on all the constraints. We consider that every constraint is violated by a certain amount (its current load minus B). By summing all the violations of the current solution, we obtain the total violation over all the constraints, and we can say precisely how far we are from a feasible solution. If the current solution is feasible, \sum_{p \in partitions} violations(p) = 0.
This objective also has the nice property of being purely local, depending only on the current solution and not on the moves that led to it. Notice that a feasible solution with 4 rings will be preferred to an unfeasible solution with 3 rings, as z_0 is much smaller than the load of a ring.
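A direct Python transcription of z_5 (a sketch with invented loads; in COMET the violations are maintained incrementally by the constraint system rather than recomputed):

def z5(z0, ring_loads, B, federal_load=None):
    """z_5 = z_0 + total violation; a violation is the amount by which a load exceeds B."""
    loads = list(ring_loads) + ([federal_load] if federal_load is not None else [])
    violation = sum(max(0, load - B) for load in loads)
    return z0 + violation

# SRAP example: 3 rings, one overloaded ring and an overloaded federal ring.
print(z5(z0=3, ring_loads=[140, 150, 180], B=155, federal_load=160))   # -> 33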
Local Search
We have proposed a new algorithm called DMN2 which proved to be efficient on both problems. It is a variant of the Diversification by Multiple Neighborhoods (DMN) proposed in [2]. DMN is based on Tabu Search and adds a mechanism to perform diversification when the search is going round and round without improving the objective (even though it is not at a local minimum). This replaces the classical random restart steps. We refine this particular mechanism by proposing several ways of escaping such areas.
More precisely, on our problems, after a series of consecutive non-improving iterations, the DMN algorithm empties a partition by moving all its items to another partition, disregarding the capacity constraint and locally minimizing the objective function. There is a particular case for our function z_5, because it integrates the capacity constraints: the "z_5" version of DMN we have implemented moves the items to another partition minimizing z_5. The results in [2] show a general trend on SRAP and IDP: the more diversification is performed, the better are the results. Following this idea, we propose different ways of performing the DMN step, which gives our algorithm DMN2. In DMN2, when the search needs to be diversified, it randomly chooses among three diversification methods (d_1, d_2, d_3). The first method, d_1, is the diversification used in DMN. The second one, d_2, generates a random solution, in the same way as a classic random restart. Finally, d_3 randomly chooses a number m in the range [1, k], where k is the number of rings, and applies m random moves.
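The diversification step of DMN2 can be sketched as follows (Python, illustrative; the solution encoding and the behaviour of the three methods are simplified assumptions, not the COMET code):

import random

def diversify(assign, k):
    """DMN2-style diversification sketch: pick one of three escape moves at random.
    Assumes k >= 2 rings and a solution encoded as a list of ring indices."""
    choice = random.choice(("d1", "d2", "d3"))
    new = list(assign)
    if choice == "d1":                      # empty one partition into another one
        src, dst = random.sample(range(k), 2)
        new = [dst if r == src else r for r in new]
    elif choice == "d2":                    # full random restart
        new = [random.randrange(k) for _ in new]
    else:                                   # d3: m random single-item moves, m in [1, k]
        for _ in range(random.randint(1, k)):
            new[random.randrange(len(new))] = random.randrange(k)
    return choice, new

random.seed(0)
print(diversify([0, 0, 1, 1, 2], k=3))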
In the end, our general algorithm starts with a solution where all the items are in the same partition. Then it applies one of the local search algorithms described before. If the solution returned by the local search is feasible but its objective value is greater than the lower bound k_lb, it empties one partition by randomly assigning all its items to another one. It then runs the local search again, until it finds a solution with an objective value equal to k_lb or until the time limit is exceeded.
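Putting the pieces together, the outer loop reads roughly as follows (a Python sketch; `local_search`, `is_feasible` and `num_partitions` are problem-specific placeholders, not actual COMET code):

import time, random

def solve(items, k_lb, local_search, is_feasible, num_partitions, time_limit=300):
    """Outer loop sketch around any of the local search algorithms above."""
    assign = [0] * len(items)                    # all items start in one partition
    best = None
    deadline = time.time() + time_limit
    while time.time() < deadline:
        assign = local_search(assign)
        if is_feasible(assign):
            if best is None or num_partitions(assign) < num_partitions(best):
                best = list(assign)
            if num_partitions(best) == k_lb:     # lower bound reached: provably optimal
                break
            # Empty one partition by moving its items to a random other one, then retry.
            rings = sorted(set(assign))
            src = random.choice(rings)
            dst = random.choice([r for r in rings if r != src] or [src])
            assign = [dst if r == src else r for r in assign]
    return best

# Trivial demo with dummy callbacks (identity search, everything feasible).
print(solve(range(5), 1, lambda a: a, lambda a: True, lambda a: len(set(a)), time_limit=1))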
Results
The objective functions and the metaheuristics, respectively described in Section 3.3 and Section 3.4, have been coded in COMET and tested on an Intel-based, dual-core, dual-processor Dell PowerEdge 1855 blade server running under Linux. The instances used come from the literature.
Benchmark
To test the algorithms, we used two sets of instances. The first one has been introduced in [6]. The authors generated 80 geometric instances, based on the fact that customers tend to communicate more with their close neighbors, and 80 random instances. Both subsets contain 40 low-demand instances, with a ring capacity B = 155 Mb/s, and 40 high-demand instances, where B = 622 Mb/s. The traffic demand between two customers u and v is determined by a discrete uniform random variable corresponding to the number of T1 lines required for the anticipated volume of traffic between u and v. A T1 line has an approximate capacity of 1.5 Mb/s. The number of T1 lines is randomly picked in the interval [3,7] for the low-demand cases, while it is selected from the range [11,17] for the high-demand cases. The generated graphs have |V| ∈ {15, 25, 30, 50}. Of the 160 instances generated by O. Goldschmidt, A. Laugier and E. Olinick in 2003, 42 have been proven to be unfeasible by R. Aringhieri and M. Dell'Amico using CPLEX 8.0 (see [2]).
The second set of instances has been presented in [8]. The authors generated 40 instances with a ring capacity B = 48 T1 lines, the number of T1 lines required for the traffic between two customers being chosen in the interval [1,30]. The considered graphs have |V| ∈ {15, 20, 25} and |E| ∈ {30, 35}. Most of the instances in this set are unfeasible.
Note that all the instances are feasible for the IDP problem, since we can always assign each demand to a different partition.
Computational Results
We now describe the results obtained for SRAP and IDP on the above two benchmark sets by the Basic Tabu Search and the other metaheuristics considered in this work. For each algorithm we consider the five objective functions of Section 3.3; recall that z_2 and z_3 cannot be applied with the Scatter Search, so only three of them are used with the SS.
We gave a time limit of 5 minutes to each run of an algorithm, although we observed that the average time to find the best solution is less than 1 minute. The algorithm obviously terminates if the current best solution found is equal to the lower bound k_lb. When the lower bound is not reached, we call a solution high-quality if the evaluation of the objective is equal to k_lb + 1. Recall that the objective functions z_2 and z_3 cannot be applied with the Scatter Search.
Figure 5 only shows, for each algorithm, the number of optimal solutions found with the objective function z_5. With the other objectives the number of optimal solutions found is zero, which is why they are not shown on the diagram; these objectives did, however, find good solutions. Our conclusion is that the other functions may not discriminate enough between the different solutions. For this problem, we can see that the eXploring Tabu Search does not give good results. This may be due to backtracking too early: after a fixed number of consecutive non-improving iterations, the search goes back to a previous configuration and applies the second best move. For the IDP problem it can take many more iterations than for SRAP to improve the objective value, since the objective depends on the number of partitions each customer belongs to, while an iteration moves only one edge; reducing the value by one may require moving several edges.
Figure 6 shows, for each algorithm and each objective function, the number of instances for which the search has found an optimal solution, i.e. a solution with k_lb partitions (in dark gray on the diagram); the number of those for which the best feasible solution found has k_lb + 1 partitions (in gray); and, in light gray, the number of instances for which it has found a feasible solution with more than k_lb + 1 partitions. From the objective function perspective, we can see that z_4, supposed to be the most improving one, does not perform that well in the COMET implementation, whereas the one we added, z_5, is consistently better than the others.
Against all odds, the Basic Tabu Search is, on all the objective functions, as good as the other search algorithms. Still on the local search algorithms, we can see that the second version of the Diversification by Multiple Neighborhoods is much better than the first one with the objectives z_3 and z_4.
For the details of our results see the report [11].
Conclusion
The purpose of this work was to reproduce with COMET the results obtained for the SONET Design Problems by R. Aringhieri and M. Dell'Amico in 2005 in ANSI C (see [2] for details).
We have implemented in COMET the algorithms and the objective functions described in this paper. We found it relevant to add a variant of one of their local search algorithms and a new objective function. Unfortunately, we cannot compare our results to theirs exactly because the set of 230 instances they generated is not available. However, for the IDP problem, we obtained better results on 15 of the 160 instances compared, and similar results on the others. We did not find their results for the SRAP problem. Still, for SRAP, compared to the results obtained by O. Goldschmidt, A. Laugier and E. Olinick in 2003 [6], we obtained better results: more instances for which the algorithm reaches the lower bound and fewer unfeasible instances. It would be interesting to have all the instances and results in order to compare fully.
In the end we can make two main observations. Firstly, for these two problems, the more diversification an algorithm uses, the better it performs. We actually tried different intensification methods for the local search algorithms, but none of them improved the results; worse, they gave us rather bad results.
Secondly, based on our results, our objective function implemented in COMET finds more good solutions than the other ones. It is a constraint-based objective function taking into account the violation of every constraint; hence it has the advantage of being both more generic and more precise than the dedicated functions, while giving better results.
| 6,079 |
0910.1255
|
2086153481
|
This paper presents a new method and a constraint-based objective function to solve two problems related to the design of optical telecommunication networks, namely the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP). These network topology problems can be represented as a graph partitioning with capacity constraints as shown in previous works. We present here a new objective function and a new local search algorithm to solve these problems. Experiments conducted in C
|
More recently, in @cite_5 , these two problems have been studied. The authors have developed different metaheuristic algorithms, all based on Tabu Search. The metaheuristics are the Basic Tabu Search (BTS), two versions of the Path Relinking (PR1, PR2), the eXploring Tabu Search (XTS), the Scatter Search (SS), and the Diversification by Multiple Neighborhoods (DMN). These local search algorithms are detailed further on.
|
{
"abstract": [
"This paper considers two problems that arise in the design of optical telecommunication networks when a ring-based topology is adopted, namely the SONET Ring Assignment Problem and the Intraring Synchronous Optical Network Design Problem. We show that these two network topology problems correspond to graph partitioning problems with capacity constraints: the first is a vertex partitioning problem, while the latter is an edge partitioning problem. We consider solution methods for both problems, based on metaheuristic algorithms. We first describe variable objective functions that depend on the transition from one solution to a neighboring one, then we apply several diversification and intensification techniques including Path Relinking, eXploring Tabu Search and Scatter Search. Finally we propose a diversification method based on the use of multiple neighborhoods. A set of extensive computational results is used to compare the behaviour of the proposed methods and objective functions."
],
"cite_N": [
"@cite_5"
],
"mid": [
"1969385974"
]
}
|
Sonet Network Design Problems
|
This paper presents a new algorithm and an objective function to solve two real-world combinatorial optimization problems from the field of network design. These two problems, the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP), have been shown NP-hard and have already been solved by combinatorial optimization techniques. This work extends the seminal ideas introduced by R. Aringhieri and M. Dell'Amico in 2005 in [2]. This paper is organized as follows. In the remainder of this section we introduce the two problems we have worked on, and the local search techniques which have been used to solve them. We also introduce the models of the two problems in a constrained optimization format. We then present the previous works on SRAP and IDP in Section 2. Section 3 describes the key ingredients necessary to implement the local search algorithms. Finally, the results are shown in Section 4.
Optical networks topologies
During the last few years the number of users of internet-based applications has increased exponentially, and so has the demand for bandwidth. To enable fast transmission of large quantities of data, fiber optic technology is the current solution in telecommunications.
The Synchronous Optical NETwork (SONET) in North America and Synchronous Digital Hierarchy (SDH) in Europe and Japan are the standard designs for fiber optics networks. They have a ring-based topology, in other words, they are a collection of rings.
Rings Each customer is connected to one or more rings, and can send, receive and relay messages using an add-drop-multiplexer (ADM). There are two bidirectional links connecting each customer to his neighboring customers on the ring. In a bidirectional ring the traffic between two nodes can be sent clockwise or counterclockwise. This topology allows an enhanced survivability of the network, specifically if a failure occurs on a link, the traffic originally transmitted on this link will be sent on the surviving part of the ring. The volume traffic on any ring is limited by the link capacity, called B. The cost of this kind of network is defined by the cost of the different components used in it.
There are different ways to represent a network. In this paper, we consider two network topologies described by R. Aringhieri and M. Dell'Amico in 2005 in [2]. In both topologies the goal is to minimize the cost of the network while guaranteeing that the customers' demands, in term of bandwidth, are satisfied.
The models associated with these topologies are based on graphs. Given an undirected graph G = (V, E), V = {1, . . . , n}, the set of nodes represents the customers and E, the set of edges, stands for the customers' traffic demands. A communication between two customers u and v corresponds to the weighted edge (u, v) in the graph, where the weight d_uv is the fixed traffic demand. Note that d_uv = d_vu, and that d_uu = 0.
First topology (SRAP)
In the first topology, each customer is connected to exactly one ring. All of these local rings are connected with a device called digital cross connector (DXC) to a special ring, called the federal ring. The traffic between two rings is transmitted over this special ring. Like the other rings, the federal ring is limited by the capacity B. Because DXCs are so much more expensive than ADMs we want to have the smallest possible number of them. As there is a one-to-one relationship between the ring and the DXC, minimizing the number of rings is equivalent to minimizing the number of DXCs. The problem associated to this topology is called SONET Ring Assignment Problem (SRAP) with capacity constraint. Figure 1 shows an example of this topology. Model This topology is modeled by a decomposition of the set of nodes V into a partition, each subset of the partition representing a particular ring. Assigning a node to a subset of the partition in the model is then equivalent to assigning a customer to a ring.
Formally, let V_1, V_2, . . . , V_k be a partitioning of V into k subsets. Each customer in the subset V_i is assigned to the i-th local ring. As each customer is connected with an ADM to one and only one ring, and each local ring is connected to the federal ring with a DXC, there are exactly |V| ADMs and k DXCs used in the corresponding SRAP network.
Hence, minimizing the number of rings is equivalent to minimizing k subject to the following constraints:
\sum_{u \in V_i} \sum_{v \in V,\, v \neq u} d_{uv} \le B, \quad \forall i = 1, \ldots, k \qquad (1)
\sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \sum_{u \in V_i} \sum_{v \in V_j} d_{uv} \le B \qquad (2)
Constraint (1) imposes that the total traffic routed on each ring does not exceed the capacity B. In other words, for a given ring i, it forces the total traffic demands of all the customers connected to this ring to be lower than or equal to the bandwidth. Constraint (2) forces the load of the federal ring to be less than or equal to B. To do so, it sums the traffic demands between all the pairs of customers connected to different rings. Figure 2 illustrates the relation between the node partitioning model and the first topology SRAP. We can see that, because the nodes 1, 3, 5 and 6 are in the same partition, they are connected to the same ring. Similarly, the nodes 2, 4 and 7 are on the same ring. For this problem we can easily compute a lower bound k_lb, introduced in [6]. In fact, we want to know the minimum number of partitions needed to route all the traffic. Reasoning on the total traffic amount, if we sum all the traffic demands of the graph and divide the result by the bandwidth B, we trivially obtain a minimum for the number of rings, that is, a lower bound on the number of partitions. Moreover, we cannot have a fractional number of partitions, which is why we round this fraction up:
k_{lb} = \left\lceil \frac{\sum_{u=1}^{n-1} \sum_{v=u+1}^{n} d_{uv}}{B} \right\rceil
Second topology (IDP)
In the second topology, customers can be connected to more than one ring. If two customers want to communicate, they have to be connected to the same ring. In this case, the DXC are no longer needed and neither is the federal ring. However there are more ADM used than in the first topology. In this case, the most expensive component is the ADM although its price has significantly dropped over the past few years. It is important, in this topology, to have the smallest numbers of ADMs. This problem is called Intra-ring Synchronous Optical Network Design Problem (IDP). The figure 3 illustrates this topology. Model Contrarily to the SRAP problem, there is no need to assign each customer to a particular ring because customers can be connected to several rings. Here the model is based on a partition of the edges of the graph, where a subset of the partition corresponds to a ring.
Formally, let E_1, E_2, . . . , E_k be a partitioning of E into k subsets and Nodes(E_i) be the set of endpoint nodes of the edges in E_i. Each subset of the partition corresponds to a ring; in other words, each customer in Nodes(E_i) is linked to the i-th ring. In the corresponding IDP network, there are \sum_{i=1}^{k} |Nodes(E_i)| ADMs and no DXC.
Hence, minimizing the number of ADMs is equivalent to minimizing
\sum_{i=1}^{k} |Nodes(E_i)| \quad \text{subject to} \quad \sum_{(u,v) \in E_i} d_{uv} \le B, \quad \forall i = 1, \ldots, k \qquad (3)
Constraint (3) imposes that the traffic in each ring does not exceed the capacity B. Figure 4 shows the relation between the edge partitioning and the second topology. If all the edges of a node are in the same partition, this node will only be connected to a ring. We can see, for example, the node 4 has all its edges in the same partition, because of that, the node 4 is connected to only one ring. On the opposite, the edges of the node 2 are in two different partitions, so it is connected to two rings. The SRAP problem can be seen as a node partitioning problem, whereas IDP, as an edge partitioning problem for the graph described above, subject to capacity constraints. These graph partitioning problems have been introduced in [6] and [7].
Both of these problems are N P-hard (see O. Goldschmidt, A. Laugier and E. Olinick in 2003, [6], and O. Goldschmidt, D. Hochbaum, A. Levin and E. Olinick in 2003, [7] for details). The principal constraint, the load constraint, is similar to a capacity constraint, yet different: a capacity constraint holds on the variables in the sum, while the load constraint holds on the variables below the sum. The question is how to choose the d uv (which are data) that count for the load.
Brief introduction of Local Search
In order to solve these two combinatorial optimization problems efficiently and quickly, we decided to use Local Search instead of an exact algorithm. Indeed, it allows an efficient exploration of the candidate solutions by moving step by step from one solution to another.
Principles Local search is a metaheuristic based on iterative improvement of an objective function. It has been proved very efficient on many combinatorial optimization problems like the Maximum Clique Problem (L. Cavique, C. Rego and I. Themido in 2001 in [9]), or the Graph Coloring Problem (J.P. Hansen and J.K. Hao in 2002 in [10]). It can be used on problems which formulated either as mere optimization problems, or as constrained optimization problems where the goal is to optimize an objective function while respecting some constraints. Local search algorithms perform local moves in the space of candidate solutions, called the search space, trying to improve the objective function, until a solution deemed optimal is found or a time bound is reached. Defining the neighborhood graph and the method to explore it are two of the key ingredients of local search algorithms.
The approach for solving combinatorial optimization problems with local search is very different from the systematic tree search of constraint and integer programming. Local search belongs to the family of metaheuristic algorithms, which are incomplete by nature and cannot prove optimality. However on many problems, it will isolate a optimal or high-quality solution in a very short time: local search sacrifices optimality guarantees to performance. In our case, we can compute the lower bound to either prove that the obtained solution is optimum, or estimate its optimality, hence local search is well suited.
Basic algorithm A local search algorithm starts from a candidate solution and then iteratively moves to a neighboring solution. This is only possible if a neighborhood relation is defined on the search space. Typically, for every candidate solution, we define a subset of the search space to be the neighborhood. Moves are performed from neighbors to neighbors, hence the name local search. The basic principle is to choose among the neighbors the one with the best value for the objective function. The problem is then that the algorithm will be stuck in local optima. Metaheuristics, such as Tabu Search, are added to avoid this. In Tabu Search, the last t visited configurations are left out of the search (t being a parameter of the algorithm): this ensures that the algorithm can escape local optima, at least at order t. A pseudo-code is given on figure 1.
Termination of local search can be based on a time bound. Another common choice is to terminate when the best solution found by the algorithm has not been improved in a given number of iterations. Local search algorithms are typically incomplete algorithms, as the search may stop even if the best solution found by the algorithm is not optimal. This can happen even if termination is due to the impossibility of improving the solution, as the optimal solution can lie far from the neighborhood of the solutions crossed by the algorithms.
Choose or construct an initial solution S_0 ;
S ← S_0 ;                    /* S is the current solution */
S* ← S_0 ;                   /* S* is the best solution so far */
bestValue ← objValue(S_0) ;  /* bestValue is the evaluation of S* */
T ← ∅ ;                      /* T is the Tabu list */
while Termination criterion not satisfied do
    N(S) ← all the neighboring solutions of S ;   /* Neighborhood exploration */
    S ← a solution in N(S) minimizing the objective ;
    if objValue(S) < bestValue then               /* The solution found is better than S* */
        S* ← S ;
        bestValue ← objValue(S) ;
    end
    Record tabu for the current move in T (delete oldest entry if necessary) ;
end
Algorithm 1: Tabu Search
COMET
COMET is an object-oriented language created by Pascal Van Hentenryck and Laurent Michel. It has a constraint-based architecture that makes it easy to use when implementing local search algorithms, and more important, constraint-based local search algorithms (see [1] for details).
Moreover, it has a rich modeling language, including invariants, and a rich constraint language featuring numerical, logical and combinatorial constraints. Constraints and objective functions are differentiable objects maintaining the properties used to direct the graph exploration. The constraints maintain their violations and the objectives their evaluation. One of its most important particularity, is that differentiable objects can be queried to determine incrementally the impact of local moves on their properties.
As we can see on the constraint (1), the sum are on datas (d uv ) and are determined by the variables (u ∈ V i , v ∈ V, v = u). We will rely on COMET 's built-in invariants to define a constraint to represent the load.
Greedy algorithms for SRAP
In [6] the SRAP problem is considered. They propose three greedy algorithms with different heuristics, the edge-based, the cut-based and the node-based. The first two algorithms start by assigning each node to a different ring. At each iteration they reduce the number of rings by merging two rings V i and V j if V i ∪ V j is a feasible ring for the capacity constraint. In the edge-based heuristic, the two rings with the maximum weight edge are merged. While in the cut-based heuristic, the two rings with the maximum total weight of the edges with one endpoint in each of them, are merged. Algorithm 2 shows the pseudo code for the edge-based heuristic.
F ← E ;                             /* Initialize the set of edges that have not been used yet */
∀v ∈ V : ring(v) ← v ;              /* Assign each node to a different ring */
while F ≠ ∅ do                      /* There are still edges that have not been used */
    Choose a maximum capacity edge (u, v) ∈ F ;
    i ← ring(u), j ← ring(v) ;
    if V_i ∪ V_j is a feasible ring then    /* Merging the rings gives a feasible ring */
        ∀v ∈ V_j : ring(v) ← i ;
        F ← F \ {(x, y) | ring(x) = i, ring(y) = j} ;
    else
        F ← F \ {(u, v)} ;
    end
end
Algorithm 2: Edge-Based Heuristic

Given a value k, the node-based heuristic starts by randomly assigning a node to each of the k rings. At each iteration it first chooses the ring V_i with the largest unused capacity, then the unassigned node u with the largest traffic with the nodes in V_i. Finally it adds u to the ring V_i, disregarding the capacity constraint. The pseudo-code for this heuristic is shown in Algorithm 3. The node-based heuristic is run ten times. At each run, if a feasible solution is found, the corresponding value of k is kept and the next run takes k − 1 as an input. The idea behind this is to try and improve the objective at each run.
U ← V ;                             /* Initialize the set of nodes that have not been used yet */
for i = 1 to k do                   /* Assign k random nodes to the k partitions */
    Choose u ∈ U ; V_i ← {u} ; U ← U \ {u} ;
end
while U ≠ ∅ do                      /* There are some unused nodes */
    Choose a minimum capacity ring V_i ;
    Choose u ∈ U maximizing ∑_{v ∈ V_i} d_{uv} ;
    ring(u) ← V_i ; U ← U \ {u} ;   /* Assign u to V_i */
end
Algorithm 3: Node-Based Heuristic
To test these heuristics, the authors have randomly generated 160 instances 1 . The edge-based, and the cut-based are run first. If they have found a feasible solution and obtained a value for k, the node-based is then run with the smallest value obtained for k as input. If they have not, the node-based heuristic has for input a random value from the range [k lb , |V |] where k lb is the lower bound, described previously.
MIP and Branch and Cut for IDP
A special case of the IDP problem where all the edges have the same weight, is studied in [7]. This special case is called the K-Edge-Partitioning problem. Given a simple undirected graph G = (V, E) and a value k < |E|, we want to find a partitioning of E, {E 1 , E 2 , . . . E l } such that ∀i ∈ {1, . . . , l}, |E i | ≤ k. The authors present two linear-time-approximation algorithms with fixed performance guarantee. Y. Lee, H. Sherali, J. Han and S. Kim in 2000 ( [8]), have studied the IDP problem with an additional constraint such that for each ring i, |Nodes(E i )| ≤ R. The authors present a mixed-integer programming model for the problem, and develop a branch-and-cut algorithm. They also introduce a heuristic to generate an initial feasible solution, and another one to improve the initial solution. To initialize a ring, the heuristic first, adds the node u with the maximum graph degree, with respect to unassigned edges, and then adds to the partition the edge [u, v] such that the graph degree of v is maximum. It iteratively increases the partition by choosing a node such that the total traffic does not exceed the limit B. A set of 40 instances is generated to test these heuristics and the branch-and-cut.
Local Search for SRAP and IDP
More recently, in [2], these two problems have been studied. Previously, we saw that with local search it is necessary to define a neighborhood to choose the next solution. The authors of [2] use the same for all of their metaheuristics. It tries to assign an item x from a partition, P 1 , to another partition, P 2 . The authors also consider the neighborhood obtained by swapping two items, x and y, from two different partitions, P 1 and P 2 . But instead of trying all the pairs of items, it will only try to swap the two items if the resulting solution of the assignment of x to the partition P 2 is unfeasible.
In order to compute a starting solution for the IDP problem, the authors describe four different heuristics. The first heuristic introduced in [2] ordered the edges by decreasing weight, at each iteration it tries to assign the edge with the biggest weight which is not already assigned, to the ring with the smallest residual capacity regarding to capacity constraint. If no assignment is possible, the current edge is assigned to a new ring. The second one, sorts the edges by increasing weight, and tries to assign the current edge to the current ring if the capacity constraint is respected, otherwise the ring is no longer considered and a new ring is initialized with the current edge.
The two other methods described in [2] are based on the idea that, to save ADMs, a good solution should have very dense rings. They are both greedy and rely on a clique algorithm. In graph theory, a clique in an undirected graph G = (V, E) is a subset of the vertex set C ⊆ V such that every two vertices in C are connected by an edge. Finding a clique is not that easy; one way to do it is to use a "Union-Find" strategy: find two cliques A and B such that each node in A is adjacent to each node in B, then merge the two cliques (Union). The associated heuristic starts by considering each node as a clique of size one, and merges two cliques into a larger clique until no more merges are possible.
The third method, Clique-BF, iteratively selects a clique of unassigned edges whose total traffic is less than or equal to B. It then assigns the clique to the ring that minimizes the residual capacity and, if possible, preserves feasibility. If no such assignment exists, the clique is placed in a new ring. Algorithm 4 shows the pseudo-code associated with this heuristic. The last algorithm, Cycle-BF, is like the previous method, but instead of looking for a clique, at each iteration it tries to find a cycle with as many chords as possible.
They also introduce four objective functions, one of which depends on the current and the next status of the search. Let z 0 be the basic objective function counting the number of rings of a solution for SRAP, and the total number of ADMs for IDP, and let BN be the highest load of a ring in the current solution.
U ← E ; r ← 0 ;
while U ≠ ∅ do
    Heuristically find a clique C ⊂ U such that weight(C) ≤ B ;
    /* Search for a ring whose weight plus the weight of the clique does not exceed B
       and whose weight is the biggest possible */
    j ← arg min { B − weight(E_i) − weight(C) : i ∈ {1, . . . , k}, B − weight(E_i) − weight(C) ≥ 0 } ;
    if j = null then r++ ; j ← r ; end
    E_j ← E_j ∪ C ; U ← U \ C ;
end
Algorithm 4: Clique-BF
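For concreteness, the following Python sketch mirrors the Clique-BF loop above; the clique-finding routine is passed in as a parameter and every identifier is ours, so this only illustrates the assignment logic, not the implementation of [2].

def clique_bf(edges, weight, B, find_clique):
    """Clique-BF sketch: repeatedly pick a clique of unassigned edges of total weight <= B
    and assign it to the ring with the smallest non-negative residual capacity.
    `find_clique(unassigned)` is expected to return edges of total weight <= B."""
    rings = []                                    # rings[i] is the set of edges of ring i
    unassigned = set(edges)
    while unassigned:
        clique = set(find_clique(unassigned))
        w = sum(weight[e] for e in clique)
        residual = lambda i: B - sum(weight[e] for e in rings[i]) - w
        candidates = [i for i in range(len(rings)) if residual(i) >= 0]
        if candidates:
            j = min(candidates, key=residual)     # best fit: smallest residual capacity
            rings[j] |= clique
        else:
            rings.append(clique)                  # no ring fits: open a new one
        unassigned -= clique
    return rings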
z 1 = z 0 + max{0, BN − B}

z 2 = z 1 + α · RingLoad(r) if the last move has created a new ring r, and z 2 = z 1 otherwise

z 3 = z 0 · B + BN

z 4 : an adaptive objective whose first expression is z 4a = z 0 · B + BN (= z 3 ); its other expressions depend on the transition from the current search status to the next one.

The first function z 1 minimizes the basic function z 0 . When BN > B, it also penalizes the unfeasible solutions, taking into account only one ring, the one with the highest overload. In addition to the penalty for unfeasible solutions, z 2 penalizes the moves that increase the number of rings. Function z 3 encourages solutions with small z 0 , while among all the solutions with the same value of z 0 it prefers the ones in which the ring loads are balanced. The last objective function, z 4 , is an adaptive technique that modifies the evaluation according to the status of the search. It is a variable objective function with different expressions for the different transitions from the current status to the next one.
Our work
In this section we present the different tools needed to implement the Constraint-Based Local Search algorithms for SRAP and IDP. First we introduce the starting solution, then the neighborhoods and the objective functions. Finally we present the different local search algorithms.
Starting solution
Most of the time, local search starts from a random initial solution. However, we have tested other possibilities and two other options proved to be more efficient.
The best initialization method assigns all the items, nodes for SRAP or edges for IDP, to the same partition. This solution is certainly unfeasible, as all the traffic is on only one ring. It biases the search towards solutions with a minimum value for the cost and a very bad value for the capacity constraints' violations. Surprisingly, this is the option that gave us the best results on large instances.
We had good hopes for another method, which first computes the lower bound k lb (described in Section 2) and randomly assigns all the items to exactly k lb partitions. The idea was to let the local search reduce the number of violations. This starting solution was good on small instances but not so good on large ones. The same holds for a purely random solution, which corresponds, for these problems, to a solution where every item is randomly assigned to some partition.
Neighborhoods
In a generic partitioning problem there are usually two basic neighborhoods. From a given solution, we can move an object from a subset to another subset or swap two objects assigned to two different subsets. For SRAP a neighboring solution is produced by moving a node from a ring to another (including a new one) or by swapping two nodes assigned to two different rings. The same kind of neighborhood can be used for IDP: moving an edge from a ring to another or swapping two edges.
In some cases it is more efficient to restrict the neighborhood to the feasible space. We have tested different variants of the basic neighborhood applying this idea, for instance by always moving an item out of the worst partition (w.r.t. the capacity constraint) and even by assigning it to the partition with the lowest load. However, these variants appear to be less efficient than the basic neighborhood. As will be seen later, it seems that on these problems it is necessary to keep the search as broad as possible.
Objective function
We have compared the four objective functions described in [2] (see Section 2) to a new one we have defined: z 5 .
z 5 = z 0 + ∑ p ∈ partitions violations(p) where
partitions are all the rings (in the case of the SRAP problem the federal ring is also included),
violations(p) = capacity(p) − B if the load of p exceeds B, and violations(p) = 0 otherwise, where capacity(p) denotes the current load of the ring p.
This objective function minimizes the basic function z 0 and penalizes the unfeasible solutions, but contrary to the previous objectives, this penalty is based on all the constraints. Every constraint is considered to be violated by a certain amount (the current load of its ring minus B). By summing all the violations of the current solution, we obtain the total violation over all the constraints, and we can say precisely how far we are from a feasible solution. If the current solution is feasible,
∑ p ∈ partitions violations(p) = 0.
This objective also has the nice property of being purely local, depending only on the current solution and not on the moves that led to it. Notice that a feasible solution with 4 rings will be preferred to an unfeasible solution with 3 rings, as z 0 is much smaller than the load of a ring.
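As an illustration, a minimal Python sketch of this evaluation for SRAP is given below (for IDP, z 0 would count ADMs instead of rings); the representation of a solution as a list of ring loads and the function name are our own simplifications.

def z5(ring_loads, B, federal_load=None):
    """Constraint-based objective z5 = z0 + total violation over all rings.
    `ring_loads` are the loads of the local rings; for SRAP the federal ring
    load can be passed as well. Illustrative sketch, not the COMET model."""
    loads = list(ring_loads) + ([federal_load] if federal_load is not None else [])
    z0 = len(ring_loads)                       # number of rings (SRAP cost)
    violation = sum(max(0, load - B) for load in loads)
    return z0 + violation

# Example: three rings plus the federal ring, capacity B = 155 (hypothetical values).
print(z5([120, 150, 170], B=155, federal_load=200))   # -> 3 + 15 + 45 = 63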
Local Search
We have proposed a new algorithm called DMN2 which proved to be efficient on both problems. It is a variant of the Diversification by Multiple Neighborhoods (DMN) algorithm proposed in [2]. DMN is based on Tabu Search and adds a mechanism to perform diversification when the search keeps going round and round without improving the objective (even though it is not at a local minimum). This replaces the classical random restart steps. We refine this particular mechanism by proposing several ways of escaping such areas.
More precisely, on our problems, after a series of consecutive non-improving iterations, the DMN algorithm empties a partition by moving all its items to another partition, disregarding the capacity constraint and locally minimizing the objective function. Our function z 5 is a particular case, because it integrates the capacity constraints; for it, the "z 5 " version of DMN we have implemented moves the items to the partition that minimizes z 5 . The results in [2] show a general trend on SRAP and IDP: the more diversification is performed, the better the results. Following this idea, we propose different ways of performing the DMN step, which gives our algorithm DMN2. In DMN2, when the search needs to be diversified, it randomly chooses among three diversification methods (d 1 , d 2 , d 3 ). The first method, d 1 , is the diversification used in DMN. The second one, d 2 , generates a random solution, in the same way as a classic random restart. Finally, d 3 randomly chooses a number m in the range [1, k], where k is the number of rings, and applies m random moves.
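The diversification step of DMN2 can be sketched in a few lines of Python; the solution interface (empty_partition, randomize, random_move) is hypothetical and only illustrates how one of the three methods d 1 , d 2 , d 3 is chosen and applied.

import random

def dmn2_diversify(solution, k):
    """DMN2 diversification sketch: pick one of the three escape methods at random.
    `solution` is assumed to expose empty_partition(), randomize() and random_move();
    these names are placeholders, not an actual API."""
    method = random.choice(("d1", "d2", "d3"))
    if method == "d1":
        solution.empty_partition()        # d1: the original DMN step (empty one partition)
    elif method == "d2":
        solution.randomize()              # d2: full random restart
    else:
        m = random.randint(1, k)          # d3: apply m random moves, m drawn from [1, k]
        for _ in range(m):
            solution.random_move()
    return solution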
In the end, our general algorithm starts with a solution where all the items are in the same partition. Then it applies one of the local search algorithms described before. If the solution returned by the local search is feasible but has an objective value greater than the lower bound k lb , it empties one partition by randomly assigning all its items to another partition. It then runs the local search again, until it finds a solution with an objective value equal to k lb or until the time limit is exceeded.
Results
The objective functions and the metaheuristics, described in Section 3.3 and Section 3.4 respectively, have been coded in COMET and tested on an Intel-based, dual-core, dual-processor Dell PowerEdge 1855 blade server running under Linux. The instances used are from the literature.
Benchmark
To test the algorithms, we used two sets of instances. The first one was introduced in [6]. The authors generated 80 geometric instances, based on the fact that customers tend to communicate more with their close neighbors, and 80 random instances. Each subset contains 40 low-demand instances, with a ring capacity B = 155 Mb/s, and 40 high-demand instances, where B = 622 Mb/s. The traffic demand between two customers, u and v, is determined by a discrete uniform random variable corresponding to the number of T1 lines required for the anticipated volume of traffic between u and v. A T1 line has an approximate capacity of 1.5 Mb/s. The number of T1 lines is randomly picked in the interval [3,7] for the low-demand cases, while it is selected from the range [11,17] for the high-demand cases. The generated graphs have |V | ∈ {15, 25, 30, 50}. Among the 160 instances generated by O. Goldschmidt, A. Laugier and E. Olinick in 2003, 42 have been proven to be unfeasible by R. Aringhieri and M. Dell'Amico using CPLEX 8.0 (see [2]).
The second set of instances was presented in [8]. The authors generated 40 instances with a ring capacity B equal to 48 T1 lines, and the number of T1 lines required for the traffic between two customers was chosen in the interval [1,30]. The considered graphs have |V | ∈ {15, 20, 25} and |E| ∈ {30, 35}. Most of the instances in this set are unfeasible.
Note that all the instances are feasible for the IDP problem, since we could always assign each demand to a different partition.
Computational Results
We now describe the results obtained for SRAP and IDP on the above two benchmark sets by the algorithms Basic Tabu Search, eXploring Tabu Search, Scatter Search (SS), DMN and DMN2. For each algorithm we consider the five objective functions of Section 3.3, but for the SS we use the three functions described in Section 3.4.
We gave a time limit of 5 minutes to each run of an algorithm. However, we observed that the average time to find the best solution is less than 1 minute. Obviously, the algorithm terminates if the current best solution found is equal to the lower bound k lb . In case the lower bound is not reached, we define as a high-quality solution a solution for which the evaluation of the objective is equal to k lb + 1. Recall that objective functions z 2 and z 3 cannot be applied with the Scatter Search.

Figure 5 only shows, for each algorithm, the number of optimal solutions found with the objective function z 5 . With the other objectives, the number of optimal solutions found is zero, which is why we did not show them on the diagram. However, the other objectives did find good solutions. Our conclusion is that the other functions may not discriminate enough between the different solutions. For this problem, we can see that the eXploring Tabu Search does not give good results. This can be due to a too early "backtracking": after a fixed number of consecutive non-improving iterations, the search goes back to a previous configuration and applies the second best move. In the case of the IDP problem, it can take many more iterations to improve the value of the objective function than for the SRAP problem. Indeed, the value of the objective function depends on the number of partitions to which a customer belongs, while an iteration moves only one edge; reducing the value by only one may require moving several edges.

Figure 6 shows, for each algorithm and each objective function, the number of instances for which the search has found an optimal solution, i.e. a solution with k lb partitions (in dark gray on the diagram); the number of those for which the best feasible solution found has k lb + 1 partitions (in gray); and, in light gray, the number of instances for which it has found a feasible solution with more than k lb + 1 partitions. From the objective functions perspective, we can see that z 4 , supposed to be the most improving one, is not that good in the COMET implementation. However, the function we added, z 5 , is always better than the other ones.
Against all odds, the Basic Tabu Search is, with all the objective functions, as good as the other search algorithms. Still regarding the local search algorithms, we can see that the second version of the Diversification by Multiple Neighborhoods is much better than the first one with the objectives z 3 and z 4 .
For the details of our results see the report [11].
Conclusion
The purpose of this work was to reproduce with COMET the results obtained for the SONET design problems by R. Aringhieri and M. Dell'Amico in 2005 in ANSI C (see [2] for details).
We have implemented in COMET the algorithms and the objective functions described in this paper. We found it relevant to add a variant of one of their local search algorithms and a new objective function. Unfortunately, we cannot exactly compare our results to theirs because the set of 230 instances they generated is not available. However, for the IDP problem, we obtained better results on 15 of the 160 instances compared, and similar results on the others. Unfortunately, we did not find their results for the SRAP problem. Still, for the SRAP problem, compared to the results obtained by O. Goldschmidt, A. Laugier and E. Olinick in 2003 [6], we obtained better results: we have more instances for which the algorithm reaches the lower bound and fewer unfeasible instances. It would be interesting to have all the instances and the results in order to fully compare our results.
In the end we can make two main observations. Firstly, for these two problems, the more an algorithm uses diversification, the better it is. We actually tried different intensification methods for the local search algorithms, but none of them improved the results; worse, they gave us rather bad results.
Secondly, based on our results, we can say that our objective function implemented in COMET finds more good solutions than the other ones. It is a constraint-based objective function taking into account the violation of every constraint. Hence it has the advantage of being both more generic and more precise than the dedicated functions, with better results.
| 6,079 |
0910.1255
|
2086153481
|
This paper presents a new method and a constraint-based objective function to solve two problems related to the design of optical telecommunication networks, namely the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP). These network topology problems can be represented as a graph partitioning with capacity constraints as shown in previous works. We present here a new objective function and a new local search algorithm to solve these problems. Experiments conducted in C
|
As seen previously, local search requires the definition of a neighborhood from which the next solution is chosen. The authors of @cite_5 use the same neighborhood for all of their metaheuristics. The basic move assigns an item @math from a partition, @math , to another partition, @math . The authors also consider the neighborhood obtained by swapping two items, @math and @math , from two different partitions, @math and @math . However, instead of trying all the pairs of items, a swap is attempted only if the solution obtained by simply assigning @math to the partition @math is unfeasible.
|
{
"abstract": [
"This paper considers two problems that arise in the design of optical telecommunication networks when a ring-based topology is adopted, namely the SONET Ring Assignment Problem and the Intraring Synchronous Optical Network Design Problem. We show that these two network topology problems correspond to graph partitioning problems with capacity constraints: the first is a vertex partitioning problem, while the latter is an edge partitioning problem. We consider solution methods for both problems, based on metaheuristic algorithms. We first describe variable objective functions that depend on the transition from one solution to a neighboring one, then we apply several diversification and intensification techniques including Path Relinking, eXploring Tabu Search and Scatter Search. Finally we propose a diversification method based on the use of multiple neighborhoods. A set of extensive computational results is used to compare the behaviour of the proposed methods and objective functions."
],
"cite_N": [
"@cite_5"
],
"mid": [
"1969385974"
]
}
|
Sonet Network Design Problems
|
This paper presents a new algorithm and an objective function to solve two real-world combinatorial optimization problems from the field of network design. These two problems, the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP), have been shown to be NP-hard and have already been solved by combinatorial optimization techniques. This work extends the seminal ideas introduced by R. Aringhieri and M. Dell'Amico in 2005 in [2]. This paper is organized as follows. In the remainder of this section we introduce the two problems we have worked on, and the local search techniques used to solve them. We also introduce constrained optimization models for the two problems. We then present the previous work on SRAP and IDP in Section 2. Section 3 describes the key ingredients needed to implement the local search algorithms. Finally, the results are shown in Section 4.
Optical networks topologies
During the last few years the number of Internet-based application users has increased exponentially, and so has the demand for bandwidth. To enable fast transmission of large quantities of data, fiber-optic technology is the current solution in telecommunications.
The Synchronous Optical NETwork (SONET) in North America and Synchronous Digital Hierarchy (SDH) in Europe and Japan are the standard designs for fiber optics networks. They have a ring-based topology, in other words, they are a collection of rings.
Rings Each customer is connected to one or more rings, and can send, receive and relay messages using an add-drop multiplexer (ADM). Two bidirectional links connect each customer to its neighboring customers on the ring. In a bidirectional ring the traffic between two nodes can be sent clockwise or counterclockwise. This topology allows an enhanced survivability of the network: if a failure occurs on a link, the traffic originally transmitted on this link is sent on the surviving part of the ring. The volume of traffic on any ring is limited by the link capacity, called B. The cost of this kind of network is defined by the cost of the different components used in it.
There are different ways to represent a network. In this paper, we consider two network topologies described by R. Aringhieri and M. Dell'Amico in 2005 in [2]. In both topologies the goal is to minimize the cost of the network while guaranteeing that the customers' demands, in terms of bandwidth, are satisfied.
The models associated with these topologies are based on graphs. Given an undirected graph G = (V, E), with V = {1, . . . , n}, the set of nodes represents the customers and E, the set of edges, stands for the customers' traffic demands. A communication between two customers u and v corresponds to the weighted edge (u, v) in the graph, where the weight d uv is the fixed traffic demand. Note that d uv = d vu and that d uu = 0.
First topology (SRAP)
In the first topology, each customer is connected to exactly one ring. All of these local rings are connected with a device called digital cross connector (DXC) to a special ring, called the federal ring. The traffic between two rings is transmitted over this special ring. Like the other rings, the federal ring is limited by the capacity B. Because DXCs are so much more expensive than ADMs we want to have the smallest possible number of them. As there is a one-to-one relationship between the ring and the DXC, minimizing the number of rings is equivalent to minimizing the number of DXCs. The problem associated to this topology is called SONET Ring Assignment Problem (SRAP) with capacity constraint. Figure 1 shows an example of this topology. Model This topology is modeled by a decomposition of the set of nodes V into a partition, each subset of the partition representing a particular ring. Assigning a node to a subset of the partition in the model is then equivalent to assigning a customer to a ring.
Formally, let V 1 ,V 2 , . . . ,V k be a partitioning of V into k subsets. Each customer in the subset V i is assigned to the i-th local ring. As each customer is connected with an ADM to one and only one ring, and each local ring is connected to the federal ring with a DXC, there are exactly |V | ADMs and k DXCs used in the corresponding SRAP network.
Hence, minimizing the number of rings is equivalent to minimizing k subject to the following constraints:
∑_{u ∈ V_i} ∑_{v ∈ V, v ≠ u} d_uv ≤ B,   ∀ i = 1, . . . , k        (1)

∑_{i=1}^{k−1} ∑_{j=i+1}^{k} ∑_{u ∈ V_i} ∑_{v ∈ V_j} d_uv ≤ B        (2)
Constraint (1) imposes that the total traffic routed on each ring does not exceed the capacity B. In other words, for a given ring i, it forces the total traffic demand of all the customers connected to this ring to be lower than or equal to the bandwidth. Constraint (2) forces the load of the federal ring to be less than or equal to B. To do so, it sums the traffic demands between all the pairs of customers connected to different rings. Figure 2 illustrates the relation between the node partitioning model and the first topology, SRAP. We can see that, because the nodes 1, 3, 5 and 6 are in the same partition, they are connected to the same ring. Similarly, the nodes 2, 4 and 7 are on the same ring. For this problem we can easily compute the lower bound k lb introduced in [6]. Indeed, we want to know the minimum number of partitions needed to route all the traffic. Reasoning on the total traffic amount, if we sum all the traffic demands of the graph and divide this sum by the bandwidth B, we trivially obtain a minimum for the number of rings, that is, a lower bound on the number of partitions. Moreover, since we cannot have a fractional number of partitions, we round this fraction up.
k_lb = ⌈ ( ∑_{u=1}^{n−1} ∑_{v=u+1}^{n} d_uv ) / B ⌉
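The two constraints and the lower bound translate directly into a few lines of Python; the demand matrix is given as a dictionary of symmetric demands and the partition as a list of node sets, which is our own representation for illustration only.

import math

def srap_feasible(partition, V, d, B):
    """Check constraints (1) and (2) for a SRAP solution given as a list of node sets.
    `d` holds symmetric demands d[(u, v)] = d[(v, u)]; missing pairs mean zero demand."""
    # constraint (1): load of each local ring
    for ring in partition:
        load = sum(d.get((u, v), 0) for u in ring for v in V if v != u)
        if load > B:
            return False
    # constraint (2): load of the federal ring (traffic between different rings)
    federal = sum(d.get((u, v), 0)
                  for i, ri in enumerate(partition) for rj in partition[i + 1:]
                  for u in ri for v in rj)
    return federal <= B

def lower_bound(V, d, B):
    """k_lb = ceil( sum of all demands / B ), counting each pair {u, v} once
    (nodes are assumed to be integers here)."""
    total = sum(d.get((u, v), 0) for u in V for v in V if u < v)
    return math.ceil(total / B)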
Second topology (IDP)
In the second topology, customers can be connected to more than one ring. If two customers want to communicate, they have to be connected to the same ring. In this case, the DXCs are no longer needed and neither is the federal ring. However, more ADMs are used than in the first topology. Here the most expensive component is the ADM, although its price has significantly dropped over the past few years. It is important, in this topology, to have the smallest possible number of ADMs. This problem is called the Intra-ring Synchronous Optical Network Design Problem (IDP). Figure 3 illustrates this topology. Model Contrary to the SRAP problem, there is no need to assign each customer to a particular ring because customers can be connected to several rings. Here the model is based on a partition of the edges of the graph, where a subset of the partition corresponds to a ring.
Formally, let E 1 , E 2 , . . . , E k be a partitioning of E in k subsets and Nodes(E i ) be the set of endpoint nodes of the edges in E i . Each subset of the partition corresponds to a ring, in other words, each customer in Nodes(E i ) is linked to the i-th ring. In the corresponding IDP network, there are
∑_{i=1}^{k} |Nodes(E_i)| ADMs and no DXC.
Hence, minimizing the number of ADMs is equivalent to minimizing
∑_{i=1}^{k} |Nodes(E_i)| subject to

∑_{(u,v) ∈ E_i} d_uv ≤ B,   ∀ i = 1, . . . , k        (3)
Constraint (3) imposes that the traffic in each ring does not exceed the capacity B. Figure 4 shows the relation between the edge partitioning and the second topology. If all the edges of a node are in the same partition, this node is connected to a single ring. We can see, for example, that node 4 has all its edges in the same partition and is therefore connected to only one ring. On the contrary, the edges of node 2 are in two different partitions, so it is connected to two rings. The SRAP problem can be seen as a node partitioning problem, whereas IDP is an edge partitioning problem on the graph described above, both subject to capacity constraints. These graph partitioning problems were introduced in [6] and [7].
Both of these problems are NP-hard (see O. Goldschmidt, A. Laugier and E. Olinick in 2003, [6], and O. Goldschmidt, D. Hochbaum, A. Levin and E. Olinick in 2003, [7] for details). The principal constraint, the load constraint, is similar to a capacity constraint, yet different: in a capacity constraint the variables are the terms of the sum, while in the load constraint the variables appear below the summation sign, i.e. they determine which d uv are summed. The question is how to choose the d uv (which are data) that count for the load.
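Similarly, the IDP cost and constraint (3) can be evaluated as follows for an edge partition represented as a list of edge sets; again the representation is ours and is only meant to illustrate the model.

def idp_cost_and_feasibility(edge_partition, d, B):
    """Return (number of ADMs, feasible?) for an IDP solution given as a list of edge sets.
    The ADM count is sum_i |Nodes(E_i)|; constraint (3) bounds the traffic of each ring.
    Edges are assumed to be keyed in `d` with the same orientation as in the partition."""
    adms = 0
    feasible = True
    for edges in edge_partition:
        nodes = {u for (u, v) in edges} | {v for (u, v) in edges}
        adms += len(nodes)                               # one ADM per node per ring
        if sum(d.get((u, v), 0) for (u, v) in edges) > B:  # constraint (3)
            feasible = False
    return adms, feasible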
Brief introduction of Local Search
In order to solve these two combinatorial optimization problems efficiently and quickly, we decided to use Local Search instead of an exact algorithm. Indeed, it permits an efficient search among the candidate solutions, by stepping from one solution to another.
Principles Local search is a metaheuristic based on the iterative improvement of an objective function. It has proved very efficient on many combinatorial optimization problems such as the Maximum Clique Problem (L. Cavique, C. Rego and I. Themido in 2001, [9]) or the Graph Coloring Problem (J.P. Hansen and J.K. Hao in 2002, [10]). It can be used on problems formulated either as pure optimization problems, or as constrained optimization problems where the goal is to optimize an objective function while respecting some constraints. Local search algorithms perform local moves in the space of candidate solutions, called the search space, trying to improve the objective function, until a solution deemed optimal is found or a time bound is reached. Defining the neighborhood graph and the method used to explore it are two of the key ingredients of local search algorithms.
The approach for solving combinatorial optimization problems with local search is very different from the systematic tree search of constraint and integer programming. Local search belongs to the family of metaheuristic algorithms, which are incomplete by nature and cannot prove optimality. However, on many problems it will isolate an optimal or high-quality solution in a very short time: local search sacrifices optimality guarantees for performance. In our case, we can compute the lower bound to either prove that the obtained solution is optimal or estimate how close it is to optimality, hence local search is well suited.
Basic algorithm A local search algorithm starts from a candidate solution and then iteratively moves to a neighboring solution. This is only possible if a neighborhood relation is defined on the search space. Typically, for every candidate solution, we define a subset of the search space to be the neighborhood. Moves are performed from neighbor to neighbor, hence the name local search. The basic principle is to choose among the neighbors the one with the best value for the objective function. The problem is then that the algorithm can get stuck in local optima. Metaheuristics, such as Tabu Search, are added to avoid this. In Tabu Search, the last t visited configurations are left out of the search (t being a parameter of the algorithm): this ensures that the algorithm can escape local optima, at least at order t. A pseudo-code is given in Algorithm 1.
Termination of local search can be based on a time bound. Another common choice is to terminate when the best solution found by the algorithm has not been improved in a given number of iterations. Local search algorithms are typically incomplete algorithms, as the search may stop even if the best solution found by the algorithm is not optimal. This can happen even if termination is due to the impossibility of improving the solution, as the optimal solution can lie far from the neighborhood of the solutions crossed by the algorithms.
Choose or construct an initial solution S_0 ;
S ← S_0 ; /* S is the current solution */
S* ← S_0 ; /* S* is the best solution so far */
bestValue ← objValue(S_0) ; /* bestValue is the evaluation of S* */
T ← ∅ ; /* T is the Tabu list */
while Termination criterion not satisfied do
    N(S) ← all the neighboring solutions of S ; /* Neighborhood exploration */
    S ← a solution in N(S) minimizing the objective ;
    if objValue(S) < bestValue then /* The solution found is better than S* */
        S* ← S ;
        bestValue ← objValue(S) ;
    end
    Record a tabu for the current move in T (delete the oldest entry if necessary) ;
end
Algorithm 1: Tabu Search
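As a complement to Algorithm 1, here is a minimal runnable Python skeleton of the same tabu loop; the neighborhood and the objective are passed in as functions, and the tabu list simply stores recently visited solutions, which is a simplification of attribute-based tabu lists.

import time

def tabu_search(initial, neighbors, objective, tabu_len=10, time_limit=5.0):
    """Minimal tabu search skeleton (illustrative). `neighbors(s)` returns the
    neighboring solutions of s, `objective(s)` their cost."""
    current = best = initial
    best_value = objective(initial)
    tabu = []                                    # last visited solutions are forbidden
    deadline = time.time() + time_limit
    while time.time() < deadline:
        candidates = [s for s in neighbors(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=objective)  # best admissible neighbor
        if objective(current) < best_value:
            best, best_value = current, objective(current)
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)                           # keep only the last tabu_len entries
    return best, best_value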
COMET
COMET is an object-oriented language created by Pascal Van Hentenryck and Laurent Michel. It has a constraint-based architecture that makes it easy to use when implementing local search algorithms, and more important, constraint-based local search algorithms (see [1] for details).
Moreover, it has a rich modeling language, including invariants, and a rich constraint language featuring numerical, logical and combinatorial constraints. Constraints and objective functions are differentiable objects maintaining the properties used to direct the graph exploration. The constraints maintain their violations and the objectives their evaluation. One of its most important particularities is that differentiable objects can be queried to determine incrementally the impact of local moves on their properties.
As we can see in constraint (1), the sums are over data (d uv ) and their index sets are determined by the variables (u ∈ V i , v ∈ V, v ≠ u). We rely on COMET's built-in invariants to define a constraint representing the load.
Greedy algorithms for SRAP
In [6] the SRAP problem is considered. The authors propose three greedy algorithms with different heuristics: the edge-based, the cut-based and the node-based. The first two algorithms start by assigning each node to a different ring. At each iteration they reduce the number of rings by merging two rings V i and V j if V i ∪ V j is a feasible ring with respect to the capacity constraint. In the edge-based heuristic, the two rings connected by the maximum-weight edge are merged, while in the cut-based heuristic, the two rings with the maximum total weight of the edges having one endpoint in each of them are merged. Algorithm 2 shows the pseudo-code for the edge-based heuristic.
F ← E ; /* Initialize the set of edges that have not been used yet */
∀v ∈ V ring(v) ← v ; /* Assign each node to a different ring */
while F ≠ ∅ do /* There are still some edges that have not been used */
    Choose a maximum-weight edge (u, v) ∈ F ;
    i ← ring(u), j ← ring(v) ;
    if V_i ∪ V_j is a feasible ring then /* Merging the rings gives a feasible ring */
        ∀w ∈ V_j ring(w) ← i ;
        F ← F \ {(x, y) | ring(x) = i, ring(y) = j} ;
    else
        F ← F \ {(u, v)} ;
    end
end
Algorithm 2: Edge-Based Heuristic

Given a value k, the node-based heuristic starts by randomly assigning a node to each of the k rings. At each iteration it first chooses the ring V_i with the largest unused capacity, then the unassigned node u with the largest traffic with the nodes in V_i. Finally it adds u to the ring V_i, disregarding the capacity constraint. The pseudo-code for this heuristic is shown in Algorithm 3. The node-based heuristic is run ten times. At each run, if a feasible solution is found, the corresponding value for k is kept and the next run takes k − 1 as input. The idea behind this is to try to improve the objective at each run.
U ← V ; /* Initialize the set of nodes that have not been used yet */
for i = 1 to k do /* Assign k random nodes to the k partitions */
    Choose u ∈ U ; V_i ← {u} ; U ← U \ {u} ;
end
while U ≠ ∅ do /* There are still unused nodes */
    Choose a minimum-capacity ring V_i ;
    Choose u ∈ U maximizing ∑_{v ∈ V_i} d_uv ;
    ring(u) ← V_i ; U ← U \ {u} ; /* Assign u to V_i */
end
Algorithm 3: Node-Based Heuristic
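A compact Python sketch of the edge-based heuristic is given below; the merge test only checks constraint (1) (the federal-ring constraint (2) is left out for brevity), the demand dictionary is assumed to contain both orientations of each pair, and all identifiers are ours.

def edge_based_srap(V, d, B):
    """Edge-based SRAP heuristic (sketch). Start with one ring per node; scan the edges
    by decreasing demand and merge the two endpoint rings when the merged ring still
    satisfies constraint (1)."""
    ring = {v: v for v in V}                     # ring[v] = identifier of v's ring

    def load(ring_id):
        members = [u for u in V if ring[u] == ring_id]
        # constraint (1): total demand of the ring's customers towards all other customers
        return sum(d.get((u, v), 0) for u in members for v in V if v != u)

    for (u, v) in sorted(d, key=d.get, reverse=True):   # heaviest edges first
        i, j = ring[u], ring[v]
        if i == j:
            continue
        if load(i) + load(j) <= B:               # merged ring would still satisfy (1)
            for w in V:
                if ring[w] == j:
                    ring[w] = i                  # merge ring j into ring i
    return ring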
To test these heuristics, the authors have randomly generated 160 instances. The edge-based and the cut-based heuristics are run first. If they have found a feasible solution and obtained a value for k, the node-based heuristic is then run with the smallest value obtained for k as input. If they have not, the node-based heuristic takes as input a random value from the range [k lb , |V |], where k lb is the lower bound described previously.
MIP and Branch and Cut for IDP
A special case of the IDP problem, where all the edges have the same weight, is studied in [7]. This special case is called the K-Edge-Partitioning problem. Given a simple undirected graph G = (V, E) and a value k < |E|, we want to find a partitioning of E, {E 1 , E 2 , . . . , E l }, such that ∀i ∈ {1, . . . , l}, |E i | ≤ k. The authors present two linear-time approximation algorithms with a fixed performance guarantee. Y. Lee, H. Sherali, J. Han and S. Kim in 2000 ([8]) have studied the IDP problem with an additional constraint imposing that for each ring i, |Nodes(E i )| ≤ R. The authors present a mixed-integer programming model for the problem and develop a branch-and-cut algorithm. They also introduce a heuristic to generate an initial feasible solution, and another one to improve it. To initialize a ring, the heuristic first adds the node u with the maximum graph degree with respect to unassigned edges, and then adds to the partition the edge [u, v] such that the graph degree of v is maximum. It iteratively grows the partition by choosing a node such that the total traffic does not exceed the limit B. A set of 40 instances is generated to test these heuristics and the branch-and-cut algorithm.
Local Search for SRAP and IDP
More recently, these two problems have been studied in [2]. As seen previously, local search requires the definition of a neighborhood from which the next solution is chosen. The authors of [2] use the same neighborhood for all of their metaheuristics. The basic move assigns an item x from a partition, P 1 , to another partition, P 2 . The authors also consider the neighborhood obtained by swapping two items, x and y, from two different partitions, P 1 and P 2 . However, instead of trying all the pairs of items, a swap of x and y is attempted only if the solution obtained by simply assigning x to the partition P 2 is unfeasible.
In order to compute a starting solution for the IDP problem, the authors describe four different heuristics. The first heuristic introduced in [2] orders the edges by decreasing weight; at each iteration it tries to assign the heaviest unassigned edge to the ring with the smallest residual capacity with respect to the capacity constraint. If no assignment is possible, the current edge is assigned to a new ring. The second one sorts the edges by increasing weight and tries to assign the current edge to the current ring if the capacity constraint is respected; otherwise the ring is no longer considered and a new ring is initialized with the current edge.
The two other methods described in [2] are based on the idea that, to save ADMs, a good solution should have very dense rings. They are both greedy and rely on a clique algorithm. In graph theory, a clique in an undirected graph G = (V, E) is a subset of the vertex set C ⊆ V such that for every two vertices in C there exists an edge connecting them. Finding a clique is not that easy; one way to do it is to use a "Union-Find" strategy: find two cliques A and B such that each node in A is adjacent to each node in B, then merge the two cliques (Union). The associated heuristic starts by considering each node to be a clique of size one, and merges two cliques into a larger one until no more merges are possible.
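As an illustration of this greedy clique-merging strategy, here is a minimal Python sketch; the data structures and names are ours, not those of [2], and the merge test simply checks that every node of one clique is adjacent to every node of the other.

def greedy_clique_merging(nodes, adjacent):
    """Greedy clique merging: start from singleton cliques and repeatedly merge two
    cliques A, B when every node of A is adjacent to every node of B.
    `adjacent` is a function (u, v) -> bool. Illustrative sketch only."""
    cliques = [{v} for v in nodes]          # each node is a clique of size one
    merged = True
    while merged:
        merged = False
        for i in range(len(cliques)):
            for j in range(i + 1, len(cliques)):
                a, b = cliques[i], cliques[j]
                # A and B can be merged if their union is still a clique
                if all(adjacent(u, v) for u in a for v in b):
                    cliques[i] = a | b
                    del cliques[j]
                    merged = True
                    break
            if merged:
                break
    return cliques

# Example usage on a small graph given as an edge set (hypothetical data).
edges = {(1, 2), (2, 3), (1, 3), (3, 4)}
adj = lambda u, v: (u, v) in edges or (v, u) in edges
print(greedy_clique_merging([1, 2, 3, 4], adj))  # -> [{1, 2, 3}, {4}]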
The third method, Clique-BF, iteratively selects a clique of unassigned edges whose total traffic is less than or equal to B. It then assigns the clique to the ring that minimizes the residual capacity and, if possible, preserves feasibility. If no such assignment exists, the clique is placed in a new ring. Algorithm 4 shows the pseudo-code associated with this heuristic. The last algorithm, Cycle-BF, is like the previous method, but instead of looking for a clique, at each iteration it tries to find a cycle with as many chords as possible.
They also introduce four objective functions, one of which depends on the current and the next status of the search. Let z 0 be the basic objective function counting the number of rings of a solution for SRAP, and the total number of ADMs for IDP, and let BN be the highest load of a ring in the current solution.
U ← E ; r ← 0 ;
while U ≠ ∅ do
    Heuristically find a clique C ⊂ U such that weight(C) ≤ B ;
    /* Search for a ring whose weight plus the weight of the clique does not exceed B
       and whose weight is the biggest possible */
    j ← arg min { B − weight(E_i) − weight(C) : i ∈ {1, . . . , k}, B − weight(E_i) − weight(C) ≥ 0 } ;
    if j = null then r++ ; j ← r ; end
    E_j ← E_j ∪ C ; U ← U \ C ;
end
Algorithm 4: Clique-BF
z 1 = z 0 + max{0, BN − B}

z 2 = z 1 + α · RingLoad(r) if the last move has created a new ring r, and z 2 = z 1 otherwise

z 3 = z 0 · B + BN

z 4 : an adaptive objective whose first expression is z 4a = z 0 · B + BN (= z 3 ); its other expressions depend on the transition from the current search status to the next one.

The first function z 1 minimizes the basic function z 0 . When BN > B, it also penalizes the unfeasible solutions, taking into account only one ring, the one with the highest overload. In addition to the penalty for unfeasible solutions, z 2 penalizes the moves that increase the number of rings. Function z 3 encourages solutions with small z 0 , while among all the solutions with the same value of z 0 it prefers the ones in which the ring loads are balanced. The last objective function, z 4 , is an adaptive technique that modifies the evaluation according to the status of the search. It is a variable objective function with different expressions for the different transitions from the current status to the next one.
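To make the role of BN concrete, the following Python sketch evaluates z 0 , z 1 and z 3 for a SRAP solution described only by its ring loads; the function name and the way the loads are passed in are our own simplifications, not the formulation of [2].

def evaluate_objectives(ring_loads, B):
    """Evaluate the basic objective z0 and the penalized objectives z1 and z3
    for a SRAP solution described only by its ring loads (illustrative sketch)."""
    z0 = len(ring_loads)            # number of rings (z0 for SRAP)
    BN = max(ring_loads)            # highest ring load in the current solution
    z1 = z0 + max(0, BN - B)        # penalize only the most overloaded ring
    z3 = z0 * B + BN                # prefer fewer rings, then a smaller maximum load
    return z0, z1, z3

# Example: three rings with loads 120, 150 and 170 under capacity B = 155 (hypothetical).
print(evaluate_objectives([120, 150, 170], B=155))   # -> (3, 18, 635)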
Our work
In this section we present the different tools needed to implement the Constraint-Based Local Search algorithms for SRAP and IDP. First we introduce the starting solution, then the neighborhoods and the objective functions. Finally we present the different local search algorithms.
Starting solution
Most of the time, local search starts from a random initial solution. However, we have tested other possibilities and two other options proved to be more efficient.
The best initialization method assigns all the items, nodes for SRAP or edges for IDP, to the same partition. This solution is certainly unfeasible, as all the traffic is on only one ring. It biases the search towards solutions with a minimum value for the cost and a very bad value for the capacity constraints' violations. Surprisingly, this is the option that gave us the best results on large instances.
We had good hopes for another method, which first computes the lower bound k lb (described in Section 2) and randomly assigns all the items to exactly k lb partitions. The idea was to let the local search reduce the number of violations. This starting solution was good on small instances but not so good on large ones. The same holds for a purely random solution, which corresponds, for these problems, to a solution where every item is randomly assigned to some partition.
Neighborhoods
In a generic partitioning problem there are usually two basic neighborhoods. From a given solution, we can move an object from a subset to another subset or swap two objects assigned to two different subsets. For SRAP a neighboring solution is produced by moving a node from a ring to another (including a new one) or by swapping two nodes assigned to two different rings. The same kind of neighborhood can be used for IDP: moving an edge from a ring to another or swapping two edges.
In some cases it is more efficient to restrict the neighborhood to the feasible space. We have tested different variants of the basic neighborhood applying this idea, for instance by always moving an item out of the worst partition (w.r.t. the capacity constraint) and even by assigning it to the partition with the lowest load. However, these variants appear to be less efficient than the basic neighborhood. As will be seen later, it seems that on these problems it is necessary to keep the search as broad as possible.
Objective function
We have compared the four objective functions described in [2] (see Section 2) to a new one we have defined: z 5 .
z 5 = z 0 + ∑ p ∈ partitions violations(p) where
partitions are all the rings (in the case of the SRAP problem the federal ring is also included),
violations(p) = capacity(p) − B if the load of p exceeds B, and violations(p) = 0 otherwise, where capacity(p) denotes the current load of the ring p.
This objective function minimizes the basic function z 0 and penalizes the unfeasible solutions, but contrary to the previous objectives, this penalty is based on all the constraints. Every constraint is considered to be violated by a certain amount (the current load of its ring minus B). By summing all the violations of the current solution, we obtain the total violation over all the constraints, and we can say precisely how far we are from a feasible solution. If the current solution is feasible,
∑ p ∈ partitions violations(p) = 0.
This objective also has the nice property of being purely local, depending only on the current solution and not on the moves that led to it. Notice that a feasible solution with 4 rings will be preferred to an unfeasible solution with 3 rings, as z 0 is much smaller than the load of a ring.
Local Search
We have proposed a new algorithm called DMN2 which proved to be efficient on both problems. It is a variant of the Diversification by Multiple Neighborhoods (DMN) algorithm proposed in [2]. DMN is based on Tabu Search and adds a mechanism to perform diversification when the search keeps going round and round without improving the objective (even though it is not at a local minimum). This replaces the classical random restart steps. We refine this particular mechanism by proposing several ways of escaping such areas.
More precisely, on our problems, after a series of consecutive non-improving iterations, the DMN algorithm empties a partition by moving all its items to another partition, disregarding the capacity constraint and locally minimizing the objective function. Our function z 5 is a particular case, because it integrates the capacity constraints; for it, the "z 5 " version of DMN we have implemented moves the items to the partition that minimizes z 5 . The results in [2] show a general trend on SRAP and IDP: the more diversification is performed, the better the results. Following this idea, we propose different ways of performing the DMN step, which gives our algorithm DMN2. In DMN2, when the search needs to be diversified, it randomly chooses among three diversification methods (d 1 , d 2 , d 3 ). The first method, d 1 , is the diversification used in DMN. The second one, d 2 , generates a random solution, in the same way as a classic random restart. Finally, d 3 randomly chooses a number m in the range [1, k], where k is the number of rings, and applies m random moves.
In the end, our general algorithm starts with a solution where all the items are in the same partition. Then it applies one of the local search algorithms described before. If the solution returned by the local search is feasible but has an objective value greater than the lower bound k lb , it empties one partition by randomly assigning all its items to another partition. It then runs the local search again, until it finds a solution with an objective value equal to k lb or until the time limit is exceeded.
Results
The objective functions and the metaheuristics, described in Section 3.3 and Section 3.4 respectively, have been coded in COMET and tested on an Intel-based, dual-core, dual-processor Dell PowerEdge 1855 blade server running under Linux. The instances used are from the literature.
Benchmark
To test the algorithms, we used two sets of instances. The first one was introduced in [6]. The authors generated 80 geometric instances, based on the fact that customers tend to communicate more with their close neighbors, and 80 random instances. Each subset contains 40 low-demand instances, with a ring capacity B = 155 Mb/s, and 40 high-demand instances, where B = 622 Mb/s. The traffic demand between two customers, u and v, is determined by a discrete uniform random variable corresponding to the number of T1 lines required for the anticipated volume of traffic between u and v. A T1 line has an approximate capacity of 1.5 Mb/s. The number of T1 lines is randomly picked in the interval [3,7] for the low-demand cases, while it is selected from the range [11,17] for the high-demand cases. The generated graphs have |V | ∈ {15, 25, 30, 50}. Among the 160 instances generated by O. Goldschmidt, A. Laugier and E. Olinick in 2003, 42 have been proven to be unfeasible by R. Aringhieri and M. Dell'Amico using CPLEX 8.0 (see [2]).
The second set of instances was presented in [8]. The authors generated 40 instances with a ring capacity B equal to 48 T1 lines, and the number of T1 lines required for the traffic between two customers was chosen in the interval [1,30]. The considered graphs have |V | ∈ {15, 20, 25} and |E| ∈ {30, 35}. Most of the instances in this set are unfeasible.
Note that all the instances are feasible for the IDP problem, since we could always assign each demand to a different partition.
Computational Results
We now describe the results obtained for SRAP and IDP on the above two benchmark sets by the algorithms Basic Tabu Search, eXploring Tabu Search, Scatter Search (SS), DMN and DMN2. For each algorithm we consider the five objective functions of Section 3.3, but for the SS we use the three functions described in Section 3.4.
We gave a time limit of 5 minutes to each run of an algorithm. However, we observed that the average time to find the best solution is less than 1 minute. Obviously, the algorithm terminates if the current best solution found is equal to the lower bound k lb . In case the lower bound is not reached, we define as a high-quality solution a solution for which the evaluation of the objective is equal to k lb + 1. Recall that objective functions z 2 and z 3 cannot be applied with the Scatter Search.

Figure 5 only shows, for each algorithm, the number of optimal solutions found with the objective function z 5 . With the other objectives, the number of optimal solutions found is zero, which is why we did not show them on the diagram. However, the other objectives did find good solutions. Our conclusion is that the other functions may not discriminate enough between the different solutions. For this problem, we can see that the eXploring Tabu Search does not give good results. This can be due to a too early "backtracking": after a fixed number of consecutive non-improving iterations, the search goes back to a previous configuration and applies the second best move. In the case of the IDP problem, it can take many more iterations to improve the value of the objective function than for the SRAP problem. Indeed, the value of the objective function depends on the number of partitions to which a customer belongs, while an iteration moves only one edge; reducing the value by only one may require moving several edges.

Figure 6 shows, for each algorithm and each objective function, the number of instances for which the search has found an optimal solution, i.e. a solution with k lb partitions (in dark gray on the diagram); the number of those for which the best feasible solution found has k lb + 1 partitions (in gray); and, in light gray, the number of instances for which it has found a feasible solution with more than k lb + 1 partitions. From the objective functions perspective, we can see that z 4 , supposed to be the most improving one, is not that good in the COMET implementation. However, the function we added, z 5 , is always better than the other ones.
Against all odds, the Basic Tabu Search is, with all the objective functions, as good as the other search algorithms. Still regarding the local search algorithms, we can see that the second version of the Diversification by Multiple Neighborhoods is much better than the first one with the objectives z 3 and z 4 .
For the details of our results see the report [11].
Conclusion
The purpose of this work was to reproduce with COMET the results obtained for the SONET design problems by R. Aringhieri and M. Dell'Amico in 2005 in ANSI C (see [2] for details).
We have implemented in COMET the algorithms and the objective functions described in this paper. We found it relevant to add a variant of one of their local search algorithms and a new objective function. Unfortunately, we cannot exactly compare our results to theirs because the set of 230 instances they generated is not available. However, for the IDP problem, we obtained better results on 15 of the 160 instances compared, and similar results on the others. Unfortunately, we did not find their results for the SRAP problem. Still, for the SRAP problem, compared to the results obtained by O. Goldschmidt, A. Laugier and E. Olinick in 2003 [6], we obtained better results: we have more instances for which the algorithm reaches the lower bound and fewer unfeasible instances. It would be interesting to have all the instances and the results in order to fully compare our results.
In the end we can make two main observations. Firstly, for these two problems, the more an algorithm uses diversification, the better it is. We actually tried different intensification methods for the local search algorithms, but none of them improved the results; worse, they gave us rather bad results.
Secondly, based on our results, we can say that our objective function implemented in COMET finds more good solutions than the other ones. It is a constraint-based objective function taking into account the violation of every constraint. Hence it has the advantage of being both more generic and more precise than the dedicated functions, with better results.
| 6,079 |
0910.1255
|
2086153481
|
This paper presents a new method and a constraint-based objective function to solve two problems related to the design of optical telecommunication networks, namely the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP). These network topology problems can be represented as a graph partitioning with capacity constraints as shown in previous works. We present here a new objective function and a new local search algorithm to solve these problems. Experiments conducted in C
|
In order to compute a starting solution for the IDP problem, the authors describe four different heuristics. The first heuristic introduced in @cite_5 orders the edges by decreasing weight; at each iteration it tries to assign the heaviest unassigned edge to the ring with the smallest residual capacity with respect to the capacity constraint. If no assignment is possible, the current edge is assigned to a new ring. The second one sorts the edges by increasing weight and tries to assign the current edge to the current ring if the capacity constraint is respected; otherwise the ring is no longer considered and a new ring is initialized with the current edge.
|
{
"abstract": [
"This paper considers two problems that arise in the design of optical telecommunication networks when a ring-based topology is adopted, namely the SONET Ring Assignment Problem and the Intraring Synchronous Optical Network Design Problem. We show that these two network topology problems correspond to graph partitioning problems with capacity constraints: the first is a vertex partitioning problem, while the latter is an edge partitioning problem. We consider solution methods for both problems, based on metaheuristic algorithms. We first describe variable objective functions that depend on the transition from one solution to a neighboring one, then we apply several diversification and intensification techniques including Path Relinking, eXploring Tabu Search and Scatter Search. Finally we propose a diversification method based on the use of multiple neighborhoods. A set of extensive computational results is used to compare the behaviour of the proposed methods and objective functions."
],
"cite_N": [
"@cite_5"
],
"mid": [
"1969385974"
]
}
|
Sonet Network Design Problems
|
This paper presents a new algorithm and an objective function to solve two real-world combinatorial optimization problems from the field of network design. These two problems, the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP), have been shown to be NP-hard and have already been solved by combinatorial optimization techniques. This work extends the seminal ideas introduced by R. Aringhieri and M. Dell'Amico in 2005 in [2]. This paper is organized as follows. In the remainder of this section we introduce the two problems we have worked on, and the local search techniques used to solve them. We also introduce constrained optimization models for the two problems. We then present the previous work on SRAP and IDP in Section 2. Section 3 describes the key ingredients needed to implement the local search algorithms. Finally, the results are shown in Section 4.
Optical networks topologies
During the last few years the number of Internet-based application users has increased exponentially, and so has the demand for bandwidth. To enable fast transmission of large quantities of data, fiber-optic technology is the current solution in telecommunications.
The Synchronous Optical NETwork (SONET) in North America and Synchronous Digital Hierarchy (SDH) in Europe and Japan are the standard designs for fiber optics networks. They have a ring-based topology, in other words, they are a collection of rings.
Rings Each customer is connected to one or more rings, and can send, receive and relay messages using an add-drop multiplexer (ADM). Two bidirectional links connect each customer to its neighboring customers on the ring. In a bidirectional ring the traffic between two nodes can be sent clockwise or counterclockwise. This topology allows an enhanced survivability of the network: if a failure occurs on a link, the traffic originally transmitted on this link is sent on the surviving part of the ring. The volume of traffic on any ring is limited by the link capacity, called B. The cost of this kind of network is defined by the cost of the different components used in it.
There are different ways to represent a network. In this paper, we consider two network topologies described by R. Aringhieri and M. Dell'Amico in 2005 in [2]. In both topologies the goal is to minimize the cost of the network while guaranteeing that the customers' demands, in terms of bandwidth, are satisfied.
The models associated with these topologies are based on graphs. Given an undirected graph G = (V, E), with V = {1, . . . , n}, the set of nodes represents the customers and E, the set of edges, stands for the customers' traffic demands. A communication between two customers u and v corresponds to the weighted edge (u, v) in the graph, where the weight d uv is the fixed traffic demand. Note that d uv = d vu and that d uu = 0.
First topology (SRAP)
In the first topology, each customer is connected to exactly one ring. All of these local rings are connected with a device called digital cross connector (DXC) to a special ring, called the federal ring. The traffic between two rings is transmitted over this special ring. Like the other rings, the federal ring is limited by the capacity B. Because DXCs are so much more expensive than ADMs we want to have the smallest possible number of them. As there is a one-to-one relationship between the ring and the DXC, minimizing the number of rings is equivalent to minimizing the number of DXCs. The problem associated to this topology is called SONET Ring Assignment Problem (SRAP) with capacity constraint. Figure 1 shows an example of this topology. Model This topology is modeled by a decomposition of the set of nodes V into a partition, each subset of the partition representing a particular ring. Assigning a node to a subset of the partition in the model is then equivalent to assigning a customer to a ring.
Formally, let V 1 ,V 2 , . . . ,V k be a partitioning of V into k subsets. Each customer in the subset V i is assigned to the i-th local ring. As each customer is connected with an ADM to one and only one ring, and each local ring is connected to the federal ring with a DXC, there are exactly |V | ADMs and k DXCs used in the corresponding SRAP network.
Hence, minimizing the number of rings is equivalent to minimizing k subject to the following constraints:
∑_{u ∈ V_i} ∑_{v ∈ V, v ≠ u} d_uv ≤ B,   ∀ i = 1, . . . , k        (1)

∑_{i=1}^{k−1} ∑_{j=i+1}^{k} ∑_{u ∈ V_i} ∑_{v ∈ V_j} d_uv ≤ B        (2)
Constraint (1) imposes that the total traffic routed on each ring does not exceed the capacity B. In other words, for a given ring i, it forces the total traffic demand of all the customers connected to this ring to be lower than or equal to the bandwidth. Constraint (2) forces the load of the federal ring to be less than or equal to B. To do so, it sums the traffic demands between all the pairs of customers connected to different rings. Figure 2 illustrates the relation between the node partitioning model and the first topology, SRAP. We can see that, because the nodes 1, 3, 5 and 6 are in the same partition, they are connected to the same ring. Similarly, the nodes 2, 4 and 7 are on the same ring. For this problem we can easily compute the lower bound k lb introduced in [6]. Indeed, we want to know the minimum number of partitions needed to route all the traffic. Reasoning on the total traffic amount, if we sum all the traffic demands of the graph and divide this sum by the bandwidth B, we trivially obtain a minimum for the number of rings, that is, a lower bound on the number of partitions. Moreover, since we cannot have a fractional number of partitions, we round this fraction up.
k_lb = ⌈ ( ∑_{u=1}^{n−1} ∑_{v=u+1}^{n} d_uv ) / B ⌉
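To make the SRAP model concrete, the following Python sketch checks constraints (1)–(2) for a candidate node partition and computes the lower bound k_lb. It is only an illustration (the paper's implementation is in COMET); the representation of the demand matrix d and of the partition is an assumption of the sketch.

```python
import math

def srap_feasible(partition, d, B):
    """Check SRAP constraints (1)-(2) for a node partition.

    partition: list of sets of nodes (the rings V_1, ..., V_k)
    d: symmetric dict of dicts, d[u][v] = traffic demand between u and v
    B: ring capacity
    """
    nodes = set().union(*partition)
    # Constraint (1): load of each local ring
    for ring in partition:
        if sum(d[u][v] for u in ring for v in nodes if v != u) > B:
            return False
    # Constraint (2): load of the federal ring (traffic between different rings)
    federal = sum(d[u][v]
                  for i, Vi in enumerate(partition)
                  for Vj in partition[i + 1:]
                  for u in Vi for v in Vj)
    return federal <= B

def lower_bound(d, B):
    """k_lb = ceiling of (sum of all pairwise demands) / B."""
    nodes = sorted(d)
    total = sum(d[u][v] for i, u in enumerate(nodes) for v in nodes[i + 1:])
    return math.ceil(total / B)
```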
Second topology (IDP)
In the second topology, customers can be connected to more than one ring. If two customers want to communicate, they have to be connected to the same ring. In this case, the DXCs are no longer needed and neither is the federal ring. However, more ADMs are used than in the first topology. Here the most expensive component is the ADM, although its price has significantly dropped over the past few years. It is therefore important, in this topology, to have the smallest possible number of ADMs. This problem is called the Intra-ring Synchronous Optical Network Design Problem (IDP). Figure 3 illustrates this topology. Model Unlike the SRAP problem, there is no need to assign each customer to a particular ring, because customers can be connected to several rings. Here the model is based on a partition of the edges of the graph, where a subset of the partition corresponds to a ring.
Formally, let E_1, E_2, . . . , E_k be a partitioning of E into k subsets and Nodes(E_i) be the set of endpoint nodes of the edges in E_i. Each subset of the partition corresponds to a ring; in other words, each customer in Nodes(E_i) is linked to the i-th ring. The corresponding IDP network uses ∑_{i=1}^{k} |Nodes(E_i)| ADMs and no DXC.
Hence, minimizing the number of ADMs is equivalent to minimizing
∑_{i=1}^{k} |Nodes(E_i)|   subject to   ∑_{(u,v) ∈ E_i} d_uv ≤ B,  ∀ i = 1, . . . , k    (3)
Constraint (3) imposes that the traffic in each ring does not exceed the capacity B. Figure 4 shows the relation between the edge partitioning and the second topology. If all the edges of a node are in the same partition, this node is connected to a single ring. We can see, for example, that node 4 has all its edges in the same partition and is therefore connected to only one ring. Conversely, the edges of node 2 are in two different partitions, so it is connected to two rings. The SRAP problem can thus be seen as a node partitioning problem, and IDP as an edge partitioning problem on the graph described above, both subject to capacity constraints. These graph partitioning problems have been introduced in [6] and [7].
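The IDP cost and constraint (3) can be evaluated with a similarly small Python sketch (illustrative only; the edge partition is represented as a list of sets of edges, and d maps an edge to its demand):

```python
def idp_cost_and_feasible(edge_partition, d, B):
    """Evaluate an IDP edge partition.

    edge_partition: list of sets of edges (u, v) -- the rings E_1, ..., E_k
    d: dict mapping an edge (u, v) to its demand d_uv
    Returns (number of ADMs, whether constraint (3) holds on every ring).
    """
    adms, feasible = 0, True
    for ring in edge_partition:
        # |Nodes(E_i)|: every endpoint of an edge of the ring needs one ADM
        adms += len({x for (u, v) in ring for x in (u, v)})
        # Constraint (3): total traffic routed on the ring
        if sum(d[e] for e in ring) > B:
            feasible = False
    return adms, feasible
```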
Both of these problems are NP-hard (see O. Goldschmidt, A. Laugier and E. Olinick in 2003, [6], and O. Goldschmidt, D. Hochbaum, A. Levin and E. Olinick in 2003, [7] for details). The principal constraint, the load constraint, is similar to a capacity constraint, yet different: a capacity constraint holds on the variables in the sum, while the load constraint holds on the variables below the sum. The question is how to choose the d_uv (which are data) that count towards the load.
Brief introduction to Local Search
In order to solve these two combinatorial optimization problems efficiently and quickly, we decided to use Local Search instead of an exact algorithm. Indeed, it makes it possible to search efficiently among the candidate solutions by moving step by step from one solution to another.
Principles Local search is a metaheuristic based on the iterative improvement of an objective function. It has proved very efficient on many combinatorial optimization problems such as the Maximum Clique Problem (L. Cavique, C. Rego and I. Themido in 2001 in [9]) or the Graph Coloring Problem (J.P. Hansen and J.K. Hao in 2002 in [10]). It can be used on problems formulated either as pure optimization problems, or as constrained optimization problems where the goal is to optimize an objective function while respecting some constraints. Local search algorithms perform local moves in the space of candidate solutions, called the search space, trying to improve the objective function, until a solution deemed optimal is found or a time bound is reached. Defining the neighborhood graph and the method to explore it are two of the key ingredients of local search algorithms.
The approach for solving combinatorial optimization problems with local search is very different from the systematic tree search of constraint and integer programming. Local search belongs to the family of metaheuristic algorithms, which are incomplete by nature and cannot prove optimality. However, on many problems it will isolate an optimal or high-quality solution in a very short time: local search sacrifices optimality guarantees for performance. In our case, we can compute the lower bound to either prove that the obtained solution is optimal, or estimate how close to optimal it is, hence local search is well suited.
Basic algorithm A local search algorithm starts from a candidate solution and then iteratively moves to a neighboring solution. This is only possible if a neighborhood relation is defined on the search space. Typically, for every candidate solution, we define a subset of the search space to be its neighborhood. Moves are performed from neighbor to neighbor, hence the name local search. The basic principle is to choose among the neighbors the one with the best value of the objective function. The problem is then that the algorithm can get stuck in local optima. Metaheuristics, such as Tabu Search, are added to avoid this. In Tabu Search, the last t visited configurations are left out of the search (t being a parameter of the algorithm): this ensures that the algorithm can escape local optima, at least at order t. The pseudo-code is given in Algorithm 1.
Termination of local search can be based on a time bound. Another common choice is to terminate when the best solution found by the algorithm has not been improved in a given number of iterations. Local search algorithms are typically incomplete algorithms, as the search may stop even if the best solution found by the algorithm is not optimal. This can happen even if termination is due to the impossibility of improving the solution, as the optimal solution can lie far from the neighborhood of the solutions crossed by the algorithms.
Choose or construct an initial solution S_0 ;
S ← S_0 ;                              /* S is the current solution */
S* ← S_0 ;                             /* S* is the best solution so far */
bestValue ← objValue(S_0) ;            /* bestValue is the evaluation of S* */
T ← ∅ ;                                /* T is the Tabu list */
while Termination criterion not satisfied do
    N(S) ← all the neighboring solutions of S ;   /* Neighborhood exploration */
    S ← a solution in N(S) minimizing the objective ;
    if objValue(S) < bestValue then               /* The solution found is better than S* */
        S* ← S ;
        bestValue ← objValue(S) ;
    end
    Record a tabu for the current move in T (delete the oldest entry if necessary) ;
end

Algorithm 1: Tabu Search
COMET
COMET is an object-oriented language created by Pascal Van Hentenryck and Laurent Michel. It has a constraint-based architecture that makes it easy to use when implementing local search algorithms, and more important, constraint-based local search algorithms (see [1] for details).
Moreover, it has a rich modeling language, including invariants, and a rich constraint language featuring numerical, logical and combinatorial constraints. Constraints and objective functions are differentiable objects maintaining the properties used to direct the graph exploration: the constraints maintain their violations and the objectives their evaluation. One of its most important particularities is that differentiable objects can be queried to determine incrementally the impact of local moves on their properties.
As we can see in constraint (1), the sums range over data (the d_uv) and are determined by the variables (u ∈ V_i, v ∈ V, v ≠ u). We will rely on COMET's built-in invariants to define a constraint representing the load.
Greedy algorithms for SRAP
The SRAP problem is considered in [6]. The authors propose three greedy algorithms with different heuristics: edge-based, cut-based and node-based. The first two algorithms start by assigning each node to a different ring. At each iteration they reduce the number of rings by merging two rings V_i and V_j if V_i ∪ V_j is a feasible ring with respect to the capacity constraint. In the edge-based heuristic, the two rings joined by the maximum weight edge are merged, while in the cut-based heuristic, the two rings with the maximum total weight of the edges having one endpoint in each of them are merged. Algorithm 2 shows the pseudo-code for the edge-based heuristic.
F ← E ;                             /* Initialize the set of edges that have not been used yet */
∀ v ∈ V : ring(v) ← v ;              /* Assign each node to a different ring */
while F ≠ ∅ do                       /* There are still some edges that have not been used */
    Choose a maximum weight edge (u, v) ∈ F ;
    i ← ring(u), j ← ring(v) ;
    if V_i ∪ V_j is a feasible ring then   /* Merging the rings gives a feasible ring */
        ∀ w ∈ V_j : ring(w) ← i ;
        F ← F \ {(x, y) | ring(x) = i, ring(y) = j} ;
    else
        F ← F \ {(u, v)} ;
    end
end

Algorithm 2: Edge-Based Heuristic

Given a value k, the node-based heuristic starts by randomly assigning a node to each of the k rings. At each iteration it first chooses the ring V_i with the largest unused capacity, then the unassigned node u with the largest traffic towards the nodes already in V_i. Finally it adds u to the ring V_i, disregarding the capacity constraint. The pseudo-code for this heuristic is shown in Algorithm 3. The node-based heuristic is run ten times. At each run, if a feasible solution is found, the corresponding value of k is kept and the next run takes k − 1 as input. The idea behind this is to try and improve the objective at each run.
U ← V ;                              /* Initialize the set of nodes that have not been used yet */
for i = 1 to k do                    /* Assign k random nodes to the k partitions */
    Choose u ∈ U, V_i ← {u}, U ← U \ {u}
end
while U ≠ ∅ do                       /* There are some unused nodes */
    Choose the ring V_i with the minimum load (largest unused capacity) ;
    Choose u ∈ U maximizing ∑_{v ∈ V_i} d_uv ;
    ring(u) ← V_i, U ← U \ {u} ;     /* Assign u to V_i */
end

Algorithm 3: Node-Based Heuristic
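For illustration, here is a rough Python transcription of the edge-based merging idea of Algorithm 2 (a sketch, not the authors' code); it reuses the dict-of-dicts demand representation introduced earlier and checks the feasibility of a merged ring with the load of constraint (1).

```python
def edge_based(d, B):
    """Greedy SRAP heuristic: start with one ring per node and merge the two
    rings joined by the heaviest remaining edge whenever the merge is feasible."""
    nodes = sorted(d)
    ring = {v: v for v in nodes}                 # ring(v): label of the ring of v
    members = {v: {v} for v in nodes}            # label -> set of nodes on the ring
    F = {(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:] if d[u][v] > 0}

    def load(S):
        # Ring load as in constraint (1): traffic of its members towards everybody
        return sum(d[u][v] for u in S for v in nodes if v != u)

    while F:
        u, v = max(F, key=lambda e: d[e[0]][e[1]])          # heaviest unused edge
        i, j = ring[u], ring[v]
        if i != j and load(members[i] | members[j]) <= B:   # merge is feasible
            F = {(x, y) for (x, y) in F if {ring[x], ring[y]} != {i, j}}
            for w in members[j]:
                ring[w] = i
            members[i] |= members.pop(j)
        else:
            F.discard((u, v))                               # give up on this edge
    return list(members.values())
```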
To test these heuristics, the authors have randomly generated 160 instances. The edge-based and the cut-based heuristics are run first. If they have found a feasible solution, and hence a value for k, the node-based heuristic is then run with the smallest value obtained for k as input. If they have not, the node-based heuristic takes as input a random value from the range [k_lb, |V|], where k_lb is the lower bound described previously.
MIP and Branch and Cut for IDP
A special case of the IDP problem, in which all the edges have the same weight, is studied in [7]. This special case is called the K-Edge-Partitioning problem: given a simple undirected graph G = (V, E) and a value k < |E|, we want to find a partitioning of E, {E_1, E_2, . . . , E_l}, such that ∀i ∈ {1, . . . , l}, |E_i| ≤ k. The authors present two linear-time approximation algorithms with fixed performance guarantees. Y. Lee, H. Sherali, J. Han and S. Kim in 2000 ([8]) have studied the IDP problem with an additional constraint requiring that, for each ring i, |Nodes(E_i)| ≤ R. The authors present a mixed-integer programming model for the problem and develop a branch-and-cut algorithm. They also introduce a heuristic to generate an initial feasible solution, and another one to improve it. To initialize a ring, the heuristic first adds the node u with the maximum graph degree with respect to unassigned edges, and then adds to the partition the edge [u, v] such that the graph degree of v is maximum. It iteratively grows the partition by choosing a node such that the total traffic does not exceed the limit B. A set of 40 instances is generated to test these heuristics and the branch-and-cut algorithm.
Local Search for SRAP and IDP
More recently, these two problems have been studied in [2]. We saw previously that local search requires a neighborhood to be defined in order to choose the next solution. The authors of [2] use the same neighborhood for all of their metaheuristics. It tries to move an item x from a partition P_1 to another partition P_2. The authors also consider the neighborhood obtained by swapping two items x and y from two different partitions P_1 and P_2, but instead of trying all the pairs of items, they only try to swap the two items if the solution resulting from the assignment of x to the partition P_2 is unfeasible.
In order to compute a starting solution for the IDP problem, the authors describe four different heuristics. The first heuristic introduced in [2] orders the edges by decreasing weight; at each iteration it tries to assign the heaviest unassigned edge to the ring with the smallest residual capacity with respect to the capacity constraint. If no assignment is possible, the current edge is assigned to a new ring. The second one sorts the edges by increasing weight and tries to assign the current edge to the current ring if the capacity constraint is respected; otherwise the ring is no longer considered and a new ring is initialized with the current edge.
The two other methods described in [2] are based on the idea that, to save ADMs, a good solution should have very dense rings. They are both greedy and rely on a clique algorithm. In graph theory, a clique in an undirected graph G = (V, E) is a subset of the vertex set C ⊆ V such that for every two vertices in C there exists an edge connecting the two. Finding a clique is not that easy; one way to do it is to use a "Union-Find" strategy: find two cliques A and B such that each node in A is adjacent to each node in B (Find), then merge the two cliques (Union). The associated heuristic starts by considering each node to be a clique of size one, and merges two cliques into a larger clique until no more merges are possible.
The third method, Clique-BF, iteratively selects a clique of unassigned edges with total traffic less than or equal to B. It then assigns it to the ring that minimizes the residual capacity and, if possible, preserves feasibility. If neither is possible, it places it in a new ring. Algorithm 4 shows the pseudo-code associated with this heuristic. The last algorithm, Cycle-BF, is like the previous method, but instead of looking for a clique at each iteration it tries to find a cycle with as many chords as possible.
They also introduce four objective functions, one of which depends on the current and the next status of the search. Let z 0 be the basic objective function counting the number of rings of a solution for SRAP, and the total number of ADMs for IDP, and let BN be the highest load of a ring in the current solution.
U ← E ; r ← 0 ;
while U ≠ ∅ do
    Heuristically find a clique C ⊂ U such that weight(C) ≤ B ;
    /* Search for a ring whose load plus the weight of the clique does not
       exceed B and is as large as possible */
    j ← arg min { B − weight(E_i) − weight(C) : i ∈ {1, . . . , k}, B − weight(E_i) − weight(C) ≥ 0 } ;
    if j = null then
        r ← r + 1 ; j ← r ;
    end
    E_j ← E_j ∪ C ;
    U ← U \ C ;
end
Algorithm 4: Clique-BF
z_1 = z_0 + max{0, BN − B}

z_2 = z_1 + α · RingLoad(r) if the last move has created a new ring r, and z_2 = z_1 otherwise

z_3 = z_0 · B + BN

z_4: a variable objective function which, in particular, takes the value z_4a = z_0 · B + BN (= z_3) for some transitions of the search status

The first function z_1 minimizes the basic function z_0. When BN > B, it also penalizes the unfeasible solutions, by taking into account only one ring, the one with the highest overload. In addition to the penalty for the unfeasible solutions, z_2 penalizes the moves that increase the number of rings. Function z_3 encourages solutions with small z_0, while among all the solutions with the same value of z_0 it prefers the ones in which the ring loads are balanced. The last objective function z_4 is an adaptive technique that modifies the evaluation according to the status of the search. It is a variable objective function having different expressions for different transitions from the current status to the next one.
Our work
In this section we present the different tools needed to implement the Constraints Based Local Search algorithms for SRAP and IDP. First we introduce the starting solution, then the neighborhoods and the objective functions. Finally we present the different local search algorithms.
Starting solution
Most of the time, local search starts from a random initial solution. However, we have tested other possibilities and two other options proved to be more efficient.
The best initialization method assigns all the items, nodes for SRAP or edges for IDP, to the same partition. This solution is certainly unfeasible, as all the traffic is on only one ring. It biases the search towards solutions with a minimum value for the cost and a very bad value for the capacity constraints' violations. Astonishingly, this is the one that gave us the best results on large instances.
We had good confidence in another one which first computes the lower bound k lb (described in section 2) and randomly assigns all the items to exactly k lb partitions. The idea was to let the Local Search reduce the number of violations. This starting solution was good on small instances and not so good on large ones. It was the same with a random solution, which corresponds, for these problems, to a solution where all the items are randomly assigned to a partition.
Neighborhoods
In a generic partitioning problem there are usually two basic neighborhoods. From a given solution, we can move an object from a subset to another subset or swap two objects assigned to two different subsets. For SRAP a neighboring solution is produced by moving a node from a ring to another (including a new one) or by swapping two nodes assigned to two different rings. The same kind of neighborhood can be used for IDP: moving an edge from a ring to another or swapping two edges.
In some cases it is more efficient to restrict the neighborhood to the feasible space. We have tested different variants of the basic neighborhood applying this idea, by choosing the item to move from the worst partition (w.r.t. the capacity constraint) and even by assigning it to the partition with the lowest load. However, this appears to be less efficient than the basic neighborhood. As will be seen later, it seems that on these problems it is necessary to keep the search as broad as possible.
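As a concrete illustration of the basic "move" neighborhood, here is a small Python generator (a sketch under the list-of-sets representation used in the previous snippets; the swap neighborhood would be written similarly).

```python
def move_neighbors(partition):
    """Basic 'move' neighborhood: relocate one item to another ring,
    or to a brand new ring.  partition is a list of sets of items."""
    k = len(partition)
    for i, ring in enumerate(partition):
        for item in ring:
            for j in list(range(k)) + [k]:          # index k means "open a new ring"
                if j == i:
                    continue
                neighbor = [set(r) for r in partition]
                if j == k:
                    neighbor.append(set())
                neighbor[i].discard(item)
                neighbor[j].add(item)
                yield [r for r in neighbor if r]    # drop rings left empty
            # a 'swap' move would exchange two items between two different rings
```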
Objective function
We have compared the four objective functions described in [2] (see Section 2) to a new one we have defined: z 5 .
z_5 = z_0 + ∑_{p ∈ partitions} violations(p)

where partitions is the set of all the rings (in the case of the SRAP problem the federal ring is also included), and

violations(p) = load(p) − B if the load of p exceeds B, and violations(p) = 0 otherwise.
This objective function minimizes the basic function z_0 and penalizes the unfeasible solutions, but unlike the previous objectives, this penalty is based on all the constraints. We consider that every constraint is violated by a certain amount (its current load minus B). By summing all the violations of the current solution, we obtain the total violation over all the constraints, and we can say precisely how far we are from a feasible solution. If the current solution is feasible, ∑_{p ∈ partitions} violations(p) = 0.
This objective also has the nice property of being purely local, depending only on the current solution and not on the other moves. Notice that a feasible solution with 4 rings will be preferred to an unfeasible solution with 3 rings, as z_0 is much smaller than the load of a ring.
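A minimal Python sketch of how z_5 could be evaluated for a SRAP solution (illustrative only; the actual implementation relies on COMET's constraint violations, and the data conventions are those of the earlier snippets):

```python
def z5_srap(partition, d, B):
    """z5 = z0 + total violation, with z0 = number of rings for SRAP."""
    nodes = set().union(*partition)
    loads = [sum(d[u][v] for u in ring for v in nodes if v != u)   # local rings
             for ring in partition]
    loads.append(sum(d[u][v]                                       # federal ring
                     for i, Vi in enumerate(partition)
                     for Vj in partition[i + 1:]
                     for u in Vi for v in Vj))
    violation = sum(max(0, load - B) for load in loads)
    return len(partition) + violation
```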
Local Search
We have proposed a new algorithm called DMN2 which proved to be efficient on both problems. It is a variant of the Diversification by Multiple Neighborhoods (DMN) algorithm proposed in [2]. DMN is based on Tabu Search, and adds a mechanism to perform diversification when the search is going round in circles without improving the objective (even though it is not at a local minimum). This replaces the classical random restart steps. We refine this particular mechanism by proposing several ways of escaping such areas.
More precisely, on our problems, after a series of consecutive non-improving iterations, the DMN algorithm empties a partition by moving all its items to another partition, disregarding the capacity constraint and locally minimizing the objective function. There is a particular case for our function z_5, because it integrates the capacity constraints: the "z_5" version of DMN we have implemented moves the items to another partition minimizing z_5. The results in [2] show a general trend on SRAP and IDP: the more diversification is performed, the better the results. Following this idea, we propose different ways of performing the DMN step, which gives our algorithm DMN2. In DMN2, when the search needs to be diversified, it randomly chooses among three diversification methods (d_1, d_2, d_3). The first method, d_1, is the diversification used in DMN. The second one, d_2, generates a random solution, in the same way as a classic random restart. Finally, d_3 randomly chooses a number m in the range [1, k], where k is the number of rings, and applies m random moves.
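The DMN2 diversification step can be sketched as follows in Python (an illustration of the random choice among d_1, d_2 and d_3, not the actual COMET code); the callbacks empty_one_partition, random_solution and random_move stand for the three mechanisms described above and are assumed to be provided elsewhere.

```python
import random

def dmn2_diversify(solution, empty_one_partition, random_solution, random_move):
    """Pick one of DMN2's three diversification methods at random.

    d1: empty one partition (the original DMN diversification),
    d2: restart from a fresh random solution,
    d3: apply m random moves, with m drawn uniformly from [1, k].
    The solution is assumed to be a list of rings (sets of items).
    """
    choice = random.choice(("d1", "d2", "d3"))
    if choice == "d1":
        return empty_one_partition(solution)
    if choice == "d2":
        return random_solution()
    m = random.randint(1, len(solution))     # k = current number of rings
    for _ in range(m):
        solution = random_move(solution)
    return solution
```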
In the end, our general algorithm starts with a solution where all the items are in the same partition. It then applies one of the local search algorithms described before. If the solution returned by the local search is feasible but has an objective value greater than the lower bound k_lb, it empties one partition by randomly assigning all its items to another one, and runs the local search once again, until it finds a solution with an objective value equal to k_lb or until the time limit is exceeded.
Results
The objective functions and the metaheuristics, respectively described in Section 3.3 and Section 3.4, have been coded in COMET and tested on an Intel-based, dual-core, dual-processor Dell PowerEdge 1855 blade server running under Linux. The instances used are from the literature.
Benchmark
To test the algorithms, we used two sets of instances. The first one has been introduced in [6]. The authors generated 80 geometric instances, based on the fact that customers tend to communicate more with their close neighbors, and 80 random instances. Each of these subsets contains 40 low-demand instances, with a ring capacity B = 155 Mb/s, and 40 high-demand instances, where B = 622 Mb/s. The traffic demand between two customers u and v is determined by a discrete uniform random variable corresponding to the number of T1 lines required for the anticipated volume of traffic between u and v. A T1 line has an approximate capacity of 1.5 Mb/s. The number of T1 lines is randomly picked in the interval [3, 7] for the low-demand cases, while it is selected from the range [11, 17] for the high-demand cases. The generated graphs have |V| ∈ {15, 25, 30, 50}. Of the 160 instances generated by O. Goldschmidt, A. Laugier and E. Olinick in 2003, 42 have been proven to be unfeasible by R. Aringhieri and M. Dell'Amico using CPLEX 8.0 (see [2]).
The second set of instances has been presented in [8]. The authors generated 40 instances with a ring capacity B = 48 × T1 lines, and the number of T1 lines required for the traffic between two customers has been chosen in the interval [1, 30]. The considered graphs have |V| ∈ {15, 20, 25} and |E| ∈ {30, 35}. Most of the instances in this set are unfeasible.
Note that all the instances are feasible for the IDP problem, since we can always assign each demand to a different partition.
Computational Results
We now describe the results obtained for SRAP and IDP on the above two benchmark sets by the algorithms Basic Tabu Search, eXploring Tabu Search, Scatter Search (SS), DMN and DMN2. For each algorithm we consider the five objective functions of Section 3.3, but for the SS only three of these functions can be used.
We gave a time limit of 5 minutes to each run of an algorithm. However, we observed that the average time to find the best solution is less than 1 minute. Obviously, the algorithm terminates if the current best solution found is equal to the lower bound k_lb. In case the lower bound is not reached, we define as a high-quality solution a solution for which the evaluation of the objective is equal to k_lb + 1. Recall that objective functions z_2 and z_3 cannot be applied with the Scatter Search. Figure 5 shows, for each algorithm, only the number of optimal solutions found with the objective function z_5. With the other objectives, the number of optimal solutions found is zero, which is why we did not show them on the diagram. The other objectives nevertheless found good solutions. Our conclusion is that the other functions may not discriminate enough between the different solutions. For this problem, we can see that the eXploring Tabu Search does not give good results. This can be due to a too early "backtracking": after a fixed number of consecutive non-improving iterations, the search goes back to a previous configuration and applies the second best move. In the case of the IDP problem, it can take many more iterations to improve the value of the objective function than for the SRAP problem. Indeed, the value of the objective function depends on the number of partitions to which a customer belongs, while an iteration moves only one edge; reducing its value by only one may require moving several edges. Figure 6 shows, for each algorithm and each objective function, the number of instances for which the search has found an optimal solution, i.e. a solution with k_lb partitions (in dark gray on the diagram); the number of those for which the best feasible solution found has k_lb + 1 partitions (in gray); and, in light gray, the number of instances for which it has found a feasible solution with more than k_lb + 1 partitions. From the objective functions' perspective, we can see that z_4, supposed to be the most improving one, is not that good in the COMET implementation. However, the one we added, z_5, is always better than the other ones.
Against all odds, the Basic Tabu Search is, for all the objective functions, as good as the other search algorithms. Still regarding the local search algorithms, we can see that the second version of the Diversification by Multiple Neighborhoods (DMN2) is much better than the first one with the objectives z_3 and z_4.
For the details of our results see the report [11].
Conclusion
The purpose of this work was to reproduce with COMET the results obtained for the SONET Design Problems by R. Aringhieri and M. Dell'Amico in 2005 in ANSI C (see [2] for details).
We have implemented in COMET the algorithms and the objective functions described in this paper. We found it relevant to add a variant of one of their local search algorithms and a new objective function. Unfortunately, we cannot exactly compare our results to theirs because the set of 230 instances they have generated is not available. However, for the IDP problem, we obtained better results on 15 of the 160 instances compared, and similar results on the others. Unfortunately we did not find their results for the SRAP problem. Still, for SRAP, compared to the results obtained by O. Goldschmidt, A. Laugier and E. Olinick in 2003 [6], we obtained better results: we have more instances for which the algorithm reaches the lower bound and fewer unfeasible instances. It would be interesting to have all the instances and the results to fully compare our results.
In the end we can make two main observations. Firstly, for these two problems, the more an algorithm uses diversification, the better it performs. Actually, we have tried different intensification methods for the local search algorithms but none of them improved the results; worse, they gave us pretty bad results.
Secondly, based on our results, we can say that our objective function implemented in COMET finds more good solutions than the other ones. It is a constraint-based objective function taking into account the violation of every constraint. Hence it has the advantage of being both more generic and more precise than the dedicated functions, with better results.
| 6,079 |
0910.1255
|
2086153481
|
This paper presents a new method and a constraint-based objective function to solve two problems related to the design of optical telecommunication networks, namely the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP). These network topology problems can be represented as a graph partitioning with capacity constraints as shown in previous works. We present here a new objective function and a new local search algorithm to solve these problems. Experiments conducted in C
|
The two other methods described in @cite_5 are based on the idea that, to save ADMs, a good solution should have very dense rings. They are both greedy and rely on a clique algorithm. In graph theory, a clique in an undirected graph @math is a subset of the vertex set @math , such that for every two vertices in @math , there exists an edge connecting the two. Finding a clique is not that easy; one way to do it is to use a "Union-Find" strategy: find two cliques @math and @math such that each node in @math is adjacent to each node in @math (Find), then merge the two cliques (Union). The associated heuristic starts by considering each node to be a clique of size one, and merges two cliques into a larger clique until there are no more possible merges.
|
{
"abstract": [
"This paper considers two problems that arise in the design of optical telecommunication networks when a ring-based topology is adopted, namely the SONET Ring Assignment Problem and the Intraring Synchronous Optical Network Design Problem. We show that these two network topology problems correspond to graph partitioning problems with capacity constraints: the first is a vertex partitioning problem, while the latter is an edge partitioning problem. We consider solution methods for both problems, based on metaheuristic algorithms. We first describe variable objective functions that depend on the transition from one solution to a neighboring one, then we apply several diversification and intensification techniques including Path Relinking, eXploring Tabu Search and Scatter Search. Finally we propose a diversification method based on the use of multiple neighborhoods. A set of extensive computational results is used to compare the behaviour of the proposed methods and objective functions."
],
"cite_N": [
"@cite_5"
],
"mid": [
"1969385974"
]
}
|
Sonet Network Design Problems
|
This paper presents a new algorithm and an objective function to solve two real-world combinatorial optimization problems from the field of network design. These two problems, the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP), have been shown to be NP-hard and have already been solved by combinatorial optimization techniques. This work extends the seminal ideas introduced by R. Aringhieri and M. Dell'Amico in 2005 in [2]. This paper is organized as follows. In the remainder of this section we introduce the two problems we have worked on, and the local search techniques which have been used to solve them. We also introduce constrained optimization models for the two problems. We then present the previous work on SRAP and IDP in Section 2. Section 3 describes the key ingredients necessary to implement the local search algorithms. Finally, the results are shown in Section 4.
Optical networks topologies
During the last few years the number of users of internet-based applications has increased exponentially, and so has the demand for bandwidth. To enable fast transmission of large quantities of data, fiber-optic technology is the current solution in telecommunications.
The Synchronous Optical NETwork (SONET) in North America and Synchronous Digital Hierarchy (SDH) in Europe and Japan are the standard designs for fiber optics networks. They have a ring-based topology, in other words, they are a collection of rings.
Rings Each customer is connected to one or more rings, and can send, receive and relay messages using an add-drop-multiplexer (ADM). There are two bidirectional links connecting each customer to his neighboring customers on the ring. In a bidirectional ring the traffic between two nodes can be sent clockwise or counterclockwise. This topology allows an enhanced survivability of the network, specifically if a failure occurs on a link, the traffic originally transmitted on this link will be sent on the surviving part of the ring. The volume traffic on any ring is limited by the link capacity, called B. The cost of this kind of network is defined by the cost of the different components used in it.
There are different ways to represent a network. In this paper, we consider two network topologies described by R. Aringhieri and M. Dell'Amico in 2005 in [2]. In both topologies the goal is to minimize the cost of the network while guaranteeing that the customers' demands, in term of bandwidth, are satisfied.
The model associated to these topologies are based on graphs. Given an undirected graph G = (V, E), V = {1, . . . , n}, the set of nodes represent the customers and E, the set of edges, stand for the customers' traffic demands. A communication between two customers u and v corresponds to the weighted edge (u, v) in the graph, where the weight d uv is the fixed traffic demand. Note that d uv = d vu , and that d uu = 0.
First topology (SRAP)
In the first topology, each customer is connected to exactly one ring. All of these local rings are connected with a device called digital cross connector (DXC) to a special ring, called the federal ring. The traffic between two rings is transmitted over this special ring. Like the other rings, the federal ring is limited by the capacity B. Because DXCs are so much more expensive than ADMs we want to have the smallest possible number of them. As there is a one-to-one relationship between the ring and the DXC, minimizing the number of rings is equivalent to minimizing the number of DXCs. The problem associated to this topology is called SONET Ring Assignment Problem (SRAP) with capacity constraint. Figure 1 shows an example of this topology. Model This topology is modeled by a decomposition of the set of nodes V into a partition, each subset of the partition representing a particular ring. Assigning a node to a subset of the partition in the model is then equivalent to assigning a customer to a ring.
Formally, let V_1, V_2, . . . , V_k be a partitioning of V into k subsets. Each customer in the subset V_i is assigned to the i-th local ring. As each customer is connected with an ADM to one and only one ring, and each local ring is connected to the federal ring with a DXC, exactly |V| ADMs and k DXCs are used in the corresponding SRAP network.
Hence, minimizing the number of rings is equivalent to minimizing k subject to the following constraints:
∑_{u ∈ V_i} ∑_{v ∈ V, v ≠ u} d_uv ≤ B,  ∀ i = 1, . . . , k    (1)

∑_{i=1}^{k−1} ∑_{j=i+1}^{k} ∑_{u ∈ V_i} ∑_{v ∈ V_j} d_uv ≤ B    (2)
Constraint (1) imposes that the total traffic routed on each ring does not exceed the capacity B. In other words, for a given ring i, it forces the total traffic demand of all the customers connected to this ring to be less than or equal to the bandwidth. Constraint (2) forces the load of the federal ring to be less than or equal to B. To do so, it sums the traffic demands between all the pairs of customers connected to different rings. Figure 2 illustrates the relation between the node partitioning model and the first topology SRAP. We can see that, because the nodes 1, 3, 5 and 6 are in the same partition, they are connected to the same ring. Similarly, the nodes 2, 4 and 7 are on the same ring. For this problem we can easily compute a lower bound k_lb, introduced in [6]. Indeed, we want to know the minimum number of partitions needed to route all the traffic. Reasoning on the total traffic amount, if we sum all the traffic demands of the graph and divide this sum by the bandwidth B, we trivially obtain a minimum for the number of rings, that is, a lower bound on the number of partitions. Moreover, we cannot have a fractional number of partitions, which is why we round this ratio up:
k_lb = ⌈ ( ∑_{u=1}^{n−1} ∑_{v=u+1}^{n} d_uv ) / B ⌉
Second topology (IDP)
In the second topology, customers can be connected to more than one ring. If two customers want to communicate, they have to be connected to the same ring. In this case, the DXC are no longer needed and neither is the federal ring. However there are more ADM used than in the first topology. In this case, the most expensive component is the ADM although its price has significantly dropped over the past few years. It is important, in this topology, to have the smallest numbers of ADMs. This problem is called Intra-ring Synchronous Optical Network Design Problem (IDP). The figure 3 illustrates this topology. Model Contrarily to the SRAP problem, there is no need to assign each customer to a particular ring because customers can be connected to several rings. Here the model is based on a partition of the edges of the graph, where a subset of the partition corresponds to a ring.
Formally, let E_1, E_2, . . . , E_k be a partitioning of E into k subsets and Nodes(E_i) be the set of endpoint nodes of the edges in E_i. Each subset of the partition corresponds to a ring; in other words, each customer in Nodes(E_i) is linked to the i-th ring. The corresponding IDP network uses ∑_{i=1}^{k} |Nodes(E_i)| ADMs and no DXC.
Hence, minimizing the number of ADMs is equivalent to minimizing
∑_{i=1}^{k} |Nodes(E_i)|   subject to   ∑_{(u,v) ∈ E_i} d_uv ≤ B,  ∀ i = 1, . . . , k    (3)
Constraint (3) imposes that the traffic in each ring does not exceed the capacity B. Figure 4 shows the relation between the edge partitioning and the second topology. If all the edges of a node are in the same partition, this node will only be connected to a ring. We can see, for example, the node 4 has all its edges in the same partition, because of that, the node 4 is connected to only one ring. On the opposite, the edges of the node 2 are in two different partitions, so it is connected to two rings. The SRAP problem can be seen as a node partitioning problem, whereas IDP, as an edge partitioning problem for the graph described above, subject to capacity constraints. These graph partitioning problems have been introduced in [6] and [7].
Both of these problems are N P-hard (see O. Goldschmidt, A. Laugier and E. Olinick in 2003, [6], and O. Goldschmidt, D. Hochbaum, A. Levin and E. Olinick in 2003, [7] for details). The principal constraint, the load constraint, is similar to a capacity constraint, yet different: a capacity constraint holds on the variables in the sum, while the load constraint holds on the variables below the sum. The question is how to choose the d uv (which are data) that count for the load.
Brief introduction to Local Search
In order to solve these two combinatorial optimization problems efficiently and quickly, we decided to use Local Search instead of an exact algorithm. Indeed, it makes it possible to search efficiently among the candidate solutions by moving step by step from one solution to another.
Principles Local search is a metaheuristic based on iterative improvement of an objective function. It has been proved very efficient on many combinatorial optimization problems like the Maximum Clique Problem (L. Cavique, C. Rego and I. Themido in 2001 in [9]), or the Graph Coloring Problem (J.P. Hansen and J.K. Hao in 2002 in [10]). It can be used on problems which formulated either as mere optimization problems, or as constrained optimization problems where the goal is to optimize an objective function while respecting some constraints. Local search algorithms perform local moves in the space of candidate solutions, called the search space, trying to improve the objective function, until a solution deemed optimal is found or a time bound is reached. Defining the neighborhood graph and the method to explore it are two of the key ingredients of local search algorithms.
The approach for solving combinatorial optimization problems with local search is very different from the systematic tree search of constraint and integer programming. Local search belongs to the family of metaheuristic algorithms, which are incomplete by nature and cannot prove optimality. However on many problems, it will isolate a optimal or high-quality solution in a very short time: local search sacrifices optimality guarantees to performance. In our case, we can compute the lower bound to either prove that the obtained solution is optimum, or estimate its optimality, hence local search is well suited.
Basic algorithm A local search algorithm starts from a candidate solution and then iteratively moves to a neighboring solution. This is only possible if a neighborhood relation is defined on the search space. Typically, for every candidate solution, we define a subset of the search space to be the neighborhood. Moves are performed from neighbors to neighbors, hence the name local search. The basic principle is to choose among the neighbors the one with the best value for the objective function. The problem is then that the algorithm will be stuck in local optima. Metaheuristics, such as Tabu Search, are added to avoid this. In Tabu Search, the last t visited configurations are left out of the search (t being a parameter of the algorithm): this ensures that the algorithm can escape local optima, at least at order t. A pseudo-code is given on figure 1.
Termination of local search can be based on a time bound. Another common choice is to terminate when the best solution found by the algorithm has not been improved in a given number of iterations. Local search algorithms are typically incomplete algorithms, as the search may stop even if the best solution found by the algorithm is not optimal. This can happen even if termination is due to the impossibility of improving the solution, as the optimal solution can lie far from the neighborhood of the solutions crossed by the algorithms.
Choose or construct an initial solution S_0 ;
S ← S_0 ;                              /* S is the current solution */
S* ← S_0 ;                             /* S* is the best solution so far */
bestValue ← objValue(S_0) ;            /* bestValue is the evaluation of S* */
T ← ∅ ;                                /* T is the Tabu list */
while Termination criterion not satisfied do
    N(S) ← all the neighboring solutions of S ;   /* Neighborhood exploration */
    S ← a solution in N(S) minimizing the objective ;
    if objValue(S) < bestValue then               /* The solution found is better than S* */
        S* ← S ;
        bestValue ← objValue(S) ;
    end
    Record a tabu for the current move in T (delete the oldest entry if necessary) ;
end

Algorithm 1: Tabu Search
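For readers who prefer runnable code, here is a bare-bones Python version of such a tabu loop (a generic sketch, not the COMET program); the callbacks neighbors, objective and move_key are assumed to be supplied by the caller, and the tabu list is a bounded queue of recent move keys.

```python
from collections import deque

def tabu_search(initial, neighbors, objective, move_key, tenure=10, max_iters=1000):
    """Generic tabu search: move to the best neighbor whose move is not tabu,
    and remember the best solution seen so far."""
    current = initial
    best, best_value = initial, objective(initial)
    tabu = deque(maxlen=tenure)                     # the last `tenure` move keys
    for _ in range(max_iters):
        candidates = [(objective(s), move_key(current, s), s)
                      for s in neighbors(current)
                      if move_key(current, s) not in tabu]
        if not candidates:
            break                                   # every neighbor is tabu
        value, key, current = min(candidates, key=lambda t: t[0])
        tabu.append(key)                            # forbid undoing this move for a while
        if value < best_value:
            best, best_value = current, value
    return best, best_value
```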
COMET
COMET is an object-oriented language created by Pascal Van Hentenryck and Laurent Michel. It has a constraint-based architecture that makes it easy to use when implementing local search algorithms, and more important, constraint-based local search algorithms (see [1] for details).
Moreover, it has a rich modeling language, including invariants, and a rich constraint language featuring numerical, logical and combinatorial constraints. Constraints and objective functions are differentiable objects maintaining the properties used to direct the graph exploration. The constraints maintain their violations and the objectives their evaluation. One of its most important particularity, is that differentiable objects can be queried to determine incrementally the impact of local moves on their properties.
As we can see on the constraint (1), the sum are on datas (d uv ) and are determined by the variables (u ∈ V i , v ∈ V, v = u). We will rely on COMET 's built-in invariants to define a constraint to represent the load.
Greedy algorithms for SRAP
In [6] the SRAP problem is considered. They propose three greedy algorithms with different heuristics, the edge-based, the cut-based and the node-based. The first two algorithms start by assigning each node to a different ring. At each iteration they reduce the number of rings by merging two rings V i and V j if V i ∪ V j is a feasible ring for the capacity constraint. In the edge-based heuristic, the two rings with the maximum weight edge are merged. While in the cut-based heuristic, the two rings with the maximum total weight of the edges with one endpoint in each of them, are merged. Algorithm 2 shows the pseudo code for the edge-based heuristic.
F ← E ;                             /* Initialize the set of edges that have not been used yet */
∀ v ∈ V : ring(v) ← v ;              /* Assign each node to a different ring */
while F ≠ ∅ do                       /* There are still some edges that have not been used */
    Choose a maximum weight edge (u, v) ∈ F ;
    i ← ring(u), j ← ring(v) ;
    if V_i ∪ V_j is a feasible ring then   /* Merging the rings gives a feasible ring */
        ∀ w ∈ V_j : ring(w) ← i ;
        F ← F \ {(x, y) | ring(x) = i, ring(y) = j} ;
    else
        F ← F \ {(u, v)} ;
    end
end

Algorithm 2: Edge-Based Heuristic

Given a value k, the node-based heuristic starts by randomly assigning a node to each of the k rings. At each iteration it first chooses the ring V_i with the largest unused capacity, then the unassigned node u with the largest traffic towards the nodes already in V_i. Finally it adds u to the ring V_i, disregarding the capacity constraint. The pseudo-code for this heuristic is shown in Algorithm 3. The node-based heuristic is run ten times. At each run, if a feasible solution is found, the corresponding value of k is kept and the next run takes k − 1 as input. The idea behind this is to try and improve the objective at each run.
U ← V ;                              /* Initialize the set of nodes that have not been used yet */
for i = 1 to k do                    /* Assign k random nodes to the k partitions */
    Choose u ∈ U, V_i ← {u}, U ← U \ {u}
end
while U ≠ ∅ do                       /* There are some unused nodes */
    Choose the ring V_i with the minimum load (largest unused capacity) ;
    Choose u ∈ U maximizing ∑_{v ∈ V_i} d_uv ;
    ring(u) ← V_i, U ← U \ {u} ;     /* Assign u to V_i */
end

Algorithm 3: Node-Based Heuristic
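A possible Python rendering of this node-based heuristic (a sketch only, assuming demands stored as a symmetric dict of dicts d[u][v] and a target number of rings k); note that, as described above, it deliberately ignores the capacity constraint when placing a node.

```python
import random

def node_based(d, k):
    """Greedy SRAP heuristic: seed k rings with random nodes, then repeatedly
    give the least loaded ring the unassigned node it communicates with most."""
    nodes = list(d)
    seeds = random.sample(nodes, k)
    rings = [{s} for s in seeds]
    unassigned = [v for v in nodes if v not in seeds]

    def load(ring):
        # Ring load as in constraint (1): traffic of its members towards everybody
        return sum(d[u][v] for u in ring for v in nodes if v != u)

    while unassigned:
        ring = min(rings, key=load)                       # largest unused capacity
        u = max(unassigned, key=lambda x: sum(d[x][v] for v in ring))
        ring.add(u)                                       # capacity constraint ignored here
        unassigned.remove(u)
    return rings
```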
To test these heuristics, the authors have randomly generated 160 instances. The edge-based and the cut-based heuristics are run first. If they have found a feasible solution, and hence a value for k, the node-based heuristic is then run with the smallest value obtained for k as input. If they have not, the node-based heuristic takes as input a random value from the range [k_lb, |V|], where k_lb is the lower bound described previously.
MIP and Branch and Cut for IDP
A special case of the IDP problem where all the edges have the same weight, is studied in [7]. This special case is called the K-Edge-Partitioning problem. Given a simple undirected graph G = (V, E) and a value k < |E|, we want to find a partitioning of E, {E 1 , E 2 , . . . E l } such that ∀i ∈ {1, . . . , l}, |E i | ≤ k. The authors present two linear-time-approximation algorithms with fixed performance guarantee. Y. Lee, H. Sherali, J. Han and S. Kim in 2000 ( [8]), have studied the IDP problem with an additional constraint such that for each ring i, |Nodes(E i )| ≤ R. The authors present a mixed-integer programming model for the problem, and develop a branch-and-cut algorithm. They also introduce a heuristic to generate an initial feasible solution, and another one to improve the initial solution. To initialize a ring, the heuristic first, adds the node u with the maximum graph degree, with respect to unassigned edges, and then adds to the partition the edge [u, v] such that the graph degree of v is maximum. It iteratively increases the partition by choosing a node such that the total traffic does not exceed the limit B. A set of 40 instances is generated to test these heuristics and the branch-and-cut.
Local Search for SRAP and IDP
More recently, in [2], these two problems have been studied. Previously, we saw that with local search it is necessary to define a neighborhood to choose the next solution. The authors of [2] use the same for all of their metaheuristics. It tries to assign an item x from a partition, P 1 , to another partition, P 2 . The authors also consider the neighborhood obtained by swapping two items, x and y, from two different partitions, P 1 and P 2 . But instead of trying all the pairs of items, it will only try to swap the two items if the resulting solution of the assignment of x to the partition P 2 is unfeasible.
In order to compute a starting solution for the IDP problem, the authors describe four different heuristics. The first heuristic introduced in [2] ordered the edges by decreasing weight, at each iteration it tries to assign the edge with the biggest weight which is not already assigned, to the ring with the smallest residual capacity regarding to capacity constraint. If no assignment is possible, the current edge is assigned to a new ring. The second one, sorts the edges by increasing weight, and tries to assign the current edge to the current ring if the capacity constraint is respected, otherwise the ring is no longer considered and a new ring is initialized with the current edge.
The two other methods described in [2] are based on the idea that, to save ADMs, a good solution should have very dense rings. They are both greedy and rely on a clique algorithm. In graph theory, a clique in an undirected graph G = (V, E) is a subset of the vertex set C ⊆ V such that for every two vertices in C there exists an edge connecting the two. Finding a clique is not that easy; one way to do it is to use a "Union-Find" strategy: find two cliques A and B such that each node in A is adjacent to each node in B (Find), then merge the two cliques (Union). The associated heuristic starts by considering each node to be a clique of size one, and merges two cliques into a larger clique until no more merges are possible.
The third method, Clique-BF, iteratively selects a clique of unassigned edges with total traffic less than or equal to B. It then assigns it to the ring that minimizes the residual capacity and, if possible, preserves feasibility. If neither is possible, it places it in a new ring. Algorithm 4 shows the pseudo-code associated with this heuristic. The last algorithm, Cycle-BF, is like the previous method, but instead of looking for a clique at each iteration it tries to find a cycle with as many chords as possible.
They also introduce four objective functions, one of which depends on the current and the next status of the search. Let z 0 be the basic objective function counting the number of rings of a solution for SRAP, and the total number of ADMs for IDP, and let BN be the highest load of a ring in the current solution.
U ← E ; r ← 0 ;
while U ≠ ∅ do
    Heuristically find a clique C ⊂ U such that weight(C) ≤ B ;
    /* Search for a ring whose load plus the weight of the clique does not
       exceed B and is as large as possible */
    j ← arg min { B − weight(E_i) − weight(C) : i ∈ {1, . . . , k}, B − weight(E_i) − weight(C) ≥ 0 } ;
    if j = null then
        r ← r + 1 ; j ← r ;
    end
    E_j ← E_j ∪ C ;
    U ← U \ C ;
end
Algorithm 4: Clique-BF
z_1 = z_0 + max{0, BN − B}

z_2 = z_1 + α · RingLoad(r) if the last move has created a new ring r, and z_2 = z_1 otherwise

z_3 = z_0 · B + BN

z_4: a variable objective function which, in particular, takes the value z_4a = z_0 · B + BN (= z_3) for some transitions of the search status

The first function z_1 minimizes the basic function z_0. When BN > B, it also penalizes the unfeasible solutions, by taking into account only one ring, the one with the highest overload. In addition to the penalty for the unfeasible solutions, z_2 penalizes the moves that increase the number of rings. Function z_3 encourages solutions with small z_0, while among all the solutions with the same value of z_0 it prefers the ones in which the ring loads are balanced. The last objective function z_4 is an adaptive technique that modifies the evaluation according to the status of the search. It is a variable objective function having different expressions for different transitions from the current status to the next one.
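To make the role of BN concrete, here is a small Python sketch evaluating z_1 and z_3 for a SRAP node partition (illustrative only: partition is a list of sets of nodes, d a symmetric dict of dicts of demands, B the capacity; whether the federal ring counts towards BN is an assumption of this sketch).

```python
def z1_z3_srap(partition, d, B):
    """Compute z0, BN, z1 and z3 for a SRAP solution.

    z0 = number of rings, BN = highest ring load (federal ring included here),
    z1 = z0 + max(0, BN - B), z3 = z0 * B + BN.
    """
    nodes = set().union(*partition)
    loads = [sum(d[u][v] for u in ring for v in nodes if v != u) for ring in partition]
    loads.append(sum(d[u][v]                                  # federal ring load
                     for i, Vi in enumerate(partition)
                     for Vj in partition[i + 1:]
                     for u in Vi for v in Vj))
    z0, BN = len(partition), max(loads)
    return z0, BN, z0 + max(0, BN - B), z0 * B + BN
```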
Our work
In this section we present the different tools needed to implement the Constraints Based Local Search algorithms for SRAP and IDP. First we introduce the starting solution, then the neighborhoods and the objective functions. Finally we present the different local search algorithms.
Starting solution
Most of the time, local search starts from a random initial solution. However, we have tested other possibilities and two other options proved to be more efficient.
The best initialization method assigns all the items, nodes for SRAP or edges for IDP, to the same partition. This solution is certainly unfeasible, as all the traffic is on only one ring. It biases the search towards solutions with a minimum value for the cost and a very bad value for the capacity constraints' violations. Astonishingly, this is the one that gave us the best results on large instances.
We had good confidence in another one which first computes the lower bound k lb (described in section 2) and randomly assigns all the items to exactly k lb partitions. The idea was to let the Local Search reduce the number of violations. This starting solution was good on small instances and not so good on large ones. It was the same with a random solution, which corresponds, for these problems, to a solution where all the items are randomly assigned to a partition.
Neighborhoods
In a generic partitioning problem there are usually two basic neighborhoods. From a given solution, we can move an object from a subset to another subset or swap two objects assigned to two different subsets. For SRAP a neighboring solution is produced by moving a node from a ring to another (including a new one) or by swapping two nodes assigned to two different rings. The same kind of neighborhood can be used for IDP: moving an edge from a ring to another or swapping two edges.
In some cases it is more efficient to restrain the neighborhood to the feasible space. We have tested different variants of the basic neighborhood applying this idea, by choosing the worst partition (wrt. the capacity constraint) and even by assigning it to the partition with the lowest load. Anyway it appears to be less efficient than the basic one. As will be seen later it seems that on these problems it is necessary to keep the search as broad as possible.
Objective function
We have compared the four objective functions described in [2] (see Section 2) to a new one we have defined: z 5 .
z_5 = z_0 + ∑_{p ∈ partitions} violations(p)

where partitions is the set of all the rings (in the case of the SRAP problem the federal ring is also included), and

violations(p) = load(p) − B if the load of p exceeds B, and violations(p) = 0 otherwise.
This objective function minimizes the basic function z_0 and penalizes the unfeasible solutions, but unlike the previous objectives, this penalty is based on all the constraints. We consider that every constraint is violated by a certain amount (its current load minus B). By summing all the violations of the current solution, we obtain the total violation over all the constraints, and we can say precisely how far we are from a feasible solution. If the current solution is feasible, ∑_{p ∈ partitions} violations(p) = 0.
This objective has also the nice property that it is merely local, depending only on the current solution and not on the other moves. Notice that a feasible solution with 4 rings will be preferred to an unfeasible solution with 3 rings, as z 0 is much smaller than the load of a ring.
Local Search
We have proposed a new algorithm called DMN2 which proved to be efficient on both problems. It is a variant of the Diversification by Multiple Neighborhoods (DMN) algorithm proposed in [2]. DMN is based on Tabu Search, and adds a mechanism to perform diversification when the search is going round in circles without improving the objective (even though it is not at a local minimum). This replaces the classical random restart steps. We refine this particular mechanism by proposing several ways of escaping such areas.
More precisely, on our problems, after a series of consecutive non-improving iterations, the DMN algorithm empties a partition by moving all its items to another partition, disregarding the capacity constraint and locally minimizing the objective function. There is a particular case for our function z_5, because it integrates the capacity constraints: the "z_5" version of DMN we have implemented moves the items to another partition minimizing z_5. The results in [2] show a general trend on SRAP and IDP: the more diversification is performed, the better the results. Following this idea, we propose different ways of performing the DMN step, which gives our algorithm DMN2. In DMN2, when the search needs to be diversified, it randomly chooses among three diversification methods (d_1, d_2, d_3). The first method, d_1, is the diversification used in DMN. The second one, d_2, generates a random solution, in the same way as a classic random restart. Finally, d_3 randomly chooses a number m in the range [1, k], where k is the number of rings, and applies m random moves.
In the end, our general algorithm starts with a solution where all the items are in the same partition. It then applies one of the local search algorithms described before. If the solution returned by the local search is feasible but has an objective value greater than the lower bound k_lb, it empties one partition by randomly assigning all its items to another one, and runs the local search once again, until it finds a solution with an objective value equal to k_lb or until the time limit is exceeded.
Results
The objective functions and the metaheuristics, respectively described in Section 3.3 and Section 3.4, have been coded in COMET and tested on an Intel-based, dual-core, dual-processor Dell PowerEdge 1855 blade server running under Linux. The instances used are taken from the literature.
Benchmark
To test the algorithms, we used two sets of instances. The first one was introduced in [6]. Its authors generated 80 geometric instances, based on the fact that customers tend to communicate more with their close neighbors, and 80 random instances. Each of these subsets contains 40 low-demand instances, with a ring capacity B = 155 Mbs, and 40 high-demand instances, where B = 622 Mbs. The traffic demand between two customers, u and v, is determined by a discrete uniform random variable corresponding to the number of T1 lines required for the anticipated volume of traffic between u and v. A T1 line has an approximate capacity of 1.5 Mbs. The number of T1 lines is randomly picked in the interval [3,7] for low-demand cases, while it is selected from the range [11,17] for high-demand cases. The generated graphs have |V| ∈ {15, 25, 30, 50}. Of these 160 instances, generated by O. Goldschmidt, A. Laugier and E. Olinick in 2003, 42 have been proven to be infeasible by R. Aringhieri and M. Dell'Amico using CPLEX 8.0 (see [2]).
The second set of instances was presented in [8]. Its authors generated 40 instances with a ring capacity B = 48 T1 lines, where the number of T1 lines required for the traffic between two customers is chosen in the interval [1,30]. The considered graphs have |V| ∈ {15, 20, 25} and |E| ∈ {30, 35}. Most of the instances in this set are infeasible.
Note that all the instances are feasible for the IDP problem, since we could always assign each demand to a different partition.
Computational Results
We now describe the results obtained for SRAP and IDP on the above two benchmark sets by the Basic Tabu Search and the other local search algorithms described above. For each algorithm we consider the five objective functions of Section 3.3, but for the SS we use the three functions described in Section 3.4.
We gave a time limit of 5 minutes to each run of an algorithm; however, we observed that the average time to find the best solution is less than 1 minute. Obviously, the algorithm terminates if the current best solution found is equal to the lower bound k lb . In case the lower bound is not reached, we define as a high-quality solution a solution for which the value of the objective equals k lb + 1. Recall that objective functions z 2 and z 3 cannot be applied with the Scatter Search.

Figure 5 shows, for each algorithm, only the number of optimal solutions found with the objective function z 5 . With the other objectives, the number of optimal solutions found is zero, which is why they do not appear in the diagram; the other objectives did, however, find good solutions. Our conclusion is that these functions perhaps do not discriminate enough between different solutions. For this problem, we can see that the eXploring Tabu Search does not give good results. This can be due to a too early "backtracking": after a fixed number of consecutive non-improving iterations the search goes back to a previous configuration and applies the second best move. In the case of the IDP problem, it can take many more iterations to improve the value of the objective function than for the SRAP problem. Indeed, the value of the objective function depends on the number of partitions to which a customer belongs, while an iteration moves only one edge; reducing the value by just one may therefore require moving several edges.

Figure 6 shows, for each algorithm and each objective function, the number of instances for which the search has found an optimal solution, i.e. a solution with k lb partitions (in dark gray on the diagram); the number of those for which the best feasible solution found has k lb + 1 partitions (in gray); and, in light gray, the number of instances for which it has found a feasible solution with more than k lb + 1 partitions. From the objective-function perspective, we can see that z 4 , supposed to be the best performing one, is not that good in the COMET implementation. However, the one we added, z 5 , is always better than the other ones.
Surprisingly, the Basic Tabu Search is as good as the other search algorithms with all the objective functions. Regarding the local search algorithms, we can also see that the second version of the Diversification by Multiple Neighborhoods (DMN2) is much better than the first one with the objectives z 3 and z 4 .
For the details of our results see the report [11].
Conclusion
The purpose of this work was to reproduce with COMET the results obtained for the SONET Design Problems by R. Aringhieri and M. Dell'Amico in 2005 in ANSI C (see [2] for details).
We have implemented in COMET the algorithms and the objective functions described in this paper. We also found it relevant to add a variant of one of their local search algorithms and a new objective function. Unfortunately, we cannot compare our results exactly to theirs because the set of 230 instances they generated is not available. However, for the IDP problem, we obtained better results on 15 instances out of the 160 compared, and similar results on the other instances. We did not find their results for the SRAP problem. Still, for the SRAP problem, compared to the results obtained by O. Goldschmidt, A. Laugier and E. Olinick in 2003 [6], we obtained better results: we have more instances for which the algorithm reaches the lower bound and fewer infeasible instances. It would be interesting to have all the instances and results in order to compare our results fully.
In the end we can make two main observations. Firstly, for these two problems, the more an algorithm uses diversification, the better it performs. We actually tried different intensification methods for the local search algorithms, but none of them improved the results; worse, they gave us quite poor results.
Secondly, based on our results, we can say that the objective function we implemented in COMET finds more good solutions than the other ones. It is a constraint-based objective function taking into account the violation of every constraint. Hence it has the advantage of being both more generic and more precise than the dedicated functions, while giving better results.
| 6,079 |
0909.4437
|
2953180797
|
The stable marriage problem is a well-known problem of matching men to women so that no man and woman who are not married to each other both prefer each other. Such a problem has a wide variety of practical applications ranging from matching resident doctors to hospitals to matching students to schools. A well-known algorithm to solve this problem is the Gale-Shapley algorithm, which runs in polynomial time. It has been proven that stable marriage procedures can always be manipulated. Whilst the Gale-Shapley algorithm is computationally easy to manipulate, we prove that there exist stable marriage procedures which are NP-hard to manipulate. We also consider the relationship between voting theory and stable marriage procedures, showing that voting rules which are NP-hard to manipulate can be used to define stable marriage procedures which are themselves NP-hard to manipulate. Finally, we consider the issue that stable marriage procedures like Gale-Shapley favour one gender over the other, and we show how to use voting rules to make any stable marriage procedure gender neutral.
|
In @cite_7 fairness of a matching procedure is defined in terms of four axioms, two of which are gender neutrality and peer indifference. Then, the existence of matching procedures satisfying all or a subset of the axioms is considered in terms of restrictions on preference orderings. Here, instead, we propose a preprocessing step that allows us to obtain a gender neutral matching procedure from any matching procedure without imposing any restrictions on the preferences in the input. A detailed description of results about manipulation of stable marriage procedures can be found in @cite_6 . In particular, several early results @cite_4 @cite_20 @cite_3 @cite_11 indicated the futility of men lying, so later work focused mostly on strategies in which the women lie. Gale and Sotomayor @cite_16 presented the manipulation strategy in which women truncate their preference lists. Roth and Vande Vate @cite_2 discussed strategic issues when the stable matching is chosen at random, proposed a truncation strategy and showed that every stable matching can be achieved as an equilibrium in truncation strategies. We instead do not allow the elimination of men from a woman's preference ordering, but permit reordering of the preference lists.
|
{
"abstract": [
"Using a lemma of J.S. Hwang we obtain a generalization of a theorem of Dubins and Freedman. It is shown that the core of the matching game is non-manipulable in a suitable sense by coalitions consisting of both men and women. A further strong stability property of the core is derived.",
"We analyze the Gale-Shapley matching problem within the context of Rawlsian justice. Defining a fair matching algorithm by a set of 4 axioms (Gender Indifference, Peer Indifference, Maximin Optimality, and Stability), we show that not all preference profiles admit a fair matching algorithm, the reason being that even this set of minimal axioms is too strong in a sense. Because of conflict between Stability and Maximin Optimality, even the algorithm which generates the mutual agreement match, paradoxically, has no chance to be fair.",
"",
"This paper addresses strategies for the stable marriage problem. For the Gale-Shapley algorithm with men proposing, a classical theorem states that it is impossible for every cheating man to get a better partner than the one he gets if everyone is truthful. We study how to circumvent this theorem and incite men to cheat. First we devise coalitions in which a nonempty subset of the liars get better partners and no man is worse off than before. This strategy is limited in that not everyone in the coalition has the incentive to falsify his list. In an attempt to rectify this situation we introduce the element of randomness, but the theorem shows surprising robustness: it is impossible that every liar has a chance to improve the rank of his partner while no one gets hurt. To overcome the problem that some men lack the motivation to lie, we exhibit another randomized lying strategy in which every liar can expect to get a better partner on average, though with a chance of getting a worse one. Finally, we consider a variant scenario: instead of using the Gale-Shapley algorithm, suppose the stable matching is chosen at random. We present a modified form of the coalition strategy ensuring that every man in the coalition has a new probability distribution over partners which majorizes the original one.",
"This paper considers the incentives confronting agents who face the prospect of being matched by some sort of random stable mechanism, such as that discussed in Roth and Vande Vate (1990). A one period game is studied in which all stable matchings can be achieved as equilibria; in a natural class of undominated strategies, and in which certain unstable matchings can also arise in this way. A multi-period extension of this game is then considered in which subgame perfect equilibria must result in stable matches. These results suggest avenues to explore markets in which matching is organized in a decentralized way.",
"",
"SummaryGale and Shapley have an algorithm for assigning students to universities which gives each student the best university available in a stable system of assignments. The object here is to prove that students cannot improve their fate by lying about their preferences. Indeed, no coalition of students can simultaneously improve the lot of all its members if those outside the coalition state their true preferences.",
"This paper considers some game-theoretic aspects of matching problems and procedures, of the sort which involve matching the members of one group of agents with one or more members of a second, disjoint group of agents, ail of whom have preferences over the possible resulting matches. The main focus of this paper is on determining the extent to which matching procedures can be designed which give agents the incentive to honestly reveal their preferences, and which produce stable matches.Two principal results are demonstrated. The first is that no matching procedure exists which always yields a stable outcome and gives players the incentive to reveal their true preferences, even though procedures exist which accomplish either of these goals separately. The second result is that matching procedures do exist, however, which always yield a stable outcome and which always give all the agents in one of the two disjoint sets of agents the incentive to reveal their true preferences."
],
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_3",
"@cite_6",
"@cite_2",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"1992705916",
"2053079891",
"",
"1605637365",
"1974769140",
"2037790669",
"2033898593",
"2071667058"
]
}
|
Manipulation and gender neutrality in stable marriage procedures
|
The stable marriage problem (SMP) [12] is a well-known problem of matching the elements of two sets. Given n men and n women, where each person expresses a strict ordering over the members of the opposite sex, the problem is to match the men to the women so that there are no two people of opposite sex who would both rather be matched with each other than their current partners. If there are no such people, all the marriages are said to be stable. Gale and Shapley [8] proved that it is always possible to solve the SMP and make all marriages stable, and provided a quadratic time algorithm which can be used to find one of two particular but extreme stable marriages, the so-called male optimal or female optimal solution. The Gale-Shapley algorithm has been used in many real-life applications, such as in systems for matching hospitals to resident doctors [21] and the assignment of primary school students in Singapore to secondary schools [25]. Variants of the stable marriage problem turn up in many domains. For example, the US Navy has a web-based multi-agent system for assigning sailors to ships [17].
One important issue is whether agents have an incentive to tell the truth or can manipulate the result by misreporting their preferences. Unfortunately, Roth [20] has proved that all stable marriage procedures can be manipulated. He demonstrated a stable marriage problem with 3 men and 3 women which can be manipulated whatever stable marriage procedure we use. This result is in some sense analogous to the classical Gibbard Satterthwaite [11,24] theorem for voting theory, which states that all voting procedures are manipulable under modest assumptions provided we have 3 or more voters. For voting theory, Bartholdi, Tovey and Trick [3] proposed that computational complexity might be an escape: whilst manipulation is always possible, there are voting rules where it is NP-hard to find a manipulation.
We might hope that computational complexity might also be a barrier to manipulate stable marriage procedures. Unfortunately, the Gale-Shapley algorithm is computationally easy to manipulate [25]. We identify here stable marriage procedures that are NP-hard to manipulate. This can be considered a first step to understanding if computational complexity might be a barrier to manipulations. Many questions remain to be answered. For example, the preferences met in practice may be highly correlated. Men may have similar preferences for many of the women. Are such profiles computationally difficult to manipulate? As a second example, it has been recently recognised (see, for example, [4,19]) that worst-case results may represent an insufficient barrier against manipulation since they may only apply to problems that are rare. Are there stable marriage procedures which are difficult to manipulate on average?
Another drawback of many stable marriage procedures such as the one proposed by Gale-Shapley is their bias towards one of the two genders. The stable matching returned by the Gale-Shapley algorithm is either male optimal (and the best possible for every man) but female pessimal (that is, the worst possible for every woman), or female optimal but male pessimal. It is often desirable to use stable marriage procedures that are gender neutral [18]. Such procedures return a stable matching that is not affected by swapping the men with the women. The goal of this paper is to study both the complexity of manipulation and gender neutrality in stable marriage procedures, and to design gender neutral procedures that are difficult to manipulate.
It is known that the Gale-Shapley algorithm is computationally easy to manipulate [25]. Our first contribution is to prove that if the male and female preferences have a certain form, it is computationally easy to manipulate any stable marriage procedure. We provide a universal polynomial time manipulation scheme that, under certain conditions on the preferences, guarantees that the manipulator marries his optimal stable partner irrespective of the stable marriage procedure used. On the other hand, our second contribution is to prove that, when the preferences of the men and women are unrestricted, there exist stable marriage procedures which are NP-hard to manipulate.
Our third contribution is to show that any stable marriage procedure can be made gender neutral by means of a simple pre-processing step which may swap the men with the women. This swap can, for instance, be decided by a voting rule. However, this may give a gender neutral stable matching procedure which is easy to manipulate.
Our final contribution is a stable matching procedure which is both gender neutral and NP-hard to manipulate. This procedure uses a voting rule that, considering the male and female preferences, helps to choose between stable matchings. In fact, it picks the stable matching that is most preferred by the most popular men and women. We prove that, if the voting rule used is Single Transferable Vote (STV) [1], which is NP-hard to manipulate, then the resulting stable matching procedure is both gender neutral and NP-hard to manipulate. We conjecture that other voting rules which are NP-hard to manipulate will give rise to stable matching procedures which are also gender neutral and NP-hard to manipulate. Thus, our approach shows how combining voting rules and stable matching procedures can be beneficial in two ways: by using preferences to discriminate among stable matchings and by providing a possible computational shield against manipulation.
The Gale-Shapley algorithm
The Gale-Shapley algorithm [8] is a well-known algorithm to solve the SMP problem. It involves a number of rounds where each un-engaged man "proposes" to his most-preferred woman to whom he has not yet proposed. Each woman then considers all her suitors and tells the one she most prefers "maybe" and all the rest of them "No". She is then provisionally "engaged". In each subsequent round, each unengaged man proposes to one woman to whom he has not yet proposed (the woman may or may not already be engaged), and the women once again reply with one "maybe" and reject the rest. This may mean that already-engaged women can "trade up", and already-engaged men can be "jilted".
This algorithm needs a number of steps that is quadratic in n, and it guarantees that:
• If the number of men and women coincide, and all participants express a linear order over all the members of the other group, everyone gets married. Once a woman becomes engaged, she is always engaged to someone. So, at the end, there cannot be a man and a woman both un-engaged, as he must have proposed to her at some point (since a man will eventually propose to every woman, if necessary) and, being un-engaged, she would have to have said yes.
• The marriages are stable. Let Alice be a woman and Bob be a man. Suppose they are each married, but not to each other. Upon completion of the algorithm, it is not possible for both Alice and Bob to prefer each other over their current partners. If Bob prefers Alice to his current partner, he must have proposed to Alice before he proposed to his current partner. If Alice accepted his proposal, yet is not married to him at the end, she must have dumped him for someone she likes more, and therefore doesn't like Bob more than her current partner. If Alice rejected his proposal, she was already with someone she liked more than Bob.
Note that the pairing generated by the Gale-Shapley algorithm is male optimal, i.e., every man is paired with his highest ranked feasible partner, and female pessimal, i.e., each woman is paired with her lowest ranked feasible partner. It would be the reverse, of course, if the roles of male and female participants in the algorithm were interchanged. Given n men and n women, a profile is a sequence of 2n strict total orders, n over the men and n over the women. In a profile, every woman ranks all the men, and every man ranks all the women.

Example 1. Assume n = 3. Let W = {w1, w2, w3} and M = {m1, m2, m3} be respectively the sets of women and men. The following sequence of strict total orders defines a profile:
• m1 : w1 > w2 > w3 (i.e., the man m1 prefers the woman w1 to w2 to w3),
• m2 : w2 > w1 > w3,
• m3 : w3 > w2 > w1,
• w1 : m1 > m2 > m3,
• w2 : m3 > m1 > m2,
• w3 : m2 > m1 > m3
For this profile, the Gale-Shapley algorithm returns the male optimal solution {(m1, w1), (m2, w2), (m3, w3)}. On the other hand, the female optimal solution is {(w1, m1), (w2, m3), (w3, m2)}.
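To make the rounds concrete, here is a minimal Python sketch of the men-proposing algorithm described above; the dictionary-based representation and the function name are our own choices, not taken from the paper. On the profile of Example 1 it returns the male optimal solution.

```python
def gale_shapley(men_prefs, women_prefs):
    """Men-proposing Gale-Shapley; returns a dict mapping each man to his partner.
    men_prefs[m] lists m's choices of women (most preferred first);
    women_prefs[w] lists w's choices of men."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}    # index of the next woman each man proposes to
    engaged_to = {}                            # woman -> man
    free_men = list(men_prefs)
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                  # w says "maybe" to her first suitor
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free_men.append(engaged_to[w])     # w trades up; her former suitor is jilted
            engaged_to[w] = m
        else:
            free_men.append(m)                 # w says "No"; m stays free
    return {m: w for w, m in engaged_to.items()}

# Example 1: every man is married to his first choice (the male optimal solution).
men = {"m1": ["w1", "w2", "w3"], "m2": ["w2", "w1", "w3"], "m3": ["w3", "w2", "w1"]}
women = {"w1": ["m1", "m2", "m3"], "w2": ["m3", "m1", "m2"], "w3": ["m2", "m1", "m3"]}
assert gale_shapley(men, women) == {"m1": "w1", "m2": "w2", "m3": "w3"}
```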
Gender neutrality and non-manipulability
A desirable property of a stable marriage procedure is gender neutrality. A stable marriage procedure is gender neutral [18] if and only if when we swap the men with the women, we get the same result. A related property, called peer indifference [18], holds if the result is not affected by the order in which the members of the same sex are considered. The Gale-Shapley procedure is peer indifferent but it is not gender neutral. In fact, if we swap men and women in Example 1, we obtain the female optimal solution rather than the male optimal one.
Another useful property of a stable marriage procedure is its resistance to manipulation. In fact, it would be desirable that lying did not lead to better results for the liar. A stable marriage procedure is manipulable if there is a way for one person to mis-report their preferences and obtain a result which is better than the one they would have obtained with their true preferences.
Roth [20] has proven that stable marriage procedures can always be manipulated, i.e., that no stable marriage procedure exists which always yields a stable outcome and gives agents the incentive to reveal their true preferences. He demonstrated a 3 men, 3 women profile which can be manipulated whatever stable marriage procedure we use. A similar result in a different context is the one by Gibbard and Satterthwaite [11,24], that proves that all voting procedures [1] are manipulable under some modest assumptions. In this context, Bartholdi, Tovey and Trick [3] proposed that computational complexity might be an escape: whilst manipulation is always possible, there are rules like Single Transferable Vote (STV) where it is NP-hard to find a manipulation [2]. This resistance to manipulation arises from the difficulty of inverting the voting rule and does not depend on other assumptions like the difficulty of discovering the preferences of the other voters. In this paper, we study whether computational complexity may also be an escape from the manipulability of stable marriage procedures. Our results are only initial steps to a more complete understanding of the computational complexity of manipulating stable matching procedures. As mentioned before, NP-hardness results only address the worst case and may not apply to preferences met in practice.
MANIPULATING STABLE MARRIAGE PROCEDURES
A manipulation attempt by a participant p is the misreporting of p's preferences. A manipulation attempt is unsuccessful if the resulting marriage for p is strictly worse than the marriage obtained telling the truth. Otherwise, it is said to be successful. A stable marriage procedure is manipulable if there is a profile with a successful manipulation attempt from a participant.
The Gale-Shapley procedure, which depending on how it is defined returns either the male optimal or the female optimal solutions, is computationally easy to manipulate [25]. However, besides these two extreme solutions, there may be many other stable matchings. Several procedures have been defined to return some of these other stable matchings [13]. Our first contribution is to show that, under certain conditions on the shape of the male and female preferences, any stable marriage procedure is computationally easy to manipulate.
Consider a profile p and a woman w in such a profile. Let m be the male optimal partner for w in p, and n be the female optimal partner for w in p. Profile p is said to be universally manipulable by w if the following conditions hold:
• in the men-proposing Gale-Shapley algorithm, w receives more than one proposal;
• there exists a woman v such that n is the male optimal partner for v in p;
• v prefers m to n;
• n's preferences are . . . > v > w > . . .;
• m's preferences are . . . > w > v > . . . .

Theorem 1. Consider any stable marriage procedure and any woman w. There is a polynomial manipulation scheme that, for any profile which is universally manipulable by w, produces the female optimal partner for w. Otherwise, it produces the same partner.
Proof. Consider the manipulation attempt that moves the male optimal partner m of w to the lower end of w's preference ordering, obtaining the new profile p ′ . Consider now the behaviour of the men-proposing Gale-Shapley algorithm on p and p ′ . Two cases are possible for p: w is proposed to only by man m, or it is proposed to also by some other man o. In this second case, it must be that w prefers m to o, since m is the male optimal partner for w.
If w is proposed to by m and also by some o, then, when w compares the two proposals, in p she will decide for m, while in p ′ she will decide for o. At this point, in p ′ , m will have to propose to the next best woman for him, that is, v, and she will accept because of the assumptions on her preference ordering. This means that n (who was married to v in p) now in p ′ has to propose to his next best choice, that is, w, who will accept, since w prefers n to m. So, in p ′ , the male optimal partner for w, as well as her female optimal partner, is n. This means that there is only one stable partner for w in p ′ . Therefore, any stable marriage procedure must return n as the partner for w.
Thus, if woman w wants to manipulate a stable marriage procedure, she can check if the profile is universally manipulable by her. This involves simulating the Gale-Shapley algorithm to see whether she is proposed to by m only or also by some other man. In the former case, she will not do the manipulation. Otherwise, she will move m to the far right of her preference list and she will get her female optimal partner, whatever stable marriage procedure is used. This procedure is polynomial since the Gale-Shapley algorithm takes quadratic time to run. □

Example 2. In a setting with 3 men and 3 women, consider the profile {m1 : w1 > w2 > w3; m2 : w2 > w1 > w3; m3 : w1 > w2 > w3; } {w1 : m2 > m1 > m3; w2 : m1 > m2 > m3; w3 : m1 > m2 > m3; } In this profile, the male optimal solution is {(m1, w1), (m2, w2), (m3, w3)}. This profile is universally manipulable by w1. In fact, woman w1 can successfully manipulate by moving m1 after m3, obtaining the marriage (m2, w1) and thus getting her female optimal partner. Notice that this holds no matter what stable marriage procedure is used. This same profile is not universally manipulable by w2 or w3, since they receive just one proposal in the men-proposing Gale-Shapley algorithm. In fact, woman w2 cannot manipulate: trying to move m2 after m3 gets a worse result. Also, woman w3 cannot manipulate since her male optimal partner is her least preferred man.
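As an illustration, the following Python sketch (reusing the gale_shapley function sketched earlier) applies the list manipulation of Theorem 1 to Example 2; it omits the check that the profile is universally manipulable, and the names and representation are our own assumptions.

```python
def universal_manipulation(w, men_prefs, women_prefs):
    """Sketch of the manipulation of Theorem 1: move w's male optimal partner
    to the far right of her reported preference list. (A full implementation
    would first check, by instrumenting the Gale-Shapley run, that w receives
    more than one proposal, i.e. that the profile is universally manipulable.)"""
    matching = gale_shapley(men_prefs, women_prefs)
    m = next(man for man, woman in matching.items() if woman == w)  # w's male optimal partner
    reported = [x for x in women_prefs[w] if x != m] + [m]          # m moved to the far right
    return {**women_prefs, w: reported}

# Example 2: w1 reports m2 > m3 > m1 and now marries m2, her female optimal partner.
men = {"m1": ["w1", "w2", "w3"], "m2": ["w2", "w1", "w3"], "m3": ["w1", "w2", "w3"]}
women = {"w1": ["m2", "m1", "m3"], "w2": ["m1", "m2", "m3"], "w3": ["m1", "m2", "m3"]}
lied = universal_manipulation("w1", men, women)
assert lied["w1"] == ["m2", "m3", "m1"]
assert gale_shapley(men, lied)["m2"] == "w1"
```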
Restricting to universally manipulable profiles makes manipulation computationally easy. On the other hand, if we allow all possible profiles, there are stable marriage procedures that are NP-hard to manipulate. The intuition is simple. We construct a stable marriage procedure that is computationally easy to compute but NP-hard to invert.
To manipulate, a man or a woman will essentially need to be able to invert the procedure to choose between the exponential number of possible preference orderings. Hence, the constructed stable marriage procedure will be NP-hard to manipulate. The stable marriage procedure used in this proof is somewhat "artificial". However, we will later propose a stable marriage procedure which is more natural while remaining NP-hard to manipulate. This procedure selects the stable matching that is most preferred by the most popular men and women. It is an interesting open question to devise other stable marriage procedures which are "natural" and computationally difficult to manipulate.
Theorem 2. There exist stable marriage procedures for which deciding the existence of a successful manipulation is NP-complete.
Proof. We construct a stable marriage procedure which chooses between the male and female optimal solution based on whether the profile encodes an NP-complete problem and its polynomial witness. The manipulator's preferences define the witness. The other people's preferences define the NP-complete problem. Hence, the manipulator needs to be able to solve an NP-complete problem to be able to manipulate successfully. Deciding if there is a successful manipulation for this stable marriage procedure is clearly in NP since we can compute male and female optimal solutions in polynomial time, and we can check a witness to an NP-complete problem also in polynomial time.
Our stable marriage procedure is defined to work on n + 3 men (m1, m2 and p1 to pn+1) and n + 3 women (w1, w2 and v1 to vn+1). It returns the female optimal solution if the preferences of woman w1 encode a Hamiltonian path in a directed graph encoded by the other women's preferences, otherwise it returns the male optimal solution. The 3rd to n + 2th preferences of woman w1 encode a possible Hamiltonian path in an n node graph. In particular, if the 2 + ith man in the preference ordering of woman w1 for i > 0 is man pj, then the path goes from vertex i to vertex j. The preferences of the women vi for i ≤ n encode the graph in which we find this Hamiltonian path. In particular, if man pj for j < n + 1 and j ≠ i appears before man pn+1 in the preference list of woman vi, then there is a directed edge in the graph from i to j. It should be noticed that any graph can be produced using this construction.
Given a graph which is not complete in which we wish to find a Hamiltonian path, we now build a special profile. Woman w1 will be able to manipulate this profile successfully iff the graph contains a Hamiltonian path. In the profile, woman w1 most prefers to marry man m1 and then man m2. Consider any pair of vertices (i, j) not in the graph. Woman w1 puts man pj at position 2 + i in her preference order. She puts all other pj's in any arbitrary order. This construction will guarantee that the preferences of w1 do not represent a Hamiltonian path. Woman w2 most prefers to marry man m2. Woman vi most prefers to marry man pi, and has preferences for the other men pj according to the edges from vertex i. Man m1 most prefers woman w2. Man m2 most prefers woman w1. Finally, man pi most prefers woman vi. All other unspecified preferences can be chosen in any way. By construction, all first choices are different. Hence, the male optimal solution has the men married to their first choice, whilst the female optimal solution has the women married to their first choice.
The male optimal solution has woman w1 married to man m2. The female optimal solution has woman w1 married to man m1. By construction, the preferences of woman w1 do not represent a Hamiltonian path. Hence our stable matching procedure returns the male optimal solution: woman w1 married to man m2. The only successful manipulation for woman w1 is then to marry her most preferred choice, man m1. As all first choices are different, woman w1 cannot successfully manipulate the male or female optimal solution. Therefore, she must manipulate her preferences so that she spells out a Hamiltonian path in her preference ordering, and our stable marriage procedure therefore returns the female optimal solution. This means she can successfully manipulate iff there is a Hamiltonian path. Hence, deciding if there is a successful manipulation is NP-complete. □
Note that we can modify the proof by introducing O(n^2) men so that the graph is encoded in the tail of the preferences of woman w2. This means that it remains NP-hard to manipulate a stable marriage procedure even if we collude with all but one of the women. It also means that it is NP-hard to manipulate a stable marriage procedure when the problem is imbalanced and there are just 2 women but an arbitrary number of men. Notice that this procedure is not peer indifferent, since it gives special roles to different men and women. However, it is possible to make it peer indifferent, so that it computes the same result if we rename the men and women. For instance, we just take the men's preferences and compute from them a total ordering of the women (e.g. by running an election with these preferences). Similarly, we take the women's preferences and compute from them a total ordering of the men. We can then use these orderings to assign indices to men and women. Notice also that this procedure is not gender neutral. If we swap men and women, we may get a different result. We can, however, use the simple procedure proposed in the next section to make it gender neutral.
GENDER NEUTRALITY
As mentioned before, a weakness of many stable marriage procedures like the Gale-Shapley procedure and the procedure presented in the previous section, is that they are not gender neutral. They may greatly favour one sex over the other. We now present a simple and universal technique for taking any stable marriage procedure and making it gender neutral. We will assume that the men and the women are named from 1 to n. We will also say that the men's preferences are isomorphic to the women's preferences iff there is a bijection between the men and women that preserves both the men's and women's preferences. In this case, it is easy to see that there is only one stable matching.
We can convert any stable marriage procedure into one that is gender neutral by adding a pre-round in which we choose if we swap the men with the women. The idea of using pre-rounds for enforcing certain properties is not new and has been used for example in [5] to make manipulation of voting rules NP-hard. The goal of our pre-round is, instead, to ensure gender-neutrality. More precisely, for each gender we compute its signature: a vector of numbers constructed by concatenating together each of the individual preference lists. Among all such vectors, the signature is the lexicographically smallest vector under reordering of the members of the chosen gender and renumbering of the members of the other gender.
Example 3. Consider the following profile with 3 men and 3 women. {m1 : w2 > w1 > w3; m2 : w3 > w2 > w1; m3 : w2 > w1 > w3} {w1 : m1 > m2 > m3; w2 : m3 > m1 > m2; w3 : m2 > m1 > m3}. The signature of the men is 123123312: each group of three digits represents the preference ordering of a man; men m2 and m3 and women w1 and w2 have been swapped with each other to obtain the lexicographically smallest vector. The signature of the women is instead 123213312.
Note that this vector can be computed in O(n 2 ) time. For each man, we put his preference list first, then reorder the women so that this man's preference list reads 1 to n. Finally, we concatenate the other men's preference lists in lexicographical order. We define the signature as the smallest such vector.
Before applying any stable marriage procedure, we propose to pre-process the profile according to the following rule, that we will call gn-rule (for gender neutral): If the male signature is smaller than the female signature, then we swap the men with the women before calling the stable marriage procedure. On the other hand, if the male signature is equal or greater than the female signature, we will not swap the men with the women before calling the stable marriage procedure. In the example above, the male signature is smaller than the female signature, thus men and women must be swapped before using the stable marriage procedure.
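The signature computation and the gn-rule can be sketched in Python as follows; the encoding of preferences as 1-based index lists and the function names are our own assumptions. The sketch reproduces the signatures of Example 3.

```python
from itertools import chain

def signature(prefs):
    """Sketch of the signature of one gender: `prefs` is a list of preference
    lists, one per member, each ranking the other gender by index (1-based).
    For each member, renumber the other gender so that this member's list
    reads 1..n, sort the remaining lists lexicographically, concatenate, and
    keep the smallest resulting vector."""
    best = None
    for first in prefs:
        renumber = {old: new for new, old in enumerate(first, start=1)}
        others = sorted(tuple(renumber[x] for x in p) for p in prefs if p is not first)
        vec = tuple(chain(range(1, len(first) + 1), *others))
        if best is None or vec < best:
            best = vec
    return best

def gn_rule(men_prefs, women_prefs):
    """Swap the genders iff the male signature is smaller than the female one."""
    if signature(men_prefs) < signature(women_prefs):
        return women_prefs, men_prefs   # swapped: the women now play the "men" role
    return men_prefs, women_prefs

# Example 3: the male signature 123123312 is smaller than the female one 123213312,
# so the gn-rule swaps the two genders before calling the stable marriage procedure.
men = [[2, 1, 3], [3, 2, 1], [2, 1, 3]]
women = [[1, 2, 3], [3, 1, 2], [2, 1, 3]]
assert signature(men) == (1, 2, 3, 1, 2, 3, 3, 1, 2)
assert signature(women) == (1, 2, 3, 2, 1, 3, 3, 1, 2)
assert gn_rule(men, women) == (women, men)
```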
Theorem 3. Consider any stable marriage procedure, say µ. Given a profile p, consider the new procedure µ ′ obtained by applying µ to gn-rule(p). This new procedure returns a stable marriage and it is gender neutral. Moreover, if µ is peer indifferent, then µ ′ is peer indifferent as well.
Proof. To prove gender neutrality, we consider three cases:
• If the male signature is smaller than the female signature, the gn-rule swaps the men with the women. Thus we would apply µ to swapped genders.
To prove that the new procedure is gender neutral, we must prove that, if we swap the men with the women, the result is the same. If we do this swap, their signatures will be swapped. Thus the male signature will now be larger than the female signature, and therefore the gn-rule will not swap men and women. Thus procedure µ will be applied to the swapped genders.
• If the male signature is larger than the female signature, the gn-rule leaves the profile as it is. Thus µ is applied to profile p.
If we swap the genders, the male signature will now be smaller than the female signature, and therefore the gn-rule will perform the swap. Thus procedure µ will be applied to the original profile p.
• If the male and female signatures are identical, the men and women's preferences are isomorphic and there is only one stable matching. Any stable marriage procedure must therefore return this matching, and hence it is gender neutral.
As for peer indifference, if we start from a profile obtained by reordering men or women, the signatures will be the same and thus the gn-rule will behave in the same way (either swapping or not). Thus the result of applying the whole procedure to the reordered profile will be the same as the one obtained by using the given profile. □
If we are not concerned about preserving peer indifference, or if we start from a non-peer indifferent matching procedure, we can use a much simpler version of the gn-rule, where the signatures are obtained directly from the profile without considering any reordering/renaming of men or women. This simpler approach is still sufficient to guarantee gender neutrality, but might produce a procedure which is not peer indifferent.
VOTING RULES AND STABLE MARRIAGE PROCEDURES
We will now see how we can exploit results about voting rules to build stable marriage procedures which are both gender neutral and difficult to manipulate.
A score-based matching procedure: gender neutral but easy to manipulate
Given a profile, consider a set of its stable matchings. For simplicity, consider the set containing only the male and female optimal stable matchings. However, there is no reason why we could not consider a larger polynomial size set. For example, we might consider all stable matchings found on a path through the stable marriage lattice [16] between the male and female optimal, or we may simply run twice any procedure computing a set of stable marriages, swapping genders the second time. We can now use the men and women's preferences to rank stable matchings in the considered set. For example, as in [15], we can score a matching as the sum of the men's ranks of their partners and of the women's ranks of their partners.
We then choose between the stable matchings in our given set according to which has the smallest score. Since our set contains only the male and the female optimal matches, we choose between the male and female optimal stable matchings according to which has the lowest score. If the male optimal and the female optimal stable matching have the same score, we use the signature of men and women, as defined in the previous section, to tie-break. It is possible to show that the resulting matching procedure, which returns the male optimal or the female optimal stable matching according to the scoring rule (or, if they have the same score, according to the signature) is gender neutral.
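A minimal Python sketch of this sum-of-ranks score, reusing the preference representation of the Gale-Shapley sketch above (the helper names are ours); a tie between the two candidate matchings would be broken by the signature-based rule of the previous section.

```python
def rank_score(matching, men_prefs, women_prefs):
    """Sum of everyone's rank of their partner (0 = first choice); `matching`
    maps each man to his partner."""
    return sum(men_prefs[m].index(w) + women_prefs[w].index(m)
               for m, w in matching.items())

def score_based_choice(male_opt, female_opt, men_prefs, women_prefs):
    """Return whichever of the two stable matchings has the lowest score;
    on a tie, fall back to the gender neutral signature tie-break (not shown)."""
    sm = rank_score(male_opt, men_prefs, women_prefs)
    sf = rank_score(female_opt, men_prefs, women_prefs)
    if sm != sf:
        return male_opt if sm < sf else female_opt
    return None  # tie: defer to the signature-based tie-break
```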
Unfortunately, this procedure is easy to manipulate. For a man, it is sufficient to place his male optimal partner in first place in his preference list, and his female optimal partner in last place. If this manipulation does not give the man his male optimal partner, then there is no manipulation that will. A woman manipulates the result in a symmetric way.
Lexicographical minimal regret
Let us now consider a more complex score-based matching procedure to choose between two (or more) stable matchings which will be computationally difficult to manipulate. The intuition behind the procedure is to choose between stable matchings according to the preferences of the most preferred men or women. In particular, we will pick the stable matching that is most preferred by the most popular men and women. Given a voting rule, we order the men using the women's preferences and order the women using the men's preferences. We then construct a male score vector for a matching using this ordering of the men (where a more preferred man is before a less preferred one). The ith element of the male score vector is the integer j iff the ith man in this order is married to his jth most preferred woman. A large male score vector is a measure of dissatisfaction with the matching from the perspective of the more preferred men. A female score vector is computed in an analogous manner.
The overall score for a matching is the lexicographically largest of its male and female score vectors. A large overall score corresponds to dissatisfaction with the matching from the perspective of the more preferred men or women. We then choose the stable matching from our given set which has the lexicographically least overall score. That is, we choose the stable matching which carries less regret for the more preferred men and women.
In the event of a tie, we can use any gender neutral tiebreaking procedure, such as the one based on signatures described above. Let us call this procedure the lexicographical minimal regret stable marriage procedure. In particular, when voting rule v is used to order the men and women we will call it a v-based lexicographical minimal regret stable marriage procedure. It is easy to see that this procedure is gender neutral. In addition, it is computationally hard to manipulate. Here we consider using STV [1] to order the men and women. However, we conjecture that similar results will hold for stable matching procedures which are derived from other voting rules which are NP-hard to manipulate.
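The score vectors and the lexicographical minimal regret choice can be sketched as follows; the orderings of the men and of the women are assumed to come from the chosen voting rule, and the representation and names are our own.

```python
def score_vector(matching, order, prefs):
    """Entry i is j iff the ith most popular member (according to `order`)
    is married to his/her jth choice (1-based). `matching` maps each member
    of that gender to his/her partner."""
    return tuple(prefs[p].index(matching[p]) + 1 for p in order)

def minimal_regret(stable_matchings, men_order, women_order, men_prefs, women_prefs):
    """Pick the stable matching whose lexicographically largest score vector
    (over the two genders) is lexicographically smallest; a further gender
    neutral tie-break (e.g. by signatures) is omitted here."""
    def overall(mu):
        inv = {w: m for m, w in mu.items()}     # woman -> man
        return max(score_vector(mu, men_order, men_prefs),
                   score_vector(inv, women_order, women_prefs))
    return min(stable_matchings, key=overall)
```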
In the STV rule each voter provides a total order on candidates and, initially, an individual's vote is allocated to his most preferred candidate. The quota of the election is the minimum number of votes necessary to get elected. If no candidate exceeds the quota, then, the candidate with the fewest votes is eliminated, and his votes are equally distributed among the second choices of the voters who had selected him as first choice. This step is repeated until some candidate exceeds the quota. In the following theorem we assume a quota of at least half of the number of voters.
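The following is a minimal Python sketch of the STV elimination loop just described; the ballot representation and function name are our own, each ballot is assumed to rank all candidates, and ties among the weakest candidates are broken by eliminating the alphabetically last one, the convention assumed in the proof below. The sketch returns only the overall winner; a full ordering of the candidates, as used to rank the men and women, would require additional bookkeeping not shown here.

```python
from collections import Counter

def stv_winner(ballots, quota):
    """Sketch of STV: repeatedly eliminate the candidate with the fewest
    first-place votes until some candidate reaches the quota."""
    candidates = {c for b in ballots for c in b}
    while True:
        # each ballot counts for its most preferred remaining candidate
        tally = Counter(next(c for c in b if c in candidates) for b in ballots)
        top, votes = max(tally.items(), key=lambda kv: kv[1])
        if votes >= quota or len(candidates) == 1:
            return top
        fewest = min(tally.get(c, 0) for c in candidates)
        # tie-break: eliminate the alphabetically last among the weakest candidates
        candidates.discard(max(c for c in candidates if tally.get(c, 0) == fewest))
```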
Theorem 4. It is NP-complete to decide if an agent can manipulate the STV-based lexicographical minimal regret stable marriage procedure.
Proof. We adapt the reduction used to prove that constructive manipulation of the STV rule by a single voter is NP-hard [2]. In our proof, we need to consider how the STV rule treats ties. For example, ties will occur among all men and all women, since we will build a profile where every man and every woman have different first choice. Thus STV will need to tie break between all the men (and between all the women). We suppose that in any such tie break, the candidate alphabetically last is eliminated. We also suppose that a man h will try to manipulate the stable marriage procedure by mis-reporting his preferences.
To prove membership in NP, we observe that a manipulation is a polynomial witness. To prove NP-hardness, we give a reduction from 3-COVER. Given a set S with |S| = n and subsets Si with i ∈ [1, m], |Si| = 3 and Si ⊂ S, we ask if there exists an index set I with |I| = n/3 and ∪ i∈I Si = S. We will construct a profile of preferences for the men so that the only possibility is for STV to order first one of only two women, w or y. The manipulator h will try to vote strategically so that woman y is ordered first. This will have the consequence that we return the male optimal stable marriage in which the manipulator marries his first choice z1. On the other hand, if w is ordered first, we will return the female optimal stable marriage in which the manipulator is married to his second choice z2.
The following sets of women participate in the problem:
• two possible winners of the first STV election, w and y;
• z1 and z2 who are the first two choices of the manipulator;
• "first losers" in this election, ai and bi for i ∈ [1, m];
• "second line" in this election, ci and di for i ∈ [1, m];
• "e-bloc", ei for i ∈ [0, n];
• "garbage collectors", gi for i ∈ [1, m];
• "dummy women", z i,j,k where i ∈ [1,19] and j and k depend on i as outlined in the description given shortly for the men's preferences (e.g. for i = 1, j = 1 and k ∈ [1, 12m − 1] but for i ∈ [6,8], j ∈ [1, m] and k ∈ [1, 6m + 4j − 6]).
Ignoring the manipulator, the men's preferences will be constructed so that z1, z2 and the dummy women are the first women eliminated by the STV rule, and that ai and bi are 2m out of the next 3m women eliminated. In addition, let I = {i : bi is eliminated before ai}. Then the men's preferences will be constructed so that STV orders woman y first if and only if I is a 3-COVER. The manipulator can ensure bi is eliminated by the STV rule before ai for i ∈ I by placing ai in the i + 1th position and bi otherwise. The men's preferences are constructed as follows (where preferences are left unspecified, they can be completed in any order):
• a man n with preference (y, . . .) and ∀k ∈ [1, 12m − 1] a man with (z 1,1,k , y, . . .);
• a man p with preference (w, y, . . .) and ∀k ∈ [1, 12m−2] a man with (z 2,1,k , w, y, . . .);
• a man q with preference (e0, w, y, . . .) and ∀k ∈ [1, 10m+ 2n/3 − 1] a man with (z 3,1,k , e0, w, y, . . .);
• ∀j ∈ [1, n], a man with preference (ej, w, y, . . .) and ∀k ∈ [1, 12m−3] a man with preference (z 4,j,k , ej, w, y, . . .);
• ∀j ∈ [1, m], a man rj with preference (gj, w, y, . . .) and ∀k ∈ [1, 12m − 1] a man with preference (z 5,j,k , gj, w, y, . . .);
• ∀j ∈ [1, m], a man with preference (cj , dj, w, y, . . .) and ∀k ∈ [1, 6m+4j−6] a man with preference (z 6,j,k , cj , dj, w, y, . . .), and for each of the three k s.t. k ∈ Sj, a man with preference (z 7,j,k , cj , e k , w, y, . . .), and one with preference (z 8,j,k , cj , e k , w, y, . . .);
• ∀j ∈ [1, m], a man with preference (dj, cj , w, y, . . .) and ∀k ∈ [1, 6m+4j−2] a man with preference (z 9,j,k , dj, cj , w, y, . . .), one with preference (z 10,j,k , dj, e0, w, y, . . .), and one with (z 11,j,k , dj, e0, w, y, . . .);
• ∀j ∈ [1, m], a man with preference (aj, gj, w, y, . . .) and ∀k ∈ [1, 6m+4j−4] a man with preference (z 12,j,k , aj, gj, w, y, . . .), one with preference (z 13,j,k , aj, cj , w, y, . . .), one with preference (z 14,j,k , aj, bj, w, y, . . .), and one with preference (z 15,j,k , aj , bj, w, y, . . .).
• ∀j ∈ [1, m], a man with preference (bj , gj, w, y, . . .) and ∀k ∈ [1, 6m+4j−4] a man with preference (z 16,j,k , bj, gj, w, y, . . .), one with preference (z 17,j,k , bj, dj, w, y, . . .), one with preference (z 18,j,k , bj, aj, w, y, . . .), and one with preference (z 19,j,k , bj , aj, w, y, . . .).
Note that each woman is ranked first by exactly one man. The women's preferences will be set up so that the manipulator h is assured at least that he will marry his second choice, z2, as this will be his female optimal partner. To manipulate the election, the manipulator needs to put z1 first in his preferences and to report the rest of his preferences so that the result returned is the male optimal solution. As all women are ranked first by exactly one man, the male optimal matching marries h with z1.
When we use STV to order the women, z1, z2 and z i,j,k are alphabetically last so are eliminated first by the tie-breaking rule. This leaves the following profile:
• 12m men with preference (y, . . .);
• 12m − 1 men with preference (w, y, . . .);
• 10m + 2n/3 men with preference (e0, w, y, . . .);
• ∀j ∈ [1, n], 12m − 2 men with preference (ej, w, y, . . .);
• ∀j ∈ [1, m], 12m men with preference (gj, w, y, . . .);
• ∀j ∈ [1, m], 6m+4j−5 men with preference (cj, dj , w, y, . . .), and for each of the three k such that k ∈ Sj , two men with preference (cj , e k , w, y, . . .);
• ∀j ∈ [1, m], 6m+4j−1 men with preference (dj, cj , w, y, . . .), and two men with preference (dj, e0, w, y, . . .),
• ∀j ∈ [1, m], 6m+4j−3 men with preference (aj, gj, w, y, . . .), a man with preference (aj, cj , w, y, . . .), and two men with preference (aj, bj , w, y, . . .);
• ∀j ∈ [1, m], 6m+4j−3 men with preference (bj, gj, w, y, . . .) a man with preference (bj, dj , w, y, . . .), and two men with preference (bj, aj , w, y, . . .).
At this point, the votes are identical (up to renaming of the men) to the profile constructed in the proof of Theorem 1 in [2]. Using the same argument as there, it follows that the manipulator can ensure that STV orders woman y first instead of w if and only if there is a 3-COVER. The manipulation will place z1 first in h's preferences. Similar to the proof of Theorem 1 in [2], the manipulation puts woman aj in j + 1th place and bj otherwise where j ∈ J and J is any index set of a 3-COVER.
The women's preferences are as follows:
• the woman y with preference (n, . . .);
• the woman w with preference (q, . . .);
• the woman z1 with preference (p, . . .);
• the woman z2 with preference (h, . . .);
• the women gi with preference (ri, . . .);
• the other women with any preferences having all-different first choices, and which ensure STV orders r0 first and r1 second overall.
Each man is ranked first by exactly one woman. Hence, the female optimal stable matching is the first choice of the women. The male score vector of the male optimal stable matching is (1, 1, . . . , 1). Hence, the overall score vector of the male optimal stable matching equals the female score vector of the male optimal stable matching. This is (1, 2, . . .) if the manipulation is successful and (2, 1, . . .) if it is not. Similarly, the overall score vector of the female optimal stable matching equals the male score vector of the female optimal stable matching. This is (1, 3, . . .). Hence the lexicographical minimal regret stable marriage procedure will return the male optimal stable matching iff there is a successful manipulation of the STV rule. Note that the profile used in this proof is not universally manipulable: the first choices of the men are all different and each woman therefore only receives one proposal in the men-proposing Gale-Shapley algorithm. □

We can thus see that the proposed matching procedure is reasonable and appealing. In fact, it allows us to discriminate among stable matchings according to the men and women's preferences, and it is difficult to manipulate while ensuring gender neutrality.
RELATED WORK
In [18] fairness of a matching procedure is defined in terms of four axioms, two of which are gender neutrality and peer indifference. Then, the existence of matching procedures satisfying all or a subset of the axioms is considered in terms of restrictions on preference orderings. Here, instead, we propose a preprocessing step that allows us to obtain a gender neutral matching procedure from any matching procedure without imposing any restrictions on the preferences in the input.
A detailed description of results about manipulation of stable marriage procedures can be found in [14]. In particular, several early results [6,7,9,20] indicated the futility of men lying, so later work focused mostly on strategies in which the women lie. Gale and Sotomayor [10] presented the manipulation strategy in which women truncate their preference lists. Roth and Vande Vate [23] discussed strategic issues when the stable matching is chosen at random, proposed a truncation strategy and showed that every stable matching can be achieved as an equilibrium in truncation strategies. We instead do not allow the elimination of men from a woman's preference ordering, but permit reordering of the preference lists.
Teo et al. [25] suggested lying strategies for an individual woman, and proposed an algorithm to find the best partner with the male optimal procedure. We instead focus on the complexity of determining if the procedure can be manipulated to obtain a better result. Moreover, we also provide a universal manipulation scheme that, under certain conditions on the profile, assures that the female optimal partner is returned.
Coalition manipulation is considered in [14]. Huang shows how a coalition of men can get a better result in the men-proposing Gale-Shapley algorithm. By contrast, we do not consider a coalition but just a single manipulator, and do not consider just the Gale-Shapley algorithm.
CONCLUSIONS
We have studied the manipulability and gender neutrality of stable marriage procedures. We first looked at whether, as with voting rules, computational complexity might be a barrier to manipulation. It was known already that one prominent stable marriage procedure, the Gale-Shapley algorithm, is computationally easy to manipulate. We proved that, under some simple restrictions on agents' preferences, all stable marriage procedures are in fact easy to manipulate. Our proof provides a universal manipulation which an agent can use to improve his result. On the other hand, when preferences are unrestricted, we proved that there exist stable marriage procedures which are NP-hard to manipulate. We also showed how to use a voting rule to choose between stable matchings. In particular, we gave a stable marriage procedure which picks the stable matching that is most preferred by the most popular men and women. This procedure inherits the computational complexity of the underlying voting rule. Thus, when the STV voting rule (which is NP-hard to manipulate) is used to compute the most popular men and women, the corresponding stable marriage procedure is NP-hard to manipulate. Another desirable property of stable marriage procedures is gender neutrality. Our procedure of turning a voting rule into a stable marriage procedure is gender neutral.
This study of stable marriage procedures is only an initial step to understanding if computational complexity might be a barrier to manipulation. Many questions remain to be answered. For example, if preferences are correlated, are stable marriage procedures still computationally hard to manipulate? As a second example, are there stable marriage procedures which are difficult to manipulate on average? There are also many interesting and related questions connected with privacy and mechanism design. For instance, how do we design a decentralised stable marriage procedure which is resistant to manipulation and in which the agents do not share their preference lists? As a second example, how can side payments be used in stable marriage procedures to prevent manipulation?
| 7,679 |
0909.4437
|
2953180797
|
The stable marriage problem is a well-known problem of matching men to women so that no man and woman who are not married to each other both prefer each other. Such a problem has a wide variety of practical applications ranging from matching resident doctors to hospitals to matching students to schools. A well-known algorithm to solve this problem is the Gale-Shapley algorithm, which runs in polynomial time. It has been proven that stable marriage procedures can always be manipulated. Whilst the Gale-Shapley algorithm is computationally easy to manipulate, we prove that there exist stable marriage procedures which are NP-hard to manipulate. We also consider the relationship between voting theory and stable marriage procedures, showing that voting rules which are NP-hard to manipulate can be used to define stable marriage procedures which are themselves NP-hard to manipulate. Finally, we consider the issue that stable marriage procedures like Gale-Shapley favour one gender over the other, and we show how to use voting rules to make any stable marriage procedure gender neutral.
|
@cite_0 suggested lying strategies for an individual woman, and proposed an algorithm to find the best partner with the male optimal procedure. We instead focus on the complexity of determining if the procedure can be manipulated to obtain a better result. Moreover, we also provide a universal manipulation scheme that, under certain conditions on the profile, assures that the female optimal partner is returned.
|
{
"abstract": [
"We study strategic issues in the Gale-Shapley stable marriage model. In the first part of the paper, we derive the optimal cheating strategy and show that it is not always possible for a woman to recover her women-optimal stable partner from the men-optimal stable matching mechanism when she can only cheat by permuting her preferences. In fact, we show, using simulation, that the chances that a woman can benefit from cheating are slim. In the second part of the paper, we consider a two-sided matching market found in Singapore. We study the matching mechanism used by the Ministry of Education (MOE) in the placement of primary six students in secondary schools, and discuss why the current method has limited success in accommodating the preferences of the students, and the specific needs of the schools (in terms of the “mix” of admitted students). Using insights from the first part of the paper, we show that stable matching mechanisms are more appropriate in this matching market and explain why the strategic behavior of the students need not be a major concern. (Stable Marriage; Strategic Issues; Gale-Shapley Algorithm; Student Posting Exercise)"
],
"cite_N": [
"@cite_0"
],
"mid": [
"1557896792"
]
}
|
Manipulation and gender neutrality in stable marriage procedures
|
The stable marriage problem (SMP) [12] is a well-known problem of matching the elements of two sets. Given n men and n women, where each person expresses a strict ordering over the members of the opposite sex, the problem is to match the men to the women so that there are no two people of opposite sex who would both rather be matched with each other than their current partners. If there are no such people, all the marriages are said to be stable. Gale and Shapley [8] proved that it is always possible to solve the SMP and make all marriages stable, and provided a quadratic time algorithm which can be used to find one of two particular but extreme stable marriages, the so-called male optimal or female optimal solution. The Gale-Shapley algorithm has been used in many real-life applications, such as in systems for matching hospitals to resident doctors [21] and the assignment of primary school students in Singapore to secondary schools [25]. Variants of the stable marriage problem turn up in many domains. For example, the US Navy has a web-based multi-agent system for assigning sailors to ships [17].
One important issue is whether agents have an incentive to tell the truth or can manipulate the result by misreporting their preferences. Unfortunately, Roth [20] has proved that all stable marriage procedures can be manipulated. He demonstrated a stable marriage problem with 3 men and 3 women which can be manipulated whatever stable marriage procedure we use. This result is in some sense analogous to the classical Gibbard-Satterthwaite [11,24] theorem for voting theory, which states that all voting procedures are manipulable under modest assumptions provided we have 3 or more voters. For voting theory, Bartholdi, Tovey and Trick [3] proposed that computational complexity might be an escape: whilst manipulation is always possible, there are voting rules where it is NP-hard to find a manipulation.
We might hope that computational complexity might also be a barrier to manipulating stable marriage procedures. Unfortunately, the Gale-Shapley algorithm is computationally easy to manipulate [25]. We identify here stable marriage procedures that are NP-hard to manipulate. This can be considered a first step to understanding if computational complexity might be a barrier to manipulation. Many questions remain to be answered. For example, the preferences met in practice may be highly correlated. Men may have similar preferences for many of the women. Are such profiles computationally difficult to manipulate? As a second example, it has been recently recognised (see, for example, [4,19]) that worst-case results may represent an insufficient barrier against manipulation since they may only apply to problems that are rare. Are there stable marriage procedures which are difficult to manipulate on average?
Another drawback of many stable marriage procedures such as the one proposed by Gale-Shapley is their bias towards one of the two genders. The stable matching returned by the Gale-Shapley algorithm is either male optimal (and the best possible for every man) but female pessimal (that is, the worst possible for every woman), or female optimal but male pessimal. It is often desirable to use stable marriage procedures that are gender neutral [18]. Such procedures return a stable matching that is not affected by swapping the men with the women. The goal of this paper is to study both the complexity of manipulation and gender neutrality in stable marriage procedures, and to design gender neutral procedures that are difficult to manipulate.
It is known that the Gale-Shapley algorithm is computationally easy to manipulate [25]. Our first contribution is to prove that if the male and female preferences have a certain form, it is computationally easy to manipulate any stable marriage procedure. We provide a universal polynomial time manipulation scheme that, under certain conditions on the preferences, guarantees that the manipulator marries his optimal stable partner irrespective of the stable marriage procedure used. On the other hand, our second contribution is to prove that, when the preferences of the men and women are unrestricted, there exist stable marriage procedures which are NP-hard to manipulate.
Our third contribution is to show that any stable marriage procedure can be made gender neutral by means of a simple pre-processing step which may swap the men with the women. This swap can, for instance, be decided by a voting rule. However, this may give a gender neutral stable matching procedure which is easy to manipulate.
Our final contribution is a stable matching procedure which is both gender neutral and NP-hard to manipulate. This procedure uses a voting rule that, considering the male and female preferences, helps to choose between stable matchings. In fact, it picks the stable matching that is most preferred by the most popular men and women. We prove that, if the voting rule used is Single Transferable Vote (STV) [1], which is NP-hard to manipulate, then the resulting stable matching procedure is both gender neutral and NP-hard to manipulate. We conjecture that other voting rules which are NP-hard to manipulate will give rise to stable matching procedures which are also gender neutral and NP-hard to manipulate. Thus, our approach shows how combining voting rules and stable matching procedures can be beneficial in two ways: by using preferences to discriminate among stable matchings and by providing a possible computational shield against manipulation.
The Gale-Shapley algorithm
The Gale-Shapley algorithm [8] is a well-known algorithm to solve the SMP. It involves a number of rounds where each un-engaged man "proposes" to his most-preferred woman to whom he has not yet proposed. Each woman then considers all her suitors and tells the one she most prefers "maybe" and all the rest of them "No". She is then provisionally "engaged". In each subsequent round, each un-engaged man proposes to one woman to whom he has not yet proposed (the woman may or may not already be engaged), and the women once again reply with one "maybe" and reject the rest. This may mean that already-engaged women can "trade up", and already-engaged men can be "jilted".
This algorithm needs a number of steps that is quadratic in n, and it guarantees that:
• If the number of men and women coincide, and all participants express a linear order over all the members of the other group, everyone gets married. Once a woman becomes engaged, she is always engaged to someone. So, at the end, there cannot be a man and a woman both un-engaged, as he must have proposed to her at some point (since a man will eventually propose to every woman, if necessary) and, being un-engaged, she would have to have said yes.
• The marriages are stable. Let Alice be a woman and Bob be a man. Suppose they are each married, but not to each other. Upon completion of the algorithm, it is not possible for both Alice and Bob to prefer each other over their current partners. If Bob prefers Alice to his current partner, he must have proposed to Alice before he proposed to his current partner. If Alice accepted his proposal, yet is not married to him at the end, she must have dumped him for someone she likes more, and therefore doesn't like Bob more than her current partner. If Alice rejected his proposal, she was already with someone she liked more than Bob.
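As an illustration, here is a minimal sketch of the men-proposing procedure just described; the function and variable names (gale_shapley, men_prefs, women_prefs) are ours and purely illustrative.

```python
# Minimal sketch of the men-proposing Gale-Shapley procedure.
# men_prefs / women_prefs map each person to a preference list over the
# other gender, most preferred first; lists are assumed complete.

def gale_shapley(men_prefs, women_prefs):
    # rank[w][m] = position of man m in woman w's list (lower is better)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}   # next woman each man will propose to
    engaged_to = {}                           # woman -> man she has said "maybe" to
    free_men = list(men_prefs)

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]      # most preferred woman not yet proposed to
        next_choice[m] += 1
        current = engaged_to.get(w)
        if current is None:
            engaged_to[w] = m                 # w becomes provisionally engaged to m
        elif rank[w][m] < rank[w][current]:
            engaged_to[w] = m                 # w "trades up"; her previous partner is jilted
            free_men.append(current)
        else:
            free_men.append(m)                # w says "No"; m remains free
    return {m: w for w, m in engaged_to.items()}   # the male optimal matching
```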
Note that the pairing generated by the Gale-Shapley algorithm is male optimal, i.e., every man is paired with his highest ranked feasible partner, and female pessimal, i.e., each woman is paired with her lowest ranked feasible partner. It would be the reverse, of course, if the roles of male and female participants in the algorithm were interchanged. Given n men and n women, a profile is a sequence of 2n strict total orders, n over the men and n over the women. In a profile, every woman ranks all the men, and every man ranks all the women.
Example 1. Assume n = 3. Let W = {w1, w2, w3} and M = {m1, m2, m3} be respectively the set of women and men. The following sequence of strict total orders defines a profile:
• m1 : w1 > w2 > w3 (i.e., the man m1 prefers the woman w1 to w2 to w3),
• m2 : w2 > w1 > w3,
• m3 : w3 > w2 > w1,
• w1 : m1 > m2 > m3,
• w2 : m3 > m1 > m2,
• w3 : m2 > m1 > m3
For this profile, the Gale-Shapley algorithm returns the male optimal solution {(m1, w1), (m2, w2), (m3, w3)}. On the other hand, the female optimal solution is {(w1, m1), (w2, m3), (w3, m2)}.
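Running the sketch given earlier on this profile (with the same illustrative names) reproduces the male optimal solution:

```python
men_prefs = {"m1": ["w1", "w2", "w3"],
             "m2": ["w2", "w1", "w3"],
             "m3": ["w3", "w2", "w1"]}
women_prefs = {"w1": ["m1", "m2", "m3"],
               "w2": ["m3", "m1", "m2"],
               "w3": ["m2", "m1", "m3"]}

matching = gale_shapley(men_prefs, women_prefs)
print(sorted(matching.items()))
# [('m1', 'w1'), ('m2', 'w2'), ('m3', 'w3')] -- the male optimal solution of Example 1
```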
Gender neutrality and non-manipulability
A desirable property of a stable marriage procedure is gender neutrality. A stable marriage procedure is gender neutral [18] if and only if when we swap the men with the women, we get the same result. A related property, called peer indifference [18], holds if the result is not affected by the order in which the members of the same sex are considered. The Gale-Shapley procedure is peer indifferent but it is not gender neutral. In fact, if we swap men and women in Example 1, we obtain the female optimal solution rather than the male optimal one.
Another useful property of a stable marriage procedure is its resistance to manipulation. In fact, it would be desirable that lying does not lead to a better result for the liar. A stable marriage procedure is manipulable if there is a way for one person to mis-report their preferences and obtain a result which is better than the one they would have obtained with their true preferences.
Roth [20] has proven that stable marriage procedures can always be manipulated, i.e., that no stable marriage procedure exists which always yields a stable outcome and gives agents the incentive to reveal their true preferences. He demonstrated a 3 men, 3 women profile which can be manipulated whatever stable marriage procedure we use. A similar result in a different context is the one by Gibbard and Satterthwaite [11,24], which proves that all voting procedures [1] are manipulable under some modest assumptions. In this context, Bartholdi, Tovey and Trick [3] proposed that computational complexity might be an escape: whilst manipulation is always possible, there are rules like Single Transferable Vote (STV) where it is NP-hard to find a manipulation [2]. This resistance to manipulation arises from the difficulty of inverting the voting rule and does not depend on other assumptions like the difficulty of discovering the preferences of the other voters. In this paper, we study whether computational complexity may also be an escape from the manipulability of stable marriage procedures. Our results are only initial steps to a more complete understanding of the computational complexity of manipulating stable matching procedures. As mentioned before, NP-hardness results only address the worst case and may not apply to preferences met in practice.
MANIPULATING STABLE MARRIAGE PROCEDURES
A manipulation attempt by a participant p is the misreporting of p's preferences. A manipulation attempt is unsuccessful if the resulting marriage for p is strictly worse than the marriage obtained telling the truth. Otherwise, it is said to be successful. A stable marriage procedure is manipulable if there is a profile with a successful manipulation attempt from a participant.
The Gale-Shapley procedure, which depending on how it is defined returns either the male optimal or the female optimal solutions, is computationally easy to manipulate [25]. However, besides these two extreme solutions, there may be many other stable matchings. Several procedures have been defined to return some of these other stable matchings [13]. Our first contribution is to show that, under certain conditions on the shape of the male and female preferences, any stable marriage procedure is computationally easy to manipulate.
Consider a profile p and a woman w in such a profile. Let m be the male optimal partner for w in p, and n be the female optimal partner for w in p. Profile p is said to be universally manipulable by w if the following conditions hold:
• in the men-proposing Gale-Shapley algorithm, w receives more than one proposal;
• there exists a woman v such that n is the male optimal partner for v in p;
• v prefers m to n;
• n's preferences are . . . > v > w > . . .;
• m's preferences are . . . > w > v > . . ..
Theorem 1. Consider any stable marriage procedure and any woman w. There is a polynomial manipulation scheme that, for any profile which is universally manipulable by w, produces the female optimal partner for w. Otherwise, it produces the same partner as truthful reporting.
Proof. Consider the manipulation attempt that moves the male optimal partner m of w to the lower end of w's preference ordering, obtaining the new profile p ′ . Consider now the behaviour of the men-proposing Gale-Shapley algorithm on p and p ′ . Two cases are possible for p: w is proposed to only by man m, or she is also proposed to by some other man o. In this second case, it must be that w prefers m to o, since m is the male optimal partner for w.
If w is proposed to by m and also by some o, then, when w compares the two proposals, in p she will decide for m, while in p ′ she will decide for o. At this point, in p ′ , m will have to propose to the next best woman for him, that is, v, and she will accept because of the assumptions on her preference ordering. This means that n (who was married to v in p) now in p ′ has to propose to his next best choice, that is, w, who will accept, since w prefers n to m. So, in p ′ , the male optimal partner for w, as well as her female optimal partner, is n. This means that there is only one stable partner for w in p ′ . Therefore, any stable marriage procedure must return n as the partner for w.
Thus, if woman w wants to manipulate a stable marriage procedure, she can check if the profile is universally manipulable by her. This involves simulating the Gale-Shapley algorithm to see whether she is proposed to by m only or also by some other man. In the former case, she will not manipulate. Otherwise, she will move m to the far right of her preference list and she will get her female optimal partner, whatever stable marriage procedure is used. This procedure is polynomial since the Gale-Shapley algorithm takes quadratic time to run. □
Example 2. In a setting with 3 men and 3 women, consider the profile {m1 : w1 > w2 > w3; m2 : w2 > w1 > w3; m3 : w1 > w2 > w3} and {w1 : m2 > m1 > m3; w2 : m1 > m2 > m3; w3 : m1 > m2 > m3}. In this profile, the male optimal solution is {(m1, w1), (m2, w2), (m3, w3)}. This profile is universally manipulable by w1. In fact, woman w1 can successfully manipulate by moving m1 after m3, obtaining the marriage (m2, w1) and thus her female optimal partner. Notice that this holds no matter what stable marriage procedure is used. This same profile is not universally manipulable by w2 or w3, since they receive just one proposal in the men-proposing Gale-Shapley algorithm. In fact, woman w2 cannot manipulate: trying to move m2 after m3 gets a worse result. Also, woman w3 cannot manipulate since her male optimal partner is her least preferred man.
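A sketch of this manipulation scheme is given below (illustrative helper names). It checks only the proposal-count condition and assumes the remaining conditions of universal manipulability hold for the given profile.

```python
# Sketch of the scheme of Theorem 1 for a woman w: simulate the
# men-proposing algorithm and, if w receives more than one proposal,
# report a list with her male optimal partner moved to the far right.

def proposals_and_male_optimal(men_prefs, women_prefs):
    """Re-run the men-proposing algorithm, recording every proposal received."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    nxt = {m: 0 for m in men_prefs}
    engaged_to, free_men = {}, list(men_prefs)
    received = {w: [] for w in women_prefs}
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][nxt[m]]
        nxt[m] += 1
        received[w].append(m)
        cur = engaged_to.get(w)
        if cur is None or rank[w][m] < rank[w][cur]:
            if cur is not None:
                free_men.append(cur)          # cur is jilted
            engaged_to[w] = m
        else:
            free_men.append(m)                # w rejects m
    return received, {m: w for w, m in engaged_to.items()}

def manipulated_preferences(w, men_prefs, women_prefs):
    received, male_opt = proposals_and_male_optimal(men_prefs, women_prefs)
    m = next(man for man, woman in male_opt.items() if woman == w)  # w's male optimal partner
    truth = women_prefs[w]
    if len(received[w]) <= 1:
        return truth                                  # only m proposed: report the truth
    return [x for x in truth if x != m] + [m]         # move m to the far right
```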
Restricting to universally manipulable profiles makes manipulation computationally easy. On the other hand, if we allow all possible profiles, there are stable marriage procedures that are NP-hard to manipulate. The intuition is simple. We construct a stable marriage procedure that is computationally easy to compute but NP-hard to invert.
To manipulate, a man or a woman will essentially need to be able to invert the procedure to choose between the exponential number of possible preference orderings. Hence, the constructed stable marriage procedure will be NP-hard to manipulate. The stable marriage procedure used in this proof is somewhat "artificial". However, we will later propose a stable marriage procedure which is more natural while remaining NP-hard to manipulate. This procedure selects the stable matching that is most preferred by the most popular men and women. It is an interesting open question to devise other stable marriage procedures which are "natural" and computationally difficult to manipulate.
Theorem 2. There exist stable marriage procedures for which deciding the existence of a successful manipulation is NP-complete.
Proof. We construct a stable marriage procedure which chooses between the male and female optimal solution based on whether the profile encodes an NP-complete problem and its polynomial witness. The manipulator's preferences define the witness. The other people's preferences define the NP-complete problem. Hence, the manipulator needs to be able to solve an NP-complete problem to be able to manipulate successfully. Deciding if there is a successful manipulation for this stable marriage procedure is clearly in NP since we can compute male and female optimal solutions in polynomial time, and we can check a witness to an NP-complete problem also in polynomial time.
Our stable marriage procedure is defined to work on n + 3 men (m1, m2 and p1 to pn+1) and n + 3 women (w1, w2 and v1 to vn+1). It returns the female optimal solution if the preferences of woman w1 encode a Hamiltonian path in a directed graph encoded by the other women's preferences, otherwise it returns the male optimal solution. The 3rd to (n + 2)th preferences of woman w1 encode a possible Hamiltonian path in an n-node graph. In particular, if the (2 + i)th man in the preference ordering of woman w1, for i > 0, is man pj, then the path goes from vertex i to vertex j. The preferences of the women vi for i ≤ n encode the graph in which we find this Hamiltonian path. In particular, if man pj for j < n + 1 and j ≠ i appears before man pn+1 in the preference list of woman vi, then there is a directed edge in the graph from i to j. It should be noticed that any graph can be produced using this construction.
Given a graph which is not complete in which we wish to find a Hamiltonian path, we now build a special profile. Woman w1 will be able to manipulate this profile successfully iff the graph contains a Hamiltonian path. In the profile, woman w1 most prefers to marry man m1 and then man m2. Consider any pair of vertices (i, j) not in the graph. Woman w1 puts man pj at position 2 + i in her preference order. She puts all other pj's in any arbitrary order. This construction will guarantee that the preferences of w1 do not represent a Hamiltonian path. Woman w2 most prefers to marry man m2. Woman vi most prefers to marry man pi, and has preferences for the other men pj according to the edges from vertex i. Man m1 most prefers woman w2. Man m2 most prefers woman w1. Finally, man pi most prefers woman vi. All other unspecified preferences can be chosen in any way. By construction, all first choices are different. Hence, the male optimal solution has the men married to their first choice, whilst the female optimal solution has the women married to their first choice.
The male optimal solution has woman w1 married to man m2. The female optimal solution has woman w1 married to man m1. By construction, the preferences of woman w1 do not represent a Hamiltonian path. Hence our stable matching procedure returns the male optimal solution: woman w1 married to man m2. The only successful manipulation then for woman w1 is if she can marry her most preferred choice, man m1. As all first choices are different, woman w1 cannot successfully manipulate the male or female optimal solution. Therefore, she must manipulate her preferences so that she spells out a Hamiltonian path in her preference ordering, so that our stable marriage procedure returns the female optimal solution. This means she can successfully manipulate iff there is a Hamiltonian path. Hence, deciding if there is a successful manipulation is NP-complete. □
Note that we can modify the proof by introducing O(n^2) men so that the graph is encoded in the tail of the preferences of woman w2. This means that it remains NP-hard to manipulate a stable marriage procedure even if we collude with all but one of the women. It also means that it is NP-hard to manipulate a stable marriage procedure when the problem is imbalanced and there are just 2 women but an arbitrary number of men. Notice that this procedure is not peer indifferent, since it gives special roles to different men and women. However, it is possible to make it peer indifferent, so that it computes the same result if we rename the men and women. For instance, we just take the men's preferences and compute from them a total ordering of the women (e.g. by running an election with these preferences). Similarly, we take the women's preferences and compute from them a total ordering of the men. We can then use these orderings to assign indices to men and women. Notice also that this procedure is not gender neutral. If we swap men and women, we may get a different result. We can, however, use the simple procedure proposed in the next section to make it gender neutral.
GENDER NEUTRALITY
As mentioned before, a weakness of many stable marriage procedures like the Gale-Shapley procedure and the procedure presented in the previous section, is that they are not gender neutral. They may greatly favour one sex over the other. We now present a simple and universal technique for taking any stable marriage procedure and making it gender neutral. We will assume that the men and the women are named from 1 to n. We will also say that the men's preferences are isomorphic to the women's preferences iff there is a bijection between the men and women that preserves both the men's and women's preferences. In this case, it is easy to see that there is only one stable matching.
We can convert any stable marriage procedure into one that is gender neutral by adding a pre-round in which we choose if we swap the men with the women. The idea of using pre-rounds for enforcing certain properties is not new and has been used for example in [5] to make manipulation of voting rules NP-hard. The goal of our pre-round is, instead, to ensure gender-neutrality. More precisely, for each gender we compute its signature: a vector of numbers constructed by concatenating together each of the individual preference lists. Among all such vectors, the signature is the lexicographically smallest vector under reordering of the members of the chosen gender and renumbering of the members of the other gender.
Example 3. Consider the following profile with 3 men and 3 women. {m1 : w2 > w1 > w3; m2 : w3 > w2 > w1; m3 : w2 > w1 > w3} {w1 : m1 > m2 > m3; w2 : m3 > m1 > m2; w3 : m2 > m1 > m3}. The signature of the men is 123123312: each group of three digits represents the preference ordering of a man; men m2 and m3 and women w1 and w2 have been swapped with each other to obtain the lexicographically smallest vector. The signature of the women is instead 123213312.
Note that this vector can be computed in O(n^2) time. For each man, we put his preference list first, then reorder the women so that this man's preference list reads 1 to n. Finally, we concatenate the other men's preference lists in lexicographical order. We define the signature as the smallest such vector.
Before applying any stable marriage procedure, we propose to pre-process the profile according to the following rule, that we will call gn-rule (for gender neutral): If the male signature is smaller than the female signature, then we swap the men with the women before calling the stable marriage procedure. On the other hand, if the male signature is equal or greater than the female signature, we will not swap the men with the women before calling the stable marriage procedure. In the example above, the male signature is smaller than the female signature, thus men and women must be swapped before using the stable marriage procedure.
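A sketch of the signature computation and of the gn-rule follows (illustrative names; a straightforward rather than asymptotically optimal implementation). On the profile of Example 3 it reproduces the signatures 123123312 and 123213312 given above.

```python
# Preference lists are given as lists of indices 1..n over the other gender.

def signature(prefs):
    # lexicographically smallest concatenation: put one person's renumbered
    # list first, then the remaining renumbered lists in lexicographic order
    best = None
    for i in range(len(prefs)):
        renum = {old: new + 1 for new, old in enumerate(prefs[i])}  # i's list reads 1..n
        relabelled = [[renum[x] for x in p] for p in prefs]
        vec = relabelled[i] + [x for p in sorted(relabelled[:i] + relabelled[i + 1:])
                               for x in p]
        if best is None or vec < best:
            best = vec
    return best

def gn_rule(men_prefs, women_prefs):
    # swap the genders iff the male signature is smaller than the female one;
    # the returned pair is then handed to the chosen stable marriage procedure
    if signature(men_prefs) < signature(women_prefs):
        return women_prefs, men_prefs
    return men_prefs, women_prefs

# Example 3: the men's signature is smaller, so the genders are swapped.
men = [[2, 1, 3], [3, 2, 1], [2, 1, 3]]      # m1, m2, m3 over w1..w3
women = [[1, 2, 3], [3, 1, 2], [2, 1, 3]]    # w1, w2, w3 over m1..m3
assert signature(men) == [1, 2, 3, 1, 2, 3, 3, 1, 2]      # 123123312
assert signature(women) == [1, 2, 3, 2, 1, 3, 3, 1, 2]    # 123213312
```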
Theorem 3. Consider any stable marriage procedure, say µ. Given a profile p, consider the new procedure µ ′ obtained by applying µ to gn-rule(p). This new procedure returns a stable marriage and it is gender neutral. Moreover, if µ is peer indifferent, then µ ′ is peer indifferent as well.
Proof. To prove gender neutrality, we consider three cases:
• If the male signature is smaller than the female signature, the gn-rule swaps the men with the women. Thus we would apply µ to swapped genders.
To prove that the new procedure is gender neutral, we must prove that, if we swap the men with the women, the result is the same. If we do this swap, their signatures will be swapped. Thus the male signature will now be larger than the female signature, and therefore the gn-rule will not swap men and women. Thus procedure µ will be applied to swapped genders.
• If the male signature is larger than the female signature, the gn-rule leaves the profile as it is. Thus µ is applied to profile p.
If we swap the genders, the male signature will now be smaller than the female signature, and therefore the gn-rule will perform the swap. Thus procedure µ will be applied to the original profile p.
• If the male and female signatures are identical, the men and women's preferences are isomorphic and there is only one stable matching. Any stable marriage procedure must therefore return this matching, and hence it is gender neutral.
As for peer indifference, if we start from a profile obtained by reordering men or women, the signatures will be the same and thus the gn-rule will behave in the same way (either swapping or not). Thus the result of applying the whole procedure to the reordered profile will be the same as the one obtained by using the given profile. □
If we are not concerned about preserving peer indifference, or if we start from a non-peer indifferent matching procedure, we can use a much simpler version of the gn-rule, where the signatures are obtained directly from the profile without considering any reordering/renaming of men or women. This simpler approach is still sufficient to guarantee gender neutrality, but might produce a procedure which is not peer indifferent.
VOTING RULES AND STABLE MARRIAGE PROCEDURES
We will now see how we can exploit results about voting rules to build stable marriage procedures which are both gender neutral and difficult to manipulate.
A score-based matching procedure: gender neutral but easy to manipulate
Given a profile, consider a set of its stable matchings. For simplicity, consider the set containing only the male and female optimal stable matchings. However, there is no reason why we could not consider a larger polynomial size set. For example, we might consider all stable matchings found on a path through the stable marriage lattice [16] between the male and female optimal, or we may simply run twice any procedure computing a set of stable marriages, swapping genders the second time. We can now use the men and women's preferences to rank stable matchings in the considered set. For example, as in [15], we can score a matching as the sum of the men's ranks of their partners and of the women's ranks of their partners.
We then choose between the stable matchings in our given set according to which has the smallest score. Since our set contains only the male and the female optimal matches, we choose between the male and female optimal stable matchings according to which has the lowest score. If the male optimal and the female optimal stable matching have the same score, we use the signature of men and women, as defined in the previous section, to tie-break. It is possible to show that the resulting matching procedure, which returns the male optimal or the female optimal stable matching according to the scoring rule (or, if they have the same score, according to the signature) is gender neutral.
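A sketch of this score-based choice (illustrative names; the signature-based tie-break is only indicated by a comment):

```python
def rank_sum_score(matching, men_prefs, women_prefs):
    # `matching` maps each man to the woman he marries; index 0 = first choice
    return sum(men_prefs[m].index(w) + women_prefs[w].index(m)
               for m, w in matching.items())

def score_based_choice(male_opt, female_opt, men_prefs, women_prefs):
    s_male = rank_sum_score(male_opt, men_prefs, women_prefs)
    s_female = rank_sum_score(female_opt, men_prefs, women_prefs)
    if s_male == s_female:
        return male_opt   # tie: the text breaks it with the gender signatures (omitted here)
    return male_opt if s_male < s_female else female_opt
```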
Unfortunately, this procedure is easy to manipulate. For a man, it is sufficient to place his male optimal partner in first place in his preference list, and his female optimal partner in last place. If this manipulation does not give the man his male optimal partner, then there is no manipulation that will. A woman manipulates the result in a symmetric way.
Lexicographical minimal regret
Let us now consider a more complex score-based matching procedure to choose between two (or more) stable matchings which will be computationally difficult to manipulate. The intuition behind the procedure is to choose between stable matchings according to the preferences of the most preferred men or women. In particular, we will pick the stable matching that is most preferred by the most popular men and women. Given a voting rule, we order the men using the women's preferences and order the women using the men's preferences. We then construct a male score vector for a matching using this ordering of the men (where a more preferred man is before a less preferred one). The ith element of the male score vector is the integer j iff the ith man in this order is married to his jth most preferred woman. A large male score vector is a measure of dissatisfaction with the matching from the perspective of the more preferred men. A female score vector is computed in an analogous manner.
The overall score for a matching is the lexicographically largest of its male and female score vectors. A large overall score corresponds to dissatisfaction with the matching from the perspective of the more preferred men or women. We then choose the stable matching from our given set which has the lexicographically least overall score. That is, we choose the stable matching which carries the least regret for the more preferred men and women.
In the event of a tie, we can use any gender neutral tie-breaking procedure, such as the one based on signatures described above. Let us call this procedure the lexicographical minimal regret stable marriage procedure. In particular, when voting rule v is used to order the men and women we will call it a v-based lexicographical minimal regret stable marriage procedure. It is easy to see that this procedure is gender neutral. In addition, it is computationally hard to manipulate. Here we consider using STV [1] to order the men and women. However, we conjecture that similar results will hold for stable matching procedures which are derived from other voting rules which are NP-hard to manipulate.
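A sketch of the lexicographical minimal regret choice (illustrative names), assuming the orderings of the men and women by popularity have already been computed with some voting rule; the gender neutral tie-break is omitted.

```python
def overall_score(matching, men_order, women_order, men_prefs, women_prefs):
    # men_order / women_order: most to least popular; ranks start at 1 as in the text
    husband = {w: m for m, w in matching.items()}
    male_vec = [men_prefs[m].index(matching[m]) + 1 for m in men_order]
    female_vec = [women_prefs[w].index(husband[w]) + 1 for w in women_order]
    return max(male_vec, female_vec)      # lexicographically largest of the two vectors

def lex_min_regret(stable_matchings, men_order, women_order, men_prefs, women_prefs):
    # pick the stable matching whose overall score is lexicographically least
    return min(stable_matchings,
               key=lambda mu: overall_score(mu, men_order, women_order,
                                            men_prefs, women_prefs))
```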
In the STV rule each voter provides a total order on candidates and, initially, an individual's vote is allocated to his most preferred candidate. The quota of the election is the minimum number of votes necessary to get elected. If no candidate exceeds the quota, the candidate with the fewest votes is eliminated, and his votes are redistributed among the next choices of the voters who had selected him as their first choice. This step is repeated until some candidate exceeds the quota. In the following theorem we assume a quota of at least half of the number of voters.
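A minimal sketch of single-winner STV as just described (illustrative name), with a simple majority quota and the alphabetical tie-break assumed in the proof of Theorem 4:

```python
def stv_winner(ballots):
    # `ballots` is a list of complete preference lists, most preferred first
    candidates = {c for b in ballots for c in b}
    quota = len(ballots) // 2 + 1
    while True:
        tally = {c: 0 for c in candidates}
        for b in ballots:
            # each ballot counts for its most preferred remaining candidate
            tally[next(c for c in b if c in candidates)] += 1
        leader = max(tally, key=tally.get)
        if tally[leader] >= quota or len(candidates) == 1:
            return leader
        fewest = min(tally.values())
        # among the candidates with fewest votes, eliminate the alphabetically last
        candidates.remove(max(c for c in candidates if tally[c] == fewest))
```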
Theorem 4. It is NP-complete to decide if an agent can manipulate the STV-based lexicographical minimal regret stable marriage procedure.
Proof. We adapt the reduction used to prove that constructive manipulation of the STV rule by a single voter is NP-hard [2]. In our proof, we need to consider how the STV rule treats ties. For example, ties will occur among all men and all women, since we will build a profile where every man and every woman has a different first choice. Thus STV will need to tie-break between all the men (and between all the women). We suppose that in any such tie-break, the candidate that is alphabetically last is eliminated. We also suppose that a man h will try to manipulate the stable marriage procedure by mis-reporting his preferences.
To prove membership in NP, we observe that a manipulation is a polynomial witness. To prove NP-hardness, we give a reduction from 3-COVER. Given a set S with |S| = n and subsets Si with i ∈ [1, m], |Si| = 3 and Si ⊂ S, we ask if there exists an index set I with |I| = n/3 and ∪_{i∈I} Si = S. We will construct a profile of preferences for the men so that the only possibility is for STV to order first one of only two women, w or y. The manipulator h will try to vote strategically so that woman y is ordered first. This will have the consequence that we return the male optimal stable marriage in which the manipulator marries his first choice z1. On the other hand, if w is ordered first, we will return the female optimal stable marriage in which the manipulator is married to his second choice z2.
The following sets of women participate in the problem:
• two possible winners of the first STV election, w and y;
• z1 and z2 who are the first two choices of the manipulator;
• "first losers" in this election, ai and bi for i ∈ [1, m];
• "second line" in this election, ci and di for i ∈ [1, m];
• "e-bloc", ei for i ∈ [0, n];
• "garbage collectors", gi for i ∈ [1, m];
• "dummy women", z i,j,k where i ∈ [1,19] and j and k depend on i as outlined in the description given shortly for the men's preferences (e.g. for i = 1, j = 1 and k ∈ [1, 12m − 1] but for i ∈ [6,8], j ∈ [1, m] and k ∈ [1, 6m + 4j − 6]).
Ignoring the manipulator, the men's preferences will be constructed so that z1, z2 and the dummy women are the first women eliminated by the STV rule, and that ai and bi are 2m out of the next 3m women eliminated. In addition, let I = {i : bi is eliminated before ai}. Then the men's preferences will be constructed so that STV orders woman y first if and only if I is a 3-COVER. The manipulator can ensure bi is eliminated by the STV rule before ai for i ∈ I by placing ai in the (i + 1)th position and bi otherwise. The men's preferences are constructed as follows (where preferences are left unspecified, they can be completed in any order):
• a man n with preference (y, . . .) and ∀k ∈ [1, 12m − 1] a man with (z_{1,1,k}, y, . . .);
• a man p with preference (w, y, . . .) and ∀k ∈ [1, 12m − 2] a man with (z_{2,1,k}, w, y, . . .);
• a man q with preference (e0, w, y, . . .) and ∀k ∈ [1, 10m + 2n/3 − 1] a man with (z_{3,1,k}, e0, w, y, . . .);
• ∀j ∈ [1, n], a man with preference (ej, w, y, . . .) and ∀k ∈ [1, 12m − 3] a man with preference (z_{4,j,k}, ej, w, y, . . .);
• ∀j ∈ [1, m], a man rj with preference (gj, w, y, . . .) and ∀k ∈ [1, 12m − 1] a man with preference (z_{5,j,k}, gj, w, y, . . .);
• ∀j ∈ [1, m], a man with preference (cj, dj, w, y, . . .) and ∀k ∈ [1, 6m + 4j − 6] a man with preference (z_{6,j,k}, cj, dj, w, y, . . .), and for each of the three k s.t. k ∈ Sj, a man with preference (z_{7,j,k}, cj, ek, w, y, . . .), and one with preference (z_{8,j,k}, cj, ek, w, y, . . .);
• ∀j ∈ [1, m], a man with preference (dj, cj, w, y, . . .) and ∀k ∈ [1, 6m + 4j − 2] a man with preference (z_{9,j,k}, dj, cj, w, y, . . .), one with preference (z_{10,j,k}, dj, e0, w, y, . . .), and one with (z_{11,j,k}, dj, e0, w, y, . . .);
• ∀j ∈ [1, m], a man with preference (aj, gj, w, y, . . .) and ∀k ∈ [1, 6m + 4j − 4] a man with preference (z_{12,j,k}, aj, gj, w, y, . . .), one with preference (z_{13,j,k}, aj, cj, w, y, . . .), one with preference (z_{14,j,k}, aj, bj, w, y, . . .), and one with preference (z_{15,j,k}, aj, bj, w, y, . . .);
• ∀j ∈ [1, m], a man with preference (bj, gj, w, y, . . .) and ∀k ∈ [1, 6m + 4j − 4] a man with preference (z_{16,j,k}, bj, gj, w, y, . . .), one with preference (z_{17,j,k}, bj, dj, w, y, . . .), one with preference (z_{18,j,k}, bj, aj, w, y, . . .), and one with preference (z_{19,j,k}, bj, aj, w, y, . . .).
Note that each woman is ranked first by exactly one man. The women's preferences will be set up so that the manipulator h is assured at least that he will marry his second choice, z2, as this will be his female optimal partner. To manipulate the election, the manipulator needs to put z1 first in his preferences and to report the rest of his preferences so that the result returned is the male optimal solution. As all women are ranked first by exactly one man, the male optimal matching marries h with z1.
When we use STV to order the women, z1, z2 and z i,j,k are alphabetically last so are eliminated first by the tie-breaking rule. This leaves the following profile:
• 12m men with preference (y, . . .);
• 12m − 1 men with preference (w, y, . . .);
• 10m + 2n/3 men with preference (e0, w, y, . . .);
• ∀j ∈ [1, n], 12m − 2 men with preference (ej, w, y, . . .);
• ∀j ∈ [1, m], 12m men with preference (gj, w, y, . . .);
• ∀j ∈ [1, m], 6m + 4j − 5 men with preference (cj, dj, w, y, . . .), and for each of the three k such that k ∈ Sj, two men with preference (cj, ek, w, y, . . .);
• ∀j ∈ [1, m], 6m + 4j − 1 men with preference (dj, cj, w, y, . . .), and two men with preference (dj, e0, w, y, . . .);
• ∀j ∈ [1, m], 6m + 4j − 3 men with preference (aj, gj, w, y, . . .), a man with preference (aj, cj, w, y, . . .), and two men with preference (aj, bj, w, y, . . .);
• ∀j ∈ [1, m], 6m + 4j − 3 men with preference (bj, gj, w, y, . . .), a man with preference (bj, dj, w, y, . . .), and two men with preference (bj, aj, w, y, . . .).
At this point, the votes are identical (up to renaming of the men) to the profile constructed in the proof of Theorem 1 in [2]. Using the same argument as there, it follows that the manipulator can ensure that STV orders woman y first instead of w if and only if there is a 3-COVER. The manipulation will place z1 first in h's preferences. As in the proof of Theorem 1 in [2], the manipulation puts woman aj in (j + 1)th place and bj otherwise, where j ∈ J and J is any index set of a 3-COVER.
The women's preferences are as follows:
• the woman y with preference (n, . . .);
• the woman w with preference (q, . . .);
• the woman z1 with preference (p, . . .);
• the woman z2 with preference (h, . . .);
• the women gi with preference (ri, . . .);
• the other women with any preferences which are first-different, and which ensure STV orders r0 first and r1 second overall.
Each man is ranked first by exactly one woman. Hence, the female optimal stable matching gives every woman her first choice. The male score vector of the male optimal stable matching is (1, 1, . . . , 1). Hence, the overall score vector of the male optimal stable matching equals the female score vector of the male optimal stable matching. This is (1, 2, . . .) if the manipulation is successful and (2, 1, . . .) if it is not. Similarly, the overall score vector of the female optimal stable matching equals the male score vector of the female optimal stable matching. This is (1, 3, . . .). Hence the lexicographical minimal regret stable marriage procedure will return the male optimal stable matching iff there is a successful manipulation of the STV rule. Note that the profile used in this proof is not universally manipulable. The first choices of the men are all different and each woman therefore only receives one proposal in the men-proposing Gale-Shapley algorithm. □
We can thus see that the proposed matching procedure is reasonable and appealing. It allows us to discriminate among stable matchings according to the men and women's preferences, and it is difficult to manipulate while ensuring gender neutrality.
RELATED WORK
In [18] fairness of a matching procedure is defined in terms of four axioms, two of which are gender neutrality and peer indifference. Then, the existence of matching procedures satisfying all or a subset of the axioms is considered in terms of restrictions on preference orderings. Here, instead, we propose a pre-processing step that allows us to obtain a gender neutral matching procedure from any matching procedure without imposing any restrictions on the input preferences.
A detailed description of results about manipulation of stable marriage procedures can be found in [14]. In particular, several early results [6,7,9,20] indicated the futility of men lying, so later work focused mostly on strategies in which the women lie. Gale and Sotomayor [10] presented a manipulation strategy in which women truncate their preference lists. Roth and Vande Vate [23] discussed strategic issues when the stable matching is chosen at random, proposed a truncation strategy and showed that every stable matching can be achieved as an equilibrium in truncation strategies. We instead do not allow the elimination of men from a woman's preference ordering, but permit reordering of the preference lists.
Teo et al. [25] suggested lying strategies for an individual woman, and proposed an algorithm to find the best partner with the male optimal procedure. We instead focus on the complexity of determining if the procedure can be manipulated to obtain a better result. Moreover, we also provide a universal manipulation scheme that, under certain conditions on the profile, assures that the female optimal partner is returned.
Coalition manipulation is considered in [14]. Huang shows how a coalition of men can get a better result in the men-proposing Gale-Shapley algorithm. By contrast, we do not consider a coalition but just a single manipulator, and do not consider just the Gale-Shapley algorithm.
CONCLUSIONS
We have studied the manipulability and gender neutrality of stable marriage procedures. We first looked at whether, as with voting rules, computational complexity might be a barrier to manipulation. It was known already that one prominent stable marriage procedure, the Gale-Shapley algorithm, is computationally easy to manipulate. We proved that, under some simple restrictions on agents' preferences, all stable marriage procedures are in fact easy to manipulate. Our proof provides a universal manipulation which an agent can use to improve his result. On the other hand, when preferences are unrestricted, we proved that there exist stable marriage procedures which are NP-hard to manipulate. We also showed how to use a voting rule to choose between stable matchings. In particular, we gave a stable marriage procedure which picks the stable matching that is most preferred by the most popular men and women. This procedure inherits the computational complexity of the underlying voting rule. Thus, when the STV voting rule (which is NP-hard to manipulate) is used to compute the most popular men and women, the corresponding stable marriage procedure is NP-hard to manipulate. Another desirable property of stable marriage procedures is gender neutrality. Our procedure of turning a voting rule into a stable marriage procedure is gender neutral.
This study of stable marriage procedures is only an initial step to understanding if computational complexity might be a barrier to manipulation. Many questions remain to be answered. For example, if preferences are correlated, are stable marriage procedures still computationally hard to manipulate? As a second example, are there stable marriage procedures which are difficult to manipulate on average? There are also many interesting and related questions connected with privacy and mechanism design. For instance, how do we design a decentralised stable marriage procedure which is resistant to manipulation and in which the agents do not share their preference lists? As a second example, how can side payments be used in stable marriage procedures to prevent manipulation?
| 7,679 |
0909.4437
|
2953180797
|
The stable marriage problem is a well-known problem of matching men to women so that no man and woman who are not married to each other both prefer each other. Such a problem has a wide variety of practical applications ranging from matching resident doctors to hospitals to matching students to schools. A well-known algorithm to solve this problem is the Gale-Shapley algorithm, which runs in polynomial time. It has been proven that stable marriage procedures can always be manipulated. Whilst the Gale-Shapley algorithm is computationally easy to manipulate, we prove that there exist stable marriage procedures which are NP-hard to manipulate. We also consider the relationship between voting theory and stable marriage procedures, showing that voting rules which are NP-hard to manipulate can be used to define stable marriage procedures which are themselves NP-hard to manipulate. Finally, we consider the issue that stable marriage procedures like Gale-Shapley favour one gender over the other, and we show how to use voting rules to make any stable marriage procedure gender neutral.
|
Coalition manipulation is considered in @cite_6 . Huang shows how a coalition of men can get a better result in the men-proposing Gale-Shapley algorithm. By contrast, we do not consider a coalition but just a single manipulator, and do not consider just the Gale-Shapley algorithm.
|
{
"abstract": [
"This paper addresses strategies for the stable marriage problem. For the Gale-Shapley algorithm with men proposing, a classical theorem states that it is impossible for every cheating man to get a better partner than the one he gets if everyone is truthful. We study how to circumvent this theorem and incite men to cheat. First we devise coalitions in which a nonempty subset of the liars get better partners and no man is worse off than before. This strategy is limited in that not everyone in the coalition has the incentive to falsify his list. In an attempt to rectify this situation we introduce the element of randomness, but the theorem shows surprising robustness: it is impossible that every liar has a chance to improve the rank of his partner while no one gets hurt. To overcome the problem that some men lack the motivation to lie, we exhibit another randomized lying strategy in which every liar can expect to get a better partner on average, though with a chance of getting a worse one. Finally, we consider a variant scenario: instead of using the Gale-Shapley algorithm, suppose the stable matching is chosen at random. We present a modified form of the coalition strategy ensuring that every man in the coalition has a new probability distribution over partners which majorizes the original one."
],
"cite_N": [
"@cite_6"
],
"mid": [
"1605637365"
]
}
|
Manipulation and gender neutrality in stable marriage procedures
|
The stable marriage problem (SMP) [12] is a well-known problem of matching the elements of two sets. Given n men and n women, where each person expresses a strict ordering over the members of the opposite sex, the problem is to match the men to the women so that there are no two people of opposite sex who would both rather be matched with each other than their current partners. If there are no such people, all the marriages are said to be stable. Gale and Shapley [8] proved that it is always possible to solve the SMP and make all marriages stable, and provided a quadratic time algorithm which can be used to find one of two particular but extreme stable marriages, the so-called male optimal or female optimal solution. The Gale-Shapley algorithm has been used in many real-life applications, such as in systems for matching hospitals to resident doctors [21] and the assignment of primary school students in Singapore to secondary schools [25]. Variants of the stable marriage problem turn up in many domains. For example, the US Navy has a web-based multi-agent system for assigning sailors to ships [17].
One important issue is whether agents have an incentive to tell the truth or can manipulate the result by misreporting their preferences. Unfortunately, Roth [20] has proved that all stable marriage procedures can be manipulated. He demonstrated a stable marriage problem with 3 men and 3 women which can be manipulated whatever stable marriage procedure we use. This result is in some sense analogous to the classical Gibbard-Satterthwaite [11,24] theorem for voting theory, which states that all voting procedures are manipulable under modest assumptions provided we have 3 or more voters. For voting theory, Bartholdi, Tovey and Trick [3] proposed that computational complexity might be an escape: whilst manipulation is always possible, there are voting rules where it is NP-hard to find a manipulation.
We might hope that computational complexity might also be a barrier to manipulating stable marriage procedures. Unfortunately, the Gale-Shapley algorithm is computationally easy to manipulate [25]. We identify here stable marriage procedures that are NP-hard to manipulate. This can be considered a first step to understanding if computational complexity might be a barrier to manipulation. Many questions remain to be answered. For example, the preferences met in practice may be highly correlated. Men may have similar preferences for many of the women. Are such profiles computationally difficult to manipulate? As a second example, it has been recently recognised (see, for example, [4,19]) that worst-case results may represent an insufficient barrier against manipulation since they may only apply to problems that are rare. Are there stable marriage procedures which are difficult to manipulate on average?
Another drawback of many stable marriage procedures such as the one proposed by Gale-Shapley is their bias towards one of the two genders. The stable matching returned by the Gale-Shapley algorithm is either male optimal (and the best possible for every man) but female pessimal (that is, the worst possible for every woman), or female optimal but male pessimal. It is often desirable to use stable marriage procedures that are gender neutral [18]. Such procedures return a stable matching that is not affected by swapping the men with the women. The goal of this paper is to study both the complexity of manipulation and gender neutrality in stable marriage procedures, and to design gender neutral procedures that are difficult to manipulate.
It is known that the Gale-Shapley algorithm is computationally easy to manipulate [25]. Our first contribution is to prove that if the male and female preferences have a certain form, it is computationally easy to manipulate any stable marriage procedure. We provide a universal polynomial time manipulation scheme that, under certain conditions on the preferences, guarantees that the manipulator marries his optimal stable partner irrespective of the stable marriage procedure used. On the other hand, our second contribution is to prove that, when the preferences of the men and women are unrestricted, there exist stable marriage procedures which are NP-hard to manipulate.
Our third contribution is to show that any stable marriage procedure can be made gender neutral by means of a simple pre-processing step which may swap the men with the women. This swap can, for instance, be decided by a voting rule. However, this may give a gender neutral stable matching procedure which is easy to manipulate.
Our final contribution is a stable matching procedure which is both gender neutral and NP-hard to manipulate. This procedure uses a voting rule that, considering the male and female preferences, helps to choose between stable matchings. In fact, it picks the stable matching that is most preferred by the most popular men and women. We prove that, if the voting rule used is Single Transferable Vote (STV) [1], which is NP-hard to manipulate, then the resulting stable matching procedure is both gender neutral and NP-hard to manipulate. We conjecture that other voting rules which are NP-hard to manipulate will give rise to stable matching procedures which are also gender neutral and NP-hard to manipulate. Thus, our approach shows how combining voting rules and stable matching procedures can be beneficial in two ways: by using preferences to discriminate among stable matchings and by providing a possible computational shield against manipulation.
The Gale-Shapley algorithm
The Gale-Shapley algorithm [8] is a well-known algorithm to solve the SMP. It involves a number of rounds where each un-engaged man "proposes" to his most-preferred woman to whom he has not yet proposed. Each woman then considers all her suitors and tells the one she most prefers "maybe" and all the rest of them "No". She is then provisionally "engaged". In each subsequent round, each un-engaged man proposes to one woman to whom he has not yet proposed (the woman may or may not already be engaged), and the women once again reply with one "maybe" and reject the rest. This may mean that already-engaged women can "trade up", and already-engaged men can be "jilted".
This algorithm needs a number of steps that is quadratic in n, and it guarantees that:
• If the number of men and women coincide, and all participants express a linear order over all the members of the other group, everyone gets married. Once a woman becomes engaged, she is always engaged to someone. So, at the end, there cannot be a man and a woman both un-engaged, as he must have proposed to her at some point (since a man will eventually propose to every woman, if necessary) and, being un-engaged, she would have to have said yes.
• The marriages are stable. Let Alice be a woman and Bob be a man. Suppose they are each married, but not to each other. Upon completion of the algorithm, it is not possible for both Alice and Bob to prefer each other over their current partners. If Bob prefers Alice to his current partner, he must have proposed to Alice before he proposed to his current partner. If Alice accepted his proposal, yet is not married to him at the end, she must have dumped him for someone she likes more, and therefore doesn't like Bob more than her current partner. If Alice rejected his proposal, she was already with someone she liked more than Bob.
Note that the pairing generated by the Gale-Shapley algorithm is male optimal, i.e., every man is paired with his highest ranked feasible partner, and female pessimal, i.e., each woman is paired with her lowest ranked feasible partner. It would be the reverse, of course, if the roles of male and female participants in the algorithm were interchanged. Given n men and n women, a profile is a sequence of 2n strict total orders, n over the men and n over the women. In a profile, every woman ranks all the men, and every man ranks all the women.
Example 1. Assume n = 3. Let W = {w1, w2, w3} and M = {m1, m2, m3} be respectively the set of women and men. The following sequence of strict total orders defines a profile:
• m1 : w1 > w2 > w3 (i.e., the man m1 prefers the woman w1 to w2 to w3),
• m2 : w2 > w1 > w3,
• m3 : w3 > w2 > w1,
• w1 : m1 > m2 > m3,
• w2 : m3 > m1 > m2,
• w3 : m2 > m1 > m3
For this profile, the Gale-Shapley algorithm returns the male optimal solution {(m1, w1), (m2, w2), (m3, w3)}. On the other hand, the female optimal solution is {(w1, m1), (w2, m3), (w3, m2)}.
Gender neutrality and non-manipulability
A desirable property of a stable marriage procedure is gender neutrality. A stable marriage procedure is gender neutral [18] if and only if when we swap the men with the women, we get the same result. A related property, called peer indifference [18], holds if the result is not affected by the order in which the members of the same sex are considered. The Gale-Shapley procedure is peer indifferent but it is not gender neutral. In fact, if we swap men and women in Example 1, we obtain the female optimal solution rather than the male optimal one.
Another useful property of a stable marriage procedure is its resistance to manipulation. In fact, it would be desirable that lying does not lead to a better result for the liar. A stable marriage procedure is manipulable if there is a way for one person to mis-report their preferences and obtain a result which is better than the one they would have obtained with their true preferences.
Roth [20] has proven that stable marriage procedures can always be manipulated, i.e., that no stable marriage procedure exists which always yields a stable outcome and gives agents the incentive to reveal their true preferences. He demonstrated a profile with 3 men and 3 women which can be manipulated whatever stable marriage procedure we use. A similar result in a different context is the one by Gibbard and Satterthwaite [11,24], which proves that all voting procedures [1] are manipulable under some modest assumptions. In this context, Bartholdi, Tovey and Trick [3] proposed that computational complexity might be an escape: whilst manipulation is always possible, there are rules like Single Transferable Vote (STV) where it is NP-hard to find a manipulation [2]. This resistance to manipulation arises from the difficulty of inverting the voting rule and does not depend on other assumptions like the difficulty of discovering the preferences of the other voters. In this paper, we study whether computational complexity may also be an escape from the manipulability of stable marriage procedures. Our results are only initial steps towards a more complete understanding of the computational complexity of manipulating stable matching procedures. As mentioned before, NP-hardness results only address the worst case and may not apply to preferences met in practice.
MANIPULATING STABLE MARRIAGE PROCEDURES
A manipulation attempt by a participant p is the misreporting of p's preferences. A manipulation attempt is unsuccessful if the resulting marriage for p is strictly worse than the marriage obtained telling the truth. Otherwise, it is said to be successful. A stable marriage procedure is manipulable if there is a profile with a successful manipulation attempt from a participant.
The Gale-Shapley procedure, which depending on how it is defined returns either the male optimal or the female optimal solutions, is computationally easy to manipulate [25]. However, besides these two extreme solutions, there may be many other stable matchings. Several procedures have been defined to return some of these other stable matchings [13]. Our first contribution is to show that, under certain conditions on the shape of the male and female preferences, any stable marriage procedure is computationally easy to manipulate.
Consider a profile p and a woman w in such a profile. Let m be the male optimal partner for w in p, and n be the female optimal partner for w in p. Profile p is said to be universally manipulable by w if the following conditions hold:
• in the men-proposing Gale-Shapley algorithm, w receives more than one proposal;
• there exists a woman v such that n is the male optimal partner for v in p;
• v prefers m to n;
• n's preferences are . . . > v > w > . . .;
• m's preferences are . . . > w > v > . . ..

Theorem 1. Consider any stable marriage procedure and any woman w. There is a polynomial manipulation scheme that, for any profile which is universally manipulable by w, produces the female optimal partner for w. Otherwise, it produces the same partner.
Proof. Consider the manipulation attempt that moves the male optimal partner m of w to the lower end of w's preference ordering, obtaining the new profile p ′. Consider now the behaviour of the men-proposing Gale-Shapley algorithm on p and p ′. Two cases are possible for p: w is proposed to only by man m, or she is also proposed to by some other man o. In this second case, it must be that w prefers m to o, since m is the male optimal partner for w.
If w is proposed to by m and also by some o, then, when w compares the two proposals, in p she will decide for m, while in p ′ she will decide for o. At this point, in p ′ , m will have to propose to the next best woman for him, that is, v, and she will accept because of the assumptions on her preference ordering. This means that n (who was married to v in p) now in p ′ has to propose to his next best choice, that is, w, who will accept, since w prefers n to m. So, in p ′ , the male optimal partner for w, as well as her female optimal partner, is n. This means that there is only one stable partner for w in p ′ . Therefore, any stable marriage procedure must return n as the partner for w.
Thus, if woman w wants to manipulate a stable marriage procedure, she can check if the profile is universally manipulable by her. This involves simulating the Gale-Shapley algorithm to see whether she is proposed to by m only or also by some other man. In the former case, she will not do the manipulation. Otherwise, she will move m to the far right of her preference ordering and she will get her female optimal partner, whatever stable marriage procedure is used. This procedure is polynomial since the Gale-Shapley algorithm takes quadratic time to run. □

Example 2. In a setting with 3 men and 3 women, consider the profile {m1 : w1 > w2 > w3; m2 : w2 > w1 > w3; m3 : w1 > w2 > w3; } {w1 : m2 > m1 > m3; w2 : m1 > m2 > m3; w3 : m1 > m2 > m3; } In this profile, the male optimal solution is {(m1, w1), (m2, w2), (m3, w3)}. This profile is universally manipulable by w1. In fact, woman w1 can successfully manipulate by moving m1 after m3, and obtaining the marriage (m2, w1), thus getting her female optimal partner. Notice that this holds no matter what stable marriage procedure is used. This same profile is not universally manipulable by w2 or w3, since they receive just one proposal in the men-proposing Gale-Shapley algorithm. In fact, woman w2 cannot manipulate: trying to move m2 after m3 gets a worse result. Also, woman w3 cannot manipulate since her male optimal partner is her least preferred man.
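The manipulation of Example 2 by woman w1 can be checked directly with a small sketch. It reuses the same illustrative gale_shapley helper as above, here extended to count how many proposals each woman receives; all names and the data layout are ours.

```python
def gale_shapley(men_prefs, women_prefs):
    """Men-proposing Gale-Shapley; also counts how many proposals each woman gets."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_idx = {m: 0 for m in men_prefs}
    engaged, free_men = {}, list(men_prefs)
    proposals = {w: 0 for w in women_prefs}
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_idx[m]]
        next_idx[m] += 1
        proposals[w] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free_men.append(engaged[w])
            engaged[w] = m
        else:
            free_men.append(m)
    return engaged, proposals                   # woman -> man, and proposal counts

# Example 2 from the text.
men = {'m1': ['w1', 'w2', 'w3'], 'm2': ['w2', 'w1', 'w3'], 'm3': ['w1', 'w2', 'w3']}
women = {'w1': ['m2', 'm1', 'm3'], 'w2': ['m1', 'm2', 'm3'], 'w3': ['m1', 'm2', 'm3']}

match, props = gale_shapley(men, women)
print(match['w1'], props['w1'])                 # m1 with 2 proposals: w1 can manipulate

women['w1'] = ['m2', 'm3', 'm1']                # w1 moves m1 to the far right
print(gale_shapley(men, women)[0]['w1'])        # m2: w1 now gets her female optimal partner
```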
Restricting to universally manipulable profiles makes manipulation computationally easy. On the other hand, if we allow all possible profiles, there are stable marriage procedures that are NP-hard to manipulate. The intuition is simple. We construct a stable marriage procedure that is computationally easy to compute but NP-hard to invert.
To manipulate, a man or a woman will essentially need to be able to invert the procedure to choose between the exponential number of possible preference orderings. Hence, the constructed stable marriage procedure will be NP-hard to manipulate. The stable marriage procedure used in this proof is somewhat "artificial". However, we will later propose a stable marriage procedure which is more natural while remaining NP-hard to manipulate. This procedure selects the stable matching that is most preferred by the most popular men and women. It is an interesting open question to devise other stable marriage procedures which are "natural" and computationally difficult to manipulate.
Theorem 2. There exist stable marriage procedures for which deciding the existence of a successful manipulation is NP-complete.
Proof. We construct a stable marriage procedure which chooses between the male and female optimal solution based on whether the profile encodes an NP-complete problem and its polynomial witness. The manipulator's preferences define the witness. The other people's preferences define the NP-complete problem. Hence, the manipulator needs to be able to solve an NP-complete problem to be able to manipulate successfully. Deciding if there is a successful manipulation for this stable marriage procedure is clearly in NP since we can compute male and female optimal solutions in polynomial time, and we can check a witness to an NP-complete problem also in polynomial time.
Our stable marriage procedure is defined to work on n + 3 men (m1, m2 and p1 to pn+1) and n + 3 women (w1, w2 and v1 to vn+1). It returns the female optimal solution if the preferences of woman w1 encode a Hamiltonian path in a directed graph encoded by the other women's preferences, otherwise it returns the male optimal solution. The 3rd to (n + 2)th preferences of woman w1 encode a possible Hamiltonian path in an n node graph. In particular, if the (2 + i)th man in the preference ordering of woman w1 for i > 0 is man pj, then the path goes from vertex i to vertex j. The preferences of the women vi for i ≤ n encode the graph in which we find this Hamiltonian path. In particular, if man pj for j < n + 1 and j ≠ i appears before man pn+1 in the preference list of woman vi, then there is a directed edge in the graph from i to j. It should be noticed that any graph can be produced using this construction.
Given a graph which is not complete in which we wish to find a Hamiltonian path, we now build a special profile. Woman w1 will be able to manipulate this profile successfully iff the graph contains a Hamiltonian path. In the profile, woman w1 most prefers to marry man m1 and then man m2. Consider any pair of vertices (i, j) not in the graph. Woman w1 puts man pj at position 2 + i in her preference order. She puts all other pj's in any arbitrary order. This construction will guarantee that the preferences of w1 do not represent a Hamiltonian path. Woman w2 most prefers to marry man m2. Woman vi most prefers to marry man pi, and has preferences for the other men pj according to the edges from vertex i. Man m1 most prefers woman w2. Man m2 most prefers woman w1. Finally, man pi most prefers woman vi. All other unspecified preferences can be chosen in any way. By construction, all first choices are different. Hence, the male optimal solution has the men married to their first choice, whilst the female optimal solution has the women married to their first choice.
The male optimal solution has woman w1 married to man m2. The female optimal solution has woman w1 married to man m1. By construction, the preferences of woman w1 do not represent a Hamiltonian path. Hence our stable matching procedure returns the male optimal solution: woman w1 married to man m2. The only successful manipulation then for woman w1 is if she can marry her most preferred choice, man m1. As all first choices are different, woman w1 cannot successfully manipulate the male or female optimal solution. Therefore, she must manipulate her preferences so that she spells out a Hamiltonian path in her preference ordering, and our stable marriage procedure therefore returns the female optimal solution. This means she can successfully manipulate iff there is a Hamiltonian path. Hence, deciding if there is a successful manipulation is NP-complete. □
Note that we can modify the proof by introducing O(n^2) men so that the graph is encoded in the tail of the preferences of woman w2. This means that it remains NP-hard to manipulate a stable marriage procedure even if we collude with all but one of the women. It also means that it is NP-hard to manipulate a stable marriage procedure when the problem is imbalanced and there are just 2 women but an arbitrary number of men. Notice that this procedure is not peer indifferent, since it gives special roles to different men and women. However, it is possible to make it peer indifferent, so that it computes the same result if we rename the men and women. For instance, we just take the men's preferences and compute from them a total ordering of the women (e.g. by running an election with these preferences). Similarly, we take the women's preferences and compute from them a total ordering of the men. We can then use these orderings to assign indices to men and women. Notice also that this procedure is not gender neutral. If we swap men and women, we may get a different result. We can, however, use the simple procedure proposed in the next section to make it gender neutral.
GENDER NEUTRALITY
As mentioned before, a weakness of many stable marriage procedures like the Gale-Shapley procedure and the procedure presented in the previous section, is that they are not gender neutral. They may greatly favour one sex over the other. We now present a simple and universal technique for taking any stable marriage procedure and making it gender neutral. We will assume that the men and the women are named from 1 to n. We will also say that the men's preferences are isomorphic to the women's preferences iff there is a bijection between the men and women that preserves both the men's and women's preferences. In this case, it is easy to see that there is only one stable matching.
We can convert any stable marriage procedure into one that is gender neutral by adding a pre-round in which we choose if we swap the men with the women. The idea of using pre-rounds for enforcing certain properties is not new and has been used for example in [5] to make manipulation of voting rules NP-hard. The goal of our pre-round is, instead, to ensure gender-neutrality. More precisely, for each gender we compute its signature: a vector of numbers constructed by concatenating together each of the individual preference lists. Among all such vectors, the signature is the lexicographically smallest vector under reordering of the members of the chosen gender and renumbering of the members of the other gender.
Example 3. Consider the following profile with 3 men and 3 women. {m1 : w2 > w1 > w3; m2 : w3 > w2 > w1; m3 : w2 > w1 > w3} {w1 : m1 > m2 > m3; w2 : m3 > m1 > m2; w3 : m2 > m1 > m3}. The signature of the men is 123123312: each group of three digits represents the preference ordering of a man; men m2 and m3 and women w1 and w2 have been swapped with each other to obtain the lexicographically smallest vector. The signature of the women is instead 123213312.
Note that this vector can be computed in O(n 2 ) time. For each man, we put his preference list first, then reorder the women so that this man's preference list reads 1 to n. Finally, we concatenate the other men's preference lists in lexicographical order. We define the signature as the smallest such vector.
Before applying any stable marriage procedure, we propose to pre-process the profile according to the following rule, that we will call gn-rule (for gender neutral): If the male signature is smaller than the female signature, then we swap the men with the women before calling the stable marriage procedure. On the other hand, if the male signature is equal or greater than the female signature, we will not swap the men with the women before calling the stable marriage procedure. In the example above, the male signature is smaller than the female signature, thus men and women must be swapped before using the stable marriage procedure.
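The signature computation and the gn-rule can be sketched as follows, with our own encoding of profiles as preference dictionaries; the sketch reproduces the two signatures of Example 3.

```python
def signature(prefs):
    """prefs: dict person -> preference list over the other gender.
    Returns the lexicographically smallest vector obtained by putting one member's
    list first, renumbering the other gender so that list reads 1..n, and appending
    the remaining members' renumbered lists in lexicographic order."""
    best = None
    for p, plist in prefs.items():
        renumber = {x: i + 1 for i, x in enumerate(plist)}
        others = sorted(tuple(renumber[x] for x in q) for r, q in prefs.items() if r != p)
        vec = tuple(range(1, len(plist) + 1)) + tuple(v for o in others for v in o)
        best = vec if best is None or vec < best else best
    return best

def gn_rule(men_prefs, women_prefs):
    """Swap the genders iff the male signature is smaller than the female one."""
    if signature(men_prefs) < signature(women_prefs):
        return women_prefs, men_prefs            # genders exchanged before calling mu
    return men_prefs, women_prefs

# Example 3 from the text.
men = {'m1': ['w2', 'w1', 'w3'], 'm2': ['w3', 'w2', 'w1'], 'm3': ['w2', 'w1', 'w3']}
women = {'w1': ['m1', 'm2', 'm3'], 'w2': ['m3', 'm1', 'm2'], 'w3': ['m2', 'm1', 'm3']}
print(signature(men))    # (1, 2, 3, 1, 2, 3, 3, 1, 2), i.e. 123123312
print(signature(women))  # (1, 2, 3, 2, 1, 3, 3, 1, 2), i.e. 123213312
```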
Theorem 3. Consider any stable marriage procedure, say µ. Given a profile p, consider the new procedure µ ′ obtained by applying µ to gn-rule(p). This new procedure returns a stable marriage and it is gender neutral. Moreover, if µ is peer indifferent, then µ ′ is peer indifferent as well.
Proof. To prove gender neutrality, we consider three cases:
• If the male signature is smaller than the female signature, the gn-rule swaps the men with the women. Thus we would apply µ to swapped genders.
To prove that the new procedure is gender neutral, we must prove that, if we swap the men with the women, the result is the same. If we do this swap, their signatures will be swapped. Thus the male signature will now be larger than the female signature, and therefore the gn-rule will not swap men and women. Thus procedure µ will be applied to swapped genders.
• If the male signature is larger than the female signature, the gn-rule leaves the profile as it is. Thus µ is applied to profile p.
If we swap the genders, the male signature will now be smaller than the female signature, and therefore the gn-rule will perform the swap. Thus procedure µ will be applied to the original profile p.
• If the male and female signatures are identical, the men and women's preferences are isomorphic and there is only one stable matching. Any stable marriage procedure must therefore return this matching, and hence it is gender neutral.
As for peer indifference, if we start from a profile obtained by reordering men or women, the signatures will be the same and thus the gn-rule will behave in the same way (either swapping or not). Thus the result of applying the whole procedure to the reordered profile will be the same as the one obtained by using the given profile. □
If we are not concerned about preserving peer indifference, or if we start from a non-peer indifferent matching procedure, we can use a much simpler version of the gn-rule, where the signatures are obtained directly from the profile without considering any reordering/renaming of men or women. This simpler approach is still sufficient to guarantee gender neutrality, but might produce a procedure which is not peer indifferent.
VOTING RULES AND STABLE MARRIAGE PROCEDURES
We will now see how we can exploit results about voting rules to build stable marriage procedures which are both gender neutral and difficult to manipulate.
A score-based matching procedure: gender neutral but easy to manipulate
Given a profile, consider a set of its stable matchings. For simplicity, consider the set containing only the male and female optimal stable matchings. However, there is no reason why we could not consider a larger polynomial size set. For example, we might consider all stable matchings found on a path through the stable marriage lattice [16] between the male and female optimal, or we may simply run twice any procedure computing a set of stable marriages, swapping genders the second time. We can now use the men and women's preferences to rank stable matchings in the considered set. For example, as in [15], we can score a matching as the sum of the men's ranks of their partners and of the women's ranks of their partners.
We then choose between the stable matchings in our given set according to which has the smallest score. Since our set contains only the male and the female optimal matches, we choose between the male and female optimal stable matchings according to which has the lowest score. If the male optimal and the female optimal stable matching have the same score, we use the signature of men and women, as defined in the previous section, to tie-break. It is possible to show that the resulting matching procedure, which returns the male optimal or the female optimal stable matching according to the scoring rule (or, if they have the same score, according to the signature) is gender neutral.
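A minimal sketch of this sum-of-ranks score, with our own data layout (matchings as man-to-woman dictionaries); the signature-based tie-break is only indicated in a comment.

```python
def rank_score(matching, men_prefs, women_prefs):
    """matching: dict man -> woman; score = sum of everyone's rank of their partner."""
    return sum(men_prefs[m].index(w) + women_prefs[w].index(m)
               for m, w in matching.items())

def score_based_choice(candidates, men_prefs, women_prefs):
    """Pick the candidate stable matching with the smallest score; a tie would be
    broken with the gender-neutral signature rule described above."""
    return min(candidates, key=lambda mu: rank_score(mu, men_prefs, women_prefs))
```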
Unfortunately, this procedure is easy to manipulate. For a man, it is sufficient to place his male optimal partner in first place in his preference list, and his female optimal partner in last place. If this manipulation does not give the man his male optimal partner, then there is no manipulation that will. A woman manipulates the result in a symmetric way.
Lexicographical minimal regret
Let us now consider a more complex score-based matching procedure to choose between two (or more) stable matchings which will be computationally difficult to manipulate. The intuition behind the procedure is to choose between stable matchings according to the preferences of the most preferred men or women. In particular, we will pick the stable matching that is most preferred by the most popular men and women. Given a voting rule, we order the men using the women's preferences and order the women using the men's preferences. We then construct a male score vector for a matching using this ordering of the men (where a more preferred man is before a less preferred one). The ith element of the male score vector is the integer j iff the ith man in this order is married to his jth most preferred woman. A large male score vector is a measure of dissatisfaction with the matching from the perspective of the more preferred men. A female score vector is computed in an analogous manner.
The overall score for a matching is the lexicographically largest of its male and female score vectors. A large overall score corresponds to dissatisfaction with the matching from the perspective of the more preferred men or women. We then choose the stable matching from our given set which has the lexicographically least overall score. That is, we choose the stable matching which carries less regret for the more preferred men and women.
In the event of a tie, we can use any gender neutral tiebreaking procedure, such as the one based on signatures described above. Let us call this procedure the lexicographical minimal regret stable marriage procedure. In particular, when voting rule v is used to order the men and women we will call it a v-based lexicographical minimal regret stable marriage procedure. It is easy to see that this procedure is gender neutral. In addition, it is computationally hard to manipulate. Here we consider using STV [1] to order the men and women. However, we conjecture that similar results will hold for stable matching procedures which are derived from other voting rules which are NP-hard to manipulate.
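The following sketch spells out the lexicographical minimal regret rule under our own encoding; the popularity orders of the men and of the women are assumed to be supplied by the chosen voting rule, and ties are again left to a gender neutral tie-breaking step.

```python
def score_vectors(matching, men_order, women_order, men_prefs, women_prefs):
    """The i-th entry is the rank (1-based) that the i-th most popular man/woman
    assigns to his/her partner in the matching (dict man -> woman)."""
    inv = {w: m for m, w in matching.items()}
    male = tuple(men_prefs[m].index(matching[m]) + 1 for m in men_order)
    female = tuple(women_prefs[w].index(inv[w]) + 1 for w in women_order)
    return male, female

def overall_score(matching, men_order, women_order, men_prefs, women_prefs):
    male, female = score_vectors(matching, men_order, women_order,
                                 men_prefs, women_prefs)
    return max(male, female)        # lexicographically larger vector = the regret

def lex_min_regret(candidates, men_order, women_order, men_prefs, women_prefs):
    """Return the candidate stable matching with lexicographically least regret;
    ties would be broken by a gender-neutral rule such as the signature above."""
    return min(candidates, key=lambda mu: overall_score(mu, men_order, women_order,
                                                        men_prefs, women_prefs))
```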
In the STV rule each voter provides a total order on candidates and, initially, an individual's vote is allocated to his most preferred candidate. The quota of the election is the minimum number of votes necessary to get elected. If no candidate exceeds the quota, then, the candidate with the fewest votes is eliminated, and his votes are equally distributed among the second choices of the voters who had selected him as first choice. This step is repeated until some candidate exceeds the quota. In the following theorem we assume a quota of at least half of the number of voters.
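A minimal single-winner STV sketch matching this description, with a majority quota and the alphabetical tie-break used in the proof below; the ballots and candidate names are our own toy data.

```python
def stv_winner(ballots):
    """ballots: lists of candidates from most to least preferred (complete orders)."""
    candidates = set(ballots[0])
    quota = len(ballots) // 2 + 1                      # more than half of the voters
    while True:
        tally = {c: 0 for c in candidates}
        for b in ballots:
            top = next(c for c in b if c in candidates)    # highest remaining choice
            tally[top] += 1
        leader = max(tally, key=tally.get)
        if tally[leader] >= quota or len(candidates) == 1:
            return leader
        fewest = min(tally.values())
        loser = max(c for c in candidates if tally[c] == fewest)  # alphabetically last
        candidates.remove(loser)

ballots = [['a', 'b', 'c'], ['b', 'a', 'c'], ['c', 'b', 'a'],
           ['b', 'c', 'a'], ['a', 'c', 'b']]
print(stv_winner(ballots))   # 'b' after 'c' is eliminated and its vote transfers
```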
Theorem 4. It is NP-complete to decide if an agent can manipulate the STV-based lexicographical minimal regret stable marriage procedure.
Proof. We adapt the reduction used to prove that constructive manipulation of the STV rule by a single voter is NP-hard [2]. In our proof, we need to consider how the STV rule treats ties. For example, ties will occur among all men and all women, since we will build a profile where every man and every woman have different first choice. Thus STV will need to tie break between all the men (and between all the women). We suppose that in any such tie break, the candidate alphabetically last is eliminated. We also suppose that a man h will try to manipulate the stable marriage procedure by mis-reporting his preferences.
To prove membership in NP, we observe that a manipulation is a polynomial witness. To prove NP-hardness, we give a reduction from 3-COVER. Given a set S with |S| = n and subsets Si ⊂ S for i ∈ [1, m] with |Si| = 3, we ask if there exists an index set I with |I| = n/3 such that ∪i∈I Si = S. We will construct a profile of preferences for the men so that the only possibility is for STV to order first one of only two women, w or y. The manipulator h will try to vote strategically so that woman y is ordered first. This will have the consequence that we return the male optimal stable marriage in which the manipulator marries his first choice z1. On the other hand, if w is ordered first, we will return the female optimal stable marriage in which the manipulator is married to his second choice z2.
The following sets of women participate in the problem:
• two possible winners of the first STV election, w and y;
• z1 and z2 who are the first two choices of the manipulator;
• "first losers" in this election, ai and bi for i ∈ [1, m];
• "second line" in this election, ci and di for i ∈ [1, m];
• "e-bloc", ei for i ∈ [0, n];
• "garbage collectors", gi for i ∈ [1, m];
• "dummy women", z i,j,k where i ∈ [1,19] and j and k depend on i as outlined in the description given shortly for the men's preferences (e.g. for i = 1, j = 1 and k ∈ [1, 12m − 1] but for i ∈ [6,8], j ∈ [1, m] and k ∈ [1, 6m + 4j − 6]).
Ignoring the manipulator, the men's preferences will be constructed so that z1, z2 and the dummy women are the first women eliminated by the STV rule, and that ai and bi are 2m out of the next 3m women eliminated. In addition, let I = {i : bi is eliminated before ai}. Then the men's preferences will be constructed so that STV orders woman y first if and only if I is a 3-COVER. The manipulator can ensure bi is eliminated by the STV rule before ai for i ∈ I by placing ai in the (i + 1)th position and bi otherwise. The men's preferences are constructed as follows (where preferences are left unspecified, they can be completed in any order):
• a man n with preference (y, . . .) and ∀k ∈ [1, 12m − 1] a man with (z 1,1,k , y, . . .);
• a man p with preference (w, y, . . .) and ∀k ∈ [1, 12m−2] a man with (z 2,1,k , w, y, . . .);
• a man q with preference (e0, w, y, . . .) and ∀k ∈ [1, 10m+ 2n/3 − 1] a man with (z 3,1,k , e0, w, y, . . .);
• ∀j ∈ [1, n], a man with preference (ej, w, y, . . .) and ∀k ∈ [1, 12m−3] a man with preference (z 4,j,k , ej, w, y, . . .);
• ∀j ∈ [1, m], a man rj with preference (gj, w, y, . . .) and ∀k ∈ [1, 12m − 1] a man with preference (z 5,j,k , gj, w, y, . . .);
• ∀j ∈ [1, m], a man with preference (cj , dj, w, y, . . .) and ∀k ∈ [1, 6m+4j−6] a man with preference (z 6,j,k , cj , dj, w, y, . . .), and for each of the three k s.t. k ∈ Sj, a man with preference (z 7,j,k , cj , e k , w, y, . . .), and one with preference (z 8,j,k , cj , e k , w, y, . . .);
• ∀j ∈ [1, m], a man with preference (dj, cj , w, y, . . .) and ∀k ∈ [1, 6m+4j−2] a man with preference (z 9,j,k , dj, cj , w, y, . . .), one with preference (z 10,j,k , dj, e0, w, y, . . .), and one with (z 11,j,k , dj, e0, w, y, . . .);
• ∀j ∈ [1, m], a man with preference (aj, gj, w, y, . . .) and ∀k ∈ [1, 6m+4j−4] a man with preference (z 12,j,k , aj, gj, w, y, . . .), one with preference (z 13,j,k , aj, cj , w, y, . . .), one with preference (z 14,j,k , aj, bj, w, y, . . .), and one with preference (z 15,j,k , aj , bj, w, y, . . .).
• ∀j ∈ [1, m], a man with preference (bj , gj, w, y, . . .) and ∀k ∈ [1, 6m+4j−4] a man with preference (z 16,j,k , bj, gj, w, y, . . .), one with preference (z 17,j,k , bj, dj, w, y, . . .), one with preference (z 18,j,k , bj, aj, w, y, . . .), and one with preference (z 19,j,k , bj , aj, w, y, . . .).
Note that each woman is ranked first by exactly one man. The women's preferences will be set up so that the manipulator h is assured at least that he will marry his second choice, z2, as this will be his female optimal partner. To manipulate the election, the manipulator needs to put z1 first in his preferences and to report the rest of his preferences so that the result returned is the male optimal solution. As all women are ranked first by exactly one man, the male optimal matching marries h with z1.
When we use STV to order the women, z1, z2 and z i,j,k are alphabetically last so are eliminated first by the tie-breaking rule. This leaves the following profile:
• 12m men with preference (y, . . .);
• 12m − 1 men with preference (w, y, . . .);
• 10m + 2n/3 men with preference (e0, w, y, . . .);
• ∀j ∈ [1, n], 12m − 2 men with preference (ej, w, y, . . .);
• ∀j ∈ [1, m], 12m men with preference (gj, w, y, . . .);
• ∀j ∈ [1, m], 6m+4j−5 men with preference (cj, dj , w, y, . . .), and for each of the three k such that k ∈ Sj , two men with preference (cj , e k , w, y, . . .);
• ∀j ∈ [1, m], 6m+4j−1 men with preference (dj, cj , w, y, . . .), and two men with preference (dj, e0, w, y, . . .),
• ∀j ∈ [1, m], 6m+4j−3 men with preference (aj, gj, w, y, . . .), a man with preference (aj, cj , w, y, . . .), and two men with preference (aj, bj , w, y, . . .);
• ∀j ∈ [1, m], 6m+4j−3 men with preference (bj, gj, w, y, . . .) a man with preference (bj, dj , w, y, . . .), and two men with preference (bj, aj , w, y, . . .).
At this point, the votes are identical (up to renaming of the men) to the profile constructed in the proof of Theorem 1 in [2]. Using the same argument as there, it follows that the manipulator can ensure that STV orders woman y first instead of w if and only if there is a 3-COVER. The manipulation will place z1 first in h's preferences. Similar to the proof of Theorem 1 in [2], the manipulation puts woman aj in j + 1th place and bj otherwise where j ∈ J and J is any index set of a 3-COVER.
The women's preferences are as follows:
• the woman y with preference (n, . . .);
• the woman w with preference (q, . . .);
• the woman z1 with preference (p, . . .);
• the woman z2 with preference (h, . . .);
• the women gi with preference (ri, . . .);
• the other women with any preferences which are first different, and which ensure STV orders r0 first and r1 second overall.
Each man is ranked first by exactly one woman. Hence, the female optimal stable matching is the first choice of the women. The male score vector of the male optimal stable matching is (1, 1, . . . , 1). Hence, the overall score vector of the male optimal stable matching equals the female score vector of the male optimal stable matching. This is (1, 2, . . .) if the manipulation is successful and (2, 1, . . .) if it is not. Similarly, the overall score vector of the female optimal stable matching equals the male score vector of the female optimal stable matching. This is (1, 3, . . .). Hence the lexicographical minimal regret stable marriage procedure will return the male optimal stable matching iff there is a successful manipulation of the STV rule. Note that the profile used in this proof is not universally manipulable. The first choices of the men are all different and each woman therefore only receives one proposal in the men-proposing Gale-Shapley algorithm. □

We can thus see how the proposed matching procedure is reasonable and appealing. In fact, it allows us to discriminate among stable matchings according to the men's and women's preferences, and it is difficult to manipulate while ensuring gender neutrality.
RELATED WORK
In [18], fairness of a matching procedure is defined in terms of four axioms, two of which are gender neutrality and peer indifference. Then, the existence of matching procedures satisfying all or a subset of the axioms is considered in terms of restrictions on preference orderings. Here, instead, we propose a preprocessing step that allows us to obtain a gender neutral matching procedure from any matching procedure without imposing any restrictions on the preferences in the input.
A detailed description of results about manipulation of stable marriage procedures can be found in [14]. In particular, several early results [6,7,9,20] indicated the futility of men lying, focusing later work mostly on strategies in which the women lie. Gale and Sotomayor [10] presented the manipulation strategy in which women truncate their preference lists. Roth and Vate [23] discussed strategic issues when the stable matching is chosen at random, proposed a truncation strategy and showed that every stable matching can be achieved as an equilibrium in truncation strategies. We instead do not allow the elimination of men from a woman's preference ordering, but permit reordering of the preference lists.
Teo et al. [25] suggested lying strategies for an individual woman, and proposed an algorithm to find the best partner with the male optimal procedure. We instead focus on the complexity of determining if the procedure can be manipulated to obtain a better result. Moreover, we also provide a universal manipulation scheme that, under certain conditions on the profile, assures that the female optimal partner is returned.
Coalition manipulation is considered in [14]. Huang shows how a coalition of men can get a better result in the men-proposing Gale-Shapley algorithm. By contrast, we do not consider a coalition but just a single manipulator, and do not consider just the Gale-Shapley algorithm.
CONCLUSIONS
We have studied the manipulability and gender neutrality of stable marriage procedures. We first looked at whether, as with voting rules, computational complexity might be a barrier to manipulation. It was known already that one prominent stable marriage procedure, the Gale-Shapley algorithm, is computationally easy to manipulate. We proved that, under some simple restrictions on agents' preferences, all stable marriage procedures are in fact easy to manipulate. Our proof provides a universal manipulation which an agent can use to improve his result. On the other hand, when preferences are unrestricted, we proved that there exist stable marriage procedures which are NP-hard to manipulate. We also showed how to use a voting rule to choose between stable matchings. In particular, we gave a stable marriage procedure which picks the stable matching that is most preferred by the most popular men and women. This procedure inherits the computational complexity of the underlying voting rule. Thus, when the STV voting rule (which is NP-hard to manipulate) is used to compute the most popular men and women, the corresponding stable marriage procedure is NP-hard to manipulate. Another desirable property of stable marriage procedures is gender neutrality. Our procedure of turning a voting rule into a stable marriage procedure is gender neutral.
This study of stable marriage procedures is only an initial step to understanding if computational complexity might be a barrier to manipulation. Many questions remain to be answered. For example, if preferences are correlated, are stable marriage procedures still computationally hard to manipulate? As a second example, are there stable marriage procedures which are difficult to manipulate on average? There are also many interesting and related questions connected with privacy and mechanism design. For instance, how do we design a decentralised stable marriage procedure which is resistant to manipulation and in which the agents do not share their preference lists? As a second example, how can side payments be used in stable marriage procedures to prevent manipulation?
| 7,679 |
0909.4370
|
2950815430
|
We provide a systematic study of the problem of finding the source of a rumor in a network. We model rumor spreading in a network with a variant of the popular SIR model and then construct an estimator for the rumor source. This estimator is based upon a novel topological quantity which we term rumor centrality. We establish that this is an ML estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has non-trivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops of the true source across different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding rumor sources in networks which are not tree-like.
|
Prior work on rumor spreading has primarily focused on viral epidemics in populations. The natural (and somewhat standard) model for viral epidemics is known as the susceptible-infected-recovered or SIR model @cite_13 . In this model, there are three types of nodes: (i) susceptible nodes, capable of being infected; (ii) infected nodes that can spread the virus further; and (iii) recovered nodes that are cured and can no longer become infected. Research in the SIR model has focused on understanding how the structure of the network and rates of infection and cure lead to large epidemics @cite_0 , @cite_25 , @cite_26 , @cite_19 . This motivated various researchers to propose network inference techniques for learning the relevant network parameters @cite_16 , @cite_10 , @cite_4 , @cite_22 , @cite_5 . However, there has been little (or no) work done on inferring the source of an epidemic.
|
{
"abstract": [
"The study of social networks, and in particular the spread of disease on networks, has attracted considerable recent attention in the physics community. In this paper, we show that a large class of standard epidemiological models, the so-called susceptible infective removed (SIR) models can be solved exactly on a wide variety of networks. In addition to the standard but unrealistic case of fixed infectiveness time and fixed and uncorrelated probability of transmission between all pairs of individuals, we solve cases in which times and probabilities are nonuniform and correlated. We also consider one simple case of an epidemic in a structured population, that of a sexually transmitted disease in a population divided into men and women. We confirm the correctness of our exact solutions with numerical simulations of SIR epidemics on networks.",
"Abstract Recent Bayesian methods for the analysis of infectious disease outbreak data using stochastic epidemic models are reviewed. These methods rely on Markov chain Monte Carlo methods. Both temporal and non-temporal data are considered. The methods are illustrated with a number of examples featuring different models and datasets.",
"The paper is concerned with new methodology for statistical inference for final outcome infectious disease data using certain structured population stochastic epidemic models. A major obstacle to inference for such models is that the likelihood is both analytically and numerically intractable. The approach that is taken here is to impute missing information in the form of a random graph that describes the potential infectious contacts between individuals. This level of imputation overcomes various constraints of existing methodologies and yields more detailed information about the spread of disease. The methods are illustrated with both real and test data. Copyright 2005 Royal Statistical Society.",
"",
"We study some simple models of disease transmission on small-world networks, in which either the probability of infection by a disease or the probability of its transmission is varied, or both. The resulting models display epidemic behavior when the infection or transmission probability rises above the threshold for site or bond percolation on the network, and we give exact solutions for the position of this threshold in a variety of cases. We confirm our analytic results by numerical simulation.",
"Many network phenomena are well modeled as spreads of epidemics through a network. Prominent examples include the spread of worms and email viruses, and, more generally, faults. Many types of information dissemination can also be modeled as spreads of epidemics. In this paper we address the question of what makes an epidemic either weak or potent. More precisely, we identify topological properties of the graph that determine the persistence of epidemics. In particular, we show that if the ratio of cure to infection rates is larger than the spectral radius of the graph, then the mean epidemic lifetime is of order log n, where n is the number of nodes. Conversely, if this ratio is smaller than a generalization of the isoperimetric constant of the graph, then the mean epidemic lifetime is of order e sup na , for a positive constant a. We apply these results to several network topologies including the hypercube, which is a representative connectivity graph for a distributed hash table, the complete graph, which is an important connectivity graph for BGP, and the power law graph, of which the AS-level Internet graph is a prime example. We also study the star topology and the Erdos-Renyi graph as their epidemic spreading behaviors determine the spreading behavior of power law graphs.",
"This paper presents statistical inference of computer virus propagation using non-homogeneous Poisson processes (NHPPs). Under some mathematical assumptions, the number of infected hosts can be modeled by an NHPP In particular, this paper applies a framework of mixed-type NHPPs to the statistical inference of periodic virus propagation. The mixed-type NHPP is defined by a superposition of NHPPs. In numerical experiments, we examine a goodness-of-fit criterion of NHPPs on fitting to real virus infection data, and discuss the effectiveness of the model-based prediction approach for computer virus propagation.",
"Methodology for Bayesian inference is considered for a stochastic epidemic model which permits mixing on both local and global scales. Interest focuses on estimation of the within- and between-group transmission rates given data on the final outcome. The model is sufficiently complex that the likelihood of the data is numerically intractable. To overcome this difficulty, an appropriate latent variable is introduced, about which asymptotic information is known as the population size tends to infinity. This yields a method for approximate inference for the true model. The methods are applied to real data, tested with simulated data, and also applied to a simple epidemic model for which exact results are available for comparison. Copyright 2005 Board of the Foundation of the Scandinavian Journal of Statistics..",
"",
"The Internet has a very complex connectivity recently modeled by the class of scale-free networks. This feature, which appears to be very efficient for a communications network, favors at the same time the spreading of computer viruses. We analyze real data from computer virus infections and find the average lifetime and persistence of viral strains on the Internet. We define a dynamical model for the spreading of infections on scale-free networks, finding the absence of an epidemic threshold and its associated critical behavior. This new epidemiological framework rationalizes data of computer viruses and could help in the understanding of other spreading phenomena on communication and social networks."
],
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_22",
"@cite_10",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_25"
],
"mid": [
"2030539428",
"2128232522",
"1984430808",
"2625625722",
"1969723574",
"1914027636",
"2100740305",
"2079848860",
"114870970",
"2038195874"
]
}
|
Rumors in a Network: Who's the Culprit?
|
In the modern world the ubiquity of networks has made us vulnerable to new types of network risks. These network risks arise in many different contexts, but share a common structure: an isolated risk is amplified because it is spread by the network. For example, as we have witnessed in the recent financial crisis, the strong dependencies or 'network' between institutions have led to the situation where the failure of one (or few) institution(s) have led to global instabilities. More generally, various forms of social networks allow information and instructions to be disseminated and finding the leader of these networks is of great interest for various purposes -identification of the 'latent leader' in a political network, identification of the 'hidden voice' in a spy network, or learning the unknown hierarchy of rulers in a historical setup. Finally, one wishes to identify the source of computer viruses or worms in the Internet and the source of contagious diseases in populations in order to quarantine them.
In essence, all of these situations can be modeled as a rumor spreading through a network. The goal is to find the source of the rumor in these networks in order to control and prevent these network risks based on limited information about the network structure and the 'rumor infected' nodes. In this paper, we will provide a systematic study of the question of identifying the rumor source based on the network structure and rumor infected nodes, as well as understand the fundamental limitations on this estimation problem.
Our Contributions.
In this paper, we provide a systematic study of the question of designing an estimator for the rumor source based on knowledge of the underlying network structure and the rumor infected nodes. To begin, we present a probabilistic model of rumor spreading in a network based on the SIR model. On one hand this is a natural and well studied model for rumor spreading; on the other hand it should be thought of as a good starting point to undertake the systematic study of such inference problems.
Following the approach of researchers working on the reconstruction problem and efficient inference algorithm design (i.e. Belief Propagation), we first address the rumor source estimation problem for tree networks. We characterize the maximum likelihood estimator for the rumor source in regular trees. This estimator assigns to each node a likelihood which we call its rumor centrality. Rumor centrality strongly depends on the underlying topology of the rumor network as well as the rumor infected nodes. The notion of rumor centrality of a node readily extends to arbitrary tree networks.
For arbitrary trees, we find the following surprising threshold phenomenon about the estimator's effectiveness. If the number of nodes within a distance d from any node in a tree scales like d^α, then for trees with α = 0 (i.e. line graphs), the detection probability of our estimator will go to 0 as the network grows in size; but for trees with α > 0, the detection probability will always be strictly greater than 0 (uniformly bounded away from 0) irrespective of the network size. In the latter case, we find that the estimator error remains finite with probability 1, independent of the network size. In the former case (i.e. α = 0), it can be shown that for any estimator the detection probability will go to 0. Thus, our estimator is essentially optimal for any tree network.
Motivated by these results for trees, we develop a systematic approach to utilize the tree estimator -the rumor centrality -to develop an estimator for general networks. This is possible because in essence, under the SIR model, rumors spread along a (random) sub-tree of the network. We perform extensive simulations to show that this estimator performs extremely well. In addition, we apply our estimator to the 15th century Florentine elite family marriage network and are able to accurately infer the most powerful family in the network -the Medici family.
Estimator Construction
In this section we start with a description of our rumor spreading model and then we define the maximum likelihood estimator for the rumor source. For regular tree graphs, we equate the maximum likelihood estimator to a novel combinatoric quantity we call rumor centrality. We obtain a closed form expression for this quantity. Using rumor centrality, we construct rumor source estimators for general trees and general graphs.
Rumor Spreading Model.
We consider a network of nodes to be modeled by an undirected graph G(V, E), where V is a countably infinite set of nodes and E is the set of edges of the form (i, j) for some i and j in V . We assume the set of nodes is countably infinite in order to avoid boundary effects. We consider the case where initially only one node v * is the rumor source.
We use a variant of the SIR model for the rumor spreading known as the susceptible-infected or SI model which does not allow for any nodes to recover, i.e. once a node has the rumor, it keeps it forever. Once a node i has the rumor, it is able to spread it to another node j if and only if there is an edge between them, i.e. if (i, j) ∈ E. The time for a node i to spread the rumor to node j is modeled by an exponential random variable τ ij with rate λ. We assume without loss of generality that λ = 1. All τ ij 's are independent and identically distributed.
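Since each edge delay is an independent exponential, a node's infection time is the earliest time the rumor can reach it along any path, so the spread can be simulated with a Dijkstra-style sweep using random edge delays. The following minimal sketch (graph encoding and names are ours) returns the infected set and the tree along which the rumor actually travelled.

```python
import heapq, random

def simulate_si(adj, source, n_infected):
    """adj: dict node -> list of neighbours. Stops once n_infected nodes hold the
    rumor; returns the infected set and the tree along which the rumor spread."""
    time_infected = {source: 0.0}
    spread_edges = []
    heap = [(random.expovariate(1.0), source, v) for v in adj[source]]
    heapq.heapify(heap)
    while heap and len(time_infected) < n_infected:
        t, u, v = heapq.heappop(heap)
        if v in time_infected:
            continue                          # v already received the rumor earlier
        time_infected[v] = t
        spread_edges.append((u, v))           # v got the rumor from u at time t
        for w in adj[v]:
            if w not in time_infected:
                heapq.heappush(heap, (t + random.expovariate(1.0), v, w))
    return set(time_infected), spread_edges

adj = {1: [2, 3], 2: [1, 4, 5], 3: [1, 6, 7], 4: [2], 5: [2], 6: [3], 7: [3]}
print(simulate_si(adj, source=1, n_infected=5))
```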
Rumor Source Maximum Likelihood Estimator
We now assume that the rumor has spread in G(V, E) according to our model and that N nodes have the rumor. These nodes are represented by a rumor graph G N (V, E) which is a subgraph of G(V, E). We will refer to this rumor graph as G N from here on. The actual rumor source is denoted as v * and our estimator will be v. We assume that each node is equally likely to be the source a priori, so the best estimator will be the maximum likelihood estimator. The only data we have available is the final rumor graph G N , so the estimator becomes
\hat{v} = \arg\max_{v \in G_N} P(G_N \mid v^* = v) \qquad (1)
In general, P(G N |v * = v) will be difficult to evaluate. However, we will show that in regular tree graphs, it can be expressed in a simple closed form.
Rumor Source Estimator for Regular Trees
To simplify our rumor source estimator, we consider the case where the underlying graph is a regular tree where every node has the same degree. In this case, P(G N |v * = v) can be exactly evaluated when we observe G N at the instant when the N th node is infected. First, because of the tree structure of the network, there is a unique sequence of nodes for the rumor to spread to each node in G N . Therefore, to obtain the rumor graph G N , we simply need to construct a permutation of the N nodes subject to the ordering constraints set by the structure of the rumor graph. We will refer to these permutations as permitted permutations. For example, for the network in Figure 1, if node 1 is the source, then {1, 2, 4} is a permitted permutation, whereas {1, 4, 2} is not because node 2 must have the rumor before node 4.
Second, because of the memoryless property of the rumor spreading time between nodes and the constant degree of all nodes, each permitted permutation resulting in G N is equally likely. To see this, imagine every node has degree k and we wish to find the probability of a permitted permutation σ conditioned on v * = v. A new node can connect to any node with a free edge with equal probability. When it joins, it contributes k − 2 new free edges. Therefore, the probability of any N node permitted permutation σ for any node v in G N is
P(\sigma \mid v^* = v) = \frac{1}{k} \cdot \frac{1}{k + (k-2)} \cdots \frac{1}{k + (N-2)(k-2)}
The probability of obtaining G N given that v * = v is obtained by summing the probability of all permitted permutations which result in G N . Because all of the permutations are equally likely, P(G N |v * = v) will be proportional to the number of permitted permutations which start with v and result in G N . Because we will find it necessary to count the number of these permutations, we introduce the following definition: Definition 1. Consider a tree T . Then R(v,T ) is the number of permitted permutations of nodes which start with node v and result in T . We refer to R(v,T ) as the rumor centrality of node v.
With this definition, the likelihood is proportional to R(v, G N ), so we can then rewrite our estimator as
\hat{v} = \arg\max_{v \in G_N} P(G_N \mid v^* = v) = \arg\max_{v \in G_N} R(v, G_N) \qquad (2)
Because the maximum likelihood estimator for the rumor source is also the node which maximizes R(v, G N ), we call this term the rumor centrality of the node v, and the node which maximizes it the rumor center of the graph.
Rumor Source Estimator for General Trees
To obtain the form of the rumor source estimator in equation (2), we relied on the fact that every permitted permutation was equally likely in a regular tree. However, in a general tree where node degrees may not all be the same, this fact may not hold. This considerably complicates the construction of the maximum likelihood estimator.
To avoid this complication, we define the following randomized estimator for general trees. Consider a rumor that has spread on a tree and reached all nodes in the subgraph G N . Then, let the estimate for the rumor source be a random variable v with the following distribution.
P(\hat{v} = v \mid G_N) \propto R(v, G_N) \qquad (3)
This estimator weighs each node by its rumor centrality. It is not the maximum likelihood estimator as we had for regular trees. However, we will show that this estimator is qualitatively as good as the best possible estimator for general trees.
Rumor Source Estimator for General Graphs
When a rumor spreads in a network, each node receives the rumor from one other node. Therefore, there is a spanning tree corresponding to a rumor graph. If we knew this spanning tree, we could apply the previously developed tree estimators. However, the knowledge of the spanning tree will be unknown in a general graph, complicating the rumor source inference.
To begin constructing a rumor source estimator for a general graph, we first define the set T (G N ) to be the set of all spanning trees of the rumor graph G N . Then, we can express the likelihood as a sum of likelihoods over all trees in T (G N ).
P(G_N \mid v^* = v) = \sum_{T \in \mathcal{T}(G_N)} P(T \mid v^* = v) \qquad (4)
We showed that for regular trees every permitted permutation of nodes was equally likely. We now assume this to be true for a general graph. With this assumption, the likelihood of any spanning tree T given that the source is v is proportional to its rumor centrality R(v, T ). Then the rumor source estimator v will be
\hat{v} = \arg\max_{v \in G_N} \sum_{T \in \mathcal{T}(G_N)} R(v, T) \qquad (5)
We show a practical implementation of this estimator in Section 3.
Evaluating the Rumor Centrality
The rumor source estimators we have constructed all require us to evaluate the rumor centrality of a tree graph, R(v, G N ). We now show how to evaluate R(v, G N ). To begin, we first define a term which will be of use in our calculations.
Definition 2.
T^v_{v_j} is the number of nodes in the subtree rooted at node v_j, with node v as the source.
To illustrate this definition, a simple example is shown in Figure 1. In this graph, T^1_2 = 3 because there are 3 nodes in the subtree with node 2 as the root and node 1 as the source. Similarly, T^1_7 = 1 because there is only 1 node in the subtree with node 7 as the root and node 1 as the source.
We now can count the permutations of G_N with v as the source. In the following analysis, we will abuse notation and use T^v_{v_j} to refer to the subtrees and the number of nodes in the subtrees. To begin, we assume v has k neighbors, v_1, v_2, ..., v_k. Each of these nodes is the root of a subtree with T^v_{v_1}, T^v_{v_2}, ..., T^v_{v_k} nodes, respectively. Each node in the subtrees can receive the rumor after its respective root has the rumor. We will have N slots in a given permitted permutation, the first of which must be the source node v. Then, from the remaining N - 1 nodes, we must choose T^v_{v_1} slots for the nodes in the subtree rooted at v_1. These nodes can be ordered in R(v_1, T^v_{v_1}) different ways. With the remaining N - 1 - T^v_{v_1} nodes, we must choose T^v_{v_2} nodes for the tree rooted at node v_2, and these can be ordered R(v_2, T^v_{v_2}) ways. We continue this way recursively to obtain

R(v, G_N) = \binom{N-1}{T^v_{v_1}} \binom{N-1-T^v_{v_1}}{T^v_{v_2}} \cdots \binom{N-1-\sum_{i=1}^{k-1} T^v_{v_i}}{T^v_{v_k}} \prod_{i=1}^{k} R(v_i, T^v_{v_i}) = (N-1)! \prod_{i=1}^{k} \frac{R(v_i, T^v_{v_i})}{T^v_{v_i}!}
Now, to complete the recursion, we expand each of the R(v i , T v vi ) in terms of the subtrees rooted at the nearest neighbor children of these nodes. To simplify notion, we label the nearest neighbor children of node v i with a second subscript, i.e. v ij . We continue this recursion until we reach the leaves of the tree. The leaf subtrees have 1 node and 1 permitted permutation. Therefore, the number of permitted permutations for a given tree G N rooted at v is
R(v, G_N) = (N-1)! \prod_{i=1}^{k} \frac{R(v_i, T^v_{v_i})}{T^v_{v_i}!} = (N-1)! \prod_{i=1}^{k} \frac{(T^v_{v_i}-1)!}{T^v_{v_i}!} \prod_{v_{ij} \in T^v_{v_i}} \frac{R(v_{ij}, T^v_{v_{ij}})}{T^v_{v_{ij}}!} = (N-1)! \prod_{i=1}^{k} \frac{1}{T^v_{v_i}} \prod_{v_{ij} \in T^v_{v_i}} \frac{R(v_{ij}, T^v_{v_{ij}})}{T^v_{v_{ij}}!} = N! \prod_{u \in G_N} \frac{1}{T^v_u} \qquad (6)
In the last line, we have used the fact that T^v_v = N. We thus end up with a simple expression for R(v, G_N) in terms of the size of the subtrees of all nodes in G_N.
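This expression can be evaluated directly with one depth-first pass that collects the subtree sizes T^v_u when the tree is rooted at v. The sketch below is our own; the tree encoding and the 7-node example are illustrative and are not the paper's Figure 1.

```python
from math import factorial

def subtree_sizes(adj, root):
    """adj: adjacency dict of a tree. Returns {u: T^root_u} for every node u."""
    parent, order, stack = {root: None}, [], [root]
    while stack:                              # iterative DFS, recording visit order
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if w != parent[u]:
                parent[w] = u
                stack.append(w)
    size = {}
    for u in reversed(order):                 # children are always processed first
        size[u] = 1 + sum(size[w] for w in adj[u] if w != parent[u])
    return size

def rumor_centrality(adj, v):
    """Equation (6): R(v, G_N) = N! / prod_u T^v_u."""
    prod = 1
    for t in subtree_sizes(adj, v).values():
        prod *= t
    return factorial(len(adj)) // prod

# A small 7-node tree of our own (not the paper's Figure 1).
adj = {1: [2, 3], 2: [1, 4, 5], 3: [1, 6, 7], 4: [2], 5: [2], 6: [3], 7: [3]}
print(rumor_centrality(adj, 1))   # 7! / (7*3*3*1*1*1*1) = 80
print(rumor_centrality(adj, 4))   # 7! / (7*6*4*3*1*1*1) = 10
```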
Evaluating the Rumor Source Estimator
In the following sections we present algorithms for evaluating the rumor source estimator for trees and general graphs. For trees, the estimator is the rumor centrality defined earlier. We present a message passing algorithm to evaluate the rumor centrality of all nodes in a tree. Rumor centrality plays an important role in the rumor source estimator for general graphs. We present an algorithm for evaluating the rumor source estimator in a general graph using the rumor centrality algorithm for trees in combination with an algorithm for generating uniformly distributed random spanning trees.
Trees: A Message Passing Algorithm
In order to find the rumor center of a tree graph of N nodes G_N, we need to first find the rumor centrality of every node in G_N. To do this we need the size of the subtrees T^v_u for all v and u in G_N. There are N^2 of these subtrees, but we can utilize a local condition of the rumor centrality in order to calculate all the rumor centralities with only O(N) computation. Consider two neighboring nodes u and v in G_N. All of their subtrees will be the same size except for those rooted at u and v. In fact, there is a special relation between these two subtrees.
T^v_u = N - T^u_v \qquad (7)
For example, in Figure 1, for node 1, T^1_2 has 3 nodes, while for node 2, T^2_1 has N - T^1_2 or 4 nodes. Because of this relation, we can relate the rumor centralities of any two neighboring nodes.
R(u, G_N) = R(v, G_N) \, \frac{T^v_u}{N - T^v_u} \qquad (8)
This result is the key to our algorithm for calculating the rumor centrality for all nodes in G_N. We first select any node v as the source node and calculate the size of all of its subtrees T^v_u and its rumor centrality R(v, G_N). This can be done by having each node u pass two messages up to its parent. The first message is the number of nodes in u's subtree, which we call t^{up}_{u→parent(u)}. The second message is the cumulative product of the sizes of the subtrees of all nodes in u's subtree, which we call p^{up}_{u→parent(u)}. The parent node then adds the t^{up} messages together to obtain the size of its own subtree, and multiplies the p^{up} messages together to obtain its cumulative subtree product. These messages are then passed upward until the source node receives the messages. By multiplying the cumulative subtree products of its children, the source node will obtain its rumor centrality, R(v, G_N). This algorithm will require only O(N) computation.
With the rumor centrality of node v, we then evaluate the rumor centrality for the children of v using equation (8). Each node u passes its rumor centrality to its children in a message we define as r down u→child(u) . Each node u can calculate its rumor centrality using its parent's rumor centrality and its own subtree size T v u . The computational effort of this algorithm is also O(N ). Therefore, the overall algorithm obtains the rumor centrality of all N nodes with O(N ) computation. The pseudocode for this message passing algorithm is shown for completeness.
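The authors' pseudocode is not reproduced here; as a hedged illustration, the Python sketch below (our own reconstruction, with our own variable names) implements the two passes just described on an adjacency-list tree: an upward pass that accumulates subtree sizes and the centrality of an arbitrarily chosen root, followed by a downward pass that propagates centralities to every other node via equation (8).

```python
import math
from collections import deque

def all_rumor_centralities(adj):
    """Return {u: R(u, G_N)} for every node of a tree in O(N) arithmetic operations.

    adj: dict node -> list of neighbours.  A sketch of the message-passing
    scheme described in the text; variable names are ours.
    """
    nodes = list(adj)
    n = len(nodes)
    root = nodes[0]

    # Upward pass: BFS order guarantees parents appear before their children.
    parent = {root: None}
    order = [root]
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w != parent[u]:
                parent[w] = u
                order.append(w)
                queue.append(w)

    subtree = {u: 1 for u in nodes}            # accumulates the t^up messages
    for u in reversed(order[1:]):              # deepest nodes first
        subtree[parent[u]] += subtree[u]

    R = {root: math.factorial(n) // math.prod(subtree[u] for u in nodes)}

    # Downward pass: R(u) = R(parent(u)) * T^root_u / (N - T^root_u), equation (8).
    for u in order[1:]:
        R[u] = R[parent[u]] * subtree[u] // (n - subtree[u])
    return R

# Example on the path 0-1-2-3-4: the centre node 2 has the largest centrality.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(all_rumor_centralities(path))   # {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
```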
General Graphs
For a general graph G N with N nodes, recall that the rumor source estimator was of the form
\hat{v} = \arg\max_{v \in G_N} \sum_{T \in \mathcal{T}(G_N)} R(v, T)    (9)
where \mathcal{T}(G_N) is the set of all spanning trees of G_N. If we consider the spanning tree T to be a uniformly distributed random variable over the sample space \mathcal{T}(G_N), where each tree has probability 1/|\mathcal{T}(G_N)|, then we can rewrite the sum as an expectation of the random variable R(v, T) over this uniform distribution.
\sum_{T \in \mathcal{T}(G_N)} R(v, T) = |\mathcal{T}(G_N)| \sum_{T \in \mathcal{T}(G_N)} \frac{R(v, T)}{|\mathcal{T}(G_N)|} = |\mathcal{T}(G_N)| \, \mathbb{E}[R(v, T)]    (10)
We now need a way to evaluate the above expectation for all nodes in G N . We accomplish this using two algorithms. The first is an algorithm for generating uniformly distributed spanning trees utilizing a random walk on G N [12]. The second is the previous algorithm for calculating the rumor centrality on a tree. To generate uniformly distributed random spanning trees, we perform a random walk on G N in the following manner. The random walk starts at a random node and moves to any of the node's neighbors with equal probability. This random walk continues this way on G N until the graph is covered (i.e. until every node is reached).
Once the random walk has covered every node in G_N, we obtain a spanning tree with the following construction. We call the first node in the random walk v_start. For each node v ∈ G_N \ {v_start}, we add to the spanning tree the edge (w, v) which corresponds to the first transition into node v in the random walk. For example, consider a random walk on the graph in Figure 2 with the covering random walk node sequence {1, 2, 4, 2, 1, 3}. Then the generated tree will consist of edges {(1, 2), (1, 3), (2, 4)}, as indicated in the figure. The trees generated by this random walk on G_N have a uniform distribution, and the runtime of this algorithm is given by the cover time of G_N. Once we have generated a tree, we use the tree rumor centrality algorithm to calculate the rumor centrality for every node in the tree. We generate many trees and take the average of the rumor centralities for each node. The node with the maximum expected value becomes our estimate of the rumor source. In more detail, if we define the i-th generated tree as T_i, and M total trees are generated, then our estimator is

\hat{v} = \arg\max_{v \in G_N} \frac{1}{M} \sum_{i=1}^{M} R(v, T_i)    (11)
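A minimal Python sketch of this estimator (ours; it assumes the all_rumor_centralities routine from the earlier sketch is in scope and uses the covering-random-walk construction described above, which is essentially the Aldous–Broder method for sampling a uniform spanning tree):

```python
import random
from collections import defaultdict

def random_spanning_tree(adj):
    """Sample a uniformly distributed spanning tree of the connected graph adj by
    recording, for every node, the edge of its first visit in a covering random walk."""
    nodes = list(adj)
    current = random.choice(nodes)
    visited = {current}
    tree = defaultdict(list)
    while len(visited) < len(nodes):
        nxt = random.choice(adj[current])
        if nxt not in visited:               # first transition into nxt
            tree[current].append(nxt)
            tree[nxt].append(current)
            visited.add(nxt)
        current = nxt
    return dict(tree)

def estimate_source(adj, num_trees=1000):
    """Equation (11): average tree rumor centralities over sampled spanning trees.
    Assumes all_rumor_centralities from the previous sketch is available."""
    totals = defaultdict(int)
    for _ in range(num_trees):
        tree = random_spanning_tree(adj)
        for v, r in all_rumor_centralities(tree).items():
            totals[v] += r
    return max(totals, key=totals.get)

# Example rumor graph: a 4-cycle with a chord; the estimate is sample dependent.
g = {1: [2, 3, 4], 2: [1, 4], 3: [1, 4], 4: [1, 2, 3]}
# print(estimate_source(g))
```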
Detection Probability: A Threshold Phenomenon
This section examines the behavior of the detection probability of the rumor source estimators for different graph structures. We establish that the asymptotic detection probability has a phase-transition effect: for line graphs it is 0, while for trees with finite growth it is strictly greater than 0.
Line Graphs: No Detection
We first consider the detection probability for a line graph. This is a regular tree with degree 2, so we use the maximum likelihood estimator for regular trees. We will establish the following result for the performance of the rumor source estimator in a line graph.
Theorem 1. Define the event of correct rumor source detection after time t on a linear graph as C_t. Then the probability of correct detection of the maximum likelihood rumor source estimator, P(C_t), scales as

P(C_t) = O\!\left(\frac{1}{\sqrt{t}}\right)

As can be seen, the line graph detection probability scales as t^{-1/2}, which goes to 0 as t goes to infinity. The intuition for this result is that the rumor source estimator provides very little information because of the linear graph's trivial structure.
We generated 1000 rumor graphs per rumor graph size on an underlying linear graph. The detection probability versus the graph size is shown in Figure 3. As can be seen, the detection probability decays as N^{-1/2}, as predicted by Theorem 1.
Proof of Theorem 1
In this section, we present a proof of Theorem 1. The rumor spreading in the line graph is equivalent to 2 independent Poisson processes with rate 1 beginning at the source and spreading in opposite directions. The following theorem, which is proved in the appendix, bounds the number of arrivals in a Poisson process in time t.
Theorem 2. Consider a Poisson process N(·) with rate 1, and a small positive ε. For large t, the probability of having fewer than t(1 − ε) arrivals in time t is bounded by

P(N(t) \le t(1 − ε)) \le c(t + δ)^{1/2} e^{-tε^2}

for some positive c and some small positive δ. Also, the probability of having more than t(1 + ε) arrivals in time t is bounded by

P(N(t) \ge t(1 + ε)) \le e^{-tε^2}
Therefore, with high probability, after a time t, for some small ε, the number of total nodes N, which is the sum of the arrivals in both Poisson processes, is bounded by

2t(1 − ε) \le N \le 2t(1 + ε)    (12)
If N is fixed, the detection probability can be easily calculated. However, we want the detection probability after a fixed time t. Therefore, we define the C N as the event of correct detection given N nodes in the graph. Then we can rewrite P(C t ) as
P(C_t) = \sum_{N} P(C_N) P(N \mid t) \le \sum_{N=2t(1-ε)}^{2t(1+ε)} P(C_N) P(N \mid t) + e^{-tε^2}\bigl(c(t+δ)^{1/2} + 1\bigr)
For large t, we can neglect the exponential term on the right, so the above expression reduces to
P(C_t) \approx \sum_{N=2t(1-ε)}^{2t(1+ε)} P(C_N) P(N \mid t)    (13)
We now consider N to be a fixed quantity and evaluate P (C N ).
Because of the linear structure of the underlying graph, all rumor graphs G_N with N nodes are isomorphic (they are all lines of length N). For any G_N, the estimate for the rumor source \hat{v} will be the node at the center of the line. The following lemma makes this more precise.
Lemma 1. For a linear rumor graph with N nodes, label nodes a distance k from one side of the line as v k . Then, if N is odd, the rumor source estimator will be node v (N +1)/2 . If N is even, the rumor source estimator is either node v N/2 or node v N/2+1 with equal probability.
To prove this, we first must evaluate the rumor centrality of a node in the line graph. For a node v k a distance k from one end, the rumor centrality is
R(v_k, G_N) = N! \prod_{v_i \in G_N} \frac{1}{T^{v_k}_{v_i}} = \frac{N!}{N} \prod_{i=1}^{k-1} \frac{1}{i} \prod_{j=1}^{N-k} \frac{1}{j} = \frac{(N-1)!}{(k-1)!\,(N-k)!} = \frac{(N-1)!}{(k-1)!\,(N-1-(k-1))!} = \frac{N'!}{k'!\,(N'-k')!}    (14)

where N' = N − 1 and k' = k − 1. We see that the rumor centrality R(v_k, G_N) is just a binomial coefficient. It is known that this is maximized when k' = N'/2 for even N', and when k' = (N'+1)/2 or (N'−1)/2 for odd N'. In terms of the original labels for the line graph, the rumor centrality is maximized at k = (N+1)/2 for odd N, and at k = N/2 and k = N/2 + 1 for even N. This proves Lemma 1.
Without loss of generality, we now assume that N is odd and that the rumor source estimator v is node v (N +1)/2 . The detection probability P(C N ) will then be equal to the conditional probability that v * = v (N +1)/2 given a graph G N . To evaluate this probability, we express it in terms of the rumor centrality of the nodes.
P(C_N) = P(v^* = v_{(N+1)/2} \mid G_N) = \frac{P(G_N \mid v^* = v_{(N+1)/2})\, P(v^* = v_{(N+1)/2})}{P(G_N)} = \frac{R(v_{(N+1)/2}, G_N)\, P(v^* = v_{(N+1)/2})}{\sum_{v \in G_N} R(v, G_N)\, P(v^* = v)} = \frac{R(v_{(N+1)/2}, G_N)}{\sum_{v \in G_N} R(v, G_N)}    (15)
Now we can evaluate the detection probability.
P(C_N) = \frac{R(v_{(N+1)/2}, G_N)}{\sum_{k=1}^{N} R(v_k, G_N)} = \frac{(N-1)!}{((N-1)/2)!\,((N-1)/2)!} \left( \sum_{k=1}^{N} \frac{(N-1)!}{(k-1)!\,(N-1-(k-1))!} \right)^{-1} = \frac{N'!}{(N'/2)!\,(N'/2)!} \left( \sum_{k'=0}^{N'} \frac{N'!}{k'!\,(N'-k')!} \right)^{-1}
To simplify the expression above, we use Stirling's approximation for N !,
N! \approx \sqrt{2\pi N}\left(\frac{N}{e}\right)^{N}    (16)
along with the identity
\sum_{k=0}^{N} \frac{N!}{k!\,(N-k)!} = 2^{N}    (17)
Then, the detection probability becomes
P(C_N) \approx 2^{-N'} \sqrt{2\pi N'}\left(\frac{N'}{e}\right)^{N'} \left[ \sqrt{2\pi N'/2}\left(\frac{N'}{2e}\right)^{N'/2} \right]^{-2} \approx 2^{-N'} \frac{\sqrt{2\pi N'}\,(N'/e)^{N'}}{\pi N' (N'/e)^{N'} 2^{-N'}} \approx \sqrt{\frac{2}{\pi N}} = O\!\left(\frac{1}{\sqrt{N}}\right)
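As a quick numerical illustration (our own check, not part of the proof), the exact expression P(C_N) = \binom{N-1}{(N-1)/2} / 2^{N-1} derived above can be compared against the asymptotic value \sqrt{2/(\pi N)}:

```python
import math

for N in (11, 101, 1001):                      # odd N, line graph with N nodes
    exact = math.comb(N - 1, (N - 1) // 2) / 2 ** (N - 1)
    approx = math.sqrt(2 / (math.pi * N))
    print(N, exact, approx)
# The exact detection probability tracks sqrt(2/(pi*N)), i.e. Theta(1/sqrt(N)),
# and the agreement improves as N grows.
```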
Now we need to convert this expression from a function of N to a function of t. Using equation (13), we obtain
P(C_t) \approx \sum_{N=2t(1-ε)}^{2t(1+ε)} P(C_N) P(N \mid t) \approx \sum_{N=2t(1-ε)}^{2t(1+ε)} O\!\left(\frac{1}{\sqrt{N}}\right) P(N \mid t) = O\!\left(\frac{1}{\sqrt{t}}\right)
This completes the proof of Theorem 1.
Geometric Trees: Non-Trivial Detection
We now consider the detection probability of our estimator in a geometric tree, which is a non-regular tree parameterized by a number α. If we let n(d) denote the maximum number of nodes a distance d from any node, then there exist constants b and c such that b ≤ c and

b d^α \le n(d) \le c d^α
We use the randomized estimator for geometric trees. For this estimator, we obtain the following result.
Theorem 3. Define the event of correct rumor source detection after time t on a geometric tree with parameter α > 0 as C t . Then the probability of correct detection of the randomized rumor source estimator, P(C t ), is strictly greater than 0. That is,
lim inf t P(C t ) > 0
This theorem, together with Theorem 1, says that α = 0 versus α > 0 is the threshold for non-trivial detection: for α = 0 the graph is essentially a linear graph, so the detection probability goes to 0 by Theorem 1, while for any α > 0 it remains bounded away from 0. While Theorem 3 only deals with correct detection, one would also be interested in the size of the rumor source estimator error. We obtain the following result for the estimator error.
Lemma 2.
Define d(\hat{v}, v^*) as the distance from the rumor source estimator \hat{v} to the rumor source v^*. Assume a rumor has spread for a time t on a geometric tree with parameter α > 0. Then, for any ε > 0, there exists an l ≥ 0 such that

\liminf_t P(d(\hat{v}, v^*) \le l) \ge 1 - ε
What this lemma says is that no matter how large the rumor graph becomes, most of the detection probability mass concentrates on a region close to the rumor source v * .
We generated 1000 instances of rumor graphs per rumor graph size on underlying geometric trees. The α parameters ranged from 0 to 4. As can be seen in Figure 4, the detection probability remains constant as the tree size grows for strictly positive α and decays to 0 for α = 0, as predicted by Theorem 3. Notice that the detection probability for non-zero α is close to 1. A histogram for the geometric tree with α = 1 shows that the error is no larger than 4 hops. This indicates that the estimator error remains bounded, in accordance with Lemma 2.
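For readers who wish to reproduce such experiments, the following Python sketch (ours; graph sizes and helper names are illustrative) simulates the spreading model used throughout the paper: starting from a source, each edge out of the currently infected set passes the rumor after an independent Exponential(1) delay, and the simulation stops once a prescribed number of nodes hold the rumor.

```python
import heapq
import random

def spread_rumor(adj, source, num_infected):
    """Simulate SI spreading with i.i.d. Exponential(1) edge delays on the
    underlying graph adj until num_infected nodes hold the rumor.
    Returns the infected nodes as the tree of first arrivals (for tree graphs
    this is exactly the rumor graph G_N)."""
    infected = {source}
    rumor_adj = {source: []}
    events = []                        # (absolute arrival time, from, to)
    for w in adj[source]:
        heapq.heappush(events, (random.expovariate(1.0), source, w))
    while len(infected) < num_infected and events:
        t_arr, u, v = heapq.heappop(events)
        if v in infected:
            continue
        infected.add(v)
        rumor_adj[u].append(v)
        rumor_adj[v] = [u]
        for w in adj[v]:
            if w not in infected:
                heapq.heappush(events, (t_arr + random.expovariate(1.0), v, w))
    return rumor_adj

# Example: spread on a long line, then estimate the source with the
# message-passing routine from the earlier sketch.
line = {i: [j for j in (i - 1, i + 1) if 0 <= j < 2001] for i in range(2001)}
g = spread_rumor(line, source=1000, num_infected=201)
# centralities = all_rumor_centralities(g)
# estimate = max(centralities, key=centralities.get)
```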
Proof of Theorem 3
In this section we present a proof of Theorem 3. This proof involves 3 steps. First, we show that the rumor graph will have a certain structure with high probability. This allows us to put bounds on T^{v^*}_v, the sizes of the subtrees with the rumor source as the source node. Then, we express the detection probability in terms of the variables T^{v^*}_v. Finally, we show that with this structure for the rumor graphs, the detection probability is bounded away from zero. Throughout we assume that the underlying geometric tree satisfies the property that there exist constants b and c such that b ≤ c and the number of nodes a distance d from any node, n(d), is bounded by

b d^α \le n(d) \le c d^α    (19)

Structure of Rumor Graphs. We wish to understand the structure of a rumor graph on an underlying geometric tree. To do this, we first assume that the rumor has been spreading for a long time t. Then, we will formally show that there are two conditions that the rumor graph G_t will satisfy. First, the rumor graph will contain every node within a distance t(1 − ε) of the source node, for some small positive ε. Second, there will not be any nodes beyond a distance t(1 + ε) from the source node. Figure 5 shows the basic structure of the rumor graph: it is full up to a distance t(1 − ε) and does not extend beyond t(1 + ε). We now formally state our results for the structure of the rumor graph.
Theorem 4. Consider a geometric tree with parameter α on which a rumor spreads for a long time t, and let ε = t^{-1/2+δ} for some small δ > 0. Define the resulting rumor graph as G_t, and let \mathcal{G}_t be the set of all rumor graphs which can occur after a time t and that have the following two properties: every node within a distance t(1 − ε) from the source receives the rumor, and there are no nodes with the rumor beyond a distance t(1 + ε) from the source. Then,

lim_{t→∞} P(G_t ∈ \mathcal{G}_t) = 1    (20)
To prove this theorem, we first note that every spreading time is exponentially distributed with an identical parameter, which we assume to be 1 without loss of generality. Then, after a time t, a node a distance t(1 − ε) from the source having the rumor is equivalent to a Poisson process N(·) with rate 1 having t(1 − ε) arrivals in time t. Theorem 2 bounds the number of arrivals in the Poisson process. Now, we define the following events.
• E_i = Node i, which is a distance t(1 − ε) from the source, has the rumor
• F = All nodes less than a distance t(1 − ε) from the source have the rumor
• A_i = Node i, which is a distance t(1 + ε) from the source, has the rumor
• B = No node beyond a distance t(1 + ε) from the source has the rumor

At a distance t(1 − ε) there are at most c[t(1 − ε)]^α nodes in the geometric tree. With this we now apply the union bound to the probability of event F.

P(F) = P\!\left( \bigcap_{i=1}^{c[t(1-ε)]^α} E_i \right) = 1 - P\!\left( \bigcup_{i=1}^{c[t(1-ε)]^α} E_i^c \right) \ge 1 - \sum_{i=1}^{c[t(1-ε)]^α} P(E_i^c) \ge 1 - c[t(1-ε)]^α P(E_i^c) \ge 1 - c\, t^α P(E_i^c)
Event E_i^c occurring means a node a distance t(1 − ε) from the source does not have the rumor. This is equivalent to a Poisson process of rate 1 having fewer than t(1 − ε) arrivals in time t. We can use Theorem 2 to upper bound P(E_i^c).
P(E_i^c) \le a\sqrt{t}\, e^{-tε^2}
Using this bound, we now obtain a lower bound for P (F ).
P(F) \ge 1 - c\, t^α P(E_i^c) \ge 1 - a c\, t^{α+1/2} e^{-tε^2}
We now wish to take the limit as t approaches infinity. However, ε depends on t, so care must be taken. Substituting in the expression for ε and taking the limit, we obtain

lim_{t→∞} P(F) \ge lim_{t→∞} \left[ 1 - a c\, t^{α+1/2} e^{-tε^2} \right] = lim_{t→∞} \left[ 1 - a c\, t^{α+1/2} e^{-t^{2δ}} \right] = 1
Now we wish to prove that all nodes beyond a distance t(1 + ε) from the source do not have the rumor. We will follow a similar procedure as we did for proving the first half of Theorem 4. At a distance t(1 + ε) there are at most c[t(1 + ε)]^α nodes in the geometric tree. With this we now apply the union bound to the probability of event B.
P(B) = P\!\left( \bigcap_{i=1}^{c[t(1+ε)]^α} A_i^c \right) = 1 - P\!\left( \bigcup_{i=1}^{c[t(1+ε)]^α} A_i \right) \ge 1 - \sum_{i=1}^{c[t(1+ε)]^α} P(A_i) \ge 1 - c[t(1+ε)]^α P(A_i)
Event A_i occurring means a node a distance t(1 + ε) from the source has the rumor. This is equivalent to a Poisson process of rate 1 having more than t(1 + ε) arrivals in time t. We can use Theorem 2 to upper bound P(A_i).
P(A_i) \le e^{-tε^2}
Using this bound, we now obtain a lower bound for P(B).

P(B) \ge 1 - c[t(1 + ε)]^α P(A_i) \ge 1 - c[t(1 + ε)]^α e^{-tε^2}
We now wish to take the limit as t approaches infinity. Again, we substitute in the expression for ε and take the limit.
lim_{t→∞} P(B) \ge lim_{t→∞} \left[ 1 - c[t(1 + ε)]^α e^{-tε^2} \right] = lim_{t→∞} \left[ 1 - c\bigl[t(1 + t^{-1/2+δ})\bigr]^α e^{-t^{2δ}} \right] = 1
This completes the proof of Theorem 4.
Detection Probability in terms of T^{v^*}_v. Our rumor source estimator is a random variable \hat{v} which takes the value v with probability proportional to R(v, G_t). The conditional probability of correct detection given a rumor graph G_t will be the probability of this estimator choosing the source node v^*, which is P(\hat{v} = v^* | G_t). We showed that all rumor graphs belong to the set \mathcal{G}_t with probability approaching 1 for large t. Therefore, we lower bound the probability of correct detection P(C_t) as
\liminf_t P(C_t) = \liminf_t \sum_{G_t} P(\hat{v} = v^* \mid G_t)\, P(G_t) \ge \liminf_t \inf_{G_t \in \mathcal{G}_t} P(\hat{v} = v^* \mid G_t) \cdot \liminf_t P(G_t \in \mathcal{G}_t) \ge \liminf_t \inf_{G_t \in \mathcal{G}_t} P(\hat{v} = v^* \mid G_t)
We see that the detection probability is lower bounded by the infimum of the conditional detection probability P( v = v * |G t ) over G t ∈ G t . Next, we express the detection probability in terms of the size of the subtrees T v vi .
\liminf_t P(C_t) \ge \liminf_t \inf_{G_t \in \mathcal{G}_t} P(\hat{v} = v^* \mid G_t) \ge \liminf_t \inf_{G_t \in \mathcal{G}_t} \frac{\prod_{u \in G_t} (T^{v^*}_u)^{-1}}{\sum_{v \in G_t} \prod_{v_i \in G_t} (T^{v}_{v_i})^{-1}} \ge \liminf_t \inf_{G_t \in \mathcal{G}_t} \left( \sum_{v \in G_t} \prod_{v_i \in G_t} \frac{T^{v^*}_{v_i}}{T^{v}_{v_i}} \right)^{-1}    (21)
The structure of rumor graphs in G t will allow us to bound the sizes of subtrees whose source is node v * (T v * v ). Therefore, if we can express P( v = v * |G t ) in terms of T v * v , we will be able to bound the detection probability.
In order to evaluate the detection probability for a general tree, we must relate T v * vi to T v vi . We have already seen that when node v is one hop from v * , all of the subtrees are the same except for those rooted at v and v * . In fact, we showed that for a graph with N total nodes,
T^{v}_{v^*} = N - T^{v^*}_{v}    (22)
For a node v one hop from v^*, the product in equation (21) becomes

\prod_{v_i \in G_t} \frac{T^{v^*}_{v_i}}{T^{v}_{v_i}} = \frac{T^{v^*}_{v^*}\, T^{v^*}_{v}}{T^{v}_{v^*}\, T^{v}_{v}}    (23)
= \frac{T^{v^*}_{v}}{N - T^{v^*}_{v}}    (24)

Figure 6: Comparison of T^j_i variables for source nodes 2 hops apart.
When v is two hops from v * , all of the subtrees are the same except for those rooted at v, v * , and the node in between, which we call node 1. Figure 6 shows an example. In this case, the product in equation 21 becomes
\prod_{v_i \in G_t} \frac{T^{v^*}_{v_i}}{T^{v}_{v_i}} = \frac{T^{v^*}_{v^*}\, T^{v^*}_{v}\, T^{v^*}_{1}}{T^{v}_{v^*}\, T^{v}_{v}\, T^{v}_{1}}    (25)
= \frac{T^{v^*}_{v}\, T^{v^*}_{1}}{(N - T^{v^*}_{1})(N - T^{v^*}_{v})}    (26)
Continuing this way, we find that in general, for any node v in G t ,
\prod_{v_i \in G_t} \frac{T^{v^*}_{v_i}}{T^{v}_{v_i}} = \prod_{v_i \in P(v^*, v)} \frac{T^{v^*}_{v_i}}{N - T^{v^*}_{v_i}}    (27)
where P(v * , v) means any node in the path between v * and v, not including v * . The detection probability of the rumor source estimator is then
\liminf_t P(C_t) \ge \liminf_t \inf_{G_t \in \mathcal{G}_t} \left( 1 + \sum_{v \in G_t \setminus v^*} \prod_{v_i \in P(v^*, v)} \frac{T^{v^*}_{v_i}}{N - T^{v^*}_{v_i}} \right)^{-1} = \liminf_t \inf_{G_t \in \mathcal{G}_t} \frac{1}{S}
We call the resulting summation S and will need to upper bound it in order to get a lower bound on the detection probability.
Upper Bounding S. In this section we will show that the sum S has a finite upper bound. We start with an underlying geometric tree with parameter α > 0. We then assume we have a rumor graph G t with N nodes which belongs to G t . To evaluate the detection probability, we must upper bound the sum
S = 1 + \sum_{v \in G_t \setminus v^*} \prod_{v_i \in P(v^*, v)} \frac{T^{v^*}_{v_i}}{N - T^{v^*}_{v_i}}    (28)
We know from Theorem 4 that after a time t the graph will be full up to t(1 − ε), with ε = t^{-1/2+δ} as before. We will now divide G_t into two parts as shown in Figure 5. The first part is the portion of the graph within a distance t(1 − ε) from the source, not including the source, and is denoted G_0. The remaining nodes form the graph G_1. We can then break the sum S into two parts.
S = 1 + \sum_{v \in G_t \setminus v^*} \prod_{v_i \in P(v^*, v)} \frac{T^{v^*}_{v_i}}{N - T^{v^*}_{v_i}} = 1 + \sum_{v \in G_0} \prod_{v_i \in P(v^*, v)} \frac{T^{v^*}_{v_i}}{N - T^{v^*}_{v_i}} + \sum_{v \in G_1} \prod_{v_i \in P(v^*, v)} \frac{T^{v^*}_{v_i}}{N - T^{v^*}_{v_i}} = 1 + S_0 + S_1
First we will upper bound S_0. To do this, we must first count the number of nodes in G_0, which we call N_0. There are between b d^α and c d^α nodes a distance d from the source. By summing over d up to t(1 − ε) we obtain the following bounds for N_0.
\sum_{d=1}^{t(1-ε)} b d^α \le N_0 \le \sum_{d=1}^{t(1-ε)} c d^α
\frac{b [t(1-ε)]^{α+1}}{α+1} \le N_0 \le \frac{c [t(1-ε)]^{α+1}}{α+1}
N_0^{\min} \le N_0 \le N_0^{\max}
We have approximated the sum by an integral, which is valid when t is large. Now, we must calculate N_1, the number of nodes in G_1. To do this, we note that since G_t ∈ \mathcal{G}_t, there are no nodes beyond a distance t(1 + ε). Therefore, using the integral approximation again for the sum, we obtain the following bounds for N_1:

\frac{b t^{α+1}}{α+1}\left[(1+ε)^{α+1} - (1-ε)^{α+1}\right] \le N_1 \le \frac{c t^{α+1}}{α+1}\left[(1+ε)^{α+1} - (1-ε)^{α+1}\right]
\frac{b\, 2ε(α+1) t^{α+1}}{α+1} \le N_1 \le \frac{c\, 2ε(α+1) t^{α+1}}{α+1}
2 b ε t^{α+1} \le N_1 \le 2 c ε t^{α+1}
N_1^{\min} \le N_1 \le N_1^{\max}
We used the first-order term of the binomial approximation for (1 ± ε)^{α+1} above. Now we rewrite S_0 in a more convenient notation.
S_0 = \sum_{v \in G_0} \prod_{v_i \in P(v^*, v)} \frac{T^{v^*}_{v_i}}{N - T^{v^*}_{v_i}}    (29)
= \sum_{v \in G_0} \prod_{v_i \in P(v^*, v)} w_{v_i}    (30)
= \sum_{v \in G_0} b_v    (31)
Now, to upper bound S 0 , we group the b v according to the distance of v from v * . We denote a d as the maximum value of b v among the set of nodes a distance d from the source. Then we can upper bound S 0 as
S_0 \le \sum_{d=1}^{t(1-ε)} c d^α a_d
Now, to calculate a_d, we first must evaluate the w_{v_i} term in equation (30). To do this, we consider a node v_i ∈ G_0 a distance i from the source. For this node, we upper bound the number of nodes in its subtree by dividing all N_0 nodes in G_0 among the minimum b i^α nodes a distance i from the root, and then adding all N_1 nodes in G_1, to get the following upper bound on T^{v^*}_{v_i}:

T^{v^*}_{v_i} \le \frac{N_0}{b i^α} + N_1

With this, we obtain the following upper bound for w_{v_i}:
w_{v_i} = \frac{T^{v^*}_{v_i}}{N - T^{v^*}_{v_i}} \le \frac{\frac{N_0}{b i^α} + N_1}{N - \frac{N_0}{b i^α} - N_1} \le \frac{\frac{N_0}{b i^α} + N_1}{N_0 - \frac{N_0}{b i^α}} \le \frac{\frac{1}{b i^α} + \frac{N_1}{N_0}}{1 - \frac{1}{b i^α}} \le \frac{\frac{1}{b i^α} + \frac{N_1^{\max}}{N_0^{\min}}}{1 - \frac{1}{b i^α}} \le c_1 \left[ \frac{1}{b i^α} + \frac{2cε(α+1)}{b(1-ε)^{α+1}} \right]
The constant c_1 is equal to (1 - 1/b)^{-1}. Now, we write down an upper bound for S_0, recalling that ε = t^{-1/2+δ}.
S_0 \le \sum_{d=1}^{t(1-ε)} c d^α a_d \le \sum_{d=1}^{t(1-ε)} c d^α \prod_{i=1}^{d} c_1\left[ \frac{1}{b i^α} + \frac{2cε(α+1)}{b(1-ε)^{α+1}} \right] \le \sum_{d=1}^{t(1-t^{-1/2+δ})} c d^α \prod_{i=1}^{d} c_1\left[ \frac{1}{b i^α} + \frac{2c t^{-1/2+δ}(α+1)}{b(1-t^{-1/2+δ})^{α+1}} \right] \le \sum_{d=1}^{t(1-t^{-1/2+δ})} c d^α \prod_{i=1}^{d} c_1\left[ \frac{1}{b i^α} + \frac{2c d^{-1/2+δ}(α+1)}{b(1-d^{-1/2+δ})^{α+1}} \right]
In the last line, we used the fact that d ≤ t to upper bound the product.
We define the terms in the above sum corresponding to a specific value of d as A d . Then, we use an infinite sum to upper bound this sum.
S_0 \le \sum_{d=1}^{t(1-t^{-1/2+δ})} A_d \le \sum_{d=1}^{\infty} A_d
If we apply the ratio test to the terms of the infinite sum, we find that
\limsup_{d} \frac{A_d}{A_{d-1}} = \limsup_{d} \left( \frac{d}{d-1} \right)^{α} c_1 \left[ \frac{1}{b d^α} + \frac{2c d^{-1/2+δ}(α+1)}{b(1-d^{-1/2+δ})^{α+1}} \right] = 0
Thus, the infinite sum converges, so S 0 also converges. Now we only need to show convergence of S 1 .
We upper bound S 1 in the same way as we did for S 0 . We write the sum as
S_1 = \sum_{v \in G_1} \prod_{v_i \in P(v^*, v)} \frac{T^{v^*}_{v_i}}{N - T^{v^*}_{v_i}}    (32)
= \sum_{v \in G_1} \prod_{v_i \in P(v^*, v)} w_{v_i}    (33)
= \sum_{v \in G_1} \prod_{v_i \in P(v^*, v),\, v_i \in G_0} w_{v_i} \prod_{v_i \in P(v^*, v),\, v_i \in G_1} w_{v_i}    (34)
= \sum_{v \in G_1} \left( \prod_{v_i \in P(v^*, v),\, v_i \in G_0} w_{v_i} \right) b_v    (35)
To upper bound S 1 , we group the b v according to the distance of v from the top of G 1 . We denote a d as the maximum value of b v among the set of nodes a distance d from the top of G 1 . We also denote the upper bound of the product of w vi over nodes in P (v * , v) and G 0 as Γ. Then we can upper bound S 1 as
S_1 \le \sum_{v \in G_1} Γ\, b_v \le \sum_{d=1}^{2tε} c d^α\, Γ\, a_d
Now, to calculate a d , we upper bound the w vi for nodes in G 1 . We assume that every subtree in G 1 has size N 1 . Then, similar to our procedure for S 0 , we upper bound the weights w vi for the nodes in G 1 .
w_{v_i} = \frac{T^{v^*}_{v_i}}{N - T^{v^*}_{v_i}} \le \frac{N_1}{N - N_1} \le \frac{N_1}{N_0} \le \frac{N_1^{\max}}{N_0^{\min}} \le \frac{2cε(α+1)}{b(1-ε)^{α+1}}
Recalling that ε = t^{-1/2+δ}, we upper bound S_1 as
S_1 \le \sum_{d=1}^{2tε} c d^α\, Γ\, a_d \le \sum_{d=1}^{2t^{1/2+δ}} c d^α\, Γ \prod_{i=1}^{d} w_{v_i} \le \sum_{d=1}^{2t^{1/2+δ}} c d^α\, Γ \left[ \frac{2c t^{-1/2+δ}(α+1)}{b(1-t^{-1/2+δ})^{α+1}} \right]^{d} \le \sum_{d=1}^{2t^{1/2+δ}} c d^α\, Γ \left[ \frac{2c d^{-1/2+δ}(α+1)}{b(1-d^{-1/2+δ})^{α+1}} \right]^{d} = \sum_{d=1}^{2t^{1/2+δ}} B_d
Above we have used the relation that d ≤ t. Similar to what was done for S 0 , we upper bound this sum with an infinite sum.
S_1 \le \sum_{d=1}^{2t^{1/2+δ}} B_d \le \sum_{d=1}^{\infty} B_d
If we apply the ratio test to the terms of the infinite sum, we find that
\limsup_{d} \frac{B_d}{B_{d-1}} = \limsup_{d} \left( \frac{d}{d-1} \right)^{α} \frac{2c d^{-1/2+δ}(α+1)}{b(1-d^{-1/2+δ})^{α+1}} = 0
Again, the ratio test proves convergence of the sum S 1 .
We have now shown that the sum S = 1 + S 0 + S 1 is upper bounded by some finite S * . With this, we can lower bound the detection probability for the geometric tree.
\liminf_t P(C_t) \ge \liminf_t \inf_{G_t \in \mathcal{G}_t} \frac{1}{S} \ge \frac{1}{S^*} > 0
This completes the proof of Theorem 3.
Proof of Lemma 2
We utilize Theorem 3 to prove Lemma 2. First, we rewrite the distribution of the estimator v on a rumor graph G t formed after a rumor has spread for a time t.
P(\hat{v} = v) = \frac{R(v, G_t)}{\sum_{v' \in G_t} R(v', G_t)} = \frac{R(v, G_t)/R(v^*, G_t)}{\sum_{v' \in G_t} R(v', G_t)/R(v^*, G_t)} = \frac{ρ(v, G_t)}{\sum_{v' \in G_t} ρ(v', G_t)}
where ρ(v, G t ) is defined as follows using equation 27
ρ(v, G_t) = \prod_{v_i \in P(v^*, v)} \frac{T^{v^*}_{v_i}}{N - T^{v^*}_{v_i}}
We recognize the sum of ρ(v, G t ) over all v in G t as the sum S which was previously shown to converge to a positive constant S * . Now, let d( v, v * ) be the distance between the rumor source estimator and the rumor source. We can write the probability of the estimator error being greater than l hops as
P(d(\hat{v}, v^*) > l \mid G_t) = \frac{\sum_{v: d(v, v^*) > l} ρ(v, G_t)}{\sum_{v \in G_t} ρ(v, G_t)} = \frac{\sum_{v: d(v, v^*) > l} ρ(v, G_t)}{S}
We select an ε > 0 and define ε_1 = εS. Then, because of the convergence of the sum S, there exists an l ≥ 0 such that

\sum_{v: d(v, v^*) > l} ρ(v, G_t) \le ε_1 = εS
Now, using this result along with Theorem 4 we find the limiting behavior of the probability of the error being less than l hops:
\liminf_t P(d(\hat{v}, v^*) \le l) = 1 - \limsup_t P(d(\hat{v}, v^*) > l) = 1 - \limsup_t \sum_{G_t \in \mathcal{G}_t} P(d(\hat{v}, v^*) > l \mid G_t)\, P(G_t) \ge 1 - \limsup_t \frac{\sum_{v: d(v, v^*) > l} ρ(v, G_t)}{S} \cdot \limsup_t P(G_t \in \mathcal{G}_t) \ge 1 - \limsup_t \frac{εS}{S} \ge 1 - ε
Thus, for any positive ε, there will always be a finite l such that the probability of the estimator being within l hops of the rumor source is greater than 1 − ε, no matter how large the rumor graph is.
Simulation Results for General Graphs
This section provides simulation results for our rumor source estimators on two general graphs: a simulated grid graph and a real network. For the grid graph, several random rumor graph instances were generated on the underlying grid and the statistics of the rumor source estimator were collected. The real network we used is the marriage network of elite families in 15th century Florence. We find that our estimator performs extremely well for both networks.
Grid Graphs.
Grid graphs are not trees, so we must utilize the general graph rumor source estimator. We generated 100 instances of rumor graphs per rumor graph size on an underlying grid graph. To calculate the expectation value in equation (10), 1000 trees were generated per rumor graph. Figure 7 shows an example of a 100 node rumor graph on a grid. In this case, our estimator was able to find the rumor source exactly. Next is a plot of the detection probability of the estimator versus rumor graph size. We find that for rumor graphs with up to 100 nodes, the detection probability does not go to 0. Finally, we show a histogram of the estimator error for a 100 node rumor graph. As can be seen, we never obtain an error greater than 3 hops. This empirical data indicates that the general graph estimator should have good performance on general graphs.
Florentine Marriage Network: A Future Application
In order to see if our estimator can be applied to situations beyond finding rumor sources, we used it on the marriage network of elite families in 15th century Florence. This is a well known network in the social science literature. The links in this network represent a marriage between families. It is known that the Medici family wielded the most power and so was effectively the center of the network [13]. Even though there was no rumor spreading, our estimator found, rather surprisingly, that the Medici family was the source of this network. This indicates that our estimator may do more than just determine the rumor source. It may also indicate which nodes are important or influential in a network. The Florentine marriage network can be seen in Figure 8.
Conclusion and Future Work
We constructed estimators for the rumor source in regular trees, general trees, and general graphs. We defined the maximum likelihood estimator for a regular tree to be a new notion of network centrality which we called rumor centrality. We used rumor centrality as the basis for estimators for general trees and general graphs.
We analyzed the asymptotic behavior of the rumor source estimators for line graphs and geometric trees. For line graphs, it was shown that the detection probability goes to 0 as the network grows in size. However, for geometric trees, it was shown that the estimator detection probability is bounded away from 0 as the graph grows in size. Simulations performed on synthetic graphs agreed with these tree results and also demonstrated that the general graph estimator performed well. The general graph estimator was also able to predict the most powerful family in the 15th century Florentine elite family marriage network. This indicates that this estimator may be able to find influential nodes in networks in addition to finding rumor sources.
There are several future steps for this work. First, we would like to develop estimators when the spreading times are not identically distributed. Second, we would like to create a message passing algorithm for the general graph estimator in order for it to be applicable to distributed environments. Third, we would like to test our estimators on other real networks to accurately assess their performance.
Proof of Theorem 2
To prove the bound for t(1 − ε) arrivals in a Poisson process N(·) of rate 1, we first write down the exact probability of this event:

P(N(t) \le t(1 - ε)) = e^{-t} \sum_{i=0}^{t(1-ε)} \frac{t^{i}}{i!}
Next, we upper bound the sum by noting that its terms are monotonically increasing. To see this, we take the ratio of consecutive terms.
\frac{t^{i-1}/(i-1)!}{t^{i}/i!} = \frac{t^{i-1}\, i!}{(i-1)!\, t^{i}} = \frac{i}{t}
This ratio is less than 1 if i < t, which is true for the sum. Therefore, we upper bound the sum by taking all terms equal to the largest term.
P(N(t) \le t(1 - ε)) \le e^{-t} \sum_{i=0}^{t(1-ε)} \frac{t^{t(1-ε)}}{(t(1-ε))!} \le \bigl(t(1-ε)+1\bigr)\, e^{-t}\, \frac{t^{t(1-ε)}}{(t(1-ε))!}
We apply Stirling's approximation to the factorial in the denominator to obtain
P(N(t) \le t(1 - ε)) \le \frac{\bigl(t(1-ε)+1\bigr)\, e^{-t}\, t^{t(1-ε)}}{\sqrt{2πt(1-ε)}\, \bigl(t(1-ε)/e\bigr)^{t(1-ε)}} \le \sqrt{\frac{1-ε}{2π}} \sqrt{t + \frac{1}{t(1-ε)^2}}\; e^{-t(ε+(1-ε)\log(1-ε))} = a\sqrt{t+δ}\; e^{-t(ε+(1-ε)\log(1-ε))}    (36)
where we have defined a and δ as

a = \sqrt{\frac{1-ε}{2π}}, \qquad δ = \frac{1}{t(1-ε)^2}
Now, in order to simplify the exponent, we approximate log(1 − ε) by −ε for small ε. Inserting this into equation (36) we obtain the first part of Theorem 2.
P(N(t) \le t(1 - ε)) \le a\sqrt{t+δ}\; e^{-t(ε-(1-ε)ε)} = a\sqrt{t+δ}\; e^{-tε^2}
To prove the bound on t(1 + ε) arrivals, we use the Chernoff bound. For a θ > 0, we have

P(N(t) \ge t(1 + ε)) \le e^{-θt(1+ε)}\, \mathbb{E}\bigl[e^{θN(t)}\bigr]
For a Poisson process, the above expectation is \mathbb{E}\bigl[e^{θN(t)}\bigr] = e^{t(e^{θ}-1)}
We insert this into the Chernoff bound to obtain

P(N(t) \ge t(1 + ε)) \le e^{-t\left[θ(1+ε) - (e^{θ}-1)\right]}
To obtain the tightest possible bound, we maximize the expression inside the brackets in the exponent. The maximum is achieved for θ = log(1 + ε). Using ε as an approximation for log(1 + ε), we obtain the second result of Theorem 2.
P(N(t) \ge t(1 + ε)) \le e^{-tε^2}
| 10,009 |
0909.4441
|
1830065213
|
We consider multi-agent systems where agents' preferences are aggregated via sequential majority voting: each decision is taken by performing a sequence of pairwise comparisons where each comparison is a weighted majority vote among the agents. Incompleteness in the agents' preferences is common in many real-life settings due to privacy issues or an ongoing elicitation process. In addition, there may be uncertainty about how the preferences are aggregated. For example, the agenda (a tree whose leaves are labelled with the decisions being compared) may not yet be known or fixed. We therefore study how to determine collectively optimal decisions (also called winners) when preferences may be incomplete, and when the agenda may be uncertain. We show that it is computationally easy to determine if a candidate decision always wins, or may win, whatever the agenda. On the other hand, it is computationally hard to know whether a candidate decision wins in at least one agenda for at least one completion of the agents' preferences. These results hold even if the agenda must be balanced so that each candidate decision faces the same number of majority votes. Such results are useful for reasoning about preference elicitation. They help understand the complexity of tasks such as determining if a decision can be taken collectively, as well as knowing if the winner can be manipulated by appropriately ordering the agenda.
|
The most related work is @cite_7 . Like our paper, this considers the computational complexity of determining winners for sequential majority voting. However, they start from an incomplete majority graph, which throws away information about individual votes, whilst we start from an incomplete profile.
|
{
"abstract": [
"Preferences can be aggregated using voting rules. We consider here the family of rules which perform a sequence of pairwise majority comparisons between two candidates. The winner thus depends on the chosen sequence of comparisons, which can be represented by a binary tree. We address the difficulty of computing candidates that win for some trees, and then introduce and study the notion of fair winner, i.e. candidates who win in a balanced tree. We then consider the situation where we lack complete informations about preferences, and determine the computational complexity of computing winners in this case."
],
"cite_N": [
"@cite_7"
],
"mid": [
"1640822234"
]
}
| 0 |
||
0909.4441
|
1830065213
|
We consider multi-agent systems where agents' preferences are aggregated via sequential majority voting: each decision is taken by performing a sequence of pairwise comparisons where each comparison is a weighted majority vote among the agents. Incompleteness in the agents' preferences is common in many real-life settings due to privacy issues or an ongoing elicitation process. In addition, there may be uncertainty about how the preferences are aggregated. For example, the agenda (a tree whose leaves are labelled with the decisions being compared) may not yet be known or fixed. We therefore study how to determine collectively optimal decisions (also called winners) when preferences may be incomplete, and when the agenda may be uncertain. We show that it is computationally easy to determine if a candidate decision always wins, or may win, whatever the agenda. On the other hand, it is computationally hard to know whether a candidate decision wins in at least one agenda for at least one completion of the agents' preferences. These results hold even if the agenda must be balanced so that each candidate decision faces the same number of majority votes. Such results are useful for reasoning about preference elicitation. They help understand the complexity of tasks such as determining if a decision can be taken collectively, as well as knowing if the winner can be manipulated by appropriately ordering the agenda.
|
Pini et al. prove that computing the possible and necessary winners for the STV rule is NP-hard @cite_6 . They show it is NP-hard even to approximate these sets within some constant factor in size. They also give a preference elicitation procedure which focuses just on the set of possible winners.
|
{
"abstract": [
"We consider how to combine the preferences of multiple agents despite the presence of incompleteness and incomparability in their preference orderings. An agent's preference ordering may be incomplete because, for example, there is an ongoing preference elicitation process. It may also contain incomparability as this is useful, for example, in multi-criteria scenarios. We focus on the problem of computing the possible and necessary winners, that is, those outcomes which can be or always are the most preferred for the agents. Possible and necessary winners are useful in many scenarios including preference elicitation. First we show that computing the sets of possible and necessary winners is in general a difficult problem as is providing a good approximation of such sets. Then we identify general properties of the preference aggregation function which are sufficient for such sets to be computed in polynomial time. Finally, we show how possible and necessary winners can be used to focus preference elicitation."
],
"cite_N": [
"@cite_6"
],
"mid": [
"64851098"
]
}
| 0 |
||
0909.4441
|
1830065213
|
We consider multi-agent systems where agents' preferences are aggregated via sequential majority voting: each decision is taken by performing a sequence of pairwise comparisons where each comparison is a weighted majority vote among the agents. Incompleteness in the agents' preferences is common in many real-life settings due to privacy issues or an ongoing elicitation process. In addition, there may be uncertainty about how the preferences are aggregated. For example, the agenda (a tree whose leaves are labelled with the decisions being compared) may not yet be known or fixed. We therefore study how to determine collectively optimal decisions (also called winners) when preferences may be incomplete, and when the agenda may be uncertain. We show that it is computationally easy to determine if a candidate decision always wins, or may win, whatever the agenda. On the other hand, it is computationally hard to know whether a candidate decision wins in at least one agenda for at least one completion of the agents' preferences. These results hold even if the agenda must be balanced so that each candidate decision faces the same number of majority votes. Such results are useful for reasoning about preference elicitation. They help understand the complexity of tasks such as determining if a decision can be taken collectively, as well as knowing if the winner can be manipulated by appropriately ordering the agenda.
|
Finally, Brandt et al. consider different notions of winners starting from incomplete majority graphs @cite_2 . We plan to investigate these kinds of winners in our framework.
|
{
"abstract": [
"Social choice rules are often evaluated and compared by inquiring whether they fulfill certain desirable criteria such as the Condorcet criterion, which states that an alternative should always be chosen when more than half of the voters prefer it over any other alternative. Many of these criteria can be formulated in terms of choice sets that single out reasonable alternatives based on the preferences of the voters. In this paper, we consider choice sets whose definition merely relies on the pairwise majority relation. These sets include the Copeland set, the Smith set, the Schwartz set, von Neumann-Morgenstern stable sets (a concept originally introduced in the context of cooperative game theory), the Banks set, and the Slater set. We investigate the relationships between these sets and completely characterize their computational complexity which allows us to obtain hardness results for entire classes of social choice rules. In contrast to most existing work, we do not impose any restrictions on individual preferences, apart from the indifference relation being reflexive and symmetric. This assumption is motivated by the fact that many realistic types of preferences in computational contexts such as incomplete or quasi-transitive preferences may lead to general pairwise majority relations that need not be complete."
],
"cite_N": [
"@cite_2"
],
"mid": [
"1991733990"
]
}
| 0 |
||
0909.4569
|
2951099604
|
We study the envy free pricing problem faced by a seller who wishes to maximize revenue by setting prices for bundles of items. If there is an unlimited supply of items and agents are single minded then we show that finding the revenue maximizing envy free allocation pricing can be solved in polynomial time by reducing it to an instance of weighted independent set on a perfect graph. We define an allocation pricing as multi envy free if no agent wishes to replace her allocation with the union of the allocations of some set of other agents and her price with the sum of their prices. We show that it is coNP-hard to decide if a given allocation pricing is multi envy free. We also show that revenue maximization multi envy free allocation pricing is APX-hard. Furthermore, we give efficient algorithms and hardness results for various variants of the highway problem.
|
Much of the work on envy free revenue maximization is on item pricing rather than on subset pricing. @cite_13 give an @math -approximation for the general single minded problem, where @math is the number of items and @math is the number of agents. This result was extended by @cite_4 to an @math -approximation for arbitrary valuations and unlimited supply using single fixed pricing which is basically pricing all bundles with the same price. @cite_6 show that the general item pricing problem with unlimited availability of items is hard to approximate within a (semi-)logarithmic factor.
|
{
"abstract": [
"",
"We prove semi-logarithmic inapproximability for a maximization problem called unique coverage: given a collection of sets, find a subcollection that maximizes the number of elements covered exactly once. Specifically, we prove O(1 logσ(e)n) inapproximability assuming that NP n BPTIME(2ne) for some e > 0. We also prove O(1 log1 3-e n) inapproximability, for any e > 0, assuming that refuting random instances of 3SAT is hard on average; and prove O(1 log n) inapproximability under a plausible hypothesis concerning the hardness of another problem, balanced bipartite independent set. We establish matching upper bounds up to exponents, even for a more general (budgeted) setting, giving an Ω(1 log n)-approximation algorithm as well as an Ω(1 log B)-approximation algorithm when every set has at most B elements. We also show that our inapproximability results extend to envy-free pricing, an important problem in computational economics. We describe how the (budgeted) unique coverage problem, motivated by real-world applications, has close connections to other theoretical problems including max cut, maximum coverage, and radio broad-casting.",
"We consider the problem of pricing n items to maximize revenue when faced with a series of unknown buyers with complex preferences, and show that a simple pricing scheme achieves surprisingly strong guarantees. We show that in the unlimited supply setting, a random single price achieves expected revenue within a logarithmic factor of the total social welfare for customers with general valuation functions, which may not even necessarily be monotone. This generalizes work of Guruswami et. al [18], who show a logarithmic factor for only the special cases of single-minded and unit-demand customers. In the limited supply setting, we show that for subadditive valuations, a random single price achieves revenue within a factor of 2O(√(log n loglog n) of the total social welfare, i.e., the optimal revenue the seller could hope to extract even if the seller could price each bundle differently for every buyer. This is the best approximation known for any item pricing scheme for subadditive (or even submodular) valuations, even using multiple prices. We complement this result with a lower bound showing a sequence of subadditive (in fact, XOS) buyers for which any single price has approximation ratio 2Ω(log1 4 n), thus showing that single price schemes cannot achieve a polylogarithmic ratio. This lower bound demonstrates a clear distinction between revenue maximization and social welfare maximization in this setting, for which [12,10] show that a fixed price achieves a logarithmic approximation in the case of XOS [12], and more generally subadditive [10], customers. We also consider the multi-unit case examined by [1111] in the context of social welfare, and show that so long as no buyer requires more than a 1 -- e fraction of the items, a random single price now does in fact achieve revenue within an O(log n) factor of the maximum social welfare."
],
"cite_N": [
"@cite_13",
"@cite_6",
"@cite_4"
],
"mid": [
"",
"2010996873",
"2031502124"
]
}
|
Envy, Multi Envy, and Revenue Maximization
|
We consider the combinatorial auction setting where there are several different items for sale, not all items are identical, and agents have valuations for subsets of items. We allow the seller to have identical copies of an item. We distinguish between the case of limited supply (e.g., physical goods) and that of unlimited supply (e.g., digital goods). Agents have known valuations for subsets of items. We assume free disposal, i.e., the valuation of a superset is ≥ the valuation of a subset. Let S be a set of items, agent i has valuation v i (S) for set S. The valuation functions, v i , item-pricing [multi] envy-freeness is not automatic. Circumstances may arise where some agent has a valuation less than the price of some set of items she is interested in, but there is insufficient supply of these items. An envy-free solution must avoid such scenarios. Even so, for limited or unlimited supply, item pricing is envy-free if and only if item pricing is multi envy-free (this follows from the monotonicity of the item pricing).
For subset pricing, it does not necessarily follow that every allocation/pricing that is envy-free is also multi envy-free.
Although the definitions above are valid in general, we are interested in single minded bidders, and more specifically in a special case of single minded bidders called the highway problem ([13,2]), where items are ordered and agents bid for a consecutive interval of items.

Our Results

Table 1 gives gaps in revenue between item pricing (where envy freeness and multi envy freeness are equivalent), multi envy freeness, and envy freeness. These gaps are for single minded bidders, and the gaps between item pricing and multi envy free subset pricing are in the context of the highway problem. In all cases (single minded bidders or not)

Revenue([Multi] EF item pricing) ≤ Revenue(Multi EF subset pricing) ≤ Revenue(EF subset pricing) ≤ Social Welfare.
Clearly, if a lower bound holds for unlimited supply it also holds for (big enough) limited supply. All of our lower bound constructions are for single minded bidders; for single minded bidders with unlimited supply the bounds are almost tight, as (Social welfare)/(Envy-free item pricing) ≤ H_m + H_n.
This follows directly from [13], see below.
For limited supply, lower bound # 1 shows that for some inputs the revenue of [Multi] Envy-free item pricing can be significantly smaller (by a factor ≤ H_n/n or ≤ H_m/m) than the revenue of Multi envy-free subset pricing. This gap is smaller for unlimited supply: lower bound # 3 shows that for unlimited supply it is possible to achieve a ratio of 1/H_n or 1/log log m between the revenue of [Multi] Envy-free item pricing and that of Multi envy-free subset pricing. Lower bound # 2 in Table 1 shows a gap in revenue (1/H_n or 1/H_m) between Multi envy-free subset pricing and Envy-free subset pricing. This bound is for single minded bidders, but not for the highway problem.
We further give several hardness results and several positive algorithmic results:
1. For unlimited supply and single minded bidders, we show that finding the envy free allocation/pricing that maximizes revenue can be done in polynomial time. 2. We show that the decision problem of whether an allocation/pricing is multi envy free is coNP-hard. 3. We also show that finding an allocation/pricing that is multi envy free and maximizes the revenue is APX-hard. 4. For the highway problem, we show that if all capacities are O(1) then the (exact) revenue maximizing envy free allocation/pricing can be computed in polynomial time. I.e., the problem is fixed parameter tractable with respect to the capacity. 5. Again, for the highway problem with O(1) capacities, we give an FPTAS for revenue maximization under the more difficult Multi envy-free requirements.
The Highway Problem
For unlimited supply the problem is NP-hard (Briest and Krysta [4]). An O(log n)-approximation is given by [2]. When the length of each interval is bounded by a constant, or the valuation of each agent is bounded by a constant, Guruswami et al. [13] give a fully polynomial time approximation scheme (FPTAS). If the intervals requested by different agents have a nested structure then an FPTAS is possible [2,4].
For limited supply, when the number of available copies per item is bounded by C, Grigoriev et al. [11] propose a dynamic programming algorithm that computes the optimal solution in time O(n^{2C} B^{2C} m), where n is the number of agents, m the number of items, and B an upper bound on the valuations. For constant C, and by appropriately discretizing B, this algorithm can be used to derive an FPTAS for this version of the highway problem. However, the solution produced by this algorithm need not be envy-free. For the highway problem with uniform capacities (all capacities equal), [7] gives an O(log u_max) approximation algorithm; this algorithm does produce an envy-free allocation/pricing.
In our setting we have m agents and n items.
The capacity of an item is the number of (identical) copies of the item available for sale. The supply can be limited or unlimited. In the limited supply setting, the seller is allowed to sell up to some fixed number of copies of each item. In the unlimited supply setting, there is no limit on how many units of an item can be sold.
We consider single-minded bidders, where each agent has a valuation for a bundle of items, S_i, and has valuation 0 for all sets S that are not supersets of S_i. The valuation function for i, v_i, has a succinct representation as (S_i, v_i) where v_i = v_i(S_i). For every S such that S_i ⊆ S, v_i(S) = v_i(S_i) = v_i; for all other sets S′, v_i(S′) = 0.
If a_i = S and S_i ⊂ S, we can change the allocation to a_i = S_i and keep the same price. The original allocation/pricing is envy free if and only if the modified allocation is also envy free. Therefore, we can say, without loss of generality, that an allocation/pricing must either have a_i = ∅ and p_i(a_i) = 0, or a_i = S_i and p_i(a_i) ≤ v_i(S_i).
We denote by W = {i : a_i ≠ ∅} the set of winning agents, i.e., the set of agents for which a_i = S_i. Our goal is to find an allocation/pricing that maximizes \sum_{i ∈ W} p(S_i).
For single minded bidders, we say that agent i wins if i ∈ W . Otherwise, we say that i loses.
Fix the price function p. For unlimited supply, it is easy to see that the revenue maximizing winner set is W = {i : p(S_i) ≤ v(S_i)}. For limited supply, it must be that

{i : p(S_i) < v(S_i)} ⊆ W ⊆ {i : p(S_i) ≤ v(S_i)}.
Envy and Multi Envy for Single minded bidders
Observation. For single minded bidders, the definitions of envy free and multi envy free can be simplified as follows:
1. For any winning agent i and any collection of winning agents C such that S_i ⊆ ∪_{j∈C} S_j, the following must hold: p(S_i) ≤ Σ_{j∈C} p(S_j).
2. For any losing agent i and any collection of winning agents C such that S_i ⊆ ∪_{j∈C} S_j, the following must hold: v(S_i) ≤ Σ_{j∈C} p(S_j).
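To make the two conditions concrete, here is a small Python checker (ours; it ignores supply constraints and checks only the envy conditions, and its brute-force enumeration of covering collections is exponential, which is consistent with the coNP-hardness of the multi envy free decision problem shown later).

```python
from itertools import combinations

def is_multi_envy_free(requests, winners, price):
    """requests: {agent: (frozenset_of_items, valuation)};
    winners: set of allocated agents; price: {winning agent: price charged}.
    Checks conditions 1 and 2 above by brute force over collections of winners."""
    win = list(winners)
    for agent, (bundle, value) in requests.items():
        bound = price[agent] if agent in winners else value
        for r in range(1, len(win) + 1):
            for C in combinations(win, r):
                if agent in C:
                    continue
                covered = frozenset().union(*(requests[j][0] for j in C))
                if bundle <= covered and sum(price[j] for j in C) < bound:
                    return False
    return True

# Toy unlimited-supply instance: the pricing is envy free with respect to any
# single other agent, but agent 3 envies the pair {1, 2}.
requests = {1: (frozenset({'a'}), 3), 2: (frozenset({'b'}), 3),
            3: (frozenset({'a', 'b'}), 10)}
winners = {1, 2, 3}
price = {1: 3, 2: 3, 3: 8}
print(is_multi_envy_free(requests, winners, price))   # False: 8 > 3 + 3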
Revenue Gaps Between Models
In this section we show the gaps between the optimal solutions of the different models. It is clear that the item pricing envy free setting is at most as profitable as the subset pricing multi envy free setting, since any item pricing solution is also a multi envy free solution. We give two theorems that show the gaps between the two models when items are given in limited supply and when items are given in unlimited supply.
The following theorem corresponds to line #1 in Table 1.
Theorem 7
The maximal revenue of an envy-free item pricing in the limited supply highway setting may be as low as H_k/k times the maximal revenue achievable by a subset pricing that is multi envy free (where H_k is the k-th Harmonic number, and k is the maximal capacity of an item).
Proof. Consider a path of k segments where segment 1 has capacity k and, for i > 1, segment i has capacity n − i + 1. There are two groups of agents in the setting:
-k agents: {request [1, i] with valuation 1 | 1 ≤ i ≤ k}
-k − 1 agents: {request [i, i] with valuation 1/i | 2 ≤ i ≤ k}
See Figure 1 for an illustration.
Clearly the best an item pricing envy-free solution can achieve is H_k, by assigning a price of 1/i to segment i. A subset pricing multi envy free solution can price all intervals on the path at 1 and get a revenue of k.
The number of requests is O(k), therefore the gap is O(m/H m ).
We now deal with the gaps between the item pricing setting, the multi envy free setting and the envy free setting when items are given in unlimited supply. It is clear that any item pricing solution is also a multi envy free solution and that any multi envy free solution is also an envy free solution. The question is how big the gap can be: how much more can a seller profit by choosing another concept of envy? Guruswami et al. [13] give an O(log m + log n)-approximation for the general item pricing problem. The upper bound used is the sum of all valuations. Since any item pricing solution is also multi envy free and since the sum of all valuations can be used as an upper bound on multi envy free and envy free solutions, we conclude that the gap between any pair of these problems is upper bounded by O(log m + log n). We show lower bounds on the gaps.
Theorem 8 Multi envy free subset pricing setting for the unlimited supply problem (even for a highway) can improve the optimal item pricing solution by a factor of O(log log m) where m is the number of requests.
Proof. We construct the following agent requests along a path. We assume for simplicity that n, the number of items, is a power of 2, so n = 2^k. We have k + 1 layers of requests, starting from layer 0 up to layer k. In layer i there are 2^i equal sized requests, each for 2^{k−i} items, with valuations as shown in Fig. 2.
The following theorem corresponds to line #2 in Table 1.
S_i = V \ {i} with valuation v_i = 1/i. Since every request is a subset of the union of any two other requests, the minimal v_i + v_j, where i and j are both winners, is an upper bound on the revenue that can be achieved from any allocated request in the multi envy free setting. Let the two lowest winning valuations be 1/i_α and 1/i_β. All requests with lower valuations are not allocated. All requests with valuation higher than 1/i_α and 1/i_β can have price at most 1/i_α + 1/i_β. Therefore no multi envy free allocation/pricing can achieve more than revenue 2.
On the other hand, in the envy free setting, an allocation of each request at price p i = v i is valid and achieves revenue of H m . Therefore, the ratio between maximal revenue achieved in the multi envy free setting vs. the maximal revenue achieved in the envy free setting can be as low as O(1/ log m).
6 Polynomial time envy-free revenue maximization (unlimited supply, single minded bidders)
We discuss in this section the classical envy free approach where each agent may envy only one other agent (not a group of other agents). Therefore we seek an allocation/pricing such that Observation 4.1 is valid.
With limited supply of items, a reduction from independent set can be used to show that the problem of maximizing revenue is hard to approximate within m^{1−ǫ}. Grigoriev et al. [11] show that it is NP-complete to approximate the maximum profit limited supply item pricing solution to within a factor m^{1−ǫ}, for any ǫ > 0, even when the underlying graph is a grid. The same construction can be used here.
For the unlimited supply setting we show that:
Theorem 10 For single minded bidders the revenue maximizing envy free allocation/pricing with unlimited supply can be computed in polynomial time.
Our idea is to make use of the fact that allocating a certain request at price p means that any request that is a superset and has valuation < p must not be allocated. We transform all requests into a vertex-weighted perfect graph H and then compute the revenue maximizing allocation/pricing by computing a maximum weight independent set on H (which can be done in polynomial time for perfect graphs). A similar construction using the hierarchical nature of a pricing was done by Chen et al. [6] for envy free pricing of products over a graph with metric substitutability. As before, agent i seeks bundle S_i and has valuation v_i.
Construction of Graph H
For each i ∈ {1, . . . , m}, define
A(i) = {1 ≤ j ≤ m|S i ⊆ S j and v j < v i }.
The reason we consider only requests with lower valuations is that when picking i's price only these allocations are at risk. Given price p for agent i, all requests in A(i) with valuation < p cannot be allocated at any price. For each agent i , define an ordering π i on A(i) in non-decreasing order of valuation. I.e., for each pair j, k such that 1 ≤ j ≤ k ≤ n i , where n i = |A(i)|, the valuations must be ordered, v π i (j) ≤ v π i (k) (ties are broken arbitrarily).
We construct an undirected vertex-weighted graph H as follows. For each i ∈ V we associate n i + 1 weighted vertices in H. These vertices constitute the T (i) component, the vertices of which are {i 1 , i 2 , ..., i n i +1 }. The set of all H vertices is defined as i∈V T i . The weight of each vertex in T (i) is:
w(i_1) = v_{π_i(1)}
w(i_2) = v_{π_i(2)} − v_{π_i(1)}
. . .
w(i_{n_i}) = v_{π_i(n_i)} − v_{π_i(n_i−1)}
w(i_{n_i+1}) = v_i − v_{π_i(n_i)}
By definition of A(i) and π i , all weights are non-negative and
\sum_{j \in T(i)} w(j) = v_i.
For each T (i) component we connect the various vertices of T (i) to components T (j) such that j ∈ A(i) (connecting a vertex i k ∈ T (i) to a component T (j) means connecting i k to all vertices of T (j) by edges) as follows. Vertex i n i +1 is connected to all components T (j) for j ∈ A(i).
Vertex i k is connected to all components T (π i (j)) for each j such that 1 ≤ j < k. For instance, vertex i 2 is connected to component T (π i (1)), vertex i 3 is connected to components T (π i (1)) and T (π i (2)), and so on.
As i is not contained in A(i) there can't be self edges. It is easy to see that for any a < b, i b is connected to each component that i a is connected to (and maybe to some additional components). See Figure 3 for the exact construction of edges from a component t(i). Figure 4 shows an example of transforming a pricing problem into the graph H.
Lemma 11
The value of the maximum weighted independent set on H is equal to the revenue obtained from the optimal envy free pricing of the original problem.
Proof. When picking vertices for the maximal independent set instance, in every component T (i) one must choose vertices whose sum equals the valuation of agent j, where j ∈ A(i) or j = i. A component that none of its vertices were picked means that agent i was not allocated her set. The construction of H ensures that the pricing is envy-free. We prove that a maximal revenue envy-free allcation/pricing can be translated into an independent set in H and that a maximal independent set in H can be translated into a revenue maximizing envy free allocation/pricing.
(envyf ree ⇒ IS) We show how to construct an independent set solution in H from the optimal allocation/pricing of the original pricing instance. It is easy to see that the price p i is equal to one of the valuations v j for j such that j ∈ A(i) or j = i (otherwise the prices can be increased). We will pick vertices in T (i) to achieve price of p i . Let us assume that p i = v j for some j such that j ∈ A(i) ∪ {i}. We will pick all vertices of i k ∈ T (i) such that k ≤ π(j). By construction of T (i) our pick gives an accumulated value of v j = p i . As we have a valid pricing we can assume that ∀i, j : S i ⊂ S j ⇒ p i ≤ p j . Let us assume by contradiction that our pick is not a valid independent set. It follows that there are two vertices i k and j m such that j ∈ A(i) and there is an edge between them. Since edges are drawn from i k to all components that represent requests that have lower valuation than 1≤t<k w(i t ) we get that p j must be less than p i (p i ≥ 1≤t<k w(i t )). This contradicts our assumption.
(IS ⇒ envyf ree) Assuming we have an optimal independent set solution in H. By H construction, in each component T (i) any node i k is connected to all neighbors of i m for m < k. Therefore the vertices set that are picked as part of the independent set in each component T (i) is of the form {i k |k ≤ i max }. We transform the independent set solution into a pricing as follows:
-Agent i such that none of T (i)'s vertices were picked receives nothing.
-Agent i such that the vertices {i k |k ≤ i max } were picked in T (i) receives S i at price k≤i max w(i k )
T (i) T (πi(3)) T (πi(ni)) T (πi(1)) T (πi(ni−1)) i1 i2 i3 i4 in i in i+1 T (πi(2))
Fig. 3. Construction of H
Assume that the pricing is not envy-free and we have requests i, j such that S i ⊂ S j and p i > p j . By the construction of H we can assume that p i and p j are equal to v i ′ and v j ′ such that i ′ ∈ A(i) and j ′ ∈ A(j). We picked from T (i) vertices {i k : 1 ≤ k ≤ i max } such that 1≤k≤i max w(i k ) = v i ′ . The same goes for T (j).
Let's inspect the vertices in T (j) that should have been picked in order to make p j ≥ p i .
Define J as the minimal set of vertices in T (j) of the form {j k |j max < k ≤ t} such that 1≤k≤t w(j k ) ≥ p i . The vertices of J has outgoing edges (from i to A(i)) into components T (k) such that k ∈ A(i) (requests that are superset of j and i and have lower valuation than i).
The vertices reachable from J by the outgoing edges are reachable also by i i max . Hence we are guaranteed that they are not picked. Also we know that all vertices that have edges into J were not picked as picking one of them would have prevented picking any of T (j)'s vertices.
Therefore there is no reason for the independent set not to pick also the vertices of J and increase the independent set value. This contradicts the maximality of the independent set solution.
Lemma 12 H is a comparability graph
Proof. Let us direct the edges of H. In case of edge from node of T (i) to node of T (j) the direction will be from i to j if j ∈ A(i).
This orientation assignment results a directed graph with transitivity: if we have directed edge from i α to j β and from j β to k γ , we show that there must be directed edge from i α to k γ .
Note that since k ∈ A(j) and j ∈ A(i) then k ∈ A(i) as A(j) ⊂ A(i). The fact that there is directed edge from j β to T (k) means that v j > v k , therefore in π i order it will also be higher (And to the right in Figure 3).
Since i α is connected to all components T (π i (l)) such that 1 ≤ l < α and i α is also connected to T (j), i α must be connected to T (k) as well (as π −1 i (k) < π −1 i (j)). Clearly i α is connected to each node of T (k) including k γ .
We've shown that the edges of the graph can be oriented so that the transitivity property is maintained. Therefore the graph is a comparability graph.
A graph is said to be perfect if the chromatic number (the least number of colors needed to color the graph) of every induced subgraph equals the clique number of that subgraph. Showing that H is a comparability graph implies the following corollary:
Corollary 13 H is a perfect graph.
We know that the maximal weighted independent set can be solved in polynomial time on perfect graphs [12]. By Lemma 11 and Lemma 12 we conclude that finding the optimal envy free allocation/pricing in the most general single minded setting can be done in polynomial time. This completes the proof of Theorem 10.
Hardness of multi envy-free allocation/pricing
In this section we show the following hardness results:
-The problem of deciding whether a certain pricing assignment is multi envy free is coNP -hard. -Maximizing revenue from single minded agents subject to multi envy free pricing is APX -hard.
Theorem 14 The problem of deciding whether a certain pricing assignment is multi envy free is coNP-hard.
Proof. We show a polynomial reduction from VERTEX-COVER(k) to our decision problem. Assume that there is an algorithm A that confirm the envy-freeness of a given subset pricing. The NP-hard problem of VERTEX-COVER(k) can be reduced to this problem. The building of subset pricing from VERTEX-COVER(k) instance is as follows:
-Each edge turns into an item.
-Each vertex turns into a set with price 1.
-Give the price k − 1 to the set of all items.
(Note that this is a limited supply setting where each item is chosen by 3 subsets at most.)
Clearly A confirms that this instance is not legal if and only if there is a vertex cover to the VERTEX-COVER instance of size k.
Since VERTEX-COVER is NP-hard this imply that The problem of deciding weather a certain pricing is envy-free is coNP-hard.
Note that even though deciding whether a pricing is multi envy free is hard, finding such a pricing can be approximated. Balcan et al. [3] showed O(log m + log n)-approximation for arbitrary bundle valuations and unlimited supply using single fixed pricing which is basically pricing all bundles at the same price. Such a pricing is multi envy free pricing as well.
Theorem 15 Maximizing revenue from single minded agents subject to multi envy free pricing is APX-hard, even when all agents are interested in at most two items.
Proof. We show that finding the optimal multi envy free pricing is APXhard by a reduction from MAX-2SAT. Given a MAX-2SAT instance we build multi envy free allocation instance as follows.
Let's denote C as the number of clauses in the SAT and C (v) the number of clauses containing variable v.
Items in the allocation problems are:
-For each literal we have an item to sell. Thus for each variable there are two items, one for the variable and one for its negation.
Agents in the allocation are: We show that there is a pricing with revenue at least 314C + k if and only if there is a solution to the 2SAT instance that satisfies at least k clauses.
Let us prove the easy direction. Assume that there is a solution to the MAX-2SAT problem that satisfy at least k clauses. Assign a price of 3 to each request for a literal that is set to true in the solution. Assign a price of 2 to each request for literals that are set to false in the solution. Assign a price of 5 for each clause request. Each variable request is priced at 5. This gives a valid multi envy free pricing and we can verify that the revenue of it is 314C + k.
For the other direction, let p be a pricing with maximum revenue, and assume the the revenue is at least 314C + k. The optimal way to price variable and literal agents for variable v is be pricing one literal at 2 and the other at 3, in that way the variable requests for both literals is priced at 5 and the revenue from the variable and literals agents is 155C (v) . By the optimality of p, since variable and literals agents are always more profitable than the clause agents, all variables must be priced in this manner. Each clause can be priced at 4 if both its literals are priced at 2, or by 5 if one (at least) of its literals is priced at 3. In total this means that from pricing allocation of revenue 314C + k defines a natural assignment to the MAX-2SAT problem by making literals priced at 3 to be true.
Because the maximum 2-SAT solution satisfy at least 1 4 of the clauses, we are seek for a case where k ≥ c 4 . Some straightforward calculations shows that a 1256+η 1257 approximation of the multi envy free pricing/allocation problem would yield an η-approximation to the MAX-2-SAT. This proves that multi envy free pricing/allocation problem is NP-hard to approximate to within 1256+η 1257 , where η = .943 is the approximation hardness constant for MAX-2SAT problem shown in [14].
The Highway Problem
The highway problem is the vertex problem pricing in the special case where the vertices are numbered 1, ..., n and each agent is interested in an interval [i, j].
Multi Envy Free Hardness Results
Theorem 16 Multi envy free allocation/pricing for the highway problem is in NP.
Proof. We give a polynomial time algorithm that verifies that a given pricing and allocation is envy free. The algorithm builds a directed graph over the same nodes of the highway, where for each segment I there is an edge for any allocated request i that contains I, with weight v i . Then the algorithm computes the shortest path for any allocated request's segment in order to find irregularities. See Algorithm 1.
Algorithm 1 Verifying a given allocation/pricing on the highway to be multi envy free 1. Create directed graph G where for each allocated bundle of price p there are directed edges between the first node and all other nodes in the bundle with weight p ( see Figure 5) 2. For each allocated bundle of price q and for each unallocated bundle with valuation q do: -Compute the shortest path in G between the first and last nodes of the bundle -If q is higher than the shortest path, return false (the allocation/pricing is not multi envy free) 3. return true Every path of weight w can be translated into a set of allocated bundles of total price w and vice versa. Therefore is there is no path shorter than q there is no set of requests that has total total price lower than q and the agent is not envious. In the other direction it is easy to see that if the algorithm finds a path which is shorter than q then this path can be translated into a set of agents that get the bundle at a lower price than q.
Theorem 17 The problem of finding the revenue maximizing multi envy free solution for the highway problem is NP-hard.
Proof. We show a polynomial time reduction from PARTITION, similar to the reduction of item pricing highway problem shown in [4]. The input to the partition instance is a multiset of weights I = {w i }. For weight w i we construct a weight component on the highway W i which consist of three agents interested in item i:
-Request at price w i -Another request at price w i -Request at price 2w i In addition another two agents are interested in purchasing all the items. Both of them with valuation 3 2 w i . See Figure 6. The weight obtained from component W i can be 2w i , or 3w i . 3w i is achieved by pricing the item at w i and accepting all agents with price w i each. 2w i is achieved by pricing the item at 2w i and accepting only the third agent. In order to profit from the full valuations of the two agents interested in all items we need that the total price of all items be no more than 3 2 w i . It can be argued that the maximum revenue is earned when there is a partition between components, when some of them earn 3w i and some 2w i and the two agents interested in all items pay their full valuations. There is a pricing that reach revenue of 9 2 if and only if there is a partition of I into S and I \ S such that the sum of weights in both is equal.
O(1) Edge Capacities, Multi Envy Free Highway Allocation/Pricing
In the problem on a path with limited edge capacities, each edge e ∈ E can accommodate no more than c e allocated requests. Let C = max c e .
Somewhat inspired by what's done in [11] we show that we can solve this problem in time O(m 2C B 2C 2 n) by finding a longest path in an acyclic digraph.
Here is a useful definition of winner multi envy free allocation/pricing and a lemma that shows how to transform winner multi envy free allocation/pricing to multi envy free allocation/pricing.
Definition 18 An allocation/pricing is winner multi envy free if for any winning agent i, its set S i is not a subset of a union of other sets (of winning agents) for which the sum of the prices is strictly less than the price of the set S i .
Lemma 19 Assume we have a winner multi envy free allocation/pricing for a subset pricing instance over a highway with revenue R. A multi envy free allocation/pricing can be computed in time O(m 2 ) with revenue ≥ R.
Proof. Given an allocation/pricing that is winner multi envy free, one can convert it into an allocation/pricing that achieves at least the same revenue and is multi envy free.
For each agent i such that a i = ∅ compute the cheapest (by pricing) collection B of winning agents such that S i ⊂ ∪ j∈P a j . This can be done by using shortest path algorithm on winning agents prices in the same way as Algorithm 1. If v i (S i ) ≥ j∈B p(a j ) then agent i obeys the condition required for unallocated agents in multi envy free allocations.
If v i (S i ) < j∈B p(a j ), then we perform the following steps:
-Compute the cheapest (by valuation) collection C of winning agents such that S i ⊂ ∪ j∈C a j (this can be done by the same way as before). -For each agent j in C, if p(j) < v j , set p(j) = v j and change the prices of any allocated agent k such that S j S k = ∅ from p(a k ) to the accumulated payment amount of the minimal weight (by payment) collection of agents that is superset of S k . C contains the agents that are part of the cheapest set of agents among their path, therefore none of the agent prices can exceed their valuation. -If v i (S i ) < j∈C p(a j ) then we assign agent i the set S i , set p i (S i ) = j∈C p(a j ), and set a j = ∅ for j ∈ C. Since each agent j in C was allocated with p(a j ) = v j , making a j = ∅ does not make j envious.
By doing each replacement we clearly still have a winner multi envy free solution. In addition, for each j ∈ C there is no cheaper set of agents for S j (otherwise these agents would have composed C instead of j), Therefore none of the agents in C is envious after the switch.
After m iterations the solution is multi envy free.
Theorem 20 For a highway with n elements and m agents with maximal valuation B where the capacities of the edges are ≤ C, there is a O(m 2C B 2C 2 n) time algorithm for the profit maximization multi envy free problem on a path.
Proof. We create an n-layered digraph D with an additional source s and sink t, layers 0 and n + 1, respectively. There are arcs only between layers that represents neighboring items on the highway. Hence, in any s → t path, there are exactly n + 2 nodes.
In each node in layer e, corresponding to item e, we store all winning agents j that are accommodated by edge e. We store the total amounts all these agents spend on all items (network links) in their path. Moreover, we store for each pair < i, j > the value of the shortest possible path between first edge of i to current edge that accommodates j. Basically these values can be thought of as matrix (A) i,j of size ≤ C 2 that holds in each cell the shortest paths between first edge of i to current edge that accommodates j, then the diagonal (i, i) represents the amount spent by i itself.
Any node x (more precisely, the path s → x) in the digraph represents a feasible partial solution. Arcs from node x of layer e to node y of layer e + 1 are only introduced if the path s → y represents a feasible extension of the partial solution represented by the path s → x. The weight on an arc that connects a node of layer e to a node of layer e + 1 is equal to the profit earned on edge e + 1, that is, the total amount that the new introduced allocated agents of edge e + 1 pay.
Therefore, the weight of the longest s → t path in digraph D is equal to the maximum total profit. Moreover, the set of winning agents can be reconstructed from the longest s → t path. Algorithm 2 shows a more formal description. The allocated agents in this allocation do not envy each other, however they can be envied by the losers (the allocation/pricing is winner multi envy free). Lemma 19 shows how to overcome this issue and produce multi envy free allocation/pricing.
Lemma 21
There is a O(m 2C B 2C 2 n) time algorithm that produces optimal winner multi envy free allocation/pricing for the profit maximization problem on the highway.
Proof. Recall that C is an upper bound on the edge capacities. Consider path P from s to t in D. The winner set is the union of all winning agents of nodes of P. By the construction of D its nodes on level e can't accommodate more than c e and agent i can't get item e that does not belong to S i . By condition 1 of the arcs definition an agent can be allocated with her entire bundle or with an empty bundle. By definition of the nodes set (condition 1) all allocations with higher price than agent's valuation are removed. By condition 2 of the node set no agent can envy other agents in P.
We showed that P is giving a legal allocation/pricing that is winner multi envy free. Since the total weight of P gives the revenue (each winning agent price is summed once, when it first appears) we get that the heaviest path P yields an optimal solution.
The size of D is Each edge in the original graph translated to layer of nodes in D. For a layer there are at most m C possible subsets of size ≤ max{c e , |J e |} which is multiplied by the size of k U e k which is bounded by B C 2 . Therefore there are at most m C B C 2 nodes in a layer. Each node in a layer has at most n C B C 2 edges to nodes in the next layer, this gives total of m 2C B 2C 2 arcs between two consecutive layers. This means that there are at most m 2C B 2C 2 n arcs in D. The computation time to find the longest path in D is linear in the number of arcs, since D is acyclic [1].
We continue with the proof of Theorem 20. By Lemma 21 we can build a winner multi envy free optimal solution to the problem in time O(m 2C B 2C 2 n). Then if we use simple algorithm (similar to Algorithm 1) that computes the smallest valuation collection of winners for each envious losing agent, from Lemma 19 we get an O(m 2C B 2C 2 n) algorithm as required.
FPTAS for Highway Revenue, O(1) Edge Capacities, Multi Envy Freeness
We next show how to turn the dynamic programming algorithm into a fully polynomial time approximation scheme (FPTAS); that is, for any ǫ > 0, we have an algorithm that computes a solution with profit at least (1 − ǫ) times the optimum profit, in time polynomial in the input and 1 ǫ . To that end, we just apply the dynamic programming algorithm on a rounded instance in which the agents' valuations are b ′ j = ⌊b j /K⌋ where K := (ǫB/m(n + 1)) for ǫ > 0.
We show an FPTAS for the problem of finding an optimal winner multi envy free solution. By Lemma 19 we also get an FPTAS for the multi envy free problem as well.
Let us denote by (W, p) an allocation of winners (W ) and prices for bundles (p). Let (W, p) denote the revenue of the instance (W, p).
Fig. 4.
Step by step example of turning pricing problem into the graph H. In the bundle requests, each agent would like to buy a set of products (the black balls) as long as its price is less than her valuation (the numbers in the bundle requests are valuations). A can be seen as a dependency graph where there is a vertical edge from each request i up to the requests of A(i) (note there is no edge between 2 and 7 since 2 ≤ 7). At the last step the dependency graph is translated into the graph H as defined.
| 6,852 |
0909.4569
|
2951099604
|
We study the envy free pricing problem faced by a seller who wishes to maximize revenue by setting prices for bundles of items. If there is an unlimited supply of items and agents are single minded then we show that finding the revenue maximizing envy free allocation/pricing can be done in polynomial time by reducing it to an instance of weighted independent set on a perfect graph. We define an allocation/pricing as multi envy free if no agent wishes to replace her allocation with the union of the allocations of some set of other agents and her price with the sum of their prices. We show that it is coNP-hard to decide if a given allocation/pricing is multi envy free. We also show that revenue maximization under multi envy free allocation/pricing is APX-hard. Furthermore, we give efficient algorithms and hardness results for various variants of the highway problem.
|
For unlimited supply the problem is NP-hard (Briest and Krysta @cite_11 ). An @math -approximation is given by @cite_3 . When the length of each interval is bounded by a constant or the valuation of each agent is bounded by a constant, @cite_13 give a fully polynomial time approximation scheme (FPTAS). If the intervals requested by different agents have a nested structure then an FPTAS is possible @cite_3 @cite_11 .
|
{
"abstract": [
"",
"We present approximation and online algorithms for a number of problems of pricing items for sale so as to maximize seller's revenue in an unlimited supply setting. Our first result is an O(k)-approximation algorithm for pricing items to single-minded bidders who each want at most k items. This improves over recent independent work of Briest and Krysta [5] who achieve an O(k2) bound. For the case k = 2, where we obtain a 4-approximation, this can be viewed as the following graph vertex pricing problem: given a (multi) graph G with valuations we on the edges, find prices pi ≥ 0 for the vertices to maximize Σ (pi + pj). e=(i,j):we ≥ pi + pj.We also improve the approximation of [11] from O(log m + log n) to O(log n), where m is the number of bidders and n is the number of items, for the \"highway problem\" in which all desired subsets are intervals on a line.Our approximation algorithms can be fed into the generic reduction of [2] to yield an incentive-compatible auction with nearly the same performance guarantees so long as the number of bidders is sufficiently large. In addition, we show how our algorithms can be combined with results of Blum and Hartline [3], [4], and Kalai and Vempala [13] to achieve good performance in the online setting, where customers arrive one at a time and each must be presented a set of item prices based only on knowledge of the customers seen so far.",
"We deal with the problem of finding profit-maximizing prices for a finite number of distinct goods, assuming that of each good an unlimited number of copies is available, or that goods can be reproduced at no cost (e.g., digital goods). Consumers specify subsets of the goods and the maximum prices they are willing to pay. In the considered single-minded case every consumer is interested in precisely one such subset. If the goods are the edges of a graph and consumers are requesting to purchase paths in this graph, then we can think of the problem as pricing computer network connections or transportation links.We start by showing weak NP-hardness of the very restricted case in which the requested subsets are nested, i.e., contained inside each other or non-intersecting, thereby resolving the previously open question whether the problem remains NP-hard when the underlying graph is simply a line. Using a reduction inspired by this result we present an approximation preserving reduction that proves APX-hardness even for very sparse instances defined on general graphs, where the number of requests per edge is bounded by a constant B and no path is longer than some constant l. On the algorithmic side we first present an O(log l + log B)-approximation algorithm that (almost) matches the previously best known approximation guarantee in the general case, but is especially well suited for sparse problem instances. Using a new upper bounding technique we then give an O(l2)-approximation, which is the first algorithm for the general problem with an approximation ratio that does not depend on B."
],
"cite_N": [
"@cite_13",
"@cite_3",
"@cite_11"
],
"mid": [
"",
"2159007113",
"2062453462"
]
}
|
Envy, Multi Envy, and Revenue Maximization
|
We consider the combinatorial auction setting where there are several different items for sale, not all items are identical, and agents have valuations for subsets of items. We allow the seller to have identical copies of an item. We distinguish between the case of limited supply (e.g., physical goods) and that of unlimited supply (e.g., digital goods). Agents have known valuations for subsets of items. We assume free disposal, i.e., the valuation of a superset is ≥ the valuation of a subset. Let S be a set of items; agent i has valuation v i (S) for set S. The valuation functions are denoted v i . Under limited supply, item-pricing [multi] envy-freeness is not automatic: circumstances may arise where some agent has a valuation no less than the price of some set of items she is interested in, but there is insufficient supply of these items. An envy-free solution must avoid such scenarios. Even so, for limited or unlimited supply, item pricing is envy-free if and only if item pricing is multi envy-free (this follows from the monotonicity of the item pricing).
For subset pricing, it does not necessarily follow that every allocation/pricing that is envy-free is also multi envy-free.
Although the definitions above are valid in general, we are interested in single minded bidders, and more specifically in a special case of single minded bidders called the highway problem ([13,2]), where items are ordered and agents bid for a consecutive interval of items.
Our Results
Table 1 gives gaps in revenue between item pricing (where envy freeness and multi envy freeness are equivalent), multi envy freeness, and envy freeness. These gaps are for single minded bidders, and the gaps between item pricing and multi envy free subset pricing are in the context of the highway problem. In all cases (single minded bidders or not):
Revenue([Multi] EF item pricing) ≤ Revenue(Multi EF subset pricing) ≤ Revenue(EF subset pricing) ≤ Social Welfare.
If a lower bound holds for unlimited supply it clearly also holds for (big enough) limited supply. All of our lower bound constructions are for single minded bidders; for single minded bidders with unlimited supply the bounds are almost tight, as (Social welfare)/(Envy-free item pricing) ≤ H m + H n .
This follows directly from [13], see below.
For limited supply, lower bound #1 shows that for some inputs the revenue of [Multi] Envy-free item pricing can be significantly smaller (by a factor ≤ H n /n or ≤ H m /m) than the revenue of Multi envy-free subset pricing. This gap is smaller for unlimited supply: lower bound #3 shows that for unlimited supply it is possible to achieve a ratio of 1/H n or 1/ log log m between the revenue of [Multi] Envy-free item pricing and that of Multi envy-free subset pricing. Table 1 also shows a gap in revenue (1/H n or 1/H m ) between Multi envy-free subset pricing and Envy-free subset pricing. This bound is for single minded bidders, but not for the highway problem.
We further give several hardness results and several positive algorithmic results:
1. For unlimited supply and single minded bidders, we show that finding the envy free allocation/pricing that maximizes revenue can be done in polynomial time.
2. We show that the decision problem of whether an allocation/pricing is multi envy free is coNP-hard.
3. We also show that finding an allocation/pricing that is multi envy free and maximizes the revenue is APX-hard.
4. For the highway problem, we show that if all capacities are O(1) then the (exact) revenue maximizing envy free allocation/pricing can be computed in polynomial time. I.e., the problem is fixed parameter tractable with respect to the capacity.
5. Again, for the highway problem with O(1) capacities, we give an FPTAS for revenue maximization under the more difficult Multi envy-free requirements.
The Highway Problem
For unlimited supply the problem is NP-hard (Briest and Krysta [4]). An O(log n)-approximation is given by [2]. When the length of each interval is bounded by a constant or the valuation of each agent is bounded by a constant, Guruswami et al. [13] give a fully polynomial time approximation scheme (FPTAS). If the intervals requested by different agents have a nested structure then an FPTAS is possible [2,4].
For limited supply, when the number of available copies per item is bounded by C, Grigoriev et al. [11] propose a dynamic programming algorithm that computes the optimal solution in time O(n^{2C} B^{2C} m), where n is the number of agents, m the number of items, and B an upper bound on the valuations. For C constant, and by appropriately discretizing B, this algorithm can be used to derive an FPTAS for this version of the highway problem. However, the solution produced by this algorithm need not be envy-free. For the highway problem with uniform capacities, where all capacities are equal, [7] gives an O(log u max ) approximation algorithm; this algorithm does produce an envy-free allocation/pricing.
In our setting we have m agents and n items.
The capacity of an item is the number of (identical) copies of the item available for sale. The supply can be unlimited or limited. In the limited supply setting, the seller is allowed to sell only up to some fixed number of copies of each item. In the unlimited supply setting, there is no limit on how many units of an item can be sold.
We consider single-minded bidders, where each agent has a valuation for a bundle of items, S i , and has valuation 0 for all sets S that are not supersets of S i . The valuation function for i, v i , has a succinct representation as (S i , v i ) where v i = v i (S i ). For every S such that S i ⊆ S, v i (S) = v i (S i ) = v i ; for all other sets S ′ , v i (S ′ ) = 0.
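For concreteness, here is a minimal Python sketch (our own illustration, not from the paper; the names SingleMindedBid and value are hypothetical) of the succinct representation (S i , v i ) and of evaluating v i (S):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SingleMindedBid:
    """Succinct representation (S_i, v_i) of a single minded bidder."""
    bundle: frozenset     # S_i, the desired set of items
    valuation: float      # v_i = v_i(S_i)

    def value(self, s):
        """v_i(S): equal to v_i whenever S is a superset of S_i, and 0 otherwise."""
        return self.valuation if self.bundle <= frozenset(s) else 0.0

# Example: an agent who wants items {1, 2} at valuation 5.
bid = SingleMindedBid(bundle=frozenset({1, 2}), valuation=5.0)
assert bid.value({1, 2, 3}) == 5.0   # S is a superset of S_i
assert bid.value({1, 3}) == 0.0      # S is not a superset of S_i
```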
If a i = S and S i ⊂ S, we can change the allocation to be a i = S i and keep the same price. The original allocation/pricing is envy free if and only if the modified allocation is also envy free. Therefore, we can say, without loss of generality, that an allocation/pricing must either have
a i = ∅ and p i (a i ) = 0 or a i = S i and p i (a i ) ≤ v i (S i ).
We denote by W = {i : a i ≠ ∅} the set of winning agents, i.e., the agents i allocated S i . Our goal is to find an allocation/pricing that maximizes Σ i∈W p(S i ).
For single minded bidders, we say that agent i wins if i ∈ W . Otherwise, we say that i loses.
Fix the price function p. For unlimited supply, it is easy to see that the revenue maximizing winner set is W = {i : p(S i ) ≤ v(S i )}. For limited supply, it must be that
{i : p(S i ) < v(S i )} ⊆ W ⊆ {i : p(S i ) ≤ v(S i )}.
Envy and Multi Envy for Single minded bidders
Observation For single minded bidders, the definitions of envy free and multi envy free can be simplified as follows:
1. For any winning agent i and any collection C of winning agents such that S i ⊆ ∪ j∈C S j , it must hold that p(S i ) ≤ Σ j∈C p(S j ).
2. For any losing agent i and any collection C of winning agents such that S i ⊆ ∪ j∈C S j , it must hold that v(S i ) ≤ Σ j∈C p(S j ).
For envy free, the collection C consists of a single winning agent; for multi envy free, C may be any set of winning agents.
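These two conditions can be checked directly by brute force over collections of winners. The sketch below is our own illustration (exponential in the number of winners, so only suitable for tiny instances); bids maps each agent to her pair (S i , v i ), winners is the set of allocated agents, and prices maps each winner to the price of her bundle.

```python
from itertools import combinations

def is_envy_free(bids, winners, prices):
    """Single-agent (classical envy free) conditions for single minded bidders."""
    for i, (S_i, v_i) in bids.items():
        bound = prices[i] if i in winners else v_i
        for j in winners:
            if j != i and S_i <= bids[j][0] and bound > prices[j]:
                return False
    return True

def is_multi_envy_free(bids, winners, prices):
    """Same check against every collection C of winners (brute force)."""
    win = list(winners)
    for i, (S_i, v_i) in bids.items():
        bound = prices[i] if i in winners else v_i
        others = [j for j in win if j != i]
        for r in range(1, len(others) + 1):
            for C in combinations(others, r):
                union = frozenset().union(*(bids[j][0] for j in C))
                if S_i <= union and bound > sum(prices[j] for j in C):
                    return False
    return True
```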
Revenue Gaps Between Models
In this section we show the gaps between the optimal solutions of the different models. It is clear that the item pricing envy free setting is no more profitable than the subset pricing multi envy free setting, since any item pricing solution is also a multi envy free solution. We give two theorems that show the gaps between the two models when items are given in limited supply and when items are given in unlimited supply.
The following theorem corresponds to line #1 in Table 1.
Theorem 7
The maximal revenue in an envy-free item pricing for the limited supply highway setting may be as low as H k /k times the maximal revenue achievable by a subset pricing that is multi envy free (where H k is the k'th Harmonic number, and k is the maximal capacity of an item).
Proof. Consider a path of k segments where segment 1 has capacity k and, for i > 1, segment i has capacity n − i + 1. There are two groups of agents in the setting (see Figure 1 for an illustration):
- k agents: request [1, i] with valuation 1, for 1 ≤ i ≤ k.
- k − 1 agents: request [i, i] with valuation 1/i, for 1 ≤ i ≤ k.
Clearly the best an item pricing envy-free solution can achieve is H k , by assigning price 1/i to segment i. A subset pricing multi envy free solution can price all intervals on the path at 1 and get a revenue of k.
The number of requests is O(k), therefore the gap is O(m/H m ).
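A quick numeric sanity check (our own illustration) of the two revenues claimed in the proof: the item pricing solution collects 1/i from the single-item request on segment i, for a total of H k , while the subset pricing solution sells every requested interval at price 1, for a total of k.

```python
def theorem7_revenues(k):
    """Revenues claimed in the Theorem 7 construction (illustrative only)."""
    item_pricing = sum(1.0 / i for i in range(1, k + 1))  # H_k
    subset_pricing = float(k)                             # every interval priced at 1
    return item_pricing, subset_pricing

for k in (10, 100, 1000):
    hk, sk = theorem7_revenues(k)
    print(f"k={k}: item pricing H_k = {hk:.2f}, subset pricing = {sk:.0f}, ratio = {hk / sk:.4f}")
```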
We now deal with the gaps between the item pricing setting, the multi envy free setting and the envy free setting when items are given in unlimited supply. It is clear that any item pricing solution is also a multi envy free solution and that any multi envy free solution is also an envy free solution. The question is how big the gap can be: how much more can a seller profit by choosing another concept of envy? Guruswami et al. [13] give an O(log m + log n)-approximation for the general item pricing problem. The upper bound used is the sum of all valuations. Since any item pricing solution is also multi envy free and since the sum of all valuations can be used as an upper bound on multi envy free and envy free solutions, we conclude that the gap between any pair of these problems is upper bounded by O(log m + log n). We show lower bounds on the gaps. The following theorem corresponds to line #3 in Table 1.
Theorem 8 Multi envy free subset pricing setting for the unlimited supply problem (even for a highway) can improve the optimal item pricing solution by a factor of O(log log m) where m is the number of requests.
Proof. We construct the following agent requests along a path. We assume for simplicity that n, the number of items, is a power of 2, so n = 2^k . We have k + 1 layers of requests, starting from layer 0 up to layer k; in layer i there are 2^i equal sized requests, each for 2^{k−i} items, with valuations as shown in Fig. 2.
Fig. 2.
The following theorem corresponds to line #2 in Table 1.
The construction is as follows: agent i requests the set S i = V \ {i} (where V is the set of all items) with valuation v i = 1/i. Since every request is a subset of the union of any two other requests, the minimal v i + v j , where i and j are both winning agents, is an upper bound on the revenue that can be achieved from any allocated request in the multi envy free setting. Let the two lowest winning valuations be those of i α and i β . All requests with lower valuations are not allocated. All requests with valuation higher than those of i α and i β can have price at most 1/i α + 1/i β . Therefore no multi envy free allocation/pricing can achieve revenue more than 2.
On the other hand, in the envy free setting, an allocation of each request at price p i = v i is valid and achieves revenue of H m . Therefore, the ratio between maximal revenue achieved in the multi envy free setting vs. the maximal revenue achieved in the envy free setting can be as low as O(1/ log m).
6 Polynomial time envy-free revenue maximization (unlimited supply, single minded bidders)
We discuss in this section the classical envy free approach where each agent may envy only one other agent (not a group of other agents). Therefore we seek an allocation/pricing such that Observation 4.1 is valid.
With limited supply of items, a reduction from independent set can be used to show that the problem of maximizing revenue is hard to approximate within m^{1−ǫ} . Grigoriev et al. [11] show that it is NP-hard to approximate the maximum profit limited supply item pricing solution to within a factor m^{1−ǫ} , for any ǫ > 0, even when the underlying graph is a grid. The same construction can be used here.
For the unlimited supply setting we show that:
Theorem 10 For single minded bidders the revenue maximizing envy free allocation/pricing with unlimited supply can be computed in polynomial time.
Our idea is to make use of the fact that allocating a certain request at price p means that any request that is a superset and has valuation < p must not be allocated. We transform all requests into a vertex weighted perfect graph H (which also admits an acyclic orientation) and then compute the revenue maximizing allocation/pricing by computing a maximum weight independent set on H (which can be done in polynomial time for perfect graphs). A similar construction using the hierarchical nature of a pricing was done by Chen et al. [6] for envy free pricing of products over a graph with metric substitutability. As before, agent i seeks bundle S i and has valuation v i .
Construction of Graph H
For each i ∈ {1, . . . , m}, define
A(i) = {1 ≤ j ≤ m|S i ⊆ S j and v j < v i }.
The reason we consider only requests with lower valuations is that when picking i's price only these allocations are at risk. Given price p for agent i, all requests in A(i) with valuation < p cannot be allocated at any price. For each agent i , define an ordering π i on A(i) in non-decreasing order of valuation. I.e., for each pair j, k such that 1 ≤ j ≤ k ≤ n i , where n i = |A(i)|, the valuations must be ordered, v π i (j) ≤ v π i (k) (ties are broken arbitrarily).
We construct an undirected vertex-weighted graph H as follows. For each i ∈ V we associate n i + 1 weighted vertices in H. These vertices constitute the T (i) component, the vertices of which are {i 1 , i 2 , ..., i n i +1 }. The set of all H vertices is defined as i∈V T i . The weight of each vertex in T (i) is:
w(i 1 ) = v π i (1)
w(i 2 ) = v π i (2) − v π i (1)
. . .
w(i n i ) = v π i (n i ) − v π i (n i −1)
w(i n i +1 ) = v i − v π i (n i )
By definition of A(i) and π i , all weights are non-negative and
Σ j∈T (i) w(j) = v i .
For each T (i) component we connect the various vertices of T (i) to components T (j) such that j ∈ A(i) (connecting a vertex i k ∈ T (i) to a component T (j) means connecting i k to all vertices of T (j) by edges) as follows. Vertex i n i +1 is connected to all components T (j) for j ∈ A(i).
Vertex i k is connected to all components T (π i (j)) for each j such that 1 ≤ j < k. For instance, vertex i 2 is connected to component T (π i (1)), vertex i 3 is connected to components T (π i (1)) and T (π i (2)), and so on.
As i is not contained in A(i) there can't be self edges. It is easy to see that for any a < b, i b is connected to each component that i a is connected to (and maybe to some additional components). See Figure 3 for the exact construction of edges from a component T (i). Figure 4 shows an example of transforming a pricing problem into the graph H.
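The construction of H is easy to mechanize. The following Python sketch is our own (function and variable names are ours): given the single minded bids it computes A(i), the ordering π i , the telescoping vertex weights of each component T (i), and the edges from every vertex i k to the appropriate components.

```python
def build_H(bids):
    """bids: {i: (frozenset S_i, valuation v_i)}.
    Returns (weights, edges): weights maps a vertex (i, k) of T(i) to w(i_k);
    edges is a set of frozenset pairs of such vertices (undirected edges)."""
    A, pi = {}, {}
    for i, (S_i, v_i) in bids.items():
        # A(i): requests whose bundle is a superset of S_i and whose valuation is lower.
        A[i] = [j for j, (S_j, v_j) in bids.items()
                if j != i and S_i <= S_j and v_j < v_i]
        pi[i] = sorted(A[i], key=lambda j: bids[j][1])   # non-decreasing valuation

    weights, edges = {}, set()
    for i, (S_i, v_i) in bids.items():
        prev = 0.0
        for k, j in enumerate(pi[i], start=1):           # vertices i_1 .. i_{n_i}
            weights[(i, k)] = bids[j][1] - prev
            prev = bids[j][1]
        weights[(i, len(pi[i]) + 1)] = v_i - prev        # vertex i_{n_i + 1}

    for i in bids:
        n_i = len(pi[i])
        for k in range(1, n_i + 2):
            # i_k is connected to the components T(pi_i(j)) for j < k;
            # for k = n_i + 1 this is all of A(i).
            for j in pi[i][:k - 1]:
                for kk in range(1, len(pi[j]) + 2):      # every vertex of T(j)
                    edges.add(frozenset({(i, k), (j, kk)}))
    return weights, edges
```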
Lemma 11
The value of the maximum weighted independent set on H is equal to the revenue obtained from the optimal envy free pricing of the original problem.
Proof. When picking vertices for the maximum independent set instance, in every component T (i) one must choose vertices whose weights sum to the valuation of some agent j, where j ∈ A(i) or j = i. A component none of whose vertices is picked means that agent i was not allocated her set. The construction of H ensures that the pricing is envy-free. We prove that a maximal revenue envy-free allocation/pricing can be translated into an independent set in H and that a maximum independent set in H can be translated into a revenue maximizing envy free allocation/pricing.
(envy free ⇒ IS) We show how to construct an independent set solution in H from the optimal allocation/pricing of the original pricing instance. It is easy to see that the price p i is equal to one of the valuations v j for j such that j ∈ A(i) or j = i (otherwise the prices can be increased). We will pick vertices in T (i) to achieve a price of p i . Let us assume that p i = v j for some j such that j ∈ A(i) ∪ {i}. We will pick all vertices i k ∈ T (i) such that k ≤ π −1 i (j). By construction of T (i) our pick gives an accumulated value of v j = p i . As we have a valid pricing we can assume that ∀i, j : S i ⊂ S j ⇒ p i ≤ p j . Let us assume by contradiction that our pick is not a valid independent set. It follows that there are two vertices i k and j m such that j ∈ A(i) and there is an edge between them. Since edges are drawn from i k to all components that represent requests that have lower valuation than Σ 1≤t<k w(i t ), we get that p j must be less than p i (since p i ≥ Σ 1≤t<k w(i t )). This contradicts our assumption.
(IS ⇒ envy free) Assume we have an optimal independent set solution in H. By the construction of H, in each component T (i) any node i k is connected to all neighbors of i m for m < k. Therefore the set of vertices picked as part of the independent set in each component T (i) is of the form {i k |k ≤ i max }. We transform the independent set solution into a pricing as follows:
-Agent i such that none of T (i)'s vertices were picked receives nothing.
-Agent i such that the vertices {i k |k ≤ i max } were picked in T (i) receives S i at price Σ k≤i max w(i k ).
Fig. 3. Construction of H
Assume that the pricing is not envy-free and we have requests i, j such that S i ⊂ S j and p i > p j . By the construction of H we can assume that p i and p j are equal to v i ′ and v j ′ such that i ′ ∈ A(i) and j ′ ∈ A(j). We picked from T (i) the vertices {i k : 1 ≤ k ≤ i max } such that Σ 1≤k≤i max w(i k ) = v i ′ . The same goes for T (j).
Let's inspect the vertices in T (j) that should have been picked in order to make p j ≥ p i .
Define J as the minimal set of vertices in T (j) of the form {j k |j max < k ≤ t} such that Σ 1≤k≤t w(j k ) ≥ p i . The vertices of J have outgoing edges (directed from a request towards its A(·) components) into components T (k) such that k ∈ A(i) (requests that are supersets of j and i and have lower valuation than i).
The vertices reachable from J by the outgoing edges are reachable also by i i max . Hence we are guaranteed that they are not picked. Also we know that all vertices that have edges into J were not picked as picking one of them would have prevented picking any of T (j)'s vertices.
Therefore there is no reason for the independent set not to pick also the vertices of J and increase the independent set value. This contradicts the maximality of the independent set solution.
Lemma 12 H is a comparability graph
Proof. Let us direct the edges of H. In the case of an edge from a node of T (i) to a node of T (j), the edge is directed from i to j if j ∈ A(i).
This orientation results in a directed graph with transitivity: if we have a directed edge from i α to j β and from j β to k γ , we show that there must be a directed edge from i α to k γ .
Note that since k ∈ A(j) and j ∈ A(i), then k ∈ A(i), as A(j) ⊂ A(i). The fact that there is a directed edge from j β to T (k) means that v j > v k ; therefore in the π i order j appears after k (to the right in Figure 3).
Since i α is connected to all components T (π i (l)) such that 1 ≤ l < α and i α is also connected to T (j), i α must be connected to T (k) as well (as π −1 i (k) < π −1 i (j)). Clearly i α is connected to each node of T (k) including k γ .
We've shown that the edges of the graph can be oriented so that the transitivity property is maintained. Therefore the graph is a comparability graph.
A graph is said to be perfect if the chromatic number (the least number of colors needed to color the graph) of every induced subgraph equals the clique number of that subgraph. Since comparability graphs are known to be perfect, showing that H is a comparability graph implies the following corollary:
Corollary 13 H is a perfect graph.
We know that the maximum weight independent set problem can be solved in polynomial time on perfect graphs [12]. By Lemma 11 and Lemma 12 we conclude that finding the optimal envy free allocation/pricing in the most general single minded setting can be done in polynomial time. This completes the proof of Theorem 10.
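In theory the maximum weight independent set on the perfect graph H is found with the polynomial time machinery of [12]; as a toy illustration (our own code, exponential time and therefore only for very small instances) one can instead enumerate independent sets of the graph produced by the build_H sketch above and read off the optimal envy free allocation/pricing as in Lemma 11.

```python
from itertools import chain, combinations

def optimal_envy_free_pricing_bruteforce(bids):
    """Brute force the maximum weight independent set of H and translate it back
    into an envy free allocation/pricing (toy sizes only)."""
    weights, edges = build_H(bids)               # from the sketch above
    vertices = list(weights)
    best_revenue, best_set = 0.0, set()
    all_subsets = chain.from_iterable(combinations(vertices, r)
                                      for r in range(len(vertices) + 1))
    for subset in all_subsets:
        s = set(subset)
        if any(frozenset({u, v}) in edges for u in s for v in s if u != v):
            continue                             # not an independent set
        revenue = sum(weights[v] for v in s)
        if revenue > best_revenue:
            best_revenue, best_set = revenue, s
    # Agent i wins S_i at the sum of the picked weights in T(i); empty components lose.
    prices = {}
    for (i, k) in best_set:
        prices[i] = prices.get(i, 0.0) + weights[(i, k)]
    return best_revenue, prices
```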
Hardness of multi envy-free allocation/pricing
In this section we show the following hardness results:
- The problem of deciding whether a certain pricing assignment is multi envy free is coNP-hard.
- Maximizing revenue from single minded agents subject to multi envy free pricing is APX-hard.
Theorem 14 The problem of deciding whether a certain pricing assignment is multi envy free is coNP-hard.
Proof. We show a polynomial reduction from VERTEX-COVER(k) to our decision problem. Assume that there is an algorithm A that confirms the multi envy-freeness of a given subset pricing; we show that the NP-hard problem VERTEX-COVER(k) reduces to this problem. The construction of a subset pricing from a VERTEX-COVER(k) instance is as follows:
-Each edge turns into an item.
-Each vertex turns into a set with price 1.
-Give the price k − 1 to the set of all items.
(Note that this is a limited supply setting where each item is chosen by 3 subsets at most.)
Clearly A determines that this instance is not multi envy free if and only if there is a vertex cover of size k in the VERTEX-COVER instance.
Since VERTEX-COVER is NP-hard, this implies that the problem of deciding whether a certain pricing is multi envy free is coNP-hard.
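The reduction is easy to state in code. The sketch below is our own, and it reads "each vertex turns into a set" as the set of items (edges) incident to that vertex; given a graph and the parameter k it outputs the priced bundles of the decision instance.

```python
def vertex_cover_to_pricing(vertices, edges, k):
    """vertices: iterable of vertex names; edges: iterable of 2-tuples over the vertices.
    Returns a list of (bundle, price) pairs, where the items are the edges themselves."""
    items = [frozenset(e) for e in edges]                      # one item per edge
    priced_bundles = []
    for v in vertices:
        incident = frozenset(it for it in items if v in it)    # the edges touching v
        priced_bundles.append((incident, 1))                   # one bundle per vertex, price 1
    priced_bundles.append((frozenset(items), k - 1))           # the set of all items, price k - 1
    return priced_bundles
```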
Note that even though deciding whether a pricing is multi envy free is hard, finding such a pricing can be approximated. Balcan et al. [3] showed an O(log m + log n)-approximation for arbitrary bundle valuations and unlimited supply using a single fixed price, which is basically pricing all bundles at the same price. Such a pricing is multi envy free as well.
Theorem 15 Maximizing revenue from single minded agents subject to multi envy free pricing is APX-hard, even when all agents are interested in at most two items.
Proof. We show that finding the optimal multi envy free pricing is APX-hard by a reduction from MAX-2SAT. Given a MAX-2SAT instance we build a multi envy free allocation instance as follows.
Let C denote the number of clauses in the 2SAT instance and C (v) the number of clauses containing variable v.
Items in the allocation problems are:
-For each literal we have an item to sell. Thus for each variable there are two items, one for the variable and one for its negation.
Agents in the allocation include, for each variable, literal agents and variable agents, and, for each clause, a clause agent. We show that there is a pricing with revenue at least 314C + k if and only if there is a solution to the 2SAT instance that satisfies at least k clauses.
Let us prove the easy direction. Assume that there is a solution to the MAX-2SAT problem that satisfies at least k clauses. Assign a price of 3 to each request for a literal that is set to true in the solution. Assign a price of 2 to each request for a literal that is set to false in the solution. Assign a price of 5 to each satisfied clause request and a price of 4 to each unsatisfied clause request. Each variable request is priced at 5. This gives a valid multi envy free pricing and we can verify that its revenue is at least 314C + k.
For the other direction, let p be a pricing with maximum revenue, and assume that the revenue is at least 314C + k. The optimal way to price the variable and literal agents for variable v is to price one literal at 2 and the other at 3; in that way the variable requests for both literals are priced at 5 and the revenue from the variable and literal agents is 155C (v) . By the optimality of p, since variable and literal agents are always more profitable than the clause agents, all variables must be priced in this manner. Each clause can be priced at 4 if both its literals are priced at 2, or at 5 if at least one of its literals is priced at 3. In total this means that a pricing/allocation of revenue 314C + k defines a natural assignment to the MAX-2SAT problem by making the literals priced at 3 true.
Because the maximum 2-SAT solution satisfies at least 1/4 of the clauses, we may restrict attention to the case where k ≥ C/4. Some straightforward calculation shows that a (1256 + η)/1257 approximation of the multi envy free pricing/allocation problem would yield an η-approximation to MAX-2SAT. This proves that the multi envy free pricing/allocation problem is NP-hard to approximate to within (1256 + η)/1257, where η = .943 is the approximation hardness constant for the MAX-2SAT problem shown in [14].
The Highway Problem
The highway problem is the vertex pricing problem in the special case where the vertices are numbered 1, . . . , n and each agent is interested in an interval [i, j].
Multi Envy Free Hardness Results
Theorem 16 Multi envy free allocation/pricing for the highway problem is in NP.
Proof. We give a polynomial time algorithm that verifies that a given pricing and allocation is multi envy free. The algorithm builds a directed graph over the nodes of the highway, where for each segment I there is an edge for any allocated request i that contains I, with weight equal to the price of that request. Then the algorithm computes the shortest path for the segment of any request (allocated or not) in order to find irregularities. See Algorithm 1.
Algorithm 1 Verifying that a given allocation/pricing on the highway is multi envy free
1. Create a directed graph G where, for each allocated bundle of price p, there are directed edges from the first node of the bundle to all other nodes in the bundle, each with weight p (see Figure 5).
2. For each allocated bundle of price q, and for each unallocated bundle with valuation q, do:
- Compute the shortest path in G between the first and last nodes of the bundle.
- If q is higher than the shortest path, return false (the allocation/pricing is not multi envy free).
3. Return true.
Every path of weight w can be translated into a set of allocated bundles of total price w and vice versa. Therefore, if there is no path shorter than q, there is no set of allocated requests with total price lower than q, and the agent is not envious. In the other direction, it is easy to see that if the algorithm finds a path shorter than q then this path can be translated into a set of agents that get the bundle at a lower price than q.
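A Python sketch of Algorithm 1 (our own code). Requests are intervals [a, b] over items 1..n and the graph nodes are the boundaries 0..n; an allocated request at price p contributes edges of weight p, so the cheapest cover of an interval is a shortest path between its two boundary nodes. To also allow covers by allocated requests that stick out beyond the queried interval, the sketch clamps each allocated interval to the query, a mild generalization of the construction described above.

```python
import heapq

def cheapest_cover(allocated, a, b):
    """Cheapest total price of allocated intervals whose union contains [a, b].
    allocated: list of (a_j, b_j, price) triples. Returns float('inf') if no cover exists."""
    edges = {}
    for (aj, bj, p) in allocated:
        lo, hi = max(a, aj) - 1, min(b, bj)       # clamp the interval to [a, b]
        if lo < hi:
            edges.setdefault(lo, []).append((hi, p))
    dist, heap = {a - 1: 0.0}, [(0.0, a - 1)]
    while heap:                                   # Dijkstra over the boundary nodes
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        if u == b:
            return d
        for (v, w) in edges.get(u, []):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float('inf')

def verify_multi_envy_free(allocated, unallocated):
    """allocated: list of (a, b, price); unallocated: list of (a, b, valuation).
    Returns True iff the allocation/pricing on the highway is multi envy free."""
    for idx, (a, b, p) in enumerate(allocated):
        others = [r for j, r in enumerate(allocated) if j != idx]
        if p > cheapest_cover(others, a, b):
            return False                          # a strictly cheaper cover by other winners
    for (a, b, v) in unallocated:
        if v > cheapest_cover(allocated, a, b):
            return False                          # the losing agent could afford some cover
    return True
```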
Theorem 17 The problem of finding the revenue maximizing multi envy free solution for the highway problem is NP-hard.
Proof. We show a polynomial time reduction from PARTITION, similar to the reduction for the item pricing highway problem shown in [4]. The input to the partition instance is a multiset of weights I = {w i }. For weight w i we construct a weight component W i on the highway, which consists of three agents interested in item i:
- A request with valuation w i .
- Another request with valuation w i .
- A request with valuation 2w i .
In addition, two more agents are interested in purchasing all the items, both with valuation 3/2 · Σ i w i . See Figure 6. The revenue obtained from component W i can be 2w i or 3w i : 3w i is achieved by pricing the item at w i and accepting all three agents, and 2w i is achieved by pricing the item at 2w i and accepting only the third agent. In order to profit from the full valuations of the two agents interested in all items, we need the total price of all items to be no more than 3/2 · Σ i w i . It can be argued that the maximum revenue is earned when there is a partition between components, where some of them earn 3w i and some 2w i and the two agents interested in all items pay their full valuations. There is a pricing that reaches revenue 9/2 if and only if there is a partition of I into S and I \ S such that the sums of weights in both are equal.
O(1) Edge Capacities, Multi Envy Free Highway Allocation/Pricing
In the problem on a path with limited edge capacities, each edge e ∈ E can accommodate no more than c e allocated requests. Let C = max c e .
Somewhat inspired by what is done in [11], we show that we can solve this problem in time O(m^{2C} B^{2C^2} n) by finding a longest path in an acyclic digraph.
Here is a useful definition of winner multi envy free allocation/pricing and a lemma that shows how to transform winner multi envy free allocation/pricing to multi envy free allocation/pricing.
Definition 18 An allocation/pricing is winner multi envy free if for any winning agent i, its set S i is not a subset of a union of other sets (of winning agents) for which the sum of the prices is strictly less than the price of the set S i .
Lemma 19 Assume we have a winner multi envy free allocation/pricing for a subset pricing instance over a highway with revenue R. A multi envy free allocation/pricing can be computed in time O(m 2 ) with revenue ≥ R.
Proof. Given an allocation/pricing that is winner multi envy free, one can convert it into an allocation/pricing that achieves at least the same revenue and is multi envy free.
For each agent i such that a i = ∅, compute the cheapest (by price) collection B of winning agents such that S i ⊂ ∪ j∈B a j . This can be done by using a shortest path algorithm on the winning agents' prices, in the same way as Algorithm 1. If v i (S i ) ≤ Σ j∈B p(a j ) then agent i obeys the condition required for unallocated agents in multi envy free allocations.
If v i (S i ) > Σ j∈B p(a j ), then we perform the following steps:
- Compute the cheapest (by valuation) collection C of winning agents such that S i ⊂ ∪ j∈C a j (this can be done in the same way as before).
- For each agent j in C, if p(j) < v j , set p(j) = v j and change the price of any allocated agent k such that S j ∩ S k ≠ ∅ from p(a k ) to the accumulated payment of the minimal weight (by payment) collection of agents whose union is a superset of S k . C contains the agents that are part of the cheapest set of agents along their path, therefore none of the agent prices can exceed their valuation.
- If v i (S i ) > Σ j∈C p(a j ) then we assign agent i the set S i , set p i (S i ) = Σ j∈C p(a j ), and set a j = ∅ for j ∈ C. Since each agent j in C was allocated with p(a j ) = v j , making a j = ∅ does not make j envious.
By doing each replacement we clearly still have a winner multi envy free solution. In addition, for each j ∈ C there is no cheaper set of agents covering S j (otherwise these agents would have composed C instead of j). Therefore none of the agents in C is envious after the switch.
After m iterations the solution is multi envy free.
Theorem 20 For a highway with n elements and m agents with maximal valuation B, where the capacities of the edges are ≤ C, there is an O(m^{2C} B^{2C^2} n) time algorithm for the profit maximization multi envy free problem on a path.
Proof. We create an n-layered digraph D with an additional source s and sink t, in layers 0 and n + 1, respectively. There are arcs only between consecutive layers, which represent neighboring items on the highway. Hence, in any s → t path, there are exactly n + 2 nodes.
In each node in layer e, corresponding to item e, we store all winning agents j that are accommodated by edge e. We store the total amounts all these agents spend on all items (network links) in their path. Moreover, we store for each pair < i, j > the value of the shortest possible path from the first edge of i to the current edge that accommodates j. Basically, these values can be thought of as a matrix A of size ≤ C × C whose (i, j) cell holds the shortest path from the first edge of i to the current edge that accommodates j; the diagonal cell (i, i) represents the amount spent by i itself.
Any node x (more precisely, the path s → x) in the digraph represents a feasible partial solution. Arcs from node x of layer e to node y of layer e + 1 are only introduced if the path s → y represents a feasible extension of the partial solution represented by the path s → x. The weight on an arc that connects a node of layer e to a node of layer e + 1 is equal to the profit earned on edge e + 1, that is, the total amount that the new introduced allocated agents of edge e + 1 pay.
Therefore, the weight of the longest s → t path in digraph D is equal to the maximum total profit. Moreover, the set of winning agents can be reconstructed from the longest s → t path. Algorithm 2 shows a more formal description. The allocated agents in this allocation do not envy each other, however they can be envied by the losers (the allocation/pricing is winner multi envy free). Lemma 19 shows how to overcome this issue and produce multi envy free allocation/pricing.
Lemma 21
There is an O(m^{2C} B^{2C^2} n) time algorithm that produces an optimal winner multi envy free allocation/pricing for the profit maximization problem on the highway.
Proof. Recall that C is an upper bound on the edge capacities. Consider a path P from s to t in D. The winner set is the union of all winning agents of the nodes of P. By the construction of D, its nodes on level e cannot accommodate more than c_e agents, and agent i cannot get an item e that does not belong to S_i. By condition 1 of the arc definition, an agent is allocated either her entire bundle or an empty bundle. By the definition of the node set (condition 1), all allocations with a price higher than the agent's valuation are removed. By condition 2 of the node set, no agent can envy other agents in P.
We showed that P gives a legal allocation/pricing that is winner multi envy free. Since the total weight of P gives the revenue (each winning agent's price is summed once, when it first appears), we get that the heaviest path P yields an optimal solution.
We now bound the size of D. Each edge of the original graph is translated into a layer of nodes in D. In a layer there are at most m^C possible subsets of size ≤ max{c_e, |J_e|}, which is multiplied by the number of possible stored value combinations, |∏_k U_k^e|, bounded by B^{C^2}. Therefore there are at most m^C B^{C^2} nodes in a layer. Each node in a layer has at most m^C B^{C^2} arcs to nodes in the next layer, which gives a total of m^{2C} B^{2C^2} arcs between two consecutive layers. This means that there are at most m^{2C} B^{2C^2} n arcs in D. The computation time to find the longest path in D is linear in the number of arcs, since D is acyclic [1].
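To make the longest-path step concrete, the following is a minimal Python sketch of the dynamic program on an acyclic layered digraph, assuming the graph has already been built and is given as adjacency lists arcs[u] = [(v, weight), ...] with arc weights equal to the profit collected when moving to the next layer; node construction and the feasibility conditions from the text are omitted, and all names are illustrative.

from collections import defaultdict

def longest_path(arcs, source, sink):
    # Topological order of the DAG (Kahn's algorithm); the layered structure
    # guarantees acyclicity, and we assume sink is reachable from source.
    indeg = defaultdict(int)
    nodes = {source, sink}
    for u in arcs:
        nodes.add(u)
        for v, _ in arcs[u]:
            nodes.add(v)
            indeg[v] += 1
    order, stack = [], [u for u in nodes if indeg[u] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, _ in arcs.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    best = {u: float('-inf') for u in nodes}
    best[source] = 0.0
    pred = {}
    for u in order:                     # relax arcs in topological order
        if best[u] == float('-inf'):
            continue
        for v, w in arcs.get(u, []):
            if best[u] + w > best[v]:
                best[v] = best[u] + w
                pred[v] = u
    path, u = [sink], sink              # walk predecessors to recover the path
    while u != source:
        u = pred[u]
        path.append(u)
    return best[sink], path[::-1]

The returned path corresponds to the winner set of the optimal winner multi envy free solution, and the running time is linear in the number of arcs, matching the bound above.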
We continue with the proof of Theorem 20. By Lemma 21 we can build a winner multi envy free optimal solution to the problem in time O(m^{2C} B^{2C^2} n). Then, if we use a simple algorithm (similar to Algorithm 1) that computes the smallest-valuation collection of winners for each envious losing agent, from Lemma 19 we get an O(m^{2C} B^{2C^2} n) algorithm, as required.
FPTAS for Highway Revenue, O(1) Edge Capacities, Multi Envy Freeness
We next show how to turn the dynamic programming algorithm into a fully polynomial time approximation scheme (FPTAS); that is, for any ǫ > 0, we have an algorithm that computes a solution with profit at least (1 − ǫ) times the optimum profit, in time polynomial in the input and 1 ǫ . To that end, we just apply the dynamic programming algorithm on a rounded instance in which the agents' valuations are b ′ j = ⌊b j /K⌋ where K := (ǫB/m(n + 1)) for ǫ > 0.
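As a small illustration of the rounding step only (a sketch; the function and variable names are ours, and the grouping K = ǫB/(m(n + 1)) is an assumption about how the formula above is parenthesized):

import math

def round_valuations(b, n_segments, eps):
    # b: list of the m agents' valuations; B: the maximal valuation;
    # n_segments: the number of highway elements n. K is the rounding granularity.
    m, B = len(b), max(b)
    K = eps * B / (m * (n_segments + 1))
    return [math.floor(bj / K) for bj in b], K

The rounded instance is then handed to the dynamic program of Theorem 20, and the profit lost to rounding is at most an ǫ fraction of the optimum, as claimed above.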
We show an FPTAS for the problem of finding an optimal winner multi envy free solution. By Lemma 19 we also get an FPTAS for the multi envy free problem as well.
Let us denote by (W, p) an allocation of winners (W ) and prices for bundles (p). Let (W, p) denote the revenue of the instance (W, p).
Fig. 4.
Step by step example of turning pricing problem into the graph H. In the bundle requests, each agent would like to buy a set of products (the black balls) as long as its price is less than her valuation (the numbers in the bundle requests are valuations). A can be seen as a dependency graph where there is a vertical edge from each request i up to the requests of A(i) (note there is no edge between 2 and 7 since 2 ≤ 7). At the last step the dependency graph is translated into the graph H as defined.
| 6,852 |
0909.4603
|
1584512231
|
We investigate the problem of learning a topic model - the well-known Latent Dirichlet Allocation - in a distributed manner, using a cluster of C processors and dividing the corpus to be learned equally among them. We propose a simple approximated method that can be tuned, trading speed for accuracy according to the task at hand. Our approach is asynchronous, and therefore suitable for clusters of heterogenous machines.
|
@cite_1 proposed a way to avoid computing eq:z_sampling for each @math by getting an upper bound on @math using Holder's inequality and computing eq:z_sampling for the most probable topics first, leading to a speed up of up to 8x of the sampling process.
|
{
"abstract": [
"We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model."
],
"cite_N": [
"@cite_1"
],
"mid": [
"1880262756"
]
}
|
Scalable Inference for Latent Dirichlet Allocation
|
Very large datasets are becoming increasingly common -from specific collections, such as Reuters and PubMed, to very broad and large ones, such as the images and metadata of sites like Flickr, scanned books of sites like Google Books and the whole internet content itself. Topic models, such as Latent Dirichlet Allocation (LDA), have proved to be a useful tool to model such collections, but suffer from scalability limitations. Even though there has been some recent advances in speeding up inference for such models, this still remains a fundamental open problem.
Latent Dirichlet Allocation
Before introducing our method we briefly describe the Latent Dirichlet Allocation (LDA) topic model [BNJ03]. In LDA (see Figure 1), each document is modeled as a mixture over K topics, and each topic has a multinomial distribution β_k over a vocabulary of V words (please refer to table 1 for a summary of the notation used throughout this paper). For a given document m we first draw a topic distribution θ_m from a Dirichlet distribution parametrized by α. Then, for each word n in the document we draw a topic z_{m,n} from a multinomial distribution with parameter θ_m. Finally, we draw the word n from the multinomial distribution parametrized by β_{z_{m,n}}. In Appendix A.4, we show that the M-step update for the conditional multinomial parameter β can be written out analytically:
β_{ij} ∝ Σ_{d=1}^{M} Σ_{n=1}^{N_d} φ*_{dni} w^j_{dn}.    (9)
We further show that the M-step update for the Dirichlet parameter α can be implemented using an efficient Newton-Raphson method in which the Hessian is inverted in linear time.
Smoothing
The large vocabulary size that is characteristic of many document corpora creates serious problems of sparsity. A new document is very likely to contain words that did not appear in any of the documents in a training corpus. Maximum likelihood estimates of the multinomial parameters assign zero probability to such words, and thus zero probability to new documents. The standard approach to coping with this problem is to "smooth" the multinomial parameters, assigning positive probability to all vocabulary items whether or not they are observed in the training set (Jelinek, 1997). Laplace smoothing is commonly used; this essentially yields the mean of the posterior distribution under a uniform Dirichlet prior on the multinomial parameters. Unfortunately, in the mixture model setting, simple Laplace smoothing is no longer justified as a maximum a posteriori method (although it is often implemented in practice; cf. Nigam et al., 1999). In fact, by placing a Dirichlet prior on the multinomial parameter we obtain an intractable posterior in the mixture model setting, for much the same reason that one obtains an intractable posterior in the basic LDA model. Our proposed solution to this problem is to simply apply variational inference methods to the extended model that includes Dirichlet smoothing on the multinomial parameter.
In the LDA setting, we obtain the extended graphical model shown in Figure 7. We treat β as a k × V random matrix (one row for each mixture component), where we assume that each row is independently drawn from an exchangeable Dirichlet distribution. We now extend our inference procedures to treat the β_i as random variables that are endowed with a posterior distribution. (An exchangeable Dirichlet is simply a Dirichlet distribution with a single scalar parameter η; the density is the same as a Dirichlet (Eq. 1) where α_i = η for each component.)
Inference in LDA
Many inference algorithms for LDA have been proposed, such as variational Bayesian (VB) inference [BNJ03], expectation propagation (EP) [ML02], collapsed Gibbs sampling [GS04,Hei04] and collapsed variational Bayesian (CVB) inference [TNW06]. In this paper we will focus on collapsed Gibbs sampling.
Collapsed Gibbs sampling
Collapsed Gibbs sampling is an MCMC method that works by iterating over each of the latent topic variables z 1 , ..., z n , sampling each z i from P (z i |z ¬i ). This is done by integrating out the other latent variables (θ and β). We are not going to dwell on the details here, since this has already been well explained in [GS04,Hei04], but in essence what we need to do is to sample from this distribution:
p(z_i = k | z_{¬i}, w) ∝ (n_{k,v,¬i} + η) / ( Σ_{v'=1}^{V} (n_{k,v',¬i} + η) ) · (n_{m,k,¬i} + α)    (1)
∝ (n_{k,v,¬i} + η) / (n_{k,¬i} + V η) · (n_{m,k,¬i} + α)    (2)
In simple terms, to sample the topic of a word of a document given all the other words and topics we need, for each k in {1, . . . , K}:
1. n k,v,¬i : the total number of times the word's term has been observed with topic k (excluding the word we are sampling for).
2. n k,¬i : the total number of times topic k has been observed in all documents (excluding the word we are sampling for).
3. n m,k,¬i : the number of times topic k has been observed in a word of this document (excluding the word we are sampling for).
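As an illustration, here is a minimal Python/NumPy sketch of one collapsed Gibbs update using the three counts above; the array names mirror n_{k,v}, n_k and n_{m,k}, but the data layout and function signature are our own assumptions, not code from the paper.

import numpy as np

def sample_topic(v, m, z_old, n_kv, n_k, n_mk, alpha, eta):
    # Remove the current assignment of this word from the counts (the "¬i" part).
    n_kv[z_old, v] -= 1; n_k[z_old] -= 1; n_mk[m, z_old] -= 1
    V = n_kv.shape[1]
    # Unnormalized p(z_i = k | z_¬i, w), as in equation (2).
    p = (n_kv[:, v] + eta) / (n_k + V * eta) * (n_mk[m, :] + alpha)
    k_new = np.random.choice(len(p), p=p / p.sum())
    # Add the new assignment back into the counts.
    n_kv[k_new, v] += 1; n_k[k_new] += 1; n_mk[m, k_new] += 1
    return k_new

Normalizing p and drawing from it is exactly the O(K) work per word that the faster-sampling methods below try to avoid.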
Faster sampling
The usual approach to draw samples of z using (1) is to compute a normalization constant Z = Σ_{k=1}^{K} p(z_i = k | z_{¬i}, w) to obtain a probability distribution that can be sampled from:
p(z_i = k | z_{¬i}, w) = (1/Z) · (n_{k,v,¬i} + η) / (n_{k,¬i} + V η) · (n_{m,k,¬i} + α)    (3)
This leads to a complexity for each iteration of standard Gibbs sampling of O(N T K), where N T is the total number of words in the corpus, and K is the number of topics.
[PNI+08] proposed a way to avoid computing (1) for each of the K topics by deriving an upper bound on Z using Hölder's inequality and computing (1) for the most probable topics first, leading to a speed-up of up to 8x in the sampling process.
[YMM09] broke (1) into three components and leveraged the sparsity in k of some of them; combined with an efficient storage scheme, this led to a speed-up of the order of 20x.
Parallelism
A complementary approach for scalability is to share the processing among several CPUs/cores, in the same computer (multi core) or in different computers (clusters).
Fine grained parallelism
In most CPU architectures the cost incurred in creating threads/processes and synchronizing data among them can be very significant, making it infeasible to share a task in a fine-grained manner. One exception, however, is Graphics Processing Units (GPUs). Since they were originally designed to parallelize jobs at the pixel level, they are well suited for fine-grained parallelization tasks.
[ MHSO09] proposed to use GPUs to parallelize the sampling at the topic level. Although their work was with collapsed variational Bayesian (CVB) inference [TNW06], it could probably be extended to collapsed Gibbs sampling. It's interesting to note that this kind of parallelization is complementary to the document-level one (see next section), so both can be applied in conjunction.
Coarse grained parallelism
Most of the work on parallelism has been on the document level -each CPU/core is responsible for a set of documents.
Looking at equation (1) it can be seen that in the right hand side we have a document specific variable (n m,k ). Only n k,v (and its sum, n k ), on the left hand side, is shared among all documents. Using this fact, [NASW07] proposed to simply compute a subset of the documents in each CPU, synchronizing the global counts (n k,v ) at the end of each step. This is an approximation, since we are no longer sampling from the true distribution, but from a noisy version of it. They showed, however, that it works well in practice. They also proposed a more principled way of sharing the task using a hierarchical model and, even though that was more costly, the results were similar.
[ASW08] proposed a similar idea, but with an asynchronous model, where there is no global synchronization step (as there is in [NASW07]).
Our method
We follow [ASW08] and use coarse-grained asynchronous parallelism, dividing the task at the document level. For simplicity, we split the M documents among the C CPUs equally, so that each CPU receives M/C documents. We then proceed in the usual manner, with each CPU running standard Gibbs sampling on its set of documents. Each CPU, however, keeps a copy of all its modifications to n_{k,v} and, at the end of each iteration, stores them in a file on a shared filesystem. Right after that, it reads all modifications stored by other CPUs and incorporates them into its n_{k,v}. This works in an asynchronous manner, with each CPU saving its modifications and reading the other CPUs' modifications at the end of each iteration. The algorithm is detailed in Algorithm 1.
Algorithm 1 Simple sharing
Input: α, η, K, D_train, C, num_iter
Randomly initialize z_{m,n}, updating n_{k,v} and n^l_{k,v} accordingly.
Save n^l_{k,v} to a file
for t = 1 to num_iter do
    Run collapsed Gibbs sampling, updating z_{m,n}, n_{k,v} and n^l_{k,v}
    Save n^l_{k,v} to a file
    Load modifications to n_{k,v} from other CPUs
end for
We first note that, in this simple algorithm, the complexity of the sampling step is O(N_c K) (where N_c is the number of words being processed in CPU c), while the synchronization part takes O(CKV) (we save a K × V matrix once and load it C − 1 times). Plugging in the following values, based on a standard large-scale task:
• K = 500 topics • C = 100 CPUs • N c = 10 7 words • V = 10 5 terms
we get similar values for the sampling and the synchronization steps. That, however, doesn't take into account the constants. In our experiments, with these parameters a sampling step will take approximately 500 seconds, while the synchronization will take around 20,000 seconds (assuming a 1Gbit/s ethernet connection shared among all CPUs). The bottleneck is clearly in the synchronization step.
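For concreteness, a worked check using only the figures quoted above: the sampling step touches N_c · K = 10^7 · 500 = 5 × 10^9 count entries per iteration, while synchronization moves C · K · V = 100 · 500 · 10^5 = 5 × 10^9 matrix entries, so the two are of the same order before the constants (per-entry CPU work versus network and filesystem transfer) are taken into account.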
We propose, therefore, a variation of the first algorithm. When saving the modifications at the end of an iteration, only save those that are relevant -more formally, save (in a sparse format) only those items of n l k,v for which
n^l_{k,v} / n_{k,v} > threshold    (4)
where threshold is a parameter that can range from 0 to 1. The algorithm is detailed in Algorithm 2. Note that setting threshold to zero recovers Algorithm 1.
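A minimal sketch of the filtering step in equation (4), assuming the per-CPU deltas n^l_{k,v} and the global counts n_{k,v} are kept as NumPy arrays; names are illustrative, and the actual serialization to the shared filesystem is not shown.

import numpy as np

def sparse_updates(n_l_kv, n_kv, threshold):
    # Keep only the "relevant" local count deltas per equation (4): entries whose
    # local contribution is a large enough fraction of the global count.
    # Written as n_l > threshold * n to avoid dividing by zero counts.
    mask = n_l_kv > threshold * n_kv
    ks, vs = np.nonzero(mask)
    return list(zip(ks.tolist(), vs.tolist(), n_l_kv[mask].tolist()))

Each CPU would write only these (k, v, value) triples instead of the full K × V matrix, which is what reduces the synchronization cost.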
Experiments
Datasets
We ran our experiments on three datasets: NIPS full papers (books.nips.cc), Enron emails (www.cs.cmu.edu/∼enron) and KOS (dailykos.com). Each dataset was split

Algorithm 2 Sparse sharing
Input: α, η, K, D_train, C, num_iter
Randomly initialize z_{m,n}, updating n_{k,v} and n^l_{k,v} accordingly.
Save n^l_{k,v} to a file
for t = 1 to num_iter do
    Run collapsed Gibbs sampling, updating z_{m,n}, n_{k,v} and n^l_{k,v}
    for k = 1 to K do
        for v = 1 to V do
            Save n^l_{k,v} if n^l_{k,v} / n_{k,v} > threshold
        end for
    end for
    Load modifications to n_{k,v} from other CPUs
end for

All experiments were run in a cluster of 11 machines, each one with a dual-core AMD64 2.4 GHz CPU and 8 GB of RAM (22 CPUs total). All machines share a network file system over a 1 Gb Ethernet network.
We used a fixed set of LDA parameters: K = 50 (unless otherwise noticed), α = 0.1, η = 0.01 and 1500 iterations of the Gibbs sampler. To compare the quality of different approximations we computed the perplexity of a held-out test set. The perplexity is commonly used in language modeling: it is equivalent to the inverse of the geometric mean per-word likelihood. Formally, given a test set of M test documents:
perplexity(D_test) = exp( − ( Σ_{m=1}^{M_test} Σ_{n=1}^{N_m} log p(w_{m,n}) ) / ( Σ_{m=1}^{M_test} N_m ) )    (5)
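A minimal sketch of equation (5), assuming log_p_w holds one array of word log-likelihoods log p(w_{m,n}) per held-out document (how those likelihoods are computed from the trained model is not shown):

import numpy as np

def perplexity(log_p_w):
    # Sum the log-likelihood of every word in every test document ...
    total_log_lik = sum(float(np.sum(doc)) for doc in log_p_w)
    # ... divide by the total number of test words, negate and exponentiate.
    total_words = sum(len(doc) for doc in log_p_w)
    return float(np.exp(-total_log_lik / total_words))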
Results
In figure 2 we compare running time and perplexity for different values of threshold and different numbers of CPUs. We can see that as we increase threshold we can significantly reduce training time, with just a small impact on the quality of the approximation, measured by the perplexity computed on a held-out test set. We can also see that, as expected, the training time reduction becomes more significant as we increase the amount of information that has to be shared, by adding more CPUs to the task.
In figure 3 we show the proportion of time spent in synchronization at each iteration when training the LDA model with different numbers of CPUs. By increasing threshold we can substantially decrease synchronization time. As expected, as the number of CPUs increase synchronization starts to dominate over processing time.
In figure 4 we show the amount of information saved at each step for different values of threshold. We see that in the first few iterations the savings obtained by Algorithm 2 are small, since almost all modifications are relevant, but as the model converges the amount of relevant information stabilizes at a lower level. We can also see that as we add more CPUs the savings become more prominent -this is expected, since then modifications of a single CPU tend to be less relevant as it becomes responsible for a smaller proportion of the corpus.
In figure 5 we plot the speed-up obtained for different number of CPUs with different values of threshold. We see that the simple sharing method (Algorithm 1), which corresponds to threshold = 0, fails to get a significant improvement, except for small clusters of 4 CPUs. With sparse sharing (threshold > 0), however, we can get speedups of more than 7x for 8 CPUs, and more than 12x for 16 CPUs. This can also be seen in figure 6, where we plot the speed-up for different number of CPUs for both algorithms.
We would like to note that the datasets used are relatively small, as are the number of topics (k = 50), leading to tasks that are not well suited for parallelization with a large number of CPUs. The purpose of these experiments was simply to measure the effects of the approximation proposed in Algorithm 2 -for greater speed-ups when working with hundreds of CPUs a larger dataset or number of topics would be required. As an example we ran experiments with k = 500, and as can be seen in figure 7, we can get speed-ups closer to the theoretical limit.
To get some perspective on the significance of the approximations being used, in figure 8 we compare our results to a variational Bayes inference implementation. We used the code from [BNJ03], with its default parameters and α fixed to 0.1, as in the Gibbs experiments. As can be seen, not only is the Gibbs sampler substantially faster, its perplexity results are also better, even with all the approximations.
Conclusion and Discussion
We proposed a simple method to reduce the amount of time spent in synchronization in a distributed implementation of LDA. We present empirical results showing a reasonable speed-up, at the cost of a small reduction in the quality of the learned model. The method is tunable, allowing a trade-off between speed and accuracy, and is completely asynchronous. Source code is available at the first author's web page. As future work we plan to look for more efficient ways of sharing information among CPUs, while also applying the method to larger datasets, where we expect to see more significant speed-up improvements.
| 2,635 |
0909.4603
|
1584512231
|
We investigate the problem of learning a topic model - the well-known Latent Dirichlet Allocation - in a distributed manner, using a cluster of C processors and dividing the corpus to be learned equally among them. We propose a simple approximated method that can be tuned, trading speed for accuracy according to the task at hand. Our approach is asynchronous, and therefore suitable for clusters of heterogenous machines.
|
@cite_7 broke eq:z_sampling in three components and took leverage on the resulting sparsity in @math of some of them -- that, combined with an efficient storage scheme led to a speed up of the order of 20x.
|
{
"abstract": [
"Presents parameter estimation methods common with discrete proba- bility distributions, which is of particular interest in text modeling. Starting with maximum likelihood, a posteriori and Bayesian estimation, central concepts like conjugate distributions and Bayesian networks are reviewed. As an application, the model of latent Dirichlet allocation (LDA) is explained in detail with a full derivation of an approximate inference algorithm based on Gibbs sampling, in- cluding a discussion of Dirichlet hyperparameter estimation. Finally, analysis methods of LDA models are discussed."
],
"cite_N": [
"@cite_7"
],
"mid": [
"2110591510"
]
}
|
Scalable Inference for Latent Dirichlet Allocation
|
Very large datasets are becoming increasingly common -from specific collections, such as Reuters and PubMed, to very broad and large ones, such as the images and metadata of sites like Flickr, scanned books of sites like Google Books and the whole internet content itself. Topic models, such as Latent Dirichlet Allocation (LDA), have proved to be a useful tool to model such collections, but suffer from scalability limitations. Even though there has been some recent advances in speeding up inference for such models, this still remains a fundamental open problem.
Latent Dirichlet Allocation
Before introducing our method we briefly describe the Latent Dirichlet Allocation (LDA) topic model [BNJ03]. In LDA (see Figure 1), each document is modeled as a mixture over K topics, and each topic has a multinomial distribution β k over a vocabulary of V words (please refer to table 1 for a summary of the notation used throughout this paper). For a given document m we first draw a topic distribution θ m from a Dirichlet distribution parametrized by α. Then, for each word n in the document we draw a topic z m,n from a multinomial distribution with parameter θ m . Finally, we draw the word n from the multinomial distribution parametrized by β zm,n . In Appendix A.4, we show that the M-step update for the conditional multinomial parameter # can be written out analytically:
β_{ij} ∝ Σ_{d=1}^{M} Σ_{n=1}^{N_d} φ*_{dni} w^j_{dn}.    (9)
We further show that the M-step update for Dirichlet parameter ! can be implemented using an efficient Newton-Raphson method in which the Hessian is inverted in linear time.
Smoothing
The large vocabulary size that is characteristic of many document corpora creates serious problems of sparsity. A new document is very likely to contain words that did not appear in any of the documents in a training corpus. Maximum likelihood estimates of the multinomial parameters assign zero probability to such words, and thus zero probability to new documents. The standard approach to coping with this problem is to "smooth" the multinomial parameters, assigning positive probability to all vocabulary items whether or not they are observed in the training set (Jelinek, 1997). Laplace smoothing is commonly used; this essentially yields the mean of the posterior distribution under a uniform Dirichlet prior on the multinomial parameters. Unfortunately, in the mixture model setting, simple Laplace smoothing is no longer justified as a maximum a posteriori method (although it is often implemented in practice; cf. Nigam et al., 1999). In fact, by placing a Dirichlet prior on the multinomial parameter we obtain an intractable posterior in the mixture model setting, for much the same reason that one obtains an intractable posterior in the basic LDA model. Our proposed solution to this problem is to simply apply variational inference methods to the extended model that includes Dirichlet smoothing on the multinomial parameter.
In the LDA setting, we obtain the extended graphical model shown in Figure 7. We treat # as a k × V random matrix (one row for each mixture component), where we assume that each row is independently drawn from an exchangeable Dirichlet distribution. 2 We now extend our inference procedures to treat the # i as random variables that are endowed with a posterior distribution, 2. An exchangeable Dirichlet is simply a Dirichlet distribution with a single scalar parameter $. The density is the same as a Dirichlet (Eq. 1) where ! i = $ for each component.
Inference in LDA
Many inference algorithms for LDA have been proposed, such as variational Bayesian (VB) inference [BNJ03], expectation propagation (EP) [ML02], collapsed Gibbs sampling [GS04,Hei04] and collapsed variational Bayesian (CVB) inference [TNW06]. In this paper we will focus on collapsed Gibbs sampling.
Collapsed Gibbs sampling
Collapsed Gibbs sampling is an MCMC method that works by iterating over each of the latent topic variables z 1 , ..., z n , sampling each z i from P (z i |z ¬i ). This is done by integrating out the other latent variables (θ and β). We are not going to dwell on the details here, since this has already been well explained in [GS04,Hei04], but in essence what we need to do is to sample from this distribution:
p(z_i = k | z_{¬i}, w) ∝ (n_{k,v,¬i} + η) / ( Σ_{v'=1}^{V} (n_{k,v',¬i} + η) ) · (n_{m,k,¬i} + α)    (1)
∝ (n_{k,v,¬i} + η) / (n_{k,¬i} + V η) · (n_{m,k,¬i} + α)    (2)
In simple terms, to sample the topic of a word of a document given all the other words and topics we need, for each k in {1, . . . , K}:
1. n k,v,¬i : the total number of times the word's term has been observed with topic k (excluding the word we are sampling for).
2. n k,¬i : the total number of times topic k has been observed in all documents (excluding the word we are sampling for).
3. n m,k,¬i : the number of times topic k has been observed in a word of this document (excluding the word we are sampling for).
Faster sampling
The usual approach to draw samples of z using (1) is to compute a normalization constant Z = K k=1 p(z i = k|z ¬i , w) to obtain a probabily distribution that can be sampled from:
p(z_i = k | z_{¬i}, w) = (1/Z) · (n_{k,v,¬i} + η) / (n_{k,¬i} + V η) · (n_{m,k,¬i} + α)    (3)
This leads to a complexity for each iteration of standard Gibbs sampling of O(N T K), where N T is the total number of words in the corpus, and K is the number of topics.
[PNI + 08] proposed a way to avoid computing (1) for each K by getting an upper bound on Z using Holder's inequality and computing (1) for the most probable topics first, leading to a speed up of up to 8x of the sampling process.
[YMM09] broke (1) in three components and took leverage on the resulting sparsity in k of some of them -that, combined with an efficient storage scheme led to a speed up of the order of 20x.
Parallelism
A complementary approach for scalability is to share the processing among several CPUs/cores, in the same computer (multi core) or in different computers (clusters).
Fine grained parallelism
In most CPU architectures the cost incurred in creating threads/processes and synchronizing data among them can be very significant, making it infeasible to share a task in a fine-grained manner. One exception, however, are Graphics Processing Units (GPUs). Since they were originally designed to parallelize jobs in the pixel level, they are well suited for fine-grained parallelization tasks.
[ MHSO09] proposed to use GPUs to parallelize the sampling at the topic level. Although their work was with collapsed variational Bayesian (CVB) inference [TNW06], it could probably be extended to collapsed Gibbs sampling. It's interesting to note that this kind of parallelization is complementary to the document-level one (see next section), so both can be applied in conjunction.
Coarse grained parallelism
Most of the work on parallelism has been on the document level -each CPU/core is responsible for a set of documents.
Looking at equation (1) it can be seen that in the right hand side we have a document specific variable (n m,k ). Only n k,v (and its sum, n k ), on the left hand side, is shared among all documents. Using this fact, [NASW07] proposed to simply compute a subset of the documents in each CPU, synchronizing the global counts (n k,v ) at the end of each step. This is an approximation, since we are no longer sampling from the true distribution, but from a noisy version of it. They showed, however, that it works well in practice. They also proposed a more principled way of sharing the task using a hierarchical model and, even though that was more costly, the results were similar.
[ASW08] proposed a similar idea, but with an asynchronous model, where there is no global synchronization step (as there is in [NASW07]).
Our method
We follow [ASW08] and work in a coarse-grained asynchronous parallelism, dividing the task at the document level. For simplicity, we split the M documents among the C CPUs equally, so that each CPU receives M C documents 1 . We then proceed in the usual manner, with each CPU running the standard Gibbs sampling in its set of documents. Each CPU, however, keeps a copy of all its modifications to n k,v and, at the end of each iteration, stores them in a file in a shared filesystem. Right after that, it reads all modifications stored by other CPUs and incorporates them to its n k,v . This works in an asynchronous manner, with each CPU saving its modifications and reading other CPU's modifications at the end of each iteration. The algorithm is detailed in 1.
Algorithm 1 Simple sharing
Input: α, η, K, D train , C, num iter Randomly initialize z m,n , updating n k,v and n l k,v accordingly. Save n l k,v to a file for t = 1 to num iter do Run collapsed Gibbs sampling, updating z m,n , n k,v and n l k,v
Save n l k,v to a file Load modifications to n k,v from other CPUs end for
We first note that, in this simple algorithm, the complexity of the sampling step is O(N c K) (whre N c is the number of words being processed in CPU c), while the synchronization part takes O(CKV ) (we save a KxV matrix once and load it C − 1 times). Plugging in the following values, based on a standard large scale task:
• K = 500 topics • C = 100 CPUs • N c = 10 7 words • V = 10 5 terms
we get similar values for the sampling and the synchronization steps. That, however, doesn't take into account the constants. In our experiments, with these parameters a sampling step will take approximately 500 seconds, while the synchronization will take around 20,000 seconds (assuming a 1Gbit/s ethernet connection shared among all CPUs). The bottleneck is clearly in the synchronization step.
We propose, therefore, a variation of the first algorithm. When saving the modifications at the end of an iteration, only save those that are relevant -more formally, save (in a sparse format) only those items of n l k,v for which
n^l_{k,v} / n_{k,v} > threshold    (4)
where threshold is a parameter that can range from 0 to 1. The algorithm is detailed in 2. Note that setting threshold to zero recovers Algorithm 1.
Experiments
Datasets
We ran our experiments in three datasets: NIPS full papers (books.nips.cc), Enron emails (www.cs.cmu.edu/∼enron) and KOS (dailykos.com) 2 . Each dataset was split Algorithm 2 Sparse sharing Input: α, η, K, D train , C, num iter Randomly initialize z m,n , updating n k,v and n l k,v accordingly. Save n l k,v to a file for t = 1 to num iter do Run collapsed Gibbs sampling, updating z m,n , n k,v and n l k,v
for k = 1 to K do for v = 1 to V do Save n l k,v if n l k,v
n k,v > threshold end for end for Load modifications to n k,v from other CPUs end for All experiments were ran in a cluster of 11 machines, each one with a dual-core AMD64 2.4 GHz CPU and 8 Gb of RAM (22 CPUs total). All machines share a network file system over an 1GB Ethernet network.
We used a fixed set of LDA parameters: K = 50 (unless otherwise noticed), α = 0.1, η = 0.01 and 1500 iterations of the Gibbs sampler. To compare the quality of different approximations we computed the perplexity of a held-out test set. The perplexity is commonly used in language modeling: it is equivalent to the inverse of the geometric mean per-word likelihood. Formally, given a test set of M test documents:
perplexity(D_test) = exp( − ( Σ_{m=1}^{M_test} Σ_{n=1}^{N_m} log p(w_{m,n}) ) / ( Σ_{m=1}^{M_test} N_m ) )    (5)
Results
In figure 2 we compare running time and perplexity for different values of threshold and different number of CPUs. We can see that as we increase threshold we can significantly reduce training time, with just a small impact on the quality of the approximation, measured by the perplexity computed on a held-out test set. We can also see that, as expected, the training time reduction becomes more significant as we increasing the amount of information that has to be shared, by adding more CPUs to the 6 task.
In figure 3 we show the proportion of time spent in synchronization at each iteration when training the LDA model with different numbers of CPUs. By increasing threshold we can substantially decrease synchronization time. As expected, as the number of CPUs increase synchronization starts to dominate over processing time.
In figure 4 we show the amount of information saved at each step for different values of threshold. We see that in the first few iterations the savings obtained by Algorithm 2 are small, since almost all modifications are relevant, but as the model converges the amount of relevant information stabilizes at a lower level. We can also see that as we add more CPUs the savings become more prominent -this is expected, since then modifications of a single CPU tend to be less relevant as it becomes responsible for a smaller proportion of the corpus.
In figure 5 we plot the speed-up obtained for different number of CPUs with different values of threshold. We see that the simple sharing method (Algorithm 1), which corresponds to threshold = 0, fails to get a significant improvement, except for small clusters of 4 CPUs. With sparse sharing (threshold > 0), however, we can get speedups of more than 7x for 8 CPUs, and more than 12x for 16 CPUs. This can also be seen in figure 6, where we plot the speed-up for different number of CPUs for both algorithms.
We would like to note that the datasets used are relatively small, as are the number of topics (k = 50), leading to tasks that are not well suited for parallelization with a large number of CPUs. The purpose of these experiments was simply to measure the effects of the approximation proposed in Algorithm 2 -for greater speed-ups when working with hundreds of CPUs a larger dataset or number of topics would be required. As an example we ran experiments with k = 500, and as can be seen in figure 7, we can get speed-ups closer to the theoretical limit.
To get some perspective on the significance of the approximations being used, in figure 8 we compare our results to a variational Bayes inference implementation. We used the code from [BNJ03] 3 , with its default parameters, and α fixed to 0.1, as in the Gibbs experiments. As can be seen, not only the Gibbs sampler is substantially faster, its perplexity results are better, even with all the approximations.
Conclusion and Discussion
We proposed a simple method to reduce the amount of time spent in synchronization in a distributed implementation of LDA. We present empirical results showing a reasonable speed-up improvement, at the cost of a small reduction in the quality of the learned model. The method is tunable, allowing a trade off between speed and accuracy, and is completely asynchronous. Source code is available at the first authors' web page. 4 As future work we plan to look for more efficient ways of sharing information among CPUs, while also applying the method to larger datasets, where we expect to see more significative speed-up improvements.
| 2,635 |
0908.0362
|
2951024449
|
This paper studies the problem of utility maximization for clients with delay based QoS requirements in wireless networks. We adopt a model used in a previous work that characterizes the QoS requirements of clients by their delay constraints, channel reliabilities, and delivery ratio requirements. In this work, we assume that the utility of a client is a function of the delivery ratio it obtains. We treat the delivery ratio for a client as a tunable parameter by the access point (AP), instead of a given value as in the previous work. We then study how the AP should assign delivery ratios to clients so that the total utility of all clients is maximized. We apply the techniques introduced in two previous papers to decompose the utility maximization problem into two simpler problems, a CLIENT problem and an ACCESS-POINT problem. We show that this decomposition actually describes a bidding game, where clients bid for the service time from the AP. We prove that although all clients behave selfishly in this game, the resulting equilibrium point of the game maximizes the total utility. In addition, we also establish an efficient scheduling policy for the AP to reach the optimal point of the ACCESS-POINT problem. We prove that the policy not only approaches the optimal point but also achieves some forms of fairness among clients. Finally, simulation results show that our proposed policy does achieve higher utility than all other compared policies.
|
There has been a lot of research on providing QoS over wireless channels. Most of the research has focused on admission control and scheduling policies. Hou, Borkar, and Kumar @cite_13 and Hou and Kumar @cite_11 have proposed analytical models to characterize QoS requirements, and have also proposed both admission control and scheduling policies. Ni, Romdhani, and Turletti @cite_5 provides an overview of the IEEE 802.11 mechanisms and discusses the limitations and challenges in providing QoS in 802.11. Gao, Cai, and Ngan @cite_6 , Niyato and Hossain @cite_14 , and Ahmed @cite_0 have surveyed existing admission control algorithms in different types of wireless networks. On the other hand, Fattah and Leung @cite_8 and Cao and Li @cite_1 have provided extensive surveys on scheduling policies for providing QoS.
|
{
"abstract": [
"This article presents a survey on the issues and the approaches related to designing call admission control schemes for fourth-generation wireless systems. We review the state of the art of CAC algorithms used in the traditional wireless networks. The major challenges in designing the CAC schemes for 4G wireless networks are identified. These challenges are mainly due to heterogeneous wireless access environments, provisioning of quality of service to multiple types of applications with different requirements, provisioning for adaptive bandwidth allocation, consideration of both call-level and packet-level performance measures, and consideration of QoS at both the air interface and the wired Internet. To this end, architecture of a two-tier CAC scheme for a differentiated services cellular wireless network is presented. The proposed CAC architecture is based on the call-level and packet-level QoS considerations at both the wireless and wired parts of the network. A performance analysis model for an example CAC scheme based on this architecture is outlined, and typical numerical results are presented.",
"Scheduling algorithms are important components in the provision of guaranteed quality of service parameters such as delay, delay jitter, packet loss rate, or throughput. The design of scheduling algorithms for mobile communication networks is especially challenging given the highly variable link error rates and capacities, and the. changing mobile station connectivity typically encountered in such networks. This article provides a survey of scheduling techniques for several types of wireless networks. Some of the challenges in designing such schedulers are first discussed. Desirable features and classifications of schedulers are then reviewed. This is followed by a discussion of several, scheduling algorithms which have been proposed for TDMA, CDMA, and multihop packet networks.",
"Scheduling algorithms that support quality of service (QoS) differentiation and guarantees for wireless data networks are crucial to the development of broadband wireless networks. Wireless communication poses special problems that do not exist in wireline networks, such as time-varying channel capacity and location-dependent errors. Although many mature scheduling algorithms are available for wireline networks, they are not directly applicable in wireless networks because of these special problems. This paper provides a comprehensive and in-depth survey on recent research in wireless scheduling. The problems and difficulties in wireless scheduling are discussed. Various representative algorithms are examined. Their themes of thoughts and pros and cons are compared and analyzed. At the end of the paper, some open questions and future research directions are addressed.",
"Although IEEE 802.11 based wireless local area networks have become more and more popular due to low cost and easy deployment, they can only provide best effort services and do not have quality of service supports for multimedia applications. Recently, a new standard, IEEE 802.11e, has been proposed, which introduces a so-called hybrid coordination function containing two medium access mechanisms: contention-based channel access and controlled channel access. In this article we first give a brief tutorial on the various MAC-layer QoS mechanisms provided by 802.11e. We show that the 802.11e standard provides a very powerful platform for QoS supports in WLANs. Then we provide an extensive survey of recent advances in admission control algorithms protocols in IEEE 802.11e WLANs. Our survey covers the research work in admission control for both EDCA and HCCA. We show that the new MAC-layer QoS schemes and parameters provided in EDCA and HCCA can be well utilized to fulfill the requirements of admission control so that QoS for multimedia applications can be provided in WLANs. Last, we give a summary of the design of admission control in EDCA and HCCA, and point out the remaining challenges.",
"",
"Quality-of-service (QoS) is a key problem of today's IP networks. Many frameworks (ImServ, DiffServ, MPLS etc.) have been proposed to provide service differentiation in the Internet. At the same time, the Internet is becoming more and more heterogeneous due to the recent explosion of wireless networks. In wireless environments, bandwidth is scarce and channel conditions are time-varying and sometimes highly lossy. Many previous research works show that what works well in a wired network cannot be directly applied in the wireless environment. Although IEEE 802.11 wireless LAN (WLAN) is the most widely used IEEE 802.11 wireless LAN (WLAN) standard today, it cannot provide QoS support for the increasing number of multimedia applications. Thus. a large number of 802.11 QoS enhancement schemes have been proposed, each one focusing on a particular mode. This paper summarizes all these schemes and presents a survey of current research activities. First, we analyze the QoS limitations of IEEE 802.11 wireless MAC layers. Then, different QoS enhancement techniques proposed for 802.11 WLAN are described and classified along with their advantages drawbacks. Finally, the upcoming IEEE 802.1 le QoS enhancement standard is introduced and studied in detail.",
"Wireless networks are increasingly used to carry applications with QoS constraints. Two problems arise when dealing with traffic with QoS constraints. One is admission control, which consists of determining whether it is possible to fulfill the demands of a set of clients. The other is finding an optimal scheduling policy to meet the demands of all clients. In this paper, we propose a framework for jointly addressing three QoS criteria: delay, delivery ratio, and channel reliability. We analytically prove the necessary and sufficient condition for a set of clients to be feasible with respect to the above three criteria. We then establish an efficient algorithm for admission control to decide whether a set of clients is feasible. We further propose two scheduling policies and prove that they are feasibility optimal in the sense that they can meet the demands of every feasible set of clients. In addition, we show that these policies are easily implementable on the IEEE 802.11 mechanisms. We also present the results of simulation studies that appear to confirm the theoretical studies and suggest that the proposed policies outperform others tested under a variety of settings.",
"Providing differentiated Quality of Service (QoS) over unreliable wireless channels is an important challenge for supporting several future applications. We analyze a model that has been proposed to describe the QoS requirements by four criteria: traffic pattern, channel reliability, delay bound, and throughput bound. We study this mathematical model and extend it to handle variable bit rate applications. We then obtain a sharp characterization of schedulability vis-a-vis latencies and timely throughput. Our results extend the results so that they are general enough to be applied on a wide range of wireless applications, including MPEG Variable-Bit-Rate (VBR) video streaming, VoIP with differentiated quality, and wireless sensor networks (WSN). Two major issues concerning QoS over wireless are admission control and scheduling. Based on the model incorporating the QoS criteria, we analytically derive a necessary and sufficient condition for a set of variable bit-rate clients to be feasible. Admission control is reduced to evaluating the necessary and sufficient condition. We further analyze two scheduling policies that have been proposed, and show that they are both optimal in the sense that they can fulfill every set of clients that is feasible by some scheduling algorithms. The policies are easily implemented on the IEEE 802.11 standard. Simulation results under various settings support the theoretical study."
],
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"2054167523",
"1973473949",
"2153420862",
"2117584078",
"",
"2124441589",
"2167264214",
"1985441584"
]
}
|
Utility Maximization for Delay Constrained QoS in Wireless
|
I. INTRODUCTION We study how to provide QoS to maximize utility for wireless clients. We jointly consider the delay constraint and channel unreliability of each client. The access point (AP) assigns delivery ratios to clients under the delay and reliability constraints. This distinguishes our work from most other work on providing QoS where the delivery ratios to clients are taken as given inputs rather than tunable parameters.
We consider the scenario where there is one AP that serves a set of wireless clients. We extend the model proposed in a previous work [8]. This model analytically describes three important factors for QoS: delay, channel unreliability, and delivery ratio. The previous work also provides a necessary and sufficient condition for the demands of the set of clients to be feasible. In this work, we treat the delivery ratios for clients as variables to be determined by the AP. We assume that each client receives a certain amount of utility when it is provided a delivery ratio. The relation between utility and delivery ratio is described by a utility function, which may differ from client to client. Based on this model, we study the problem of maximizing the total utility of all clients, under feasibility constraints. We show that this problem can be formulated as a convex optimization problem. (This material is based upon work partially supported by USARO under Contract Nos. W911NF-08-1-0238 and W-911-NF-0710287, AFOSR under Contract FA9550-09-0121, and NSF under Contract Nos. CNS-07-21992, ECCS-0701604, CNS-0626584, and CNS-05-19535. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the above agencies.)
Instead of solving the problem directly, we apply the techniques introduced by Kelly [10] and Kelly, Maulloo, and Tan [11] to decompose the problem of system utility maximization into two simpler subproblems that describe the behaviors of the clients and the AP, respectively. We prove that the utility maximization problem can be solved by jointly solving the two simpler subproblems. Further, we describe a bidding game for the reconciliation between the two subproblems. In this game, clients bid for service time from the AP, and the AP assigns delivery ratios to clients according to their bids, to optimize its own subproblem, under feasibility constraints. Based on the AP's behavior, each client aims to maximize its own net utility, that is, the difference between the utility it obtains and the bid it pays. We show that, while all clients behave selfishly in the game, the equilibrium point of the game solves the two subproblems jointly, and hence maximizes the total utility of the system.
We then address how to design a scheduling policy for the AP to solve its subproblem. We propose a very simple priority based scheduling algorithm for the AP. This policy requires no information of the underlying channel qualities of the clients and thus needs no overhead to probe or estimate the channels. We prove that the long-term average performance of this policy converges to a single point, which is in fact the solution to the subproblem for the AP. Further, we also establish that the policy achieves some forms of fairness.
Our contribution is therefore threefold. First, we formulate the problem of system utility maximization as a convex optimization problem. We then show that this problem is amenable to solution by a bidding game. Finally, we propose a very simple priority based AP scheduling policy to solve the AP's subproblem, that can be used in the bidding iteration to reach the optimal point of the system's utility maximization problem.
Finally, we conduct simulation studies to verify all the theoretical results. Simulations show that the performance of the proposed scheduling policy converges quickly to the optimal value of the subproblem for AP. Also, by jointly applying the scheduling policy and the bidding game, we can achieve higher total utility than all other compared policies.
The rest of the paper is organized as follows: Section II reviews some existing related work. Section III introduces the model for QoS proposed in [8] and also summarizes some related results. In Section IV, we formulate the problem of utility maximization as a convex programming problem. We also show that this problem can be decomposed into two subproblems. Section V describes a bidding game that jointly solves the two subproblems. One phase of the bidding game consists of each client selfishly maximizing its own net profit, and the other phase consists of the AP scheduling client transmissions to optimize its subproblem. Section VI addresses the scheduling policy to optimize this latter subproblem. Section VII demonstrates some simulation studies. Finally, Section VIII concludes this paper.
III. SYSTEM MODEL AND FEASIBILITY CONDITION
We adopt the model proposed in a previous work [8] to capture two key QoS requirements, delay constraints and delivery ratio requirements, and incorporating channel conditions for users. In this section, we describe the proposed model and summarize relevant results of [8].
We consider a system with N clients, numbered as {1, 2, . . . , N }, and one access point (AP). Packets for clients arrive at the AP and the AP needs to dispatch packets to clients to meet their respective requirements. We assume that time is slotted, with slots numbered as {0, 1, 2, . . . }. The AP can make exactly one transmission in each time slot. Thus, the length of a time slot would include the times needed for transmitting a DATA packet, an ACK, and possibly other MAC headers. Assume there is one packet arriving at the AP periodically for each client, with a fixed period of τ time slots, at time slots 0, τ, 2τ, . . . . Each packet that arrives at the beginning of a period [kτ, (k+1)τ ) must be delivered within the ensuing period, or else it expires and is dropped from the system at the end of this period. Thus, a delay constraint of τ time slots is enforced on all successfully delivered packet. Further, unreliable and heterogeneous wireless channels to these clients are considered. When the AP makes a transmission for client n, the transmission succeeds (by which is meant the successful deliveries of both the DATA packet and the ACK) with probability p n . Due to the unreliable channels and delay constraint, it may not be possible to deliver the arrived packets of all the clients. Therefore, each client stipulates a certain delivery ratio q n that it has to receive, which is defined as the average proportion of periods in which its packet is successfully delivered. The previous work also shows how this model can be used to capture scenarios where both uplink traffic and downlink traffic exist.
Below we describe the formal definitions of the concepts of fulfilling a set of clients and the feasibility of a set of client requirements.
Definition 1: A set of clients with the above QoS constraints is said to be fulfilled by a particular scheduling policy η of the AP if the time averaged delivery ratio of each client is at least q n with probability 1.
Definition 2: A set of clients is feasible if there exists some scheduling policy of the AP that fulfills it.
Whether a certain client is fulfilled can be decided by the average number of time slots that the AP spends on working for the client per period:
Lemma 1: The delivery ratio of client n converges to q_n with probability one if and only if the work performed on client n, defined as the long-term average number of time slots that the AP spends on working for client n per period, converges to w_n(q_n) = q_n / p_n with probability one. We therefore call w_n(q_n) the workload of client n.
Since expired packets are dropped from the system at the end of each period, there is exactly one packet for each client at the beginning of each period. Therefore, there may be occasions where the AP has delivered all packets before the end of a period and is therefore forced to stay idle for the remaining time slots in the period. Let I_S be the expected number of such forced idle time slots in a period when the client set is just S ⊆ {1, 2, . . . , N} (i.e., all clients except those in S are removed from consideration), and the AP only caters to the subset S of clients. Since each client n ∈ S requires w_n time slots per period on average, we can obtain a necessary condition for feasibility: Σ_{i∈S} w_i(q_i) + I_S ≤ τ, for all S ⊆ {1, 2, . . . , N}. It is shown in [8] that this necessary condition is also sufficient:
Theorem 1: A set of clients, with delivery ratio requirements [q_n], is feasible if and only if Σ_{i∈S} q_i / p_i ≤ τ − I_S, for all S ⊆ {1, 2, . . . , N}.
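For illustration, a minimal Python sketch of the admission-control test in Theorem 1, assuming q and p are lists indexed by client and idle maps each subset S (as a frozenset) to I_S; computing I_S itself is not shown, and the brute-force enumeration of subsets is only meant for small examples.

from itertools import combinations

def feasible(q, p, tau, idle):
    # Check sum_{i in S} q_i / p_i <= tau - I_S for every nonempty subset S.
    n = len(q)
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            load = sum(q[i] / p[i] for i in S)
            if load > tau - idle[frozenset(S)]:
                return False
    return True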
IV. UTILITY MAXIMIZATION AND DECOMPOSITION
In the previous section, it is assumed that the delivery ratio requirements, [q n ], are given and fixed. In this paper, we address the problem of how to choose q := [q n ] so that the total utility of all the clients in the system can be maximized.
We begin by supposing that each client has a certain utility function, U_n(q_n), which is a strictly increasing, strictly concave, and continuously differentiable function over the range 0 < q_n ≤ 1, with the value at 0 defined as the right limit, possibly −∞. The problem of choosing q_n to maximize the total utility, under the feasibility constraint of Theorem 1, can be described by the following convex optimization problem:
SYSTEM:
Max Σ_{i=1}^{N} U_i(q_i)    (1)
s.t. Σ_{i∈S} q_i / p_i ≤ τ − I_S, ∀S ⊆ {1, 2, . . . , N},    (2)
over q_n ≥ 0, ∀1 ≤ n ≤ N.    (3)
It may be difficult to solve SYSTEM directly. So, we decompose it into two simpler problems, namely, CLIENT and ACCESS-POINT, as described below. This decomposition was first introduced by Kelly [10], though in the context of dealing with rate control for non-real-time traffic.
Suppose client n is willing to pay an amount of ρ n per period, and receives a long-term average delivery ratio q n proportional to ρ n , with ρ n = ψ n q n . If ψ n > 0, the utility maximization problem for client n is:
CLIENT_n:
Max U_n(ρ_n / ψ_n) − ρ_n    (4)
over 0 ≤ ρ_n ≤ ψ_n.    (5)
On the other hand, given that client n is willing to pay ρ n per period, we suppose that the AP wishes to find the vector q to maximize N i=1 ρ i log q i , under the feasibility constraints. In other words, the AP has to solve the following optimization problem:
ACCESS-POINT:
$$\text{Max} \quad \sum_{i=1}^{N} \rho_i \log q_i \qquad (6)$$
$$\text{s.t.} \quad \sum_{i\in S} \frac{q_i}{p_i} \le \tau - I_S, \quad \forall S \subseteq \{1, 2, \ldots, N\}, \qquad (7)$$
$$\text{over} \quad q_n \ge 0, \quad \forall 1 \le n \le N. \qquad (8)$$
We begin by showing that solving SYSTEM is equivalent to jointly solving CLIENT_n and ACCESS-POINT.
Theorem 2: There exist non-negative vectors $q$, $\rho := [\rho_n]$, and $\psi := [\psi_n]$, with $\rho_n = \psi_n q_n$, such that:
(i) For $n$ such that $\psi_n > 0$, $\rho_n$ is a solution to CLIENT_n; (ii) Given that each client $n$ pays $\rho_n$ per period, $q$ is a solution to ACCESS-POINT. Further, if $q$, $\rho$, and $\psi$ are all positive vectors, the vector $q$ is also a solution to SYSTEM.
Proof: We will first show the existence of $q$, $\rho$, and $\psi$ that satisfy (i) and (ii). We will then show that the resulting $q$ is also the solution to SYSTEM.
There exists some $\epsilon > 0$ so that by letting $q_n \equiv \epsilon$, the vector $q$ is an interior point of the feasible region for both SYSTEM (2)-(3) and ACCESS-POINT (7)-(8). Also, by setting $\rho_n \equiv \epsilon$, $\rho_n$ is an interior point of the feasible region for CLIENT_n (5). Therefore, by Slater's condition, a feasible point for SYSTEM, CLIENT_n, or ACCESS-POINT is the optimal solution for the respective problem if and only if it satisfies the corresponding Karush-Kuhn-Tucker (KKT) condition for the problem. Further, since the feasible region for each of the problems is compact and the utilities either are continuous on it or converge to $-\infty$ at $q_n = 0$, there exists an optimal solution to each of them.
The Lagrangian of SYSTEM is:
$$L_{SYS}(q, \lambda, \nu) := -\sum_{i=1}^{N} U_i(q_i) + \sum_{S\subseteq\{1,2,\ldots,N\}} \lambda_S\Big[\sum_{i\in S} \frac{q_i}{p_i} - (\tau - I_S)\Big] - \sum_{i=1}^{N} \nu_i q_i,$$
where $\lambda := [\lambda_S : S \subseteq \{1, 2, \ldots, N\}]$ and $\nu := [\nu_n : 1 \le n \le N]$ are the Lagrange multipliers. By the KKT condition, a vector $q^* := [q^*_1, q^*_2, \ldots, q^*_N]$ is the optimal solution to SYSTEM if $q^*$ is feasible and there exist vectors $\lambda^*$ and $\nu^*$ such that:
$$\frac{\partial L_{SYS}}{\partial q_n}\Big|_{q^*,\lambda^*,\nu^*} = -U'_n(q^*_n) + \sum_{S\ni n} \frac{\lambda^*_S}{p_n} - \nu^*_n = 0, \quad \forall 1 \le n \le N, \qquad (9)$$
$$\lambda^*_S\Big[\sum_{i\in S} \frac{q^*_i}{p_i} - (\tau - I_S)\Big] = 0, \quad \forall S \subseteq \{1, 2, \ldots, N\}, \qquad (10)$$
$$\nu^*_n q^*_n = 0, \quad \forall 1 \le n \le N, \qquad (11)$$
$$\lambda^*_S \ge 0, \ \forall S \subseteq \{1, \ldots, N\}, \quad \text{and} \quad \nu^*_n \ge 0, \ \forall 1 \le n \le N. \qquad (12)$$
The Lagrangian of CLIENT_n is:
$$L_{CLI}(\rho_n, \xi_n) := -U_n\!\left(\frac{\rho_n}{\psi_n}\right) + \rho_n - \xi_n \rho_n,$$
where $\xi_n$ is the Lagrange multiplier for CLIENT_n. By the KKT condition, $\rho^*_n$ is the optimal solution to CLIENT_n if $\rho^*_n \ge 0$ and there exists $\xi^*_n$ such that:
$$\frac{d L_{CLI}}{d \rho_n}\Big|_{\rho^*_n,\xi^*_n} = -\frac{1}{\psi_n} U'_n\!\left(\frac{\rho^*_n}{\psi_n}\right) + 1 - \xi^*_n = 0, \qquad (13)$$
$$\xi^*_n \rho^*_n = 0, \qquad (14)$$
$$\xi^*_n \ge 0. \qquad (15)$$
Finally, the Lagrangian of ACCESS-POINT is:
$$L_{NET}(q, \zeta, \mu) := -\sum_{i=1}^{N} \rho_i \log q_i + \sum_{S\subseteq\{1,2,\ldots,N\}} \zeta_S\Big[\sum_{i\in S} \frac{q_i}{p_i} - (\tau - I_S)\Big] - \sum_{i=1}^{N} \mu_i q_i,$$
where $\zeta := [\zeta_S : S \subseteq \{1, 2, \ldots, N\}]$ and $\mu := [\mu_n : 1 \le n \le N]$ are the Lagrange multipliers. Again, by the KKT condition, a vector $q^* := [q^*_n]$ is the optimal solution to ACCESS-POINT if $q^*$ is feasible and there exist vectors $\zeta^*$ and $\mu^*$ such that:
$$\frac{\partial L_{NET}}{\partial q_n}\Big|_{q^*,\zeta^*,\mu^*} = -\frac{\rho_n}{q^*_n} + \sum_{S\ni n} \frac{\zeta^*_S}{p_n} - \mu^*_n = 0, \quad \forall 1 \le n \le N, \qquad (16)$$
$$\zeta^*_S\Big[\sum_{i\in S} \frac{q^*_i}{p_i} - (\tau - I_S)\Big] = 0, \quad \forall S \subseteq \{1, 2, \ldots, N\}, \qquad (17)$$
$$\mu^*_n q^*_n = 0, \quad \forall 1 \le n \le N, \qquad (18)$$
$$\zeta^*_S \ge 0, \ \forall S \subseteq \{1, \ldots, N\}, \quad \text{and} \quad \mu^*_n \ge 0, \ \forall 1 \le n \le N. \qquad (19)$$
Let $q^*$ be a solution to SYSTEM, and let $\lambda^*$, $\nu^*$ be the corresponding Lagrange multipliers that satisfy conditions (9)-(12). Let $q_n = q^*_n$, $\rho_n = \big(\sum_{S\ni n} \frac{\lambda^*_S}{p_n}\big) q^*_n$, and $\psi_n = \sum_{S\ni n} \frac{\lambda^*_S}{p_n}$, for all $n$. Clearly, $q$, $\rho$, and $\psi$ are all non-negative vectors. We will show that $(q, \rho, \psi)$ satisfy (i) and (ii).
We first show (i) for all $n$ such that $\psi_n = \sum_{S\ni n} \frac{\lambda^*_S}{p_n} > 0$. It is obvious that $\rho_n = \psi_n q_n$. Also, $\rho_n \ge 0$, since $\lambda^*_S \ge 0$ (by (12)) and $q^*_n \ge 0$ (since $q^*$ is feasible). Further, let the Lagrange multiplier of CLIENT_n, $\xi_n$, be equal to $\nu^*_n \big/ \sum_{S\ni n} \frac{\lambda^*_S}{p_n} = \nu^*_n/\psi_n$. Then we have:
$$\frac{\partial L_{CLI}}{\partial \rho_n}\Big|_{\rho_n,\xi_n} = -\frac{1}{\psi_n} U'_n\!\left(\frac{\rho_n}{\psi_n}\right) + 1 - \xi_n = \frac{1}{\psi_n}\Big(-U'_n\!\left(\frac{\rho_n}{\psi_n}\right) + \psi_n - \psi_n \xi_n\Big) = \frac{1}{\psi_n}\Big(-U'_n(q^*_n) + \sum_{S\ni n} \frac{\lambda^*_S}{p_n} - \nu^*_n\Big) = 0, \ \text{by (9)},$$
$$\xi_n \rho_n = \frac{\nu^*_n}{\psi_n} \psi_n q^*_n = \nu^*_n q^*_n = 0, \ \text{by (11)},$$
$$\xi_n = \nu^*_n \Big/ \sum_{S\ni n} \frac{\lambda^*_S}{p_n} \ge 0, \ \text{by (12)}.$$
In sum, $(\rho, \psi, \xi)$ satisfies the KKT conditions for CLIENT_n, and therefore $\rho_n$ is a solution to CLIENT_n, with $\rho_n = \psi_n q_n$. Next we establish (ii). Since $q = q^*$ is the solution to SYSTEM, it is feasible. Let the Lagrange multipliers of ACCESS-POINT be $\zeta_S = \lambda^*_S$, $\forall S$, and $\mu_n = 0$, $\forall n$, respectively. Given that each client $n$ pays $\rho_n$ per period, we have:
$$\frac{\partial L_{NET}}{\partial q_n}\Big|_{q,\zeta,\mu} = -\frac{\rho_n}{q_n} + \sum_{S\ni n} \frac{\zeta_S}{p_n} - \mu_n = -\psi_n + \psi_n - 0 = 0, \quad \forall n,$$
$$\zeta_S\Big[\sum_{i\in S} \frac{q_i}{p_i} - (\tau - I_S)\Big] = \lambda^*_S\Big[\sum_{i\in S} \frac{q^*_i}{p_i} - (\tau - I_S)\Big] = 0, \quad \forall S, \ \text{by (10)},$$
$$\mu_n q_n = 0 \times q_n = 0, \quad \forall n,$$
$$\zeta_S = \lambda^*_S \ge 0, \ \forall S \ \text{(by (12))}, \quad \text{and} \quad \mu_n \ge 0, \ \forall n.$$
Therefore, $(q, \zeta, \mu)$ satisfies the KKT condition for ACCESS-POINT and thus $q$ is a solution to ACCESS-POINT. For the converse, suppose $(q, \rho, \psi)$ are positive vectors with $\rho_n = \psi_n q_n$, for all $n$, that satisfy (i) and (ii). We wish to show that $q$ is a solution to SYSTEM. Let $\xi_n$ be the Lagrange multiplier for CLIENT_n. Since we assume $\psi_n > 0$ for all $n$, the problem CLIENT_n is well-defined for all $n$, and so is $\xi_n$. Also, let $\zeta$ and $\mu$ be the Lagrange multipliers for ACCESS-POINT. Since $q_n > 0$ for all $n$, we have $\mu_n = 0$ for all $n$ by (18). By (16), we also have:
$$\frac{\partial L_{NET}}{\partial q_n}\Big|_{q,\zeta,\mu} = -\frac{\rho_n}{q_n} + \sum_{S\ni n} \frac{\zeta_S}{p_n} - \mu_n = -\psi_n + \sum_{S\ni n} \frac{\zeta_S}{p_n} = 0,$$
and thus $\psi_n = \sum_{S\ni n} \frac{\zeta_S}{p_n}$. Let $\lambda_S = \zeta_S$, for all $S$, and $\nu_n = \psi_n \xi_n$, for all $n$. We claim that $q$ is the optimal solution to SYSTEM with Lagrange multipliers $\lambda$ and $\nu$.
Since $q$ is a solution to ACCESS-POINT, it is feasible. Further, we have:
$$\frac{\partial L_{SYS}}{\partial q_n}\Big|_{q,\lambda,\nu} = -U'_n(q_n) + \sum_{S\ni n} \frac{\lambda_S}{p_n} - \nu_n = -U'_n\!\left(\frac{\rho_n}{\psi_n}\right) + \psi_n - \psi_n \xi_n = 0, \quad \forall n, \ \text{by (13)},$$
$$\lambda_S\Big[\sum_{n\in S} \frac{q_n}{p_n} - (\tau - I_S)\Big] = \zeta_S\Big[\sum_{n\in S} \frac{q_n}{p_n} - (\tau - I_S)\Big] = 0, \quad \forall S, \ \text{by (17)},$$
$$\nu_n q_n = \xi_n \rho_n = 0, \quad \forall n, \ \text{by (14)},$$
$$\lambda_S = \zeta_S \ge 0, \ \forall S, \ \text{by (19)}, \quad \text{and} \quad \nu_n = \psi_n \xi_n \ge 0, \ \forall n, \ \text{by (15)}.$$
Thus, $(q, \lambda, \nu)$ satisfies the KKT condition for SYSTEM, and so $q$ is a solution to SYSTEM.
V. A BIDDING GAME BETWEEN CLIENTS AND ACCESS POINT
Theorem 2 states that the maximum total utility of the system can be achieved when the solutions to the problems CLIENT_n and ACCESS-POINT agree. In this section, we formulate a repeated game for such reconciliation. We also discuss the meanings of the problems CLIENT_n and ACCESS-POINT in this repeated game.
The repeated game is formulated as follows:
Step 1: Each client n announces an amount ρ n that it pays per period.
Step 2: After noting the amounts, $\rho_1, \rho_2, \ldots, \rho_N$, paid by the clients, the AP chooses a scheduling policy so that the resulting long-term delivery ratio, $q_n$, for each client maximizes $\sum_{i=1}^{N} \rho_i \log q_i$.
Step 3: Client $n$ observes its own delivery ratio, $q_n$. It computes $\psi_n := \rho_n/q_n$. It then determines $\rho^*_n \ge 0$ to maximize $U_n(\rho^*_n/\psi_n) - \rho^*_n$. Client $n$ updates the amount it pays to $(1-\alpha)\rho_n + \alpha\rho^*_n$, with some fixed $0 < \alpha < 1$, and announces the new bid value. (A sketch of one round of this iteration is given after Step 4.)
Step 4: Go back to Step 2.
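The following sketch of the repeated game is ours and makes two simplifying assumptions that are not in the paper: the AP step uses the proportional allocation $q_n = p_n\rho_n(\tau - I_{TOT})/\sum_i \rho_i$ derived for the special case later in this section (assumed feasible, with $q_n$ capped at one), and the client step is solved by a crude grid search so that any concave utility can be plugged in.

```python
import math

def ap_allocation(rho, p, tau, I_TOT):
    """Step 2 in the special case of Section V: non-idle slots are shared in
    proportion to the bids. Assumes the resulting q is feasible."""
    total = sum(rho)
    return [min(1.0, p[n] * rho[n] * (tau - I_TOT) / total) for n in range(len(p))]

def client_best_response(U, psi, grid=2000):
    """Step 3: maximize U(rho/psi) - rho over 0 < rho <= psi by grid search."""
    best_rho, best_val = 0.0, float("-inf")
    for k in range(1, grid + 1):
        rho = psi * k / grid
        val = U(rho / psi) - rho
        if val > best_val:
            best_rho, best_val = rho, val
    return best_rho

def bidding_game(U_list, p, tau, I_TOT, alpha=0.3, iters=200):
    rho = [1.0] * len(p)                              # Step 1: initial positive bids
    for _ in range(iters):
        q = ap_allocation(rho, p, tau, I_TOT)         # Step 2
        for n in range(len(p)):                       # Step 3 with damping factor alpha
            psi = rho[n] / q[n]
            rho_star = client_best_response(U_list[n], psi)
            rho[n] = (1 - alpha) * rho[n] + alpha * rho_star
    return rho, ap_allocation(rho, p, tau, I_TOT)

if __name__ == "__main__":
    p = [0.6, 0.8, 0.9]
    utilities = [lambda q, g=g: g * math.log(q) for g in (1.0, 2.0, 3.0)]
    print(bidding_game(utilities, p, tau=6, I_TOT=1.2))
```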
In Step 3 of the game, client $n$ chooses its new amount of payment as a weighted average of the past amount and the derived optimal value, instead of using the derived optimal value directly. This design serves two purposes. First, it seeks to prevent the system from oscillating between two extreme values. Second, since $\rho_n$ is initialized to a positive value, and the $\rho^*_n$ derived in each iteration is always non-negative, this design guarantees that $\rho_n$ remains positive throughout all iterations. Since $\psi_n = \rho_n/q_n$, this also ensures $\psi_n > 0$, and the function $U_n(\rho_n/\psi_n)$ is consequently always well-defined. We show that the fixed point of this repeated game maximizes the total utility of the system:
Theorem 3: Suppose at the fixed point of the repeated game, each client $n$ pays $\rho^*_n$ per period and receives delivery ratio $q^*_n$. If both $\rho^*_n$ and $q^*_n$ are positive for all $n$, the vector $q^*$ maximizes the total utility of the system.
Proof:
Let $\psi^*_n = \rho^*_n/q^*_n$.
It is positive since both $\rho^*_n$ and $q^*_n$ are positive. Since the vectors $q^*$ and $\rho^*$ are derived from the fixed point, $\rho^*_n$ maximizes $U_n(\rho_n/\psi^*_n) - \rho_n$ over all $\rho_n \ge 0$, as described in Step 3 of the game. Thus, $\rho^*_n$ is a solution to CLIENT_n, given $\rho^*_n = \psi^*_n q^*_n$. Similarly, from Step 2, $q^*$ is the feasible vector that maximizes $\sum_{i=1}^{N} \rho^*_i \log q_i$ over all feasible vectors $q$. Thus, $q^*$ is a solution to ACCESS-POINT, given that each client $n$ pays $\rho^*_n$ per period. By Theorem 2, $q^*$ is a solution to SYSTEM and therefore maximizes the total utility of the system.
Next, we describe the meaning of the game. In Step 3, client $n$ assumes a linear relation between the amount it pays, $\rho_n$, and the delivery ratio it receives, $q_n$. More precisely, it assumes $\rho_n = \psi_n q_n$, where $\psi_n$ is the price. Thus, maximizing $U_n(\rho_n/\psi_n) - \rho_n$ is equivalent to maximizing $U_n(q_n) - \rho_n$. Recall that $U_n(q_n)$ is the utility that client $n$ obtains when it receives delivery ratio $q_n$; $U_n(q_n) - \rho_n$ is therefore the net profit that client $n$ gets. In short, in
Step 3, the goal of client n is to selfishly maximize its own net profit using a first order linear approximation to the relation between payment and delivery ratio.
We next discuss the behavior of the AP in Step 2. The AP schedules clients so that the resulting delivery ratio vector $q$ is a solution to the problem ACCESS-POINT, given that each client $n$ pays $\rho_n$ per period. Thus, $q$ is feasible and there exist vectors $\zeta$ and $\mu$ that satisfy conditions (16)-(19). While it is difficult to solve this problem, we consider a special restrictive case that gives us a simple solution and insights into the AP's behavior. Let $TOT := \{1, 2, \ldots, N\}$ be the set that consists of all clients. We assume that a solution $(q, \zeta, \mu)$ to the problem has the following properties: $\zeta_S = 0$, for all $S \neq TOT$, $\zeta_{TOT} > 0$, and $\mu_n = 0$, for all $n$. By (16), we have:
$$-\frac{\rho_n}{q_n} + \sum_{S\ni n} \frac{\zeta_S}{p_n} - \mu_n = -\frac{\rho_n}{q_n} + \frac{\zeta_{TOT}}{p_n} = 0,$$
and therefore $q_n = p_n \rho_n/\zeta_{TOT}$. Further, since $\zeta_{TOT} > 0$, (17) requires that:
$$\sum_{i\in TOT} \frac{q_i}{p_i} - (\tau - I_{TOT}) = \sum_{i\in TOT} \frac{\rho_i}{\zeta_{TOT}} - (\tau - I_{TOT}) = 0.$$
Thus, $\zeta_{TOT} = \frac{\sum_{i=1}^{N}\rho_i}{\tau - I_{TOT}}$ and $\frac{q_n}{p_n} = \frac{\rho_n}{\sum_{i=1}^{N}\rho_i}(\tau - I_{TOT})$, for all $n$. Notice that the derived $(q, \zeta, \mu)$ satisfies conditions (16)-(19). Thus, under the assumption that $q$ is feasible, this special case actually maximizes $\sum_{i=1}^{N} \rho_i \log q_i$. In Section VI we will address the general situation without any such assumption, since it need not be true.
Recall that $I_{TOT}$ is the average number of time slots that the AP is forced to be idle in a period after it has completed all clients. Also, by Lemma 1, $\frac{q_n}{p_n}$ is the workload of client $n$, that is, the average number of time slots that the AP should spend working for client $n$. Thus, by letting $\frac{q_n}{p_n} = \frac{\rho_n}{\sum_{i=1}^{N}\rho_i}(\tau - I_{TOT})$, for all $n$, the AP tries to allocate the non-idle time slots so that the average number of time slots each client gets is proportional to its payment. Although we only study this special case here, we will show that the same behavior also holds for the general case in Section VI.
In summary, the game proposed in this section actually describes a bidding game, where clients are bidding for non-idle time slots. Each client gets a share of time slots that is proportional to its bid. The AP thus assigns delivery ratios, based on which the clients calculate a price and selfishly maximize their own net profits. Finally, Theorem 3 states that the equilibrium point of this game maximizes the total utility of the system.
VI. A SCHEDULING POLICY FOR SOLVING
ACCESS-POINT
In Section V, we have shown that by setting $q_n = p_n \frac{\rho_n}{\sum_{i=1}^{N}\rho_i}(\tau - I_{TOT})$, the resulting vector $q$ solves ACCESS-POINT provided $q$ is indeed feasible. Unfortunately, such a $q$ is not always feasible, and solving ACCESS-POINT is in general difficult. Even for the special case discussed in Section V, solving ACCESS-POINT requires knowledge of the channel conditions, that is, $p_n$. In this section, we propose a very simple priority-based scheduling policy that achieves the optimal solution for ACCESS-POINT, and does so without any knowledge of the channel conditions.
In the special case discussed in Section V, the AP tries, though it may be impossible in general, to allocate non-idle time slots to clients in proportion to their payments. Based on this intuitive guideline, we design the following scheduling policy. Let $u_n(t)$ be the number of time slots that the AP has allocated to client $n$ up to time $t$. At the beginning of each period, the AP sorts all clients in increasing order of $u_n(t)/\rho_n$, so that $u_1(t)/\rho_1 \le u_2(t)/\rho_2 \le \cdots$ after renumbering clients if necessary. The AP then schedules transmissions according to this priority ordering, where clients with smaller $u_n(t)/\rho_n$ get higher priorities. Specifically, in each time slot during the period, the AP chooses the smallest $i$ for which the packet for client $i$ is not yet delivered, and then transmits the packet for client $i$ in that time slot. We call this the weighted transmission policy (WT). Notice that the policy only requires the AP to keep track of the bids of the clients and the number of time slots each client has been allocated in the past, followed by a sorting of $u_n(t)/\rho_n$ among all clients. Thus, the policy requires no information on the actual channel conditions, and is tractable. Simple as it is, we show that the policy actually achieves the optimal solution for ACCESS-POINT. In the following subsections, we first prove that the vector of delivery ratios resulting from the WT policy converges to a single point. We then prove that this limit is the optimal solution for ACCESS-POINT. Finally, we establish that the WT policy additionally achieves some forms of fairness.
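A minimal simulation of the WT policy, written by us with invented helper names: each period the clients are sorted once by $u_n(t)/\rho_n$, and each transmission succeeds independently with probability $p_n$. Running it for many periods gives an empirical estimate of the limiting delivery ratios studied in the next subsections.

```python
import random

def wt_one_period(u, rho, p, tau, rng):
    """Run one period of the weighted transmission (WT) policy.

    u   : list, u[n] = slots allocated to client n so far (updated in place)
    rho : bids; p : channel reliabilities; tau : slots per period
    Returns the set of clients whose packet was delivered this period."""
    order = sorted(range(len(p)), key=lambda n: u[n] / rho[n])  # priorities fixed per period
    delivered = set()
    for _ in range(tau):
        pending = [n for n in order if n not in delivered]
        if not pending:
            break                        # AP is forced to idle for the rest of the period
        n = pending[0]                   # highest-priority client with a pending packet
        u[n] += 1                        # one slot spent on client n
        if rng.random() < p[n]:
            delivered.add(n)
    return delivered

def wt_delivery_ratios(rho, p, tau, periods=50000, seed=1):
    rng = random.Random(seed)
    u = [0] * len(p)
    success = [0] * len(p)
    for _ in range(periods):
        for n in wt_one_period(u, rho, p, tau, rng):
            success[n] += 1
    return [s / periods for s in success]

if __name__ == "__main__":
    print(wt_delivery_ratios(rho=[1.0, 2.0, 1.0], p=[0.6, 0.8, 0.9], tau=6))
```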
A. Convergence of the Weighted Transmission Policy
We now prove that, by applying the WT policy, the delivery ratios of clients will converge to a vector q. To do so, we actually prove the convergence property and precise limit of a more general class of scheduling policies, which not only consists of the WT policy but also a scheduling policy proposed in [8]. The proof is similar to that used in [8] and is based on Blackwell's approachability theorem [3]. The proof in [8] only shows that the vector of delivery ratios approaches a desirable set in the N -space under a particular policy, while here we prove that the vector of delivery ratios converges to a single point under a more general class of scheduling policies. Thus, our result is both stronger and more general than the one in [8].
We start by introducing Blackwell's approachability theorem. Consider a single-player repeated game. In each round $i$ of the game, the player chooses some action, $a(i)$, and receives a reward $v(i)$, which is a random vector whose distribution is a function of $a(i)$. Blackwell studies the long-term average of the rewards received, $\lim_{j\to\infty} \frac{1}{j}\sum_{i=1}^{j} v(i)$, defining a set as approachable, under some policy, if the distance between $\frac{1}{j}\sum_{i=1}^{j} v(i)$ and the set converges to 0 with probability one, as $j \to \infty$.
Theorem 4 (Blackwell [3]): Let A ⊆ R N be any closed set. Suppose that for every x / ∈ A, a policy η chooses an action a (= a(x)), which results in an expected payoff vector E(v). If the hyperplane through y, the closest point in A to x, perpendicular to the line segment xy, separates x from E(v), then A is approachable with the policy η.
Now we formulate our more general class of scheduling policies. We call a policy a generalized transmission time policy if, for a choice of a positive parameter vector $a$ and non-negative parameter vector $b$, the AP sorts clients by $a_n u_n(t) - b_n t$ at the beginning of each period, and gives priorities to clients with lower values of this quantity. Note that the special case $a_n \equiv \frac{1}{\rho_n}$ and $b_n \equiv 0$ yields the WT policy, while the choice $a_n \equiv 1$ and $b_n \equiv \frac{q_n}{p_n}$ yields the largest time-based debt first policy of [8], and thus we describe a more general set of policies.
Theorem 5: For each generalized transmission time policy, there exists a vector $q$ such that the vector of work performed resulting from the policy converges to $w(q) := [w_n(q_n)]$.
Proof: Given the parameters $\{(a_n, b_n) : 1 \le n \le N\}$, we give an exact expression for the limiting $q$. We define a sequence of sets $\emptyset =: H_0 \subsetneq H_1 \subsetneq H_2 \subsetneq \cdots$ and values $\theta_1, \theta_2, \ldots$ iteratively: given $H_{k-1}$, the set $H_k$ is chosen, among all sets $H$ with $H_{k-1} \subsetneq H \subseteq \{1, \ldots, N\}$, to minimize the ratio below (with $H_k$ replaced by $H$), and $\theta_k$ is that minimum value:
$$\theta_k := \frac{\frac{1}{\tau}\big(I_{H_{k-1}} - I_{H_k}\big) - \sum_{n\in H_k\setminus H_{k-1}} \frac{b_n}{a_n}}{\sum_{n\in H_k\setminus H_{k-1}} \frac{1}{a_n}}, \quad \text{for all } k > 0.$$
In selecting $H_k$, we always choose a maximal minimizing subset, breaking ties arbitrarily. The pairs $(H_1, \theta_1), (H_2, \theta_2), \ldots$ can be iteratively defined until every client is in some $H_k$. Also, by the definition, we have $\theta_1 < \theta_2 < \cdots$. If client $n$ is in $H_k\setminus H_{k-1}$, we define $q_n := \tau p_n \frac{b_n + \theta_k}{a_n}$, and so $w_n(q_n) = \tau \frac{b_n + \theta_k}{a_n}$. The proof of convergence consists of two parts. First we prove that the vector of work performed (see Lemma 1 for the definition) approaches the set $\{w^* \mid w^*_n \ge w_n(q_n)\}$. Then we prove that $w(q)$ is the only feasible vector in the set $\{w^* \mid w^*_n \ge w_n(q_n)\}$. Since the feasible region for workloads, defined as the set of all feasible vectors of workloads, is approachable under any policy, the vector of work performed resulting from the generalized transmission time policy must converge to $w(q)$.
For the first part, we prove the following statement: for each $k \ge 1$, the set $W_k := \{w^* \mid w^*_n \ge \tau\frac{b_n+\theta_k}{a_n}, \ \forall n \notin H_{k-1}\}$ is approachable. Since $\cap_{k\ge 1} W_k = \{w^* \mid w^*_n \ge w_n(q_n)\}$, this also proves that $\{w^* \mid w^*_n \ge w_n(q_n)\}$ is approachable. Let $L$ denote the map that takes a vector $w$ of work performed to $L(w) := \big[\frac{a_n w_n/\tau - b_n}{\sqrt{a_n}}\big]$. Proving that $W_k$ is approachable is equivalent to proving that its image under $L$, $V_k := \{l \mid l_n \ge \frac{\theta_k}{\sqrt{a_n}}, \ \forall n \notin H_{k-1}\}$, is approachable. Now we apply Blackwell's theorem. Suppose at some time $t$ that is the beginning of a period, the number of time slots that the AP has worked on client $n$ is $u_n(t)$. The work performed for client $n$ is $\frac{u_n(t)}{t/\tau}$, and the image of the vector of work performed under $L$ is $x(t) := [x_n(t)]$ with $x_n(t) = \frac{a_n u_n(t)/t - b_n}{\sqrt{a_n}}$, which we shall suppose is not in $V_k$. The generalized transmission time policy sorts clients so that $a_1 u_1(t) - b_1 t \le a_2 u_2(t) - b_2 t \le \cdots$, or equivalently, $\sqrt{a_1}x_1(t) \le \sqrt{a_2}x_2(t) \le \cdots$. The closest point in $V_k$ to $x(t)$ is $y := [y_n]$, where $y_n = \frac{\theta_k}{\sqrt{a_n}}$ if $x_n(t) < \frac{\theta_k}{\sqrt{a_n}}$ and $n \notin H_{k-1}$, and $y_n = x_n(t)$ otherwise. The hyperplane that passes through $y$ and is orthogonal to the line segment $xy$ is:
$$\Big\{z \ \Big|\ f(z) := \sum_{n\notin H_{k-1}:\ x_n(t) < \theta_k/\sqrt{a_n}} \Big(z_n - \frac{\theta_k}{\sqrt{a_n}}\Big)\Big(x_n(t) - \frac{\theta_k}{\sqrt{a_n}}\Big) = 0\Big\}.$$
Let $\pi_n$ be the expected number of time slots that the AP spends on working for client $n$ in this period under the generalized transmission time policy. The image under $L$ of the expected reward in this period is $\pi_L := \big[\frac{a_n \pi_n/\tau - b_n}{\sqrt{a_n}}\big]$.
Blackwell's theorem shows that $V_k$ is approachable if $x(t)$ and $\pi_L$ are separated by the plane $\{z \mid f(z) = 0\}$. Since $f(x(t)) \ge 0$, it suffices to show $f(\pi_L) \le 0$.
We manipulate the original ordering, for this period, so that all clients in $H_{k-1}$ have higher priorities than those not in $H_{k-1}$, while preserving the relative ordering among clients not in $H_{k-1}$. Note that this manipulation will not give any client $n \notin H_{k-1}$ higher priority than it had in the original ordering. Therefore, $\pi_n$ will not increase for any $n \notin H_{k-1}$. Since the value of $f(\pi_L)$ only depends on $\pi_n$ for $n \notin H_{k-1}$, and increases as those $\pi_n$ decrease, this manipulation will not decrease the value of $f(\pi_L)$. Thus, it suffices to prove that $f(\pi_L) \le 0$ under this new ordering. Let $n_0 := |H_{k-1}| + 1$. Under this new ordering, we have:
$$\sqrt{a_{n_0}}x_{n_0}(t) \le \sqrt{a_{n_0+1}}x_{n_0+1}(t) \le \cdots \le \sqrt{a_{n_1}}x_{n_1}(t) < \theta_k \le \sqrt{a_{n_1+1}}x_{n_1+1}(t) \le \cdots.$$
Let $\delta_n = \sqrt{a_n}x_n(t) - \sqrt{a_{n+1}}x_{n+1}(t)$, for $n_0 \le n \le n_1 - 1$, and $\delta_{n_1} = \sqrt{a_{n_1}}x_{n_1}(t) - \theta_k$. Clearly, $\delta_n \le 0$, for all $n_0 \le n \le n_1$. Now we can derive:
$$f(\pi_L) = \sum_{n=n_0}^{n_1} \Big(\frac{a_n\pi_n/\tau - b_n}{\sqrt{a_n}} - \frac{\theta_k}{\sqrt{a_n}}\Big)\Big(x_n(t) - \frac{\theta_k}{\sqrt{a_n}}\Big) = \sum_{n=n_0}^{n_1} \Big(\frac{\pi_n}{\tau} - \frac{b_n}{a_n} - \frac{\theta_k}{a_n}\Big)\big(\sqrt{a_n}x_n(t) - \theta_k\big) = \sum_{i=n_0}^{n_1} \Big(\sum_{n=n_0}^{i}\frac{\pi_n}{\tau} - \sum_{n=n_0}^{i}\frac{b_n}{a_n} - \theta_k\sum_{n=n_0}^{i}\frac{1}{a_n}\Big)\delta_i.$$
Recall that $I_S$ is the expected number of idle time slots when the AP only caters to the subset $S$. Thus, under this ordering, $\sum_{n=n_0}^{i}\pi_n = I_{H_{k-1}} - I_{\{1,\ldots,i\}}$, and we have
$$\sum_{n=n_0}^{i}\frac{\pi_n}{\tau} - \sum_{n=n_0}^{i}\frac{b_n}{a_n} - \theta_k\sum_{n=n_0}^{i}\frac{1}{a_n} = \Big(\sum_{n=n_0}^{i}\frac{1}{a_n}\Big)\Bigg(\frac{\frac{1}{\tau}\big(I_{H_{k-1}} - I_{\{1,\ldots,i\}}\big) - \sum_{n\in\{1,\ldots,i\}\setminus H_{k-1}}\frac{b_n}{a_n}}{\sum_{n\in\{1,\ldots,i\}\setminus H_{k-1}}\frac{1}{a_n}} - \theta_k\Bigg) \ge 0,$$
where the last inequality follows because $\theta_k$ is the minimum of this ratio over all sets strictly containing $H_{k-1}$.
Therefore, $f(\pi_L) \le 0$, since $\delta_i \le 0$, and $V_k$ is indeed approachable, for all $k$.
We have established that the set $\{w^* \mid w^*_n \ge w_n(q_n)\}$ is approachable. Next we prove that $[w_n(q_n)]$ is the only feasible vector in the set. Consider any vector $w' \neq w(q)$ in the set. We have $w'_n \ge w_n(q_n)$ for all $n$, and $w'_{n_0} > w_{n_0}(q_{n_0})$ for some $n_0$. Suppose $n_0 \in H_k\setminus H_{k-1}$. We have:
$$\sum_{n\in H_k} w'_n > \sum_{n\in H_k} w_n(q_n) = \sum_{i=1}^{k}\sum_{n\in H_i\setminus H_{i-1}} \tau\frac{b_n+\theta_i}{a_n} = \sum_{i=1}^{k} \big(I_{H_{i-1}} - I_{H_i}\big) = \tau - I_{H_k},$$
where the last equality uses $I_{H_0} = I_{\emptyset} = \tau$. Thus $w'$ is not feasible. Therefore, $w(q)$ is the only feasible vector in $\{w^* \mid w^*_n \ge w_n(q_n)\}$, and the vector of work performed resulting from the generalized transmission time policy must converge to $w(q)$.
Corollary 1: For the policy of Theorem 5, the vector of delivery ratios converges to q.
Proof: Follows from Lemma 1.
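For small $N$, the construction of the sets $H_k$ and levels $\theta_k$ used in the proof above can be carried out by brute force. The sketch below is ours and follows the reconstruction of the definition given in the proof of Theorem 5: it takes the $I_S$ values as a precomputed dictionary (which must include $I_{\emptyset} = \tau$) and, at each step, keeps a largest set attaining the minimum ratio. With $a_n = 1/\rho_n$ and $b_n = 0$ it yields the decomposition used in the proofs of Theorems 6-8.

```python
import itertools

def decomposition(a, b, tau, idle):
    """Compute the sequence (H_1, theta_1), (H_2, theta_2), ... for a
    generalized transmission time policy with parameter vectors a (> 0) and b (>= 0).

    idle : dict mapping frozensets S (including frozenset(), with value tau) to I_S.
    Returns a list of (H_k as frozenset, theta_k)."""
    N = len(a)
    all_clients = frozenset(range(N))
    H_prev = frozenset()
    result = []
    while H_prev != all_clients:
        best_theta, best_H = None, None
        rest = sorted(all_clients - H_prev)
        for r in range(1, len(rest) + 1):
            for extra in itertools.combinations(rest, r):
                H = H_prev | frozenset(extra)
                num = (idle[H_prev] - idle[H]) / tau - sum(b[n] / a[n] for n in extra)
                den = sum(1.0 / a[n] for n in extra)
                theta = num / den
                # keep the smallest theta; among (numerical) ties, the largest set
                if (best_theta is None or theta < best_theta - 1e-12 or
                        (abs(theta - best_theta) <= 1e-12 and len(H) > len(best_H))):
                    best_theta, best_H = theta, H
        result.append((best_H, best_theta))
        H_prev = best_H
    return result
```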
B. Optimality of the Weighted Transmission Policy for ACCESS-POINT
Theorem 6: Given $[\rho_n]$, the vector $q$ of long-term average delivery ratios resulting from the WT policy is a solution to ACCESS-POINT.
Proof: We use the sequence of sets $\{H_k\}$ and values $\{\theta_k\}$, with $a_n := \frac{1}{\rho_n}$ and $b_n := 0$, as defined in the proof of Theorem 5. Let $K := |\{\theta_k\}|$. Thus, we have $H_K = TOT = \{1, 2, \ldots, N\}$. Also, let $m_k := |H_k|$. For convenience, we renumber clients so that $H_k = \{1, 2, \ldots, m_k\}$. The proof of Theorem 5 shows that $q_n = \tau p_n \theta_k \rho_n$, for $n \in H_k\setminus H_{k-1}$. Therefore, $w_n(q_n) = \frac{q_n}{p_n} = \tau\theta_k\rho_n$. Obviously, $q$ is feasible, since it is indeed achieved by the WT policy. Thus, to establish optimality, we only need to prove the existence of vectors $\zeta$ and $\mu$ that satisfy conditions (16)-(19). Set $\mu_n = 0$, for all $n$. Let $\zeta_{H_K} = \zeta_{TOT} := \frac{\rho_N}{w_N(q_N)} = \frac{1}{\tau\theta_K}$,
and $\zeta_{H_k} := \frac{\rho_{m_k}}{w_{m_k}(q_{m_k})} - \frac{\rho_{m_{k+1}}}{w_{m_{k+1}}(q_{m_{k+1}})} = \frac{1}{\tau\theta_k} - \frac{1}{\tau\theta_{k+1}}$, for $1 \le k \le K - 1$.
Finally, let $\zeta_S := 0$, for all $S \notin \{H_1, H_2, \ldots, H_K\}$. We claim that the vectors $\zeta$ and $\mu$, along with $q$, satisfy conditions (16)-(19).
We first evaluate condition (16). Suppose client $n$ is in $H_k\setminus H_{k-1}$. Then client $n$ is also in $H_{k+1}, H_{k+2}, \ldots, H_K$. So,
$$-\frac{\rho_n}{q_n} + \sum_{S\ni n}\frac{\zeta_S}{p_n} - \mu_n = -\frac{1}{\tau\theta_k p_n} + \sum_{i=k}^{K}\frac{\zeta_{H_i}}{p_n} = -\frac{1}{\tau\theta_k p_n} + \frac{1}{\tau\theta_k p_n} = 0.$$
Thus, condition (16) is satisfied. Since $\mu_n = 0$, for all $n$, condition (18) is satisfied. Further, since $\frac{1}{\theta_k} > \frac{1}{\theta_{k+1}}$, for all $1 \le k \le K-1$, condition (19) is also satisfied. It remains to establish condition (17). For any $i \in H_k$ and $j \notin H_k$, we have $\frac{w_i(q_i)}{\rho_i} \le \tau\theta_k < \frac{w_j(q_j)}{\rho_j}$. Since $w_n(q_n)$ is the average number of time slots that the AP spends on working for client $n$,
we have $\frac{u_i(t)}{\rho_i} < \frac{u_j(t)}{\rho_j}$, for all $i \in H_k$ and $j \notin H_k$, after a finite number of periods. Therefore, except for a finite number of periods, clients in $H_k$ will have priority over those not in $H_k$. In other words, if we only consider the behavior of the clients in $H_k$, it is the same as if the AP only works on the subset $H_k$ of clients. Further, recall that $I_{H_k}$ is the expected number of time slots that the AP is forced to stay idle when the AP only works on the subset $H_k$ of clients. Thus, we have $\sum_{i\in H_k} w_i(q_i) = \tau - I_{H_k}$ and $\sum_{i\in H_k}\frac{q_i}{p_i} - (\tau - I_{H_k}) = 0$, for all $k$.
C. Fairness of Allocated Delivery Ratios
We now show that the WT policy not only solves the ACCESS-P OIN T problem but also achieves some forms of fairness among clients. Two common fairness criteria are max-min fair and proportionally fair. We extend the definitions of these two criteria as follows:
Definition 3: A scheduling policy is called weighted max-min fair with positive weight vector $a = [a_n]$ if it achieves $q$, and, for any other feasible vector $q'$, we have: $q'_i > q_i \Rightarrow q'_j < q_j$, for some $j$ such that $\frac{w_i(q_i)}{a_i} \ge \frac{w_j(q_j)}{a_j}$.
Definition 4: A scheduling policy is called weighted proportionally fair with positive weight vector $a$ if it achieves $q$ and, for any other feasible vector $q'$, we have:
$$\sum_{n=1}^{N} \frac{w_n(q'_n) - w_n(q_n)}{w_n(q_n)/a_n} \le 0.$$
Next, we prove that the WT policy is both weighted max-min fair and proportionally fair with weight vector ρ.
Theorem 7: The weighted transmission policy is weighted max-min fair with weight $\rho$.
Proof: We sort clients and define $\{H_k\}$ as in the proof of Theorem 6. Let $q$ be the vector achieved by the WT policy and $q'$ be any feasible vector. Suppose $q'_i > q_i$ for some $i$. Assume client $i$ is in $H_k\setminus H_{k-1}$. The proof of Theorem 6 shows that $\sum_{n\in H_k} w_n(q_n) = \tau - I_{H_k}$. On the other hand, the feasibility condition requires $\sum_{n\in H_k} w_n(q'_n) \le \tau - I_{H_k} = \sum_{n\in H_k} w_n(q_n)$. Further, since $q'_i > q_i$ implies $w_i(q'_i) > w_i(q_i)$, there must exist some $j \in H_k$ so that $w_j(q'_j) < w_j(q_j)$, that is, $q'_j < q_j$. Finally, since $i \in H_k\setminus H_{k-1}$, we have $\frac{w_i(q_i)}{\rho_i} \ge \frac{w_n(q_n)}{\rho_n}$, for all $n \in H_k$, and hence $\frac{w_i(q_i)}{\rho_i} \ge \frac{w_j(q_j)}{\rho_j}$.
Theorem 8: The weighted transmission policy is weighted proportionally fair with weight $\rho$.
Proof: We sort clients and define $\{H_k\}$ as in the proof of Theorem 6. Let $q$ be the vector achieved by the WT policy, and let $q'$ be any feasible vector. We have $\frac{w_i(q_i)}{\rho_i} = \tau\theta_k$ if $i \in H_k\setminus H_{k-1}$. Define $\Delta_k := \sum_{n\in H_k\setminus H_{k-1}} \big[w_n(q'_n) - w_n(q_n)\big]$.
To prove the theorem, we prove a stronger statement by induction:
$$\sum_{n\in H_k} \frac{w_n(q'_n) - w_n(q_n)}{w_n(q_n)/\rho_n} = \sum_{i=1}^{k} \frac{\Delta_i}{\tau\theta_i} \le 0, \quad \text{for all } k > 0.$$
First consider the case $k = 1$. The proof of Theorem 6 shows that $\sum_{n\in H_1} w_n(q_n) = \tau - I_{H_1}$. Further, the feasibility condition requires $\sum_{n\in H_1} w_n(q'_n) \le \tau - I_{H_1} = \sum_{n\in H_1} w_n(q_n)$, and so
$$\Delta_1 = \sum_{n\in H_1} \big[w_n(q'_n) - w_n(q_n)\big] \le 0.$$
Thus, we have $\frac{\Delta_1}{\tau\theta_1} \le 0$. Suppose we have $\sum_{i=1}^{k}\frac{\Delta_i}{\tau\theta_i} \le 0$, for all $k \le k_0$. Again, the proof of Theorem 6 gives us $\sum_{n\in H_{k_0+1}} w_n(q_n) = \tau - I_{H_{k_0+1}}$, and the feasibility condition requires $\sum_{n\in H_{k_0+1}} w_n(q'_n) \le \tau - I_{H_{k_0+1}} = \sum_{n\in H_{k_0+1}} w_n(q_n)$. Thus, $\sum_{i=1}^{k_0+1}\Delta_i \le 0$.
We can further derive:
$$\sum_{i=1}^{k_0+1}\frac{\Delta_i}{\tau\theta_i} \le \sum_{i=1}^{k_0}\frac{\Delta_i}{\tau\theta_i}\Big(1 - \frac{\theta_i}{\theta_{k_0+1}}\Big) \quad \Big(\text{since } \sum_{i=1}^{k_0+1}\frac{\Delta_i}{\tau\theta_{k_0+1}} \le 0\Big)$$
$$= \sum_{j=1}^{k_0}\Big[\Big(\frac{\theta_{j+1} - \theta_j}{\theta_{k_0+1}}\Big)\sum_{i=1}^{j}\frac{\Delta_i}{\tau\theta_i}\Big] \le 0 \quad \Big(\text{since } \sum_{i=1}^{j}\frac{\Delta_i}{\tau\theta_i} \le 0, \text{ and } \theta_{j+1} > \theta_j, \ \forall j \le k_0\Big).$$
By induction, $\sum_{i=1}^{k}\frac{\Delta_i}{\tau\theta_i} \le 0$, for all $k$. Finally, we have:
$$\sum_{n=1}^{N}\frac{w_n(q'_n) - w_n(q_n)}{w_n(q_n)/\rho_n} = \sum_{i=1}^{K}\frac{\Delta_i}{\tau\theta_i} \le 0,$$
and the WT policy is proportionally fair with weight ρ.
VII. SIMULATION RESULTS
We have implemented the WT policy and the bidding game, as described in Section V, on ns-2. The simulation settings are summarized in Table I. All results in this section are averages of 20 simulation runs.
A. Convergence Time for the Weighted Transmission Policy
In Section VI-A we proved that the vector of delivery ratios converges under the WT policy. However, the speed of convergence was not discussed. In the bidding game, we assume that the delivery ratio observed by each client is the post-convergence value. Thus, it is important to verify whether the WT policy converges quickly. In this simulation, we assume that there are 30 clients in the system. The $n$th client has channel reliability $(50 + n)\%$ and offers a bid $\rho_n = (n \bmod 2) + 1$. We run each simulation for 10 seconds of simulation time and then compute the absolute difference between the value of $\sum_n \rho_n \log q_n$ evaluated with the delivery ratios at the end of each period and its value after 10 seconds. In particular, we artificially set $q_n = 0.001$ if the delivery ratio for client $n$ is zero, to avoid computation error for $\log q_n$.
Simulation results are shown in Fig. 1. It can be seen that the delivery ratios converge rather quickly. At time 0.2 seconds, the difference is smaller than 1.4, which is less than 10% of the final value. Based on this observation, we assume that each client updates its bid every 0.2 seconds in the following simulations.
B. Utility Maximization
In this section, we study the total utility that is achieved by iterating between the bidding game and the WT policy, which we call WT-Bid. We assume that the utility function of each client $n$ is given by $\gamma_n\frac{q_n^{\alpha_n} - 1}{\alpha_n}$, where $\gamma_n$ is a positive integer and $0 < \alpha_n < 1$. This utility function is strictly increasing, strictly concave, and differentiable for any such $\gamma_n$ and $\alpha_n$. In addition to evaluating the WT-Bid policy, we also compare the results of three other policies: a policy that employs the WT policy but without updating the bids from clients, which we call WT-NoBid; a policy that decides priorities randomly among clients at the beginning of each period, which we call Rand; and a policy that gives clients with larger $\gamma_n$ higher priorities, with ties broken randomly, which we call P-Rand.
[Fig. 2: Performance of total utility. (a) Average of total utility. (b) Variance of total utility.]
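For this family of utilities the client step of the bidding game has a closed form, which is convenient when reproducing the WT-Bid iterations; the algebra below is ours and is not spelled out in the paper. With $U_n(q) = \gamma_n\frac{q^{\alpha_n}-1}{\alpha_n}$, the first-order condition of $\max_{0\le\rho\le\psi_n} U_n(\rho/\psi_n) - \rho$ reads
$$\frac{\gamma_n}{\psi_n}\Big(\frac{\rho}{\psi_n}\Big)^{\alpha_n-1} = 1 \quad\Longrightarrow\quad \rho_n^* = \psi_n\Big(\frac{\gamma_n}{\psi_n}\Big)^{1/(1-\alpha_n)},$$
capped at $\psi_n$ so that the implied delivery ratio $\rho_n^*/\psi_n$ does not exceed one.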
In each simulation, we assume there are 30 clients. The n th client has channel reliability p n = (50 + n)%, γ n = (n mod 3) + 1, and α n = 0.3 + 0.1(n mod 5). In addition to plotting the average of total utility over all simulation runs, we also plot the variance of total utility. Fig. 2 shows the simulation results. The WT-Bid policy not only achieves the highest average total utility but also the smallest variance. This result suggests that the WT-Bid policy converges very fast. On the other hand, the WT-NoBid policy fails to provide satisfactory performance since it does not consider the different utility functions that clients may have. The P-Rand policy offers much better performance than both the WT-NoBid policy and the Rand policy since it correctly gives higher priority to clients with higher γ n . Still, it cannot differentiate between clients with the same γ n and thus can only provide suboptimal performance.
VIII. CONCLUDING REMARKS
We have studied the problem of utility maximization for clients that demand delay-based QoS support from an access point. Based on an analytical model for QoS support proposed in previous work, we formulated the utility maximization problem as a convex optimization problem. We decomposed the problem into two simpler subproblems, namely, CLIENT_n and ACCESS-POINT. We have proved that the total utility of the system can be maximized by jointly solving the two subproblems. We also described a bidding game to reconcile the two subproblems. In the game, each client announces its bid to maximize its own net profit, and the AP allocates time slots to achieve the optimal point of ACCESS-POINT. We have proved that the equilibrium point of the bidding game jointly solves the two subproblems, and therefore achieves the maximum total utility.
In addition, we have proposed a very simple, priority-based weighted transmission policy for solving the ACCESS-POINT subproblem. This policy does not require that the AP know the channel reliabilities of the clients, or their individual utilities. We have proved that the long-term performance of a general class of priority-based policies that includes our proposed policy converges to a single point. We then proved that the limiting point of the proposed scheduling policy is the optimal solution to ACCESS-POINT. Moreover, we have also proved that the resulting allocation by the AP satisfies some forms of fairness criteria. Finally, we have implemented both the bidding game and the scheduling policy in ns-2. Simulation results suggest that the scheduling policy converges quickly. Further, by iterating between the bidding game and the WT policy, the resulting total utility is higher than that of the other tested policies.
| 9,211 |
0908.0362
|
2951024449
|
This paper studies the problem of utility maximization for clients with delay based QoS requirements in wireless networks. We adopt a model used in a previous work that characterizes the QoS requirements of clients by their delay constraints, channel reliabilities, and delivery ratio requirements. In this work, we assume that the utility of a client is a function of the delivery ratio it obtains. We treat the delivery ratio for a client as a tunable parameter by the access point (AP), instead of a given value as in the previous work. We then study how the AP should assign delivery ratios to clients so that the total utility of all clients is maximized. We apply the techniques introduced in two previous papers to decompose the utility maximization problem into two simpler problems, a CLIENT problem and an ACCESS-POINT problem. We show that this decomposition actually describes a bidding game, where clients bid for the service time from the AP. We prove that although all clients behave selfishly in this game, the resulting equilibrium point of the game maximizes the total utility. In addition, we also establish an efficient scheduling policy for the AP to reach the optimal point of the ACCESS-POINT problem. We prove that the policy not only approaches the optimal point but also achieves some forms of fairness among clients. Finally, simulation results show that our proposed policy does achieve higher utility than all other compared policies.
|
There is also research on utility maximization for both wireline and wireless networks. Kelly @cite_2 and Kelly, Maulloo, and Tan @cite_4 have considered rate control algorithms for achieving maximum utility in a wireline network. Lin and Shroff @cite_7 have studied the same problem with multi-path routing. As for wireless networks, Xiao, Shroff, and Chong @cite_3 have proposed a power-control framework to maximize utility, which is defined as a function of the signal-to-interference ratio and cannot reflect channel unreliability. Cao and Li @cite_10 have proposed a bandwidth allocation policy that also considers channel degradation. Bianchi, Campbell, and Liao @cite_9 have studied utility-fair services in wireless networks. However, all the aforementioned works assume that the utility is determined only by the allocated bandwidth. Thus, they do not consider applications that require delay bounds.
|
{
"abstract": [
"",
"In this paper, we study utility maximization problems for communication networks where each user (or class) can have multiple alternative paths through the network. This type of multi-path utility maximization problems appear naturally in several resource allocation problems in communication networks, such as the multi-path flow control problem, the optimal quality-of-service (QoS) routing problem, and the optimal network pricing problem. We develop a distributed solution to this problem that is amenable to online implementation. We analyze the convergence of our algorithm in both continuous-time and discrete-time, and with and without measurement noise. These analyses provide us with guidelines on how to choose the parameters of the algorithm to ensure efficient network control.",
"Adaptive quality-of-service (QoS) techniques can effectively respond to time-varying channel conditions found in wireless networks. In this paper, we assess the state-of-the-art in QoS adaptive wireless systems and argue for new adaptation techniques that are better suited to respond to application-specific adaptation needs. A QoS adaptive data link control model is presented that accounts for application-specific adaptation dynamics that include adaptation time scales and adaptation policies. A centralized adaptation controller employs a novel utility-fair bandwidth allocation scheme that supports the dynamic bandwidth needs of adaptive flows over a range of operating conditions. Three wireless service classes play an integral role in accommodating a wide variety of adaptation strategies. In this paper, we discuss the design of the utility-fair allocation scheme and the interaction between the centralized adaptation controller and a set of distributed adaptation handlers, which play a key role in intelligently responding to the time-varying channel capacity experienced over the air-interface.",
"",
"This paper addresses the issues of charging, rate control and routing for a communication network carrying elastic traffic, such as an ATM network offering an available bit rate service. A model is described from which max-min fairness of rates emerges as a limiting special case; more generally, the charges users are prepared to pay influence their allocated rates. In the preferred version of the model, a user chooses the charge per unit time that the user will pay; thereafter the user's rate is determined by the network according to a proportional fairness criterion applied to the rate per unit charge. A system optimum is achieved when users' choices of charges and the network's choice of allocated rates are in equilibrium.",
"In this paper we propose a general utility-oriented adaptive quality of service (QoS) model for wireless networks and establish a framework for formulating the bandwidth allocation problem for users with time-varying links. For slow link variations, it is inadequate to only employ low-level adaptive mechanisms at the symbol or packet level, such as error correction coding or swapping packet transmission opportunities. To improve bandwidth utilization and satisfy users' QoS requirements, high-level adaptive mechanisms working at larger time scale are needed. We propose an adaptive bandwidth allocation scheme, which is capable of providing QoS guarantees, ensuring long-term fairness, and achieving high bandwidth utilization. A finite-state Markov channel model (FSMC) is used to model wireless links."
],
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_2",
"@cite_10"
],
"mid": [
"",
"2161615677",
"2138566564",
"",
"1987497363",
"1605913486"
]
}
|
Utility Maximization for Delay Constrained QoS in Wireless
|
I. INTRODUCTION
We study how to provide QoS so as to maximize utility for wireless clients. We jointly consider the delay constraint and channel unreliability of each client. The access point (AP) assigns delivery ratios to clients under the delay and reliability constraints. This distinguishes our work from most other work on providing QoS, where the delivery ratios of clients are taken as given inputs rather than tunable parameters.
We consider the scenario where there is one AP that serves a set of wireless clients. We extend the model proposed in a previous work [8]. This model analytically describes three important factors for QoS: delay, channel unreliability, and delivery ratio. The previous work also provides a necessary and sufficient condition for the demands of the set of clients to be feasible. In this work, we treat the delivery ratios for clients as variables to be determined by the AP. We assume that each client receives a certain amount of utility when it is provided a delivery ratio. The relation between utility and delivery ratio is described by a utility function, which may differ from client to client. Based on this model, we study the problem of maximizing the total utility of all clients, under feasibility constraints. We show that this problem can be formulated as a convex optimization problem.
(This material is based upon work partially supported by USARO under Contract Nos. W911NF-08-1-0238 and W-911-NF-0710287, AFOSR under Contract FA9550-09-0121, and NSF under Contract Nos. CNS-07-21992, ECCS-0701604, CNS-0626584, and CNS-05-19535. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the above agencies.)
Instead of solving the problem directly, we apply the techniques introduced by Kelly [10] and Kelly, Maulloo, and Tan [11] to decompose the problem of system utility maximization into two simpler subproblems that describe the behaviors of the clients and the AP, respectively. We prove that the utility maximization problem can be solved by jointly solving the two simpler subproblems. Further, we describe a bidding game for the reconciliation between the two subproblems. In this game, clients bid for service time from the AP, and the AP assigns delivery ratios to clients according to their bids, to optimize its own subproblem, under feasibility constraints. Based on the AP's behavior, each client aims to maximize its own net utility, that is, the difference between the utility it obtains and the bid it pays. We show that, while all clients behave selfishly in the game, the equilibrium point of the game solves the two subproblems jointly, and hence maximizes the total utility of the system.
We then address how to design a scheduling policy for the AP to solve its subproblem. We propose a very simple priority based scheduling algorithm for the AP. This policy requires no information of the underlying channel qualities of the clients and thus needs no overhead to probe or estimate the channels. We prove that the long-term average performance of this policy converges to a single point, which is in fact the solution to the subproblem for the AP. Further, we also establish that the policy achieves some forms of fairness.
Our contribution is therefore threefold. First, we formulate the problem of system utility maximization as a convex optimization problem. We then show that this problem is amenable to solution by a bidding game. Finally, we propose a very simple priority based AP scheduling policy to solve the AP's subproblem, that can be used in the bidding iteration to reach the optimal point of the system's utility maximization problem.
Finally, we conduct simulation studies to verify all the theoretical results. Simulations show that the performance of the proposed scheduling policy converges quickly to the optimal value of the subproblem for AP. Also, by jointly applying the scheduling policy and the bidding game, we can achieve higher total utility than all other compared policies.
The rest of the paper is organized as follows: Section II reviews some existing related work. Section III introduces the model for QoS proposed in [8] and also summarizes some related results. In Section IV, we formulate the problem of utility maximization as a convex programming problem. We also show that this problem can be decomposed into two subproblems. Section V describes a bidding game that jointly solves the two subproblems. One phase of the bidding game consists of each client selfishly maximizing its own net profit, and the other phase consists of the AP scheduling client transmissions to optimize its subproblem. Section VI addresses the scheduling policy to optimize this latter subproblem. Section VII demonstrates some simulation studies. Finally, Section VIII concludes this paper.
III. SYSTEM MODEL AND FEASIBILITY CONDITION
We adopt the model proposed in a previous work [8], which captures two key QoS requirements, delay constraints and delivery ratio requirements, and incorporates the channel conditions of the users. In this section, we describe the proposed model and summarize relevant results of [8].
We consider a system with N clients, numbered as {1, 2, . . . , N }, and one access point (AP). Packets for clients arrive at the AP, and the AP needs to dispatch packets to clients to meet their respective requirements. We assume that time is slotted, with slots numbered as {0, 1, 2, . . . }. The AP can make exactly one transmission in each time slot. Thus, the length of a time slot includes the times needed for transmitting a DATA packet, an ACK, and possibly other MAC headers. Assume there is one packet arriving at the AP periodically for each client, with a fixed period of τ time slots, at time slots 0, τ, 2τ, . . . . Each packet that arrives at the beginning of a period [kτ, (k+1)τ ) must be delivered within the ensuing period, or else it expires and is dropped from the system at the end of this period. Thus, a delay constraint of τ time slots is enforced on all successfully delivered packets. Further, unreliable and heterogeneous wireless channels to these clients are considered. When the AP makes a transmission for client n, the transmission succeeds (by which is meant the successful delivery of both the DATA packet and the ACK) with probability $p_n$. Due to the unreliable channels and the delay constraint, it may not be possible to deliver the arrived packets of all the clients. Therefore, each client stipulates a certain delivery ratio $q_n$ that it has to receive, which is defined as the average proportion of periods in which its packet is successfully delivered. The previous work also shows how this model can be used to capture scenarios where both uplink traffic and downlink traffic exist.
Below we describe the formal definitions of the concepts of fulfilling a set of clients and the feasibility of a set of client requirements.
Definition 1: A set of clients with the above QoS constraints is said to be fulfilled by a particular scheduling policy η of the AP if the time averaged delivery ratio of each client is at least q n with probability 1.
Definition 2: A set of clients is feasible if there exists some scheduling policy of the AP that fulfills it.
Whether a certain client is fulfilled can be decided by the average number of time slots that the AP spends on working for the client per period:
Lemma 1: The delivery ratio of client $n$ converges to $q_n$ with probability one if and only if the work performed on client $n$, defined as the long-term average number of time slots that the AP spends on working for client $n$ per period, converges to $w_n(q_n) = q_n/p_n$ with probability one. We therefore call $w_n(q_n)$ the workload of client $n$.
Since expired packets are dropped from the system at the end of each period, there is exactly one packet for each client at the beginning of each period. Therefore, there may be occasions where the AP has delivered all packets before the end of a period and is therefore forced to stay idle for the remaining time slots in the period. Let $I_S$ be the expected number of such forced idle time slots in a period when the client set is just $S \subseteq \{1, 2, \ldots, N\}$ (i.e., all clients except those in $S$ are removed from consideration), and the AP only caters to the subset $S$ of clients. Since each client $n \in S$ requires $w_n$ time slots per period on average, we can obtain a necessary condition for feasibility: $\sum_{i\in S} w_i(q_i) + I_S \le \tau$, for all $S \subseteq \{1, 2, \ldots, N\}$. It is shown in [8] that this necessary condition is also sufficient:
Theorem 1: A set of clients, with delivery ratio requirements $[q_n]$, is feasible if and only if $\sum_{i\in S} q_i/p_i \le \tau - I_S$, for all $S \subseteq \{1, 2, \ldots, N\}$.
IV. UTILITY MAXIMIZATION AND DECOMPOSITION
In the previous section, it is assumed that the delivery ratio requirements, [q n ], are given and fixed. In this paper, we address the problem of how to choose q := [q n ] so that the total utility of all the clients in the system can be maximized.
We begin by supposing that each client has a certain utility function, $U_n(q_n)$, which is a strictly increasing, strictly concave, and continuously differentiable function of $q_n$ over the range $0 < q_n \le 1$, with the value at 0 defined as the right limit, possibly $-\infty$. The problem of choosing $q_n$ to maximize the total utility, under the feasibility constraint of Theorem 1, can be described by the following convex optimization problem:
SYSTEM:
$$\text{Max} \quad \sum_{i=1}^{N} U_i(q_i) \qquad (1)$$
$$\text{s.t.} \quad \sum_{i\in S} \frac{q_i}{p_i} \le \tau - I_S, \quad \forall S \subseteq \{1, 2, \ldots, N\}, \qquad (2)$$
$$\text{over} \quad q_n \ge 0, \quad \forall 1 \le n \le N. \qquad (3)$$
It may be difficult to solve SYSTEM directly. So, we decompose it into two simpler problems, namely, CLIENT and ACCESS-POINT, as described below. This decomposition was first introduced by Kelly [10], though in the context of rate control for non-real-time traffic.
Suppose client n is willing to pay an amount of ρ n per period, and receives a long-term average delivery ratio q n proportional to ρ n , with ρ n = ψ n q n . If ψ n > 0, the utility maximization problem for client n is:
$$\text{CLIENT}_n: \quad \text{Max} \quad U_n\!\left(\frac{\rho_n}{\psi_n}\right) - \rho_n \qquad (4)$$
$$\text{over} \quad 0 \le \rho_n \le \psi_n. \qquad (5)$$
On the other hand, given that client $n$ is willing to pay $\rho_n$ per period, we suppose that the AP wishes to find the vector $q$ that maximizes $\sum_{i=1}^{N} \rho_i \log q_i$, under the feasibility constraints. In other words, the AP has to solve the following optimization problem:
ACCESS-POINT:
$$\text{Max} \quad \sum_{i=1}^{N} \rho_i \log q_i \qquad (6)$$
$$\text{s.t.} \quad \sum_{i\in S} \frac{q_i}{p_i} \le \tau - I_S, \quad \forall S \subseteq \{1, 2, \ldots, N\}, \qquad (7)$$
$$\text{over} \quad q_n \ge 0, \quad \forall 1 \le n \le N. \qquad (8)$$
We begin by showing that solving SYSTEM is equivalent to jointly solving CLIENT_n and ACCESS-POINT.
Theorem 2: There exist non-negative vectors $q$, $\rho := [\rho_n]$, and $\psi := [\psi_n]$, with $\rho_n = \psi_n q_n$, such that:
(i) For $n$ such that $\psi_n > 0$, $\rho_n$ is a solution to CLIENT_n; (ii) Given that each client $n$ pays $\rho_n$ per period, $q$ is a solution to ACCESS-POINT. Further, if $q$, $\rho$, and $\psi$ are all positive vectors, the vector $q$ is also a solution to SYSTEM.
Proof: We will first show the existence of $q$, $\rho$, and $\psi$ that satisfy (i) and (ii). We will then show that the resulting $q$ is also the solution to SYSTEM.
There exists some $\epsilon > 0$ so that by letting $q_n \equiv \epsilon$, the vector $q$ is an interior point of the feasible region for both SYSTEM (2)-(3) and ACCESS-POINT (7)-(8). Also, by setting $\rho_n \equiv \epsilon$, $\rho_n$ is an interior point of the feasible region for CLIENT_n (5). Therefore, by Slater's condition, a feasible point for SYSTEM, CLIENT_n, or ACCESS-POINT is the optimal solution for the respective problem if and only if it satisfies the corresponding Karush-Kuhn-Tucker (KKT) condition for the problem. Further, since the feasible region for each of the problems is compact and the utilities either are continuous on it or converge to $-\infty$ at $q_n = 0$, there exists an optimal solution to each of them.
The Lagrangian of SYSTEM is:
$$L_{SYS}(q, \lambda, \nu) := -\sum_{i=1}^{N} U_i(q_i) + \sum_{S\subseteq\{1,2,\ldots,N\}} \lambda_S\Big[\sum_{i\in S} \frac{q_i}{p_i} - (\tau - I_S)\Big] - \sum_{i=1}^{N} \nu_i q_i,$$
where $\lambda := [\lambda_S : S \subseteq \{1, 2, \ldots, N\}]$ and $\nu := [\nu_n : 1 \le n \le N]$ are the Lagrange multipliers. By the KKT condition, a vector $q^* := [q^*_1, q^*_2, \ldots, q^*_N]$ is the optimal solution to SYSTEM if $q^*$ is feasible and there exist vectors $\lambda^*$ and $\nu^*$ such that:
$$\frac{\partial L_{SYS}}{\partial q_n}\Big|_{q^*,\lambda^*,\nu^*} = -U'_n(q^*_n) + \sum_{S\ni n} \frac{\lambda^*_S}{p_n} - \nu^*_n = 0, \quad \forall 1 \le n \le N, \qquad (9)$$
$$\lambda^*_S\Big[\sum_{i\in S} \frac{q^*_i}{p_i} - (\tau - I_S)\Big] = 0, \quad \forall S \subseteq \{1, 2, \ldots, N\}, \qquad (10)$$
$$\nu^*_n q^*_n = 0, \quad \forall 1 \le n \le N, \qquad (11)$$
$$\lambda^*_S \ge 0, \ \forall S \subseteq \{1, \ldots, N\}, \quad \text{and} \quad \nu^*_n \ge 0, \ \forall 1 \le n \le N. \qquad (12)$$
The Lagrangian of CLIENT_n is:
$$L_{CLI}(\rho_n, \xi_n) := -U_n\!\left(\frac{\rho_n}{\psi_n}\right) + \rho_n - \xi_n \rho_n,$$
where $\xi_n$ is the Lagrange multiplier for CLIENT_n. By the KKT condition, $\rho^*_n$ is the optimal solution to CLIENT_n if $\rho^*_n \ge 0$ and there exists $\xi^*_n$ such that:
$$\frac{d L_{CLI}}{d \rho_n}\Big|_{\rho^*_n,\xi^*_n} = -\frac{1}{\psi_n} U'_n\!\left(\frac{\rho^*_n}{\psi_n}\right) + 1 - \xi^*_n = 0, \qquad (13)$$
$$\xi^*_n \rho^*_n = 0, \qquad (14)$$
$$\xi^*_n \ge 0. \qquad (15)$$
Finally, the Lagrangian of ACCESS-POINT is:
$$L_{NET}(q, \zeta, \mu) := -\sum_{i=1}^{N} \rho_i \log q_i + \sum_{S\subseteq\{1,2,\ldots,N\}} \zeta_S\Big[\sum_{i\in S} \frac{q_i}{p_i} - (\tau - I_S)\Big] - \sum_{i=1}^{N} \mu_i q_i,$$
where $\zeta := [\zeta_S : S \subseteq \{1, 2, \ldots, N\}]$ and $\mu := [\mu_n : 1 \le n \le N]$ are the Lagrange multipliers. Again, by the KKT condition, a vector $q^* := [q^*_n]$ is the optimal solution to ACCESS-POINT if $q^*$ is feasible and there exist vectors $\zeta^*$ and $\mu^*$ such that:
$$\frac{\partial L_{NET}}{\partial q_n}\Big|_{q^*,\zeta^*,\mu^*} = -\frac{\rho_n}{q^*_n} + \sum_{S\ni n} \frac{\zeta^*_S}{p_n} - \mu^*_n = 0, \quad \forall 1 \le n \le N, \qquad (16)$$
$$\zeta^*_S\Big[\sum_{i\in S} \frac{q^*_i}{p_i} - (\tau - I_S)\Big] = 0, \quad \forall S \subseteq \{1, 2, \ldots, N\}, \qquad (17)$$
$$\mu^*_n q^*_n = 0, \quad \forall 1 \le n \le N, \qquad (18)$$
$$\zeta^*_S \ge 0, \ \forall S \subseteq \{1, \ldots, N\}, \quad \text{and} \quad \mu^*_n \ge 0, \ \forall 1 \le n \le N. \qquad (19)$$
Let $q^*$ be a solution to SYSTEM, and let $\lambda^*$, $\nu^*$ be the corresponding Lagrange multipliers that satisfy conditions (9)-(12). Let $q_n = q^*_n$, $\rho_n = \big(\sum_{S\ni n} \frac{\lambda^*_S}{p_n}\big) q^*_n$, and $\psi_n = \sum_{S\ni n} \frac{\lambda^*_S}{p_n}$, for all $n$. Clearly, $q$, $\rho$, and $\psi$ are all non-negative vectors. We will show that $(q, \rho, \psi)$ satisfy (i) and (ii).
We first show (i) for all $n$ such that $\psi_n = \sum_{S\ni n} \frac{\lambda^*_S}{p_n} > 0$. It is obvious that $\rho_n = \psi_n q_n$. Also, $\rho_n \ge 0$, since $\lambda^*_S \ge 0$ (by (12)) and $q^*_n \ge 0$ (since $q^*$ is feasible). Further, let the Lagrange multiplier of CLIENT_n, $\xi_n$, be equal to $\nu^*_n \big/ \sum_{S\ni n} \frac{\lambda^*_S}{p_n} = \nu^*_n/\psi_n$. Then we have:
$$\frac{\partial L_{CLI}}{\partial \rho_n}\Big|_{\rho_n,\xi_n} = -\frac{1}{\psi_n} U'_n\!\left(\frac{\rho_n}{\psi_n}\right) + 1 - \xi_n = \frac{1}{\psi_n}\Big(-U'_n\!\left(\frac{\rho_n}{\psi_n}\right) + \psi_n - \psi_n \xi_n\Big) = \frac{1}{\psi_n}\Big(-U'_n(q^*_n) + \sum_{S\ni n} \frac{\lambda^*_S}{p_n} - \nu^*_n\Big) = 0, \ \text{by (9)},$$
$$\xi_n \rho_n = \frac{\nu^*_n}{\psi_n} \psi_n q^*_n = \nu^*_n q^*_n = 0, \ \text{by (11)},$$
$$\xi_n = \nu^*_n \Big/ \sum_{S\ni n} \frac{\lambda^*_S}{p_n} \ge 0, \ \text{by (12)}.$$
In sum, $(\rho, \psi, \xi)$ satisfies the KKT conditions for CLIENT_n, and therefore $\rho_n$ is a solution to CLIENT_n, with $\rho_n = \psi_n q_n$. Next we establish (ii). Since $q = q^*$ is the solution to SYSTEM, it is feasible. Let the Lagrange multipliers of ACCESS-POINT be $\zeta_S = \lambda^*_S$, $\forall S$, and $\mu_n = 0$, $\forall n$, respectively. Given that each client $n$ pays $\rho_n$ per period, we have:
$$\frac{\partial L_{NET}}{\partial q_n}\Big|_{q,\zeta,\mu} = -\frac{\rho_n}{q_n} + \sum_{S\ni n} \frac{\zeta_S}{p_n} - \mu_n = -\psi_n + \psi_n - 0 = 0, \quad \forall n,$$
$$\zeta_S\Big[\sum_{i\in S} \frac{q_i}{p_i} - (\tau - I_S)\Big] = \lambda^*_S\Big[\sum_{i\in S} \frac{q^*_i}{p_i} - (\tau - I_S)\Big] = 0, \quad \forall S, \ \text{by (10)},$$
$$\mu_n q_n = 0 \times q_n = 0, \quad \forall n,$$
$$\zeta_S = \lambda^*_S \ge 0, \ \forall S \ \text{(by (12))}, \quad \text{and} \quad \mu_n \ge 0, \ \forall n.$$
Therefore, $(q, \zeta, \mu)$ satisfies the KKT condition for ACCESS-POINT and thus $q$ is a solution to ACCESS-POINT. For the converse, suppose $(q, \rho, \psi)$ are positive vectors with $\rho_n = \psi_n q_n$, for all $n$, that satisfy (i) and (ii). We wish to show that $q$ is a solution to SYSTEM. Let $\xi_n$ be the Lagrange multiplier for CLIENT_n. Since we assume $\psi_n > 0$ for all $n$, the problem CLIENT_n is well-defined for all $n$, and so is $\xi_n$. Also, let $\zeta$ and $\mu$ be the Lagrange multipliers for ACCESS-POINT. Since $q_n > 0$ for all $n$, we have $\mu_n = 0$ for all $n$ by (18). By (16), we also have:
$$\frac{\partial L_{NET}}{\partial q_n}\Big|_{q,\zeta,\mu} = -\frac{\rho_n}{q_n} + \sum_{S\ni n} \frac{\zeta_S}{p_n} - \mu_n = -\psi_n + \sum_{S\ni n} \frac{\zeta_S}{p_n} = 0,$$
and thus $\psi_n = \sum_{S\ni n} \frac{\zeta_S}{p_n}$. Let $\lambda_S = \zeta_S$, for all $S$, and $\nu_n = \psi_n \xi_n$, for all $n$. We claim that $q$ is the optimal solution to SYSTEM with Lagrange multipliers $\lambda$ and $\nu$.
Since $q$ is a solution to ACCESS-POINT, it is feasible. Further, we have:
$$\frac{\partial L_{SYS}}{\partial q_n}\Big|_{q,\lambda,\nu} = -U'_n(q_n) + \sum_{S\ni n} \frac{\lambda_S}{p_n} - \nu_n = -U'_n\!\left(\frac{\rho_n}{\psi_n}\right) + \psi_n - \psi_n \xi_n = 0, \quad \forall n, \ \text{by (13)},$$
$$\lambda_S\Big[\sum_{n\in S} \frac{q_n}{p_n} - (\tau - I_S)\Big] = \zeta_S\Big[\sum_{n\in S} \frac{q_n}{p_n} - (\tau - I_S)\Big] = 0, \quad \forall S, \ \text{by (17)},$$
$$\nu_n q_n = \xi_n \rho_n = 0, \quad \forall n, \ \text{by (14)},$$
$$\lambda_S = \zeta_S \ge 0, \ \forall S, \ \text{by (19)}, \quad \text{and} \quad \nu_n = \psi_n \xi_n \ge 0, \ \forall n, \ \text{by (15)}.$$
Thus, $(q, \lambda, \nu)$ satisfies the KKT condition for SYSTEM, and so $q$ is a solution to SYSTEM.
V. A BIDDING GAME BETWEEN CLIENTS AND ACCESS POINT
Theorem 2 states that the maximum total utility of the system can be achieved when the solutions to the problems CLIENT_n and ACCESS-POINT agree. In this section, we formulate a repeated game for such reconciliation. We also discuss the meanings of the problems CLIENT_n and ACCESS-POINT in this repeated game.
The repeated game is formulated as follows:
Step 1: Each client n announces an amount ρ n that it pays per period.
Step 2: After noting the amounts, $\rho_1, \rho_2, \ldots, \rho_N$, paid by the clients, the AP chooses a scheduling policy so that the resulting long-term delivery ratio, $q_n$, for each client maximizes $\sum_{i=1}^{N} \rho_i \log q_i$.
Step 3: Client $n$ observes its own delivery ratio, $q_n$. It computes $\psi_n := \rho_n/q_n$. It then determines $\rho^*_n \ge 0$ to maximize $U_n(\rho^*_n/\psi_n) - \rho^*_n$. Client $n$ updates the amount it pays to $(1-\alpha)\rho_n + \alpha\rho^*_n$, with some fixed $0 < \alpha < 1$, and announces the new bid value.
Step 4: Go back to Step 2.
In Step 3 of the game, client $n$ chooses its new amount of payment as a weighted average of the past amount and the derived optimal value, instead of using the derived optimal value directly. This design serves two purposes. First, it seeks to prevent the system from oscillating between two extreme values. Second, since $\rho_n$ is initialized to a positive value, and the $\rho^*_n$ derived in each iteration is always non-negative, this design guarantees that $\rho_n$ remains positive throughout all iterations. Since $\psi_n = \rho_n/q_n$, this also ensures $\psi_n > 0$, and the function $U_n(\rho_n/\psi_n)$ is consequently always well-defined. We show that the fixed point of this repeated game maximizes the total utility of the system:
Theorem 3: Suppose at the fixed point of the repeated game, each client $n$ pays $\rho^*_n$ per period and receives delivery ratio $q^*_n$. If both $\rho^*_n$ and $q^*_n$ are positive for all $n$, the vector $q^*$ maximizes the total utility of the system.
Proof:
Let $\psi^*_n = \rho^*_n/q^*_n$.
It is positive since both $\rho^*_n$ and $q^*_n$ are positive. Since the vectors $q^*$ and $\rho^*$ are derived from the fixed point, $\rho^*_n$ maximizes $U_n(\rho_n/\psi^*_n) - \rho_n$ over all $\rho_n \ge 0$, as described in Step 3 of the game. Thus, $\rho^*_n$ is a solution to CLIENT_n, given $\rho^*_n = \psi^*_n q^*_n$. Similarly, from Step 2, $q^*$ is the feasible vector that maximizes $\sum_{i=1}^{N} \rho^*_i \log q_i$ over all feasible vectors $q$. Thus, $q^*$ is a solution to ACCESS-POINT, given that each client $n$ pays $\rho^*_n$ per period. By Theorem 2, $q^*$ is a solution to SYSTEM and therefore maximizes the total utility of the system.
Next, we describe the meaning of the game. In Step 3, client $n$ assumes a linear relation between the amount it pays, $\rho_n$, and the delivery ratio it receives, $q_n$. More precisely, it assumes $\rho_n = \psi_n q_n$, where $\psi_n$ is the price. Thus, maximizing $U_n(\rho_n/\psi_n) - \rho_n$ is equivalent to maximizing $U_n(q_n) - \rho_n$. Recall that $U_n(q_n)$ is the utility that client $n$ obtains when it receives delivery ratio $q_n$; $U_n(q_n) - \rho_n$ is therefore the net profit that client $n$ gets. In short, in
Step 3, the goal of client n is to selfishly maximize its own net profit using a first order linear approximation to the relation between payment and delivery ratio.
We next discuss the behavior of the AP in Step 2. The AP schedules clients so that the resulting delivery ratio vector q is a solution to the problem ACCESS-POINT, given that each client n pays ρ_n per period. Thus, q is feasible and there exist vectors ζ and µ that satisfy conditions (16)-(19). While it is difficult to solve this problem in general, we consider a special restrictive case that gives us a simple solution and insight into the AP's behavior. Let TOT := {1, 2, . . . , N} be the set consisting of all clients. We assume that a solution (q, ζ, µ) to the problem has the following properties: ζ_S = 0 for all S ≠ TOT, ζ_TOT > 0, and µ_n = 0 for all n. By (16), we have:
−ρ_n/q_n + Σ_{S∋n} ζ_S/p_n − µ_n = −ρ_n/q_n + ζ_TOT/p_n = 0,
and therefore q_n = p_n ρ_n/ζ_TOT. Further, since ζ_TOT > 0, (17) requires that:
Σ_{i∈TOT} q_i/p_i − (τ − I_TOT) = Σ_{i∈TOT} ρ_i/ζ_TOT − (τ − I_TOT) = 0.
Thus, ζ_TOT = (Σ_{i=1}^N ρ_i)/(τ − I_TOT) and q_n/p_n = (ρ_n/Σ_{i=1}^N ρ_i)(τ − I_TOT), for all n. Notice that the derived (q, ζ, µ) satisfies conditions (16)-(19). Thus, under the assumption that q is feasible, this special case actually maximizes Σ_{i=1}^N ρ_i log q_i. In Section VI we will address the general situation without any such assumption, since it need not be true.
Recall that I_TOT is the average number of time slots that the AP is forced to be idle in a period after it has completed all clients. Also, by Lemma 1, q_n/p_n is the workload of client n, that is, the average number of time slots that the AP should spend working for client n. Thus, by letting q_n/p_n = (ρ_n/Σ_{i=1}^N ρ_i)(τ − I_TOT), for all n, the AP tries to allocate the non-idle time slots so that the average number of time slots each client gets is proportional to its payment. Although we only study this special case here, we will show that the same behavior also holds for the general case in Section VI.
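A toy calculation of this proportional rule is sketched below (assumed numbers only, not the paper's code); the resulting q must still be checked for feasibility, which is exactly the caveat addressed in Section VI.

```python
import numpy as np

def proportional_allocation(rho, p, tau, idle_tot):
    """Special-case ACCESS-POINT solution: workloads q_n/p_n proportional to
    the bids, i.e. q_n = p_n * rho_n / sum(rho) * (tau - I_TOT)."""
    rho, p = np.asarray(rho, dtype=float), np.asarray(p, dtype=float)
    workload = rho / rho.sum() * (tau - idle_tot)
    return p * workload

# Assumed example: 3 clients, a 4-slot period, 0.4 expected idle slots.
q = proportional_allocation(rho=[2.0, 1.0, 1.0], p=[0.5, 0.7, 0.9], tau=4, idle_tot=0.4)
print(q)   # entries above 1 would indicate that this q is infeasible
```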
In summary, the game proposed in this section actually describes a bidding game, where clients are bidding for non-idle time slots. Each client gets a share of time slots that is proportional to its bid. The AP thus assigns delivery ratios, based on which the clients calculate a price and selfishly maximize their own net profits. Finally, Theorem 3 states that the equilibrium point of this game maximizes the total utility of the system.
VI. A SCHEDULING POLICY FOR SOLVING ACCESS-POINT
In Section V, we have shown that by setting q_n = p_n (ρ_n/Σ_{i=1}^N ρ_i)(τ − I_TOT), the resulting vector q solves ACCESS-POINT provided q is indeed feasible. Unfortunately, such a q is not always feasible, and solving ACCESS-POINT is in general difficult. Even for the special case discussed in Section V, solving ACCESS-POINT requires knowledge of the channel conditions, that is, the p_n. In this section, we propose a very simple priority-based scheduling policy that achieves the optimal solution of ACCESS-POINT, and does so without any knowledge of the channel conditions.
In the special case discussed in Section V, the AP tries, though it may be impossible in general, to allocate non-idle time slots to clients in proportion to their payments. Based on this intuitive guideline, we design the following scheduling policy. Let u_n(t) be the number of time slots that the AP has allocated to client n up to time t. At the beginning of each period, the AP sorts all clients in increasing order of u_n(t)/ρ_n, so that u_1(t)/ρ_1 ≤ u_2(t)/ρ_2 ≤ . . . after renumbering clients if necessary. The AP then schedules transmissions according to the priority ordering, where clients with smaller u_n(t)/ρ_n get higher priorities. Specifically, in each time slot during the period, the AP chooses the smallest i for which the packet for client i is not yet delivered, and then transmits the packet for client i in that time slot. We call this the weighted transmission policy (WT). Notice that the policy only requires the AP to keep track of the bids of the clients and the number of time slots each client has been allocated in the past, followed by a sorting of u_n(t)/ρ_n among all clients. Thus, the policy requires no information on the actual channel conditions, and is tractable. Simple as it is, we show that the policy actually achieves the optimal solution for ACCESS-POINT. In the following sections, we first prove that the vector of delivery ratios resulting from the WT policy converges to a single point. We then prove that this limit is the optimal solution for ACCESS-POINT. Finally, we establish that the WT policy additionally achieves some forms of fairness.
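The following is a minimal simulation sketch of the WT policy, assuming (as an illustration) one packet per client per period and i.i.d. Bernoulli channel successes with the stated reliabilities; all numbers are made up.

```python
import random

def run_period(tau, p, rho, u):
    """One period of the weighted transmission (WT) policy: at the start of
    the period clients are sorted by u_n/rho_n (ascending = higher priority);
    in each slot the AP transmits to the highest-priority undelivered client."""
    order = sorted(range(len(p)), key=lambda n: u[n] / rho[n])
    delivered = [False] * len(p)
    for _ in range(tau):
        pending = [n for n in order if not delivered[n]]
        if not pending:
            break                       # all packets delivered; AP idles
        n = pending[0]
        u[n] += 1                       # one slot allocated to client n
        if random.random() < p[n]:      # assumed Bernoulli channel success
            delivered[n] = True
    return delivered

random.seed(0)
p, rho = [0.6, 0.7, 0.8], [1.0, 2.0, 1.0]
u, hits = [0, 0, 0], [0, 0, 0]
periods = 2000
for _ in range(periods):
    for n, ok in enumerate(run_period(tau=8, p=p, rho=rho, u=u)):
        hits[n] += ok
print([h / periods for h in hits])      # empirical long-term delivery ratios
```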
A. Convergence of the Weighted Transmission Policy
We now prove that, by applying the WT policy, the delivery ratios of clients will converge to a vector q. To do so, we actually prove the convergence property and precise limit of a more general class of scheduling policies, which not only consists of the WT policy but also a scheduling policy proposed in [8]. The proof is similar to that used in [8] and is based on Blackwell's approachability theorem [3]. The proof in [8] only shows that the vector of delivery ratios approaches a desirable set in the N -space under a particular policy, while here we prove that the vector of delivery ratios converges to a single point under a more general class of scheduling policies. Thus, our result is both stronger and more general than the one in [8].
We start by introducing Blackwell's approachability theorem. Consider a single-player repeated game. In each round i of the game, the player chooses some action, a(i), and receives a reward v(i), which is a random vector whose distribution is a function of a(i). Blackwell studies the long-term average of the rewards received, lim_{j→∞} Σ_{i=1}^j v(i)/j, defining a set as approachable, under some policy, if the distance between Σ_{i=1}^j v(i)/j and the set converges to 0 with probability one, as j → ∞.
Theorem 4 (Blackwell [3]): Let A ⊆ R N be any closed set. Suppose that for every x / ∈ A, a policy η chooses an action a (= a(x)), which results in an expected payoff vector E(v). If the hyperplane through y, the closest point in A to x, perpendicular to the line segment xy, separates x from E(v), then A is approachable with the policy η.
Now we formulate our more general class of scheduling policies. We call a policy a generalized transmission time policy if, for a choice of a positive parameter vector a and a non-negative parameter vector b, the AP sorts clients by a_n u_n(t) − b_n t at the beginning of each period, and gives priority to clients with lower values of this quantity. Note that the special case a_n ≡ 1/ρ_n and b_n ≡ 0 yields the WT policy, while the choice a_n ≡ 1 and b_n ≡ q_n/p_n yields the largest time-based debt first policy of [8]; thus we describe a more general set of policies.
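The sort key alone already captures the whole class; the short sketch below (with arbitrary numbers) evaluates it for the two parameter choices mentioned above.

```python
def priority_order(u, t, a, b):
    """Generalized transmission time policy: sort clients by a_n*u_n(t) - b_n*t
    (ascending; a lower value means a higher priority)."""
    return sorted(range(len(u)), key=lambda n: a[n] * u[n] - b[n] * t)

u, t = [30, 55, 41], 100                       # assumed allocation counts and time
rho, q_target, p = [1.0, 2.0, 1.0], [0.8, 0.9, 0.7], [0.6, 0.7, 0.8]
# WT policy: a_n = 1/rho_n, b_n = 0
print(priority_order(u, t, a=[1 / r for r in rho], b=[0, 0, 0]))
# Largest time-based debt first policy of [8]: a_n = 1, b_n = q_n/p_n
print(priority_order(u, t, a=[1, 1, 1], b=[qt / pn for qt, pn in zip(q_target, p)]))
```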
Theorem 5: For each generalized transmission time policy, there exists a vector q such that the vector of work loads resulting from the policy converges to w(q) := [w n (q n )].
Proof: Given the parameters {(a_n, b_n) : 1 ≤ n ≤ N}, we give an exact expression for the limiting q. Starting from H_0 := ∅, we define a sequence of nested sets H_1 ⊂ H_2 ⊂ . . . together with the values
θ_k := [ (1/τ)(I_{H_{k−1}} − I_{H_k}) − Σ_{n∈H_k\H_{k−1}} b_n/a_n ] / [ Σ_{n∈H_k\H_{k−1}} 1/a_n ], for all k > 0.
In selecting H_k, we always choose a maximal subset, breaking ties arbitrarily. The pairs (H_1, θ_1), (H_2, θ_2), . . . can be iteratively defined until every client is in some H_k. Also, by the definition, we have θ_k > θ_{k−1}, for all k > 0. If client n is in H_k\H_{k−1}, we define q_n := τ p_n (b_n + θ_k)/a_n, and so w_n(q_n) = τ(b_n + θ_k)/a_n. The proof of convergence consists of two parts. First we prove that the vector of work performed (see Lemma 1 for the definition) approaches the set {w* | w*_n ≥ w_n(q_n)}. Then we prove that w(q) is the only feasible vector in the set {w* | w*_n ≥ w_n(q_n)}. Since the feasible region for workloads, defined as the set of all feasible workload vectors, is approachable under any policy, the vector of work performed resulting from the generalized transmission time policy must converge to w(q).
For the first part, we prove the following statement: for each k ≥ 1, the set W_k := {w* | w*_n ≥ τ(b_n + θ_k)/a_n, ∀n ∉ H_{k−1}} is approachable. Since ∩_{i≥0} W_i = {w* | w*_n ≥ w_n(q_n)}, this also proves that {w* | w*_n ≥ w_n(q_n)} is approachable. Proving that W_k is approachable is equivalent to proving that its image under L, V_k := {l | l_n ≥ θ_k/√a_n, ∀n ∉ H_{k−1}}, is approachable. Now we apply Blackwell's theorem. Suppose at some time t that is the beginning of a period, the number of time slots that the AP has worked on client n is u_n(t). The work performed for client n is u_n(t)/(t/τ), and the image of the vector of work performed under L is x(t) := [x_n(t)], where x_n(t) = (a_n u_n(t)/t − b_n)/√a_n, which we shall suppose is not in V_k. The generalized transmission time policy sorts clients so that a_1 u_1(t) − b_1 ≤ a_2 u_2(t) − b_2 ≤ . . . , or equivalently, √a_1 x_1(t) ≤ √a_2 x_2(t) ≤ . . . . The closest point in V_k to x(t) is y := [y_n], where y_n = θ_k/√a_n if x_n(t) < θ_k/√a_n and n ∉ H_{k−1}, and y_n = x_n(t) otherwise. The hyperplane that passes through y and is orthogonal to the line segment xy is
{z | f(z) := Σ_{n ∉ H_{k−1}: x_n(t) < θ_k/√a_n} (z_n − θ_k/√a_n)(x_n(t) − θ_k/√a_n) = 0}.
Let π_n be the expected number of time slots that the AP spends working for client n in this period under the generalized transmission time policy. The image under L of the expected reward in this period is π_L := [(a_n π_n/τ − b_n)/√a_n].
Blackwell's theorem shows that V k is approachable if x(t) and π L are separated by the plane {z|f (z) = 0}. Since f (x(t)) ≥ 0, it suffices to show f (π L ) ≤ 0.
We manipulate the original ordering, for this period, so that all clients in H k−1 have higher priorities than those not in H k−1 , while preserving the relative ordering between clients not in H k−1 . Note this manipulation will not give any client n / ∈ H k−1 higher priority than it had in the original ordering. Therefore, π n will not increase for any n / ∈ H k−1 . Since the value of f (π L ) only depends on π n for n / ∈ H k−1 , and increases as those π n decrease, this manipulation will not decrease the value of f (π L ). Thus, it suffices to prove that f (π L ) ≤ 0, under this new ordering. Let n 0 := |H k−1 | + 1. Under this new ordering, we have:
√a_{n_0} x_{n_0}(t) ≤ √a_{n_0+1} x_{n_0+1}(t) ≤ · · · ≤ √a_{n_1} x_{n_1}(t) < θ_k ≤ √a_{n_1+1} x_{n_1+1}(t) ≤ . . . . Let δ_n = √a_n x_n(t) − √a_{n+1} x_{n+1}(t), for n_0 ≤ n ≤ n_1 − 1, and δ_{n_1} = √a_{n_1} x_{n_1}(t) − θ_k. Clearly, δ_n ≤ 0, for all n_0 ≤ n ≤ n_1. Now we can derive:
f(π_L) = Σ_{n=n_0}^{n_1} ((a_n π_n/τ − b_n)/√a_n − θ_k/√a_n)(x_n(t) − θ_k/√a_n)
= Σ_{n=n_0}^{n_1} (π_n/τ − b_n/a_n − θ_k/a_n)(√a_n x_n(t) − θ_k)
= Σ_{i=n_0}^{n_1} (Σ_{n=n_0}^{i} π_n/τ − Σ_{n=n_0}^{i} b_n/a_n − θ_k Σ_{n=n_0}^{i} 1/a_n) δ_i.
Recall that I_S is the expected number of idle time slots when the AP caters only to the subset S. Thus, under this ordering, we have
Σ_{n=n_0}^{i} π_n/τ − Σ_{n=n_0}^{i} b_n/a_n − θ_k Σ_{n=n_0}^{i} 1/a_n = (Σ_{n=n_0}^{i} 1/a_n)([ (1/τ)(I_{H_{k−1}} − I_{{1,...,i}}) − Σ_{n=n_0}^{i} b_n/a_n ] / [ Σ_{n∈{1,...,i}\H_{k−1}} 1/a_n ] − θ_k) ≥ 0.
Therefore, f (π L ) ≤ 0, since δ i ≤ 0, and V k is indeed approachable, for all k.
We have established that the set {w* | w*_n ≥ w_n(q_n)} is approachable. Next we prove that [w_n(q_n)] is the only feasible vector in this set. Consider any vector w′ ≠ w(q) in the set. We have w′_n ≥ w_n(q_n) for all n, and w′_{n_0} > w_{n_0}(q_{n_0}) for some n_0. Suppose n_0 ∈ H_k\H_{k−1}. We have:
Σ_{n∈H_k} w′_n > Σ_{n∈H_k} w_n(q_n) = Σ_{i=1}^{k} Σ_{n∈H_i\H_{i−1}} τ(b_n + θ_i)/a_n = Σ_{i=1}^{k} (I_{H_{i−1}} − I_{H_i}) = τ − I_{H_k},
and thus w′ is not feasible. Therefore, w(q) is the only feasible vector in {w* | w*_n ≥ w_n(q_n)}, and the vector of work performed resulting from the generalized transmission time policy must converge to w(q).
Corollary 1: For the policy of Theorem 5, the vector of delivery ratios converges to q.
Proof: Follows from Lemma 1.
B. Optimality of the Weighted Transmission Policy for ACCESS-P OIN T
Theorem 6: Given [ρ n ], the vector q of long-term average delivery ratios resulting from the WT policy is a solution to ACCESS-P OIN T .
Proof: We use the sequence of sets {H_k} and values {θ_k}, with a_n := 1/ρ_n and b_n := 0, as defined in the proof of Theorem 5. Let K := |{θ_k}|. Thus, we have H_K = TOT = {1, 2, . . . , N}. Also, let m_k := |H_k|. For convenience, we renumber clients so that H_k = {1, 2, . . . , m_k}. The proof of Theorem 5 shows that q_n = τ p_n θ_k ρ_n, for n ∈ H_k\H_{k−1}. Therefore, w_n(q_n) = q_n/p_n = τ θ_k ρ_n. Obviously, q is feasible, since it is indeed achieved by the WT policy. Thus, to establish optimality, we only need to prove the existence of vectors ζ and µ that satisfy conditions (16)-(19). Set µ_n = 0, for all n. Let ζ_{H_K} = ζ_TOT := ρ_N/w_N(q_N) = 1/(τ θ_K), and ζ_{H_k} := ρ_{m_k}/w_{m_k}(q_{m_k}) − ρ_{m_{k+1}}/w_{m_{k+1}}(q_{m_{k+1}}) = 1/(τ θ_k) − 1/(τ θ_{k+1}), for 1 ≤ k ≤ K − 1. Finally, let ζ_S := 0, for all S ∉ {H_1, H_2, . . . , H_K}. We claim that the vectors ζ and µ, along with q, satisfy conditions (16)-(19).
We first evaluate condition (16). Suppose client n is in H k \H k−1 . Then client n is also in H k+1 , H k+2 , . . . , H K . So,
−ρ_n/q_n + Σ_{S∋n} ζ_S/p_n − µ_n = −1/(τ θ_k p_n) + Σ_{i=k}^{K} ζ_{H_i}/p_n = −1/(τ θ_k p_n) + 1/(τ θ_k p_n) = 0.
Thus, condition (16) is satisfied. Since µ_n = 0 for all n, condition (18) is satisfied. Further, since 1/θ_k > 1/θ_{k+1} for all 1 ≤ k ≤ K−1, condition (19) is also satisfied. It remains to establish condition (17). Note that w_i(q_i)/ρ_i ≤ τ θ_k < w_j(q_j)/ρ_j whenever i ∈ H_k and j ∉ H_k. Since w_n(q_n) is the average number of time slots that the AP spends working for client n, we have u_i(t)/ρ_i < u_j(t)/ρ_j, for all i ∈ H_k and j ∉ H_k, after a finite number of periods. Therefore, except for a finite number of periods, clients in H_k have priority over those not in H_k. In other words, if we only consider the behavior of the clients in H_k, it is the same as if the AP only works on the subset H_k of clients. Further, recall that I_{H_k} is the expected number of time slots that the AP is forced to stay idle when it only works on the subset H_k of clients. Thus, we have Σ_{i∈H_k} w_i(q_i) = τ − I_{H_k} and Σ_{i∈H_k} q_i/p_i − (τ − I_{H_k}) = 0, for all k.
C. Fairness of Allocated Delivery Ratios
We now show that the WT policy not only solves the ACCESS-POINT problem but also achieves some forms of fairness among clients. Two common fairness criteria are max-min fairness and proportional fairness. We extend the definitions of these two criteria as follows:
Definition 3: A scheduling policy is called weighted max-min fair with positive weight vector a = [a n ] if it achieves q, and, for any other feasible vector q ′ , we have:
q′_i > q_i ⇒ q′_j < q_j, for some j such that w_i(q_i)/a_i ≥ w_j(q_j)/a_j.
Definition 4: A scheduling policy is called weighted proportionally fair with positive weight vector a if it achieves q and, for any other feasible vector q′, we have: Σ_{n=1}^{N} [w_n(q′_n) − w_n(q_n)] / [w_n(q_n)/a_n] ≤ 0.
Next, we prove that the WT policy is both weighted max-min fair and proportionally fair with weight vector ρ.
Theorem 7: The weighted transmission policy is weighted max-min fair with weight ρ.
Proof: We sort clients and define {H_k} as in the proof of Theorem 6. Let q be the vector achieved by the WT policy and q′ be any feasible vector. Suppose q′_i > q_i for some i. Assume client i is in H_k\H_{k−1}. The proof of Theorem 6 shows that Σ_{n∈H_k} w_n(q_n) = τ − I_{H_k}. On the other hand, the feasibility condition requires Σ_{n∈H_k} w_n(q′_n) ≤ τ − I_{H_k} = Σ_{n∈H_k} w_n(q_n). Further, since q′_i > q_i implies w_i(q′_i) > w_i(q_i), there must exist some j ∈ H_k so that w_j(q′_j) < w_j(q_j), that is, q′_j < q_j. Finally, since i ∈ H_k\H_{k−1}, we have w_i(q_i)/ρ_i ≥ w_n(q_n)/ρ_n for all n ∈ H_k, and hence w_i(q_i)/ρ_i ≥ w_j(q_j)/ρ_j.
Theorem 8: The weighted transmission policy is proportionally fair with weight ρ.
Proof: We sort clients and define {H_k} as in the proof of Theorem 6. Let q be the vector achieved by the WT policy, and let q′ be any feasible vector. We have w_i(q_i)/ρ_i = τ θ_k if i ∈ H_k\H_{k−1}. Define Δ_k := Σ_{n∈H_k\H_{k−1}} [w_n(q′_n) − w_n(q_n)].
To prove the theorem, we prove a stronger statement by induction:
Σ_{n∈H_k} [w_n(q′_n) − w_n(q_n)] / [w_n(q_n)/ρ_n] = Σ_{i=1}^{k} Δ_i/(τ θ_i) ≤ 0, for all k > 0.
First consider the case k = 1. The proof of Theorem 6 shows that Σ_{n∈H_1} w_n(q_n) = τ − I_{H_1}. Further, the feasibility condition requires Σ_{n∈H_1} w_n(q′_n) ≤ τ − I_{H_1} = Σ_{n∈H_1} w_n(q_n), and so Δ_1 = Σ_{n∈H_1} [w_n(q′_n) − w_n(q_n)] ≤ 0. Thus, we have Δ_1/(τ θ_1) ≤ 0. Suppose we have Σ_{i=1}^{k} Δ_i/(τ θ_i) ≤ 0, for all k ≤ k_0. Again, the proof of Theorem 6 gives us Σ_{n∈H_{k_0+1}} w_n(q_n) = τ − I_{H_{k_0+1}}, and the feasibility condition requires Σ_{n∈H_{k_0+1}} w_n(q′_n) ≤ τ − I_{H_{k_0+1}} = Σ_{n∈H_{k_0+1}} w_n(q_n). Thus, Σ_{i=1}^{k_0+1} Δ_i ≤ 0.
We can further derive:
Σ_{i=1}^{k_0+1} Δ_i/(τ θ_i) ≤ Σ_{i=1}^{k_0} [Δ_i/(τ θ_i)](1 − θ_i/θ_{k_0+1})   (since Σ_{i=1}^{k_0+1} Δ_i/(τ θ_{k_0+1}) ≤ 0)
= Σ_{j=1}^{k_0} [((θ_{j+1} − θ_j)/θ_{k_0+1}) Σ_{i=1}^{j} Δ_i/(τ θ_i)] ≤ 0   (since Σ_{i=1}^{j} Δ_i/(τ θ_i) ≤ 0, and θ_{j+1} > θ_j, ∀j ≤ k_0)
By induction, Σ_{i=1}^{k} Δ_i/(τ θ_i) ≤ 0, for all k. Finally, we have:
Σ_{n=1}^{N} [w_n(q′_n) − w_n(q_n)] / [w_n(q_n)/ρ_n] = Σ_{i=1}^{K} Δ_i/(τ θ_i) ≤ 0,
and the WT policy is proportionally fair with weight ρ.
VII. SIMULATION RESULTS
We have implemented the WT policy and the bidding game, as described in Section V, on ns-2, using the settings summarized in Table I. All results in this section are averages of 20 simulation runs.
A. Convergence Time for the Weighted Transmission Policy
In Section VI-A we have proved that the vector of delivery ratios converges under the WT policy. However, the speed of convergence was not discussed. In the bidding game, we assume that the delivery ratio observed by each client is the post-convergence value. Thus, it is important to verify whether the WT policy converges quickly. In this simulation, we assume that there are 30 clients in the system. The n-th client has channel reliability (50 + n)% and offers a bid ρ_n = (n mod 2) + 1. We run each simulation for 10 seconds of simulated time and then compare the value of Σ_n ρ_n log q_n computed from the delivery ratios at the end of each period against its value after 10 seconds, in terms of absolute difference. In particular, we artificially set q_n = 0.001 if the delivery ratio for client n is zero, to avoid a computation error in log q_n.
Simulation results are shown in Fig. 1. It can be seen that the delivery ratios converge rather quickly. At time 0.2 seconds, the difference is smaller than 1.4, which is less than 10% of the final value. Based on this observation, we assume that each client updates its bid every 0.2 seconds in the following simulations.
B. Utility Maximization
In this section, we study the total utility that is achieved by iterating between the bidding game and the WT policy, which we call WT-Bid. We assume that the utility function of each client n is given by γ_n (q_n^{α_n} − 1)/α_n, where γ_n is a positive integer and 0 < α_n < 1. This utility function is strictly increasing, strictly concave, and differentiable for any γ_n and α_n. In addition to evaluating the policy WT-Bid, we also compare the results of three other policies: a policy that employs the WT policy but without updating the bids from clients, which we call WT-NoBid; a policy that decides priorities randomly among clients at the beginning of each period, which we call Rand; and a policy that gives clients with larger γ_n higher priorities, with ties broken randomly, which we call P-Rand.
Fig. 2: Performance of total utility. (a) Average of total utility. (b) Variance of total utility.
In each simulation, we assume there are 30 clients. The n th client has channel reliability p n = (50 + n)%, γ n = (n mod 3) + 1, and α n = 0.3 + 0.1(n mod 5). In addition to plotting the average of total utility over all simulation runs, we also plot the variance of total utility. Fig. 2 shows the simulation results. The WT-Bid policy not only achieves the highest average total utility but also the smallest variance. This result suggests that the WT-Bid policy converges very fast. On the other hand, the WT-NoBid policy fails to provide satisfactory performance since it does not consider the different utility functions that clients may have. The P-Rand policy offers much better performance than both the WT-NoBid policy and the Rand policy since it correctly gives higher priority to clients with higher γ n . Still, it cannot differentiate between clients with the same γ n and thus can only provide suboptimal performance.
VIII. CONCLUDING REMARKS
We have studied the utility maximization problem for clients that demand delay-based QoS support from an access point. Based on an analytical model for QoS support proposed in previous work, we formulate the utility maximization problem as a convex optimization problem. We decompose the problem into two simpler subproblems, namely, CLIENT_n and ACCESS-POINT. We have proved that the total utility of the system can be maximized by jointly solving the two subproblems. We also describe a bidding game to reconcile the two subproblems. In the game, each client announces its bid to maximize its own net profit and the AP allocates time slots to achieve the optimal point of ACCESS-POINT. We have proved that the equilibrium point of the bidding game jointly solves the two subproblems, and therefore achieves the maximum total utility.
In addition, we have proposed a very simple, priority-based weighted transmission policy for solving the ACCESS-POINT subproblem. This policy does not require that the AP know the channel reliabilities of the clients, or their individual utilities. We have proved that the long-term performance of a general class of priority-based policies that includes our proposed policy converges to a single point. We then proved that the limiting point of the proposed scheduling policy is the optimal solution to ACCESS-POINT. Moreover, we have also proved that the resulting allocation by the AP satisfies some forms of fairness criteria. Finally, we have implemented both the bidding game and the scheduling policy in ns-2. Simulation results suggest that the scheduling policy converges quickly. Further, by iterating between the bidding game and the WT policy, the resulting total utility is higher than that of the other tested policies.
| 9,211 |
0908.0570
|
2952664369
|
We propose a nonparametric Bayesian factor regression model that accounts for uncertainty in the number of factors, and the relationship between factors. To accomplish this, we propose a sparse variant of the Indian Buffet Process and couple this with a hierarchical model over factors, based on Kingman's coalescent. We apply this model to two problems (factor analysis and factor regression) in gene-expression data analysis.
|
A number of probabilistic approaches have been proposed in the past for the problem of gene-regulatory network reconstruction @cite_10 @cite_13 @cite_4 @cite_3 . Some take into account the information on the prior network topology @cite_10 , which is not always available. Most assume the number of factors is known. To get around this, one can perform model selection via Reversible Jump MCMC @cite_0 or evolutionary stochastic model search @cite_12 . Unfortunately, these methods are often difficult to design and may take quite long to converge. Moreover, they are difficult to integrate with other forms of prior knowledge (eg., factor hierarchies). A somewhat similar approach to ours is the infinite independent component analysis (iICA) model of @cite_8 which treats factor analysis as a special case of ICA. However, their model is limited to factor analysis and does not take into account feature selection, factor hierarchy and factor regression. As a generalization to the standard ICA model, @cite_11 proposed a model in which the components can be related via a tree-structured graphical model. It, however, assumes a fixed number of components.
|
{
"abstract": [
"Motivation: We have used state-space models (SSMs) to reverse engineer transcriptional networks from highly replicated gene expression profiling time series data obtained from a well-established model of T cell activation. SSMs are a class of dynamic Bayesian networks in which the observed measurements depend on some hidden state variables that evolve according to Markovian dynamics. These hidden variables can capture effects that cannot be directly measured in a gene expression profiling experiment, for example: genes that have not been included in the microarray, levels of regulatory proteins, the effects of mRNA and protein degradation, etc. Results: We have approached the problem of inferring the model structure of these state-space models using both classical and Bayesian methods. In our previous work, a bootstrap procedure was used to derive classical confidence intervals for parameters representing 'gene--gene' interactions over time. In this article, variational approximations are used to perform the analogous model selection task in the Bayesian context. Certain interactions are present in both the classical and the Bayesian analyses of these regulatory networks. The resulting models place JunB and JunD at the centre of the mechanisms that control apoptosis and proliferation. These mechanisms are key for clonal expansion and for controlling the long term behavior (e.g. programmed cell death) of these cells. Availability: Supplementary data is available at http: public.kgi.edu wild index.htm and Matlab source code for variational Bayesian learning of SSMs is available at http: www.cse.ebuffalo.edu faculty mbeal software.html Contact: [email protected]",
"",
"Motivation: In systems like Escherichia Coli, the abundance of sequence information, gene expression array studies and small scale experiments allows one to reconstruct the regulatory network and to quantify the effects of transcription factors on gene expression. However, this goal can only be achieved if all information sources are used in concert. Results: Our method integrates literature information, DNA sequences and expression arrays. A set of relevant transcription factors is defined on the basis of literature. Sequence data are used to identify potential target genes and the results are used to define a prior distribution on the topology of the regulatory network. A Bayesian hidden component model for the expression array data allows us to identify which of the potential binding sites are actually used by the regulatory proteins in the studied cell conditions, the strength of their control, and their activation profile in a series of experiments. We apply our methodology to 35 expression studies in E.Coli with convincing results. Availability: www.genetics.ucla.edu labs sabatti software.html Supplementary information: The supplementary material are available at Bioinformatics online. Contact: [email protected]",
"",
"Markov chain Monte Carlo methods for Bayesian computation have until recently been restricted to problems where the joint distribution of all variables has a density with respect to some fixed standard underlying measure. They have therefore not been available for application to Bayesian model determination, where the dimensionality of the parameter vector is typically not fixed. This paper proposes a new framework for the construction of reversible Markov chain samplers that jump between parameter subspaces of differing dimensionality, which is flexible and entirely constructive. It should therefore have wide applicability in model determination problems. The methodology is illustrated with applications to multiple change-point analysis in one and two dimensions, and to a Bayesian comparison of binomial experiments.",
"Motivation: Quantitative estimation of the regulatory relationship between transcription factors and genes is a fundamental stepping stone when trying to develop models of cellular processes. Recent experimental high-throughput techniques, such as Chromatin Immunoprecipitation (ChIP) provide important information about the architecture of the regulatory networks in the cell. However, it is very difficult to measure the concentration levels of transcription factor proteins and determine their regulatory effect on gene transcription. It is therefore an important computational challenge to infer these quantities using gene expression data and network architecture data. Results: We develop a probabilistic state space model that allows genome-wide inference of both transcription factor protein concentrations and their effect on the transcription rates of each target gene from microarray data. We use variational inference techniques to learn the model parameters and perform posterior inference of protein concentrations and regulatory strengths. The probabilistic nature of the model also means that we can associate credibility intervals to our estimates, as well as providing a tool to detect which binding events lead to significant regulation. We demonstrate our model on artificial data and on two yeast datasets in which the network structure has previously been obtained using ChIP data. Predictions from our model are consistent with the underlying biology and offer novel quantitative insights into the regulatory structure of the yeast cell. Availability: MATLAB code is available from http: umber.sbs.man.ac.uk resources puma Contact: [email protected] Supplementary information: Supplementary Data are available at Bioinformatics online",
"We describe studies in molecular profiling and biological pathway analysis that use sparse latent factor and regression models for microarray gene expression data. We discuss breast cancer applications and key aspects of the modeling and computational methodology. Our case studies aim to investigate and characterize heterogeneity of structure related to specific oncogenic pathways, as well as links between aggregate patterns in gene expression profiles and clinical biomarkers. Based on the metaphor of statistically derived “factors” as representing biological “subpathway” structure, we explore the decomposition of fitted sparse factor models into pathway subcomponents and investigate how these components overlay multiple aspects of known biological activity. Our methodology is based on sparsity modeling of multivariate regression, ANOVA, and latent factor models, as well as a class of models that combines all components. Hierarchical sparsity priors address questions of dimension reduction and multiple co...",
"We present a generalization of independent component analysis (ICA), where instead of looking for a linear transform that makes the data components independent, we look for a transform that makes the data components well fit by a tree-structured graphical model. This tree-dependent component analysis (TCA) provides a tractable and flexible approach to weakening the assumption of independence in ICA. In particular, TCA allows the underlying graph to have multiple connected components, and thus the method is able to find \"clusters\" of components such that components are dependent within a cluster and independent between clusters. Finally, we make use of a notion of graphical models for time series due to Brillinger (1996) to extend these ideas to the temporal setting. In particular, we are able to fit models that incorporate tree-structured dependencies among multiple time series."
],
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_10",
"@cite_3",
"@cite_0",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2134092975",
"",
"2096218546",
"",
"2106706098",
"2010532309",
"2053609837",
"2128110671"
]
}
|
The Infinite Hierarchical Factor Regression Model
|
Factor analysis is the task of explaining data by means of a set of latent factors. Factor regression couples this analysis with a prediction task, where the predictions are made solely on the basis of the factor representation. The latent factor representation achieves two-fold benefits: (1) discovering the latent process underlying the data; (2) simpler predictive modeling through a compact data representation. In particular, (2) is motivated by the problem of prediction in the "large P small N" paradigm [1], where the number of features P greatly exceeds the number of examples N , potentially resulting in overfitting.
We address three fundamental shortcomings of standard factor analysis approaches [2,3,4,1]: (1) we do not assume a known number of factors; (2) we do not assume factors are independent; (3) we do not assume all features are relevant to the factor analysis. Our motivation for this work stems from the task of reconstructing regulatory structure from gene-expression data. In this context, factors correspond to regulatory pathways. Our contributions thus parallel the needs of gene pathway modeling. In addition, we couple predictive modeling (for factor regression) within the factor analysis framework itself, instead of having to model it separately. Our factor regression model is fundamentally nonparametric. In particular, we treat the gene-to-factor relationship nonparametrically by proposing a sparse variant of the Indian Buffet Process (IBP) [5], designed to account for the sparsity of relevant genes (features). We couple this IBP with a hierarchical prior over the factors. This prior explains the fact that pathways are fundamentally related: some are involved in transcription, some in signaling, some in synthesis. The nonparametric nature of our sparse IBP requires that the hierarchical prior also be nonparametric. A natural choice is Kingman's coalescent [6], a popular distribution over infinite binary trees.
Since our motivation is an application in bioinformatics, our notation and terminology will be drawn from that area. In particular, genes are features, samples are examples, and pathways are factors. However, our model is more general. An alternative application might be to a collaborative filtering problem, in which case our genes might correspond to movies, our samples might correspond to users and our pathways might correspond to genres. In this context, all three contributions of our model still make sense: we do not know how many movie genres there are; some genres are closely related (romance to comedy versus to action); many movies may be spurious.
Indian Buffet Process
The Indian Buffet Process [7] defines a distribution over infinite binary matrices, originally motivated by the need to model the latent factor structure of a given set of observations. In the standard form it is parameterized by a scale value, α. The distribution can be explained by means of a simple culinary analogy. Customers (in our context, genes) enter an Indian restaurant and select dishes (in our context, pathways) from an infinite array of dishes. The first customer selects Poisson(α) dishes. Thereafter, each incoming customer i selects a previously-selected dish k with a probability m_k/(i − 1), where m_k is the number of previous customers who have selected dish k. Customer i then selects an additional Poisson(α/i) new dishes. We can easily define a binary matrix Z with value Z_ik = 1 precisely when customer i selects dish k. This stochastic process thus defines a distribution over infinite binary matrices.
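A generative sketch of this culinary construction is given below (illustrative only; the value of α and the number of customers are arbitrary).

```python
import numpy as np

def sample_ibp(P, alpha, rng):
    """Draw a binary customer-dish matrix Z following the culinary
    construction described above (indexing as in the text)."""
    Z = np.zeros((0, 0), dtype=int)
    for i in range(1, P + 1):
        m = Z.sum(axis=0)                                   # dish popularity m_k
        old = (rng.random(Z.shape[1]) < m / max(i - 1, 1)).astype(int)
        k_new = rng.poisson(alpha / i)                      # Poisson(alpha/i) new dishes
        Z = np.pad(Z, ((0, 0), (0, k_new)))                 # append empty columns
        row = np.concatenate([old, np.ones(k_new, dtype=int)])
        Z = np.vstack([Z, row])
    return Z

Z = sample_ibp(P=50, alpha=3.0, rng=np.random.default_rng(0))
print(Z.shape)   # (50, K): the number of selected dishes K is itself random
```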
It turns out [7] that the stochastic process defined above corresponds to an infinite limit of an exchangeable process over finite matrices with K columns. This distribution takes the form
p(Z | α) = Π_{k=1}^{K} (α/K) Γ(m_k + α/K) Γ(P − m_k + 1) / Γ(P + 1 + α/K)
, where m_k = Σ_i Z_ik and P is the total number of customers. Taking K → ∞ yields the IBP. The IBP has several nice properties, the most important of which is exchangeability. It is this exchangeability (over samples) that makes efficient sampling algorithms possible. There also exists a two-parameter generalization of the IBP in which the second parameter β controls the sharability of dishes.
Kingman's Coalescent
Our model makes use of a latent hierarchical structure over factors; we use Kingman's coalescent [6] as a convenient prior distribution over hierarchies. Kingman's coalescent originated in the study of population genetics for a set of single-parent organisms. The coalescent is a nonparametric model over a countable set of organisms. It is most easily understood in terms of its finite dimensional marginal distributions over n individuals, in which case it is called an n-coalescent. We then take the limit n → ∞. In our case, the individuals are factors.
The n-coalescent considers a population of n organisms at time t = 0. We follow the ancestry of these individuals backward in time, where each organism has exactly one parent at time t < 0. The n-coalescent is a continuous-time, partition-valued Markov process which starts with n singleton clusters at time t = 0 and evolves backward, coalescing lineages until there is only one left. We denote by t_i the time at which the i-th coalescent event occurs (note t_i ≤ 0), and δ_i = t_{i−1} − t_i the time between events (note δ_i > 0). Under the n-coalescent, each pair of lineages merges independently with exponential rate 1; so δ_i ∼ Exp((n−i+1 choose 2)). With probability one, a random draw from the n-coalescent is a binary tree with a single root at t = −∞ and n individuals at time t = 0. We denote the tree structure by π. The marginal distribution over tree topologies is uniform and independent of coalescent times; and the model is infinitely exchangeable. We therefore consider the limit as n → ∞, called the coalescent.
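The following sketch samples coalescent times and a topology by simulating the merges described above; it is an illustration rather than the inference code used in the paper.

```python
import numpy as np

def sample_coalescent(n, rng):
    """Draw a topology and event times from the n-coalescent: going backwards
    from t = 0, the i-th merge happens after delta_i ~ Exp(rate = C(n-i+1, 2))
    and joins a uniformly chosen pair of surviving lineages."""
    lineages = [("leaf", k) for k in range(n)]
    t, events = 0.0, []
    for i in range(1, n):
        rate = (n - i + 1) * (n - i) / 2.0        # C(n-i+1, 2) candidate pairs
        t -= rng.exponential(1.0 / rate)          # coalescent times are negative
        a, b = rng.choice(len(lineages), size=2, replace=False)
        merged = ("node", lineages[a], lineages[b], t)
        lineages = [x for j, x in enumerate(lineages) if j not in (a, b)] + [merged]
        events.append(t)
    return lineages[0], events

tree, times = sample_coalescent(n=6, rng=np.random.default_rng(1))
print(times)    # t_1 > t_2 > ... (all negative), ending at the root
```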
Once the tree structure is obtained, one can define an additional Markov process that evolves over the tree. One common choice is a Brownian diffusion process. In Brownian diffusion in D dimensions, we assume an underlying p.s.d. diffusion covariance Λ ∈ R^{D×D}. The root is a D-dimensional vector z. Each non-root node in the tree is drawn Gaussian with mean equal to the value of its parent and variance δ_i Λ, where δ_i is the time that has passed.
Recently, Teh et al. [8] proposed efficient bottom-up agglomerative inference algorithms for the coalescent. These (approximately) maximize the probability of π and δs, marginalizing out internal nodes by Belief Propagation. If we associate with each node in the tree a mean y and variance v message, we update messages as Eq (1), where i is the current node and li and ri are its children.
v_i = [(v_{li} + (t_{li} − t_i)Λ)^{−1} + (v_{ri} + (t_{ri} − t_i)Λ)^{−1}]^{−1}   (1)
y_i = [y_{li} (v_{li} + (t_{li} − t_i)Λ)^{−1} + y_{ri} (v_{ri} + (t_{ri} − t_i)Λ)^{−1}] v_i
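A minimal sketch of this message update is shown below, assuming (for simplicity) a diagonal diffusion covariance so that the update can be done elementwise; the toy inputs are arbitrary.

```python
import numpy as np

def merge_messages(y_l, v_l, t_l, y_r, v_r, t_r, t_i, lam):
    """Belief-propagation update of Eq (1) for a merged node i with children
    li and ri, written elementwise for a diagonal diffusion covariance lam."""
    prec_l = 1.0 / (v_l + (t_l - t_i) * lam)   # precision of the left message
    prec_r = 1.0 / (v_r + (t_r - t_i) * lam)   # precision of the right message
    v_i = 1.0 / (prec_l + prec_r)
    y_i = v_i * (prec_l * y_l + prec_r * y_r)  # precision-weighted mean
    return y_i, v_i

# Assumed toy numbers in D = 2 dimensions, lam = 1 elementwise.
y, v = merge_messages(np.array([1.0, 0.0]), np.array([0.1, 0.1]), 0.0,
                      np.array([0.5, 0.5]), np.array([0.2, 0.2]), 0.0,
                      t_i=-0.3, lam=np.ones(2))
print(y, v)
```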
Nonparametric Bayesian Factor Regression
Recall the standard factor analysis problem: X = AF + E, for standardized data X. X is a P × N matrix consisting of N samples [x 1 , ..., x N ] of P features each. A is the factor loading matrix of size P × K and F = [f 1 , ..., f N ] is the factor matrix of size K × N . E = [e 1 , ..., e N ] is the matrix of idiosyncratic variations. K, the number of factors, is known.
Recall that our goal is to treat the factor analysis problem nonparametrically, to model feature relevance, and to model hierarchical factors. For expository purposes, it is simplest to deal with each of these issues in turn. In our context, we begin by modeling the gene-factor relationship nonparametrically (using the IBP). Next, we propose a variant of IBP to model gene relevance. We then present the hierarchical model for inferring factor hierarchies. We conclude with a presentation of the full model and our mechanism for modifying the factor analysis problem to factor regression.
Nonparametric Gene-Factor Model
We begin by directly using the IBP to infer the number of factors. Although the IBP has been applied to nonparametric factor analysis in the past [5], the standard IBP formulation places the IBP prior on the factor matrix (F) associating samples (i.e. a set of features) with factors. Such a model assumes that the sample-factor relationship is sparse. However, this assumption is inappropriate in the gene-expression context where it is not the factors themselves but the associations among genes and factors (i.e., the factor loading matrix A) that are sparse. In such a context, each sample depends on all the factors but each gene within a sample usually depends only on a small number of factors.
Thus, it is more appropriate to model the factor loading matrix (A) with the IBP prior. Note that since A and F are related with each other via the number of factors K, modeling A nonparametrically allows our model to also have an unbounded number of factors.
For most gene-expression problems [1], a binary factor loadings matrix (A) is inappropriate. Therefore, we instead use the Hadamard (element-wise) product of a binary matrix Z and a matrix V of reals. Z and V are of the same size as A. The factor analysis model, for each sample i, thus becomes:
x i = (Z ⊙ V )f i + e i .
We have Z ∼ IBP(α, β), where α and β are IBP hyperparameters with vague gamma priors. Our initial model assumes no factor hierarchy, and hence the prior over V is simply Gaussian: V ∼ Nor(0, σ_v^2 I) with an inverse-gamma prior on σ_v. F has a zero-mean, unit-variance Gaussian prior, as in standard factor analysis. Finally, e_i ∼ Nor(0, Ψ) models the idiosyncratic variations of the genes, where Ψ is a P × P diagonal matrix, diag(Ψ_1, ..., Ψ_P). Each entry Ψ_p has an inverse-gamma prior on it.
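For intuition, the sketch below forward-simulates this likelihood with a Gaussian prior on V; the Bernoulli-sparse Z used here is only a stand-in for an IBP draw, and all sizes and hyperparameter values are assumptions.

```python
import numpy as np

def generate_data(Z, sigma_v, Psi_diag, N, rng):
    """Forward-simulate x_i = (Z ⊙ V) f_i + e_i with Gaussian V
    (the variant without the factor hierarchy)."""
    P, K = Z.shape
    V = rng.normal(0.0, sigma_v, size=(P, K))          # V ~ Nor(0, sigma_v^2 I)
    A = Z * V                                          # sparse factor loadings
    F = rng.normal(size=(K, N))                        # factors f_i ~ Nor(0, I)
    E = rng.normal(0.0, np.sqrt(Psi_diag)[:, None], size=(P, N))
    return A @ F + E, A, F

rng = np.random.default_rng(0)
Z = (rng.random((30, 4)) < 0.2).astype(int)            # assumed sparse gene-factor links
X, A, F = generate_data(Z, sigma_v=1.0, Psi_diag=0.1 * np.ones(30), N=100, rng=rng)
print(X.shape)                                         # (P, N) = (30, 100)
```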
Feature Selection Prior
Typical gene-expression datasets are of the order of several thousands of genes, most of which are not associated with any pathway (factor). In the above, these are accounted for only by the idiosyncratic noise term. A more realistic model is that certain genes simply do not participate in the factor analysis: for a culinary analogy, the genes enter the restaurant and leave before selecting any dishes. Those genes that "leave", we term "spurious." We add an additional prior term to account for such spurious genes; effectively leading to a sparse solution (over the rows of the IBP matrix). It is important to note that this notion of sparsity is fundamentally different from the conventional notion of sparsity in the IBP. The sparsity in IBP is over columns, not rows. To see the difference, recall that the IBP contains a "rich get richer" phenomenon: frequently selected factors are more likely to get reselected. Consider a truly spurious gene and ask whether it is likely to select any factors. If some factor k is already frequently used, then a priori this gene is more likely to select it. The only downside to selecting it is the data likelihood. By setting the corresponding value in V to zero, there is no penalty.
Our sparse-IBP prior is identical to the standard IBP prior with one exception. Each customer (gene) p is associated with Bernoulli random variable T p that indicates whether it samples any dishes. The T vector is given a parameter ρ, which, in turn, is given a Beta prior with parameters a, b.
Hierarchical Factor Model
In our basic model, each column of the matrix Z (and the corresponding column in V) is associated with a factor. These factors are considered unrelated. To model the fact that factors are, in fact, related, we introduce a factor hierarchy. Kingman's coalescent [6] is an attractive prior for integration with the IBP for several reasons. It is nonparametric and describes exchangeable distributions. This means that it can model a varying number of factors. Moreover, efficient inference algorithms exist [8].
Full Model and Extension to Factor Regression
Our proposed graphical model is depicted in Figure 1. The key aspects of this model are: the IBP prior over Z, the sparse binary vector T, and the Coalescent prior over V.
In standard Bayesian factor regression [1], factor analysis is followed by the regression task. The regression is performed only on the basis of F, rather than the full data X. For example, a simple linear regression problem would involve estimating a K-dimensional parameter vector θ with regression value θ ⊤ F. Our model, on the other hand, integrates factor regression component in the nonparametric factor analysis framework itself. We do so by prepending the responses y i to the expression vector x i and joining the training and test data (see figure 2). The unknown responses in the test data are treated as missing variables to be iteratively imputed in our MCMC inference procedure. It is straightforward to see that it is equivalent to fitting another sparse model relating factors to responses. Our model thus allows the factor analysis to take into account the regression task as well. In case of binary responses, we add an extra probit regression step to predict binary outcomes from real-valued responses.
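A small sketch of this data augmentation is shown below (assumed shapes only): the response row is prepended and the test responses are marked missing so that the sampler can impute them.

```python
import numpy as np

def build_augmented_matrix(X_train, y_train, X_test):
    """Prepend the response row to the expression matrix and join train and
    test samples; test responses are missing (NaN here) and would be imputed
    during MCMC."""
    n_test = X_test.shape[1]
    y_row = np.concatenate([y_train, np.full(n_test, np.nan)])   # unknown test responses
    X_all = np.hstack([X_train, X_test])                          # P x (N_train + N_test)
    return np.vstack([y_row, X_all])                              # (P + 1) x N

rng = np.random.default_rng(0)
X_tr, X_te = rng.normal(size=(5, 8)), rng.normal(size=(5, 4))
aug = build_augmented_matrix(X_tr, rng.normal(size=8), X_te)
print(aug.shape, np.isnan(aug[0]).sum())    # (6, 12) with 4 missing responses
```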
Inference
We use Gibbs sampling with a few M-H steps. The Gibbs distributions are summarized here.
Sampling the IBP matrix Z: Sampling Z consists of sampling existing dishes, proposing new dishes and accepting or rejecting them based on the acceptance ratio in the associated M-H step. For sampling existing dishes, an entry in Z is set as 1 according to
p(Z_ik = 1 | X, Z_{−ik}, V, F, Ψ) ∝ [m_{−i,k}/(P + β − 1)] p(X | Z, V, F, Ψ), whereas it is set as 0 according to p(Z_ik = 0 | X, Z_{−ik}, V, F, Ψ) ∝ [(P + β − 1 − m_{−i,k})/(P + β − 1)] p(X | Z, V, F, Ψ). Here m_{−i,k} = Σ_{j≠i} Z_jk is the number of other customers who chose dish k.
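A sketch of this Gibbs step for a single entry is given below; it assumes the Gaussian likelihood of the model (so only row i of X matters), uses toy data, and is not the authors' implementation.

```python
import numpy as np

def gibbs_update_zik(i, k, X, Z, V, F, Psi_diag, beta):
    """Gibbs step for an existing dish: resample Z_ik from its conditional,
    combining the prior weight m_{-i,k}/(P+beta-1) with the Gaussian
    likelihood of row i (other rows do not depend on Z_ik)."""
    P = Z.shape[0]
    m = Z[:, k].sum() - Z[i, k]                       # m_{-i,k}
    prior1 = m / (P + beta - 1)
    prior0 = (P + beta - 1 - m) / (P + beta - 1)

    def row_loglik(zik):
        Zi = Z[i].copy()
        Zi[k] = zik
        mean = (Zi * V[i]) @ F                        # row i of (Z ⊙ V) F
        return -0.5 * np.sum((X[i] - mean) ** 2) / Psi_diag[i]

    logp1 = np.log(prior1 + 1e-300) + row_loglik(1)
    logp0 = np.log(prior0 + 1e-300) + row_loglik(0)
    top = max(logp0, logp1)                           # normalize in log space
    p1 = np.exp(logp1 - top) / (np.exp(logp0 - top) + np.exp(logp1 - top))
    Z[i, k] = int(np.random.random() < p1)

# Toy setup just to exercise the update (all numbers are assumptions).
rng = np.random.default_rng(0)
P, K, N = 10, 3, 20
Z = rng.integers(0, 2, (P, K))
V = rng.normal(size=(P, K))
F = rng.normal(size=(K, N))
X = (Z * V) @ F + 0.1 * rng.normal(size=(P, N))
gibbs_update_zik(2, 1, X, Z, V, F, Psi_diag=0.01 * np.ones(P), beta=1.0)
print(Z[2])
```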
For sampling new dishes, we use an M-H step in which we simultaneously propose η = (K_new, V_new, F_new), where K_new ∼ Poisson(αβ/(β + P − 1)). We accept the proposal with an acceptance probability (following [9]) given by a = min{1, p(rest|η*)/p(rest|η)}. Here, p(rest|η) is the likelihood of the data given parameters η. We propose V_new from its prior (either Gaussian or coalescent) but, for faster mixing, we propose F_new from its posterior.
Sampling V new from the coalescent is slightly involved. As shown pictorially in figure 3, proposing a new column of V corresponds to adding a new leaf node to the existing coalescent tree. In particular, we need to find a sibling (s) to the new node y ′ and need to find an insertion point on the branch joining the sibling s to its parent p (the grandparent of y ′ ). Since the marginal distribution over trees under the coalescent is uniform, the sibling s is chosen uniformly over nodes in the tree. We then use importance sampling to select an insertion time for the new node y ′ between t s and t p , according to the exponential distribution given by the coalescent prior (our proposal distribution is uniform). This gives an insertion point in the tree, which corresponds to the new parent of y ′ .
We denote this new parent by p ′ and the time of insertion as t. The predictive density of the newly inserted node y ′ can be obtained by marginalizing the parent p ′ . This yields Nor(y 0 , v 0 ), given by:
v_0 = [(v_s + (t_s − t)Λ)^{−1} + (v_p + (t − t_p)Λ)^{−1}]^{−1}
y_0 = [y_s/(v_s + (t_s − t)Λ) + y_p/(v_p + (t − t_p)Λ)] v_0
Here, y s and v s are the messages passed up through the tree, while y p and v p are the messages passed down through the tree (compare to Eq (1)).
Figure 3: Adding a new node to the tree
Sampling the sparse IBP vector T: In the sparse IBP prior, recall that we have an additional P-many variables T_p, indicating whether gene p "eats" any dishes. T_p is drawn from a Bernoulli with parameter ρ, which, in turn, is given a Bet(a, b) prior. For inference, we collapse ρ and Ψ and get a Gibbs posterior over T_p of the form p(T_p = 1 | ·) ∝ (a + Σ_{q≠p} T_q) Stu(x_p | (Z_p ⊙ V_p)F, g/h, g) and p(T_p = 0 | ·) ∝ (b + P − Σ_{q≠p} T_q) Stu(x_p | 0, g/h, g),
where Stu is the non-standard Student's t-distribution. g, h are hyperparameters of the inverse-gamma prior on the entries of Ψ.
Sampling the real valued matrix V:
For the case when V has a Gaussian prior on it, we sample V from its posterior p(V_{g,j} | X, Z, F, Ψ) ∝ Nor(V_{g,j} | µ_{g,j}, Σ_{g,j}), where Σ_{g,j} = (Σ_{i=1}^N F_{j,i}^2/Ψ_g + 1/σ_v^2)^{−1} and µ_{g,j} = Σ_{g,j} (Σ_{i=1}^N F_{j,i} X*_{g,i}) Ψ_g^{−1}. We define X*_{g,i} = X_{g,i} − Σ_{l=1, l≠j}^{K} A_{g,l} F_{l,i}, with A = Z ⊙ V.
The hyperparameter σ v on V has an inverse-gamma prior and posterior also has the same form. For the case with coalescent prior on V, we have
Σ_{g,j} = (Σ_{i=1}^N F_{j,i}^2/Ψ_g + 1/v0_j)^{−1} and µ_{g,j} = Σ_{g,j} ((Σ_{i=1}^N F_{j,i} X*_{g,i}) Ψ_g^{−1} + y0_{g,j}/v0_j),
where y 0 and v 0 are the Gaussian posteriors of the leaf node added in the coalescent tree (see Eq (1)), which corresponds to the column of V being sampled.
Sampling Ψ: the inverse-gamma posterior has parameters (g + N/2, h/(1 + (h/2) tr(E^T E))), where E = X − (Z ⊙ V)F.
Sampling IBP parameters:
We sample the IBP parameter α from its posterior: p(α | ·) ∼ Gam(K_+ + a, b/(1 + b H_P(β))), where K_+ is the number of active features at any moment and H_P(β) = Σ_{i=1}^P 1/(β + i − 1). β is sampled from a prior proposal using an M-H step.
Sampling the Factor Tree: Use the Greedy-Rate1 algorithm [8].
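As a small illustration of the α update, the sketch below draws from the stated Gamma posterior; whether Gam(·,·) is parameterized by scale or rate is an assumption made here (scale), as are the numeric inputs.

```python
import numpy as np

def sample_alpha(K_plus, a, b, beta, P, rng):
    """Resample the IBP mass parameter from the conjugate posterior
    Gam(K+ + a, b / (1 + b * H_P(beta))) stated above, assuming a
    shape/scale parameterization of the Gamma."""
    H = sum(1.0 / (beta + i - 1) for i in range(1, P + 1))   # H_P(beta)
    return rng.gamma(shape=K_plus + a, scale=b / (1 + b * H))

print(sample_alpha(K_plus=8, a=1.0, b=1.0, beta=1.0, P=226,
                   rng=np.random.default_rng(0)))
```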
Related Work
A number of probabilistic approaches have been proposed in the past for the problem of generegulatory network reconstruction [2,3,4,1]. Some take into account the information on the prior network topology [2], which is not always available. Most assume the number of factors is known. To get around this, one can perform model selection via Reversible Jump MCMC [10] or evolutionary stochastic model search [11]. Unfortunately, these methods are often difficult to design and may take quite long to converge. Moreover, they are difficult to integrate with other forms of prior knowledge (eg., factor hierarchies). A somewhat similar approach to ours is the infinite independent component analysis (iICA) model of [12] which treats factor analysis as a special case of ICA. However, their model is limited to factor analysis and does not take into account feature selection, factor hierarchy and factor regression. As a generalization to the standard ICA model, [13] proposed a model in which the components can be related via a tree-structured graphical model. It, however, assumes a fixed number of components. Structurally, our model with Gaussian-V (i.e. no hierarchy over factors) is most similar to the Bayesian Factor Regression Model (BFRM) of [1]. BFRM assumes a sparsity inducing mixture prior on the factor loading matrix A. Specifically, A pk ∼ (1 − π pk )δ 0 (A pk ) + π pk Nor(A pk |0, τ k ) where δ 0 () is a point mass centered at zero. To complete the model specification, they define π pk ∼ (1 − ρ k )δ 0 (π pk ) + ρ k Bet(π pk |sr, s(1 − r)) and ρ k ∼ Bet(ρ k |av, a(1 − v)). Now, integrating out π pk gives: A pk ∼ (1−vρ k )δ 0 (A pk )+vρ k Nor(A pk |0, τ k ). It is interesting to note that the nonparametric prior of our model (factor loading matrix defined as A = Z ⊙ V) is actually equivalent to the (parametric) sparse mixture prior of the BFRM as K → ∞. To see this, note that our prior on the factor loading matrix A (composed of Z having an IBP prior, and V having a Gaussian prior), can be written as A pk ∼ (1 − ρ k )δ 0 (A pk ) + ρ k Nor(A pk |0, σ 2 v ), if we define ρ k ∼ Bet(1, αβ/K). It is easy to see that, for BFRM where ρ k ∼ Bet (av, a(1 − v)), setting a = 1 + αβ/K and v = 1 − αβ/(aK) recovers our model in the limiting case when K → ∞.
Experiments
In this section, we report our results on synthetic and real datasets. We compare our nonparametric approach with the evolutionary search based approach proposed in [11], which is the nonparametric extension to BFRM.
We used the gene-factor connectivity matrix of the E-coli network (described in [14]) to generate a synthetic dataset having 100 samples of 50 genes and 8 underlying factors. Since we knew the ground-truth factor loadings in this case, this dataset was ideal for testing the efficacy of recovering the factor loadings (binding sites and number of factors). We also experimented with real gene-expression data: a breast cancer dataset having 251 samples of 226 genes and 5 prominent underlying factors (known from domain knowledge).
Nonparametric Gene-Factor Modeling and Variable Selection
For the synthetic dataset generated by the E-coli network, the results are shown in figure 4, comparing the actual network used to generate the data and the inferred factor loading matrix. As shown in figure 4, we recovered exactly the same number (8) of factors, and almost exactly the same factor loadings (binding sites and number of factors) as the ground truth. In comparison, the evolutionary search based approach overestimated the number of factors, and the inferred loadings are clearly off from the actual loadings (even modulo column permutations). Our results on real data are shown in figure 5. To see the effect of variable selection for this data, we also introduced spurious genes by adding 50 random features in each sample. We observe the following: (1) Without variable selection, spurious genes result in an overestimated number of factors and falsely discovered factor loadings for the spurious genes (see figure 5(a)). (2) Variable selection, when on, effectively filters out the spurious genes without overestimating the number of factors (see figure 5(b)). We also investigated the effect of noise on the evolutionary search based approach, and it resulted in an overestimated number of factors, plus falsely discovered factor loadings for the spurious genes (see figure 5(c)). To conserve space, we do not show the cases where there are no spurious genes in the data, but it turns out that variable selection does not filter out any of the 226 relevant genes in such a case.
Hierarchical Factor Modeling
Our results with hierarchical factor modeling are shown in figure 6 for synthetic and real data. As shown, the model correctly infers the gene-factor associations, the number of factors, and the factor hierarchy. There are several ways to interpret the hierarchy. From the factor hierarchy for the E-coli data (figure 6), we see that column-2 (corresponding to factor-2) of the V matrix is the most prominent one (it regulates the highest number of genes) and is closest to the tree root, followed by column-2, which it looks most similar to. Columns corresponding to less prominent factors are located further down in the hierarchy (with appropriate relatedness). Figure 6 (d) can be interpreted in a similar manner for the breast-cancer data. The hierarchy can be used to find factors in order of their prominence. The higher up we chop the tree along the hierarchy, the more prominent the factors we discover are. For instance, if we are only interested in the top 2 factors in the E-coli data, we can chop off the tree above the sixth coalescent point. This is akin to agglomerative clustering, which is usually done post hoc. In contrast, our model discovers the factor hierarchies as part of the inference procedure itself. At the same time, there is no degradation in data reconstruction (in the mean-squared-error sense) or in the log-likelihood when compared to the case with a Gaussian prior on V (see figure 7 - they actually improve). We also show in section 6.3 that hierarchical modeling results in better predictive performance for the factor regression task. Empirical evidence also suggests that the factor hierarchy leads to faster convergence, since most of the unlikely configurations are never visited as they are constrained by the hierarchy.
Factor Regression
We report factor regression results for binary and real-valued responses and compare both variants of our model (Gaussian V and Coalescent V) against 3 different approaches: logistic regression, BFRM, and fitting a separate predictive model on the discovered factors (see figure 7 (c)). The breast-cancer dataset had two binary response variables (phenotypes) associated with each sample. For this binary prediction task, we split the data into training-set of 151 samples and test-set of 100 samples. This is essentially a transduction setting as described in section 3.4 and shown in figure 2.
For real-valued prediction task, we treated a 30x20 block of the data matrix as our held-out data and predicted it based on the rest of the entries in the matrix. This method of evaluation is akin to the task of image reconstruction [15]. The results are averaged over 20 random initializations and the low error variances suggest that our method is fairly robust w.r.t. initializations.
| 4,398 |
0908.0595
|
2949273047
|
Search engine researchers typically depict search as the solitary activity of an individual searcher. In contrast, results from our critical-incident survey of 150 users on Amazon's Mechanical Turk service suggest that social interactions play an important role throughout the search process. Our main contribution is that we have integrated models from previous work in sensemaking and information seeking behavior to present a canonical social model of user activities before, during, and after search, suggesting where in the search process even implicitly shared information may be valuable to individual searchers.
|
Surprisingly, researchers have thought about navigating and browsing for information as a single user activity, centered on eliciting users' information needs and improving the relevance of search results. For example, Choo, Detlor & Turnbull @cite_1 discussed categories of search behaviors and motivations in information seeking, but they overlooked the role of other individuals in search. On the other hand, library scientists @cite_3 have observed for some time that friends and colleagues may be valuable information resources during search. Similarly, recent authors have begun to recognize the prevalence and benefits of @cite_9 @cite_0 .
|
{
"abstract": [
"",
"Today's Web browsers provide limited support for rich information-seeking and information-sharing scenarios. A survey we conducted of 204 knowledge workers at a large technology company has revealed that a large proportion of users engage in searches that include collaborative activities. We present the results of the survey, and then review the implications of these findings for designing new Web search interfaces that provide tools for sharing.",
"This paper presents findings from a study of how knowledge workers use the Web to seek external information as part of their daily work. Thirty-four users from seven companies took part in the study. Participants were mainly IT specialists, managers, and research marketing consulting staff working in organizations that included a large utility company, a major bank, and a consulting firm. Participants answered a detailed questionnaire and were interviewed individually in order to understand their information needs and information seeking preferences. A custom-developed WebTracker software application was installed on each of their work place PCs, and participants' Web-use activities were then recorded continuously during two-week periods. The WebTracker recorded how participants used the browser to seek information on the Web: it logged menu choices, button bar selections, and keystroke actions, allowing browsing and searching sequences to be reconstructed. In a second round of personal interviews, participants recalled critical incidents of using information from the Web. Data from the two interviews and the WebTracker logs constituted the database for analysis. Sixty-one significant episodes of information seeking were identified. A model was developed to describe the common repertoires of information seeking that were observed. On one axis of the model, episodes were plotted according to the four scanning modes identified by Aguilar (1967), Weick and Daft (1983): undirected viewing, conditioned viewing, informal search, and formal search. Each mode is characterized by its own information needs and information seeking strategies. On the other axis of the model, episodes were plotted according to the occurrence of one or more of the six categories of information seeking behaviors identified by Ellis (1989, 1990): starting, chaining, browsing, differentiating, monitoring, and extracting. The study suggests that a behavioral framework that relates motivations (Aguilar) and moves (Ellis) may be helpful in analyzing patterns of Web-based information seeking.",
"Apart from information retrieval there is virtually no other area of information science that has occasioned as much research effort and writing as ‘user studies’. Within user studies the investigation of ‘information needs’ has been the subject of much debate and no little confusion. The aim of this paper is to attempt to reduce this confusion by devoting attention to the definition of some concepts and by proposing the basis for a theory of the motivations for information‐seeking behaviour."
],
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_1",
"@cite_3"
],
"mid": [
"",
"2022555286",
"2142000191",
"2153505326"
]
}
| 0 |
||
0908.0595
|
2949273047
|
Search engine researchers typically depict search as the solitary activity of an individual searcher. In contrast, results from our critical-incident survey of 150 users on Amazon's Mechanical Turk service suggest that social interactions play an important role throughout the search process. Our main contribution is that we have integrated models from previous work in sensemaking and information seeking behavior to present a canonical social model of user activities before, during, and after search, suggesting where in the search process even implicitly shared information may be valuable to individual searchers.
|
However, in addition to explicit collaboration in joint search tasks @cite_9 , we believe that even implicit social experiences could improve the search process. Therefore, the general term "social search" may more suitably describe information seeking and sensemaking habits that make use of a range of possible social interactions, including searches that utilize social and expertise networks or that may be done in shared social workspaces. This notion certainly encompasses collaborative co-located search, as well as remote and asynchronous collaborative and collective search. Our focus in this paper is to explore a model of social search that may offer suggestions for supporting social interactions in the information seeking process.
|
{
"abstract": [
"Today's Web browsers provide limited support for rich information-seeking and information-sharing scenarios. A survey we conducted of 204 knowledge workers at a large technology company has revealed that a large proportion of users engage in searches that include collaborative activities. We present the results of the survey, and then review the implications of these findings for designing new Web search interfaces that provide tools for sharing."
],
"cite_N": [
"@cite_9"
],
"mid": [
"2022555286"
]
}
| 0 |
||
0907.3583
|
2110232575
|
By using different interface adapters for different methods, it is possible to construct a maximally covering web of interface adapters which incurs minimum loss during interface adaptation. We introduce a polynomial-time algorithm that can achieve this. However, we also show that minimizing the number of adapters included in a maximally covering web of interface adapters is an NP-complete problem.
|
@cite_1 implements a network repository of interface adapters for adapting Java interfaces using single chains of adapters. @cite_7 implements a similar adaptation framework for network services. Although it allows an interface adapter to adapt a target interface from multiple source interfaces, only a single interface adapter is used for each target interface, so it has the same limitations as single adapter chains in that not all methods that could be adapted may actually be adapted. Both mention the possibility of lossy interface adaptation, but neither considers how to minimize such loss.
|
{
"abstract": [
"Recently, component models have received much attention from the Software Engineering research community. The goal of each of these models is to increase reuse and to simplify the implementation and composition of new software. While all these models focus on the specification and packaging of components, however, they provide almost no support for their adaptation and composition. This work still has to be done programmatically. In this paper we present Type Based Adaptation, a novel adaptation technique that uses the type information available about a component. We also describe the design and implementation of our reference implementation thereby verifying the feasibility of this approach.",
"To programmatically discover and interact with services in ubiquitous computing environments, an application needs to solve two problems: (1) is it semantically meaningful to interact with a service? If the task is \"printing a file\", a printer service would be appropriate, but a screen rendering service or CD player service would not. (2) If yes, what are the mechanics of interacting with the service - remote invocation mechanics, names of methods, numbers and types of arguments, etc.? Existing service frameworks such as Jini and UPnP conflate these problems - two services are \"semantically compatible\" if and only if their interface signatures match. As a result, interoperability is severely restricted unless there is a single, globally agreed-upon, unique interface for each service type. By separating the two subproblems and delegating different parts of the problem to the user and the system, we show how applications can interoperate with services even when globally unique interfaces do not exist for certain services."
],
"cite_N": [
"@cite_1",
"@cite_7"
],
"mid": [
"2515366057",
"2168075985"
]
}
|
Web of Lossy Adapters for Interface Interoperability: An Algorithm and NP-completeness of Minimization
|
Different services that provide similar functionality will often be accessed using widely different interfaces, especially if standardization is lacking. To avoid having to rewrite separate code for all interfaces that may have to be used, interface adapters can be used to translate calls to one interface into calls to another interface. [4] These interface adapters may not be able to achieve perfect adaptation. It becomes harder to analyze the adaptation loss when combining such interface adapters in order to reduce the number of adapters that must be developed. Our previous work has defined a rigorous mathematical basis for analyzing the loss in single interface adapter chains. [1] This paper extends our previous work by considering the use of a web of interface adapters. This allows the adaptation of interfaces with minimum loss, and we describe a polynomial-time algorithm that can achieve this. However, we also show that finding a web of adapters that can achieve minimum adaptation loss with the minimum number of interface adapters is an NP-complete problem, which implies that reducing the number of adapters included in a web of interface adapters should be done heuristically.
Preliminaries
The basic concepts and notations we use follow those in our previous work. [1] We will be using a range convention for the index notation used to express matrices and vectors. [2] We take the view that an interface defines multiple methods, and that an interface adapter converts a call to one method in a source interface into calls to one or more methods in a target interface. We assume that if a method in a source interface can be adapted, then it can be adapted perfectly; any loss is incurred when a method in a source interface cannot be adapted at all. We also assume that interface adapters do not store any state.
An interface adapter graph is a directed graph where interfaces are nodes and adapters are edges. If there are interfaces I 1 and I 2 with an adapter A that adapts source interface I 1 to target interface I 2 , then I 1 and I 2 would be nodes in the interface adapter graph while A would be a directed edge from I 1 to I 2 .
The method dependency matrix a_{ji} for an adapter A captures how the adapter depends on the availability of methods in the source interface in order to implement methods in the target interface: a_{ji} is true if and only if implementing method j in the target interface requires method i in the source interface to be available. We denote the method dependency matrix associated with an adapter A as depend(A).
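As a concrete illustration of the dependency matrix (the three-method matrix below is made up for this sketch, not taken from the paper), the following encodes depend(A) as a boolean array and computes which target methods remain implementable for a given set of available source methods, under the all-or-nothing adaptation assumption stated above.

```python
import numpy as np

# Made-up dependency matrix for one adapter A with 3 target and 3 source methods:
# depend[j, i] is True when target method j requires source method i.
depend = np.array([
    [True,  False, False],   # target method 0 needs source method 0
    [True,  True,  False],   # target method 1 needs source methods 0 and 1
    [False, False, True ],   # target method 2 needs source method 2
])

def available_targets(depend, source_available):
    # A target method is implementable iff every source method it depends on
    # is available (the all-or-nothing adaptation assumed above).
    source_available = np.asarray(source_available, dtype=bool)
    return np.all(~depend | source_available[np.newaxis, :], axis=1)

print(available_targets(depend, [True, False, True]))   # [ True False  True]
```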
Web of lossy adapters
Existing approaches use only a single interface adapter to adapt a given target interface from one or more source interfaces. [6,8,7] These approaches force us to choose among imperfect chains of interface adapters, where one chain might be able to adapt certain methods in the source interface but cannot adapt other methods covered by another chain, and vice versa. However, a web of interface adapters, a directed acyclic interface adapter graph in which different adapters may be used to adapt different methods of an interface, can cover all methods that can possibly be adapted, incurring minimum loss.
Algorithm 1 can construct a web of interface adapters that can cover all possible methods in a target interface given a fully functional source interface, which we will refer to as a maximally covering web of interface adapters. It is based on unit propagation for Horn formulae [3], targeted towards building a web of interface adapters. It works in two phases, where it first computes all methods in all interfaces that can be adapted given the source interface, and then extracts only the subgraph relevant for the target interface. Algorithms 2 and 3 are subalgorithms responsible for setup and subgraph extraction, respectively.
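To make the two-phase construction concrete, here is a compact Python sketch of the same idea (a re-implementation sketch, not the authors' code). The encodings are assumptions chosen for brevity: `interfaces` maps an interface name to its method names, and each adapter is a triple of source interface, target interface, and a dependency dictionary playing the role of depend(A).

```python
from collections import deque

def maximal_cover(interfaces, adapters, source, target):
    # Phase 1 (cf. Algorithms 1-2): propagate, unit-propagation style, which
    # (interface, method) pairs become available starting from the source.
    satisfied = {(source, m) for m in interfaces[source]}
    viable = {}                                   # (interface, method) -> usable adapters
    pending = {(a, m): set(req)                   # unmet dependencies per adapter/method
               for a, (_, _, dep) in adapters.items()
               for m, req in dep.items()}
    queue = deque(satisfied)
    while queue:
        iface, meth = queue.popleft()
        for a, (src, dst, dep) in adapters.items():
            if src != iface:
                continue
            for tmeth in dep:
                if meth in pending[(a, tmeth)]:
                    pending[(a, tmeth)].discard(meth)
                    if not pending[(a, tmeth)]:   # all dependencies satisfied
                        viable.setdefault((dst, tmeth), set()).add(a)
                        if (dst, tmeth) not in satisfied:
                            satisfied.add((dst, tmeth))
                            queue.append((dst, tmeth))

    # Phase 2 (cf. Algorithm 3): walk back from the target's methods, keeping
    # only the adapters (and their dependencies) actually reachable from them.
    web = set()
    seen = {(target, m) for m in interfaces[target]}
    queue = deque(seen)
    while queue:
        iface, meth = queue.popleft()
        for a in viable.get((iface, meth), ()):
            web.add(a)
            src, _, dep = adapters[a]
            for smeth in dep[meth]:
                if (src, smeth) not in seen:
                    seen.add((src, smeth))
                    queue.append((src, smeth))
    return web, viable
```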
Simply constructing a web of interface adapters is not the goal by itself, of course. The real goal is to use the interface adapters to adapt methods from a source interface into those of a target interface. Choosing which adapters should be used for which methods is more complex than in the case for a single chain, where there is no choice at all. Algorithm 4 is an abstract algorithm for determining which interface adapters should be invoked when adapting each method. It needs more information than just the web of interface adapters, which is provided by the value D also returned in algorithm 1.
Algorithm 1 Constructing maximally covering web of interface adapters.
function Maximal-Cover(G, s, t)
    (Q, D, S, M, C) ← Cover-Setup(G, s)
    while Q is not empty do
        extract (I, i) from Q
        for (A = (I, I′), j) ∈ M[I][i] do
            if C[A][j] > 0 then
                C[A][j] ← C[A][j] − 1
                if C[A][j] = 0 then                       ⊲ adaptation viable
                    D[I′][j] ← D[I′][j] ∪ {A}
                    if not S[I′][j] then
                        S[I′][j] ← true
                        insert (I′, j) into Q             ⊲ trigger new dependent
                    end if
                end if
            end if
        end for
    end while
    return (Cover-Subgraph(D, t), D)
end function
Algorithm 2 Setup for constructing maximal covering.
function Cover-Setup(G = (V, E), s)
    Q ← empty queue
    for I ∈ V and method i of I do
        D[I][i] ← ∅                                       ⊲ list of viable adapters
        S[I][i] ← false                                   ⊲ whether satisfiable
        M[I][i] ← ∅
        for A = (I, I′) ∈ E do
            M[I][i] ← M[I][i] ∪ {(A, j) | depend(A)_{ji}} ⊲ dependents
        end for
    end for
    for A = (I1, I2) ∈ E and method j of I2 do
        C[A][j] ← |{i | depend(A)_{ji}}|                  ⊲ unsatisfied dependency count
    end for
    for each method i of s do                             ⊲ start with source interface
        S[s][i] ← true
        insert (s, i) into Q
    end for
    return (Q, D, S, M, C)
end function
Algorithm 3 Extract subgraph comprising web of interface adapters.
function Cover-Subgraph(D, t)
    V′ ← ∅, E′ ← ∅
    Q ← empty queue, Q′ ← ∅
    for method i of t do
        insert (t, i) into Q and Q′
    end for
    while Q is not empty do
        extract (I′, j) from Q
        V′ ← V′ ∪ {I′}
        E′ ← E′ ∪ D[I′][j]
        for A = (I, I′) ∈ D[I′][j] do
            for i such that depend(A)_{ji} do
                if (I, i) ∉ Q′ then
                    insert (I, i) into Q and Q′
                end if
            end for
        end for
    end while
    return (V′, E′)
end function
The interface adapters used to adapt a given method are specified by algorithm 4; the concrete steps involved in actually adapting a method are left to how interface adaptation is actually done, whether it be direct invocation by the interface adapter, call substitution after constructing the call graph, or composition of interface adapters specified in a high-level language. The exact criterion for selecting an adapter in algorithm 4 also does not affect the correctness of the algorithm.
Algorithm 1 constructs a maximally covering web of adapters, but it completely ignores the number of interface adapters it incorporates in the web. It could end up constructing a web with hundreds of interface adapters when less than a dozen would do. However, trying to minimize the number of incorporated interface adapters is an NP-complete problem as we will show in section 4. Invoking the minimum number of interface adapters while actually adapting a method also turns out to be NP-complete.
Minimizing number of adapters
While algorithm 1 can construct a maximally covering web of interface adapters in polynomial time (O(m 2 ) being a loose time bound with a straightforward implementation, where m is the total number of methods), it is unlikely there will be a polynomial-time algorithm for finding a maximally covering web of adapters with the minimum number of interface adapters. This is because the problem is NP-complete, which we will prove with a reduction from one-in-three 3SAT. [5] We formally define MINWEB as the problem of whether there is a web of interface adapters in an interface adapter graph from a given source interface to a given target interface such that it is maximally covering and has at most K interface adapters. Given a candidate boolean expression for one-in-three 3SAT with c clauses and v variables, we will reduce it to a candidate interface adapter graph for MINWEB such that the boolean expression is an instance of one-in-three 3SAT if and only if there is a maximally covering web of interface adapters with at most v + 2c adapters.
For each variable, we create an interface with methods corresponding to all the literals, two for each variable. For each clause, we create an interface with only a single method. We also separately create a source interface with methods corresponding to the possible literals and a target interface with methods corresponding to the clauses.
Starting from the source interface, we connect the interfaces corresponding to variables serially. Between each pair of consecutive interfaces, we define two adapters, one which makes the successor variable true and the other which makes it false, by making the method corresponding to the positive literal available and the method corresponding to the negative literal unavailable in one adapter, and the opposite in the other adapter. Other literals are left alone. This is identical to how a variable handling subgraph is constructed in [1].
From the sink node of the variable handling subgraph, we create three adapters to each of the interfaces corresponding to the clauses. Each adapter corresponds to a literal in the clause, and the sole method in the interface is available only if the method corresponding to the literal is available. And from each interface corresponding to a clause, there is a single adapter to the target interface for the entire graph which makes the method corresponding to the clause available only if the sole method in the clause interface is available.
For the graph constructed this way, the entire graph is obviously maximally covering with 2v + 4c adapters and all methods available at the target interface. If the original boolean expression is an instance of one-in-three 3SAT, then a satisfying assignment can specify a singly-linked path through the variable interfaces, followed by each true literal specifying the adapters to pass through to each clause interface, followed by the adapters to the target interface, and the resulting directed acyclic graph is a maximally covering web of adapters with v + 2c adapters, since all methods will be available at the target interface.
Conversely, if there is a maximally covering web of adapters with v + 2c adapters, then 2c adapters connect to the clause interfaces since all clause interfaces must be included. The remaining v adapters must be a singly-linked path through the variable interfaces, and the selection of adapters for each variable interface specifies a variable assignment which satisfies the original boolean expression with only one true literal in each clause. Therefore MINWEB is NP-complete, and we can also conclude that minimizing the number of required adapters to adapt a single method is also NP-complete by removing the other methods in the target interface.
Conclusions
We described a polynomial-time algorithm which can construct a maximally covering web of interface adapters, which may include a much larger number of interface adapters than necessary. However, we also showed that minimizing the number of interface adapters included in a maximally covering web of interface adapters is an NP-complete problem.
Further work can be done to extend these results by relaxing the assumptions made in this paper. We can consider the case when a method in a target interface can only be partially implemented from methods in a source interface. We can also consider how the quality of adapters should be dealt with in algorithms, or how to deal with adapters that maintain state. Heuristic algorithms which attempt to minimize the number of adapters included in a nearly maximally covering web of interface adapters are another area for future work.
| 2,047 |
0907.3583
|
2110232575
|
By using different interface adapters for different methods, it is possible to construct a maximally covering web of interface adapters which incurs minimum loss during interface adaptation. We introduce a polynomial-time algorithm that can achieve this. However, we also show that minimizing the number of adapters included in a maximally covering web of interface adapters is an NP-complete problem.
|
@cite_2 proposes an interface adaptation framework which attempts to minimize the loss incurred by an interface adapter chain, and @cite_3 rigorously defines the mathematical background required to implement such a framework. These only consider the use of single chains of interface adapters.
|
{
"abstract": [
"Despite providing similar functionality, multiple network services may require the use of different interfaces to access the functionality, and this problem will only become worse with the widespread deployment of ubiquitous computing environments. One way around this problem is to use interface adapters that adapt one interface into another. Chaining these adapters allows flexible interface adaptation with fewer adapters, but the loss incurred because of imperfect interface adaptation must be considered. This study outlines a matrix-based mathematical basis for analysing the chaining of lossy interface adapters. The authors also show that the problem of finding an optimal interface adapter chain is NP-complete with a reduction from 3SAT.",
"A key feature of ubiquitous computing is service continuity which allows a user to transparently continue his task regardless of his movement. For service continuity, the underlying system needs to not only discover a service satisfying a user's request, but also provide an interface differences resolution scheme if the interface of the service found is not the same as that of the service requested. For resolving interface mismatches, one of solutions is to use an interface adapter. The most serious problem in the interface adapter-based approach is the overhead of adapter generation. There are many research efforts about adapter generation load reduction and this paper focuses on an adapter chaining scheme to reduce the number of necessary adapters among different service interfaces. We propose a construction-time adaptation loss evaluation scheme and an adapter chain construction algorithm, which finds an adapter chain with minimal adaptation loss."
],
"cite_N": [
"@cite_3",
"@cite_2"
],
"mid": [
"2052260075",
"2132136548"
]
}
|
Web of Lossy Adapters for Interface Interoperability: An Algorithm and NP-completeness of Minimization
|
Different services that provide similar functionality will often be accessed using widely different interfaces, especially if standardization is lacking. To avoid having to rewrite separate code for all interfaces that may have to be used, interface adapters can be used to translate calls to one interface into calls to another interface. [4] These interface adapters may not be able to achieve perfect adaptation. It becomes harder to analyze the adaptation loss when combining such interface adapters in order to reduce the number of adapters that must be developed. Our previous work has defined a rigorous mathematical basis for analyzing the loss in single interface adapter chains. [1] This paper extends our previous work by considering the use of a web of interface adapters. This allows the adaptation of interfaces with minimum loss, and we describe a polynomial-time algorithm that can achieve this. However, we also show that finding a web of adapters that can achieve minimum adaptation loss with the minimum number of interface adapters is an NP-complete problem, which implies that reducing the number of adapters included in a web of interface adapters should be done heuristically.
Preliminaries
The basic concepts and notations we use follow those in our previous work. [1] We will be using a range convention for the index notation used to express matrices and vectors. [2] We take the view that an interface defines multiple methods, and that an interface adapter converts a call to one method in a source interface into calls to one or more methods in a target interface. We assume that if a method in a source interface can be adapted, then it can be adapted perfectly; any loss is incurred when a method in a source interface cannot be adapted at all. We also assume that interface adapters do not store any state.
An interface adapter graph is a directed graph where interfaces are nodes and adapters are edges. If there are interfaces I 1 and I 2 with an adapter A that adapts source interface I 1 to target interface I 2 , then I 1 and I 2 would be nodes in the interface adapter graph while A would be a directed edge from I 1 to I 2 .
The method dependency matrix a_{ji} for an adapter A captures how the adapter depends on the availability of methods in the source interface in order to implement methods in the target interface: a_{ji} is true if and only if implementing method j in the target interface requires method i in the source interface to be available. We denote the method dependency matrix associated with an adapter A as depend(A).
Web of lossy adapters
Existing approaches use only a single interface adapter to adapt a given target interface from one or more source interfaces. [6,8,7] These approaches force us to choose among imperfect chains of interface adapters, where one chain might be able to adapt certain methods in the source interface but cannot adapt other methods covered by another chain, and vice versa. However, a web of interface adapters, a directed acyclic interface adapter graph in which different adapters may be used to adapt different methods of an interface, can cover all methods that can possibly be adapted, incurring minimum loss.
Algorithm 1 can construct a web of interface adapters that can cover all possible methods in a target interface given a fully functional source interface, which we will refer to as a maximally covering web of interface adapters. It is based on unit propagation for Horn formulae [3], targeted towards building a web of interface adapters. It works in two phases, where it first computes all methods in all interfaces that can be adapted given the source interface, and then extracts only the subgraph relevant for the target interface. Algorithms 2 and 3 are subalgorithms responsible for setup and subgraph extraction, respectively.
Simply constructing a web of interface adapters is not the goal by itself, of course. The real goal is to use the interface adapters to adapt methods from a source interface into those of a target interface. Choosing which adapters should be used for which methods is more complex than in the case for a single chain, where there is no choice at all. Algorithm 4 is an abstract algorithm for determining which interface adapters should be invoked when adapting each method. It needs more information than just the web of interface adapters, which is provided by the value D also returned in algorithm 1.
Algorithm 1 Constructing maximally covering web of interface adapters.
function Maximal-Cover(G, s, t)
    (Q, D, S, M, C) ← Cover-Setup(G, s)
    while Q is not empty do
        extract (I, i) from Q
        for (A = (I, I′), j) ∈ M[I][i] do
            if C[A][j] > 0 then
                C[A][j] ← C[A][j] − 1
                if C[A][j] = 0 then                       ⊲ adaptation viable
                    D[I′][j] ← D[I′][j] ∪ {A}
                    if not S[I′][j] then
                        S[I′][j] ← true
                        insert (I′, j) into Q             ⊲ trigger new dependent
                    end if
                end if
            end if
        end for
    end while
    return (Cover-Subgraph(D, t), D)
end function
Algorithm 2 Setup for constructing maximal covering.
function Cover-Setup(G = (V, E), s)
    Q ← empty queue
    for I ∈ V and method i of I do
        D[I][i] ← ∅                                       ⊲ list of viable adapters
        S[I][i] ← false                                   ⊲ whether satisfiable
        M[I][i] ← ∅
        for A = (I, I′) ∈ E do
            M[I][i] ← M[I][i] ∪ {(A, j) | depend(A)_{ji}} ⊲ dependents
        end for
    end for
    for A = (I1, I2) ∈ E and method j of I2 do
        C[A][j] ← |{i | depend(A)_{ji}}|                  ⊲ unsatisfied dependency count
    end for
    for each method i of s do                             ⊲ start with source interface
        S[s][i] ← true
        insert (s, i) into Q
    end for
    return (Q, D, S, M, C)
end function
Algorithm 3 Extract subgraph comprising web of interface adapters.
function Cover-Subgraph(D, t)
    V′ ← ∅, E′ ← ∅
    Q ← empty queue, Q′ ← ∅
    for method i of t do
        insert (t, i) into Q and Q′
    end for
    while Q is not empty do
        extract (I′, j) from Q
        V′ ← V′ ∪ {I′}
        E′ ← E′ ∪ D[I′][j]
        for A = (I, I′) ∈ D[I′][j] do
            for i such that depend(A)_{ji} do
                if (I, i) ∉ Q′ then
                    insert (I, i) into Q and Q′
                end if
            end for
        end for
    end while
    return (V′, E′)
end function
The interface adapters used to adapt a given method are specified by algorithm 4; the concrete steps involved in actually adapting a method are left to how interface adaptation is actually done, whether it be direct invocation by the interface adapter, call substitution after constructing the call graph, or composition of interface adapters specified in a high-level language. The exact criterion for selecting an adapter in algorithm 4 also does not affect the correctness of the algorithm.
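As an illustration of the selection step that algorithm 4 abstracts, the sketch below (hypothetical names, and the same ad-hoc adapter encoding assumed in the earlier sketch: an adapter name mapped to its source interface, target interface, and dependency dictionary) recursively picks one viable adapter per method from the per-method map computed by the forward pass. The `choose` criterion is a placeholder, since, as noted above, the exact criterion does not affect correctness.

```python
def plan_invocation(method, interface, viable, adapters, source, choose=min):
    # Pick one viable adapter for `method` of `interface`, then recurse on the
    # source methods that adapter depends on; returns (adapter, method) pairs
    # in an order in which they could be invoked.  Assumes the web produced by
    # the forward pass is acyclic, so the recursion terminates.
    if interface == source:
        return []                                  # source methods need no adaptation
    candidates = viable.get((interface, method), set())
    if not candidates:
        raise ValueError(f"method {method!r} of {interface!r} cannot be adapted")
    adapter = choose(candidates)                   # placeholder selection criterion
    src, _, dep = adapters[adapter]
    plan = []
    for smeth in dep[method]:
        plan.extend(plan_invocation(smeth, src, viable, adapters, source, choose))
    plan.append((adapter, method))
    return plan
```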
Algorithm 1 constructs a maximally covering web of adapters, but it completely ignores the number of interface adapters it incorporates in the web. It could end up constructing a web with hundreds of interface adapters when less than a dozen would do. However, trying to minimize the number of incorporated interface adapters is an NP-complete problem as we will show in section 4. Invoking the minimum number of interface adapters while actually adapting a method also turns out to be NP-complete.
Minimizing number of adapters
While algorithm 1 can construct a maximally covering web of interface adapters in polynomial time (O(m 2 ) being a loose time bound with a straightforward implementation, where m is the total number of methods), it is unlikely there will be a polynomial-time algorithm for finding a maximally covering web of adapters with the minimum number of interface adapters. This is because the problem is NP-complete, which we will prove with a reduction from one-in-three 3SAT. [5] We formally define MINWEB as the problem of whether there is a web of interface adapters in an interface adapter graph from a given source interface to a given target interface such that it is maximally covering and has at most K interface adapters. Given a candidate boolean expression for one-in-three 3SAT with c clauses and v variables, we will reduce it to a candidate interface adapter graph for MINWEB such that the boolean expression is an instance of one-in-three 3SAT if and only if there is a maximally covering web of interface adapters with at most v + 2c adapters.
For each variable, we create an interface with methods corresponding to all the literals, two for each variable. For each clause, we create an interface with only a single method. We also separately create a source interface with methods corresponding to the possible literals and a target interface with methods corresponding to the clauses.
Starting from the source interface, we connect the interfaces corresponding to variables serially. Between each pair of consecutive interfaces, we define two adapters, one which makes the successor variable true and the other which makes it false, by making the method corresponding to the positive literal available and the method corresponding to the negative literal unavailable in one adapter, and the opposite in the other adapter. Other literals are left alone. This is identical to how a variable handling subgraph is constructed in [1].
From the sink node of the variable handling subgraph, we create three adapters to each of the interfaces corresponding to the clauses. Each adapter corresponds to a literal in the clause, and the sole method in the interface is available only if the method corresponding to the literal is available. And from each interface corresponding to a clause, there is a single adapter to the target interface for the entire graph which makes the method corresponding to the clause available only if the sole method in the clause interface is available.
For the graph constructed this way, the entire graph is obviously maximally covering with 2v + 4c adapters and all methods available at the target interface. If the original boolean expression is an instance of one-in-three 3SAT, then a satisfying assignment can specify a singly-linked path through the variable interfaces, followed by each true literal specifying the adapters to pass through to each clause interface, followed by the adapters to the target interface, and the resulting directed acyclic graph is a maximally covering web of adapters with v + 2c adapters, since all methods will be available at the target interface.
Conversely, if there is a maximally covering web of adapters with v + 2c adapters, then 2c adapters connect to the clause interfaces since all clause interfaces must be included. The remaining v adapters must be a singly-linked path through the variable interfaces, and the selection of adapters for each variable interface specifies a variable assignment which satisfies the original boolean expression with only one true literal in each clause. Therefore MINWEB is NP-complete, and we can also conclude that minimizing the number of required adapters to adapt a single method is also NP-complete by removing the other methods in the target interface.
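The gadget construction just described can be written down mechanically. The sketch below is illustrative only, with made-up interface names (SRC, VARv, CLk, TGT) and the same ad-hoc encoding assumed in the earlier sketches; it builds the 2v + 4c adapters of the reduction as plain data.

```python
from itertools import count

def reduction_graph(clauses, num_vars):
    # clauses: list of 3-tuples of nonzero ints, e.g. (1, -2, 3) = x1 OR not-x2 OR x3.
    literals = [l for v in range(1, num_vars + 1) for l in (v, -v)]
    interfaces = {"SRC": list(literals),
                  "TGT": [f"c{k}" for k in range(len(clauses))]}
    for v in range(1, num_vars + 1):
        interfaces[f"VAR{v}"] = list(literals)
    for k in range(len(clauses)):
        interfaces[f"CL{k}"] = ["hit"]

    adapters, aid = {}, count()
    chain = ["SRC"] + [f"VAR{v}" for v in range(1, num_vars + 1)]
    for prev, cur, v in zip(chain, chain[1:], range(1, num_vars + 1)):
        for sign in (+1, -1):                     # "x_v := true" / "x_v := false" adapters
            dep = {l: {l} for l in literals if abs(l) != v}   # other literals pass through
            dep[sign * v] = {sign * v}            # chosen literal stays available; the
                                                  # opposite literal is dropped (lossy)
            adapters[f"A{next(aid)}"] = (prev, cur, dep)

    sink = chain[-1]                              # last variable interface
    for k, clause in enumerate(clauses):
        for lit in clause:                        # one adapter per literal of the clause
            adapters[f"A{next(aid)}"] = (sink, f"CL{k}", {"hit": {lit}})
        adapters[f"A{next(aid)}"] = (f"CL{k}", "TGT", {f"c{k}": {"hit"}})
    return interfaces, adapters, "SRC", "TGT"

# Example: (x1 or x2 or x3) and (not-x1 or x2 or not-x3), 3 variables, 2 clauses.
interfaces, adapters, src, tgt = reduction_graph([(1, 2, 3), (-1, 2, -3)], 3)
print(len(adapters))   # 2*3 + 4*2 = 14 adapters in the full graph
```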
Conclusions
We described a polynomial-time algorithm which can construct a maximally covering web of interface adapters, which may include a much larger number of interface adapters than necessary. However, we also showed that minimizing the number of interface adapters included in a maximally covering web of interface adapters is an NP-complete problem.
Further work can be done to extend these results by relaxing the assumptions made in this paper. We can consider the case when a method in a target interface can only be partially implemented from methods in a source interface. We can also consider how the quality of adapters should be dealt with in algorithms, or how to deal with adapters that maintain state. Heuristic algorithms which attempt to minimize the number of adapters included in a nearly maximally covering web of interface adapters are another area for future work.
| 2,047 |
0907.1779
|
2951984019
|
We introduce the classified stable matching problem, a problem motivated by academic hiring. Suppose that a number of institutes are hiring faculty members from a pool of applicants. Both institutes and applicants have preferences over the other side. An institute classifies the applicants based on their research areas (or any other criterion), and, for each class, it sets a lower bound and an upper bound on the number of applicants it would hire in that class. The objective is to find a stable matching from which no group of participants has reason to deviate. Moreover, the matching should respect the upper/lower bounds of the classes. In the first part of the paper, we study classified stable matching problems whose classifications belong to a fixed set of "order types." We show that if the set consists entirely of downward forests, there is a polynomial-time algorithm; otherwise, it is NP-complete to decide the existence of a stable matching. In the second part, we investigate the problem using a polyhedral approach. Suppose that all classifications are laminar families and there is no lower bound. We propose a set of linear inequalities to describe the stable matching polytope and prove that it is integral. This integrality allows us to find various optimal stable matchings using the Ellipsoid algorithm. A further ramification of our result is the description of the stable matching polytope for the many-to-many (unclassified) stable matching problem. This answers an open question posed by Sethuraman, Teo and Qian.
|
Stable matching problems have drawn intensive attention from researchers in various disciplines in the past decades since the seminal paper of Gale and Shapley @cite_13 . For a summary, see @cite_22 @cite_9 @cite_5 . Vande Vate @cite_3 initiated the study of stable matching using a mathematical programming approach; further developments using this approach can be found in @cite_12 @cite_23 @cite_10 @cite_7 @cite_6 @cite_17 @cite_8 .
|
{
"abstract": [
"The original work of Gale and Shapley on an assignment method using the stable marriage criterion has been extended to find all the stable marriage assignments. The algorithm derived for finding all the stable marriage assignments is proved to satisfy all the conditions of the problem. Algorithm 411 applies to this paper.",
"",
"We study the classical stable marriage and stable roommates problems using a polyhedral approach. We propose a new LP formulation for the stable roommates problem, which has a feasible solution if and only if the underlying roommates problem has a stable matching. Furthermore, for certain special weight functions on the edges, we construct a 2-approximation algorithm for the optimal stable roommates problem. Our technique exploits features of the geometry of fractional solutions of this formulation. For the stable marriage problem, we show that a related geometry allows us to express any fractional solution in the stable marriage polytope as a convex combination of stable marriage solutions. This also leads to a genuinely simple proof of the integrality of the stable marriage polytope.",
"Vande Vate 1989 described the polytope whose extreme points are the stable core matchings in the Marriage Problem. Rothblum 1989 simplified and extended this result. This paper explores a corresponding linear program, its dual and consequences of the fact that the dual solutions have an unusually direct relation to the primal solutions. This close relationship allows us to provide simple proofs both of Vande Vate and Rothblum's results and of other important results about the core of marriage markets. These proofs help explain the structure shared by the marriage problem without sidepayments and the assignment game with sidepayments. The paper further explores \"fractional\" matchings, which may be interpreted as lotteries over possible matches or as time-sharing arrangements. We show that those fractional matchings in the Stable Marriage Polytope form a lattice with respect to a partial ordering that involves stochastic dominance. Thus, all expected utility functions corresponding to the same ordinal preferences will agree on the relevant comparisons. Finally, we provide linear programming proofs of slightly stronger versions of known incentive compatibility results.",
"",
"",
"In a recent paper, Weems introduced the bistable matching problem, and asked if a polynomial-time algorithm exists to decide the feasibility of the bistable roommates problem. We resolve this question in the affirmative using linear programming. In addition, we show that several (old and new) results for the bistable marriage and roommates problem become transparent using the polyhedral approach. This technique has been used recently by the authors to address classical stable matching problems.",
"The stable admissions polytope– the convex hull of the stable assignments of the university admissions problem – is described by a set of linear inequalities. It depends on a new characterization of stability and arguments that exploit and extend a graphical approach that has been fruitful in the analysis of the stable marriage problem.",
"",
"",
"The theory of linear inequalities and linear programming was recently applied to study the stable marriage problem which until then has been studied by mostly combinatorial methods. Here we extend the approach to the general stable matching problem in which the structure of matchable pairs need not be bipartite. New issues arise in the analysis and we combine linear algebra and graph theory to explore them.",
"Baiou and Balinski characterized the stable admissions polytope using a system of linear inequalities. The structure of feasible solutions to this system of inequalities---fractional stable matchings---is the focus of this paper. The main result associates a geometric structure with each fractional stable matching. This insight appears to be interesting in its own right, and can be viewed as a generalization of the lattice structure (for integral stable matchings) to fractional stable matchings. In addition to obtaining simple proofs of many known results, the geometric structure is used to prove the following two results: First, it is shown that assigning each agent their “median” choice among all stable partners results in a stable matching, which can be viewed as a “fair” compromise; second, sufficient conditions are identified under which stable matchings exist in a problem with externalities, in particular, in the stable matching problem with couples."
],
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_10",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_23",
"@cite_5",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2017008045",
"",
"2114200231",
"2168987865",
"",
"",
"2138100238",
"2080307469",
"",
"2068115726",
"1977846486",
"2140384255"
]
}
|
Classified Stable Matching
|
Imagine that a number of institutes are recruiting faculty members from a pool of applicants. Both sides have their preferences. It would be ideal if there is a matching from which no applicant and institute have reason to deviate. If an applicant prefers another institute to the one he is assigned to (or maybe he is unassigned) and this institute also prefers him to any one of its assigned applicants, then this institute-applicant pair is a blocking pair. A matching is stable if there is no blocking pair.
The above scenario is the well-studied hospitals/residents problem [7,11] in a different guise. It is known that stable matchings always exist and can be found efficiently by the Gale-Shapley algorithm. However, real world situations can be more complicated. An institute may have its own hiring policy and may find certain sets of applicants together unacceptable. For example, an institute may have reasons to avoid hiring too many applicants graduated from the same school; or it may want to diversify its faculty so that it can have researchers in many different fields.
This concern motivates us to consider the following problem. An institute, besides giving its preference among the applicants, also classifies them based on their expertise (or some other criterion). For each class, it sets an upper bound and a lower bound on the number of applicants it would hire. Each institute defines its own classes and classifies the applicants in its own way (and the classes need not be disjoint). We consider this flexibility a desirable feature, as there are some research fields whose boundaries are blurred; moreover, some versatile researchers may be hard to categorize.
We call the above problem classified stable matching. Even though motivated by academic hiring, it comes up any time objects on one side of the matching have multiple partners that may be classified. For example, the two sides can be jobs and machines; each machine is assigned several jobs but perhaps cannot take two jobs with heavy memory requirements.
To make the problem precise, we introduce necessary notation and terminology. A set A of applicants and a set I of institutes are given. Each applicant/institute has a strictly-ordered (but not necessarily complete) preference list over the other side. The notation ⪰_e indicates strictly better or equal in the preference of an entity e ∈ A ∪ I, while ≻_e means strictly better. For example, if applicant a ∈ A strictly prefers institute i ∈ I to another institute i' ∈ I, we write i ≻_a i'. The preference list of institute i is denoted L^i. The set of applicants on L^i who rank higher (respectively lower) than some particular applicant a is written L^i_{≻a} (respectively L^i_{≺a}). An institute i has a capacity Q(i) ∈ Z^+, the maximum number of applicants it can hire. It defines its own classification C(i) = {C^i_j}_{j=1}^{|C(i)|}, which is a family of sets over the applicants on its preference list. Each class C^i_j ∈ C(i) has an upper bound q^+(C^i_j) ∈ Z^+ and a lower bound q^-(C^i_j) ∈ Z^+ ∪ {0} on the number of applicants it would hire in that class. Given a matching µ, µ(a) is the institute applicant a is assigned to. We write µ(i) = (a_{i1}, a_{i2}, ..., a_{ik}), k ≤ Q(i), to denote the set of applicants institute i gets in µ, where the a_{ij} are listed in decreasing order based on its preference list. In this paper, we will slightly abuse notation, treating an (ordered) tuple such as µ(i) as a set.
Definition 1. Given a tuple t = (a_{i1}, a_{i2}, ..., a_{ik}) where the a_{ij} are ordered based on their decreasing rankings on institute i's preference list, it is said to be a feasible tuple of institute i, or just feasible for short, if the following conditions hold:
- k ≤ Q(i);
- given any class C^i_j ∈ C(i), q^-(C^i_j) ≤ |t ∩ C^i_j| ≤ q^+(C^i_j).
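As a small executable restatement of Definition 1 (the names and the class encoding below are illustrative, not from the paper), the following sketch checks a tuple of applicants against an institute's capacity and its class bounds.

```python
def is_feasible(applicants, capacity, classes):
    # Definition 1: the tuple fits the institute capacity, and for every class
    # the number of its members in the tuple lies within [lower, upper].
    chosen = set(applicants)
    if len(chosen) > capacity:
        return False
    return all(lo <= len(chosen & members) <= hi
               for members, lo, hi in classes.values())

# Toy instance: capacity 3, a "theory" class with bounds [1, 2] and a
# "systems" class with bounds [0, 1].
classes = {"theory": ({"alice", "bob"}, 1, 2),
           "systems": ({"carol"}, 0, 1)}
print(is_feasible(("alice", "carol"), 3, classes))   # True
print(is_feasible(("carol",), 3, classes))           # False: "theory" lower bound unmet
```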
Definition 2. A matching µ is feasible if all the tuples µ(i), i ∈ I, are feasible. A feasible matching is stable if and only if there is no blocking group. A blocking group is defined as follows. Let µ(i) = (a_{i1}, a_{i2}, ..., a_{ik}), k ≤ Q(i). A feasible tuple g = (a'_{i1}, a'_{i2}, ..., a'_{ik'}), k ≤ k' ≤ Q(i), forms a blocking group (i; g) with institute i if
- for 1 ≤ j ≤ k, i ⪰_{a'_{ij}} µ(a'_{ij}) and a'_{ij} ⪰_i a_{ij};
- either there exists l, 1 ≤ l ≤ k, such that a'_{il} ≻_i a_{il} and i ≻_{a'_{il}} µ(a'_{il}), or k' > k.
Informally speaking, the definition requires that for a blocking group to be formed, all involved applicants have to be willing to switch to, or stay with, institute i. The collection of applicants in the blocking group should still respect the upper and lower bounds in each class; moreover, the institute gets a strictly better deal (in the Pareto-optimal sense). Note that when there is no class lower bound, the stable matching as defined in Definition 2 can be equivalently defined as a feasible matching without the conventional blocking pairs (see Lemma 17 in Section 4). When the class lower bound is present, the definition of blocking groups captures our intuition that an institute should not indiscriminately replace a lower-ranking applicant assigned to it with a higher-ranking applicant (with whom it forms a blocking pair); otherwise, the outcome for it may not be a feasible one. In our proofs, we often use the notation µ(i)|_a^{a'} to denote a tuple formed by replacing a ∈ µ(i) with a'. The order of the tuple µ(i)|_a^{a'} is still based on institute i's preference list. If we write µ(i)|a, then this new tuple is obtained by adding a into µ(i) and re-ordering. In a matching µ, if a class C^i_j is fully-booked, i.e. |µ(i) ∩ C^i_j| = q^+(C^i_j), we often refer to such a class as a "bottleneck" class. We also define an "absorption" operation: given a set B of classes, ℜ(B) returns the set of classes which are not entirely contained in other classes in B.
Our Results. It would be of interest to know how complicated the classifications of the institutes can be while still allowing the problem a polynomial-time algorithm. In this work, we study the classified stable matching problems whose classifications belong to a fixed set of "order types." The order type of a classification is the inclusion poset of all non-empty intersections of classes. We introduce necessary definitions to make our statement precise.
Definition 3. The class inclusion poset P(i) = (C(i), ≻) of an institute i is composed of sets of the elements from L^i: C(i) = {C | C = C^i_j ∩ C^i_k, where C^i_j, C^i_k ∈ C(i)}^1. In P(i), C^i_j ≻ C^i_k if C^i_j ⊃ C^i_k; and C^i_j and C^i_k are incomparable if C^i_j ⊉ C^i_k and C^i_k ⊉ C^i_j.
Definition 4. Let P = {P_1, P_2, ..., P_k} be a set of posets. A classified stable matching instance (A, I) belongs to the group of P-classified stable matching problems if for each poset P_j ∈ P, there exists an institute i ∈ I whose class inclusion poset P(i) is isomorphic to P_j and, conversely, every class inclusion poset P(i) is isomorphic to a poset in P.
We call a poset a downward forest if given any element, no two of its successors are incomparable. Our first main result is the following dichotomy theorem.
Theorem 5. Let P = {P 1 , P 2 , · · · , P k } be a set of posets. P-classified stable matching problems can be solved in polynomial time if every poset P j ∈ P is a downward forest; on the other hand, if P contains a poset P j which is not a downward forest, the existence of a stable matching is NP-complete.
We remark that if P is entirely composed of downward forests, then every classification C(i) must be a laminar family 2 . In this case, we call the problem laminar classified stable matching (henceforth LCSM).
We present an O(m^2)-time algorithm for LCSM, where m is the total size of all preferences. Our algorithm is extended from the Gale-Shapley algorithm. Though intuitive, its correctness is difficult to argue due to various constraints^3. Furthermore, we show that several well-known structural results in the hospitals/residents problem can be further generalized in LCSM. On the other hand, if some institute i has a classification C(i) violating laminarity, then P must contain a poset which has a "V" (where the "bottom" is induced by two intersecting classes in C(i) which are its parents "on top"). We will make use of this fact to design a gadget for our NP-completeness reduction. In particular, in our reduction, all institutes only use upper-bound constraints. Sections 2 and 3 will be devoted to these results.
Our dichotomy theorem implies a certain limit on the freedom of the classifications defined by the institutes. For example, an institute may want to classify the applicants based on two different criteria simultaneously (say by research fields and gender); however, our result implies this may cause the problem to become intractable.
In the second part, we study LCSM using a mathematical programming approach. Assume that there is no lower bound on the classes. We extend the set of linear inequalities used by Baïou and Balinski [3] to describe stable matchings and generalize a bin-packing algorithm of Sethuraman, Teo, and Qian [22] to prove that the polytope is integral. The integrality of our polytope allows us to use suitable objective functions to obtain various optimal stable matchings using Ellipsoid algorithm. As our LP has an exponential number of constraints, we also design a separation oracle.
By studying the geometric structure of fractional stable matchings, we are able to generalize a theorem of Teo and Sethuraman [23]: in (one-to-one) stable marriage, given any number of stable matchings, if we assign every man his median choice among all women with whom he is matched in the given set of matchings and we do similarly for women, the outcome is still a stable matching. This theorem has been generalized in the context of hospitals/residents problem [5,13,22]. We prove that in LCSM, this theorem still holds: if we apply this "median choice operation" on all applicants, the outcome is still a stable matching 4 .
A final ramification of our polyhedral result is an answer to an open question posed by Sethuraman, Teo and Qian [22]: how do we describe the stable matching polytope in the classical "unclassified" many-to-many stable matching problem? We show this problem can be reduced to LCSM by suitable cloning and classifications.
All the polyhedral results will be presented in Section 4. In Section 5 we conclude. Omitted proofs and details can be found in the appendix.
An Algorithm for Laminar Classified Stable Matching
In this section, we present a polynomial time algorithm to find a stable matching if it exists in the given LCSM instance, otherwise, to report that none exists.
We pre-process our instance as follows. If applicant a is on institute i's preference list, we add a class C^i_{a1} = {a} into C(i). Furthermore, we also add a class C^i_♯ into C(i) including all applicants in L^i. After this pre-processing, the classes in C(i) form a tree whose root is C^i_♯; moreover, an applicant a belongs to a sequence of classes a(C(i)) = (C^i_{a1}, C^i_{a2}, ..., C^i_{az} (= C^i_♯)), which forms a path from the leaf to the root in the tree (i.e., C^i_{aj} is a superclass of C^i_{aj'} provided j' < j). For each non-leaf class C^i_j, let c(C^i_j) denote the set of its child classes in the tree. We can assume without loss of generality that q^-(C^i_j) ≥ Σ_{C^i_k ∈ c(C^i_j)} q^-(C^i_k) for any non-leaf class C^i_j. Finally, let q^+(C^i_♯) := Q(i) and q^-(C^i_♯) := Σ_{C^i_k ∈ c(C^i_♯)} q^-(C^i_k); for all applicants a ∈ L^i, q^+(C^i_{a1}) := 1 and q^-(C^i_{a1}) := 0.
Our algorithm finds an applicant-optimal-institute-pessimal stable matching. Applicant-optimality means that all applicants get the best outcome among all stable matchings; on the other hand, institute-pessimality means that all institutes get an outcome which is "lexicographically" the worst for them. To be precise, suppose that µ(i) = (a_{i1}, a_{i2}, ..., a_{ik}) and µ'(i) = (a'_{i1}, a'_{i2}, ..., a'_{ik}) are the outcomes of two stable matchings for institute i (in LCSM, an institute always gets the same number of applicants in all stable matchings; see Theorem 15 below). If there exists k' ≤ k so that a_{ij} = a'_{ij} for all 1 ≤ j ≤ k' − 1 and a_{ik'} ≻_i a'_{ik'}, then institute i is lexicographically better off in µ than in µ'.
We now sketch the high-level idea of our algorithm. We let applicants "propose" to the institutes from the top of their preference lists. Institutes make the decision of acceptance/rejection of the proposals based on certain rules (to be explained shortly). Applicants, if rejected, propose to the next highest-ranking institutes on their lists. The algorithm terminates when all applicants either end up with some institutes or run out of their lists. Then we check whether the final outcome meets the upper and lower bounds of all classes. If yes, the outcome is a stable matching; if no, there is no stable matching in the given instance.
How the institutes make the acceptance/rejection decisions is the core of our algorithm. Intuitively, when an institute gets a proposal, it should consider two things: (i) will adding this new applicant violate the upper bound of some class? (ii) will adding this applicant deprive other classes of their necessary minimum requirement? If the answer to either question is positive, the institute should not just take the new applicant unconditionally; instead, it has to reject someone it currently has (not necessarily the newcomer).
Below we will design two invariants for all classes of an institute. Suppose that institute i gets a proposal from applicant a, who belongs to a sequence of classes a(C(i)) = (C^i_{a1}, C^i_{a2}, ..., C^i_♯). We check this sequence of classes from the leaf to the root. If adding applicant a into class C^i_{aj} does not violate these invariants, we climb up and see whether adding applicant a into C^i_{a(j+1)} violates them. If we can reach all the way to C^i_♯ without violating the invariants in any class in a(C(i)), applicant a is simply added to institute i's collection. If, on the other hand, adding applicant a into C^i_{a(j+1)} violates the invariants, institute i rejects some applicant in C^i_{a(j+1)} who comes from a sequence of subclasses of C^i_{a(j+1)} that can afford to lose one applicant.
We define a deficiency number ∆(C^i_j) for each class C^i_j ∈ C(i). Intuitively, the deficiency number indicates how many more applicants are necessary for class C^i_j to meet the lower bound of all its subclasses. This intuition translates into the following invariant:
Invariant A: ∆(C^i_j) ≥ Σ_{C^i_k ∈ c(C^i_j)} ∆(C^i_k), for all C^i_j ∈ C(i) with c(C^i_j) ≠ ∅ and for all i ∈ I.
In the beginning, ∆(C^i_j) is set to q^-(C^i_j); we will explain how ∆(C^i_j) is updated shortly. Its main purpose is to make sure that, after adding some applicants into C^i_j, there is still enough "space" for other applicants to be added into C^i_j so that we can satisfy the lower bound of all subclasses of C^i_j. In particular, we maintain
Invariant B: q^-(C^i_j) ≤ |µ(i) ∩ C^i_j| + ∆(C^i_j) ≤ q^+(C^i_j), for all C^i_j ∈ C(i) and all i ∈ I.
We now explain how ∆(C i j ) is updated. Under normal circumstances, we decrease ∆(C i j ) by 1 once we add a new applicant into C i j . However, if Invariant A is already "tight", i.e., ∆(C i j ) = C i k ∈c(C i j ) ∆(C i k ), then we add the new applicant C i j without decreasing ∆(C i j ). The same situation may repeat until the point that |µ(i) ∩ C i j | + ∆(C i j ) = q + (C i j ) and adding another new applicant in C i j is about to violate Invariant B. In this case, something has to be done to ensure that Invariant B holds: some applicant in C i j has to be rejected, and the question is whom? Let us call a class a surplus class if |µ(i) ∩ C i j | + ∆(C i j ) > q − (C i j ) and we define an affluent set for each class C i j as follows:
$(C^i_j, µ(i)) = {a ∈ µ(i) ∩ C^i_j : for each C^i_{j'} ∈ a(C(i)) with C^i_{j'} ⊂ C^i_j, |µ(i) ∩ C^i_{j'}| + ∆(C^i_{j'}) > q^-(C^i_{j'})}.
In words, the affluent set $(C^i_j, µ(i)) consists of the applicants currently assigned to institute i who belong to C^i_j and for whom every proper subclass of C^i_j in their chain is a surplus class. In our algorithm, to prevent Invariant B from being violated in a non-leaf class C^i_j, institute i rejects the lowest-ranking applicant a in the affluent set $(C^i_j, µ(i)). The pseudo-code of the algorithm is presented in Figure 1.
Initialization
0: ∀i ∈ I, ∀C^i_j ∈ C(i), ∆(C^i_j) := q^-(C^i_j);
Algorithm
1: While there exists an applicant a who is unassigned and has not been rejected by all institutes on his list
2:   Applicant a proposes to the highest-ranking institute i to whom he has not proposed so far;
3:   Assume that a(C(i)) = (C^i_{a1}, C^i_{a2}, ..., C^i_{az} (= C^i_♯));
4:   µ(i) := µ(i) ∪ {a};   // institute i accepts applicant a provisionally
5:   For t = 2 To z   // applicant a can be added into C^i_{a1} directly
6:     If ∆(C^i_{at}) > Σ_{C^i_k ∈ c(C^i_{at})} ∆(C^i_k) Then ∆(C^i_{at}) := ∆(C^i_{at}) − 1;
7:     If |µ(i) ∩ C^i_{at}| + ∆(C^i_{at}) > q^+(C^i_{at}) Then
8:       Let $(C^i_{at}, µ(i)) = {a' ∈ µ(i) ∩ C^i_{at} : for each C^i_{j'} ∈ a'(C(i)) with C^i_{j'} ⊂ C^i_{at}, |µ(i) ∩ C^i_{j'}| + ∆(C^i_{j'}) > q^-(C^i_{j'})};
9:       Let the lowest-ranking applicant in $(C^i_{at}, µ(i)) be a†;
10:      µ(i) := µ(i)\{a†};   // institute i rejects applicant a†
11:      GOTO 1;
12: If there exists an institute i with ∆(C^i_♯) > 0 Then Report "There is no stable matching";
13: Else Return the outcome µ, which is a stable matching;
Fig. 1. The pseudo-code of the algorithm. It outputs the applicant-optimal-institute-pessimal matching µ if it exists; otherwise, it reports that there is no stable matching.
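To make the pseudo-code concrete, here is a minimal Python sketch of Figure 1 on a toy data model of our own (preference dicts, one ClassNode object per class); it assumes, as in the construction above, that every applicant on an institute's list forms a singleton leaf class with q^+ = 1 and q^- = 0, and it illustrates only the proposal/rejection logic, not the efficient implementation discussed after Theorem 13.

from dataclasses import dataclass, field

@dataclass
class ClassNode:
    members: frozenset            # applicants belonging to this class
    q_lo: int                     # lower bound q^-
    q_hi: int                     # upper bound q^+
    children: list = field(default_factory=list)
    deficiency: int = 0           # Delta, maintained by the algorithm

def chain(classes, a):
    """a(C(i)): the classes of one institute containing applicant a, smallest first."""
    return sorted((c for c in classes if a in c.members), key=lambda c: len(c.members))

def lcsm(applicant_pref, inst_rank, inst_classes):
    """applicant_pref: a -> list of institutes (best first);
       inst_rank: i -> {a: rank} (smaller = better);
       inst_classes: i -> laminar list of ClassNode whose largest member is C_sharp."""
    for classes in inst_classes.values():                     # line 0: Delta := q^-
        for c in classes:
            c.deficiency = c.q_lo
    mu = {i: set() for i in inst_rank}
    nxt = {a: 0 for a in applicant_pref}                      # next institute to try
    free = set(applicant_pref)
    while free:                                               # line 1
        a = free.pop()
        if nxt[a] >= len(applicant_pref[a]):
            continue                                          # a has exhausted his list
        i = applicant_pref[a][nxt[a]]                         # line 2
        nxt[a] += 1
        mu[i].add(a)                                          # line 4: provisional accept
        for c in chain(inst_classes[i], a)[1:]:               # line 5: climb above the leaf
            if c.deficiency > sum(k.deficiency for k in c.children):      # line 6
                c.deficiency -= 1
            if len(mu[i] & c.members) + c.deficiency > c.q_hi:            # line 7
                # line 8: the affluent set; nonempty by Lemma 6(iib)
                affluent = [b for b in mu[i] & c.members
                            if all(len(mu[i] & d.members) + d.deficiency > d.q_lo
                                   for d in chain(inst_classes[i], b)
                                   if d.members < c.members)]
                reject = max(affluent, key=lambda b: inst_rank[i][b])     # lines 9-10
                mu[i].discard(reject)
                free.add(reject)
                break                                                     # line 11: GOTO 1
    roots = {i: max(cs, key=lambda c: len(c.members)) for i, cs in inst_classes.items()}
    if any(r.deficiency > 0 for r in roots.values()):         # line 12
        return None                                           # no stable matching exists
    return mu                                                 # line 13

Representing each laminar family explicitly as member sets keeps the sketch short; the O(m^2) implementation discussed below instead remembers, for every class, its lowest-ranking current member.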
Correctness of the Algorithm
In our discussion, C^i_{at} is a class in a(C(i)), where t is the index based on the size of the class C^i_{at} in a(C(i)). Assume that during the execution of the algorithm, applicant a proposes to institute i; when the index t of the For loop of Line 5 becomes l and results in a† being rejected, we say applicant a is stopped at class C^i_{al}, and class C^i_{al} causes applicant a† to be rejected. The first lemma describes some basic behavior of our algorithm.
Lemma 6. (i) Immediately before the end of the while loop, Invariants A and B hold.
(ii) Let applicant a be the new proposer and assume he is stopped at class C^i_{al}. Then
(iia) between the time he makes the new proposal and the time he is stopped at C^i_{al}, ∆(C^i_{at}) remains unchanged for all 1 ≤ t ≤ l; moreover, for any class C^i_{at}, 2 ≤ t ≤ l, ∆(C^i_{at}) = Σ_{C^i_k ∈ c(C^i_{at})} ∆(C^i_k);
(iib) when a is stopped at a non-leaf class C^i_{al}, $(C^i_{al}, µ(i)) ≠ ∅; in particular, every class C^i_{at}, 1 ≤ t ≤ l−1, is temporarily a surplus class.
(iii) Immediately before the end of the while loop, if class C^i_j is a non-leaf surplus class, then ∆(C^i_j) = Σ_{C^i_k ∈ c(C^i_j)} ∆(C^i_k).
(iv) Suppose that applicant a is the new proposer, C^i_{al} ∈ a(C(i)) causes applicant a† to be rejected, and a†(C(i)) = (C^i_{a†1}, C^i_{a†2}, ..., C^i_{a†l†} (= C^i_{al}), ...). Then immediately before the end of the while loop, ∆(C^i_{a†t'}) = Σ_{C^i_k ∈ c(C^i_{a†t'})} ∆(C^i_k) for all 2 ≤ t' ≤ l†; moreover, |µ(i) ∩ C^i_{a†l†}| + ∆(C^i_{a†l†}) = q^+(C^i_{a†l†}).
Proof. (i) can be proved by induction on the number of proposals institute i gets. For (iia), since Invariant A is maintained, if ∆(C i at ) is decreased for some class C i at , 1 ≤ t ≤ l, the algorithm will ensure that applicant a would not be stopped in any class, leading to a contradiction. Now by (iia), the set of classes {C i at } l−1 t=1 are (temporarily) surplus classes when applicant a is stopped at C i al , so $(C i al , µ(i)) = ∅, establishing (iib). Note that this also guarantees that the proposed algorithm is never "stuck." (iii) can be proved inductively on the number of proposals that institute i gets. Assuming a is the new proposer, there are two cases: (1) Suppose that applicant a is not stopped in any class. Then a class C i at ∈ a(C(i)) can become surplus only if the stated condition holds ; (2) Suppose that applicant a is stopped in some class, which causes a † to be rejected. Let the smallest class containing both a and a † be C i al ′ . Applying (iia) and observing the algorithm, it can be verified that only a class C i at ⊂ C i al ′ can become a surplus class and for such a class, the stated condition holds. Finally, for the first part of (iv), let C i al ′ denote the smallest class containing both a and a † . Given
a class C i a † t ′ , if C i al ′ ⊆ C i a † t ′ ⊆ C i al , (iia) gives the proof. If C i a † t ′ ⊂ C i al ′ ,
observe that the former must have been a surplus class right before applicant a made the new proposal. Moreover, before applicant a proposed, (iii) implies that for a non-leaf class C i a † t ′ ⊂ C i al ′ , the stated condition regarding the deficiency numbers is true. The last statement of (iv) is by the algorithm and Invariant B.
⊓ ⊔
Lemma 7. Assume that a † (C(i)) = (C i a † 1 , C i a † 2 , · · · , C i a † l † , · · · )
. During the execution of the algorithm, suppose that class C i a † l † causes applicant a † to be rejected. In the subsequent execution of the algorithm, assuming that µ(i) is the assignment of institute i at the end of the while loop, then
there exists l‡ ≥ l† such that |µ(i) ∩ C^i_{a†l‡}| + ∆(C^i_{a†l‡}) = q^+(C^i_{a†l‡}); furthermore, for all 2 ≤ t ≤ l‡, all applicants in $(C^i_{a†t}, µ(i)) rank higher than a†. Moreover, for all 2 ≤ t ≤ l‡, ∆(C^i_{a†t}) = Σ_{C^i_k ∈ c(C^i_{a†t})} ∆(C^i_k).
Proof. We prove based on the induction on the number of proposals institute i receives after a † is rejected. The base case is when a † is just rejected. Let l ‡ = l † . Then it is obvious that all applicants in the affluent sets $(C i a † t , µ(i)), 2 ≤ t ≤ l ‡ , rank higher than a † and the rest of the lemma holds by Lemma 6(iv).
For the induction step, let a be the new proposer. There are four cases. Except the second case, we let l ‡ remain unchanged after a's proposal.
-Suppose that a ∈ C i a † l ‡ and he does not cause anyone in C i a † l ‡ to be rejected. Then the proof is trivial.
-Suppose that a ∈ C i a † l ‡ and he is stopped in class C i al , which causes an applicant a * ∈ C i a † l ‡ to be rejected. a * must be part of the affluent set $(C i a † l ‡ , µ(i)) before a proposed. By induction hypothesis, a * ≻ i a † . Moreover, since a * is chosen to be rejected, all the applicants in the (new) affluent sets $(C i a † t , µ(i)), for each class C i a † t , where C i a † l ‡ ⊂ C i a † t ⊆ C i al , rank higher than a * , hence, also higher than a † . Now let C i al be the new C i a † l ‡ and the rest of the lemma follows from Lemma 6(iv).
-Suppose that a ∈ C i a † l ‡ and he is not stopped in C i a † l ‡ or any of its subclasses. We argue that a must be accepted without causing anyone to be rejected; moreover, the applicants in all affluent sets $(C i a † t , µ(i)), for all 1 ≤ t ≤ l ‡ remain unchanged. Let the smallest class in a † (C(i)) containing a be C i a †l . Note that before a proposed, the induction hypothesis states that |µ
(i) ∩ C i a † l ‡ | + ∆(C i a † l ‡ ) = q + (C i a † l ‡ ).
As applicant a is not stopped at C i a † l ‡ , the set of values ∆(C i a † t ),l ≤ t ≤ l ‡ , must have decreased during his proposal and this implies that he will not be stopped in any class. Now let a(C(i)) = (C i a1 , · · · , C i al , C i a(l+1) (= C i a †l ), · · · ). Since ∆(C i a †l ) = C i k ∈c(C i a †l ) ∆(C i k ) before applicant a proposed by the induction hypothesis, for ∆(C i a †l ) to decrease, ∆(C i al ) must have decreased as well. Choose the smallest class C i al * ⊂ C i a †l whose value ∆(C i al * ) has decreased during a's proposal. We claim that C i al * must have been a non-surplus class before and after applicant a's proposal. If the claim is true, then all the affluent sets $(C i a † t , µ(i)), for all 1 ≤ t ≤ l ‡ , remain unchanged after applicant a's proposal.
It is obvious that C i al * = C i a1 . So assume that C i al * is a non-leaf class. Suppose for a contradiction that C i al * was a surplus class before a proposed. Lemma 6(iii) implies that
∆(C^i_{a†l*}) = Σ_{C^i_k ∈ c(C^i_{a†l*})} ∆(C^i_k) before a proposed.
Then for ∆(C i a † l * ) to decrease during a's proposal, ∆(C i a † (l * −1) ) must have decreased as well. But then this contradicts our choice of C i a † l * . So we establish that C i al * was not surplus and remains so after a's proposal. -Suppose that a ∈ C i a † l ‡ and when he reaches a subclass of C i a † l ‡ or the class itself, the latter causes some applicant a * to be rejected. To avoid trivialities, assume a = a * . Let the smallest class in a † (C(i)) containing a be C i a †l and the smallest class in a † (C(i)) containing a * be C i a † l * . Below we only argue that the case that C i a †l ⊆ C i a † l * . The other case that C i a † l * ⊂ C i a †l follows essentially the same argument. After a's proposal, observe that only the affluent sets $(C i a † t , µ(i)),l ≤ t < l * , can have new members (who are from the child class of C i a †l containing a). Without loss of generality, let G be the set of new members added into one of the any above sets. To complete the proof, we need to show that either G = ∅ or all members in G rank higher than a † . If before applicant a proposed, a * belonged to a sequence of surplus classes C i a * t ⊂ C i a † l * , he was also part of the affluent set $(C i a † l * , µ(i)) or part of µ(i)∩C i a † 1 before a proposed. By induction hypothesis, a * ≻ i a † . Observing Lemma 6(iib), all applicants in G must rank higher than a * , hence also than a † . On the other hand, if a * belongs to some class C i a * t ⊂ C i a † l * which was not surplus before a proposed, then C i a * l = C i a * l * and C i a * t must also contain a and remain a non-surplus class after a's proposal. In this case G = ∅.
⊓ ⊔
The following lemma is an abstraction of several counting arguments that we will use afterwards.
Lemma 8. Let each class C^i_j be associated with two numbers α^i_j and β^i_j with q^-(C^i_j) ≤ α^i_j, β^i_j ≤ q^+(C^i_j). For any non-leaf class C^i_j, α^i_j = Σ_{C^i_k ∈ c(C^i_j)} α^i_k and β^i_j ≥ Σ_{C^i_k ∈ c(C^i_j)} β^i_k; moreover, if β^i_j = Σ_{C^i_k ∈ c(C^i_j)} β^i_k, such a non-leaf class C^i_j is said to be tight in β. If β^i_j > q^-(C^i_j), then C^i_j has to be tight in β.
(i) Given a non-leaf class C^i_{a†l†} with α^i_{a†l†} < β^i_{a†l†}, we can find a sequence of classes C^i_{a†l†} ⊃ ··· ⊃ C^i_{a†1}, where α^i_{a†t} < β^i_{a†t} for 1 ≤ t ≤ l†.
(ii) Given a non-leaf class C^i_x with α^i_x ≤ β^i_x, suppose that there exists a leaf class C^i_{aφ1} ⊂ C^i_x such that α^i_{aφ1} > β^i_{aφ1}, and that all classes C^i_{aφt} with C^i_{aφ1} ⊆ C^i_{aφt} ⊆ C^i_x are tight in β. Then we can find a class C^i_{x'}, where C^i_{aφ1} ⊂ C^i_{x'} ⊆ C^i_x and α^i_{x'} ≤ β^i_{x'}, and two sequences of classes with the following properties:
(iia) C^i_{aφ1} ⊂ C^i_{aφ2} ⊂ ··· ⊂ C^i_{aφlφ} ⊂ C^i_{x'}, where α^i_{aφt} > β^i_{aφt} for 1 ≤ t ≤ lφ;
(iib) C^i_{x'} ⊃ C^i_{a†l†} ⊃ ··· ⊃ C^i_{a†1}, where α^i_{a†t} < β^i_{a†t} for 1 ≤ t ≤ l†.
Proof. For (i), since q^-(C^i_{a†l†}) ≤ α^i_{a†l†} < β^i_{a†l†}, class C^i_{a†l†} is tight in β. Therefore, Σ_{C^i_k ∈ c(C^i_{a†l†})} α^i_k = α^i_{a†l†} < β^i_{a†l†} = Σ_{C^i_k ∈ c(C^i_{a†l†})} β^i_k. By counting, there exists a class C^i_{a†(l†−1)} ∈ c(C^i_{a†l†}) with q^-(C^i_{a†(l†−1)}) ≤ α^i_{a†(l†−1)} < β^i_{a†(l†−1)}. Repeating the same argument gives us the sequence of classes.
For (ii), we climb up the tree from C^i_{aφ1} until we meet a class C^i_{x'} ⊆ C^i_x with α^i_{x'} ≤ β^i_{x'}. This gives us the sequence of classes stated in (iia). Now since the class C^i_{x'} is tight in β, Σ_{C^i_k ∈ c(C^i_{x'})} α^i_k = α^i_{x'} ≤ β^i_{x'} = Σ_{C^i_k ∈ c(C^i_{x'})} β^i_k. Moreover, as C^i_{aφlφ} ∈ c(C^i_{x'}) and α^i_{aφlφ} > β^i_{aφlφ}, by counting we can find another class C^i_{a†l†} ∈ c(C^i_{x'})\{C^i_{aφlφ}} such that β^i_{a†l†} > α^i_{a†l†} ≥ q^-(C^i_{a†l†}). Now applying (i) gives us the sequence of classes in (iib).
⊓ ⊔
We say that (i; a) is a stable pair if there exists any stable matching in which applicant is assigned to institute i. A stable pair is by-passed if institute i rejects applicant a during the execution of our algorithm.
Lemma 9. During the execution of the algorithm, if an applicant a φ is rejected by institute i, then (i; a φ ) is not a stable pair.
Proof. We prove by contradiction. Assume that (i; a φ ) is the first by-passed stable pair and there exists a stable matching µ φ in which µ φ (a φ ) = i. For each class C i j ∈ C(i), we associate two numbers
α i j := |µ φ (i) ∩ C i j | and β i j := |µ(i) ∩ C i j | + ∆(C i j ).
Here ∆(·)s are the values recorded in the algorithm right after a φ is rejected (before the end of the while loop); similarly, µ(i) is the assignment of i at that point.
It is obvious that α i a φ 1 > β i a φ 1 and the class C i x causing a φ to be rejected is not C i a φ 1 . By
Lemma 6(iv), all classes C i a φ t are tight in β, where C i a φ 1 ⊂ C i a φ t ⊆ C i x .
It can be checked all the conditions as stated in Lemma 8(ii) are satisfied. In particular,
β i x = q + (C i x ) ≥ α i x ; moreover, if β i j > q − (C i j ), C i j must be tight (by Lemma 6(iii)). So, we can find two sequences of classes {C i a φ t } l φ t=1 and {C i a † t } l † t=1 , where C i a φ l φ , C i a † l † ∈ c(C i x ′ ) and C i x ′ ⊆ C i x ,
with the following properties:
q + (C i a φ t ) ≥ |µ φ (i) ∩ C i a φ t | > |µ(i) ∩ C i a φ t | + ∆(C i a φ t ) ≥ q − (C i a φ t ), ∀t, 1 ≤ t ≤ l φ ; q − (C i a † t ) ≤ |µ φ (i) ∩ C i a † t | < |µ(i) ∩ C i a † t | + ∆(C i a † t ) ≤ q + (C i a † t ), ∀t, 1 ≤ t ≤ l † .
The second set of inequalities implies that the classes {C i a † t } l † t=1 are surplus in µ. Thus there exists an applicant a † ∈ (µ(i)\µ φ (i)) ∩ C i a † 1 . Since (i; a φ ) is the first by-passed stable pair, i ≻ a † µ φ (a † ) and since a φ is rejected instead of a † , a † ≻ i a φ . Now observe the tuple µ φ (i)| a φ a † is feasible due to the above two sets of strict inequalities. Thus we have a group (i; µ φ (i)| a φ a † ) to block µ φ , a contradiction.
⊓ ⊔ Lemma 10. At the termination of the algorithm, if there exists an institute i ∈ I such that ∆(C i ♯ ) > 0, there is no stable matching in the given instance.
Proof. Suppose, for a contradiction, that there exists an institute i with ∆(C i ♯ ) > 0 and there is a stable matching µ φ . Let µ be the assignment when the algorithm terminates. By Lemma 9, if an applicant is unmatched in µ, he cannot be assigned in µ φ either. So |µ φ | ≤ |µ|. In the following, ∆(·)s refer to values recorded in the final outcome of the algorithm. Consider two cases.
-Suppose that |µ φ (i)| > |µ(i) ∩ C i ♯ |. Then as |µ φ | ≤ |µ|, we can find another institute i ′ = i such that |µ φ (i ′ )| < |µ(i ′ ) ∩ C i ′ ♯ |. For each class C i ′ j ∈ C(i ′ ), let α i ′ j := |µ φ (i ′ ) ∩ C i ′ j | and β i ′ j := |µ(i ′ ) ∩ C i ′ j | + ∆(C i ′ j )
. It can be checked that the condition stated in Lemma 8(i) is satisfied (note that those β i ′ j fulfill the condition due to Lemma 6(iii)). Therefore, we can find a sequence of
classes {C i ′ a † t } l † t=1 , where C i ′ a † l † = C i ′ ♯ , and |µ φ (i ′ ) ∩ C i ′ a † t | < |µ(i ′ ) ∩ C i ′ a † t | + ∆(C i ′ a † t ) ≤ q + (C i ′ a † t ), ∀t, 1 ≤ t ≤ l † , where the second inequality follows from Invariant B. Then there exists an applicant a † ∈ (µ(i ′ )\µ φ (i ′ )) ∩ C i ′ a † 1 . By Lemma 9, i ′ ≻ a † µ φ (a † )
, giving us a group (i ′ ; µ φ (i ′ )|a † ) to block µ φ , a contradiction. Note the feasibility of µ φ (i ′ )|a † is due to the above set of strict inequalities.
-Suppose that |µ φ (i)| ≤ |µ(i) ∩ C i ♯ |.
We first claim that C i ♯ must be a surplus class in µ(i). If not,
then q − (C i ♯ ) = ∆(C i ♯ ) + |µ(i) ∩ C i ♯ | > |µ(i) ∩ C i ♯ |, implying that |µ φ (i)| ≥ q − (C i ♯ ) > |µ(i) ∩ C i ♯ |, a contradiction. So C i
♯ is a surplus class, and by Lemma 6(iii),
|µφ(i)| = Σ_{C^i_k ∈ c(C^i_♯)} |µφ(i) ∩ C^i_k| ≤ |µ(i) ∩ C^i_♯| < |µ(i) ∩ C^i_♯| + ∆(C^i_♯) = Σ_{C^i_k ∈ c(C^i_♯)} (|µ(i) ∩ C^i_k| + ∆(C^i_k)).
For each class C i j ∈ C(i), let α i j := |µ φ (i)∩C i j | and β i j := |µ(i)∩C i j |+∆(C i j ) and invoke Lemma 8(i). The above inequality implies that α i ♯ < β i ♯ and note that by Lemma 6(iii), the condition regarding β is satisfied. Thus we have a sequence of surplus classes
C i a † l † (= C i ♯ ) ⊃ · · · ⊃ C i a † 1 so that q − (C i a † t ) ≤ |µ φ (i) ∩ C i a φ t | < |µ(i) ∩ C i a † t | + ∆(C i a † t ) ≤ q + (C i a † t ), ∀t, 1 ≤ t ≤ l † , implying that there exists an applicant a † ∈ (µ(i)\µ φ (i)) ∩ C i a † 1 and i ≻ a † µ φ (a † ) by virtue of Lemma 9. The tuple µ φ (i)|a † is feasible because of the above set of strict inequalities. Now (i; µ φ (i)|a φ ) blocks µ φ , a contradiction.
⊓ ⊔ Lemma 11. Suppose that in the final outcome µ, for each institute i ∈ I, ∆(C i ♯ ) = 0. Then µ is a stable matching.
Proof. For a contradiction, assume that a group (i; g) blocks µ. Let a φ to be the highest ranking applicant in g\µ(i). Since a φ is part of the blocking group, he must have proposed to and been rejected by institute i during the execution of the algorithm, thus i ≻ a φ µ(a φ ). By Lemma 7, there
exists a class C i a φ l ‡ such that |µ(i) ∩ C i a φ l ‡ | + ∆(C i a φ l ‡ ) = |µ(i) ∩ C i a φ l ‡ | = q + (C i a φ l ‡ ). Moreover, it is obvious that |g ∩ C i a φ 1 | > |µ(i) ∩ C i a φ 1 |.
We now make use of Lemma 8(ii) by letting α i j := |g ∩ C i j | and β i j := |µ(i) ∩ C i j | for each class C i j ∈ C(i). Note that all classes are tight in β, C i a φ 1 ⊂ C i a φ l ‡ , and
|µ(i) ∩ C i a φ l ‡ | = q + (C i a φ l ‡ ) ≥ |g ∩ C i a φ l ‡ |,
satisfying all the necessary conditions. Thus, we can discover a sequence of classes
{C i a † t } l † t=1 stated in Lemma 8(iib), where C i a † l † ∈ c(C i a φ l ) and C i a φ 1 ⊂ C i a φ l ⊆ C i a φ l ‡ , such that q − (C i a † t ) ≤ |g ∩ C i a † t | < |µ(i) ∩ C i a † t | ≤ q + (C i a † t ), ∀j, 1 ≤ t ≤ l † , and there exists an applicant a † ∈ (µ(i)\g) ∩ C i a † 1 .
The above set of strict inequalities mean that all classes C i a † t , 1 ≤ t ≤ l † , are surplus classes in µ. Then a † forms part of the affluent set $(C i a φ l , µ(i)). By Lemma 7, they all rank higher than a φ . This contradicts our assumption that a φ is the highestranking applicant in g\µ(i).
⊓ ⊔ Lemma 12. Suppose that in the final outcome µ, for each institute i ∈ I, ∆(C i ♯ ) = 0. Then µ is an institute-pessimal stable matching.
Proof. Suppose, for a contradiction, that there exists a stable matching µ φ such that there exists an institute i which is lexicographically better off in µ than in µ φ . Let a † be the highest ranking
applicant in µ(i)\µ φ (i). By Lemma 9, i ≻ a † µ φ (i). If |µ φ (i) ∩ C i a † t | < |µ(i) ∩ C i a † t | ≤ q + (C i a † t ), for all classes C i a † t ∈ a † (C(i)), then (i; µ φ (i)|a φ ) blocks µ φ , a contradiction. So choose the smallest class C i x ∈ a † (C(i)) such that |µ φ (i) ∩ C i x | ≥ |µ(i) ∩ C i x |. It is clear that C i x ⊃ C i a † 1 . Now we apply Lemma 8(ii) by letting α i j := |µ(i) ∩ C i j | and β i j := |µ φ (i) ∩ C i j | for each class C i j ∈ C(i).
It can be checked all conditions stated in Lemma 8(ii) are satisfied. So there exists a class
C i x ′ such that C i a † 1 ⊂ C i x ′ ⊆ C i x and we can find two sequences of classes {C i a φ t } l φ t=1 and {C i a † t } l † t=1 , where C i a φ l φ , C i a † l † ∈ c(C i x ′ )
, with the following properties:
q + (C i a † t ) ≥ |µ(i) ∩ C i a † t | > |µ φ (i) ∩ C i a † t | ≥ q − (C i a † t ), ∀t, 1 ≤ t ≤ l † ; q − (C i a φ t ) ≤ |µ(i) ∩ C i a φ t | < |µ φ (i) ∩ C i a φ t | ≤ q + (C i a φ t ), ∀t, 1 ≤ t ≤ l φ .
The second set of inequalities implies that we can find an applicant a φ ∈ (µ φ (i)\µ(i)) ∩ C i a φ 1 . Recall that we choose a † to be the highest ranking applicant in µ(i)\µ φ (i), so a † ≻ i a φ . Now we have a group (i; µ φ (i)| a φ a † ) to block µ φ to get a contradiction. The feasibility of µ φ (i)| a φ a † is due to the above two sets of strict inequalities.
⊓ ⊔ Based on Lemmas 9, 10, 11, and 12, we can draw the conclusion in this section.
Theorem 13. In O(m 2 ) time,
where m is the total size of all preferences, the proposed algorithm discovers the applicant-optimal-institute-pessimal stable matching if stable matchings exist in the given LCSM instance; otherwise, it correctly reports that there is no stable matching. Moreover, if there is no lower bound on the classes, there always exists a stable matching.
To see the complexity, first note that there can be only O(m) proposals. The critical thing in the implementation of our algorithm is to find out the lowest ranking applicant in each affluent set efficiently. This can be done by remembering the lowest ranking applicant in each class and this information can be updated in each proposal in O(m) time, since the number of classes of each institute is O(m), given that the classes form a laminar family.
Structures of Laminar Classified Stable Matching
Recall that we define the "absorption" operation as follows. Given a family of classes B, ℜ(B) returns the set of classes which are not entirely contained in other classes in B. Note that in LCSM, ℜ(B) will be composed of a pairwise disjoint set of classes.
We review the well-known rural hospitals theorem [8,15].
Theorem 14. (Rural Hospitals Theorem) In the hospitals/residents problem, the following holds.
(i) A hospital gets the same number of residents in all stable matchings, and as a result, all stable matchings are of the same cardinality. (ii) A resident who is assigned in one stable matching gets assigned in all other stable matchings;
conversely, an unassigned resident in a stable matching remains unassigned in all other stable matchings. (iii) An under-subscribed hospital gets the same set of residents in all other stable matchings.
It turns out that rural hospitals theorem can be generalized in LCSM. On the other hand, if some institutes use intersecting classes in their classifications, rural hospitals theorem fails (stable matching size may differ). See the appendix for such an example.
Theorem 15. (Generalized Rural Hospitals Theorem in LCSM) Let µ be a stable matching. Given any institute i, suppose that B is the set of bottleneck classes in µ(i) and D is the subset of classes in C(i) such that ℜ(B) ∪ D partitions L i . The following holds.
(i) An institute gets the same number of applicants in all stable matchings, and as a result, all stable matchings are of the same cardinality. (ii) An applicant who is assigned in one stable matching gets assigned in all other stable matchings;
conversely, an unassigned applicant in a stable matching remains unassigned in all other stable matchings. (iii) Every class C^i_k ∈ ℜ(B) ∪ D has the same number of applicants in all stable matchings. (iv) In a class C^i_k ⊆ C ∈ D, or in a class C^i_k which contains only classes in D, the same set of applicants in class C^i_k will be assigned to institute i in all stable matchings.
(v) A class C i k can have different sets of applicants in different stable matchings only if C i k ⊆ C ∈ ℜ(B) or C i k ⊇ C ∈ ℜ(B).
Proof. We choose µ † to be the applicant-optimal stable matching.
Claim A: Suppose that a ∈ µ † (i)\µ(i). Then there exists a class C i al ∈ a(C(i)) such that (i) |µ(i) ∩ C i al | = q + (C i al ), and (ii) a ∈ C i al ⊆ C ∈ ℜ(B). Proof of Claim A. If for all classes C i at ∈ a(C(i)), |µ(i) ∩ C i at | < q + (C i at )
, then as µ† is applicant-optimal, i ≻_a µ(a), so (i; µ(i)|a) blocks µ, a contradiction. This establishes (i); (ii) follows easily. ⊓ ⊔
Let B̂ ⊆ B be the subset of these bottleneck classes containing at least one applicant of µ†(i)\µ(i). By Claim A(ii), ℜ(B̂) ⊆ ℜ(B). This implies that for all classes C^i_k ∈ (ℜ(B)\ℜ(B̂)) ∪ D, |µ(i) ∩ C^i_k| ≥ |µ†(i) ∩ C^i_k|. Combining this fact with Claim A(ii), we have
|µ(i)| = Σ_{C^i_k ∈ (ℜ(B)\ℜ(B̂)) ∪ D} |µ(i) ∩ C^i_k| + Σ_{C^i_k ∈ ℜ(B̂)} |µ(i) ∩ C^i_k|
      ≥ Σ_{C^i_k ∈ (ℜ(B)\ℜ(B̂)) ∪ D} |µ†(i) ∩ C^i_k| + Σ_{C^i_k ∈ ℜ(B̂)} q^+(C^i_k)   (*)
      ≥ Σ_{C^i_k ∈ (ℜ(B)\ℜ(B̂)) ∪ D} |µ†(i) ∩ C^i_k| + Σ_{C^i_k ∈ ℜ(B̂)} |µ†(i) ∩ C^i_k| = |µ†(i)|.
Thus, |µ| ≥ |µ † | and it cannot happen that |µ| > |µ † |, otherwise, there exists an applicant who is assigned in µ but not in µ † . This contradicts the assumption that the latter is applicant-optimal. This completes the proof of (i) and (ii) of the theorem.
Since |µ| = |µ † |, Inequality (*) holds with equality. We make two observations here.
Observation 1: For each class C i k ∈ ℜ(B), it is also a bottleneck in µ † (i). Observation 2: an applicant a ∈ µ † (i)\µ(i) must belong to a bottleneck class in µ † (i).
Let B† be the set of bottleneck classes in µ†(i) and choose D† so that ℜ(B†) ∪ D† partitions L_i. By Observation 2, each applicant in µ†(i) ∩ C^i_k, where C^i_k ∈ D†, must be part of µ(i). So for each class C^i_k ∈ D†, |µ(i) ∩ C^i_k| ≥ |µ†(i) ∩ C^i_k|. We claim that it cannot happen that |µ(i) ∩ C^i_k| > |µ†(i) ∩ C^i_k|: otherwise, since |µ(i)| = |µ†(i)|, one of the following two cases would arise.
- There exists another class C^i_{k'} ∈ D† so that |µ(i) ∩ C^i_{k'}| < |µ†(i) ∩ C^i_{k'}|. Then we have a contradiction to Observation 2.
-There exists another class
C i k ′ ∈ ℜ(B † ) so that |µ(i)∩C i k ′ | < |µ † (i)∩C i k ′ |.
For each class C i j ∈ C(i), let α i j := |µ(i) ∩ C i j | and β i j := |µ † (i) ∩ C i j |. Then we can invoke Lemma 8(i) and find an applicant a φ ∈ µ † (i)\µ(i) so that for each class C i
a φ t ∈ a φ (C(i)), C i a φ t ⊆ C i k ′ , |µ(i) ∩ C i a φ t | < |µ † (i) ∩ C i a φ t | ≤ q + (C i a φ t ).
Then by Claim A(ii) and Observation 1, there must exist another class C i k ′′ ∈ ℜ(B) containing a φ and C i k ′′ ⊃ C i k ′ . By Observation 1, C i k ′′ is also a bottleneck class in µ † (i). This contradicts the assumption that C i k ′ ∈ ℜ(B † ). So we have that for each class
C i k ∈ D † , |µ(i) ∩ C i k | = |µ † (i) ∩ C i k |.
For each class C i k ∈ B † , we can use the same argument to show that |µ(i) ∩ C i k | = |µ † (i) ∩ C i k |. This gives us (iii) and (iv). (v) is a consequence of (iv).
⊓ ⊔
NP-completeness of P-Classified Stable Matching
Theorem 16. Suppose that the set of posets P = {P 1 , P 2 , · · · , P k } contains a poset which is not a downward forest. Then it is NP-complete to decide the existence of a stable matching in P-classified stable matching. This NP-completeness holds even if there is no lower bound on the classes.
Our reduction is from one-in-three sat. It is involved and technical, so we just highlight the idea here. As P must contain a poset that has a "V " in it, some institutes use intersecting classes. In this case, even if there is no lower bound on the classes, it is possible that the given instance disallows any stable matching. We make use of this fact to design a special gadget. The main technical difficulty of our reduction lies in that in the most strict case, we can use at most two classes in each institute's classification.
Polyhedral Approach
In this section, we take a polyhedral approach to studying LCSM. We make the simplifying assumption that there is no lower bound. In this scenario, we can use a simpler definition to define a stable matching.
Lemma 17. In LCSM, if there is no lower bound, i.e., q^-(C^i_j) = 0 for every class C^i_j, then a stable matching as defined in Definition 2 can be equivalently defined as follows. A feasible matching µ is stable if and only if there is no blocking pair. A pair (i, a) is blocking, given that µ(i) = (a_{i1}, a_{i2}, ..., a_{ik}), k ≤ Q(i), if
- i ≻_a µ(a);
- for any class C^i_{at} ∈ a(C(i)), |L^i_{≻a} ∩ µ(i) ∩ C^i_{at}| < q^+(C^i_{at}).
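As an illustration of this simplified stability condition, the following sketch tests whether a given feasible matching admits a blocking pair; the data model (preference dicts, classes of an institute as frozensets with upper bounds keyed by (institute, class)) is our own and purely illustrative.

def is_stable(mu, applicant_pref, inst_rank, inst_classes, q_hi):
    """mu: a -> institute or None; applicant_pref: a -> list of institutes (best first);
       inst_rank: i -> {a: rank} (smaller = better); inst_classes: i -> list of frozensets
       of applicants forming a laminar family that includes i's whole list (quota Q(i));
       q_hi: (i, class) -> upper bound."""
    assigned = {i: set() for i in inst_rank}
    for a, i in mu.items():
        if i is not None:
            assigned[i].add(a)
    for a, prefs in applicant_pref.items():
        for i in prefs:
            if mu[a] is not None and prefs.index(mu[a]) <= prefs.index(i):
                break                       # a does not prefer i to his current assignment
            better = {b for b in assigned[i] if inst_rank[i][b] < inst_rank[i][a]}
            # (i, a) blocks iff no class of i containing a is already filled, within its
            # upper bound, by applicants whom i prefers to a (Lemma 17)
            if all(len(better & c) < q_hi[(i, c)] for c in inst_classes[i] if a in c):
                return False                # (i, a) is a blocking pair
    return True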
The definition of blocking pairs suggests a generalization of the comb used by Baïou and Balinski [3].
Definition 18. Let Γ ⊆ I × A denote the set of acceptable institute-applicant pairs. The shaft S(A_i), based on a feasible tuple A_i of institute i, is defined as
S(A_i) = {(i, a') ∈ Γ : ∀C^i_j ∈ a'(C(i)), |L^i_{≻a'} ∩ A_i ∩ C^i_j| < q^+(C^i_j)}.
The tooth T(i, a) is defined for every (i, a) ∈ Γ as
T(i, a) = {(i', a) ∈ Γ : i' ⪰_a i}.
In words, (i, a ′ ) forms part of the shaft S(A i ), only if the collection of a ′ and all applicants in A i ranking strictly higher than a ′ does not violate the quota of any class in a ′ (C(i)). We often refer to an applicant a ∈ A i as a tooth-applicant.
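To make Definition 18 concrete, here is a small sketch computing shafts and teeth under the same illustrative data model (acceptable[i] is assumed to be the set of applicants on institute i's list).

def shaft(i, A_i, inst_rank, inst_classes, q_hi, acceptable):
    """S(A_i): pairs (i, a2) such that the members of A_i whom institute i prefers to a2
       do not already exhaust the upper bound of any class of i containing a2."""
    S = set()
    for a2 in acceptable[i]:
        higher = {b for b in A_i if inst_rank[i][b] < inst_rank[i][a2]}
        if all(len(higher & c) < q_hi[(i, c)] for c in inst_classes[i] if a2 in c):
            S.add((i, a2))
    return S

def tooth(i, a, applicant_pref):
    """T(i, a): pairs (i2, a) with i2 ranked at least as high as i on a's list."""
    prefs = applicant_pref[a]
    return {(i2, a) for i2 in prefs[: prefs.index(i) + 1]}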
We associate a |Γ |-vector x µ (or simply x when the context is clear) with a matching µ: x µ ia = 1 if µ(a) = i, otherwise, x µ ia = 0. Suppose thatΓ ⊆ Γ . Then x(Γ ) = (i,a)∈Γ x ia . We define a comb K(i, S(A i )) as the union of the teeth {T (i, a i )} a i ∈A i and the shaft S(A i ).
Lemma 19. Every stable matching solution x satisfies the comb inequality for any comb K(i, S(A i )):
x(K(i, S(A_i))) ≡ x(S(A_i)) + Σ_{a_j ∈ A_i} x(T(i, a_j)\{(i, a_j)}) ≥ |A_i|.
It takes a somewhat involved counting argument to prove this lemma. Here is the intuition for why the comb inequality captures the stability condition of a matching. The value of the tooth x(T(i, a)) reflects the "happiness" of the applicant a ∈ A_i: if x(T(i, a)) = 0, applicant a has reason to shift to institute i. On the other hand, the value collected from the shaft x(S(A_i)) indicates the "happiness" of institute i, namely whether it is getting enough high-ranking applicants (of the "right" classes). An overall small comb value x(K(i, S(A_i))) thus expresses the likelihood of a blocking group consisting of i and some of the applicants in A_i. Now let K_i denote the set of all combs of institute i. We write down the linear program:
Σ_{i:(i,a)∈Γ} x_{ia} ≤ 1,   ∀a ∈ A   (1)
Σ_{a:(i,a)∈Γ, a∈C^i_j} x_{ia} ≤ q^+(C^i_j),   ∀i ∈ I, ∀C^i_j ∈ C(i)   (2)
x(K(i, S(A_i))) = Σ_{(i,a)∈K(i,S(A_i))} x_{ia} ≥ |A_i|,   ∀K(i, S(A_i)) ∈ K_i, ∀i ∈ I   (3)
x_{ia} ≥ 0,   ∀(i, a) ∈ Γ   (4)
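The next sketch evaluates the left-hand side of a comb inequality (Constraint (3)) for a given fractional point x, reusing the shaft and tooth helpers from the previous sketch; it is only a checker for a single comb, since enumerating all combs in K_i is exponential in general.

def comb_value(i, A_i, x, inst_rank, inst_classes, q_hi, acceptable, applicant_pref):
    """Left-hand side of the comb inequality: x(S(A_i)) plus, for every tooth-applicant
       a in A_i, x(T(i, a) \ {(i, a)}).  Constraint (3) requires a value >= len(A_i)."""
    total = sum(x.get(p, 0.0)
                for p in shaft(i, A_i, inst_rank, inst_classes, q_hi, acceptable))
    for a in A_i:
        total += sum(x.get(p, 0.0) for p in tooth(i, a, applicant_pref) if p != (i, a))
    return total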
Suppose there is no classification, i.e., Hospitals/Residents problem. Then this LP reduces to the one formulated by Baïou and Balinski [3]. However, it turns out that this polytope is not integral. The example in Figure 2 demonstrates the non-integrality of the polytope. In particular, observe that since µ is applicant-optimal, in all other stable matchings, applicant a 3 can only be matched to i 5 . However, the value x i 1 a 3 = 0.2 > 0 indicates that x is outside of the convex hull of integral stable matchings.
Here we make a critical observation. Suppose that in a certain matching µ φ , applicant a 3 is assigned to i 1 . Then a 2 cannot be assigned to i 1 due to the bound q + (C 1 1 ) (see Constraint (2)). If µ φ is to be stable, then a 2 must be assigned to some institute ranking higher than i 1 on his list (in this example there is none), otherwise, (i, µ φ (i 1 )| a 3 a 2 ) is bound to be a blocking group in µ φ . Thus, the required constraint to avoid this particular counter-example can be written as
x(T(i_1, a_2)\{(i_1, a_2)}) ≥ x_{i_1 a_3}.
Institute Preferences / Classifications / Class bounds:
i1: a1 a6 a7 a2 a3      C^1_1 = {a2, a3}      Q(i1) = 2, q^+(C^1_1) = 1
i2: a4 a7                                     Q(i2) = 1
i3: a2 a4                                     Q(i3) = 1
i4: a5 a6                                     Q(i4) = 1
i5: a3 a5 a7 a1         C^5_1 = {a3, a5}      Q(i5) = 2, q^+(C^5_1) = 1
Fig. 2. The polytope determined by Constraints (1)-(4) is not integral. Since µ is applicant-optimal, in all other stable matchings applicant a3 can only be matched to i5. However, the value x_{i1a3} = 0.2 > 0 indicates that x is outside of the convex hull of integral stable matchings.
We now formalize the above observation. Given any class C^i_j ∈ C(i), we define a class-tuple t^i_j = (a_{i1}, a_{i2}, ..., a_{iq^+(C^i_j)}). Such a tuple fulfills the following two conditions:
1. t^i_j ⊆ C^i_j;
2. if C^i_j is a non-leaf class, then for any subclass C^i_k of C^i_j, |t^i_j ∩ C^i_k| ≤ q^+(C^i_k).
Let L^i_{≺t^i_j} denote the set of applicants ranking lower than all applicants in t^i_j and L^i_{⪰t^i_j} the set of applicants ranking at least as high as the lowest-ranking applicant in t^i_j.
Lemma 20. Every stable matching solution x satisfies the following inequality for any class-tuple t i j :
Σ_{a_{ij} ∈ t^i_j} x(T(i, a_{ij})\{(i, a_{ij})}) ≥ Σ_{a ∈ C^i_j ∩ L^i_{≺t^i_j}} x_{ia}.
As before, it takes a somewhat involved counting argument to prove the lemma, but its basic idea is already portrayed in the above example. Now let T^i_j denote the set of class-tuples in class C^i_j ∈ C(i) and L^i_{≺t^i_j} denote the set of applicants ranking lower than all applicants in t^i_j. We add the following sets of constraints.
Σ_{a_{ij} ∈ t^i_j} x(T(i, a_{ij})\{(i, a_{ij})}) ≥ Σ_{a ∈ C^i_j ∩ L^i_{≺t^i_j}} x_{ia},   ∀t^i_j ∈ T^i_j, ∀T^i_j   (5)
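A direct check of Constraint (5) for one class-tuple can likewise be written down; in the sketch below, t is assumed to be a valid class-tuple of the class c (a frozenset) of institute i, and all other names follow the illustrative data model used above.

def class_tuple_ok(i, c, t, x, inst_rank, applicant_pref, acceptable):
    """Constraint (5) for a class-tuple t of class c of institute i: the x-weight that
       members of t place on institutes they prefer to i must cover the x-weight that
       institute i receives from members of c ranked below every member of t."""
    lhs = 0.0
    for a in t:
        prefs = applicant_pref[a]
        # x(T(i, a) \ {(i, a)}): weight a puts on institutes strictly better than i
        lhs += sum(x.get((i2, a), 0.0) for i2 in prefs[:prefs.index(i)])
    worst = max(inst_rank[i][a] for a in t)          # lowest-ranking member of t
    rhs = sum(x.get((i, a), 0.0)
              for a in c if a in acceptable[i] and inst_rank[i][a] > worst)
    return lhs >= rhs - 1e-9                         # tolerance for floating point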
Let P f sm denote the set of all solutions satisfying (1)-(5) and P sm the convex hull of all (integral) stable matchings. In this section, our main result is P f sm = P sm . We say (i, a) are matched under x if x ia > 0.
Definition 21. Let x ∈ P_fsm and let Ω_i(x) be the set of applicants that are matched to institute i under x, composed of a_{i1}, a_{i2}, ..., ordered based on the decreasing preference of institute i.
1. Define H_i(x) as a tuple composed of applicants chosen by the following procedure: add a_{ij} greedily unless adding the next applicant into H_i(x) would cause H_i(x) to violate the quota of some class. Equivalently, a_{il} ∉ H_i(x) only if there exists a class C^i_j ∈ a_{il}(C(i)) such that |H_i(x) ∩ {a_{it}}_{t=1}^{l−1}| = q^+(C^i_j).
2. Define E_i(x) as the set of applicants a with x_{ia} > 0 and x(T(i, a)\{(i, a)}) = 0, i.e., those matched to i under x for whom i is the highest-ranking institute to which they are matched.
Lemma 22. Let x ∈ P_fsm. Then for every institute i and every class C^i_j ∈ C(i), |E_i(x) ∩ C^i_j| ≤ q^+(C^i_j).
Proof. We proceed by induction on the height of C^i_j in the tree structure of C(i). The base case is a leaf class. If |E_i(x) ∩ C^i_j| > q^+(C^i_j), form a class-tuple by picking the first q^+(C^i_j) applicants in E_i(x) ∩ C^i_j; then Constraint (5) is violated by such a class-tuple. For the induction step, if |E_i(x) ∩ C^i_j| > q^+(C^i_j), again choose the q^+(C^i_j) highest-ranking applicants in E_i(x) ∩ C^i_j; we claim they form a class-tuple of C^i_j, the reason being that by the induction hypothesis, given any C^i_k ⊂ C^i_j, |E_i(x) ∩ C^i_k| ≤ q^+(C^i_k). Constraint (5) is again violated by such a class-tuple.
⊓ ⊔ Lemma 23. Suppose that x ∈ P f sm .
(i) For each institute i ∈ I, we can find two sets U and V of pairwise disjoint classes so that U ∪ V partitions L_i and all applicants in Ω_i(x)\H_i(x) belong to the classes in U. Moreover,
(ia) |H_i(x)| = Σ_{C^i_k ∈ U} q^+(C^i_k) + Σ_{C^i_k ∈ V} |H_i(x) ∩ C^i_k|;
(ib) for each class C^i_k ∈ U, |H_i(x) ∩ C^i_k| = |E_i(x) ∩ C^i_k| = q^+(C^i_k); for each class C^i_k ∈ V and each applicant a ∈ C^i_k, if x_{ia} > 0, then x_{ia} = 1;
(ic) for each class C^i_k ∈ U, Σ_{a ∈ C^i_k} x_{ia} = q^+(C^i_k).
(ii) For every applicant a ∈ H_i(x), x(T(i, a)) = Σ_{i∈I} x_{ia} = 1; moreover, given any two institutes i, i' ∈ I, H_i(x) ∩ H_{i'}(x) = ∅.
(iii) |H_i(x)| = |E_i(x)| for all institutes i ∈ I.
(iv) Σ_{a∈A} x_{ia} = |E_i(x)| for all institutes i ∈ I.
Proof. For (i), given any applicant a ∈ Ω i (x)\H i (x), by Definition 21, there exists some class C i j ∈ a(C(i)) for which |H i (x) ∩ C i j | = q + (C i j ). Let B be the set of classes C i j which contain at least one applicant in Ω i (x)\H i (x) and |C i j ∩ H i (x)| = q + (C i j ). Let U := ℜ(B) and choose V in such a way so that U ∪ V partitions L i . Now (ia) is a consequence of counting. We will prove (ib)(ic) afterwards.
For (ii), by definition of H i (x), none of the applicants in Ω i (x)\H i (x) contributes to the shaft x(S(H i (x))). As a result, for Constraint (3) to hold for the comb K(i, S(H i (x))), every tooth-applicant a ∈ H i (x) must contribute at least 1, and indeed, by Constraint (1), exactly 1. So we have the first statement of (ii). The second statement holds because it cannot happen that x(T (i, a)) = x(T (i ′ , a)) = 1, given that x ia > 0 and x i ′ a > 0.
For (iii), by Definition 21, all sets E_i(x) are disjoint; thus, every applicant who is matched under x belongs to exactly one E_i(x) and at most one H_i(x) by (ii). Therefore, Σ_{i∈I} |E_i(x)| ≥ Σ_{i∈I} |H_i(x)|, and we just need to show that for each institute i, |E_i(x)| ≤ |H_i(x)|, and this follows by using (ia):
|H_i(x)| = Σ_{C^i_k∈U} q^+(C^i_k) + Σ_{C^i_k∈V} |H_i(x) ∩ C^i_k| ≥ Σ_{C^i_k∈U} |E_i(x) ∩ C^i_k| + Σ_{C^i_k∈V} |E_i(x) ∩ C^i_k| = |E_i(x)|,   (6)
where the inequality follows from Lemma 22 and the fact all applicants in Ω i (x)\H i (x) are in classes in U . So this establishes (iii). Moreover, as Inequality (6) must hold with equality throughout, for each class C i k ∈ V , if applicant a ∈ C i k is matched to institute i under x, he must belong to both H i (x) and E i (x), implying x ia = 1; given any class
C i k ∈ U , |H i (x) ∩ C i k | = |E i (x) ∩ C i k | = q + (C i k ). So we have (ib).
For (iv), consider the comb K(i, S(E i (x))). By definition, x(T (i, a)\{(i, a)}) = 0 for each applicant a ∈ E i (x). So
x(K(i, S(E_i(x)))) = x(S(E_i(x))) = Σ_{C^i_k∈V} |E_i(x) ∩ C^i_k| + Σ_{C^i_k∈U} Σ_{a'∈C^i_k, (i,a')∈S(E_i(x))} x_{ia'} ≤ Σ_{C^i_k∈V} |E_i(x) ∩ C^i_k| + Σ_{C^i_k∈U} q^+(C^i_k) = |E_i(x)|,
where the inequality follows from Constraint (2) and the rest can be deduced from (ib). By Constraint (3), the above inequality must hold with equality. So for each class
C^i_k ∈ U, Σ_{a'∈C^i_k, (i,a')∈S(E_i(x))} x_{ia'} = Σ_{a'∈C^i_k} x_{ia'} = q^+(C^i_k)
, giving us (ic) and implying that there is no applicant in C i k ∈ U who is matched to institute i under x ranking lower than all applicants in E i (x) ∩ C i k . The proof of (iv) follows by
Σ_{a∈A} x_{ia} = Σ_{C^i_k∈V} Σ_{a∈C^i_k} x_{ia} + Σ_{C^i_k∈U} Σ_{a∈C^i_k} x_{ia} = Σ_{C^i_k∈V} |E_i(x) ∩ C^i_k| + Σ_{C^i_k∈U} q^+(C^i_k) = |E_i(x)|.
⊓ ⊔
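The quantities of Definition 21 are straightforward to compute for a given fractional point x. The sketch below does so under the same illustrative data model; note that E_i(x) is computed from the property used in the proof of Lemma 23 (x(T(i, a)\{(i, a)}) = 0), which is how we read the definition here.

def H_and_E(i, x, inst_rank, inst_classes, q_hi, applicant_pref):
    """H_i(x) and E_i(x) of Definition 21 for a fractional point x (dict (i, a) -> value)."""
    omega = sorted((a for (j, a), v in x.items() if j == i and v > 1e-12),
                   key=lambda a: inst_rank[i][a])               # Omega_i(x), best first
    H = []
    for a in omega:                                             # greedy construction
        if all(sum(1 for b in H if b in c) < q_hi[(i, c)]
               for c in inst_classes[i] if a in c):
            H.append(a)
    E = []
    for a in omega:
        prefs = applicant_pref[a]
        # a is in E_i(x) iff a puts no x-weight on institutes he prefers to i
        if sum(x.get((i2, a), 0.0) for i2 in prefs[: prefs.index(i)]) < 1e-12:
            E.append(a)
    return H, E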
Packing Algorithm
We now introduce a packing algorithm to establish the integrality of the polytope. Our algorithm is generalized from that proposed by Sethuraman, Teo, and Qian [22]. Given x ∈ P_fsm, for each institute i, we create |E_i(x)| "bins," each of size (height) 1; each bin is indexed by (i, j), where 1 ≤ j ≤ |E_i(x)|. Each x_{ia} > 0 is an "item" to be packed into the bins. Bins are filled from the bottom to the top. When the context is clear, we often refer to the items x_{ia} simply as applicants; if applicant a ∈ C^i_j, then the item x_{ia} is said to belong to the class C^i_j. In Phase 0, each institute i puts the items x_{ia} with a ∈ H_i(x) into its |E_i(x)| bins, one item per bin (recall that |H_i(x)| = |E_i(x)| by Lemma 23(iii)). In the following phases, t = 1, 2, ..., our algorithm proceeds by first finding the set L_t of bins with maximum available space;
then assigning each of the bins in L t one item.
The assignment in each phase proceeds by steps, indexed by l = 1, 2, · · · , |L t |. The order of the bins in L t to be examined does not matter. How the institute i chooses the items to be put into its bins is the crucial part in which our algorithm differs from that of Sethuraman, Teo, and Qian. We maintain the following invariant.
Invariant C: The collection of the least preferred items in the |E_i(x)| bins (i.e., the items currently on top of institute i's bins) should respect the quotas of the classes in C(i).
Subject to this invariant, institute i chooses the best remaining item and adds it into the bin (i, j), which has the maximum available space in the current phase. This unavoidably raises another issue: how can we be sure that there is at least one remaining item for institute i to put into the bin (i, j) without violating Invariant C? We will address this issue in our proof.
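The following Python sketch mirrors the packing procedure just described (Phase 0 plus the later phases driven by maximum available space); it reuses the H_i(x)/E_i(x) lists of Definition 21, e.g. as produced by the H_and_E helper above, and the guard at the end is ours — Theorem 24(i) shows it is never triggered when x ∈ P_fsm.

def pack(x, inst_rank, inst_classes, q_hi, E, H):
    """Sketch of the packing procedure. E, H: i -> lists as in Definition 21.
       Returns bins: (i, j) -> list of applicants, bottom to top."""
    bins, remaining = {}, {i: [] for i in inst_rank}
    for (i, a), v in x.items():
        if v > 1e-12:
            remaining[i].append(a)
    for i in inst_rank:                          # Phase 0: one item of H_i(x) per bin
        for j, a in enumerate(H[i]):
            bins[(i, j)] = [a]
            remaining[i].remove(a)

    def gap(b):                                  # available space of a bin
        i = b[0]
        return 1.0 - sum(x[(i, a)] for a in bins[b])

    def keeps_invariant_c(i, b, cand):
        # tops of i's bins after cand is placed on top of bin b must respect all quotas
        tops = [bins[(i, j)][-1] for j in range(len(H[i])) if (i, j) != b] + [cand]
        return all(sum(1 for t in tops if t in c) <= q_hi[(i, c)] for c in inst_classes[i])

    while True:
        open_bins = {b: gap(b) for b in bins if gap(b) > 1e-9}
        if not open_bins or not any(remaining.values()):
            return bins
        alpha = max(open_bins.values())          # maximum available space of this phase
        progressed = False
        for b in [b for b, g in open_bins.items() if abs(g - alpha) < 1e-9]:   # the set L_t
            i = b[0]
            for a in sorted(remaining[i], key=lambda a: inst_rank[i][a]):
                if keeps_invariant_c(i, b, a):   # best remaining item respecting Invariant C
                    bins[b].append(a)
                    remaining[i].remove(a)
                    progressed = True
                    break
        if not progressed:                       # cannot happen for x in P_fsm (Theorem 24(i))
            return bins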
Theorem 24. Let x ∈ P_fsm. Let M_{i,j} be the set of applicants assigned to bin (i, j) at the end of any step of the packing procedure and a_{i,j} be the lowest-ranking applicant of institute i in bin (i, j) (implying that x_{i a_{i,j}} is on top of bin (i, j)). Then
(i) In any step, suppose that the algorithm is examining bin (i, j). Then institute i can find at least one item among its remaining items to add into bin (i, j) without violating Invariant C;
(ii) For all bins (i, j), x(M_{i,j}\{a_{i,j}}) + x(T(i, a_{i,j})) = x(M_{i,j}) + x(T(i, a_{i,j})\{(i, a_{i,j})}) = 1;
(iii) At the end of any step, institute i can organize a comb K(i, S(A_i)), where A_i is composed of the applicants in {a_{i,j'}}_{j'=1}^{|E_i(x)|}, so that x(K(i, S(A_i))) = Σ_{j'=1}^{|E_i(x)|} x(M_{i,j'}) + Σ_{j'=1}^{|E_i(x)|} x(T(i, a_{i,j'})\{(i, a_{i,j'})}) = |E_i(x)|;
(iv) At the end of any step, an item x_{ia} is not put into institute i's bins if and only if there exists a class C^i_{at} ∈ a(C(i)) so that |{a_{i,j'}}_{j'=1}^{|E_i(x)|} ∩ C^i_{at} ∩ L^i_{≻a}| = q^+(C^i_{at});
(v) If x_{ia} is packed and x_{i'a} is not, then i' ≻_a i;
(vi) At the end of any phase, the a_{i,j} in all bins are distinct. In particular, for any applicant a who is matched under x, there exists some bin (i, j) such that a = a_{i,j}.
Proof. We first assume that (ii) holds and prove (i). Observe that (ii) implies that given any applicant a ∈ E i (x), its corresponding item x ia , if already put into a bin, must be on its top and fills it completely. Since (i, j) currently has available space, at least one applicant in E i (x) is not in institute i's bins yet. We claim that there exists at least one remaining applicant in E i (x) that can be added into bin (i, j). Suppose not. Let the set of applicants in E i (x) that are not put into i's bins be G. Given any applicant a ∈ G, there must exist some class
C^i_k ∈ a(C(i)) for which |{a_{i,j'} : 1 ≤ j' ≤ |E_i(x)|, j' ≠ j} ∩ C^i_k| = q^+(C^i_k). Let B be the set of classes C^i_k that contain at least one applicant in G and satisfy |{a_{i,j'} : 1 ≤ j' ≤ |E_i(x)|, j' ≠ j} ∩ C^i_k| = q^+(C^i_k). Let G' be (E_i(x)\G) \ ∪_{C^i_k∈ℜ(B)} C^i_k, the subset of applicants in E_i(x) that are already put into the bins but do not belong to any class in ℜ(B). Note that none of the applicants in G' can be in the bin (i, j). Thus, counting over all bins other than (i, j), we have
|E_i(x)| − 1 ≥ |G'| + Σ_{C^i_k∈ℜ(B)} |{a_{i,j'} : 1 ≤ j' ≤ |E_i(x)|, j' ≠ j} ∩ C^i_k| = |G'| + Σ_{C^i_k∈ℜ(B)} q^+(C^i_k).
Note that all applicants in E i (x)\G ′ are in some class in ℜ(B) (either they are already put into the bins or not). Then by the pigeonhole principle, there is at least one class C i k ∈ ℜ(B) for which
|(E i (x)\G ′ ) ∩ C i k | > q + (C i k ), contradicting Lemma 22.
We now prove (ii)-(vi) by induction on the number of phases. In the beginning, (ii)(v)(vi) holds by Lemma 23(ii)(iii). (iii)(iv) hold by setting A i := H i (x) and observation Definition 21 and Lemma 23(ii).
Suppose that the theorem holds up to Phase t. Let α be the maximum available space in Phase t + 1. Suppose that the algorithm is examining bin (i, j) and institute i chooses item x ia to be put into this bin. From (vi) of the induction hypothesis, applicant a is on top of another bin (i ′ , j ′ ), where i ′ = i, in the beginning of phase t + 1. Then by (ii)(v) of the induction hypothesis,
x(T (i, a)) ≤ x(T (i ′ , a)) − x i ′ a = 1 − x(M i ′ ,j ′ ) ≤ α,(7)
where the last inequality follows from our assumption that in Phase t + 1 the maximum available space is α. Note also that
if x(T(i, a)) = α, then (i', j') ∈ L_{t+1} (bin (i', j') is also examined in Phase t + 1).   (8)
Assume that A_i is the tuple composed of the applicants in {a_{i,j'}}_{j'=1}^{|E_i(x)|}. For our induction step, let Ā_i := A_i|_{a_{i,j}}^{a}, i.e., A_i with a_{i,j} replaced by a.
We first prove (iv). Since x_{ia} is not put into the bins before this step, by (iv) of the induction hypothesis, there exists some class C^i_{al} ∈ a(C(i)) for which |A_i ∩ C^i_{al} ∩ L^i_{≻a}| = q^+(C^i_{al}).
Let C i al be the smallest such class. Since x ia is allowed to put on top of x ia i,j , a ij ≻ i a and a ij ∈ C i al , otherwise, Invariant C regarding q + (C i al ) is violated. Now we show that all other items x ia ′ fulfill the condition stated in (iv). There are two cases.
-Suppose that x ia ′ is not put into the bins yet.
• Suppose that a i,j ≻ i a ′ ≻ i a. We claim that it cannot happen that for all classes C i a ′ t ∈ a ′ (C(i)),
|A i ∩ C i a ′ t ∩ L i ≻a ′ | < q + (C i a ′ t )
, otherwise, A i | a a ′ is still feasible, in which case institute i would have chosen x ia ′ , instead of x ia to put into bin (i, j), a contradiction.
• Suppose that a i,j ≻ i a ≻ i a ′ . By (iv) of the induction hypothesis, there exists a class C i a ′ l ′ ∈ a ′ (C(i)) for which
|A i ∩ C i a ′ l ′ ∩ L i ≻a ′ | = q + (C i a ′ l ′ ). If C i a ′ l ′ ⊂ C i al , it is easy to see that |A i ∩ C i a ′ l ′ ∩ L i ≻a ′ | = q + (C i a ′ l ′ ); if C i a ′ l ′ ⊂ C i al , then C i al ∈ a ′ (C(i)) and we have |A i ∩ C i al ∩ L i ≻a ′ | = q + (C i al )
. In both situations, the condition of (iv) regarding x ia ′ is satisfied.
-Suppose that x ia ′ is already put into the bins. It is trivial if a ′ ≻ i a, so assume that a ≻ i a ′ . We claim that none of the classes C i a ′ t ∈ a ′ (C(i)) can be a subclass of C i al or C i al itself. Otherwise, C i al ∈ a ′ (C(i)), and we have q
+ (C i al ) = |A i ∩ C i al ∩ L i ≻a | ≥ |A i ∩ C i al ∩ L i ≻a ′ |, a contradiction to (iv) of the induction hypothesis. Now since for every class C i a ′ t ∈ a ′ (C(i)), we have C i a ′ t ⊆ C i al , we have |A i ∩ C i a ′ t ∩ L i ≻a ′ | = |A i ∩ C i a ′ t ∩ L i ≻a ′ | < q + (C i a ′ t ),
where the strict inequality is due to the induction hypothesis.
We notice that the quantity Σ_{j'=1}^{|E_i(x)|} x(M_{i,j'}) is exactly the value of the shaft x(S(A_i)) (before x_{ia} is added) or x(S(Ā_i)) (after x_{ia} is added), by observing (iv). Below let x(M_{i,j}) and x(M̄_{i,j}) denote the total size of the items in bin (i, j) before and after x_{ia} is added into it, so x(M̄_{i,j}) = x(M_{i,j}) + x_{ia}. Now we can derive the following:
x(K(i, S(Ā_i))) = x(S(Ā_i)) + x(T(i, a)\{(i, a)}) + Σ_{j'=1, j'≠j}^{|E_i(x)|} x(T(i, a_{i,j'})\{(i, a_{i,j'})})
 = x(M_{i,j}) + x_{ia} + x(T(i, a)\{(i, a)}) + Σ_{j'=1, j'≠j}^{|E_i(x)|} (x(M_{i,j'}) + x(T(i, a_{i,j'})\{(i, a_{i,j'})}))
 = x(M_{i,j}) + x(T(i, a)) + |E_i(x)| − 1   (by (ii) of the induction hypothesis)
 ≥ |E_i(x)|   (by Constraint (3))
For the above inequality to hold,
x(M_{i,j}) + x(T(i, a)) ≥ 1.   (9)
Since x(M i,j ) = 1− α and x(T (i, a)) ≤ α by Inequality (7), Inequality (9) must hold with equality, implying that x(K(i, S(A i ))) = |E i (x)|, giving us (iii).
Since institute i puts x ia into bin (i, j), the "new" M i,j and the "new" a i,j (=a) satisfies
x(M i,j ) + x(T (i, a)\{(i, a)}) = 1.
This establishes (ii). (v) follows because Inequality (7) must hold with equality throughout. Therefore, there is no institute i ′′ which ranks strictly between i and i ′ and x i ′′ a > 0.
Finally for (vi), note that x(T (i, a)) = α if the item x ia is put into some bin in Phase t+1. All such items are the least preferred items in their respective "old" bins (immediately before Phase t + 1), it means the items on top of the newly-packed bins are still distinct. Moreover, from (8), if a bin (i, j) is not examined in Phase t + 1, then its least preferred applicant cannot be packed in phase t + 1 either.
⊓ ⊔
We define an assignment µ α based on a number α ∈ [0, 1) as follows. Assume that there is a line of height α "cutting through" all the bins horizontally. If an item x ia whose position in i's bins intersects α, applicant a is assigned to institute i. In the case this cutting line of height α intersects two items in the same bin, we choose the item occupying the higher position. More precisely:
Given α ∈ [0, 1), for each institute i ∈ I, we define an assignment as follows: µ α (i) = {a :
1 − x(T (i, a)) ≤ α < 1 − x(T (i, a)) + x ia }.
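The cut-off rule defining µ_α depends only on the tooth values x(T(i, a)), so it can be evaluated directly from x without materializing the bins; the helper below does this for the illustrative data model used earlier, and drawing α uniformly from [0, 1) turns it into the randomized rounding used in the proof of Theorem 25.

import random

def mu_alpha(alpha, x, applicant_pref):
    """mu_alpha(i) contains a iff 1 - x(T(i, a)) <= alpha < 1 - x(T(i, a)) + x_{ia}."""
    assignment = {}                                  # institute -> set of applicants
    for (i, a), v in x.items():
        if v <= 0:
            continue
        prefs = applicant_pref[a]
        t = sum(x.get((i2, a), 0.0) for i2 in prefs[: prefs.index(i) + 1])   # x(T(i, a))
        if 1.0 - t <= alpha < 1.0 - t + v:
            assignment.setdefault(i, set()).add(a)
    return assignment

# For x in P_fsm, mu_alpha is a stable matching for every alpha, and with
# alpha ~ Uniform[0, 1) the probability that a is assigned to i equals x_{ia}.
alpha = random.random()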
Theorem 25. The polytope determined by Constraints (1)-(5) is integral.
Proof. We generate uniformly at random a number α ∈ [0, 1) and use it to define an assignment µ α . To facilitate the discussion, we choose the largest α ′ ≤ α so that µ α ′ = µ α . Intuitively, this can be regarded as lowering the cutting line from α to α ′ without modifying the assignment, and 1 − α ′ is exactly the maximum available space in the beginning of a certain phase l during the execution of our packing algorithm. Note that the assignment µ α is then equivalent to giving those applicants (items) on top of institute i's bins to i at the end of phase l.
We now argue that µ α is a stable matching. First, it is a matching by Theorem 24(vi). The matching respects the quota of all classes since Invariant C is maintained. What remains to be argued is the stability of µ α . Suppose, for a contradiction, (i, a φ ) is a blocking pair. We consider the possible cases.
-Suppose that x ia φ > 0 and x ia φ is not put into the bins yet at the end of Phase l. Then by Theorem 24(iv) and the definition of blocking pairs, (i, a φ ) cannot block µ α . -Suppose that x ia φ > 0 and x ia φ is already put into the bins at the end of Phase l. If µ α (a φ ) = i, there is nothing to prove. So assume µ α (a φ ) = i and this means that the item x ia φ is "buried" under some other item on top of some of i's bins at the end of Phase l. Then by Theorem 24(v), a φ is assigned to some other institute ranking higher than i, contradicting the assumption that (i, a φ ) is a blocking pair. -Suppose that x ia φ = 0. There are two subcases.
• Suppose that for each of the classes C i a φ t ∈ a φ (C(i)), |µ α (i) ∩ C i a φ t | < q + (C i a φ t ). Then we can form a new feasible tuple µ α (i)|a φ . It can be inferred from the definition of the shaft that x(S(µ α (i)|a φ )) ≤ x(S(µ α (i)). Moreover, by Theorem 24(iii), we have x(K(i, S(µ α (i))) = |E i (x)|. Now by Constraint (3),
|E_i(x)| + 1 ≤ x(K(i, S(µ_α(i)|a_φ))) ≤ x(S(µ_α(i))) + x(T(i, a_φ)\{(i, a_φ)}) + Σ_{a∈µ_α(i)} x(T(i, a)\{(i, a)}) = x(K(i, S(µ_α(i)))) + x(T(i, a_φ)\{(i, a_φ)}) = |E_i(x)| + x(T(i, a_φ)\{(i, a_φ)}). As a result, x(T(i, a_φ)\{(i, a_φ)}) = 1, implying that µ_α(a_φ) ≻_{a_φ} i, a contradiction to the assumption that (i, a_φ) blocks µ_α. • Suppose that there exists a class C^i_{a_φl_φ} ∈ a_φ(C(i)) for which |µ_α(i) ∩ C^i_{a_φl_φ}| = q^+(C^i_{a_φl_φ}). Let C i
a φ l φ be the smallest such class. By definition of blocking pairs, there must exist an applicant a † ∈ µ α (i) ∩ C i a φ l φ who ranks lower than a φ . Choose a † to be the lowest ranking such applicant in µ α (i). We make the following critical observation:
x(S(µ α (i)| a † a φ )) ≤ x(S(µ α (i))) − x ia † .(10)
To see this, we first argue that given an item x ia > 0, if it does not contribute to the shaft S(µ φ (i)), then it cannot contribute to shaft S(µ α (i)| a † a φ ) either. It is trivial if a ≻ i a † . So assume that a † ≻ i a. First suppose that a ∈ C i a φ l φ . Then given any class C i at ∈ a(C(i)),
|µ α (i) ∩ C i at ∩ L i ≻a | = |µ α (i)| a † a φ ∩ C i at ∩ L i ≻a |,
and Theorem 24(iv) states that there is a class
C i al ∈ a(C(i)) such that |µ α (i) ∩ C i al ∩ L i ≻a | = q + (C i al ). Secondly suppose that a ∈ C i a φ l φ . Observe that q + (C i a φ l φ ) = |µ α (i)| a † a φ ∩ C i a φ l φ ∩ L i ≻a † | = |µ φ (i)| a † a φ ∩ C i a φ l φ ∩ L i ≻a | (
the first equality follows from the choice of a†). In both cases, we conclude that x_{ia} cannot contribute to the shaft S(µ_α(i)| a† a_φ). The term x_{ia†} does not contribute to the shaft S(µ_α(i)| a† a_φ) by the same argument. Now using Constraint (3), Theorem 24(iii), and Inequality (10), we have
|E_i(x)| ≤ x(K(i, S(µ_α(i)| a† a_φ))) ≤ x(S(µ_α(i))) − x_{ia†} + x(T(i, a_φ)\{(i, a_φ)}) + Σ_{a ∈ µ_α(i)\{a†}} x(T(i, a)\{(i, a)}) = |E_i(x)| − x(T(i, a†)) + x(T(i, a_φ)).
(Note that x ia φ = 0).
Therefore,
x(T (i, a φ )) ≥ x(T (i, a † )) ≥ 1 − α ′ ≥ 1 − α.
So µ_α(a_φ) ≻_{a_φ} i, again a contradiction to the assumption that (i, a_φ) blocks µ_α. So we have established that the generated assignment µ_α is a stable matching. Now the remaining proof is the same as in [23]. Assume that µ_α(i, a) = 1 if and only if applicant a is assigned to institute i under µ_α. Then Exp[µ_α(i, a)] = x_{ia}, that is, x_{ia} = ∫_0^1 µ_α(i, a) dα, and x can be written as a convex combination of the µ_α as α varies over the interval [0, 1). The integrality of the polytope thus follows.
⊓ ⊔
Optimal Stable Matching
Since our polytope is integral, we can write suitable objective functions to target for various optimal stable matchings using Ellipsoid algorithm [10]. As the proposed LP has an exponential number of constraints, we also design a separation oracle to get a polynomial time algorithm. The basic idea of our oracle is based on dynamic programming.
Median-Choice Stable Matching
An application of our polyhedral result is the following.
Theorem 26. Suppose that in the given instance, all classifications are laminar families and there is no lower bound, q − (C i j ) = 0 for any class C i j . Let µ 1 , µ 2 , · · · , µ k be stable matchings. If we assign every applicant to his median choice among all the k matchings, the outcome is a stable matching.
Proof. Let x_{µ_t} be the solution based on µ_t for 1 ≤ t ≤ k and apply our packing algorithm to the fractional solution x = (1/k) Σ_{t=1}^{k} x_{µ_t}. Then let α = 0.5 and let µ_{0.5} be the stable matching resulting from the cutting line of height α = 0.5. We make the following observation based on Theorem 24:
Suppose that applicant a is matched under x and that the institutes with which he is matched are i_1, i_2, ..., i_{k'}, ordered based on their rankings on a's preference list. Assume that he is matched to i_t exactly n_t times among the k given stable matchings. At the termination of the packing algorithm, each of the items x_{i_l a}, 1 ≤ l ≤ k', appears in institute i_l's bins and its position spans from (Σ_{t=1}^{l−1} n_t)/k to (Σ_{t=1}^{l} n_t)/k. That µ_{0.5} gives every applicant his median choice now follows easily from the above observation.
⊓ ⊔
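Theorem 26 gives a very simple recipe for combining k stable matchings: give every applicant the median of his k assignments, ordered by his own preference list. The sketch below implements that rule; treating an applicant unmatched in one matching as unmatched in all of them relies on Theorem 15(ii), and the choice of the lower median for even k is our own convention.

def median_choice(matchings, applicant_pref):
    """matchings: list of k dicts a -> institute (all stable); returns a -> institute."""
    k = len(matchings)
    median = {}
    for a, prefs in applicant_pref.items():
        assigned = [m[a] for m in matchings if m.get(a) is not None]
        if not assigned:                       # unmatched in every stable matching
            median[a] = None
            continue
        assigned.sort(key=prefs.index)         # order by a's own preference
        median[a] = assigned[(k - 1) // 2]     # median (lower median if k is even)
    return median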
Using similar ideas, we can show that an applicant-optimal stable matching must be institute-(lexicographically)-pessimal, and similarly an applicant-pessimal stable matching must be institute-(lexicographically)-optimal: take x to be the average of all stable matchings and consider the two matchings µ_ε and µ_{1−ε} with arbitrarily small ε > 0. Hence, it is tempting to conjecture that the median-choice stable matching is also a lexicographic median outcome for the institutes. Somewhat surprisingly, it turns out not to be the case, and a counter-example can be found in the appendix.
Polytope for Many-to-Many "Unclassified" Stable Matching
In the many-to-many stable matching problem, each entity e ∈ I ∪ A has a quota Q(e) ∈ Z^+ and a preference list over a subset of the other side. A matching µ is feasible if, for any entity e ∈ I ∪ A, (1) |µ(e)| ≤ Q(e), and (2) µ(e) is a subset of the entities on e's preference list. A feasible matching µ is stable if there is no blocking pair (i, a), which means that i prefers a to one of the assignments in µ(i), or |µ(i)| < Q(i) and a ∉ µ(i); and similarly a prefers i to one of his assignments in µ(a), or |µ(a)| < Q(a) and i ∉ µ(a).
We now transform the problem into (many-to-one) LCSM. For each applicant a ∈ A, we create Q(a) copies, each of which retains the original preference of a. All institutes replace the applicants by their clones on their lists. To break ties, all institutes rank the clones of the same applicant in an arbitrary but fixed manner. Finally, each institute treats the clones of the same applicant as a class with upper bound 1. It can be shown that the stable matchings in the original instance and in the transformed LCSM instance have a one-one correspondence. Thus, we can use Constraints (1)- (5) to describe the former 6 .
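The reduction just described is mechanical; the sketch below spells it out on the illustrative data model (clones named (a, 1), (a, 2), ... are our own convention), producing clone preference lists, institute preference lists over clones with a fixed tie-breaking, and one class of clones per applicant with upper bound 1, plus a root class carrying Q(i).

def many_to_many_to_lcsm(applicant_pref, applicant_quota, inst_pref, inst_quota):
    """applicant_pref: a -> list of institutes (best first); inst_pref: i -> list of applicants;
       returns clone preferences, institute preferences over clones, the classes of each
       institute, and the upper bounds keyed by (institute, class)."""
    clone_pref = {}
    for a, prefs in applicant_pref.items():
        for c in range(applicant_quota[a]):
            clone_pref[(a, c)] = list(prefs)          # each clone keeps a's preferences
    new_inst_pref, inst_classes, q_hi = {}, {}, {}
    for i, prefs in inst_pref.items():
        ranked, classes = [], []
        for a in prefs:
            clones = [(a, c) for c in range(applicant_quota[a])]
            ranked.extend(clones)                     # fixed tie-breaking among clones
            cls = frozenset(clones)
            classes.append(cls)
            q_hi[(i, cls)] = 1                        # at most one clone of a per institute
        root = frozenset(ranked)                      # the whole list of i, with quota Q(i)
        classes.append(root)
        q_hi[(i, root)] = inst_quota[i]
        new_inst_pref[i] = ranked
        inst_classes[i] = classes
    return clone_pref, new_inst_pref, inst_classes, q_hi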
Conclusion and Future Work
In this paper, we introduce classified stable matching and present a dichotomy theorem to draw a line between its polynomial solvability and NP-completeness. We also study the problem using the polyhedral approach and propose polynomial time algorithms to obtain various optimal matchings.
We choose the terms "institutes" and "applicants" in our problem definition, instead of the more conventional hospitals and residents, for a reason. We are aware that in real-world academics, many departments not only have ranking over their job candidates but also classify them based on their research areas. When they make their hiring decision, they have to take the quota of the classes into consideration. And in fact, we were originally motivated by this common practice.
Classified stable matching has in fact arisen in the real world. In a hospitals/residents matching program in Scotland, certain hospitals declared that they did not want more than one female physician. Roth [16] proposed an algorithm to show that stable matchings always exist in that setting.
There are quite a few questions that remain open. The obvious one would be to write an LP to describe LCSM with both upper bounds and lower bounds. Even though we can obtain various optimal stable matchings, the Ellipsoid algorithm can be inefficient. It would be nicer to have fast combinatorial algorithms. The rotation structure of Gusfield and Irving [11] seems the way to go.
A An Example for Section 2.2
In contrast to the generalized rural hospitals theorem in LCSM, if some institutes use intersecting classes, stable matching sizes may differ. Figure 3 is an example.
Institute Preferences / Classifications / Quota (Fig. 3):
i1: a1 a2 a3      C^1_1 = {a1, a2}, C^1_2 = {a1, a3}      Q(i1) = 2, q^+(C^1_1) = 1, q^+(C^1_2) = 1
i2: a2 a1 a3 a4   C^2_1 = {a2, a1}, C^2_2 = {a2, a3}, C^2_3 = {a2, a4}   Q(i2) = 2, q^+(C^2_1) = 1, q^+(C^2_2) = 1, q^+(C^2_3) = 1
B Missing Proofs of Section 3
In this section, we prove Theorem 16. We assume that the set of posets P = {P 1 , P 2 , · · · , P k } contains a poset which is not a downward forest. Moreover, we assume that there is no lower bound on the classes. Without loss of generality, we assume that P 1 is not a downward forest. Such a poset must have a "V." By definition, there exists institute i whose class inclusion poset P (i) is isomorphic to P 1 . This implies that institute i must have two intersecting classes in C(i). In the following, we will present a reduction in which all institutes use at most two classes (that can be intersecting). It is straightforward to use some dummy institutes and applicants to "pad" our reduction so that every poset P j ∈ P is isomorphic to some class inclusion poset of the institutes in the derived instance. Our reduction is from one-in-three-sat. We will use an instance in which there is no negative literal. (NP-completeness still holds under this restriction [9].)
The overall goal is to design a reduction so that the derived P-classified stable matching instance allows a stable matching if and only if the given instance φ = c_1 ∧ c_2 ∧ · · · ∧ c_k is satisfiable. We will build a set of clause gadgets to represent each clause c_j. For every pair of literals which belong to the same clause, we create a literal-pair gadget. Such a gadget will guarantee that at most one of the literals it represents can be "activated" (set to TRUE). The clause gadget interacts with the literal-pair gadgets in such a way that if the clause is to be satisfied, exactly one literal it contains can be activated.
Literal-Pair Gadget. Suppose that x^j_i and x^j_{i'} both belong to the same clause c_j. We create a gadget Υ^j_{i,i'} composed of four applicants {a^j_{i,t}}^2_{t=1} ∪ {a^j_{i',t}}^2_{t=1} and two institutes {I^j_i, I^j_{i'}}, whose preferences and classifications are summarized below.
Applicant preferences:
- a^j_{i,1}: I^j_i ≻ Γ(a^j_{i,1}) ≻ I^j_{i'}
- a^j_{i,2}: I^j_{i'} ≻ I^j_i
- a^j_{i',1}: I^j_i ≻ Γ(a^j_{i',1}) ≻ I^j_{i'}
- a^j_{i',2}: I^j_{i'} ≻ I^j_i

Institute preferences, classifications, and bounds:
- I^j_i: a^j_{i,2} ≻ a^j_{i,1} ≻ a^j_{i',2} ≻ a^j_{i',1} ≻ Ψ(I^j_i); classes C^{I^j_i}_1 = {a^j_{i,1}, a^j_{i,2}}, C^{I^j_i}_2 = {a^j_{i,1}, a^j_{i',1}}; Q(I^j_i) = 2, q+(C^{I^j_i}_1) = 1, q+(C^{I^j_i}_2) = 1
- I^j_{i'}: a^j_{i,1} ≻ a^j_{i,2} ≻ a^j_{i',1} ≻ a^j_{i',2}; class C^{I^j_{i'}}_1 = {a^j_{i,1}, a^j_{i,2}}; Q(I^j_{i'}) = 2, q+(C^{I^j_{i'}}_1) = 1
We postpone the explanation of the Γ and Ψ functions for the time being. We first make the following claim.
Claim B: Suppose that in a stable matching µ, the only possible assignments for {a^j_{i,1}, a^j_{i,2}, a^j_{i',1}, a^j_{i',2}} are {I^j_i, I^j_{i'}}. Then there can only be three possible outcomes in µ.
1. µ(a^j_{i,1}) = I^j_i, µ(a^j_{i,2}) = I^j_{i'}, µ(a^j_{i',1}) = I^j_{i'}, µ(a^j_{i',2}) = I^j_i. (In this case, we say x_i is activated while x_{i'} remains deactivated.)
2. µ(a^j_{i,1}) = I^j_{i'}, µ(a^j_{i,2}) = I^j_i, µ(a^j_{i',1}) = I^j_i, µ(a^j_{i',2}) = I^j_{i'}. (In this case, we say x_{i'} is activated while x_i remains deactivated.)
3. µ(a^j_{i,1}) = I^j_{i'}, µ(a^j_{i,2}) = I^j_i, µ(a^j_{i',1}) = I^j_{i'}, µ(a^j_{i',2}) = I^j_i. (In this case, neither x_i nor x_{i'} is activated.)
The remaining assignment µ(a^j_{i,1}) = I^j_i, µ(a^j_{i,2}) = I^j_{i'}, µ(a^j_{i',1}) = I^j_i, µ(a^j_{i',2}) = I^j_{i'} will not happen due to the quota q+(C^{I^j_i}_2) = 1. This case corresponds to the situation that x_i and x_{i'} are both activated, which is what we want to avoid.
We now explain how to realize the supposition in Claim B about the fixed potential assignments for {a j i,t } 2 t=1 ∪ {a j i ′ ,t } 2 t=1 in a stable matching. It can be easily checked that if a j i,1 is matched to some institute in Γ (a j i,1 ), or either of {a j i,1 , a j i,2 } is unmatched; or if either of {a j i ′ ,1 , a j i ′ ,2 } is unmatched, then there must exist a blocking group involving a subset of
{I j i , I j i ′ , {a j i,t } 2 t=1 , {a j i ′ ,t } 2 t=1 }.
However, the following matching µ φ can happen in which a j i ′ ,1 is matched to some institute in Γ (a j i ′ ,1 ) but there is no blocking group : µ φ (a j i,1 ) = I j i , µ φ (a j i,2 ) = µ φ (a j i ′ ,2 ) = I j i ′ , µ φ (a j i ′ ,1 ) ∈ Γ (a j i ′ ,1 ). 7 To prevent the above scenario from happening (i.e., we want µ φ to be unstable), we introduce another gadget Υ j i , associated with I j i , to guarantee a blocking group will appear. We now list the preferences and classifications of the members of Υ j i below.
Applicant preferences:
- a^j_{i,1}: I^j_{i,4} ≻ I^j_{i,1} ≻ I^j_{i,3} ≻ I^j_{i,2}
- a^j_{i,2}: I^j_{i,3} ≻ I^j_{i,4} ≻ I^j_{i,2} ≻ I^j_{i,1}
- a^j_{i,3}: I^j_{i,4} ≻ I^j_{i,3} ≻ I^j_{i,1} ≻ I^j_{i,2}
- a^j_{i,4}: I^j_{i,4} ≻ I^j_{i,1} ≻ I^j_{i,2} ≻ I^j_{i,3}
- a^j_{i,5}: I^j_{i,2} ≻ I^j_{i,4} ≻ I^j_{i,3} ≻ I^j_{i,1}
- a^j_{i,6}: I^j_{i,2} ≻ I^j_{i,4} ≻ I^j_{i,3} ≻ I^j_{i,1}

Institute preferences, classifications, and bounds:
- I^j_{i,1}: a^j_{i,5} ≻ a^j_{i,2} ≻ a^j_{i,4} ≻ a^j_{i,6} ≻ a^j_{i,3} ≻ a^j_{i,1}; Q(I^j_{i,1}) = 2
- I^j_{i,2}: a^j_{i,4} ≻ a^j_{i,6} ≻ a^j_{i,2} ≻ a^j_{i,3} ≻ a^j_{i,1} ≻ a^j_{i,5}; C^{I^j_{i,2}}_1 = {a^j_{i,1}, a^j_{i,2}, a^j_{i,3}}, C^{I^j_{i,2}}_2 = {a^j_{i,3}, a^j_{i,4}, a^j_{i,5}}; Q(I^j_{i,2}) = 2, q+(C^{I^j_{i,2}}_1) = 1, q+(C^{I^j_{i,2}}_2) = 1
- I^j_{i,3}: a^j_{i,4} ≻ a^j_{i,5} ≻ a^j_{i,6} ≻ a^j_{i,3} ≻ a^j_{i,1} ≻ a^j_{i,2}; C^{I^j_{i,3}}_1 = {a^j_{i,1}, a^j_{i,2}, a^j_{i,3}}, C^{I^j_{i,3}}_2 = {a^j_{i,3}, a^j_{i,4}, a^j_{i,5}}; Q(I^j_{i,3}) = 2, q+(C^{I^j_{i,3}}_1) = 1, q+(C^{I^j_{i,3}}_2) = 1
- I^j_{i,4}: a^j_{i,4} ≻ a^j_{i,1} ≻ a^j_{i,6} ≻ a^j_{i,2} ≻ a^j_{i,3} ≻ a^j_{i,5}; C^{I^j_{i,4}}_1 = {a^j_{i,1}, a^j_{i,2}, a^j_{i,3}}, C^{I^j_{i,4}}_2 = {a^j_{i,3}, a^j_{i,4}, a^j_{i,5}}; Q(I^j_{i,4}) = 2, q+(C^{I^j_{i,4}}_1) = 1, q+(C^{I^j_{i,4}}_2) = 1

(Footnote 7: It can be verified that if a^j_{i',1} is matched to some institute in Γ(a^j_{i',1}), the above assignment is the only possibility in which no blocking group arises.)
The above instance Υ^j_i has the following features, every one of which is crucial in our construction.
1. In a matching µ_φ, suppose that institute I^j_i is only assigned a^j_{i,1} while a^j_{i',1} is assigned to some institute in Γ(a^j_{i',1}) (the problematic case we discussed above). As a result, institute I^j_i can take one more applicant from the set {a^j_{i,t}}^6_{t=1}. By Feature A, there must exist a blocking group involving the members in Υ^j_i. More importantly, this blocking group need not be composed of I^j_i and two applicants from {a^j_{i,t}}^6_{t=1}.
2. In a matching µ_φ, suppose that institute I^j_i is assigned two applicants from the set {a^j_{i,t}, a^j_{i',t}}^2_{t=1}. Then I^j_{i,1} can be regarded as being removed from the instance Υ^j_i, and there exists a stable matching among the other members of the instance Υ^j_i. This explains the necessity of Feature B.
3. Finally, since I^j_i already uses two intersecting classes, I^j_{i,1} should not use any more classes. This explains why Feature C is necessary.
We embed institute I^j_{i,1} of gadget Υ^j_i into gadget Υ^j_{i,i'}. To be precise, let Ψ(I^j_i) = a^j_{i,5} ≻ a^j_{i,2} ≻ a^j_{i,4} ≻ a^j_{i,6} ≻ a^j_{i,3} ≻ a^j_{i,1}.
We have left the functions Γ(a^j_{i,1}) and Γ(a^j_{i',1}) unexplained so far. They contain institutes belonging to the clause gadgets, which will be the final component in our construction.
Clause Gadget. Suppose that c_j = x^j_1 ∨ x^j_2 ∨ x^j_3.
We create a clause gadget Υ^j composed of two institutes {Î^j_t}^2_{t=1} and six applicants {â^j_t}^6_{t=1}. Their preferences and classifications are summarized below.
- â^j_1: Î^j_2 ≻ Î^j_1
- â^j_2: Î^j_1 ≻ Î^j_2
- Î^j_1: â^j_5 ≻ â^j_1 ≻ â^j_2 ≻ Λ(x^j_1) ≻ â^j_6 ≻ Λ(x^j_2) ≻ â^j_3 ≻ Λ(x^j_3) ≻ â^j_4

We now explain how the Λ functions in the clause gadgets interact with the Γ functions in the literal-pair gadgets. The former are composed of applicants in the literal-pair gadgets while the latter are composed of institutes in the clause gadgets. Our intuition is that the only possible stable matchings in the clause gadget will force exactly one of its three literals to be activated. To be precise, let π(X) denote an arbitrary order among the elements in the set X.
Finally, we remark that the three possible outcomes in µ listed in the lemma will guarantee that exactly one of the three literals in clause c_j can be activated. The reason is again the same as in the last two cases that we just explained. This completes the proof of Claim C. ⊓⊔ Now by Claim C, we establish Theorem 16.

C Missing Proofs of Section 4

Lemma 17. In LCSM, if there is no lower bound, i.e., given any class C^i_j, q−(C^i_j) = 0, then a stable matching as defined in Definition 2 can be equivalently defined as follows. A feasible matching µ is stable if and only if there is no blocking pair. A pair (i, a) is blocking, given that µ(i) = (a_{i1}, a_{i2}, · · · , a_{ik}), k ≤ Q(i), if
- i ≻_a µ(a);
- for any class C^i_{at} ∈ a(C(i)), |L^i_{≻a} ∩ µ(i) ∩ C^i_{at}| < q+(C^i_{at}).
Proof. If we have a blocking group (i; g), institute i and the highest-ranking applicant in g\µ(i) must be a blocking pair. Conversely, given a blocking pair (i, a), assuming that |µ(i)| = Q(i) (the case that |µ(i)| < Q(i) follows a similar argument), we can form a blocking group (i; µ(i)|_{a†} a), where a† is chosen as follows: (1) if there exists a class C^i_{at} ∈ a(C(i)) such that |µ(i) ∩ C^i_{at}| = q+(C^i_{at}), choose the smallest such class C^i_{at} ∈ a(C(i)) and let a† be the lowest-ranking applicant in µ(i) ∩ C^i_{at}; (2) otherwise, a† is simply the lowest-ranking applicant in µ(i). ⊓⊔
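As a concrete illustration of the blocking-pair condition in Lemma 17, here is a minimal Python sketch (the function name is_blocking_pair, the rank dictionaries, and the explicit list of classes containing a are illustrative assumptions; it presumes the pseudo root class C^i_♯ with q+(C^i_♯) = Q(i) is included among those classes, so the institute's overall capacity is checked as well):

```python
from typing import Dict, List, Optional, Set

def is_blocking_pair(i: str, a: str,
                     mu_i: Set[str],               # µ(i): applicants currently assigned to i
                     mu_a: Optional[str],          # µ(a): institute of a, or None if unmatched
                     inst_rank: Dict[str, int],    # position of each applicant on L^i (0 = best)
                     app_rank: Dict[str, int],     # position of each institute on a's list (0 = best)
                     classes_of_a: List[Set[str]], # a(C(i)): classes of i containing a, incl. C^i_#
                     upper: List[int]) -> bool:    # q+ of each class in classes_of_a
    """Blocking-pair test of Lemma 17 (LCSM without lower bounds): (i, a) blocks µ
    iff a prefers i to µ(a) and no class of i containing a is already filled up to
    its upper bound by assigned applicants ranking above a."""
    if a not in inst_rank or i not in app_rank:
        return False                               # (i, a) is not a mutually acceptable pair
    if mu_a is not None and app_rank[mu_a] <= app_rank[i]:
        return False                               # a does not strictly prefer i to µ(a)
    for cls, q_plus in zip(classes_of_a, upper):
        higher = [b for b in mu_i if b in cls and inst_rank[b] < inst_rank[a]]
        if len(higher) >= q_plus:
            return False                           # the class is saturated above a
    return True
```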
Lemma 19. Every stable matching solution x satisfies the comb inequality for any comb K(i, S(A_i)):

x(K(i, S(A_i))) ≡ x(S(A_i)) + Σ_{a_j ∈ A_i} x(T(i, a_j)\{(i, a_j)}) ≥ |A_i|.
We use the following notation to facilitate the proof. Given a tuple A_i, we define y_{ia} as follows: y_{ia} = 1 if either a ∈ A_i and x(T(i, a)) = 1, or a ∉ A_i, x_{ia} = 1, and (i, a) ∈ S(A_i); y_{ia} = 0 otherwise.
Let y(C^i_j) = Σ_{a ∈ L^i ∩ C^i_j} y_{ia}. This quantity indicates how much a class C^i_j contributes to the comb value x(K(i, S(A_i))). Thus, if U is a set of classes in C(i) partitioning L^i, then x(K(i, S(A_i))) = Σ_{C^i_j ∈ U} y(C^i_j).
Proof. We prove the lemma by showing that if x(K(i, S(A_i))) < |A_i|, there exists a blocking pair (i, a†), where a† ∈ A_i. We proceed by contradiction. First note that there exists a non-empty subset G ⊆ A_i of applicants a for whom x(T(i, a)) = 0; otherwise, x(K(i, S(A_i))) ≥ |A_i|, an immediate contradiction. For each applicant a ∈ G, there must exist a class C^i_{al} ∈ a(C(i)) for which Σ_{a' ∈ L^i_{≻a} ∩ C^i_{al}} x_{ia'} = q+(C^i_{al}); otherwise, (i, a) is a blocking pair and we are done. Now for each applicant a ∈ G, choose the smallest class C^i_{al} for which Σ_{a' ∈ L^i_{≻a} ∩ C^i_{al}} x_{ia'} = q+(C^i_{al}) and denote this class as C_a. We introduce a procedure to organize a set U of disjoint classes.

Let G be composed of a_1, a_2, · · · , a_{|G|}, ordered based on their decreasing rankings on L^i.
For l = 1 To |G|:
  if a_l ∈ C for some C ∈ U, then do nothing;
  else U := U \ {C | C ∈ U, C ⊂ C_{a_l}}  // C_{a_l} may be a superclass of some classes in U
       U := U ∪ {C_{a_l}}.  // add C_{a_l} into U

Claim: The output U from the above procedure comprises a disjoint set of classes containing all applicants in G, and for each class C^i_j ∈ U, y(C^i_j) ≥ q+(C^i_j). We will prove the claim shortly. Now

x(K(i, S(A_i))) = Σ_{C^i_j ∈ U} y(C^i_j) + |A_i \ (∪_{C^i_j ∈ U} C^i_j)| ≥ Σ_{C^i_j ∈ U} q+(C^i_j) + |A_i \ (∪_{C^i_j ∈ U} C^i_j)| ≥ |A_i|,
a contradiction. ⊓⊔

Proof of the Claim. It is easy to see that the classes in U are disjoint and contain all applicants in G. Below we show that during the execution of the procedure, if C^i_j ∈ U, then y(C^i_j) ≥ q+(C^i_j). We proceed by induction on the number of times U is updated. In the base case U is an empty set, so there is nothing to prove.
For the induction step, assume that a_l is being examined and C_{a_l} is about to be added into U. Observe that even though Σ_{a ∈ L^i_{≻a_l} ∩ C_{a_l}} x_{ia} = q+(C_{a_l}), there is no guarantee that if x_{ia} = 1, then y_{ia} = 1 for each a ∈ L^i_{≻a_l} ∩ C_{a_l}. The reason is that there may exist some class C^i_j ∈ a(C(i)) for which |A_i ∩ C^i_j ∩ L^i_{≻a}| = q+(C^i_j) and a ∉ A_i. Then (i, a) is not part of the shaft x(S(A_i)) and y_{ia} = 0. To deal with the above situation, we need to do some case analysis. Let B be the set of subclasses C^i_j of C_{a_l} for which |A_i ∩ C^i_j ∩ L^i_{≻a_l}| = q+(C^i_j). Choose D to be the subclasses of C_{a_l} so that ℜ(B ∪ U) ∪ D partitions C_{a_l}. We make three observations below.
(i) For each class C^i_j ∈ ℜ(B ∪ U) with C^i_j ∈ U, y(C^i_j) ≥ q+(C^i_j) ≥ Σ_{a ∈ L^i_{≻a_l} ∩ C^i_j} x_{ia}.
(ii) For each class C^i_j ∈ D, if a ∈ L^i_{≻a_l} ∩ C^i_j and x_{ia} = 1, then y_{ia} = 1.
(iii) For each class C^i_j ∈ ℜ(B ∪ U) with C^i_j ∉ U, for each applicant a ∈ L^i_{≻a_l} ∩ C^i_j ∩ A_i, either a ∈ G and a ∈ C for some C ∈ U, or a ∉ G (implying that x(T(i, a)) = 1). Moreover, y(C^i_j) ≥ Σ_{a ∈ L^i_{≻a_l} ∩ C^i_j} x_{ia}.
(i) is because of the induction hypothesis and the feasibility assumption of x. (ii) follows from the fact that a ranks higher than a_l and the way we define a class in D. For (iii), first notice that if C^i_j ∈ ℜ(B ∪ U) and C^i_j ∉ U, then such a class C^i_j must be part of ℜ(B), and C^i_j may contain some classes in U. Now suppose that some applicant a ∈ G ∩ L^i_{≻a_l} does not belong to any class in U. Then our procedure would have added the class C_a into U before examining a_l, a contradiction. To see the last statement of (iii), let G' be the set of applicants in L^i_{≻a_l} ∩ C^i_j ∩ A_i who do not belong to any classes in U. Then

y(C^i_j) ≥ Σ_{C^i_k ∈ U, C^i_k ⊂ C^i_j} y(C^i_k) + |G'| ≥ Σ_{C^i_k ∈ U, C^i_k ⊂ C^i_j} q+(C^i_k) + |G'| ≥ q+(C^i_j) ≥ Σ_{a ∈ L^i_{≻a_l} ∩ C^i_j} x_{ia},

where the first inequality follows from the first part of (iii), the second inequality from the induction hypothesis, the third from the fact that C^i_j ∈ ℜ(B) (thus |L^i_{≻a_l} ∩ C^i_j ∩ A_i| = q+(C^i_j)), and the fourth from the feasibility assumption of x. Now combining all three observations, we conclude that

y(C_{a_l}) = Σ_{C^i_j ∈ ℜ(B ∪ U)} y(C^i_j) + Σ_{C^i_k ∈ D} y(C^i_k) ≥ Σ_{C^i_k ∈ ℜ(B ∪ U) ∪ D} Σ_{a ∈ L^i_{≻a_l} ∩ C^i_k} x_{ia} = q+(C_{a_l}),
and the induction step is completed. ⊓⊔

Lemma 20. Every stable matching solution x satisfies the following inequality for any class-tuple t^i_j:

Σ_{a_{ij} ∈ t^i_j} x(T(i, a_{ij})\{(i, a_{ij})}) ≥ Σ_{a ∈ C^i_j ∩ L^i_{≺t^i_j}} x_{ia}.   (*)
Proof. We prove the lemma by contradiction. Suppose that for a given class-tuple t^i_j, (*) does not hold. We will show that we can find a blocking pair (i, a†), where a† ∈ t^i_j. Let the set of applicants a ∈ t^i_j with x(T(i, a)) = 0 be G, let α = Σ_{a' ∈ L^i_{≺t^i_j} ∩ C^i_j} x_{ia'} > 0, and let β = Σ_{a' ∈ t^i_j} x_{ia'}. By assumption, at most α − 1 applicants a ∈ t^i_j have x(T(i, a)\{(i, a)}) = 1. Thus,

|G| ≥ q+(C^i_j) − β − α + 1.   (11)
Claim: At least one applicant a† ∈ G belongs to a sequence of classes C^i_{a†t} ∈ a†(C(i)) such that if C^i_{a†t} ⊆ C^i_j, then Σ_{a' ∈ L^i_{≻a†} ∩ C^i_{a†t}} x_{ia'} < q+(C^i_{a†t}).
We will prove the claim shortly. Observe that given any class C^i_k ⊃ C^i_j, Σ_{a' ∈ L^i_{≻a†} ∩ C^i_k} x_{ia'} < q+(C^i_k): as α > 0, some applicant a_φ ∈ C^i_k ranking lower than a† has x_{ia_φ} = 1, and Constraint (2) enforces that Σ_{a' ∈ L^i ∩ C^i_k} x_{ia'} ≤ q+(C^i_k).
Combining the above facts, we conclude that (i, a†) is a blocking pair. ⊓⊔

Proof of the Claim. We prove the claim by contradiction. Suppose that for every applicant a ∈ G, there exists some class C^i_{at} ∈ a(C(i)) with C^i_{at} ⊆ C^i_j and Σ_{a' ∈ L^i_{≻a} ∩ C^i_{at}} x_{ia'} = q+(C^i_{at}). Let B be the set of classes C^i_k ⊆ C^i_j such that C^i_k contains an applicant a ∈ G and Σ_{a' ∈ L^i_{≻a} ∩ C^i_k} x_{ia'} = q+(C^i_k) (which then will equal Σ_{a' ∈ L^i_{⪰t^i_j} ∩ C^i_k} x_{ia'} due to Constraint (2)). For each class C^i_k ∈ ℜ(B),

Σ_{a ∈ L^i_{⪰t^i_j} ∩ C^i_k} x_{ia} = q+(C^i_k) ≥ |t^i_j ∩ C^i_k| = Σ_{a' ∈ L^i_{⪰t^i_j} ∩ t^i_j ∩ C^i_k} x_{ia'} + |G ∩ C^i_k|,   (12)

where the first inequality follows from the definition of the class-tuple. Now we have

q+(C^i_j) − α − β ≥ Σ_{C^i_k ∈ ℜ(B)} Σ_{a' ∈ (L^i_{≻a‡} ∩ C^i_k)\t^i_j} x_{ia'} ≥ Σ_{C^i_k ∈ ℜ(B)} |G ∩ C^i_k| = |G| ≥ q+(C^i_j) − α − β + 1,
a contradiction. Note that the first inequality follows from Constraint (2), the second inequality from (12), the equality right after is because every applicant in G belongs to some class in B, and the last inequality is due to (11). ⊓ ⊔
D Separation Oracle in Section 4.1
It is clear that Constraints (1), (2), and (4) can be separated in polynomial time. So we assume that x satisfies these constraints and focus on finding a violated Constraint (3) and/or Constraint (5).
Separating Constraint (3)
We first make an observation. For each institute i, it suffices to check whether all the combs with exactly Q(i) teeth satisfy Constraint (3). To see this, suppose that there is a feasible tuple A_i with fewer than Q(i) applicants and x(K(i, S(A_i))) < |A_i|. Then we can add suitable applicants into A_i to get a feasible tuple Â_i with exactly Q(i) applicants. Noticing that x(S(Â_i)) ≤ x(S(A_i)), we have

x(K(i, S(Â_i))) ≤ x(S(A_i)) + Σ_{a_j ∈ Â_i} x(T(i, a_j)\{(i, a_j)}) < |A_i| + |Â_i| − |A_i| = |Â_i|,

where the last inequality follows from our assumption on A_i and the fact that x satisfies Constraint (1).
To illustrate our idea, we first explain how to deal with the case that the original classification C(i) is just a partition over L i (before we add the pseudo root class C i ♯ ). We want to find out the tuple A i of length Q(i), whose lowest ranking applicant is a † , which gives the smallest x(K(i, S(A i ))). If we have this information for all possible a † , we are done. Note that because of our previous discussion, if there is no feasible tuple of length Q(i) whose lowest ranking applicant is a † , we can ignore those cases.
Our main idea is to decompose the value of x(K(i, S(A_i))) based on the classes and use dynamic programming to find the combination of tooth-applicants that gives the smallest comb value. More precisely:

Definition 27. Assume that A^i_j ⊆ C^i_j, 0 ≤ |A^i_j| ≤ q+(C^i_j), and all applicants in A^i_j rank higher than a†. Note that this definition requires that if x_{ia} contributes to x(A^i_j, a†), then a has to rank higher than a†, belong to C^i_j, and (i, a) has to be part of the shaft S(A^i_j).

Suppose that we have properly stored all the possible values of Z(C^i_j, s_j, a†) and assume that a† ∈ C^i_{j'}. Then, for each class C^i_j ≠ C^i_{j'}, assume that 0 ≤ s_j ≤ q+(C^i_j), and for class C^i_{j'}, 0 ≤ s_{j'} ≤ q+(C^i_{j'}) − 1; the tuple A_i whose lowest-ranking applicant is a† and that gives the smallest comb value is the following one:
x(K(i, S(A_i))) = x(T(i, a†)) + min_{s_j : Σ_{C^i_j ∈ C(i)} s_j = Q(i)−1} Σ_{C^i_j ∈ C(i)} Z(C^i_j, s_j, a†).
The above quantity can be calculated using standard dynamic programming technique. So the question boils down to how to calculate Z(C i j , s j , a † ). There are two cases.
For the induction step, let C i j be a non-leaf class and assume that a ‡ ∈ C i k ′ ∈ c(C i j ). To calculate Z(C i j , s j , a ‡ , a † ), we need to find out a feasible tuple A i j of size s j , all of whose applicants rank at least as high as a ‡ so that x(A i j , a † ) is minimized. Observe that a feasible tuple A i j can be decomposed into a set of tuples
A^i_j = ∪_{C^i_k ∈ c(C^i_j)} A^i_k, where A^i_k ⊆ C^i_k ∈ c(C^i_j).
1. Suppose that s_j < q+(C^i_j). Then by definition, x(S(A^i_j), a†) = Σ_{C^i_k ∈ c(C^i_j)} x(S(A^i_k), a†), so x(A^i_j, a†) decomposes over the child classes accordingly. For each class C^i_k ∈ c(C^i_j), the minimum of the quantity Σ_{a ∈ A^i_k} x(T(i, a)\{(i, a)}) + x(S(A^i_k), a†) is exactly Z(C^i_k, s_k, a‡, a†). As a result, for each class C^i_k ≠ C^i_{k'}, let 0 ≤ s_k ≤ q+(C^i_k), and for class C^i_{k'}, let 0 ≤ s_{k'} ≤ q+(C^i_{k'}) − 1:

Z(C^i_j, s_j, a‡, a†) = x(T(i, a‡)) + min_{s_k : Σ s_k = s_j − 1} Σ_{C^i_k ∈ c(C^i_j)} Z(C^i_k, s_k, a‡, a†).

Thus, we can find Z(C^i_j, s_j, a‡, a†) by dynamic programming.
2. Suppose that s_j = q+(C^i_j). Note that this time, since the class C^i_j will be "saturated", the term x(S(A^i_j), a†) does not get any positive values x_{ia} for a ∈ C^i_j ∩ L^i_{≻a†} ∩ L^i_{≺a‡}. So x(S(A^i_j), a†) = Σ_{C^i_k ∈ c(C^i_j)} x(S(A^i_k), a‡), and x(A^i_j, a†) decomposes accordingly. Let ā‡ be the lowest-ranking applicant that ranks higher than a‡. Then for each class C^i_k ∈ c(C^i_j), the minimum of the quantity Σ_{a ∈ A^i_k} x(T(i, a)\{(i, a)}) + x(S(A^i_k), a‡) is exactly Z(C^i_k, s_k, ā‡, a‡). Assuming that for each class C^i_k ≠ C^i_{k'} we let 0 ≤ s_k ≤ q+(C^i_k), and let 0 ≤ s_{k'} ≤ q+(C^i_{k'}) − 1, we have

Z(C^i_j, s_j, a‡, a†) = x(T(i, a‡)) + min_{s_k : Σ s_k = s_j − 1} Σ_{C^i_k ∈ c(C^i_j)} Z(C^i_k, s_k, ā‡, a‡).
As before, this can be calculated by dynamic programming. ⊓⊔ Now choose the smallest Z(C^i_♯, Q(i) − 1, a‡, a†) among all possible a‡ who rank higher than a†, and assume that A^i_♯ is the corresponding tuple. It is easy to see that among all feasible tuples A_i of length Q(i) whose lowest-ranking applicant is a†, the one that has the smallest comb value x(K(i, S(A_i))) is exactly the tuple A^i_♯ ∪ {a†}.
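The repeated "min over s_k summing to a target" step above is a standard min-plus (knapsack-style) combination. The following Python sketch shows that step in isolation, assuming the per-class tables of Z-values have already been computed (the helper name combine_min and the list-of-lists encoding are illustrative assumptions, not part of the paper):

```python
from typing import List

INF = float("inf")

def combine_min(tables: List[List[float]], target: int) -> float:
    """Min-plus (knapsack-style) combination: tables[k][s] is the best value
    Z(C_k, s) achievable by placing s tooth applicants inside child class C_k;
    return the minimum of sum_k tables[k][s_k] over all (s_k) with sum s_k = target."""
    best = [0.0] + [INF] * target        # best[u]: minimum value using u applicants so far
    for tab in tables:
        new = [INF] * (target + 1)
        for used, val in enumerate(best):
            if val == INF:
                continue
            for s, cost in enumerate(tab):
                if used + s > target:
                    break
                if val + cost < new[used + s]:
                    new[used + s] = val + cost
        best = new
    return best[target]

# Toy example with three child classes, each allowing 0..2 tooth applicants:
# combine_min([[0, 1, 3], [0, 2, 2], [0, 1, 4]], 3) == 3  (e.g. s = (1, 2, 0))
```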
Separating Constraint (5)
We again make use of dynamic programming. The idea is similar to the previous one and the task is much simpler, so we will be brief. Suppose that we are checking all the class-tuples T^i_j corresponding to class C^i_j. Let T^i_{j,a†} ⊆ T^i_j be the subset of class-tuples whose lowest-ranking applicant is a†. We need to find the class-tuple t^i_{j,a†} ∈ T^i_{j,a†} with the smallest value

x(T(i, a†)\{(i, a†)}) + Σ_{a ∈ t^i_{j,a†}\{a†}} x(T(i, a)\{(i, a)}),

and check whether this value is no less than Σ_{a ∈ C^i_j ∩ L^i_{≺a†}} x_{ia}. If it is, then all class-tuples in T^i_{j,a†} satisfy Constraint (5); otherwise, we have found a violated constraint. The above quantity can easily be calculated by dynamic programming as before.
E A Counter Example for Section 4.2
The example shown in Figure 4 contains five stable matchings. If we apply the median choice operation on all of them, we get the stable matching µ 2 , which does not give institutes i 1 and i 2 their lexicographical median outcome.
Institute preferences, classifications, and class bounds:
- i1: ax ay a1 a2 a3 a4; C^1_1 = {a1, a2}, C^1_2 = {a3, a4}; Q(i1) = 2, q+(C^1_1) = 1, q+(C^1_2) = 1
- i2: az aw a2 a1 a4 a3; C^2_1 = {a1, a2}, C^2_2 = {a3, a4}; Q(i2) = 2, q+(C^2_1) = 1, q+(C^2_2) = 1
- i3: a1 a2 a3 a4 ax ay az aw; Q(i3) = 4

Applicant preferences:
- a1: i2 i1 i3; a2: i1 i2 i3; a3: i2 i1 i3; a4: i1 i2 i3
- ax: i3 i1; ay: i3 i1; az: i3 i2; aw: i3 i2

Stable matchings:
- µ1 = {(i1; ax, ay), (i2; az, aw), (i3; a1, a2, a3, a4)}
- µ2 = {(i1; a1, a3), (i2; a2, a4), (i3; ax, ay, az, aw)}
- µ3 = {(i1; a1, a4), (i2; a2, a3), (i3; ax, ay, az, aw)}
- µ4 = {(i1; a2, a3), (i2; a1, a4), (i3; ax, ay, az, aw)}
- µ5 = {(i1; a2, a4), (i2; a1, a3), (i3; ax, ay, az, aw)}

Fig. 4. An example of a median choice stable matching which does not give the institutes their lexicographically median outcome.
| 23,073 |
0907.1779
|
2951984019
|
We introduce the classified stable matching problem, a problem motivated by academic hiring. Suppose that a number of institutes are hiring faculty members from a pool of applicants. Both institutes and applicants have preferences over the other side. An institute classifies the applicants based on their research areas (or any other criterion), and, for each class, it sets a lower bound and an upper bound on the number of applicants it would hire in that class. The objective is to find a stable matching from which no group of participants has reason to deviate. Moreover, the matching should respect the upper/lower bounds of the classes. In the first part of the paper, we study classified stable matching problems whose classifications belong to a fixed set of "order types." We show that if the set consists entirely of downward forests, there is a polynomial-time algorithm; otherwise, it is NP-complete to decide the existence of a stable matching. In the second part, we investigate the problem using a polyhedral approach. Suppose that all classifications are laminar families and there is no lower bound. We propose a set of linear inequalities to describe the stable matching polytope and prove that it is integral. This integrality allows us to find various optimal stable matchings using the Ellipsoid algorithm. A further ramification of our result is the description of the stable matching polytope for the many-to-many (unclassified) stable matching problem. This answers an open question posed by Sethuraman, Teo and Qian.
|
Fleiner @cite_18 studied many-to-many stable matching in a much more general context. Using a fixed-point approach, he proved that stable matchings always exist provided that the preference of each entity is given by a suitable choice function; roughly speaking, such a function can be realized by imposing a matroid over a linear order of elements. In LCSM, supposing that there is no lower bound on the classes, each laminar family is equivalent to a partition matroid. We prove that stable matchings always exist in this situation. Hence, our algorithm in Section 2 can be seen as a constructive proof of a special case of Fleiner's existence theorem.
|
{
"abstract": [
"We describe a fixed-point based approach to the theory of bipartite stable matchings. By this, we provide a common framework that links together seemingly distant results, like the stable marriage theorem of Gale and Shapley, the Mendelsohn-Dulmage theorem, the Kundu-Lawler theorem, Tarski's fixed-point theorem, the Cantor-Bernstein theorem, Pym's linking theorem, or the monochromatic path theorem of In this framework, we formulate a matroid-generalization of the stable marriage theorem and study the lattice structure of generalized stable matchings. Based on the theory of lattice polyhedra and blocking polyhedra, we extend results of Vande Vate and Rothblum on the bipartite stable matching polytope."
],
"cite_N": [
"@cite_18"
],
"mid": [
"1978696336"
]
}
|
Classified Stable Matching
|
Imagine that a number of institutes are recruiting faculty members from a pool of applicants. Both sides have their preferences. It would be ideal if there is a matching from which no applicant and institute have reason to deviate. If an applicant prefers another institute to the one he is assigned to (or maybe he is unassigned) and this institute also prefers him to any one of its assigned applicants, then this institute-applicant pair is a blocking pair. A matching is stable if there is no blocking pair.
The above scenario is the well-studied hospitals/residents problem [7,11] in a different guise. It is known that stable matchings always exist and can be found efficiently by the Gale-Shapley algorithm. However, real-world situations can be more complicated. An institute may have its own hiring policy and may find certain sets of applicants together unacceptable. For example, an institute may have reasons to avoid hiring too many applicants who graduated from the same school; or it may want to diversify its faculty so that it has researchers in many different fields.
This concern motivates us to consider the following problem. An institute, besides giving its preference among the applicants, also classifies them based on their expertise (or some other criterion). For each class, it sets an upper bound and a lower bound on the number of applicants it would hire. Each institute defines its own classes and classifies the applicants in its own way (and the classes need not be disjoint). We consider this flexibility a desirable feature, as there are some research fields whose boundaries are blurred; moreover, some versatile researchers may be hard to categorize.
We call the above problem classified stable matching. Even though motivated by academic hiring, it comes up any time objects on one side of the matching have multiple partners that may be classified. For example, the two sides can be jobs and machines; each machine is assigned several jobs but perhaps cannot take two jobs with heavy memory requirements.
To make the problem precise, we introduce necessary notation and terminology. A set A of applicants and a set I of institutes are given. Each applicant/institute has a strictly-ordered (but not necessarily complete) preference list over the other side. The notation ⪰_e indicates either strictly better or equal in terms of the preference of an entity e ∈ A ∪ I, while ≻_e means strictly better. For example, if applicant a ∈ A strictly prefers institute i ∈ I to another institute i' ∈ I, we write i ≻_a i'. The preference list of institute i is denoted L^i. The sets of applicants on L^i who rank higher (respectively lower) than some particular applicant a are written as L^i_{≻a} (respectively L^i_{≺a}). An institute i has a capacity Q(i) ∈ Z+, the maximum number of applicants it can hire. It defines its own classification C(i) = {C^i_j}^{|C(i)|}_{j=1}, which is a family of sets over the applicants on its preference list. Each class C^i_j ∈ C(i) has an upper bound q+(C^i_j) ∈ Z+ and a lower bound q−(C^i_j) ∈ Z+ ∪ {0} on the number of applicants it would hire in that class. Given a matching µ, µ(a) is the institute applicant a is assigned to. We write µ(i) = (a_{i1}, a_{i2}, · · · , a_{ik}), k ≤ Q(i), to denote the set of applicants institute i gets in µ, where the a_{ij} are listed in decreasing order based on its preference list. In this paper, we will slightly abuse notation, treating an (ordered) tuple such as µ(i) as a set.

Definition 1. Given a tuple t = (a_{i1}, a_{i2}, · · · , a_{ik}) where the a_{ij} are ordered based on their decreasing rankings on institute i's preference list, it is said to be a feasible tuple of institute i, or just feasible for short, if the following conditions hold:
- k ≤ Q(i);
- given any class C^i_j ∈ C(i), q−(C^i_j) ≤ |t ∩ C^i_j| ≤ q+(C^i_j).
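The following minimal Python sketch checks exactly these two conditions of Definition 1 (the function name is_feasible_tuple and the triple-based encoding of a class with its bounds are illustrative assumptions):

```python
from typing import List, Sequence, Set, Tuple

def is_feasible_tuple(t: Sequence[str],
                      quota: int,
                      classes: List[Tuple[Set[str], int, int]]) -> bool:
    """A tuple t of applicants is feasible for an institute with capacity `quota`
    and classification given as (members, q_minus, q_plus) triples iff |t| <= Q(i)
    and every class bound is respected."""
    members = set(t)
    if len(members) > quota:
        return False
    for cls, q_minus, q_plus in classes:
        size = len(members & cls)
        if not (q_minus <= size <= q_plus):
            return False
    return True

# A toy institute with Q(i) = 2 and one class {a1, a2} bounded by q- = 0, q+ = 1.
toy_classes = [({"a1", "a2"}, 0, 1)]
assert is_feasible_tuple(("a1", "a3"), 2, toy_classes)       # respects the bound
assert not is_feasible_tuple(("a1", "a2"), 2, toy_classes)   # violates q+ = 1
```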
Definition 2. A matching µ is feasible if all the tuples µ(i), i ∈ I, are feasible. A feasible matching is stable if and only if there is no blocking group. A blocking group is defined as follows. Let µ(i) = (a_{i1}, a_{i2}, · · · , a_{ik}), k ≤ Q(i). A feasible tuple g = (a'_{i1}, a'_{i2}, · · · , a'_{ik'}), k ≤ k' ≤ Q(i), forms a blocking group (i; g) with institute i if
- for 1 ≤ j ≤ k, i ⪰_{a'_{ij}} µ(a'_{ij}) and a'_{ij} ⪰_i a_{ij};
- either there exists l, 1 ≤ l ≤ k, such that a'_{il} ≻_i a_{il} and i ≻_{a'_{il}} µ(a'_{il}), or k' > k.
Informally speaking, the definition requires that for a blocking group to be formed, all involved applicants have to be willing to switch to, or stay with, institute i. The collection of applicants in the blocking group should still respect the upper and lower bounds in each class; moreover, the institute gets a strictly better deal (in the Pareto-optimal sense). Note that when there is no class lower bound, then the stable matching as defined in Definition 2 can be equivalently defined as a feasible matching without the conventional blocking pairs (see Lemma 17 in Section 4). When the class lower bound is present, the definition of the blocking groups captures our intuition that an institute should not indiscriminately replace a lower ranking applicant assigned to it with a higher applicant (with whom it forms a blocking pair), otherwise, the outcome for it may not be a feasible one. In our proofs, we often use the notation µ(i)| a a ′ to denote a tuple formed by replacing a ∈ µ(i) with a ′ . The order of the tuple µ(i)| a a ′ is still based on institute i's preference list. If we write µ(i)|a, then this new tuple is obtained by adding a into µ(i) and re-ordered. In a matching µ, if a class C i j is fully-booked, i.e. |µ(i) ∩ C i j | = q + (C i j ), we often refer to such a class as a "bottleneck" class. We also define an "absorption" operation: given a set B of classes, ℜ(B) returns the set of classes which are not entirely contained in other classes in B.
Our Results. It would be of interest to know how complicated the classifications of the institutes can be while still allowing the problem to be solved in polynomial time. In this work, we study the classified stable matching problems whose classifications belong to a fixed set of "order types." The order type of a classification is the inclusion poset of all non-empty intersections of classes. We introduce necessary definitions to make our statement precise.

Definition 3. The class inclusion poset P(i) = (C̄(i), ≻) of an institute i is composed of sets of elements from L^i: C̄(i) = {C | C = C^i_j ∩ C^i_k, where C^i_j, C^i_k ∈ C(i)}. In P(i), C^i_j ≻ C^i_k if C^i_j ⊃ C^i_k; and C^i_j, C^i_k are incomparable if C^i_j ⊅ C^i_k and C^i_k ⊅ C^i_j.
Definition 4. Let P = {P 1 , P 2 , · · · , P k } be a set of posets. A classified stable matching instance (A, I) belongs to the group of P-classified stable matching problems if for each poset P j ∈ P, there exists an institute i ∈ I whose class inclusion poset P (i) is isomorphic to P j and conversely, every class inclusion poset P (i) is isomorphic to a poset in P.
We call a poset a downward forest if given any element, no two of its successors are incomparable. Our first main result is the following dichotomy theorem.
Theorem 5. Let P = {P_1, P_2, · · · , P_k} be a set of posets. P-classified stable matching problems can be solved in polynomial time if every poset P_j ∈ P is a downward forest; on the other hand, if P contains a poset P_j which is not a downward forest, it is NP-complete to decide the existence of a stable matching.
We remark that if P is entirely composed of downward forests, then every classification C(i) must be a laminar family. In this case, we call the problem laminar classified stable matching (henceforth LCSM).
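A classification is laminar exactly when every two of its classes are either disjoint or nested; the following minimal Python sketch checks this property pairwise (the function name is_laminar is an illustrative assumption, and the quadratic pairwise scan is chosen for clarity rather than efficiency):

```python
from typing import Iterable, Set

def is_laminar(classes: Iterable[Set[str]]) -> bool:
    """Return True iff every two classes are either disjoint or nested, i.e. the
    classification is a laminar family; two properly intersecting classes form
    the forbidden "V" in the class inclusion poset."""
    cs = [set(c) for c in classes]
    for idx, c1 in enumerate(cs):
        for c2 in cs[idx + 1:]:
            if (c1 & c2) and not (c1 <= c2 or c2 <= c1):
                return False          # properly intersecting classes found
    return True

# {a1,a2,a3} ⊃ {a1,a2} is laminar; adding {a2,a3} creates intersecting classes.
assert is_laminar([{"a1", "a2", "a3"}, {"a1", "a2"}])
assert not is_laminar([{"a1", "a2"}, {"a2", "a3"}])
```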
We present an O(m^2)-time algorithm for LCSM, where m is the total size of all preference lists. Our algorithm is an extension of the Gale-Shapley algorithm. Though intuitive, its correctness is difficult to argue due to the various constraints. Furthermore, we show that several well-known structural results for the hospitals/residents problem can be further generalized to LCSM. On the other hand, if some institute i has a classification C(i) violating laminarity, then P must contain a poset which has a "V" (where the "bottom" is induced by two intersecting classes in C(i) which are its parents "on top"). We will make use of this fact to design a gadget for our NP-completeness reduction. In particular, in our reduction, all institutes only use upper-bound constraints. Sections 2 and 3 will be devoted to these results.
Our dichotomy theorem implies a certain limit on the freedom of the classifications defined by the institutes. For example, an institute may want to classify the applicants based on two different criteria simultaneously (say by research fields and gender); however, our result implies this may cause the problem to become intractable.
In the second part, we study LCSM using a mathematical programming approach. Assume that there is no lower bound on the classes. We extend the set of linear inequalities used by Baïou and Balinski [3] to describe stable matchings and generalize a bin-packing algorithm of Sethuraman, Teo, and Qian [22] to prove that the polytope is integral. The integrality of our polytope allows us to use suitable objective functions to obtain various optimal stable matchings using Ellipsoid algorithm. As our LP has an exponential number of constraints, we also design a separation oracle.
By studying the geometric structure of fractional stable matchings, we are able to generalize a theorem of Teo and Sethuraman [23]: in (one-to-one) stable marriage, given any number of stable matchings, if we assign every man his median choice among all women with whom he is matched in the given set of matchings, and we do similarly for women, the outcome is still a stable matching. This theorem has been generalized in the context of the hospitals/residents problem [5,13,22]. We prove that in LCSM, this theorem still holds: if we apply this "median choice operation" to all applicants, the outcome is still a stable matching.
A final ramification of our polyhedral result is an answer to an open question posed by Sethuraman, Teo and Qian [22]: how do we describe the stable matching polytope in the classical "unclassified" many-to-many stable matching problem? We show this problem can be reduced to LCSM by suitable cloning and classifications.
All the polyhedral results will be presented in Section 4. In Section 5 we conclude. Omitted proofs and details can be found in the appendix.
An Algorithm for Laminar Classified Stable Matching
In this section, we present a polynomial time algorithm to find a stable matching if it exists in the given LCSM instance, otherwise, to report that none exists.
We pre-process our instance as follows. If applicant a is on institute i's preference list, we add a class C i a1 = {a} into C(i). Furthermore, we also add a class C i ♯ into C(i) including all applicants in L i . After this pre-processing, the set of classes in C(i) form a tree whose root is the C i ♯ ; moreover, an applicant a belongs to a sequence of classes a(C(i)) = (C i a1 , C i a2 , · · · , C i az (= C i ♯ )), which forms a path from the leaf to the root in the tree (i.e., C i aj is a super class of C i aj ′ , provided j ′ < j.) For each non-leaf class C i j , let c(C i j ) denote the set of its child classes in the tree. We can assume without loss of generality that
q−(C^i_j) ≥ Σ_{C^i_k ∈ c(C^i_j)} q−(C^i_k) for any non-leaf class C^i_j. Finally, let q+(C^i_♯) := Q(i), q−(C^i_♯) := Σ_{C^i_k ∈ c(C^i_♯)} q−(C^i_k); for all applicants a ∈ L^i, q+(C^i_{a1}) := 1 and q−(C^i_{a1}) := 0.
Our algorithm finds an applicant-optimal, institute-pessimal stable matching. Applicant-optimality means that all applicants get the best outcome among all stable matchings; on the other hand, institute-pessimality means that all institutes get an outcome which is "lexicographically" the worst for them. To be precise, suppose that µ(i) = (a_{i1}, a_{i2}, · · · , a_{ik}) and µ'(i) = (a'_{i1}, a'_{i2}, · · · , a'_{ik}) are the outcomes of two stable matchings for institute i (in LCSM, an institute always gets the same number of applicants in all stable matchings; see Theorem 15 below). If there exists k' ≤ k so that a_{ij} = a'_{ij} for all 1 ≤ j ≤ k' − 1 and a_{ik'} ≻_i a'_{ik'}, then institute i is lexicographically better off in µ than in µ'. We now sketch the high-level idea of our algorithm. We let applicants "propose" to the institutes from the top of their preference lists. Institutes make the decision of acceptance/rejection of the proposals based on certain rules (to be explained shortly). Applicants, if rejected, propose to the next highest-ranking institutes on their lists. The algorithm terminates when all applicants either end up with some institute or run out of their lists. Then we check whether the final outcome meets the upper and lower bounds of all classes. If yes, the outcome is a stable matching; if no, there is no stable matching in the given instance.
How the institutes make the acceptance/rejection decisions is the core of our algorithm. Intuitively, when an institute gets a proposal, it should consider two things: (i) will adding this new applicant violate the upper bound of some class? (ii) will adding this applicant deprive other classes of their necessary minimum requirement? If the answer to either of the two questions is positive, the institute should not just take the new applicant unconditionally; instead, it has to reject someone it currently has (not necessarily the newcomer).
Below we will design two invariants for all classes of an institute. Suppose that institute i gets a proposal from applicant a, who belongs to a sequence of classes a(C(i)) = (C^i_{a1}, C^i_{a2}, · · · , C^i_♯). We check this sequence of classes from the leaf to the root. If adding applicant a into class C^i_{aj} does not violate these invariants, we climb up and see whether adding applicant a into C^i_{a(j+1)} violates them. If we can reach all the way to C^i_♯ without violating the invariants in any class in a(C(i)), applicant a is simply added into institute i's new collection. If, on the other hand, adding applicant a into C^i_{a(j+1)} violates the invariants, institute i rejects some applicant in C^i_{a(j+1)} who is from a sequence of subclasses of C^i_{a(j+1)} which can afford to lose one applicant.
We define a deficiency number ∆(C i j ) for each class C i j ∈ C(i). Intuitively, the deficiency number indicates how many more applicants are necessary for class C i j to meet the lower bound of all its subclasses. This intuition translates into the following invariant:
Invariant A: ∆(C^i_j) ≥ Σ_{C^i_k ∈ c(C^i_j)} ∆(C^i_k), for every class C^i_j ∈ C(i) with c(C^i_j) ≠ ∅ and every i ∈ I.
In the beginning, ∆(C i j ) is set to q − (C i j ) and we will explain how ∆(C i j ) is updated shortly. Its main purpose is to make sure that after adding some applicants into C i j , there is still enough "space" for other applicants to be added into C i j so that we can satisfy the lower bound of all subclasses of C i j . In particular, we maintain
Invariant B: q − (C i j ) ≤ |µ(i) ∩ C i j | + ∆(C i j ) ≤ q + (C i j ), ∀C i j ∈ C(i), ∀i ∈ I.
We now explain how ∆(C^i_j) is updated. Under normal circumstances, we decrease ∆(C^i_j) by 1 once we add a new applicant into C^i_j. However, if Invariant A is already "tight", i.e., ∆(C^i_j) = Σ_{C^i_k ∈ c(C^i_j)} ∆(C^i_k), then we add the new applicant into C^i_j without decreasing ∆(C^i_j). The same situation may repeat until the point that |µ(i) ∩ C^i_j| + ∆(C^i_j) = q+(C^i_j) and adding another new applicant into C^i_j is about to violate Invariant B. In this case, something has to be done to ensure that Invariant B holds: some applicant in C^i_j has to be rejected, and the question is whom. Let us call a class C^i_j a surplus class if |µ(i) ∩ C^i_j| + ∆(C^i_j) > q−(C^i_j), and define an affluent set for each class C^i_j as follows:

$(C^i_j, µ(i)) = {a | a ∈ µ(i) ∩ C^i_j; for each C^i_{j'} ∈ a(C(i)) with C^i_{j'} ⊂ C^i_j, |µ(i) ∩ C^i_{j'}| + ∆(C^i_{j'}) > q−(C^i_{j'})}.

In words, the affluent set $(C^i_j, µ(i)) is composed of the applicants currently assigned to institute i that are part of C^i_j and each of whom belongs to a sequence of surplus subclasses of C^i_j. In our algorithm, to prevent Invariant B from being violated in a non-leaf class C^i_j, institute i rejects the lowest-ranking applicant in the affluent set $(C^i_j, µ(i)). The pseudo-code of the algorithm is presented in Figure 1.
Initialization
0: For all i ∈ I and all C^i_j ∈ C(i), ∆(C^i_j) := q−(C^i_j);
Algorithm
1: While there exists an applicant a who is unassigned and has not been rejected by all institutes on his list
2:   Applicant a proposes to the highest-ranking institute i to whom he has not proposed so far;
3:   Assume that a(C(i)) = (C^i_{a1}, C^i_{a2}, · · · , C^i_{az} (= C^i_♯));
4:   µ(i) := µ(i) ∪ {a};  // institute i accepts applicant a provisionally
5:   For t = 2 To z  // applicant a can be added into C^i_{a1} directly
6:     If ∆(C^i_{at}) > Σ_{C^i_k ∈ c(C^i_{at})} ∆(C^i_k) Then ∆(C^i_{at}) := ∆(C^i_{at}) − 1;
7:     If |µ(i) ∩ C^i_{at}| + ∆(C^i_{at}) > q+(C^i_{at}) Then
8:       Let $(C^i_{at}, µ(i)) = {a' | a' ∈ µ(i) ∩ C^i_{at}; for each C^i_{j'} ∈ a'(C(i)) with C^i_{j'} ⊂ C^i_{at}, |µ(i) ∩ C^i_{j'}| + ∆(C^i_{j'}) > q−(C^i_{j'})};
9:       Let the lowest-ranking applicant in $(C^i_{at}, µ(i)) be a†;
10:      µ(i) := µ(i)\{a†};  // institute i rejects applicant a†
11:      GOTO 1;
12: If there exists an institute i with ∆(C^i_♯) > 0 Then Report "There is no stable matching";
13: Else Return the outcome µ, which is a stable matching;

Fig. 1. The pseudo-code of the algorithm. It outputs the applicant-optimal, institute-pessimal matching µ if it exists; otherwise, it reports that there is no stable matching.
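To complement the pseudo-code, here is a minimal Python sketch of the institute-side decision in lines 4–11 of Figure 1 (the class representation Cls, the function name handle_proposal, and the dictionary-based paths and ranks are illustrative assumptions; the outer proposal loop, the initialisation ∆ := q−, and the final check of ∆(C^i_♯) are omitted):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set

@dataclass
class Cls:
    members: Set[str]                   # applicants of L^i belonging to this class
    q_plus: int
    q_minus: int = 0
    delta: int = 0                      # deficiency number; initialise to q_minus
    children: List["Cls"] = field(default_factory=list)

def handle_proposal(a: str,
                    mu_i: Set[str],                 # current assignment µ(i) of the institute
                    path: Dict[str, List[Cls]],     # b -> b(C(i)), classes from leaf to root
                    rank: Dict[str, int]) -> Optional[str]:
    """Institute-side step of Figure 1 (lines 4-11): provisionally accept a, walk
    a's class path towards the root maintaining Invariants A and B, and return the
    rejected applicant (possibly a itself), or None if nobody is rejected."""
    mu_i.add(a)
    for C in path[a][1:]:                           # the singleton leaf class never overflows
        if C.delta > sum(ch.delta for ch in C.children):
            C.delta -= 1                            # line 6: shrink the deficiency if Invariant A allows
        if len(mu_i & C.members) + C.delta > C.q_plus:        # line 7: Invariant B about to break
            affluent = [b for b in mu_i & C.members           # line 8: the affluent set of C
                        if all(len(mu_i & D.members) + D.delta > D.q_minus
                               for D in path[b] if D.members < C.members)]
            rejected = max(affluent, key=lambda b: rank[b])   # lowest-ranking; non-empty by Lemma 6(iib)
            mu_i.discard(rejected)                            # lines 9-10: reject and stop
            return rejected
    return None
```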
Correctness of the Algorithm
In our discussion, C^i_{at} is a class in a(C(i)), where t is the index based on the size of the class C^i_{at} in a(C(i)). Assume that during the execution of the algorithm, applicant a proposes to institute i; when the index t of the For loop of Line 5 becomes l and this results in a† being rejected, we say applicant a is stopped at class C^i_{al}, and class C^i_{al} causes applicant a† to be rejected. The first lemma describes some basic behavior of our algorithm.

Lemma 6. (i) Immediately before the end of the while loop, Invariants A and B hold. (ii) Let applicant a be the new proposer and assume he is stopped at class C^i_{al}. Then (iia) during the time interval between his making the new proposal and his being stopped at C^i_{al}, ∆(C^i_{at}) remains unchanged for all 1 ≤ t ≤ l; moreover, given any class C^i_{at}, 2 ≤ t ≤ l, ∆(C^i_{at}) = Σ_{C^i_k ∈ c(C^i_{at})} ∆(C^i_k). (iib) When a is stopped at a non-leaf class C^i_{al}, $(C^i_{al}, µ(i)) ≠ ∅; in particular, any class C^i_{at}, 1 ≤ t ≤ l − 1, is temporarily a surplus class.
(iii) Immediately before the end of the while loop, if class C^i_j is a non-leaf surplus class, then ∆(C^i_j) = Σ_{C^i_k ∈ c(C^i_j)} ∆(C^i_k).
(iv) Suppose that applicant a is the new proposer, C^i_{al} ∈ a(C(i)) causes applicant a† to be rejected, and a†(C(i)) = (C^i_{a†1}, C^i_{a†2}, · · · , C^i_{a†l†} (= C^i_{al}), · · · ). Then, immediately before the end of the while loop, ∆(C^i_{a†t'}) = Σ_{C^i_k ∈ c(C^i_{a†t'})} ∆(C^i_k) for all 2 ≤ t' ≤ l†; moreover, |µ(i) ∩ C^i_{a†l†}| + ∆(C^i_{a†l†}) = q+(C^i_{a†l†}).
Proof. (i) can be proved by induction on the number of proposals institute i gets. For (iia), since Invariant A is maintained, if ∆(C^i_{at}) were decreased for some class C^i_{at}, 1 ≤ t ≤ l, the algorithm would ensure that applicant a is not stopped in any class, leading to a contradiction. Now by (iia), the classes {C^i_{at}}^{l−1}_{t=1} are (temporarily) surplus classes when applicant a is stopped at C^i_{al}, so $(C^i_{al}, µ(i)) ≠ ∅, establishing (iib). Note that this also guarantees that the proposed algorithm is never "stuck." (iii) can be proved inductively on the number of proposals that institute i gets. Assuming a is the new proposer, there are two cases: (1) Suppose that applicant a is not stopped in any class. Then a class C^i_{at} ∈ a(C(i)) can become surplus only if the stated condition holds; (2) Suppose that applicant a is stopped in some class, which causes a† to be rejected. Let the smallest class containing both a and a† be C^i_{al'}. Applying (iia) and observing the algorithm, it can be verified that only a class C^i_{at} ⊂ C^i_{al'} can become a surplus class, and for such a class the stated condition holds. Finally, for the first part of (iv), let C^i_{al'} denote the smallest class containing both a and a†. Given
a class C i a † t ′ , if C i al ′ ⊆ C i a † t ′ ⊆ C i al , (iia) gives the proof. If C i a † t ′ ⊂ C i al ′ ,
observe that the former must have been a surplus class right before applicant a made the new proposal. Moreover, before applicant a proposed, (iii) implies that for a non-leaf class C i a † t ′ ⊂ C i al ′ , the stated condition regarding the deficiency numbers is true. The last statement of (iv) is by the algorithm and Invariant B.
⊓ ⊔
Lemma 7. Assume that a†(C(i)) = (C^i_{a†1}, C^i_{a†2}, · · · , C^i_{a†l†}, · · · ). During the execution of the algorithm, suppose that class C^i_{a†l†} causes applicant a† to be rejected. In the subsequent execution of the algorithm, letting µ(i) be the assignment of institute i at the end of the while loop, there exists l‡ ≥ l† such that |µ(i) ∩ C^i_{a†l‡}| + ∆(C^i_{a†l‡}) = q+(C^i_{a†l‡}); furthermore, for all 2 ≤ t ≤ l‡, all applicants in $(C^i_{a†t}, µ(i)) rank higher than a†. Moreover, for all 2 ≤ t ≤ l‡, ∆(C^i_{a†t}) = Σ_{C^i_k ∈ c(C^i_{a†t})} ∆(C^i_k).
Proof. We prove based on the induction on the number of proposals institute i receives after a † is rejected. The base case is when a † is just rejected. Let l ‡ = l † . Then it is obvious that all applicants in the affluent sets $(C i a † t , µ(i)), 2 ≤ t ≤ l ‡ , rank higher than a † and the rest of the lemma holds by Lemma 6(iv).
For the induction step, let a be the new proposer. There are four cases. Except the second case, we let l ‡ remain unchanged after a's proposal.
- Suppose that a ∉ C^i_{a†l‡} and he does not cause anyone in C^i_{a†l‡} to be rejected. Then the proof is trivial.
- Suppose that a ∉ C^i_{a†l‡} and he is stopped in class C^i_{al}, which causes an applicant a* ∈ C^i_{a†l‡} to be rejected. a* must be part of the affluent set $(C^i_{a†l‡}, µ(i)) before a proposed. By the induction hypothesis, a* ≻_i a†. Moreover, since a* is chosen to be rejected, all the applicants in the (new) affluent sets $(C^i_{a†t}, µ(i)), for each class C^i_{a†t} with C^i_{a†l‡} ⊂ C^i_{a†t} ⊆ C^i_{al}, rank higher than a*, hence also higher than a†. Now let C^i_{al} be the new C^i_{a†l‡}; the rest of the lemma follows from Lemma 6(iv).
-Suppose that a ∈ C i a † l ‡ and he is not stopped in C i a † l ‡ or any of its subclasses. We argue that a must be accepted without causing anyone to be rejected; moreover, the applicants in all affluent sets $(C i a † t , µ(i)), for all 1 ≤ t ≤ l ‡ remain unchanged. Let the smallest class in a † (C(i)) containing a be C i a †l . Note that before a proposed, the induction hypothesis states that |µ
(i) ∩ C i a † l ‡ | + ∆(C i a † l ‡ ) = q + (C i a † l ‡ ).
As applicant a is not stopped at C i a † l ‡ , the set of values ∆(C i a † t ),l ≤ t ≤ l ‡ , must have decreased during his proposal and this implies that he will not be stopped in any class. Now let a(C(i)) = (C i a1 , · · · , C i al , C i a(l+1) (= C i a †l ), · · · ). Since ∆(C i a †l ) = C i k ∈c(C i a †l ) ∆(C i k ) before applicant a proposed by the induction hypothesis, for ∆(C i a †l ) to decrease, ∆(C i al ) must have decreased as well. Choose the smallest class C i al * ⊂ C i a †l whose value ∆(C i al * ) has decreased during a's proposal. We claim that C i al * must have been a non-surplus class before and after applicant a's proposal. If the claim is true, then all the affluent sets $(C i a † t , µ(i)), for all 1 ≤ t ≤ l ‡ , remain unchanged after applicant a's proposal.
It is obvious that C i al * = C i a1 . So assume that C i al * is a non-leaf class. Suppose for a contradiction that C i al * was a surplus class before a proposed. Lemma 6(iii) implies that
∆(C i a † l * ) = C i k ∈c(C i a † l * ) ∆(C i k ) before a proposed.
Then for ∆(C i a † l * ) to decrease during a's proposal, ∆(C i a † (l * −1) ) must have decreased as well. But then this contradicts our choice of C i a † l * . So we establish that C i al * was not surplus and remains so after a's proposal. -Suppose that a ∈ C i a † l ‡ and when he reaches a subclass of C i a † l ‡ or the class itself, the latter causes some applicant a * to be rejected. To avoid trivialities, assume a = a * . Let the smallest class in a † (C(i)) containing a be C i a †l and the smallest class in a † (C(i)) containing a * be C i a † l * . Below we only argue that the case that C i a †l ⊆ C i a † l * . The other case that C i a † l * ⊂ C i a †l follows essentially the same argument. After a's proposal, observe that only the affluent sets $(C i a † t , µ(i)),l ≤ t < l * , can have new members (who are from the child class of C i a †l containing a). Without loss of generality, let G be the set of new members added into one of the any above sets. To complete the proof, we need to show that either G = ∅ or all members in G rank higher than a † . If before applicant a proposed, a * belonged to a sequence of surplus classes C i a * t ⊂ C i a † l * , he was also part of the affluent set $(C i a † l * , µ(i)) or part of µ(i)∩C i a † 1 before a proposed. By induction hypothesis, a * ≻ i a † . Observing Lemma 6(iib), all applicants in G must rank higher than a * , hence also than a † . On the other hand, if a * belongs to some class C i a * t ⊂ C i a † l * which was not surplus before a proposed, then C i a * l = C i a * l * and C i a * t must also contain a and remain a non-surplus class after a's proposal. In this case G = ∅.
⊓ ⊔
The following lemma is an abstraction of several counting arguments that we will use afterwards.
Lemma 8. Let each class C^i_j be associated with two numbers α^i_j and β^i_j with q−(C^i_j) ≤ α^i_j, β^i_j ≤ q+(C^i_j). Given any non-leaf class C^i_j, α^i_j = Σ_{C^i_k ∈ c(C^i_j)} α^i_k and β^i_j ≥ Σ_{C^i_k ∈ c(C^i_j)} β^i_k; moreover, if β^i_j = Σ_{C^i_k ∈ c(C^i_j)} β^i_k, then such a non-leaf class C^i_j is said to be tight in β. If β^i_j > q−(C^i_j), then C^i_j has to be tight in β.
(i) Given a non-leaf class
C i a † l † with α i a † l † < β i a † l † , we can find a sequence of classes C i a † l † ⊃ · · · ⊃ C i a † 1 , where α i a † t < β i a † t , for 1 ≤ t ≤ l † . (ii) Given a non-leaf class C i x with α i x ≤ β i x , suppose that there exists a leaf class C i a φ 1 ⊂ C i x such that α i a φ 1 > β i a φ 1 . Moreover, all classes C i a φ t are tight in β, where C i a φ 1 ⊆ C i a φ t ⊆ C i x , then we can find a class C i x ′ , where C i a φ 1 ⊂ C i x ′ ⊆ C i x , α i x ′ ≤ β i x ′ ,
and two sequences of classes with the following properties:
(iia) C^i_{aφ1} ⊂ C^i_{aφ2} ⊂ · · · ⊂ C^i_{aφlφ} ⊂ C^i_{x'}, where α^i_{aφt} > β^i_{aφt} for 1 ≤ t ≤ lφ;
(iib) C^i_{x'} ⊃ C^i_{a†l†} ⊃ · · · ⊃ C^i_{a†1}, where α^i_{a†t} < β^i_{a†t} for 1 ≤ t ≤ l†.

Proof. For (i), since q−(C^i_{a†l†}) ≤ α^i_{a†l†} < β^i_{a†l†}, class C^i_{a†l†} is tight in β. Therefore, Σ_{C^i_k ∈ c(C^i_{a†l†})} α^i_k = α^i_{a†l†} < β^i_{a†l†} = Σ_{C^i_k ∈ c(C^i_{a†l†})} β^i_k. By counting, there exists a class C^i_{a†(l†−1)} ∈ c(C^i_{a†l†}) with q−(C^i_{a†(l†−1)}) ≤ α^i_{a†(l†−1)} < β^i_{a†(l†−1)}
. Repeating the same argument gives us the sequence of classes. For (ii), let us climb up the tree from C i a φ 1 until we meet a class C i
x ′ ⊆ C i x with α i x ′ ≤ β i x ′ .
This gives us the sequence of classes stated in (iia).
Now since the class C^i_{x'} is tight in β, Σ_{C^i_k ∈ c(C^i_{x'})} α^i_k = α^i_{x'} ≤ β^i_{x'} = Σ_{C^i_k ∈ c(C^i_{x'})} β^i_k. Moreover, as C^i_{aφlφ} ∈ c(C^i_{x'}) and α^i_{aφlφ} > β^i_{aφlφ}, by counting, we can find another class C^i_{a†l†} ∈ c(C^i_{x'})\{C^i_{aφlφ}} such that β^i_{a†l†} > α^i_{a†l†} ≥ q−(C^i_{a†l†})
. Now applying (i) gives us the sequence of classes in (iib).
⊓ ⊔
We say that (i; a) is a stable pair if there exists a stable matching in which applicant a is assigned to institute i. A stable pair is by-passed if institute i rejects applicant a during the execution of our algorithm.
Lemma 9. During the execution of the algorithm, if an applicant a φ is rejected by institute i, then (i; a φ ) is not a stable pair.
Proof. We prove by contradiction. Assume that (i; a φ ) is the first by-passed stable pair and there exists a stable matching µ φ in which µ φ (a φ ) = i. For each class C i j ∈ C(i), we associate two numbers
α i j := |µ φ (i) ∩ C i j | and β i j := |µ(i) ∩ C i j | + ∆(C i j ).
Here ∆(·)s are the values recorded in the algorithm right after a φ is rejected (before the end of the while loop); similarly, µ(i) is the assignment of i at that point.
It is obvious that α i a φ 1 > β i a φ 1 and the class C i x causing a φ to be rejected is not C i a φ 1 . By
Lemma 6(iv), all classes C i a φ t are tight in β, where C i a φ 1 ⊂ C i a φ t ⊆ C i x .
It can be checked that all the conditions stated in Lemma 8(ii) are satisfied. In particular,
β i x = q + (C i x ) ≥ α i x ; moreover, if β i j > q − (C i j ), C i j must be tight (by Lemma 6(iii)). So, we can find two sequences of classes {C i a φ t } l φ t=1 and {C i a † t } l † t=1 , where C i a φ l φ , C i a † l † ∈ c(C i x ′ ) and C i x ′ ⊆ C i x ,
with the following properties:
q + (C i a φ t ) ≥ |µ φ (i) ∩ C i a φ t | > |µ(i) ∩ C i a φ t | + ∆(C i a φ t ) ≥ q − (C i a φ t ), ∀t, 1 ≤ t ≤ l φ ; q − (C i a † t ) ≤ |µ φ (i) ∩ C i a † t | < |µ(i) ∩ C i a † t | + ∆(C i a † t ) ≤ q + (C i a † t ), ∀t, 1 ≤ t ≤ l † .
The second set of inequalities implies that the classes {C i a † t } l † t=1 are surplus in µ. Thus there exists an applicant a † ∈ (µ(i)\µ φ (i)) ∩ C i a † 1 . Since (i; a φ ) is the first by-passed stable pair, i ≻ a † µ φ (a † ) and since a φ is rejected instead of a † , a † ≻ i a φ . Now observe the tuple µ φ (i)| a φ a † is feasible due to the above two sets of strict inequalities. Thus we have a group (i; µ φ (i)| a φ a † ) to block µ φ , a contradiction.
⊓⊔

Lemma 10. At the termination of the algorithm, if there exists an institute i ∈ I such that ∆(C^i_♯) > 0, there is no stable matching in the given instance.
Proof. Suppose, for a contradiction, that there exists an institute i with ∆(C i ♯ ) > 0 and there is a stable matching µ φ . Let µ be the assignment when the algorithm terminates. By Lemma 9, if an applicant is unmatched in µ, he cannot be assigned in µ φ either. So |µ φ | ≤ |µ|. In the following, ∆(·)s refer to values recorded in the final outcome of the algorithm. Consider two cases.
- Suppose that |µ_φ(i)| > |µ(i) ∩ C^i_♯|. Then as |µ_φ| ≤ |µ|, we can find another institute i' ≠ i such that |µ_φ(i')| < |µ(i') ∩ C^{i'}_♯|. For each class C^{i'}_j ∈ C(i'), let α^{i'}_j := |µ_φ(i') ∩ C^{i'}_j| and β^{i'}_j := |µ(i') ∩ C^{i'}_j| + ∆(C^{i'}_j)
. It can be checked that the condition stated in Lemma 8(i) is satisfied (note that those β i ′ j fulfill the condition due to Lemma 6(iii)). Therefore, we can find a sequence of
classes {C i ′ a † t } l † t=1 , where C i ′ a † l † = C i ′ ♯ , and |µ φ (i ′ ) ∩ C i ′ a † t | < |µ(i ′ ) ∩ C i ′ a † t | + ∆(C i ′ a † t ) ≤ q + (C i ′ a † t ), ∀t, 1 ≤ t ≤ l † , where the second inequality follows from Invariant B. Then there exists an applicant a † ∈ (µ(i ′ )\µ φ (i ′ )) ∩ C i ′ a † 1 . By Lemma 9, i ′ ≻ a † µ φ (a † )
, giving us a group (i ′ ; µ φ (i ′ )|a † ) to block µ φ , a contradiction. Note the feasibility of µ φ (i ′ )|a † is due to the above set of strict inequalities.
-Suppose that |µ φ (i)| ≤ |µ(i) ∩ C i ♯ |.
We first claim that C i ♯ must be a surplus class in µ(i). If not,
then q − (C i ♯ ) = ∆(C i ♯ ) + |µ(i) ∩ C i ♯ | > |µ(i) ∩ C i ♯ |, implying that |µ φ (i)| ≥ q − (C i ♯ ) > |µ(i) ∩ C i ♯ |, a contradiction. So C i
♯ is a surplus class, and by Lemma 6(iii),
|µ φ (i)| = C i k ∈c(C i ♯ ) |µ φ (i) ∩ C i k | ≤ |µ(i) ∩ C i ♯ | < |µ(i) ∩ C i ♯ | + ∆(C i ♯ ) = C i k ∈c(C i ♯ ) |µ(i) ∩ C i k | + ∆(C i k ).
For each class C i j ∈ C(i), let α i j := |µ φ (i)∩C i j | and β i j := |µ(i)∩C i j |+∆(C i j ) and invoke Lemma 8(i). The above inequality implies that α i ♯ < β i ♯ and note that by Lemma 6(iii), the condition regarding β is satisfied. Thus we have a sequence of surplus classes
C^i_{a†l†} (= C^i_♯) ⊃ · · · ⊃ C^i_{a†1} so that q^−(C^i_{a†t}) ≤ |µ_φ(i) ∩ C^i_{a†t}| < |µ(i) ∩ C^i_{a†t}| + ∆(C^i_{a†t}) ≤ q^+(C^i_{a†t}), ∀t, 1 ≤ t ≤ l†, implying that there exists an applicant a† ∈ (µ(i)\µ_φ(i)) ∩ C^i_{a†1} and i ≻_{a†} µ_φ(a†) by virtue of Lemma 9. The tuple µ_φ(i)|a† is feasible because of the above set of strict inequalities. Now (i; µ_φ(i)|a†) blocks µ_φ, a contradiction.
⊓ ⊔ Lemma 11. Suppose that in the final outcome µ, for each institute i ∈ I, ∆(C i ♯ ) = 0. Then µ is a stable matching.
Proof. For a contradiction, assume that a group (i; g) blocks µ. Let a_φ be the highest-ranking applicant in g\µ(i). Since a_φ is part of the blocking group, he must have proposed to and been rejected by institute i during the execution of the algorithm; thus i ≻_{a_φ} µ(a_φ). By Lemma 7, there
exists a class C i a φ l ‡ such that |µ(i) ∩ C i a φ l ‡ | + ∆(C i a φ l ‡ ) = |µ(i) ∩ C i a φ l ‡ | = q + (C i a φ l ‡ ). Moreover, it is obvious that |g ∩ C i a φ 1 | > |µ(i) ∩ C i a φ 1 |.
We now make use of Lemma 8(ii) by letting α i j := |g ∩ C i j | and β i j := |µ(i) ∩ C i j | for each class C i j ∈ C(i). Note that all classes are tight in β, C i a φ 1 ⊂ C i a φ l ‡ , and
|µ(i) ∩ C i a φ l ‡ | = q + (C i a φ l ‡ ) ≥ |g ∩ C i a φ l ‡ |,
satisfying all the necessary conditions. Thus, we can discover a sequence of classes
{C^i_{a†t}}_{t=1}^{l†} stated in Lemma 8(iib), where C^i_{a†l†} ∈ c(C^i_{a_φ l}) and C^i_{a_φ 1} ⊂ C^i_{a_φ l} ⊆ C^i_{a_φ l‡}, such that q^−(C^i_{a†t}) ≤ |g ∩ C^i_{a†t}| < |µ(i) ∩ C^i_{a†t}| ≤ q^+(C^i_{a†t}), ∀t, 1 ≤ t ≤ l†, and there exists an applicant a† ∈ (µ(i)\g) ∩ C^i_{a†1}.
The above set of strict inequalities means that all classes C^i_{a†t}, 1 ≤ t ≤ l†, are surplus classes in µ. Then a† forms part of the affluent set of C^i_{a_φ l} in µ(i). By Lemma 7, they all rank higher than a_φ. This contradicts our assumption that a_φ is the highest-ranking applicant in g\µ(i).
⊓ ⊔ Lemma 12. Suppose that in the final outcome µ, for each institute i ∈ I, ∆(C i ♯ ) = 0. Then µ is an institute-pessimal stable matching.
Proof. Suppose, for a contradiction, that there exists a stable matching µ φ such that there exists an institute i which is lexicographically better off in µ than in µ φ . Let a † be the highest ranking
applicant in µ(i)\µ_φ(i). By Lemma 9, i ≻_{a†} µ_φ(a†). If |µ_φ(i) ∩ C^i_{a†t}| < |µ(i) ∩ C^i_{a†t}| ≤ q^+(C^i_{a†t}) for all classes C^i_{a†t} ∈ a†(C(i)), then (i; µ_φ(i)|a†) blocks µ_φ, a contradiction. So choose the smallest class C^i_x ∈ a†(C(i)) such that |µ_φ(i) ∩ C^i_x| ≥ |µ(i) ∩ C^i_x|. It is clear that C^i_x ⊃ C^i_{a†1}. Now we apply Lemma 8(ii) by letting α^i_j := |µ(i) ∩ C^i_j| and β^i_j := |µ_φ(i) ∩ C^i_j| for each class C^i_j ∈ C(i).
It can be checked all conditions stated in Lemma 8(ii) are satisfied. So there exists a class
C i x ′ such that C i a † 1 ⊂ C i x ′ ⊆ C i x and we can find two sequences of classes {C i a φ t } l φ t=1 and {C i a † t } l † t=1 , where C i a φ l φ , C i a † l † ∈ c(C i x ′ )
, with the following properties:
q + (C i a † t ) ≥ |µ(i) ∩ C i a † t | > |µ φ (i) ∩ C i a † t | ≥ q − (C i a † t ), ∀t, 1 ≤ t ≤ l † ; q − (C i a φ t ) ≤ |µ(i) ∩ C i a φ t | < |µ φ (i) ∩ C i a φ t | ≤ q + (C i a φ t ), ∀t, 1 ≤ t ≤ l φ .
The second set of inequalities implies that we can find an applicant a φ ∈ (µ φ (i)\µ(i)) ∩ C i a φ 1 . Recall that we choose a † to be the highest ranking applicant in µ(i)\µ φ (i), so a † ≻ i a φ . Now we have a group (i; µ φ (i)| a φ a † ) to block µ φ to get a contradiction. The feasibility of µ φ (i)| a φ a † is due to the above two sets of strict inequalities.
⊓ ⊔ Based on Lemmas 9, 10, 11, and 12, we can draw the conclusion in this section.
Theorem 13. In O(m 2 ) time,
where m is the total size of all preferences, the proposed algorithm discovers the applicant-optimal-institute-pessimal stable matching if stable matchings exist in the given LCSM instance; otherwise, it correctly reports that there is no stable matching. Moreover, if there is no lower bound on the classes, there always exists a stable matching.
To see the complexity, first note that there can be only O(m) proposals. The critical part of the implementation is to find the lowest-ranking applicant in each affluent set efficiently. This can be done by remembering the lowest-ranking applicant in each class; this information can be updated after each proposal in O(m) time, since the number of classes of each institute is O(m), given that the classes form a laminar family.
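To make this bookkeeping concrete, the following Python sketch recomputes, for a single institute, the lowest-ranking assigned applicant inside every class of its laminar classification after a proposal; the lowest-ranking member of an affluent set can then be read off from these records. The data layout (rank dictionary, classes as sets) is hypothetical, and the sketch only illustrates the O(m) update mentioned above, not the paper's actual implementation.

def lowest_ranked_per_class(assigned, classes, rank):
    # assigned: set of applicants currently held by the institute
    # classes:  dict class_name -> set of applicants (a laminar family)
    # rank:     dict applicant -> position on the institute's list (0 = best)
    worst = {}
    for name, members in classes.items():
        inside = [a for a in assigned if a in members]
        worst[name] = max(inside, key=lambda a: rank[a]) if inside else None
    return worst

# Toy example with two nested classes {a1} and {a1, a2, a3}.
rank = {"a1": 0, "a2": 1, "a3": 2}
classes = {"C1": {"a1"}, "C2": {"a1", "a2", "a3"}}
print(lowest_ranked_per_class({"a1", "a3"}, classes, rank))  # lowest-ranked member of C2 is a3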
Structures of Laminar Classified Stable Matching
Recall that we define the "absorption" operation as follows. Given a family of classes B, ℜ(B) returns the set of classes which are not entirely contained in other classes in B. Note that in LCSM, ℜ(B) will be composed of a pairwise disjoint set of classes.
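The absorption operation is straightforward to compute; the small Python sketch below (classes modelled as frozensets, a hypothetical representation) keeps only the maximal classes of B, which in LCSM form a pairwise disjoint family.

def absorb(B):
    # B: iterable of frozensets (classes); return the classes of B that are
    # not strictly contained in another class of B.
    B = list(B)
    return [C for C in B if not any(C < D for D in B)]  # C < D means strict containment

B = [frozenset({"a1"}), frozenset({"a1", "a2"}), frozenset({"a3"})]
print(absorb(B))  # the two maximal classes {a1, a2} and {a3}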
We review the well-known rural hospitals theorem [8,15].
Theorem 14. (Rural Hospitals Theorem) In the hospitals/residents problem, the following holds.
(i) A hospital gets the same number of residents in all stable matchings, and as a result, all stable matchings are of the same cardinality. (ii) A resident who is assigned in one stable matching gets assigned in all other stable matchings;
conversely, an unassigned resident in a stable matching remains unassigned in all other stable matchings. (iii) An under-subscribed hospital gets the same set of residents in all other stable matchings.
It turns out that rural hospitals theorem can be generalized in LCSM. On the other hand, if some institutes use intersecting classes in their classifications, rural hospitals theorem fails (stable matching size may differ). See the appendix for such an example.
Theorem 15. (Generalized Rural Hospitals Theorem in LCSM) Let µ be a stable matching. Given any institute i, suppose that B is the set of bottleneck classes in µ(i) and D is the subset of classes in C(i) such that ℜ(B) ∪ D partitions L i . The following holds.
(i) An institute gets the same number of applicants in all stable matchings, and as a result, all stable matchings are of the same cardinality. (ii) An applicant who is assigned in one stable matching gets assigned in all other stable matchings;
conversely, an unassigned applicant in a stable matching remains unassigned in all other stable matchings. (iii) Every class C i k ∈ ℜ(B) ∪ D has the same number of applicants in all stable matchings. (iv) In a class C i k ⊆ C ∈ D, or in a class C i k which contains only classes in D, the same set of applicant in class C i k will be assigned to institute i in all stable matchings.
(v) A class C i k can have different sets of applicants in different stable matchings only if C i k ⊆ C ∈ ℜ(B) or C i k ⊇ C ∈ ℜ(B).
Proof. We choose µ † to be the applicant-optimal stable matching.
Claim A: Suppose that a ∈ µ † (i)\µ(i). Then there exists a class C i al ∈ a(C(i)) such that (i) |µ(i) ∩ C i al | = q + (C i al ), and (ii) a ∈ C i al ⊆ C ∈ ℜ(B). Proof of Claim A. If for all classes C i at ∈ a(C(i)), |µ(i) ∩ C i at | < q + (C i at )
, then as µ† is applicant-optimal, i ≻_a µ(a), so (i; µ(i)|a) blocks µ, a contradiction. This establishes (i). (ii) follows easily. ⊓⊔
Let B̃ ⊆ B be the subset of these bottleneck classes containing at least one applicant of µ†(i)\µ(i). By Claim A(ii), ℜ(B̃) ⊆ ℜ(B). This implies that for all classes C^i_k ∈ (ℜ(B)\ℜ(B̃)) ∪ D, |µ(i) ∩ C^i_k| ≥ |µ†(i) ∩ C^i_k|. Combining this fact with Claim A(ii), we have
|µ(i)| = Σ_{C^i_k ∈ (ℜ(B)\ℜ(B̃))∪D} |µ(i) ∩ C^i_k| + Σ_{C^i_k ∈ ℜ(B̃)} |µ(i) ∩ C^i_k|
≥ Σ_{C^i_k ∈ (ℜ(B)\ℜ(B̃))∪D} |µ†(i) ∩ C^i_k| + Σ_{C^i_k ∈ ℜ(B̃)} q^+(C^i_k)      (*)
≥ Σ_{C^i_k ∈ (ℜ(B)\ℜ(B̃))∪D} |µ†(i) ∩ C^i_k| + Σ_{C^i_k ∈ ℜ(B̃)} |µ†(i) ∩ C^i_k| = |µ†(i)|.
Thus, |µ| ≥ |µ † | and it cannot happen that |µ| > |µ † |, otherwise, there exists an applicant who is assigned in µ but not in µ † . This contradicts the assumption that the latter is applicant-optimal. This completes the proof of (i) and (ii) of the theorem.
Since |µ| = |µ † |, Inequality (*) holds with equality. We make two observations here.
Observation 1: For each class C i k ∈ ℜ(B), it is also a bottleneck in µ † (i). Observation 2: an applicant a ∈ µ † (i)\µ(i) must belong to a bottleneck class in µ † (i).
Let B† be the set of bottleneck classes in µ†(i) and choose D† so that ℜ(B†) ∪ D† partitions L^i. By Observation 2, each applicant in µ†(i) ∩ C^i_k, where C^i_k ∈ D†, must be part of µ(i). So for each class C^i_k ∈ D†, |µ(i) ∩ C^i_k| ≥ |µ†(i) ∩ C^i_k|. We claim that it cannot happen that |µ(i) ∩ C^i_k| > |µ†(i) ∩ C^i_k|; otherwise, since |µ(i)| = |µ†(i)|, one of the following two cases would arise.
- There exists another class C^i_{k'} ∈ D† so that |µ(i) ∩ C^i_{k'}| < |µ†(i) ∩ C^i_{k'}|. Then we have a contradiction to Observation 2.
- There exists another class C^i_{k'} ∈ ℜ(B†) so that |µ(i) ∩ C^i_{k'}| < |µ†(i) ∩ C^i_{k'}|.
For each class C i j ∈ C(i), let α i j := |µ(i) ∩ C i j | and β i j := |µ † (i) ∩ C i j |. Then we can invoke Lemma 8(i) and find an applicant a φ ∈ µ † (i)\µ(i) so that for each class C i
a φ t ∈ a φ (C(i)), C i a φ t ⊆ C i k ′ , |µ(i) ∩ C i a φ t | < |µ † (i) ∩ C i a φ t | ≤ q + (C i a φ t ).
Then by Claim A(ii) and Observation 1, there must exist another class C i k ′′ ∈ ℜ(B) containing a φ and C i k ′′ ⊃ C i k ′ . By Observation 1, C i k ′′ is also a bottleneck class in µ † (i). This contradicts the assumption that C i k ′ ∈ ℜ(B † ). So we have that for each class
C i k ∈ D † , |µ(i) ∩ C i k | = |µ † (i) ∩ C i k |.
For each class C i k ∈ B † , we can use the same argument to show that |µ(i) ∩ C i k | = |µ † (i) ∩ C i k |. This gives us (iii) and (iv). (v) is a consequence of (iv).
⊓ ⊔
NP-completeness of P-Classified Stable Matching
Theorem 16. Suppose that the set of posets P = {P 1 , P 2 , · · · , P k } contains a poset which is not a downward forest. Then it is NP-complete to decide the existence of a stable matching in P-classified stable matching. This NP-completeness holds even if there is no lower bound on the classes.
Our reduction is from one-in-three sat. It is involved and technical, so we just highlight the idea here. As P must contain a poset that has a "V " in it, some institutes use intersecting classes. In this case, even if there is no lower bound on the classes, it is possible that the given instance disallows any stable matching. We make use of this fact to design a special gadget. The main technical difficulty of our reduction lies in that in the most strict case, we can use at most two classes in each institute's classification.
Polyhedral Approach
In this section, we take a polyhedral approach to studying LCSM. We make the simplifying assumption that there is no lower bound. In this scenario, we can use a simpler definition to define a stable matching.
Lemma 17. In LCSM, if there is no lower bound, i.e., given any class C i j , q − (C i j ) = 0, then a stable matching as defined in Definition 2 can be equivalently defined as follows. A feasible matching µ is stable if and only if there is no blocking pair. A pair (i, a) is blocking, given that µ(i) = (a i1 , a i2 , · · · , a ik ),
k ≤ Q(i), if
- i ≻_a µ(a);
- for any class C^i_{at} ∈ a(C(i)), |L^i_{≻a} ∩ µ(i) ∩ C^i_{at}| < q^+(C^i_{at}).
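For illustration, the Python sketch below checks this blocking-pair condition directly on a given matching. The data layout is hypothetical (preference lists ordered best first, classes as named sets), and each applicant's relevant classes are assumed to include a root class whose upper bound equals the institute's overall quota Q(i); it is a sketch of the definition above, not of the paper's algorithm.

def prefers(pref_list, x, y):
    # True if x ranks strictly higher than y on pref_list (y is None if unmatched).
    if x not in pref_list:
        return False
    return y is None or (y in pref_list and pref_list.index(x) < pref_list.index(y))

def is_blocking_pair(i, a, mu_inst, mu_app, inst_pref, app_pref, classes, q_plus):
    if not prefers(app_pref[a], i, mu_app.get(a)):
        return False                              # a does not prefer i to µ(a)
    rank = {b: t for t, b in enumerate(inst_pref[i])}
    if a not in rank:
        return False                              # a is unacceptable to i
    higher = {b for b in mu_inst[i] if rank[b] < rank[a]}   # L^i_{≻a} ∩ µ(i)
    # every class of i containing a must still have room above a
    return all(len(higher & C) < q_plus[(i, name)]
               for name, C in classes[i].items() if a in C)

def is_stable(mu_inst, mu_app, inst_pref, app_pref, classes, q_plus):
    return not any(is_blocking_pair(i, a, mu_inst, mu_app,
                                    inst_pref, app_pref, classes, q_plus)
                   for i in inst_pref for a in inst_pref[i])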
The definition of blocking pairs suggests a generalization of the comb used by Baïou and Balinski [3].
Definition 18. Let Γ = I × A denote the set of acceptable institute-applicant pairs. The shaft S(A_i), based on a feasible tuple A_i of institute i, is defined as
S(A_i) = {(i, a') ∈ Γ : ∀C^i_j ∈ a'(C(i)), |L^i_{≻a'} ∩ A_i ∩ C^i_j| < q^+(C^i_j)}.
The tooth T(i, a) is defined for every (i, a) ∈ Γ as
T(i, a) = {(i', a) ∈ Γ : i' ⪰_a i}.
In words, (i, a ′ ) forms part of the shaft S(A i ), only if the collection of a ′ and all applicants in A i ranking strictly higher than a ′ does not violate the quota of any class in a ′ (C(i)). We often refer to an applicant a ∈ A i as a tooth-applicant.
We associate a |Γ |-vector x µ (or simply x when the context is clear) with a matching µ: x µ ia = 1 if µ(a) = i, otherwise, x µ ia = 0. Suppose thatΓ ⊆ Γ . Then x(Γ ) = (i,a)∈Γ x ia . We define a comb K(i, S(A i )) as the union of the teeth {T (i, a i )} a i ∈A i and the shaft S(A i ).
Lemma 19. Every stable matching solution x satisfies the comb inequality for any comb K(i, S(A i )):
x(K(i, S(A_i))) ≡ x(S(A_i)) + Σ_{a_j ∈ A_i} x(T(i, a_j)\{(i, a_j)}) ≥ |A_i|.
It takes a somewhat involved counting argument to prove this lemma. Here is the intuition about why the comb inequality captures the stability condition of a matching. The value of the tooth x(T(i, a)) reflects the "happiness" of the applicant a ∈ A_i: if x(T(i, a)) = 0, applicant a has reason to shift to institute i. On the other hand, the value collected from the shaft x(S(A_i)) indicates the "happiness" of institute i: whether it is getting enough high-ranking applicants (of the "right" classes). An overall small comb value x(K(i, S(A_i))) thus expresses the likelihood of a blocking group including i and some of the applicants in A_i. Now let K_i denote the set of all combs of institute i. We write down the linear program:
Σ_{i:(i,a)∈Γ} x_{ia} ≤ 1,   ∀a ∈ A      (1)
Σ_{a:(i,a)∈Γ, a∈C^i_j} x_{ia} ≤ q^+(C^i_j),   ∀i ∈ I, ∀C^i_j ∈ C(i)      (2)
x(K(i, S(A_i))) = Σ_{(i,a)∈K(i,S(A_i))} x_{ia} ≥ |A_i|,   ∀K(i, S(A_i)) ∈ K_i, ∀i ∈ I      (3)
x_{ia} ≥ 0,   ∀(i, a) ∈ Γ      (4)
Suppose there is no classification, i.e., Hospitals/Residents problem. Then this LP reduces to the one formulated by Baïou and Balinski [3]. However, it turns out that this polytope is not integral. The example in Figure 2 demonstrates the non-integrality of the polytope. In particular, observe that since µ is applicant-optimal, in all other stable matchings, applicant a 3 can only be matched to i 5 . However, the value x i 1 a 3 = 0.2 > 0 indicates that x is outside of the convex hull of integral stable matchings.
Here we make a critical observation. Suppose that in a certain matching µ φ , applicant a 3 is assigned to i 1 . Then a 2 cannot be assigned to i 1 due to the bound q + (C 1 1 ) (see Constraint (2)). If µ φ is to be stable, then a 2 must be assigned to some institute ranking higher than i 1 on his list (in this example there is none), otherwise, (i, µ φ (i 1 )| a 3 a 2 ) is bound to be a blocking group in µ φ . Thus, the required constraint to avoid this particular counter-example can be written as
x(T (i 1 , a 2 )\{i 1 , a 2 }) ≥ x i 1 a 3 .
We now formalize the above observation. Given any class C^i_j ∈ C(i), we define a class-tuple t^i_j = (a_{i1}, a_{i2}, · · · , a_{i q^+(C^i_j)}). Such a tuple fulfills the following two conditions:
1. t^i_j ⊆ C^i_j;
2. if C^i_j is a non-leaf class, then given any subclass C^i_k of C^i_j, |t^i_j ∩ C^i_k| ≤ q^+(C^i_k).

Fig. 2. An example showing that the polytope determined by Constraints (1)-(4) is not integral. The institute preferences, classifications, and class bounds are: i1: a1 a6 a7 a2 a3, with C^1_1 = {a2, a3}, Q(i1) = 2, q^+(C^1_1) = 1; i2: a4 a7, with Q(i2) = 1; i3: a2 a4, with Q(i3) = 1; i4: a5 a6, with Q(i4) = 1; i5: a3 a5 a7 a1, with C^5_1 = {a3, a5}, Q(i5) = 2, q^+(C^5_1) = 1. Since µ is applicant-optimal, in all other stable matchings, applicant a3 can only be matched to i5. However, the value x_{i1 a3} = 0.2 > 0 indicates that x is outside of the convex hull of integral stable matchings.
Let L^i_{≺t^i_j} denote the set of applicants ranking lower than all applicants in t^i_j, and L^i_{⪰t^i_j} the set of applicants ranking at least as high as the lowest-ranking applicant in t^i_j.
Lemma 20. Every stable matching solution x satisfies the following inequality for any class-tuple t i j :
Σ_{a_{ij} ∈ t^i_j} x(T(i, a_{ij})\{(i, a_{ij})}) ≥ Σ_{a ∈ C^i_j ∩ L^i_{≺t^i_j}} x_{ia}.
As before, it takes a somewhat involved counting argument to prove the lemma, but its basic idea is already portrayed in the above example. Now let T^i_j denote the set of class-tuples in class C^i_j ∈ C(i) and L^i_{≺t^i_j} denote the set of applicants ranking lower than all applicants in t^i_j. We add the following set of constraints.
Σ_{a_{ij} ∈ t^i_j} x(T(i, a_{ij})\{(i, a_{ij})}) ≥ Σ_{a ∈ C^i_j ∩ L^i_{≺t^i_j}} x_{ia},   ∀t^i_j ∈ T^i_j, ∀T^i_j      (5)
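The Python sketch below evaluates Constraint (5) for one class-tuple: the mass that x places on applicants of C^i_j ranking below every member of the tuple must be covered by the spare tooth mass of the tuple's members. Names and data layout (x as a dict, preferences as ordered lists, C_ij as a set) are illustrative assumptions only.

def satisfies_class_tuple_constraint(x, inst_pref, app_pref, i, C_ij, t):
    rank = {b: r for r, b in enumerate(inst_pref[i])}
    worst = max(t, key=lambda b: rank[b])                 # lowest-ranking member of t
    below = [a for a in C_ij if a in rank and rank[a] > rank[worst]]
    rhs = sum(x.get((i, a), 0.0) for a in below)          # mass on C^i_j ∩ L^i_{≺t}
    lhs = 0.0
    for a in t:                                           # spare tooth mass of t's members
        pref = app_pref[a]
        lhs += sum(x.get((ip, a), 0.0) for ip in pref[:pref.index(i)])
    return lhs >= rhs - 1e-9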
Let P f sm denote the set of all solutions satisfying (1)-(5) and P sm the convex hull of all (integral) stable matchings. In this section, our main result is P f sm = P sm . We say (i, a) are matched under x if x ia > 0.
Definition 21. Let x ∈ P_fsm and Ω_i(x) be the set of applicants that are matched to institute i under x. Let Ω_i(x) be composed of a_{i1}, a_{i2}, · · ·, ordered based on the decreasing preference of institute i.
1. Define H_i(x) as a tuple composed of applicants chosen by the following procedure: add a_{ij} greedily unless adding the next applicant into H_i(x) would cause H_i(x) to violate the quota of some class. Equivalently, a_{il} ∉ H_i(x) only if there exists a class C^i_j ∈ a_{il}(C(i)) such that |H_i(x) ∩ {a_{it}}_{t=1}^{l−1}| = q^+(C^i_j).
2. Define E_i(x) as the tuple of applicants a ∈ Ω_i(x) with x(T(i, a)\{(i, a)}) = 0, that is, the applicants for whom i is the most preferred institute to which they are matched under x.

Lemma 22. Given any class C^i_j ∈ C(i), |E_i(x) ∩ C^i_j| ≤ q^+(C^i_j).

Proof. We need to show that given any class C^i_j ∈ C(i), |E_i(x) ∩ C^i_j| ≤ q^+(C^i_j). We proceed by induction on the height of C^i_j in the tree structure of C(i). The base case is a leaf class. If |E_i(x) ∩ C^i_j| > q^+(C^i_j), form a class-tuple by picking the first q^+(C^i_j) applicants in E_i(x) ∩ C^i_j. Then Constraint (5) is violated in such a class-tuple. For the induction step, if |E_i(x) ∩ C^i_j| > q^+(C^i_j), again choose the q^+(C^i_j) highest-ranking applicants in E_i(x) ∩ C^i_j and we claim they form a class-tuple of C^i_j, the reason being that by the induction hypothesis, given any C^i_k ⊂ C^i_j, |E_i(x) ∩ C^i_k| ≤ q^+(C^i_k). Now Constraint (5) is again violated in such a class-tuple.
⊓ ⊔ Lemma 23. Suppose that x ∈ P f sm .
(i) For each institute i ∈ I, we can find two sets U and V of pairwise disjoint classes so that U ∪ V partitions L i and all applicants in Ω i (x)\H i (x) belong to the classes in U . Moreover,
(ia) |H_i(x)| = Σ_{C^i_k ∈ U} q^+(C^i_k) + Σ_{C^i_k ∈ V} |H_i(x) ∩ C^i_k|;
(ib) for each class C^i_k ∈ U, |H_i(x) ∩ C^i_k| = |E_i(x) ∩ C^i_k| = q^+(C^i_k); for each class C^i_k ∈ V and each applicant a ∈ C^i_k, if x_{ia} > 0, then x_{ia} = 1;
(ic) for each class C^i_k ∈ U, Σ_{a ∈ C^i_k} x_{ia} = q^+(C^i_k).
(ii) For every applicant a ∈ H_i(x), x(T(i, a)) = Σ_{i'∈I} x_{i'a} = 1; moreover, given any two institutes i, i' ∈ I, H_i(x) ∩ H_{i'}(x) = ∅.
(iii) |H_i(x)| = |E_i(x)| for all institutes i ∈ I.
(iv) Σ_{a∈A} x_{ia} = |E_i(x)| for all institutes i ∈ I.
Proof. For (i), given any applicant a ∈ Ω i (x)\H i (x), by Definition 21, there exists some class C i j ∈ a(C(i)) for which |H i (x) ∩ C i j | = q + (C i j ). Let B be the set of classes C i j which contain at least one applicant in Ω i (x)\H i (x) and |C i j ∩ H i (x)| = q + (C i j ). Let U := ℜ(B) and choose V in such a way so that U ∪ V partitions L i . Now (ia) is a consequence of counting. We will prove (ib)(ic) afterwards.
For (ii), by definition of H i (x), none of the applicants in Ω i (x)\H i (x) contributes to the shaft x(S(H i (x))). As a result, for Constraint (3) to hold for the comb K(i, S(H i (x))), every tooth-applicant a ∈ H i (x) must contribute at least 1, and indeed, by Constraint (1), exactly 1. So we have the first statement of (ii). The second statement holds because it cannot happen that x(T (i, a)) = x(T (i ′ , a)) = 1, given that x ia > 0 and x i ′ a > 0.
For (iii), By Definition 21, all sets E i (x) are disjoint; thus, every applicant who is matched under x belongs to exactly one E i (x) and at most one H i (x) by (ii). Therefore, i∈I |E i (x)| ≥ i∈I |H i (x)| and we just need to show that for each institute i, |E i (x)| ≤ |H i (x)|, and this follows by using (ia):
|H_i(x)| = Σ_{C^i_k ∈ U} q^+(C^i_k) + Σ_{C^i_k ∈ V} |H_i(x) ∩ C^i_k| ≥ Σ_{C^i_k ∈ U} |E_i(x) ∩ C^i_k| + Σ_{C^i_k ∈ V} |E_i(x) ∩ C^i_k| = |E_i(x)|,      (6)
where the inequality follows from Lemma 22 and the fact all applicants in Ω i (x)\H i (x) are in classes in U . So this establishes (iii). Moreover, as Inequality (6) must hold with equality throughout, for each class C i k ∈ V , if applicant a ∈ C i k is matched to institute i under x, he must belong to both H i (x) and E i (x), implying x ia = 1; given any class
C i k ∈ U , |H i (x) ∩ C i k | = |E i (x) ∩ C i k | = q + (C i k ). So we have (ib).
For (iv), consider the comb K(i, S(E i (x))). By definition, x(T (i, a)\{(i, a)}) = 0 for each applicant a ∈ E i (x). So
x(K(i, S(E_i(x)))) = x(S(E_i(x))) = Σ_{C^i_k ∈ V} |E_i(x) ∩ C^i_k| + Σ_{C^i_k ∈ U} Σ_{a' ∈ C^i_k, (i,a') ∈ S(E_i(x))} x_{ia'} ≤ Σ_{C^i_k ∈ V} |E_i(x) ∩ C^i_k| + Σ_{C^i_k ∈ U} q^+(C^i_k) = |E_i(x)|,
where the inequality follows from Constraint (2) and the rest can be deduced from (ib). By Constraint (3), the above inequality must hold with equality. So for each class
C^i_k ∈ U, Σ_{a' ∈ C^i_k, (i,a') ∈ S(E_i(x))} x_{ia'} = Σ_{a' ∈ C^i_k} x_{ia'} = q^+(C^i_k)
, giving us (ic) and implying that there is no applicant in C i k ∈ U who is matched to institute i under x ranking lower than all applicants in E i (x) ∩ C i k . The proof of (iv) follows by
Σ_{a∈A} x_{ia} = Σ_{C^i_k ∈ V} Σ_{a ∈ C^i_k} x_{ia} + Σ_{C^i_k ∈ U} Σ_{a ∈ C^i_k} x_{ia} = Σ_{C^i_k ∈ V} |E_i(x) ∩ C^i_k| + Σ_{C^i_k ∈ U} q^+(C^i_k) = |E_i(x)|.
⊓ ⊔
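As an illustration of Definition 21, the greedy construction of H_i(x) can be sketched in a few lines of Python; x is a dict keyed by (institute, applicant), classes are named sets with upper bounds, and all names are hypothetical.

def greedy_H(x, inst_pref, classes, q_plus, i):
    omega = [a for a in inst_pref[i] if x.get((i, a), 0.0) > 0]   # Ω_i(x), best first
    H = []
    for a in omega:
        # keep a unless adding it would overfill some class containing a
        ok = all(sum(1 for b in H if b in C) < q_plus[(i, name)]
                 for name, C in classes[i].items() if a in C)
        if ok:
            H.append(a)
    return H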
Packing Algorithm
We now introduce a packing algorithm to establish the integrality of the polytope. Our algorithm is generalized from that proposed by Sethuraman, Teo, and Qian [22]. Given x ∈ P f sm , for each institute i, we create |E i (x)| "bins," each of size (height) 1; each bin is indexed by (i, j), where 1 ≤ j ≤ |E i (x)|. Each x ia > 0 is an "item" to be packed into the bins. Bins are filled from the bottom to the top. When the context is clear, we often refer to those items x ia as simply applicants; if applicant a ∈ C i j , then the item x ia is said to belong to the class C i j . In Phase 0, each institute i puts the items x ia , if a ∈ H i (x), into each of its |E i (x)| bins. In the following phase, t = 1, 2, · · · , our algorithm proceeds by first finding out the set L t of bins with maximum available space;
then assigning each of the bins in L t one item.
The assignment in each phase proceeds by steps, indexed by l = 1, 2, · · · , |L t |. The order of the bins in L t to be examined does not matter. How the institute i chooses the items to be put into its bins is the crucial part in which our algorithm differs from that of Sethuraman, Teo, and Qian. We maintain the following invariant.
Invariant C: The collection of the least preferred items in the |E i (x)| bins (e.g., the items currently on top of institute i's bins) should respect of the quotas of the classes in C(i).
Subject to this invariant, institute i chooses the best remaining item and adds it into the bin (i, j), which has the maximum available space in the current phase. This unavoidably raises another issue: how can we be sure that there is at least one remaining item for institute i to put into the bin (i, j) without violating Invariant C? We will address this issue in our proof.
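A condensed and purely illustrative Python sketch of the packing procedure follows. It assumes that Phase 0 has already placed the items of H_i(x) into the bins, that remaining[i] lists institute i's unpacked items in preference order, and that x ∈ P_fsm so that, by Theorem 24(i), a feasible item always exists; the data structures are hypothetical.

EPS = 1e-12

def free_space(stack):
    return 1.0 - sum(size for _, size in stack)

def respects_invariant_C(tops, classes_i, q_plus_i):
    # Invariant C: the items on top of the institute's bins respect all class quotas.
    return all(sum(1 for a in tops if a in members) <= q_plus_i[name]
               for name, members in classes_i.items())

def pack(x, bins, classes, q_plus, remaining):
    while any(remaining.values()):
        # bins with the maximum available space form the set L_t of this phase
        gap = max(free_space(b) for stacks in bins.values() for b in stacks)
        progressed = False
        for i, stacks in bins.items():
            bounds = {name: q_plus[(i, name)] for name in classes[i]}
            for b in stacks:
                if free_space(b) < gap - EPS or not remaining[i]:
                    continue
                other_tops = [s[-1][0] for s in stacks if s and s is not b]
                # institute i adds its best remaining item that keeps Invariant C
                for a in remaining[i]:
                    if respects_invariant_C(other_tops + [a], classes[i], bounds):
                        b.append((a, x[(i, a)]))
                        remaining[i].remove(a)
                        progressed = True
                        break
        if not progressed:      # guard against malformed (non-P_fsm) input
            break
    return bins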
Theorem 24. Let x ∈ P f sm . Let M i,j be the set of applicants assigned to bin (i, j) at the end of any step of the packing procedure and a i,j be the lowest-ranking applicant of institute i in bin (i, j) (implying x ia i,j is on top of bin (i, j)). Then (i) In any step, suppose that the algorithm is examining bin (i, j). Then institute i can find at least one item in its remaining items to add into bin (i, j) without violating Invariant C;
(ii) For all bins (i, j), x(M_{i,j}\{a_{i,j}}) + x(T(i, a_{i,j})) = x(M_{i,j}) + x(T(i, a_{i,j})\{(i, a_{i,j})}) = 1;
(iii) At the end of any step, institute i can organize a comb K(i, S(A_i)), where A_i is composed of applicants in {a_{i,j'}}_{j'=1}^{|E_i(x)|}, so that x(K(i, S(A_i))) = Σ_{j'=1}^{|E_i(x)|} x(M_{i,j'}) + Σ_{j'=1}^{|E_i(x)|} x(T(i, a_{i,j'})\{(i, a_{i,j'})}) = |E_i(x)|;
(iv) At the end of any step, an item x_{ia} is not put into institute i's bins if and only if there exists a class C^i_{at} ∈ a(C(i)) so that |{a_{i,j'}}_{j'=1}^{|E_i(x)|} ∩ C^i_{at} ∩ L^i_{≻a}| = q^+(C^i_{at}).
(v) If x_{ia} is packed and x_{i'a} is not, then i' ≻_a i;
(vi) At the end of any phase, the a i,j in all bins are distinct. In particular, for any applicant a who is matched under x, there exists some bin (i, j) such that a = a i,j .
Proof. We first assume that (ii) holds and prove (i). Observe that (ii) implies that given any applicant a ∈ E i (x), its corresponding item x ia , if already put into a bin, must be on its top and fills it completely. Since (i, j) currently has available space, at least one applicant in E i (x) is not in institute i's bins yet. We claim that there exists at least one remaining applicant in E i (x) that can be added into bin (i, j). Suppose not. Let the set of applicants in E i (x) that are not put into i's bins be G. Given any applicant a ∈ G, there must exist some class
C^i_k ∈ a(C(i)) for which |∪_{1≤j'≤|E_i(x)|, j'≠j} a_{i,j'} ∩ C^i_k| = q^+(C^i_k). Let B be the set of classes C^i_k that contain at least one applicant in G and |∪_{1≤j'≤|E_i(x)|, j'≠j} a_{i,j'} ∩ C^i_k| = q^+(C^i_k). Let G' be (E_i(x)\G)\∪_{C^i_k ∈ ℜ(B)} C^i_k, the subset of applicants in E_i(x)
that are already put into the bins but not belonging to any class in ℜ(B). Note that none of the applicants in G ′ can be in the bin (i, j). Thus, by counting the number of the bins minus (i, j), we have
|E_i(x)| − 1 ≥ |G'| + Σ_{C^i_k ∈ ℜ(B)} |∪_{j'=1, j'≠j}^{|E_i(x)|} a_{i,j'} ∩ C^i_k| = |G'| + Σ_{C^i_k ∈ ℜ(B)} q^+(C^i_k)
Note that all applicants in E i (x)\G ′ are in some class in ℜ(B) (either they are already put into the bins or not). Then by the pigeonhole principle, there is at least one class C i k ∈ ℜ(B) for which
|(E i (x)\G ′ ) ∩ C i k | > q + (C i k ), contradicting Lemma 22.
We now prove (ii)-(vi) by induction on the number of phases. In the beginning, (ii)(v)(vi) holds by Lemma 23(ii)(iii). (iii)(iv) hold by setting A i := H i (x) and observation Definition 21 and Lemma 23(ii).
Suppose that the theorem holds up to Phase t. Let α be the maximum available space in Phase t + 1. Suppose that the algorithm is examining bin (i, j) and institute i chooses item x ia to be put into this bin. From (vi) of the induction hypothesis, applicant a is on top of another bin (i ′ , j ′ ), where i ′ = i, in the beginning of phase t + 1. Then by (ii)(v) of the induction hypothesis,
x(T(i, a)) ≤ x(T(i', a)) − x_{i'a} = 1 − x(M_{i',j'}) ≤ α,      (7)
where the last inequality follows from our assumption that in Phase t + 1 the maximum available space is α. Note also that
if x(T(i, a)) = α, then (i', j') ∈ L_{t+1} (bin (i', j') is also examined in Phase t + 1).      (8)
Assume that Ā_i is the tuple composed of the applicants in {a_{i,j'}}_{j'=1}^{|E_i(x)|} (the current bin tops), and for our induction step let A_i be Ā_i with a_{i,j} replaced by a.
We first prove (iv). Since x_{ia} is not put into the bins before this step, by (iv) of the induction hypothesis, there exists some class C^i_{al} ∈ a(C(i)) for which |Ā_i ∩ C^i_{al} ∩ L^i_{≻a}| = q^+(C^i_{al}).
Let C i al be the smallest such class. Since x ia is allowed to put on top of x ia i,j , a ij ≻ i a and a ij ∈ C i al , otherwise, Invariant C regarding q + (C i al ) is violated. Now we show that all other items x ia ′ fulfill the condition stated in (iv). There are two cases.
-Suppose that x ia ′ is not put into the bins yet.
• Suppose that a i,j ≻ i a ′ ≻ i a. We claim that it cannot happen that for all classes C i a ′ t ∈ a ′ (C(i)),
|A i ∩ C i a ′ t ∩ L i ≻a ′ | < q + (C i a ′ t )
, otherwise, A i | a a ′ is still feasible, in which case institute i would have chosen x ia ′ , instead of x ia to put into bin (i, j), a contradiction.
• Suppose that a i,j ≻ i a ≻ i a ′ . By (iv) of the induction hypothesis, there exists a class C i a ′ l ′ ∈ a ′ (C(i)) for which
|A i ∩ C i a ′ l ′ ∩ L i ≻a ′ | = q + (C i a ′ l ′ ). If C i a ′ l ′ ⊂ C i al , it is easy to see that |A i ∩ C i a ′ l ′ ∩ L i ≻a ′ | = q + (C i a ′ l ′ ); if C i a ′ l ′ ⊂ C i al , then C i al ∈ a ′ (C(i)) and we have |A i ∩ C i al ∩ L i ≻a ′ | = q + (C i al )
. In both situations, the condition of (iv) regarding x ia ′ is satisfied.
-Suppose that x ia ′ is already put into the bins. It is trivial if a ′ ≻ i a, so assume that a ≻ i a ′ . We claim that none of the classes C i a ′ t ∈ a ′ (C(i)) can be a subclass of C i al or C i al itself. Otherwise, C i al ∈ a ′ (C(i)), and we have q
+ (C i al ) = |A i ∩ C i al ∩ L i ≻a | ≥ |A i ∩ C i al ∩ L i ≻a ′ |, a contradiction to (iv) of the induction hypothesis. Now since for every class C i a ′ t ∈ a ′ (C(i)), we have C i a ′ t ⊆ C i al , we have |A i ∩ C i a ′ t ∩ L i ≻a ′ | = |A i ∩ C i a ′ t ∩ L i ≻a ′ | < q + (C i a ′ t ),
where the strict inequality is due to the induction hypothesis.
We notice that the quantity Σ_{j'=1}^{|E_i(x)|} x(M_{i,j'}) is exactly the value of the shaft x(S(Ā_i)) (before x_{ia} is added) or x(S(A_i)) (after x_{ia} is added), by observing (iv). Below let x(M̄_{i,j}) and x(M_{i,j}) denote the total size of the items in bin (i, j) before and after x_{ia} is added into it, so x(M_{i,j}) = x(M̄_{i,j}) + x_{ia}. Now we can derive the following:
x(K(i, S(A_i))) = x(S(A_i)) + x(T(i, a)\{(i, a)}) + Σ_{j'=1, j'≠j}^{|E_i(x)|} x(T(i, a_{i,j'})\{(i, a_{i,j'})})
= x(M̄_{i,j}) + x_{ia} + x(T(i, a)\{(i, a)}) + Σ_{j'=1, j'≠j}^{|E_i(x)|} (x(M_{i,j'}) + x(T(i, a_{i,j'})\{(i, a_{i,j'})}))
= x(M̄_{i,j}) + x(T(i, a)) + |E_i(x)| − 1      (by (ii) of the induction hypothesis)
≥ |E_i(x)|      (by Constraint (3))
For the above inequality to hold,
x(M̄_{i,j}) + x(T(i, a)) ≥ 1.      (9)
Since x(M̄_{i,j}) = 1 − α and x(T(i, a)) ≤ α by Inequality (7), Inequality (9) must hold with equality, implying that x(K(i, S(A_i))) = |E_i(x)|, giving us (iii).
Since institute i puts x ia into bin (i, j), the "new" M i,j and the "new" a i,j (=a) satisfies
x(M i,j ) + x(T (i, a)\{(i, a)}) = 1.
This establishes (ii). (v) follows because Inequality (7) must hold with equality throughout. Therefore, there is no institute i ′′ which ranks strictly between i and i ′ and x i ′′ a > 0.
Finally for (vi), note that x(T (i, a)) = α if the item x ia is put into some bin in Phase t+1. All such items are the least preferred items in their respective "old" bins (immediately before Phase t + 1), it means the items on top of the newly-packed bins are still distinct. Moreover, from (8), if a bin (i, j) is not examined in Phase t + 1, then its least preferred applicant cannot be packed in phase t + 1 either.
⊓ ⊔
We define an assignment µ α based on a number α ∈ [0, 1) as follows. Assume that there is a line of height α "cutting through" all the bins horizontally. If an item x ia whose position in i's bins intersects α, applicant a is assigned to institute i. In the case this cutting line of height α intersects two items in the same bin, we choose the item occupying the higher position. More precisely:
Given α ∈ [0, 1), for each institute i ∈ I, we define an assignment as follows: µ α (i) = {a :
1 − x(T (i, a)) ≤ α < 1 − x(T (i, a)) + x ia }.
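Reading off µ_α amounts to applying the displayed rule directly; a minimal Python sketch is shown below, with x as a dict keyed by (institute, applicant) and applicant preference lists as ordered lists (both hypothetical representations).

def mu_alpha(x, app_pref, alpha):
    assignment = {}
    for (i, a), val in x.items():
        if val <= 0:
            continue
        pref = app_pref[a]
        tooth = sum(x.get((ip, a), 0.0) for ip in pref[:pref.index(i) + 1])  # x(T(i, a))
        if 1.0 - tooth <= alpha < 1.0 - tooth + val:
            assignment[a] = i
    return assignment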
Theorem 25. The polytope determined by Constraints (1)-(5) is integral.
Proof. We generate uniformly at random a number α ∈ [0, 1) and use it to define an assignment µ α . To facilitate the discussion, we choose the largest α ′ ≤ α so that µ α ′ = µ α . Intuitively, this can be regarded as lowering the cutting line from α to α ′ without modifying the assignment, and 1 − α ′ is exactly the maximum available space in the beginning of a certain phase l during the execution of our packing algorithm. Note that the assignment µ α is then equivalent to giving those applicants (items) on top of institute i's bins to i at the end of phase l.
We now argue that µ α is a stable matching. First, it is a matching by Theorem 24(vi). The matching respects the quota of all classes since Invariant C is maintained. What remains to be argued is the stability of µ α . Suppose, for a contradiction, (i, a φ ) is a blocking pair. We consider the possible cases.
-Suppose that x ia φ > 0 and x ia φ is not put into the bins yet at the end of Phase l. Then by Theorem 24(iv) and the definition of blocking pairs, (i, a φ ) cannot block µ α . -Suppose that x ia φ > 0 and x ia φ is already put into the bins at the end of Phase l. If µ α (a φ ) = i, there is nothing to prove. So assume µ α (a φ ) = i and this means that the item x ia φ is "buried" under some other item on top of some of i's bins at the end of Phase l. Then by Theorem 24(v), a φ is assigned to some other institute ranking higher than i, contradicting the assumption that (i, a φ ) is a blocking pair. -Suppose that x ia φ = 0. There are two subcases.
• Suppose that for each of the classes C i a φ t ∈ a φ (C(i)), |µ α (i) ∩ C i a φ t | < q + (C i a φ t ). Then we can form a new feasible tuple µ α (i)|a φ . It can be inferred from the definition of the shaft that x(S(µ α (i)|a φ )) ≤ x(S(µ α (i)). Moreover, by Theorem 24(iii), we have x(K(i, S(µ α (i))) = |E i (x)|. Now by Constraint (3),
|E_i(x)| + 1 ≤ x(K(i, S(µ_α(i)|a_φ))) ≤ x(S(µ_α(i))) + x(T(i, a_φ)\{(i, a_φ)}) + Σ_{a ∈ µ_α(i)} x(T(i, a)\{(i, a)}) = x(K(i, S(µ_α(i)))) + x(T(i, a_φ)\{(i, a_φ)}) = |E_i(x)| + x(T(i, a_φ)\{(i, a_φ)}).
As a result, x(T(i, a_φ)\{(i, a_φ)}) = 1, implying that µ_α(a_φ) ≻_{a_φ} i, a contradiction to the assumption that (i, a_φ) blocks µ_α.
• Suppose that there exists a class C^i_{a_φ l_φ} ∈ a_φ(C(i)) for which |µ_α(i) ∩ C^i_{a_φ l_φ}| = q^+(C^i_{a_φ l_φ}). Let C^i_{a_φ l_φ} be the smallest such class. By the definition of blocking pairs, there must exist an applicant a† ∈ µ_α(i) ∩ C^i_{a_φ l_φ} who ranks lower than a_φ. Choose a† to be the lowest-ranking such applicant in µ_α(i). We make the following critical observation:
x(S(µ_α(i)| a† a_φ)) ≤ x(S(µ_α(i))) − x_{ia†}.      (10)
To see this, we first argue that given an item x_{ia} > 0, if it does not contribute to the shaft S(µ_α(i)), then it cannot contribute to the shaft S(µ_α(i)| a† a_φ) either. It is trivial if a ≻_i a†. So assume that a† ≻_i a. First suppose that a ∉ C^i_{a_φ l_φ}. Then given any class C^i_{at} ∈ a(C(i)),
|µ α (i) ∩ C i at ∩ L i ≻a | = |µ α (i)| a † a φ ∩ C i at ∩ L i ≻a |,
and Theorem 24(iv) states that there is a class
C i al ∈ a(C(i)) such that |µ α (i) ∩ C i al ∩ L i ≻a | = q + (C i al ). Secondly suppose that a ∈ C i a φ l φ . Observe that q + (C i a φ l φ ) = |µ α (i)| a † a φ ∩ C i a φ l φ ∩ L i ≻a † | = |µ φ (i)| a † a φ ∩ C i a φ l φ ∩ L i ≻a | (
the first equality follows from the choice of a†). In both cases, we conclude that x_{ia} cannot contribute to the shaft S(µ_α(i)| a† a_φ). The term x_{ia†} does not contribute to the shaft S(µ_α(i)| a† a_φ) by the same argument. Now using Constraint (3), Theorem 24(iii), and Inequality (10), we have
|E_i(x)| ≤ x(K(i, S(µ_α(i)| a† a_φ))) ≤ x(S(µ_α(i))) − x_{ia†} + x(T(i, a_φ)\{(i, a_φ)}) + Σ_{a ∈ µ_α(i)\{a†}} x(T(i, a)\{(i, a)}) = |E_i(x)| − x(T(i, a†)) + x(T(i, a_φ)).
(Note that x_{ia_φ} = 0.)
Therefore,
x(T (i, a φ )) ≥ x(T (i, a † )) ≥ 1 − α ′ ≥ 1 − α.
So µ_α(a_φ) ≻_{a_φ} i, again a contradiction to the assumption that (i, a_φ) blocks µ_α. So we have established that the generated assignment µ_α is a stable matching. Now the remaining proof is the same as in [23]. Assume that µ_α(i, a) = 1 if and only if applicant a is assigned to institute i under µ_α. Then Exp[µ_α(i, a)] = x_{ia}, that is, x_{ia} = ∫_0^1 µ_α(i, a) dα, and x can be written as a convex combination of µ_α as α varies over the interval [0, 1). The integrality of the polytope thus follows.
⊓ ⊔
Optimal Stable Matching
Since our polytope is integral, we can write suitable objective functions to target for various optimal stable matchings using Ellipsoid algorithm [10]. As the proposed LP has an exponential number of constraints, we also design a separation oracle to get a polynomial time algorithm. The basic idea of our oracle is based on dynamic programming.
Median-Choice Stable Matching
An application of our polyhedral result is the following.
Theorem 26. Suppose that in the given instance, all classifications are laminar families and there is no lower bound, q − (C i j ) = 0 for any class C i j . Let µ 1 , µ 2 , · · · , µ k be stable matchings. If we assign every applicant to his median choice among all the k matchings, the outcome is a stable matching.
Proof. Let x µt be the solution based on µ t for any 1 ≤ t ≤ k and apply our packing algorithm on the fractional solution x = P k t=1 xµ t k . Then let α = 0.5 and µ 0.5 be the stable matching resulted from the cutting line of height α = 0.5. We make the following observation based on Theorem 24:
Suppose that applicant a is matched under x and those institutes with which he is matched are i 1 , i 2 , · · · , i k ′ , ordered based on their rankings on a's preference list. Assume that he is matched to i t n t times among the k given stable matchings. At the termination of the packing algorithm, each of the items x i l a , 1 ≤ l ≤ k ′ , appears in institute i l 's bins and its position is from l−1 t=1 nt k to l t=1 nt k . Now µ 0.5 gives every applicant his median choice follows easily from the above observation.
⊓ ⊔
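A small Python sketch of the median-choice construction follows; matchings are represented as dicts from applicants to institutes (or None), which is an illustrative assumption, and the index chosen below corresponds to cutting at α = 0.5 in the proof above.

def median_choice(matchings, app_pref):
    result = {}
    for a, pref in app_pref.items():
        assigned = [mu[a] for mu in matchings if mu.get(a) is not None]
        if not assigned:                 # unmatched everywhere (rural hospitals theorem)
            result[a] = None
            continue
        assigned.sort(key=pref.index)    # order a's assignments by his preference
        result[a] = assigned[len(assigned) // 2]   # the assignment containing α = 0.5
    return result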
Using similar ideas, we can show that an applicant-optimal stable matching must be institute-(lexicographical)-pessimal and similarly an applicant-pessimal stable matching must be institute-(lexicographical)-optimal: by taking x as the average of all stable matchings and consider the two matching µ ǫ and µ 1−ǫ with arbitrary small ǫ > 0. Hence, it is tempting to conjecture that the median choice stable matching is also a lexicographical median outcome for the institutes. Somehow surprisingly, it turns out not to be the case and a counter-example can be found in the appendix.
Polytope for Many-to-Many "Unclassified" Stable Matching
In the many-to-many stable matching problem, each entity e ∈ I ∪ A has a quota Q(e) ∈ Z + and a preference over a subset of the other side. A matching µ is feasible if given any entity e ∈ I ∪ A, (1) |µ(e)| ≤ Q(e), and (2) µ(e) is a subset of the entities on e ′ s preference list. A feasible matching µ is stable if there is no blocking pair (i, a), which means that i prefers a to one of the assignments µ(i), or if |µ(i)| < Q(i) and a ∈ µ(i); and similarly a prefers i to one of his assignments µ(a), or if |µ(a)| < Q(a) and i ∈ µ(a).
We now transform the problem into (many-to-one) LCSM. For each applicant a ∈ A, we create Q(a) copies, each of which retains the original preference of a. All institutes replace the applicants by their clones on their lists. To break ties, all institutes rank the clones of the same applicant in an arbitrary but fixed manner. Finally, each institute treats the clones of the same applicant as a class with upper bound 1. It can be shown that the stable matchings in the original instance and in the transformed LCSM instance have a one-one correspondence. Thus, we can use Constraints (1)- (5) to describe the former 6 .
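The cloning transformation can be sketched as follows (Python, with hypothetical dictionaries for preferences and quotas): each applicant a is split into Q(a) clones, institutes rank the clones of an applicant consecutively in a fixed order, and each clone set becomes a class with upper bound 1.

def to_lcsm(inst_pref, app_pref, Q_inst, Q_app):
    clones = {a: [f"{a}#{t}" for t in range(Q_app[a])] for a in app_pref}
    new_app_pref = {c: list(app_pref[a]) for a in app_pref for c in clones[a]}
    new_inst_pref, new_classes, q_plus = {}, {}, {}
    for i, pref in inst_pref.items():
        new_inst_pref[i] = [c for a in pref for c in clones[a]]   # fixed tie-breaking
        new_classes[i] = {f"clones({a})": set(clones[a]) for a in pref}
        for a in pref:
            q_plus[(i, f"clones({a})")] = 1                        # at most one clone per applicant
    return new_inst_pref, new_app_pref, new_classes, q_plus, Q_inst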
Conclusion and Future Work
In this paper, we introduce classified stable matching and present a dichotomy theorem to draw a line between its polynomial solvability and NP-completeness. We also study the problem using the polyhedral approach and propose polynomial time algorithms to obtain various optimal matchings.
We choose the terms "institutes" and "applicants" in our problem definition, instead of the more conventional hospitals and residents, for a reason. We are aware that in real-world academics, many departments not only have ranking over their job candidates but also classify them based on their research areas. When they make their hiring decision, they have to take the quota of the classes into consideration. And in fact, we were originally motivated by this common practice.
classified stable matching has happened in real world. In a hospitals/residents matching program in Scotland, certain hospitals declared that they did not want more than one female physician. Roth [16] proposed an algorithm to show that stable matchings always exist.
There are quite a few questions that remain open. The obvious one would be to write an LP to describe LCSM with both upper bounds and lower bounds. Even though we can obtain various optimal stable matchings, the Ellipsoid algorithm can be inefficient. It would be nicer to have fast combinatorial algorithms. The rotation structure of Gusfield and Irving [11] seems the way to go.
A An Example for Section 2.2
In contrast to the generalized rural hospitals theorem in LCSM, if some institutes use intersecting classes, stable matching sizes may differ. Figure 3 is an example.
Fig. 3. An instance with intersecting classes in which stable matchings have different sizes. Institute preferences, classifications, and quotas: i1: a1 a2 a3, with C^1_1 = {a1, a2}, C^1_2 = {a1, a3}, Q(i1) = 2, q^+(C^1_1) = 1, q^+(C^1_2) = 1; i2: a2 a1 a3 a4, with C^2_1 = {a2, a1}, C^2_2 = {a2, a3}, C^2_3 = {a2, a4}, Q(i2) = 2, q^+(C^2_1) = 1, q^+(C^2_2) = 1, q^+(C^2_3) = 1.
B Missing Proofs of Section 3
In this section, we prove Theorem 16. We assume that the set of posets P = {P 1 , P 2 , · · · , P k } contains a poset which is not a downward forest. Moreover, we assume that there is no lower bound on the classes. Without loss of generality, we assume that P 1 is not a downward forest. Such a poset must have a "V." By definition, there exists institute i whose class inclusion poset P (i) is isomorphic to P 1 . This implies that institute i must have two intersecting classes in C(i). In the following, we will present a reduction in which all institutes use at most two classes (that can be intersecting). It is straightforward to use some dummy institutes and applicants to "pad" our reduction so that every poset P j ∈ P is isomorphic to some class inclusion poset of the institutes in the derived instance. Our reduction is from one-in-three-sat. We will use an instance in which there is no negative literal. (NP-completeness still holds under this restriction [9].)
The overall goal is to design a reduction so that the derived P-classified stable matching instance allows a stable matching if and only if the given instance φ = c 1 ∧ c 2 ∧ · · · ∧ c k is satisfiable. We will build a set of clause gadgets to represent each clause c j . For every pair of literals which belong to the same clause, we create a literal-pair gadget. Such a gadget will guarantee that at most one literal it represents can be "activated" (set to TRUE). The clause gadget interacts with the literalpair gadgets in such a way that if the clause is to be satisfied, exactly one literal it contains can be activated.
Literal-Pair Gadget Suppose that x j i and x j i ′ both belong to the same clause c j . We create a gadget Υ j i,i ′ composed of four applicants {a j i,t } 2 t=1 ∪ {a j i ′ ,t } 2 t=1 and two institutes {I j i , I j i ′ } whose preferences and classifications are summarized below.
- a^j_{i,1}: I^j_i ≻ Γ(a^j_{i,1}) ≻ I^j_{i'}
- a^j_{i,2}: I^j_{i'} ≻ I^j_i
- a^j_{i',1}: I^j_i ≻ Γ(a^j_{i',1}) ≻ I^j_{i'}
- a^j_{i',2}: I^j_{i'} ≻ I^j_i
- I^j_i: a^j_{i,2} ≻ a^j_{i,1} ≻ a^j_{i',2} ≻ a^j_{i',1} ≻ Ψ(I^j_i), with classes C^{I^j_i}_1 = {a^j_{i,1}, a^j_{i,2}}, C^{I^j_i}_2 = {a^j_{i,1}, a^j_{i',1}}, Q(I^j_i) = 2, q^+(C^{I^j_i}_1) = 1, q^+(C^{I^j_i}_2) = 1
- I^j_{i'}: a^j_{i,1} ≻ a^j_{i,2} ≻ a^j_{i',1} ≻ a^j_{i',2}, with class C^{I^j_{i'}}_1 = {a^j_{i,1}, a^j_{i,2}}, Q(I^j_{i'}) = 2, q^+(C^{I^j_{i'}}_1) = 1
We postpone the explanation of the Γ and Ψ functions for the time being. We first make the following claim.
Claim B: Suppose that in a stable matching µ, the only possible assignments for
{a j i,1 , a j i,2 , a j i ′ ,1 , a j i ′ ,2 } are {I j i , I j i ′ }.
Then there can only be three possible outcomes in µ.
1. µ(a^j_{i,1}) = I^j_i, µ(a^j_{i,2}) = I^j_{i'}, µ(a^j_{i',1}) = I^j_{i'}, µ(a^j_{i',2}) = I^j_i. (In this case, we say x_i is activated while x_{i'} remains deactivated.)
2. µ(a^j_{i,1}) = I^j_{i'}, µ(a^j_{i,2}) = I^j_i, µ(a^j_{i',1}) = I^j_i, µ(a^j_{i',2}) = I^j_{i'}. (In this case, we say x_{i'} is activated while x_i remains deactivated.)
3. µ(a^j_{i,1}) = I^j_{i'}, µ(a^j_{i,2}) = I^j_i, µ(a^j_{i',1}) = I^j_{i'}, µ(a^j_{i',2}) = I^j_i.
The remaining assignment µ(a^j_{i,1}) = I^j_i, µ(a^j_{i,2}) = I^j_{i'}, µ(a^j_{i',1}) = I^j_i, µ(a^j_{i',2}) = I^j_{i'} will not happen due to the quota q^+(C^{I^j_i}_2)
. This case corresponds to the situation that x i and x i ′ are both activated and is what we want to avoid.
We now explain how to realize the supposition in Claim B about the fixed potential assignments for {a j i,t } 2 t=1 ∪ {a j i ′ ,t } 2 t=1 in a stable matching. It can be easily checked that if a j i,1 is matched to some institute in Γ (a j i,1 ), or either of {a j i,1 , a j i,2 } is unmatched; or if either of {a j i ′ ,1 , a j i ′ ,2 } is unmatched, then there must exist a blocking group involving a subset of
{I j i , I j i ′ , {a j i,t } 2 t=1 , {a j i ′ ,t } 2 t=1 }.
However, the following matching µ φ can happen in which a j i ′ ,1 is matched to some institute in Γ (a j i ′ ,1 ) but there is no blocking group : µ φ (a j i,1 ) = I j i , µ φ (a j i,2 ) = µ φ (a j i ′ ,2 ) = I j i ′ , µ φ (a j i ′ ,1 ) ∈ Γ (a j i ′ ,1 ). 7 To prevent the above scenario from happening (i.e., we want µ φ to be unstable), we introduce another gadget Υ j i , associated with I j i , to guarantee a blocking group will appear. We now list the preferences and classifications of the members of Υ j i below.
- a^j_{i,1}: I^j_{i,4} ≻ I^j_{i,1} ≻ I^j_{i,3} ≻ I^j_{i,2}
- a^j_{i,2}: I^j_{i,3} ≻ I^j_{i,4} ≻ I^j_{i,2} ≻ I^j_{i,1}
- a^j_{i,3}: I^j_{i,4} ≻ I^j_{i,3} ≻ I^j_{i,1} ≻ I^j_{i,2}
- a^j_{i,4}: I^j_{i,4} ≻ I^j_{i,1} ≻ I^j_{i,2} ≻ I^j_{i,3}
- a^j_{i,5}: I^j_{i,2} ≻ I^j_{i,4} ≻ I^j_{i,3} ≻ I^j_{i,1}
- a^j_{i,6}: I^j_{i,2} ≻ I^j_{i,4} ≻ I^j_{i,3} ≻ I^j_{i,1}
- I^j_{i,1}: a^j_{i,5} ≻ a^j_{i,2} ≻ a^j_{i,4} ≻ a^j_{i,6} ≻ a^j_{i,3} ≻ a^j_{i,1}, with Q(I^j_{i,1}) = 2
- I^j_{i,2}: a^j_{i,4} ≻ a^j_{i,6} ≻ a^j_{i,2} ≻ a^j_{i,3} ≻ a^j_{i,1} ≻ a^j_{i,5}, with C^{I^j_{i,2}}_1 = {a^j_{i,1}, a^j_{i,2}, a^j_{i,3}}, C^{I^j_{i,2}}_2 = {a^j_{i,3}, a^j_{i,4}, a^j_{i,5}}, Q(I^j_{i,2}) = 2, q^+(C^{I^j_{i,2}}_1) = 1, q^+(C^{I^j_{i,2}}_2) = 1
- I^j_{i,3}: a^j_{i,4} ≻ a^j_{i,5} ≻ a^j_{i,6} ≻ a^j_{i,3} ≻ a^j_{i,1} ≻ a^j_{i,2}, with C^{I^j_{i,3}}_1 = {a^j_{i,1}, a^j_{i,2}, a^j_{i,3}}, C^{I^j_{i,3}}_2 = {a^j_{i,3}, a^j_{i,4}, a^j_{i,5}}, Q(I^j_{i,3}) = 2, q^+(C^{I^j_{i,3}}_1) = 1, q^+(C^{I^j_{i,3}}_2) = 1
- I^j_{i,4}: a^j_{i,4} ≻ a^j_{i,1} ≻ a^j_{i,6} ≻ a^j_{i,2} ≻ a^j_{i,3} ≻ a^j_{i,5}, with C^{I^j_{i,4}}_1 = {a^j_{i,1}, a^j_{i,2}, a^j_{i,3}}, C^{I^j_{i,4}}_2 = {a^j_{i,3}, a^j_{i,4}, a^j_{i,5}}, Q(I^j_{i,4}) = 2, q^+(C^{I^j_{i,4}}_1) = 1, q^+(C^{I^j_{i,4}}_2) = 1

(Footnote 7.) It can be verified that if a^j_{i',1} is matched to some institute in Γ(a^j_{i',1}), the above assignment is the only possibility in which no blocking group arises.
The above instance Υ j i has the following features, every one of which is crucial in our construction. 1. In a matching µ φ , suppose that institute I j i is only assigned a j i,1 while a j i ′ ,1 is assigned to some institutes in Γ (a j i ′ ,1 ) (the problematic case we discussed above). As a result, institute I j i can take one more applicant from the set {a j i,t } 6 t=1 . By Feature A, there must exist a blocking group involving the members in Υ j i . More importantly, this blocking group need not be composed of I j i and two applicants from {a j i,t } 6 t=1 . 2. In a matching µ φ , suppose that institutes I j i is assigned two applicants from the set {a j i,t , a j i ′ ,t } 2 t=1 . Then I j i,1 can be regarded as being removed from the instance Υ j i . And there exists a stable matching among the other members of the instance Υ j i . This explains the necessity of Feature B. 3. Finally, since I j i already uses two intersecting classes, I j i,1 should not use any more classes. This explains the reason why Feature C is necessary.
j i into gadget Υ j i,i ′ . To be precise, let Ψ (I j i ) = a j i,5 ≻ a j i,2 ≻ a j i,4 ≻ a j i,6 ≻ a j i,3 ≻ a j i
We have left the functions Γ (a j i,1 ) and Γ (a j i ′ ,1 ) unexplained so far. They contain institutes belonging to the clauses gadgets, which will be the final component in our construction.
Clause Gadget Suppose that c j = x j 1 ∨ x j 2 ∨ x j 3 .
We create a clause gadgetΥ j composed of two institutes {Î j t } 2 t=1 and six applicants {â j t } 6 t=1 . Their preferences and classifications are summarized below.
We now explain how the Λ functions in the clause gadgets interact with the Γ functions in the literal-pair gadgets. The former is composed of applicants in the literal-pair gadgets while the latter is composed of institutes in the clause gadgets. Our intuition is that the only possible stable matchings in the clause gadgets will enforce exactly one of its three literals to be activated. To be precise, let π(X) denote an arbitrary order among the elements in the set X. Then:
â j 1 :Î j 2 ≻Î j 1Î j 1 :â j 5 ≻â j 1 ≻â j 2 ≻ Λ(x j 1 ) ≻â j 6 ≻ Λ(x j 2 ) ≻â j 3 ≻ Λ(x j 3 ) ≻â j 4 a j 2 :Î j 1 ≻2 j 1
Finally, we remark that the three possible outcomes in µ listed in the lemma will guarantee that exactly one of the three literals in clause c_j can be activated. The reason is again the same as in the last two cases that we just explained. This completes the proof of Claim C. ⊓⊔
Now by Claim C, we establish Theorem 16.

C Missing Proofs of Section 4

Lemma 17. In LCSM, if there is no lower bound, i.e., given any class C^i_j, q^−(C^i_j) = 0, then a stable matching as defined in Definition 2 can be equivalently defined as follows. A feasible matching µ is stable if and only if there is no blocking pair. A pair (i, a) is blocking, given that µ(i) = (a_{i1}, a_{i2}, · · · , a_{ik}),
k ≤ Q(i), if
- i ≻_a µ(a);
- for any class C^i_{at} ∈ a(C(i)), |L^i_{≻a} ∩ µ(i) ∩ C^i_{at}| < q^+(C^i_{at}).
Proof. If we have a blocking group (i; g), institute i and the highest ranking applicant in g\µ(i) must be a blocking pair. Conversely, given a blocking pair (i; a), assuming that |µ(i)| = Q(i) (the case that |µ(i)| < Q(i) follows a similar argument), we can form a blocking group (i; µ(i)| a † a), where a † is chosen as follows: (1) if there exists a class C i at ∈ a(C(i)) such that |µ(i) ∩ C i at | = q + (C i at ), choose the smallest such class C i at ∈ a(C(i)) and let a † be the lowest ranking applicant in µ(i) ∩ C i at ; (2) otherwise, a † is simply the lowest ranking applicant in µ(i).
⊓ ⊔ Lemma 19. Every stable matching solution x satisfies the comb inequality for any comb K(i, S(A i )):
x(K(i, S(A_i))) ≡ x(S(A_i)) + Σ_{a_j ∈ A_i} x(T(i, a_j)\{(i, a_j)}) ≥ |A_i|.
We use the following notation to facilitate the proof. Given a tuple A_i, we define y_{ia} as follows: y_{ia} = 1 if either a ∈ A_i and x(T(i, a)) = 1, or a ∉ A_i, x_{ia} = 1, and (i, a) ∈ S(A_i); y_{ia} = 0 otherwise.
Let y(C^i_j) = Σ_{a ∈ L^i ∩ C^i_j} y_{ia}. This quantity indicates how much a class C^i_j contributes to the comb value x(K(i, S(A_i))). Thus, if U is a set of classes in C(i) partitioning L^i, then x(K(i, S(A_i))) = Σ_{C^i_j ∈ U} y(C^i_j).
Proof. We prove by showing that if x(K(i, S(A_i))) < |A_i|, there exists a blocking pair (i, a†), where a† ∈ A_i. We proceed by contradiction. First note that there exists a non-empty subset G ⊆ A_i of applicants a for whom x(T(i, a)) = 0, otherwise x(K(i, S(A_i))) ≥ |A_i|, an immediate contradiction. For each applicant a ∈ G, there must exist a class C^i_{al} ∈ a(C(i)) for which Σ_{a' ∈ L^i_{≻a} ∩ C^i_{al}} x_{ia'} = q^+(C^i_{al}), otherwise (i, a) is a blocking pair and we are done. Now for each applicant a ∈ G, choose the smallest class C^i_{al} for which Σ_{a' ∈ L^i_{≻a} ∩ C^i_{al}} x_{ia'} = q^+(C^i_{al}) and denote this class as C_a. We introduce a procedure to organize a set U of disjoint classes.
Let G be composed of a_1, a_2, · · · , a_{|G|}, ordered based on their decreasing rankings on L^i.
For l = 1 to |G|:
    if a_l ∈ C ∈ U, then do nothing
    else
        U := U \ {C | C ∈ U, C ⊂ C_{a_l}}    // C_{a_l} may be a superclass of some classes in U
        U := U ∪ {C_{a_l}}                   // add C_{a_l} into U
Claim. The output U from the above procedure comprises a disjoint set of classes containing all applicants in G, and for each class C^i_j ∈ U, y(C^i_j) ≥ q^+(C^i_j). We will prove the claim shortly. Now
x(K(i, S(A_i))) = Σ_{C^i_j ∈ U} y(C^i_j) + |A_i \ ∪_{C^i_j ∈ U} C^i_j| ≥ Σ_{C^i_j ∈ U} q^+(C^i_j) + |A_i \ ∪_{C^i_j ∈ U} C^i_j| ≥ |A_i|,
a contradiction. ⊓ ⊔ Proof of the Claim. It is easy to see that the classes in U are disjoint and contain all applicants in G. Below we show that during the execution of the procedure, if C i j ∈ U , then y(C i j ) ≥ q + (C i j ). We proceed by induction on the number of times U is updated. In the base case U is an empty set so there is nothing to prove.
For the induction step, assume that a l is being examined and C a l is about to be added into U . Observe that even though a∈L i ≻a l ∩Ca l x ia = q + (C a l ), there is no guarantee that if x ia = 1, then
y ia = 1 for each a ∈ L i ≻a l ∩ C a l .
The reason is that there may exist some class C i j ∈ a(C(i)) for which
|A_i ∩ C^i_j ∩ L^i_{≻a}| = q^+(C^i_j) and a ∉ A_i. Then (i, a) is not part of the shaft x(S(A_i)) and y_{ia} = 0. To deal with the above situation, we need to do some case analysis. Let B be the set of subclasses
C i j of C a l for which |A i ∩C i j ∩L i ≻a l | = q + (C i j )
. Choose D to be the subclasses of C a l so that ℜ(B∪U )∪D partitions C a l . We make three observations below.
(i) For each class C^i_j ∈ ℜ(B ∪ U) with C^i_j ∈ U, y(C^i_j) ≥ q^+(C^i_j) ≥ Σ_{a ∈ L^i_{⪰a_l} ∩ C^i_j} x_{ia}.
(ii) For each class C^i_j ∈ D, if a ∈ L^i_{≻a_l} ∩ C^i_j and x_{ia} = 1, then y_{ia} = 1.
(iii) For each class C^i_j ∈ ℜ(B ∪ U) with C^i_j ∉ U, for each applicant a ∈ L^i_{≻a_l} ∩ C^i_j ∩ A_i, either a ∈ G and a ∈ C ∈ U, or a ∉ G (implying that x(T(i, a)) = 1). Moreover,
y(C^i_j) ≥ Σ_{a ∈ L^i_{≻a_l} ∩ C^i_j} x_{ia}
(i) is because of the induction hypothesis and the feasibility assumption of x. (ii) follows from the fact that a ranks higher than a_l and the way we define a class in D. For (iii), first notice that if C^i_j ∈ ℜ(B ∪ U) and C^i_j ∉ U, then such a class C^i_j must be part of ℜ(B), and C^i_j may contain some classes in U. Now suppose that some applicant a_t ∈ G ∩ L^i_{≻a_l} does not belong to any class in U. Then our procedure would have added the class C_{a_t} into U before examining a_l, a contradiction. To see the last statement of (iii), let G' be the set of applicants in L^i_{≻a_l} ∩ C^i_j ∩ A_i who do not belong to any classes in U. Then
y(C^i_j) ≥ Σ_{C^i_k ∈ U, C^i_k ⊂ C^i_j} y(C^i_k) + |G'| ≥ Σ_{C^i_k ∈ U, C^i_k ⊂ C^i_j} q^+(C^i_k) + |G'| ≥ q^+(C^i_j) ≥ Σ_{a ∈ L^i_{≻a_l} ∩ C^i_j} x_{ia},
where the first inequality follows from the first part of (iii), the second inequality the induction hypothesis, the third the fact that C i j ∈ ℜ(B) (thus |L i ≻a l ∩ C i j ∩ A i | = q + (C i j )), and the fourth the feasibility assumption of x. Now combining all the three observations, we conclude that
y(C a l ) = C i j ∈ℜ(B∪U ) y(C i j ) + C i k ∈D y(C i j ) ≥ C i k ∈ℜ(B l ∪U )∪D l a∈L i ≻a l ∩C i k x ia = q + (C i j ),
and the induction step is completed. ⊓ ⊔ Lemma 20. Every stable matching solution x satisfies the following inequality for any class-tuple t i j :
a ij ∈t i j x(T (i, a ij )\{i, a ij }) ≥ a∈C i j ∩L i ≺t i j x ia (*)
Proof. We prove by contradiction. Suppose that for a given class-tuple t^i_j, (*) does not hold. We will show that we can find a blocking pair (i, a†), where a† ∈ t^i_j. Let the set of applicants a ∈ t^i_j with x(T(i, a)) = 0 be G, α = Σ_{a' ∈ L^i_{≺t^i_j} ∩ C^i_j} x_{ia'} > 0, and β = Σ_{a' ∈ t^i_j} x_{ia'}.
By assumption, at most α − 1 applicants a ∈ t i j have x(T (i, a)\{(i, a)}) = 1. Thus,
|G| ≥ q + (C i j ) − β − α + 1.(11)
Claim: At least one applicant a † ∈ G belongs to a sequence of classes C i a † t ∈ a † (C(i)) such
that if C i a † t ⊆ C i j , then a ′ ∈L i ≻a † ∩C i a † t x ia ′ < q + (C i a † t ).
We will prove the claim shortly. Observe that given any class
C i k ⊃ C i j , a ′ ∈L i ≻a † ∩C i k x ia ′ < q + (C i k )
: as α > 0, some applicant a φ ∈ C i k ranking lower than a † has x ia φ = 1 and Constraint (2) enforces that
a ′ ∈L i ∩C i k x ia ′ ≤ q + (C i k ).
Combining the above facts, we conclude that (i, a † ) is a blocking pair. ⊓ ⊔ Proof of the Claim. We prove by contradiction. Suppose that for every applicant a ∈ G, there exists some class C i at ∈ a(C(i)), C i at ⊆ C i j , and a ′ ∈ L i ≻a ∩C i at x ia ′ = q + (C i at ). Let B be the set of classes C i k ⊆ C i j such that C i k contains an applicant a ∈ G and a ′ ∈L i ≻a ∩C i k x ia ′ = q + (C i k ) (which then will equal a ′ ∈L i t i j ∩C i k x ia ′ due to Constraint (2)). For each class C i k ∈ ℜ(B),
a∈L i t i j ∩C i k x ia = q + (C i k ) ≥ |t i j ∩ C i k | = a ′ ∈L i t i j ∩t i j ∩C i k x ia ′ + |G ∩ C i k |,(12)
where the first inequality follows from the definition of the class-tuple. Now we have
q^+(C^i_j) − α − β ≥ Σ_{C^i_k ∈ ℜ(B)} Σ_{a′ ∈ (L^i_{≻a‡} ∩ C^i_k)\t^i_j} x_{ia′} ≥ Σ_{C^i_k ∈ ℜ(B)} |G ∩ C^i_k| = |G| ≥ q^+(C^i_j) − α − β + 1,
a contradiction. Note that the first inequality follows from Constraint (2), the second inequality from (12), the equality right after is because every applicant in G belongs to some class in B, and the last inequality is due to (11). ⊓ ⊔
D Separation Oracle in Section 4.1
It is clear that Constraints (1), (2), and (4) can be separated in polynomial time. So we assume that x satisfies these constraints and focus on finding a violated Constraint (3) and/or Constraint (5).
Separating Constraint (3)
We first make an observation. For each institute i, it suffices to check whether all the combs with exactly Q(i) teeth satisfy Constraint (3). To see this, suppose that there is a feasible tuple A_i with fewer than Q(i) applicants and x(K(i, S(A_i))) < |A_i|. Then we can add suitable applicants into A_i to get a feasible tuple Ā_i with exactly Q(i) applicants. Noticing that x(S(Ā_i)) ≤ x(S(A_i)), we have
x(K(i, S(Ā_i))) ≤ x(K(i, S(A_i))) + Σ_{a ∈ Ā_i\A_i} x(T(i, a)\{(i, a)}) < |A_i| + |Ā_i| − |A_i| = |Ā_i|,
where the last inequality follows from our assumption that x satisfies Constraint (1).
To illustrate our idea, we first explain how to deal with the case that the original classification C(i) is just a partition over L i (before we add the pseudo root class C i ♯ ). We want to find out the tuple A i of length Q(i), whose lowest ranking applicant is a † , which gives the smallest x(K(i, S(A i ))). If we have this information for all possible a † , we are done. Note that because of our previous discussion, if there is no feasible tuple of length Q(i) whose lowest ranking applicant is a † , we can ignore those cases.
Our main idea is to decompose the value of x(K(i, S(A_i))) based on the classes and use dynamic programming to find out the combinations of the tooth-applicants that give the smallest comb values. More precisely,

Definition 27. Assume that A^i_j ⊆ C^i_j, 0 ≤ |A^i_j| ≤ q^+(C^i_j), and all applicants in A^i_j rank higher than a†. Let x(A^i_j, a†) = Σ_{a ∈ A^i_j} x(T(i, a)\{(i, a)}) + x(S(A^i_j), a†), where x(S(A^i_j), a†) is the sum of the values x_{ia} over pairs (i, a) ∈ S(A^i_j) with a ∈ C^i_j ranking higher than a†, and let Z(C^i_j, s_j, a†) be the minimum of x(A^i_j, a†) over all such tuples A^i_j with |A^i_j| = s_j.

Note that this definition requires that if x_{ia} contributes to x(A^i_j, a†), then a has to rank higher than a†, belong to C^i_j, and the pair (i, a) has to be part of the shaft S(A^i_j). Suppose that we have properly stored all the possible values of Z(C^i_j, s_j, a†) and assume that a† ∈ C^i_{j′}. Then, for each class C^i_j ≠ C^i_{j′}, assume that 0 ≤ s_j ≤ q^+(C^i_j), and for class C^i_{j′}, that 0 ≤ s_{j′} ≤ q^+(C^i_{j′}) − 1; the tuple A_i whose lowest ranking applicant is a† and that gives the smallest comb value is given by the following:
x(K(i, S(A_i))) = x(T(i, a†)) + min_{s_j: Σ_{C^i_j ∈ C(i)} s_j = Q(i)−1} Σ_{C^i_j ∈ C(i)} Z(C^i_j, s_j, a†).
The above quantity can be calculated using standard dynamic programming technique. So the question boils down to how to calculate Z(C i j , s j , a † ). There are two cases.
For the induction step, let C i j be a non-leaf class and assume that a ‡ ∈ C i k ′ ∈ c(C i j ). To calculate Z(C i j , s j , a ‡ , a † ), we need to find out a feasible tuple A i j of size s j , all of whose applicants rank at least as high as a ‡ so that x(A i j , a † ) is minimized. Observe that a feasible tuple A i j can be decomposed into a set of tuples
A^i_j = ⋃_{C^i_k ∈ c(C^i_j)} A^i_k, where A^i_k ⊆ C^i_k ∈ c(C^i_j).
1. Suppose that s_j < q^+(C^i_j). Then by definition, x(S(A^i_j), a†) = Σ_{C^i_k ∈ c(C^i_j)} x(S(A^i_k), a†). So
x(A^i_j, a†) = Σ_{C^i_k ∈ c(C^i_j)} ( Σ_{a ∈ A^i_k} x(T(i, a)\{(i, a)}) + x(S(A^i_k), a†) ).
For each class C^i_k ∈ c(C^i_j), the minimum quantity Σ_{a ∈ A^i_k} x(T(i, a)\{(i, a)}) + x(S(A^i_k), a†) is exactly Z(C^i_k, s_k, a‡, a†). As a result, for each class C^i_k ≠ C^i_{k′}, let 0 ≤ s_k ≤ q^+(C^i_k), and for class C^i_{k′}, let 0 ≤ s_{k′} ≤ q^+(C^i_{k′}) − 1:
Z(C^i_j, s_j, a‡, a†) = x(T(i, a‡)) + min_{s_k: Σ s_k = s_j − 1} Σ_{C^i_k ∈ c(C^i_j)} Z(C^i_k, s_k, a‡, a†).
Thus, we can find out Z(C^i_j, s_j, a‡, a†) by dynamic programming. 2. Suppose that s_j = q^+(C^i_j). Note that this time, since the class C^i_j will be "saturated", the term x(S(A^i_j), a†) does not get any positive values x_{ia}, provided that a ∈ C^i_j ∩ (L^i_{≻a†} ∩ L^i_{≺a‡}). So x(S(A^i_j), a†) = Σ_{C^i_k ∈ c(C^i_j)} x(S(A^i_k), a‡) and this implies that
x(A^i_j, a†) = Σ_{C^i_k ∈ c(C^i_j)} ( Σ_{a ∈ A^i_k} x(T(i, a)\{(i, a)}) + x(S(A^i_k), a‡) ).
Let â‡ be the lowest ranking applicant that ranks higher than a‡. Then for each class C^i_k ∈ c(C^i_j), the minimum quantity Σ_{a ∈ A^i_k} x(T(i, a)\{(i, a)}) + x(S(A^i_k), a‡) is exactly Z(C^i_k, s_k, â‡, a‡). Assuming that for each class C^i_k ≠ C^i_{k′}, we let 0 ≤ s_k ≤ q^+(C^i_k), and that 0 ≤ s_{k′} ≤ q^+(C^i_{k′}) − 1, we have
Z(C^i_j, s_j, a‡, a†) = x(T(i, a‡)) + min_{s_k: Σ s_k = s_j − 1} Σ_{C^i_k ∈ c(C^i_j)} Z(C^i_k, s_k, â‡, a‡).
As before, this can be calculated by dynamic programming. ⊓⊔ Now choose the smallest Z(C^i_♯, Q(i) − 1, a‡, a†) among all possible a‡ who rank higher than a† and assume that A^i_♯ is the corresponding tuple. It is easy to see that, among all feasible tuples A_i of length Q(i) whose lowest ranking applicant is a†, the one with the smallest comb value x(K(i, S(A_i))) is exactly the tuple A^i_♯ ∪ {a†}.
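As an illustration of the combination step x(K(i, S(A_i))) = x(T(i, a†)) + min Σ_j Z(C^i_j, s_j, a†) used above, the following Python sketch performs the min-plus dynamic program over precomputed per-class tables. The table format and the function name are assumptions of ours, and the computation of the Z values themselves is not shown.

# Hypothetical helper: combine per-class tables Z(C, s, a_dagger) by a min-plus dynamic program.
# Z_tables: list of dicts {size s: value Z(C, s, a_dagger)}, one dict per top-level class.
# budget:   Q(i) - 1, the number of tooth-applicants besides a_dagger.
def combine_min(Z_tables, budget):
    INF = float("inf")
    best = {0: 0.0}                       # best[u] = smallest total Z-value using exactly u applicants
    for table in Z_tables:
        nxt = {}
        for used, val in best.items():
            for s, z in table.items():
                if used + s <= budget:
                    nxt[used + s] = min(nxt.get(used + s, INF), val + z)
        best = nxt
    return best.get(budget, INF)

# The smallest comb value with lowest-ranking tooth-applicant a_dagger would then be
# x_tooth_a_dagger + combine_min(Z_tables, Q_i - 1), with both inputs supplied by the caller.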
Separating Constraint (5)
We again make use of dynamic programming. The idea is similar to the previous one and the task is much simpler, so we will be brief. Suppose that we are checking all the class-tuples T^i_j corresponding to class C^i_j. Let T^i_{j,a†} ⊆ T^i_j be the subset of class-tuples whose lowest ranking applicant is a†. We need to find out the class-tuple t^i_{j,a†} ∈ T^i_{j,a†} with the smallest value
x(T(i, a†)\{(i, a†)}) + Σ_{a ∈ t^i_{j,a†}\{a†}} x(T(i, a)\{(i, a)}),
and check whether this value is no less than Σ_{a ∈ C^i_j ∩ L^i_{≺a†}} x_{ia}. If it is, then we are sure that all class-tuples in T^i_{j,a†} satisfy Constraint (5); otherwise, we find a violated constraint. The above quantity can be easily calculated by dynamic programming as before.
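For small instances, Constraint (5) can also be verified by brute force instead of the dynamic program. The sketch below enumerates candidate class-tuples directly; all names (members, q_plus, x, x_tooth_minus, respects_subclass_quotas) are illustrative assumptions of ours, and the subclass-quota test is left to a caller-supplied predicate.

from itertools import combinations

def violates_constraint_5(members, q_plus, x, x_tooth_minus, respects_subclass_quotas):
    """Brute-force check of Constraint (5) for one class C^i_j (exponential; tiny instances only).

    members:       applicants of C^i_j listed in decreasing preference of institute i.
    q_plus:        upper bound q^+(C^i_j); every class-tuple has exactly q_plus applicants.
    x:             dict applicant -> x_{ia}.
    x_tooth_minus: dict applicant -> x(T(i,a)\\{(i,a)}).
    """
    n = len(members)
    for chosen in combinations(range(n), q_plus):
        tuple_applicants = [members[k] for k in chosen]
        if not respects_subclass_quotas(tuple_applicants):
            continue                                  # not a class-tuple: some subclass quota exceeded
        lowest = max(chosen)                          # index of the lowest-ranking applicant in the tuple
        lhs = sum(x_tooth_minus[a] for a in tuple_applicants)
        rhs = sum(x[members[k]] for k in range(lowest + 1, n))  # applicants of C^i_j ranking lower than the tuple
        if lhs < rhs:
            return tuple_applicants                   # a violated class-tuple
    return None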
E A Counter Example for Section 4.2
The example shown in Figure 4 contains five stable matchings. If we apply the median choice operation on all of them, we get the stable matching µ 2 , which does not give institutes i 1 and i 2 their lexicographical median outcome.
Institute Preferences | Classifications | Class Bounds
i1: ax ay a1 a2 a3 a4 | C^1_1 = {a1, a2}, C^1_2 = {a3, a4} | Q(i1) = 2, q^+(C^1_1) = 1, q^+(C^1_2) = 1
i2: az aw a2 a1 a4 a3 | C^2_1 = {a1, a2}, C^2_2 = {a3, a4} | Q(i2) = 2, q^+(C^2_1) = 1, q^+(C^2_2) = 1
i3: a1 a2 a3 a4 ax ay az aw | — | Q(i3) = 4

Applicant Preferences
a1: i2 i1 i3   a2: i1 i2 i3   a3: i2 i1 i3   a4: i1 i2 i3   ax: i3 i1   ay: i3 i1   az: i3 i2   aw: i3 i2

Stable Matchings
µ1 = {(i1; ax, ay), (i2; az, aw), (i3; a1, a2, a3, a4)}
µ2 = {(i1; a1, a3), (i2; a2, a4), (i3; ax, ay, az, aw)}
µ3 = {(i1; a1, a4), (i2; a2, a3), (i3; ax, ay, az, aw)}
µ4 = {(i1; a2, a3), (i2; a1, a4), (i3; ax, ay, az, aw)}
µ5 = {(i1; a2, a4), (i2; a1, a3), (i3; ax, ay, az, aw)}
Fig. 4. An example of a median choice stable matching which does not give the institutes their lexicographically median outcome.
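The claim can be checked mechanically. The following Python sketch hard-codes the applicant preferences and the five stable matchings of Figure 4 and assigns every applicant his median choice; the dictionaries are our own encoding of the figure, not part of the paper.

# Applicant preference lists of Figure 4 (most preferred first).
prefs = {
    "a1": ["i2", "i1", "i3"], "a2": ["i1", "i2", "i3"],
    "a3": ["i2", "i1", "i3"], "a4": ["i1", "i2", "i3"],
    "ax": ["i3", "i1"], "ay": ["i3", "i1"],
    "az": ["i3", "i2"], "aw": ["i3", "i2"],
}

# The five stable matchings, written applicant -> institute.
matchings = [
    {"ax": "i1", "ay": "i1", "az": "i2", "aw": "i2", "a1": "i3", "a2": "i3", "a3": "i3", "a4": "i3"},
    {"a1": "i1", "a3": "i1", "a2": "i2", "a4": "i2", "ax": "i3", "ay": "i3", "az": "i3", "aw": "i3"},
    {"a1": "i1", "a4": "i1", "a2": "i2", "a3": "i2", "ax": "i3", "ay": "i3", "az": "i3", "aw": "i3"},
    {"a2": "i1", "a3": "i1", "a1": "i2", "a4": "i2", "ax": "i3", "ay": "i3", "az": "i3", "aw": "i3"},
    {"a2": "i1", "a4": "i1", "a1": "i2", "a3": "i2", "ax": "i3", "ay": "i3", "az": "i3", "aw": "i3"},
]

median = {}
for a, plist in prefs.items():
    assigned = sorted((m[a] for m in matchings), key=plist.index)  # a's outcomes, best to worst
    median[a] = assigned[len(assigned) // 2]                       # the 3rd of 5 choices

print(median)  # coincides with mu_2: a1->i1, a2->i2, a3->i1, a4->i2, and ax, ay, az, aw -> i3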
| 23,073 |
0907.1779
|
2951984019
|
We introduce the classified stable matching problem, a problem motivated by academic hiring. Suppose that a number of institutes are hiring faculty members from a pool of applicants. Both institutes and applicants have preferences over the other side. An institute classifies the applicants based on their research areas (or any other criterion), and, for each class, it sets a lower bound and an upper bound on the number of applicants it would hire in that class. The objective is to find a stable matching from which no group of participants has reason to deviate. Moreover, the matching should respect the upper and lower bounds of the classes. In the first part of the paper, we study classified stable matching problems whose classifications belong to a fixed set of "order types." We show that if the set consists entirely of downward forests, there is a polynomial-time algorithm; otherwise, it is NP-complete to decide the existence of a stable matching. In the second part, we investigate the problem using a polyhedral approach. Suppose that all classifications are laminar families and there is no lower bound. We propose a set of linear inequalities to describe the stable matching polytope and prove that it is integral. This integrality allows us to find various optimal stable matchings using the Ellipsoid algorithm. A further ramification of our result is the description of the stable matching polytope for the many-to-many (unclassified) stable matching problem. This answers an open question posed by Sethuraman, Teo and Qian.
|
Abraham, Irving and Manlove introduced the student-project allocation problem @cite_2 . It can be shown that, if all classifications are just partitions over the applicants and there is no lower bound, our problem is equivalent to a special case of their problem. They posed the open question whether there is a polynomial time algorithm for their problem if there are lower bounds on the projects (classes). Our result in Section 2 gives a partial positive answer.
|
{
"abstract": [
"We study the Student-Project Allocation problem (SPA), a generalisation of the classical Hospitals Residents problem (HR). An instance of SPA involves a set of students, projects and lecturers. Each project is offered by a unique lecturer, and both projects and lecturers have capacity constraints. Students have preferences over projects, whilst lecturers have preferences over students. We present two optimal linear-time algorithms for allocating students to projects, subject to the preference and capacity constraints. In particular, each algorithm finds a stable matching of students to projects. Here, the concept of stability generalises the stability definition in the HR context. The stable matching produced by the first algorithm is simultaneously best-possible for all students, whilst the one produced by the second algorithm is simultaneously best-possible for all lecturers. We also prove some structural results concerning the set of stable matchings in a given instance of SPA. The SPA problem model that we consider is very general and has applications to a range of different contexts besides student-project allocation."
],
"cite_N": [
"@cite_2"
],
"mid": [
"2012500540"
]
}
|
Classified Stable Matching
|
Imagine that a number of institutes are recruiting faculty members from a pool of applicants. Both sides have their preferences. It would be ideal if there is a matching from which no applicant and institute have reason to deviate. If an applicant prefers another institute to the one he is assigned to (or maybe he is unassigned) and this institute also prefers him to any one of its assigned applicants, then this institute-applicant pair is a blocking pair. A matching is stable if there is no blocking pair.
The above scenario is the well-studied hospitals/residents problem [7,11] in a different guise. It is known that stable matchings always exist and can be found efficiently by the Gale-Shapley algorithm. However, real world situations can be more complicated. An institute may have its own hiring policy and may find certain sets of applicants together unacceptable. For example, an institute may have reasons to avoid hiring too many applicants graduated from the same school; or it may want to diversify its faculty so that it can have researchers in many different fields.
This concern motivates us to consider the following problem. An institute, besides giving its preference among the applicants, also classifies them based on their expertise (or some other criterion). For each class, it sets an upper bound and a lower bound on the number of applicants it would hire. Each institute defines its own classes and classifies the applicants in its own way (and the classes need not be disjoint). We consider this flexibility a desirable feature, as there are some research fields whose boundaries are blurred; moreover, some versatile researchers may be hard to categorize.
We call the above problem classified stable matching. Even though motivated by academic hiring, it comes up any time objects on one side of the matching have multiple partners that may be classified. For example, the two sides can be jobs and machines; each machine is assigned several jobs but perhaps cannot take two jobs with heavy memory requirements.
To make the problem precise, we introduce necessary notation and terminology. A set A of applicants and a set I of institutes are given. Each applicant/institute has a strictly-ordered (but not necessarily complete) preference list over the other side. The notation ⪰_e indicates either strictly better or equal in terms of preference of an entity e ∈ A ∪ I, while ≻_e means strictly better. For example, if applicant a ∈ A strictly prefers institute i ∈ I to another institute i′ ∈ I, we write i ≻_a i′. The preference list of institute i is denoted as L_i. The set of applicants on L_i who rank higher (respectively lower) than some particular applicant a are written as L^i_{≻a} (respectively L^i_{≺a}). An institute i has a capacity Q(i) ∈ Z+, the maximum number of applicants it can hire. It defines its own classification C(i) = {C^i_j}^{|C(i)|}_{j=1}, which is a family of sets over the applicants in its preference list. Each class C^i_j ∈ C(i) has an upper bound q^+(C^i_j) ∈ Z+ and a lower bound q^−(C^i_j) ∈ Z+ ∪ {0} on the number of applicants it would hire in that class. Given a matching µ, µ(a) is the institute applicant a is assigned to. We write µ(i) = (a_{i1}, a_{i2}, · · ·, a_{ik}), k ≤ Q(i), to denote the set of applicants institute i gets in µ, where the a_{ij} are listed in decreasing order based on its preference list. In this paper, we will slightly abuse notation, treating an (ordered) tuple such as µ(i) as a set. Definition 1. Given a tuple t = (a_{i1}, a_{i2}, · · ·, a_{ik}) where the a_{ij} are ordered based on their decreasing rankings on institute i's preference list, it is said to be a feasible tuple of institute i, or just feasible for short, if the following conditions hold:
– k ≤ Q(i);
– given any class C^i_j ∈ C(i), q^−(C^i_j) ≤ |t ∩ C^i_j| ≤ q^+(C^i_j).
Definition 2.
A matching µ is feasible if all the tuples µ(i), i ∈ I, are feasible. A feasible matching is stable if and only if there is no blocking group. A blocking group is defined as follows. Let µ(i) = (a_{i1}, a_{i2}, · · ·, a_{ik}), k ≤ Q(i). A feasible tuple g = (a′_{i1}, a′_{i2}, · · ·, a′_{ik′}), k ≤ k′ ≤ Q(i), forms a blocking group (i; g) with institute i if
– for 1 ≤ j ≤ k, i ⪰_{a′_{ij}} µ(a′_{ij}) and a′_{ij} ⪰_i a_{ij};
– either there exists l, 1 ≤ l ≤ k, such that a′_{il} ≻_i a_{il} and i ≻_{a′_{il}} µ(a′_{il}), or k′ > k.
Informally speaking, the definition requires that for a blocking group to be formed, all involved applicants have to be willing to switch to, or stay with, institute i. The collection of applicants in the blocking group should still respect the upper and lower bounds in each class; moreover, the institute gets a strictly better deal (in the Pareto-optimal sense). Note that when there is no class lower bound, then the stable matching as defined in Definition 2 can be equivalently defined as a feasible matching without the conventional blocking pairs (see Lemma 17 in Section 4). When the class lower bound is present, the definition of the blocking groups captures our intuition that an institute should not indiscriminately replace a lower ranking applicant assigned to it with a higher applicant (with whom it forms a blocking pair), otherwise, the outcome for it may not be a feasible one. In our proofs, we often use the notation µ(i)| a a ′ to denote a tuple formed by replacing a ∈ µ(i) with a ′ . The order of the tuple µ(i)| a a ′ is still based on institute i's preference list. If we write µ(i)|a, then this new tuple is obtained by adding a into µ(i) and re-ordered. In a matching µ, if a class C i j is fully-booked, i.e. |µ(i) ∩ C i j | = q + (C i j ), we often refer to such a class as a "bottleneck" class. We also define an "absorption" operation: given a set B of classes, ℜ(B) returns the set of classes which are not entirely contained in other classes in B.
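For concreteness, the feasibility test of Definition 1 can be written out directly. The sketch below assumes a representation of our own choosing (a capacity Q and classes given as (member set, lower bound, upper bound) triples) and is only illustrative.

def is_feasible_tuple(applicants, Q, classes):
    """Check Definition 1 for a tuple of applicants at one institute.

    applicants: collection of applicant identifiers (order is irrelevant for feasibility).
    Q:          capacity Q(i) of the institute.
    classes:    iterable of (member_set, q_minus, q_plus) triples describing C(i).
    """
    chosen = set(applicants)
    if len(chosen) > Q:
        return False
    for members, q_minus, q_plus in classes:
        inside = len(chosen & members)
        if not (q_minus <= inside <= q_plus):
            return False
    return True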
Our Results It would be of interest to know how complicated the classifications of the institutes can be while still allowing the problem a polynomial time algorithm. In this work, we study the classified stable matching problems whose classifications belong to a fixed set of "order types." The order type of a classification is the inclusion poset of all non-empty intersections of classes. We introduce necessary definitions to make our statement precise. Definition 3. The class inclusion poset P(i) = (C̄(i), ⊇) of an institute i is composed of sets of the elements from L_i:
C̄(i) = {C | C = C^i_j ∩ C^i_k, where C^i_j, C^i_k ∈ C(i)} 1 . In P(i), C^i_j ≻ C^i_k if C^i_j ⊃ C^i_k; and C^i_j and C^i_k are incomparable if C^i_j ⊅ C^i_k and C^i_k ⊅ C^i_j.
Definition 4. Let P = {P 1 , P 2 , · · · , P k } be a set of posets. A classified stable matching instance (A, I) belongs to the group of P-classified stable matching problems if for each poset P j ∈ P, there exists an institute i ∈ I whose class inclusion poset P (i) is isomorphic to P j and conversely, every class inclusion poset P (i) is isomorphic to a poset in P.
We call a poset a downward forest if given any element, no two of its successors are incomparable. Our first main result is the following dichotomy theorem.
Theorem 5. Let P = {P 1 , P 2 , · · · , P k } be a set of posets. P-classified stable matching problems can be solved in polynomial time if every poset P j ∈ P is a downward forest; on the other hand, if P contains a poset P j which is not a downward forest, the existence of a stable matching is NP-complete.
We remark that if P is entirely composed of downward forests, then every classification C(i) must be a laminar family 2 . In this case, we call the problem laminar classified stable matching (henceforth LCSM).
We present an O(m 2 ) time algorithm for LCSM, where m is the total size of all preferences. Our algorithm is extended from the Gale-Shapley algorithm. Though intuitive, its correctness is difficult to argue due to various constraints 3 . Furthermore, we show that several well-known structural results in the hospitals/residents problem can be further generalized in LCSM. On the other hand, if some institute i has a classification C(i) violating laminarity, then P must contain a poset which has a "V" (where the "bottom" is induced by two intersecting classes in C(i) which are its parents "on top.") We will make use of this fact to design a gadget for our NP-complete reduction. In particular, in our reduction, all institutes only use upperbound constraints. Sections 2 and 3 will be devoted to these results.
Our dichotomy theorem implies a certain limit on the freedom of the classifications defined by the institutes. For example, an institute may want to classify the applicants based on two different criteria simultaneously (say by research fields and gender); however, our result implies this may cause the problem to become intractable.
In the second part, we study LCSM using a mathematical programming approach. Assume that there is no lower bound on the classes. We extend the set of linear inequalities used by Baïou and Balinski [3] to describe stable matchings and generalize a bin-packing algorithm of Sethuraman, Teo, and Qian [22] to prove that the polytope is integral. The integrality of our polytope allows us to use suitable objective functions to obtain various optimal stable matchings using Ellipsoid algorithm. As our LP has an exponential number of constraints, we also design a separation oracle.
By studying the geometric structure of fractional stable matchings, we are able to generalize a theorem of Teo and Sethuraman [23]: in (one-to-one) stable marriage, given any number of stable matchings, if we assign every man his median choice among all women with whom he is matched in the given set of matchings and we do similarly for women, the outcome is still a stable matching. This theorem has been generalized in the context of hospitals/residents problem [5,13,22]. We prove that in LCSM, this theorem still holds: if we apply this "median choice operation" on all applicants, the outcome is still a stable matching 4 .
A final ramification of our polyhedral result is an answer to an open question posed by Sethuraman, Teo and Qian [22]: how do we describe the stable matching polytope in the classical "unclassified" many-to-many stable matching problem? We show this problem can be reduced to LCSM by suitable cloning and classifications.
All the polyhedral results will be presented in Section 4. In Section 5 we conclude. Omitted proofs and details can be found in the appendix.
An Algorithm for Laminar Classified Stable Matching
In this section, we present a polynomial time algorithm to find a stable matching if it exists in the given LCSM instance, otherwise, to report that none exists.
We pre-process our instance as follows. If applicant a is on institute i's preference list, we add a class C i a1 = {a} into C(i). Furthermore, we also add a class C i ♯ into C(i) including all applicants in L i . After this pre-processing, the set of classes in C(i) form a tree whose root is the C i ♯ ; moreover, an applicant a belongs to a sequence of classes a(C(i)) = (C i a1 , C i a2 , · · · , C i az (= C i ♯ )), which forms a path from the leaf to the root in the tree (i.e., C i aj is a super class of C i aj ′ , provided j ′ < j.) For each non-leaf class C i j , let c(C i j ) denote the set of its child classes in the tree. We can assume without loss of generality that
q^−(C^i_j) ≥ Σ_{C^i_k ∈ c(C^i_j)} q^−(C^i_k) for any non-leaf class C^i_j. Finally, let q^+(C^i_♯) := Q(i), q^−(C^i_♯) := Σ_{C^i_k ∈ c(C^i_♯)} q^−(C^i_k)
; for all applicants a ∈ L i , q + (C i a1 ) := 1 and q − (C i a1 ) := 0. Our algorithm finds an applicant-optimal-institute-pessimal stable matching. The applicant-optimality means that all applicants get the best outcome among all stable matchings; on the other hand, institute-pessimality means that all institutes get an outcome which is "lexicographically" the worst for them. To be precise, suppose that µ(i) = (a i1 , a i2 , · · · , a ik ) and µ ′ (i) = (a i1 , a i2 , · · · , a ik ) are the outcomes of two stable matchings for institute i 5 . If there exists k ′ ≤ k so that a ij = a ′ ij , for all 1 ≤ j ≤ k ′ − 1 and a ik ′ ≻ i a ′ ik ′ , then institute i is lexicographically better off in µ than in µ ′ . We now sketch the high-level idea of our algorithm. We let applicants "propose" to the institutes from the top of their preference lists. Institutes make the decision of acceptance/rejection of the proposals based on certain rules (to be explained shortly). Applicants, if rejected, propose to the next highest-ranking institutes on their lists. The algorithm terminates when all applicants either end up with some institutes, or run out of their lists. Then we check whether the final outcome meets the upper and lower bounds of all classes. If yes, the outcome is a stable matching; if no, there is no stable matching in the given instance.
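A minimal sketch of this pre-processing, under an assumed dictionary representation of the laminar classification (names such as class_tree and the root id "#" are ours, not the paper's); the same representation is reused in the sketch of the main loop given after Figure 1.

def build_class_tree(pref_list, raw_classes, Q, root_id="#"):
    """pref_list: applicants on L_i in decreasing preference; raw_classes: {cid: (member_set, q_minus, q_plus)}.
    Returns {cid: {"members": set, "children": [...], "q-": int, "q+": int}} with singleton leaf classes
    and a pseudo root class added (laminarity and pairwise distinct member sets are assumed)."""
    tree = {cid: {"members": set(m), "children": [], "q-": qm, "q+": qp}
            for cid, (m, qm, qp) in raw_classes.items()}
    for a in pref_list:                                   # add the singleton class C^i_{a1} = {a}
        if not any(node["members"] == {a} for node in tree.values()):
            tree[("leaf", a)] = {"members": {a}, "children": [], "q-": 0, "q+": 1}
    tree[root_id] = {"members": set(pref_list), "children": [], "q-": 0, "q+": Q}
    for c, node in tree.items():                          # parent of c = smallest class strictly containing it
        bigger = [d for d, nd in tree.items() if node["members"] < nd["members"]]
        if bigger:
            tree[min(bigger, key=lambda d: len(tree[d]["members"]))]["children"].append(c)
    tree[root_id]["q-"] = sum(tree[c]["q-"] for c in tree[root_id]["children"])
    return tree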
How the institutes make the acceptance/rejection decisions is the core of our algorithm. Intuitively, when an institute gets a proposal, it should consider two things: (i) will adding this new applicant violate the upper bound of some class? (ii) will adding this applicant deprive other classes of their necessary minimum requirement? If the answer to either question is positive, the institute should not just take the new applicant unconditionally; instead, it has to reject someone it currently has (not necessarily the newcomer). (Footnote 5: In LCSM, an institute always gets the same number of applicants in all stable matchings; see Theorem 15 below.)
Below we will design two invariants for all classes of an institute. Suppose that institute i gets a proposal from applicant a, who belongs to a sequence of classes a(C(i)) = (C i a1 , C i a2 , · · · , C i ♯ ). We check this sequence of classes from the leave to the root. If adding applicant a into class C i aj does not violate these invariants, we climb up and see if adding applicant a into C i a(j+1) violates the invariant. If we can reach all the way to C i ♯ without violating the invariants in any class in a(C(i)), applicant a is just added into institute i's new collection. If, on the other hand, adding applicant a into C i a(j+1) violates the invariants, institute i rejects some applicant in C i a(j+1) who is from a sequence of subclasses of C i a(j+1) which can afford to lose one applicant.
We define a deficiency number ∆(C i j ) for each class C i j ∈ C(i). Intuitively, the deficiency number indicates how many more applicants are necessary for class C i j to meet the lower bound of all its subclasses. This intuition translates into the following invariant:
Invariant A: ∆(C^i_j) ≥ Σ_{C^i_k ∈ c(C^i_j)} ∆(C^i_k), ∀C^i_j ∈ C(i) with c(C^i_j) ≠ ∅, ∀i ∈ I.
In the beginning, ∆(C i j ) is set to q − (C i j ) and we will explain how ∆(C i j ) is updated shortly. Its main purpose is to make sure that after adding some applicants into C i j , there is still enough "space" for other applicants to be added into C i j so that we can satisfy the lower bound of all subclasses of C i j . In particular, we maintain
Invariant B: q − (C i j ) ≤ |µ(i) ∩ C i j | + ∆(C i j ) ≤ q + (C i j ), ∀C i j ∈ C(i), ∀i ∈ I.
We now explain how ∆(C^i_j) is updated. Under normal circumstances, we decrease ∆(C^i_j) by 1 once we add a new applicant into C^i_j. However, if Invariant A is already "tight", i.e., ∆(C^i_j) = Σ_{C^i_k ∈ c(C^i_j)} ∆(C^i_k), then we add the new applicant into C^i_j without decreasing ∆(C^i_j). The same situation may repeat until the point that |µ(i) ∩ C^i_j| + ∆(C^i_j) = q^+(C^i_j) and adding another new applicant into C^i_j is about to violate Invariant B. In this case, something has to be done to ensure that Invariant B holds: some applicant in C^i_j has to be rejected, and the question is whom? Let us call a class a surplus class if |µ(i) ∩ C^i_j| + ∆(C^i_j) > q^−(C^i_j) and we define an affluent set for each class C^i_j as follows:
$(C^i_j, µ(i)) = {a | a ∈ µ(i) ∩ C^i_j; for each C^i_{j′} ∈ a(C(i)) with C^i_{j′} ⊂ C^i_j, |µ(i) ∩ C^i_{j′}| + ∆(C^i_{j′}) > q^−(C^i_{j′})}.
In words, the affluent set $(C i j , µ(i)) is composed of the set of applicants currently assigned to institute i, part of C i j , and each of whom belonging to a sequence of surplus subclasses of C i j . In our algorithm, to prevent Invariant B from being violated in a non-leaf class C i j , institute i rejects the lowest ranking applicant a in the affluent set $(C i j , µ(i)). The pseudo-code of the algorithm is presented in Figure 1.
Initialization
0: ∀i ∈ I, ∀C^i_j ∈ C(i), ∆(C^i_j) := q^−(C^i_j);
Algorithm
1: While there exists an applicant a unassigned and he has not been rejected by all institutes on his list
2:   Applicant a proposes to the highest ranking institute i to whom he has not proposed so far;
3:   Assume that a(C(i)) = (C^i_{a1}, C^i_{a2}, · · ·, C^i_{az}(= C^i_♯));
4:   µ(i) := µ(i) ∪ {a}   // Institute i accepts applicant a provisionally;
5:   For t = 2 To z   // applicant a can be added into C^i_{a1} directly;
6:     If ∆(C^i_{at}) > Σ_{C^i_k ∈ c(C^i_{at})} ∆(C^i_k) Then ∆(C^i_{at}) := ∆(C^i_{at}) − 1;
7:     If #(C^i_{at}) + ∆(C^i_{at}) > q^+(C^i_{at}) Then
8:       Let $(C^i_{at}, µ(i)) = {a | a ∈ µ(i) ∩ C^i_{at}; for each C^i_{j′} ∈ a(C(i)) with C^i_{j′} ⊂ C^i_{at}, |µ(i) ∩ C^i_{j′}| + ∆(C^i_{j′}) > q^−(C^i_{j′})};
9:       Let the lowest ranking applicant in $(C^i_{at}, µ(i)) be a†;
10:      µ(i) := µ(i)\{a†}   // Institute i rejects applicant a†;
11:      GOTO 1;
12: If there exists an institute i with ∆(C^i_♯) > 0 Then Report "There is no stable matching";
13: Else Return the outcome µ, which is a stable matching;
Fig. 1. The pseudo code of the algorithm. It outputs the applicant-optimal-institute-pessimal matching µ if it exists; otherwise, it reports that there is no stable matching.
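The pseudo code above can be transcribed almost line by line. The following Python sketch uses the class_tree representation from the pre-processing sketch above; it is an illustrative transcription under assumed data structures, not the authors' implementation.

from collections import deque

def lcsm(applicant_prefs, rank, class_tree, path, root_id="#"):
    """applicant_prefs: {a: [institutes in decreasing order of a's preference]}
    rank:       {i: {a: position of a on L_i, smaller = more preferred}}
    class_tree: {i: {cid: {"members": set, "children": [child ids], "q-": int, "q+": int}}},
                with singleton leaf classes and the pseudo root class root_id already added
    path:       {i: {a: [leaf class id, ..., root_id]}}   # the classes of a(C(i)), leaf to root
    Returns (assignment dict applicant -> institute or None, flag from Lines 12-13)."""
    delta = {i: {c: cls["q-"] for c, cls in class_tree[i].items()} for i in class_tree}
    mu = {i: set() for i in class_tree}
    assigned = {a: None for a in applicant_prefs}
    nxt = {a: 0 for a in applicant_prefs}
    free = deque(applicant_prefs)

    def count(i, cid):                                   # |mu(i) ∩ C|
        return len(mu[i] & class_tree[i][cid]["members"])

    while free:
        a = free.popleft()
        if assigned[a] is not None or nxt[a] >= len(applicant_prefs[a]):
            continue
        i = applicant_prefs[a][nxt[a]]; nxt[a] += 1
        mu[i].add(a); assigned[a] = i                    # Line 4: provisional acceptance
        for cid in path[i][a][1:]:                       # Lines 5-11: climb from the leaf's parent to the root
            cls = class_tree[i][cid]
            if delta[i][cid] > sum(delta[i][k] for k in cls["children"]):
                delta[i][cid] -= 1                       # Line 6
            if count(i, cid) + delta[i][cid] > cls["q+"]:          # Line 7: Invariant B about to break
                affluent = [b for b in mu[i] & cls["members"]
                            if all(count(i, c) + delta[i][c] > class_tree[i][c]["q-"]
                                   for c in path[i][b]
                                   if class_tree[i][c]["members"] < cls["members"])]
                a_dagger = max(affluent, key=lambda b: rank[i][b])  # lowest-ranking; non-empty by Lemma 6
                mu[i].remove(a_dagger); assigned[a_dagger] = None   # Line 10: reject a_dagger
                free.append(a_dagger)
                break                                    # Line 11: GOTO 1
    ok = all(delta[i][root_id] == 0 for i in class_tree)            # Lines 12-13
    return assigned, ok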
Correctness of the Algorithm
In our discussion, C^i_{at} is a class in a(C(i)), where t is the index based on the size of the class C^i_{at} in a(C(i)). Assume that during the execution of the algorithm, applicant a proposes to institute i and when the index t of the For loop of Line 5 becomes l and results in a† being rejected, we say applicant a is stopped at class C^i_{al}, and class C^i_{al} causes applicant a† to be rejected. The first lemma describes some basic behavior of our algorithm. Lemma 6. (i) Immediately before the end of the while loop, Invariants A and B hold. (ii) Let applicant a be the new proposer and assume he is stopped at class C^i_{al}. Then (iia) During the time interval between his new proposal and his being stopped at C^i_{al}, ∆(C^i_{at}) remains unchanged, for all 1 ≤ t ≤ l; moreover, given any class C^i_{at}, 2 ≤ t ≤ l, ∆(C^i_{at}) = Σ_{C^i_k ∈ c(C^i_{at})} ∆(C^i_k). (iib) When a is stopped at a non-leaf class C^i_{al}, $(C^i_{al}, µ(i)) ≠ ∅; in particular, any class C^i_{at}, 1 ≤ t ≤ l − 1, is temporarily a surplus class.
(iii) Immediately before the end of the while loop, if class C^i_j is a non-leaf surplus class, then ∆(C^i_j) = Σ_{C^i_k ∈ c(C^i_j)} ∆(C^i_k).
(iv) Suppose that applicant a is the new proposer and C i al ∈ a(C(i)) causes applicant a † to be rejected and a † (C(i)) = (C i a † 1 , C i a † 2 , · · · , C i a † l † (= C i al ), · · · ). Then immediately before the end of the while loop,
∆(C^i_{a†t′}) = Σ_{C^i_k ∈ c(C^i_{a†t′})} ∆(C^i_k), for all 2 ≤ t′ ≤ l†; moreover, |µ(i) ∩ C^i_{a†l†}| + ∆(C^i_{a†l†}) = q^+(C^i_{a†l†}).
Proof. (i) can be proved by induction on the number of proposals institute i gets. For (iia), since Invariant A is maintained, if ∆(C^i_{at}) is decreased for some class C^i_{at}, 1 ≤ t ≤ l, the algorithm will ensure that applicant a would not be stopped in any class, leading to a contradiction. Now by (iia), the set of classes {C^i_{at}}^{l−1}_{t=1} are (temporarily) surplus classes when applicant a is stopped at C^i_{al}, so $(C^i_{al}, µ(i)) ≠ ∅, establishing (iib). Note that this also guarantees that the proposed algorithm is never "stuck." (iii) can be proved inductively on the number of proposals that institute i gets. Assuming a is the new proposer, there are two cases: (1) Suppose that applicant a is not stopped in any class. Then a class C^i_{at} ∈ a(C(i)) can become surplus only if the stated condition holds; (2) Suppose that applicant a is stopped in some class, which causes a† to be rejected. Let the smallest class containing both a and a† be C^i_{al′}. Applying (iia) and observing the algorithm, it can be verified that only a class C^i_{at} ⊂ C^i_{al′} can become a surplus class and for such a class, the stated condition holds. Finally, for the first part of (iv), let C^i_{al′} denote the smallest class containing both a and a†. Given
a class C i a † t ′ , if C i al ′ ⊆ C i a † t ′ ⊆ C i al , (iia) gives the proof. If C i a † t ′ ⊂ C i al ′ ,
observe that the former must have been a surplus class right before applicant a made the new proposal. Moreover, before applicant a proposed, (iii) implies that for a non-leaf class C i a † t ′ ⊂ C i al ′ , the stated condition regarding the deficiency numbers is true. The last statement of (iv) is by the algorithm and Invariant B.
⊓ ⊔
Lemma 7. Assume that a † (C(i)) = (C i a † 1 , C i a † 2 , · · · , C i a † l † , · · · )
. During the execution of the algorithm, suppose that class C i a † l † causes applicant a † to be rejected. In the subsequent execution of the algorithm, assuming that µ(i) is the assignment of institute i at the end of the while loop, then
there exists l‡, where l‡ ≥ l†, such that |µ(i) ∩ C^i_{a†l‡}| + ∆(C^i_{a†l‡}) = q^+(C^i_{a†l‡}); furthermore, for all 2 ≤ t ≤ l‡, all applicants in $(C^i_{a†t}, µ(i)) rank higher than a†. Moreover, for all 2 ≤ t ≤ l‡, ∆(C^i_{a†t}) = Σ_{C^i_k ∈ c(C^i_{a†t})} ∆(C^i_k).
Proof. We prove based on the induction on the number of proposals institute i receives after a † is rejected. The base case is when a † is just rejected. Let l ‡ = l † . Then it is obvious that all applicants in the affluent sets $(C i a † t , µ(i)), 2 ≤ t ≤ l ‡ , rank higher than a † and the rest of the lemma holds by Lemma 6(iv).
For the induction step, let a be the new proposer. There are four cases. Except the second case, we let l ‡ remain unchanged after a's proposal.
-Suppose that a ∈ C i a † l ‡ and he does not cause anyone in C i a † l ‡ to be rejected. Then the proof is trivial.
-Suppose that a ∈ C i a † l ‡ and he is stopped in class C i al , which causes an applicant a * ∈ C i a † l ‡ to be rejected. a * must be part of the affluent set $(C i a † l ‡ , µ(i)) before a proposed. By induction hypothesis, a * ≻ i a † . Moreover, since a * is chosen to be rejected, all the applicants in the (new) affluent sets $(C i a † t , µ(i)), for each class C i a † t , where C i a † l ‡ ⊂ C i a † t ⊆ C i al , rank higher than a * , hence, also higher than a † . Now let C i al be the new C i a † l ‡ and the rest of the lemma follows from Lemma 6(iv).
-Suppose that a ∈ C i a † l ‡ and he is not stopped in C i a † l ‡ or any of its subclasses. We argue that a must be accepted without causing anyone to be rejected; moreover, the applicants in all affluent sets $(C i a † t , µ(i)), for all 1 ≤ t ≤ l ‡ remain unchanged. Let the smallest class in a † (C(i)) containing a be C i a †l . Note that before a proposed, the induction hypothesis states that |µ
(i) ∩ C i a † l ‡ | + ∆(C i a † l ‡ ) = q + (C i a † l ‡ ).
As applicant a is not stopped at C i a † l ‡ , the set of values ∆(C i a † t ),l ≤ t ≤ l ‡ , must have decreased during his proposal and this implies that he will not be stopped in any class. Now let a(C(i)) = (C i a1 , · · · , C i al , C i a(l+1) (= C i a †l ), · · · ). Since ∆(C i a †l ) = C i k ∈c(C i a †l ) ∆(C i k ) before applicant a proposed by the induction hypothesis, for ∆(C i a †l ) to decrease, ∆(C i al ) must have decreased as well. Choose the smallest class C i al * ⊂ C i a †l whose value ∆(C i al * ) has decreased during a's proposal. We claim that C i al * must have been a non-surplus class before and after applicant a's proposal. If the claim is true, then all the affluent sets $(C i a † t , µ(i)), for all 1 ≤ t ≤ l ‡ , remain unchanged after applicant a's proposal.
It is obvious that C i al * = C i a1 . So assume that C i al * is a non-leaf class. Suppose for a contradiction that C i al * was a surplus class before a proposed. Lemma 6(iii) implies that
∆(C^i_{a†l*}) = Σ_{C^i_k ∈ c(C^i_{a†l*})} ∆(C^i_k) before a proposed.
Then for ∆(C i a † l * ) to decrease during a's proposal, ∆(C i a † (l * −1) ) must have decreased as well. But then this contradicts our choice of C i a † l * . So we establish that C i al * was not surplus and remains so after a's proposal. -Suppose that a ∈ C i a † l ‡ and when he reaches a subclass of C i a † l ‡ or the class itself, the latter causes some applicant a * to be rejected. To avoid trivialities, assume a = a * . Let the smallest class in a † (C(i)) containing a be C i a †l and the smallest class in a † (C(i)) containing a * be C i a † l * . Below we only argue that the case that C i a †l ⊆ C i a † l * . The other case that C i a † l * ⊂ C i a †l follows essentially the same argument. After a's proposal, observe that only the affluent sets $(C i a † t , µ(i)),l ≤ t < l * , can have new members (who are from the child class of C i a †l containing a). Without loss of generality, let G be the set of new members added into one of the any above sets. To complete the proof, we need to show that either G = ∅ or all members in G rank higher than a † . If before applicant a proposed, a * belonged to a sequence of surplus classes C i a * t ⊂ C i a † l * , he was also part of the affluent set $(C i a † l * , µ(i)) or part of µ(i)∩C i a † 1 before a proposed. By induction hypothesis, a * ≻ i a † . Observing Lemma 6(iib), all applicants in G must rank higher than a * , hence also than a † . On the other hand, if a * belongs to some class C i a * t ⊂ C i a † l * which was not surplus before a proposed, then C i a * l = C i a * l * and C i a * t must also contain a and remain a non-surplus class after a's proposal. In this case G = ∅.
⊓ ⊔
The following lemma is an abstraction of several counting arguments that we will use afterwards.
Lemma 8. Let each class C^i_j be associated with two numbers α^i_j and β^i_j with q^−(C^i_j) ≤ α^i_j, β^i_j ≤ q^+(C^i_j). Given any non-leaf class C^i_j, α^i_j = Σ_{C^i_k ∈ c(C^i_j)} α^i_k and β^i_j ≥ Σ_{C^i_k ∈ c(C^i_j)} β^i_k; moreover, if β^i_j = Σ_{C^i_k ∈ c(C^i_j)} β^i_k, then such a non-leaf class C^i_j is said to be tight in β. If β^i_j > q^−(C^i_j), then C^i_j has to be tight in β.
(i) Given a non-leaf class
C i a † l † with α i a † l † < β i a † l † , we can find a sequence of classes C i a † l † ⊃ · · · ⊃ C i a † 1 , where α i a † t < β i a † t , for 1 ≤ t ≤ l † . (ii) Given a non-leaf class C i x with α i x ≤ β i x , suppose that there exists a leaf class C i a φ 1 ⊂ C i x such that α i a φ 1 > β i a φ 1 . Moreover, all classes C i a φ t are tight in β, where C i a φ 1 ⊆ C i a φ t ⊆ C i x , then we can find a class C i x ′ , where C i a φ 1 ⊂ C i x ′ ⊆ C i x , α i x ′ ≤ β i x ′ ,
and two sequences of classes with the following properties:
(iia) C i a φ 1 ⊂ C i a φ 2 ⊂ · · · ⊂ C i a φ l φ ⊂ C i x ′ , where α i a φ t > β i a φ t for 1 ≤ t ≤ l φ ; (iib) C i x ′ ⊃ C i a † l † ⊃ · · · ⊃ C i a † 1 , where α i a † t < β i a † t , for 1 ≤ t ≤ l † . Proof. For (i), since q − (C i a † l † ) ≤ α i a † l † < β i a † l † , class C i a † l † is tight in β. Therefore, C i k ∈c(C i a † l † ) α i k = α i a † l † < β i a † l † = C i k ∈c(C i a † l † ) β i k . By counting, there exists a class C i a † (l † −1) ∈ c(C i a † l † ) with q − (C i a † (l † −1) ) ≤ α i a † (l † −1) < β i a † (l † −1)
. Repeating the same argument gives us the sequence of classes. For (ii), let us climb up the tree from C i a φ 1 until we meet a class C i
x ′ ⊆ C i x with α i x ′ ≤ β i x ′ .
This gives us the sequence of classes stated in (iia).
Now since the class C i x ′ is tight in β, C i k ∈c(C i x ′ ) α i k = α i x ′ ≤ β i x ′ = C i k ∈c(C i x ′ ) β i k . Moreover, as C i a φ l φ ∈ c(C i x ′ ) and α i a φ l φ > β i a φ l φ , by counting, we can find another class C i a † l † ∈ c(C i x ′ )\{C i a φ l φ } such that β i a † l † > α i a † l † ≥ q − (C i a † l † )
. Now applying (i) gives us the sequence of classes in (iib).
⊓ ⊔
We say that (i; a) is a stable pair if there exists any stable matching in which applicant is assigned to institute i. A stable pair is by-passed if institute i rejects applicant a during the execution of our algorithm.
Lemma 9. During the execution of the algorithm, if an applicant a φ is rejected by institute i, then (i; a φ ) is not a stable pair.
Proof. We prove by contradiction. Assume that (i; a φ ) is the first by-passed stable pair and there exists a stable matching µ φ in which µ φ (a φ ) = i. For each class C i j ∈ C(i), we associate two numbers
α i j := |µ φ (i) ∩ C i j | and β i j := |µ(i) ∩ C i j | + ∆(C i j ).
Here ∆(·)s are the values recorded in the algorithm right after a φ is rejected (before the end of the while loop); similarly, µ(i) is the assignment of i at that point.
It is obvious that α i a φ 1 > β i a φ 1 and the class C i x causing a φ to be rejected is not C i a φ 1 . By
Lemma 6(iv), all classes C i a φ t are tight in β, where C i a φ 1 ⊂ C i a φ t ⊆ C i x .
It can be checked all the conditions as stated in Lemma 8(ii) are satisfied. In particular,
β i x = q + (C i x ) ≥ α i x ; moreover, if β i j > q − (C i j ), C i j must be tight (by Lemma 6(iii)). So, we can find two sequences of classes {C i a φ t } l φ t=1 and {C i a † t } l † t=1 , where C i a φ l φ , C i a † l † ∈ c(C i x ′ ) and C i x ′ ⊆ C i x ,
with the following properties:
q + (C i a φ t ) ≥ |µ φ (i) ∩ C i a φ t | > |µ(i) ∩ C i a φ t | + ∆(C i a φ t ) ≥ q − (C i a φ t ), ∀t, 1 ≤ t ≤ l φ ; q − (C i a † t ) ≤ |µ φ (i) ∩ C i a † t | < |µ(i) ∩ C i a † t | + ∆(C i a † t ) ≤ q + (C i a † t ), ∀t, 1 ≤ t ≤ l † .
The second set of inequalities implies that the classes {C i a † t } l † t=1 are surplus in µ. Thus there exists an applicant a † ∈ (µ(i)\µ φ (i)) ∩ C i a † 1 . Since (i; a φ ) is the first by-passed stable pair, i ≻ a † µ φ (a † ) and since a φ is rejected instead of a † , a † ≻ i a φ . Now observe the tuple µ φ (i)| a φ a † is feasible due to the above two sets of strict inequalities. Thus we have a group (i; µ φ (i)| a φ a † ) to block µ φ , a contradiction.
⊓ ⊔ Lemma 10. At the termination of the algorithm, if there exists an institute i ∈ I such that ∆(C i ♯ ) > 0, there is no stable matching in the given instance.
Proof. Suppose, for a contradiction, that there exists an institute i with ∆(C i ♯ ) > 0 and there is a stable matching µ φ . Let µ be the assignment when the algorithm terminates. By Lemma 9, if an applicant is unmatched in µ, he cannot be assigned in µ φ either. So |µ φ | ≤ |µ|. In the following, ∆(·)s refer to values recorded in the final outcome of the algorithm. Consider two cases.
-Suppose that |µ^φ(i)| > |µ(i) ∩ C^i_♯|. Then as |µ^φ| ≤ |µ|, we can find another institute i′ ≠ i such that |µ^φ(i′)| < |µ(i′) ∩ C^{i′}_♯|. For each class C^{i′}_j ∈ C(i′), let α^{i′}_j := |µ^φ(i′) ∩ C^{i′}_j| and β^{i′}_j := |µ(i′) ∩ C^{i′}_j| + ∆(C^{i′}_j)
. It can be checked that the condition stated in Lemma 8(i) is satisfied (note that those β i ′ j fulfill the condition due to Lemma 6(iii)). Therefore, we can find a sequence of
classes {C i ′ a † t } l † t=1 , where C i ′ a † l † = C i ′ ♯ , and |µ φ (i ′ ) ∩ C i ′ a † t | < |µ(i ′ ) ∩ C i ′ a † t | + ∆(C i ′ a † t ) ≤ q + (C i ′ a † t ), ∀t, 1 ≤ t ≤ l † , where the second inequality follows from Invariant B. Then there exists an applicant a † ∈ (µ(i ′ )\µ φ (i ′ )) ∩ C i ′ a † 1 . By Lemma 9, i ′ ≻ a † µ φ (a † )
, giving us a group (i ′ ; µ φ (i ′ )|a † ) to block µ φ , a contradiction. Note the feasibility of µ φ (i ′ )|a † is due to the above set of strict inequalities.
-Suppose that |µ φ (i)| ≤ |µ(i) ∩ C i ♯ |.
We first claim that C i ♯ must be a surplus class in µ(i). If not,
then q − (C i ♯ ) = ∆(C i ♯ ) + |µ(i) ∩ C i ♯ | > |µ(i) ∩ C i ♯ |, implying that |µ φ (i)| ≥ q − (C i ♯ ) > |µ(i) ∩ C i ♯ |, a contradiction. So C i
♯ is a surplus class, and by Lemma 6(iii),
|µ^φ(i)| = Σ_{C^i_k ∈ c(C^i_♯)} |µ^φ(i) ∩ C^i_k| ≤ |µ(i) ∩ C^i_♯| < |µ(i) ∩ C^i_♯| + ∆(C^i_♯) = Σ_{C^i_k ∈ c(C^i_♯)} (|µ(i) ∩ C^i_k| + ∆(C^i_k)).
For each class C i j ∈ C(i), let α i j := |µ φ (i)∩C i j | and β i j := |µ(i)∩C i j |+∆(C i j ) and invoke Lemma 8(i). The above inequality implies that α i ♯ < β i ♯ and note that by Lemma 6(iii), the condition regarding β is satisfied. Thus we have a sequence of surplus classes
C^i_{a†l†} (= C^i_♯) ⊃ · · · ⊃ C^i_{a†1} so that q^−(C^i_{a†t}) ≤ |µ^φ(i) ∩ C^i_{a†t}| < |µ(i) ∩ C^i_{a†t}| + ∆(C^i_{a†t}) ≤ q^+(C^i_{a†t}), ∀t, 1 ≤ t ≤ l†, implying that there exists an applicant a† ∈ (µ(i)\µ^φ(i)) ∩ C^i_{a†1} and i ≻_{a†} µ^φ(a†) by virtue of Lemma 9. The tuple µ^φ(i)|a† is feasible because of the above set of strict inequalities. Now (i; µ^φ(i)|a†) blocks µ^φ, a contradiction.
⊓ ⊔ Lemma 11. Suppose that in the final outcome µ, for each institute i ∈ I, ∆(C i ♯ ) = 0. Then µ is a stable matching.
Proof. For a contradiction, assume that a group (i; g) blocks µ. Let a φ to be the highest ranking applicant in g\µ(i). Since a φ is part of the blocking group, he must have proposed to and been rejected by institute i during the execution of the algorithm, thus i ≻ a φ µ(a φ ). By Lemma 7, there
exists a class C i a φ l ‡ such that |µ(i) ∩ C i a φ l ‡ | + ∆(C i a φ l ‡ ) = |µ(i) ∩ C i a φ l ‡ | = q + (C i a φ l ‡ ). Moreover, it is obvious that |g ∩ C i a φ 1 | > |µ(i) ∩ C i a φ 1 |.
We now make use of Lemma 8(ii) by letting α i j := |g ∩ C i j | and β i j := |µ(i) ∩ C i j | for each class C i j ∈ C(i). Note that all classes are tight in β, C i a φ 1 ⊂ C i a φ l ‡ , and
|µ(i) ∩ C i a φ l ‡ | = q + (C i a φ l ‡ ) ≥ |g ∩ C i a φ l ‡ |,
satisfying all the necessary conditions. Thus, we can discover a sequence of classes
{C i a † t } l † t=1 stated in Lemma 8(iib), where C i a † l † ∈ c(C i a φ l ) and C i a φ 1 ⊂ C i a φ l ⊆ C i a φ l ‡ , such that q − (C i a † t ) ≤ |g ∩ C i a † t | < |µ(i) ∩ C i a † t | ≤ q + (C i a † t ), ∀j, 1 ≤ t ≤ l † , and there exists an applicant a † ∈ (µ(i)\g) ∩ C i a † 1 .
The above set of strict inequalities mean that all classes C i a † t , 1 ≤ t ≤ l † , are surplus classes in µ. Then a † forms part of the affluent set $(C i a φ l , µ(i)). By Lemma 7, they all rank higher than a φ . This contradicts our assumption that a φ is the highestranking applicant in g\µ(i).
⊓ ⊔ Lemma 12. Suppose that in the final outcome µ, for each institute i ∈ I, ∆(C i ♯ ) = 0. Then µ is an institute-pessimal stable matching.
Proof. Suppose, for a contradiction, that there exists a stable matching µ φ such that there exists an institute i which is lexicographically better off in µ than in µ φ . Let a † be the highest ranking
applicant in µ(i)\µ^φ(i). By Lemma 9, i ≻_{a†} µ^φ(a†). If |µ^φ(i) ∩ C^i_{a†t}| < |µ(i) ∩ C^i_{a†t}| ≤ q^+(C^i_{a†t}) for all classes C^i_{a†t} ∈ a†(C(i)), then (i; µ^φ(i)|a†) blocks µ^φ, a contradiction. So choose the smallest class C^i_x ∈ a†(C(i)) such that |µ^φ(i) ∩ C^i_x| ≥ |µ(i) ∩ C^i_x|. It is clear that C^i_x ⊃ C^i_{a†1}. Now we apply Lemma 8(ii) by letting α^i_j := |µ(i) ∩ C^i_j| and β^i_j := |µ^φ(i) ∩ C^i_j| for each class C^i_j ∈ C(i).
It can be checked all conditions stated in Lemma 8(ii) are satisfied. So there exists a class
C i x ′ such that C i a † 1 ⊂ C i x ′ ⊆ C i x and we can find two sequences of classes {C i a φ t } l φ t=1 and {C i a † t } l † t=1 , where C i a φ l φ , C i a † l † ∈ c(C i x ′ )
, with the following properties:
q + (C i a † t ) ≥ |µ(i) ∩ C i a † t | > |µ φ (i) ∩ C i a † t | ≥ q − (C i a † t ), ∀t, 1 ≤ t ≤ l † ; q − (C i a φ t ) ≤ |µ(i) ∩ C i a φ t | < |µ φ (i) ∩ C i a φ t | ≤ q + (C i a φ t ), ∀t, 1 ≤ t ≤ l φ .
The second set of inequalities implies that we can find an applicant a φ ∈ (µ φ (i)\µ(i)) ∩ C i a φ 1 . Recall that we choose a † to be the highest ranking applicant in µ(i)\µ φ (i), so a † ≻ i a φ . Now we have a group (i; µ φ (i)| a φ a † ) to block µ φ to get a contradiction. The feasibility of µ φ (i)| a φ a † is due to the above two sets of strict inequalities.
⊓ ⊔ Based on Lemmas 9, 10, 11, and 12, we can draw the conclusion in this section.
Theorem 13. In O(m 2 ) time,
where m is the total size of all preferences, the proposed algorithm discovers the applicant-optimal-institute-pessimal stable matching if stable matchings exist in the given LCSM instance; otherwise, it correctly reports that there is no stable matching. Moreover, if there is no lower bound on the classes, there always exists a stable matching.
To see the complexity, first note that there can be only O(m) proposals. The critical thing in the implementation of our algorithm is to find out the lowest ranking applicant in each affluent set efficiently. This can be done by remembering the lowest ranking applicant in each class and this information can be updated in each proposal in O(m) time, since the number of classes of each institute is O(m), given that the classes form a laminar family.
Structures of Laminar Classified Stable Matching
Recall that we define the "absorption" operation as follows. Given a family of classes B, ℜ(B) returns the set of classes which are not entirely contained in other classes in B. Note that in LCSM, ℜ(B) will be composed of a pairwise disjoint set of classes.
We review the well-known rural hospitals theorem [8,15].
Theorem 14. (Rural Hospitals Theorem) In the hospitals/residents problem, the following holds.
(i) A hospital gets the same number of residents in all stable matchings, and as a result, all stable matchings are of the same cardinality. (ii) A resident who is assigned in one stable matching gets assigned in all other stable matchings;
conversely, an unassigned resident in a stable matching remains unassigned in all other stable matchings. (iii) An under-subscribed hospital gets the same set of residents in all other stable matchings.
It turns out that rural hospitals theorem can be generalized in LCSM. On the other hand, if some institutes use intersecting classes in their classifications, rural hospitals theorem fails (stable matching size may differ). See the appendix for such an example.
Theorem 15. (Generalized Rural Hospitals Theorem in LCSM) Let µ be a stable matching. Given any institute i, suppose that B is the set of bottleneck classes in µ(i) and D is the subset of classes in C(i) such that ℜ(B) ∪ D partitions L i . The following holds.
(i) An institute gets the same number of applicants in all stable matchings, and as a result, all stable matchings are of the same cardinality. (ii) An applicant who is assigned in one stable matching gets assigned in all other stable matchings;
conversely, an unassigned applicant in a stable matching remains unassigned in all other stable matchings. (iii) Every class C i k ∈ ℜ(B) ∪ D has the same number of applicants in all stable matchings. (iv) In a class C i k ⊆ C ∈ D, or in a class C i k which contains only classes in D, the same set of applicant in class C i k will be assigned to institute i in all stable matchings.
(v) A class C i k can have different sets of applicants in different stable matchings only if C i k ⊆ C ∈ ℜ(B) or C i k ⊇ C ∈ ℜ(B).
Proof. We choose µ † to be the applicant-optimal stable matching.
Claim A: Suppose that a ∈ µ † (i)\µ(i). Then there exists a class C i al ∈ a(C(i)) such that (i) |µ(i) ∩ C i al | = q + (C i al ), and (ii) a ∈ C i al ⊆ C ∈ ℜ(B). Proof of Claim A. If for all classes C i at ∈ a(C(i)), |µ(i) ∩ C i at | < q + (C i at )
, then as µ† is applicant-optimal, i ≻_a µ(a), so (i; µ(i)|a) blocks µ, a contradiction. This establishes (i). (ii) follows easily. ⊓⊔ Let B̄ ⊆ B be the subset of these bottleneck classes containing at least one applicant in µ†(i)\µ(i).
By Claim A(ii), ℜ(B̄) ⊆ ℜ(B). This implies that for all classes C^i_k ∈ (ℜ(B)\ℜ(B̄)) ∪ D, |µ(i) ∩ C^i_k| ≥ |µ†(i) ∩ C^i_k|. Combining this fact with Claim A(ii), we have
|µ(i)| = Σ_{C^i_k ∈ (ℜ(B)\ℜ(B̄))∪D} |µ(i) ∩ C^i_k| + Σ_{C^i_k ∈ ℜ(B̄)} |µ(i) ∩ C^i_k| ≥ Σ_{C^i_k ∈ (ℜ(B)\ℜ(B̄))∪D} |µ†(i) ∩ C^i_k| + Σ_{C^i_k ∈ ℜ(B̄)} q^+(C^i_k)    (*)
≥ Σ_{C^i_k ∈ (ℜ(B)\ℜ(B̄))∪D} |µ†(i) ∩ C^i_k| + Σ_{C^i_k ∈ ℜ(B̄)} |µ†(i) ∩ C^i_k| = |µ†(i)|.
Thus, |µ| ≥ |µ † | and it cannot happen that |µ| > |µ † |, otherwise, there exists an applicant who is assigned in µ but not in µ † . This contradicts the assumption that the latter is applicant-optimal. This completes the proof of (i) and (ii) of the theorem.
Since |µ| = |µ † |, Inequality (*) holds with equality. We make two observations here.
Observation 1: For each class C i k ∈ ℜ(B), it is also a bottleneck in µ † (i). Observation 2: an applicant a ∈ µ † (i)\µ(i) must belong to a bottleneck class in µ † (i).
Let B† be the set of bottleneck classes in µ†(i) and choose D† so that ℜ(B†) ∪ D† partitions L_i. By Observation 2, each applicant in µ†(i) ∩ C^i_k, where C^i_k ∈ D†, must be part of µ(i). So for each class C^i_k ∈ D†, |µ(i) ∩ C^i_k| ≥ |µ†(i) ∩ C^i_k|. We claim that it cannot happen that |µ(i) ∩ C^i_k| > |µ†(i) ∩ C^i_k|. Suppose it did; since |µ(i)| = |µ†(i)| and ℜ(B†) ∪ D† partitions L_i, one of the following two cases would arise.
-There exists another class C^i_{k′} ∈ D† so that |µ(i) ∩ C^i_{k′}| < |µ†(i) ∩ C^i_{k′}|. Then we have a contradiction to Observation 2.
-There exists another class
C i k ′ ∈ ℜ(B † ) so that |µ(i)∩C i k ′ | < |µ † (i)∩C i k ′ |.
For each class C i j ∈ C(i), let α i j := |µ(i) ∩ C i j | and β i j := |µ † (i) ∩ C i j |. Then we can invoke Lemma 8(i) and find an applicant a φ ∈ µ † (i)\µ(i) so that for each class C i
a φ t ∈ a φ (C(i)), C i a φ t ⊆ C i k ′ , |µ(i) ∩ C i a φ t | < |µ † (i) ∩ C i a φ t | ≤ q + (C i a φ t ).
Then by Claim A(ii) and Observation 1, there must exist another class C i k ′′ ∈ ℜ(B) containing a φ and C i k ′′ ⊃ C i k ′ . By Observation 1, C i k ′′ is also a bottleneck class in µ † (i). This contradicts the assumption that C i k ′ ∈ ℜ(B † ). So we have that for each class
C i k ∈ D † , |µ(i) ∩ C i k | = |µ † (i) ∩ C i k |.
For each class C i k ∈ B † , we can use the same argument to show that |µ(i) ∩ C i k | = |µ † (i) ∩ C i k |. This gives us (iii) and (iv). (v) is a consequence of (iv).
⊓ ⊔
NP-completeness of P-Classified Stable Matching
Theorem 16. Suppose that the set of posets P = {P 1 , P 2 , · · · , P k } contains a poset which is not a downward forest. Then it is NP-complete to decide the existence of a stable matching in P-classified stable matching. This NP-completeness holds even if there is no lower bound on the classes.
Our reduction is from one-in-three sat. It is involved and technical, so we just highlight the idea here. As P must contain a poset that has a "V " in it, some institutes use intersecting classes. In this case, even if there is no lower bound on the classes, it is possible that the given instance disallows any stable matching. We make use of this fact to design a special gadget. The main technical difficulty of our reduction lies in that in the most strict case, we can use at most two classes in each institute's classification.
Polyhedral Approach
In this section, we take a polyhedral approach to studying LCSM. We make the simplifying assumption that there is no lower bound. In this scenario, we can use a simpler definition to define a stable matching.
Lemma 17. In LCSM, if there is no lower bound, i.e., given any class C i j , q − (C i j ) = 0, then a stable matching as defined in Definition 2 can be equivalently defined as follows. A feasible matching µ is stable if and only if there is no blocking pair. A pair (i, a) is blocking, given that µ(i) = (a i1 , a i2 , · · · , a ik ),
k ≤ Q(i), if
– i ≻_a µ(a);
– for any class C^i_{at} ∈ a(C(i)), |L^i_{≻a} ∩ µ(i) ∩ C^i_{at}| < q^+(C^i_{at}).
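Under this no-lower-bound definition, testing whether a given pair blocks a feasible matching is straightforward. A small sketch with helper names of our own (mu_of, mu_sets, rank, path, members, q_plus are assumptions, and institute i is assumed acceptable to applicant a):

def is_blocking_pair(i, a, mu_of, mu_sets, rank, applicant_prefs, path, q_plus, members):
    """mu_of[a]: institute of a (or None); mu_sets[i]: set of applicants at i;
    rank[i][a]: a's position on L_i (smaller = better); path[i][a]: the classes of a(C(i));
    q_plus[i][c], members[i][c]: upper bound and member set of class c of institute i."""
    current = mu_of.get(a)
    if current is not None and applicant_prefs[a].index(current) <= applicant_prefs[a].index(i):
        return False                  # a does not strictly prefer i to his current institute
    for c in path[i][a]:
        higher = {b for b in mu_sets[i] & members[i][c] if rank[i][b] < rank[i][a]}
        if len(higher) >= q_plus[i][c]:
            return False              # some class containing a is already filled with better applicants
    return True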
The definition of blocking pairs suggests a generalization of the comb used by Baïou and Balinski [3].
Definition 18. Let Γ = I × A denote the set of acceptable institute-applicant pairs. The shaft S(A_i), based on a feasible tuple A_i of institute i, is defined as
S(A_i) = {(i, a′) ∈ Γ : ∀C^i_j ∈ a′(C(i)), |L^i_{≻a′} ∩ A_i ∩ C^i_j| < q^+(C^i_j)}.
The tooth T(i, a) is defined for every (i, a) ∈ Γ as
T(i, a) = {(i′, a) ∈ Γ : i′ ⪰_a i}.
In words, (i, a ′ ) forms part of the shaft S(A i ), only if the collection of a ′ and all applicants in A i ranking strictly higher than a ′ does not violate the quota of any class in a ′ (C(i)). We often refer to an applicant a ∈ A i as a tooth-applicant.
We associate a |Γ|-vector x^µ (or simply x when the context is clear) with a matching µ: x^µ_{ia} = 1 if µ(a) = i, otherwise x^µ_{ia} = 0. Suppose that Γ̄ ⊆ Γ. Then x(Γ̄) = Σ_{(i,a) ∈ Γ̄} x_{ia}. We define a comb K(i, S(A_i)) as the union of the teeth {T(i, a_i)}_{a_i ∈ A_i} and the shaft S(A_i).
Lemma 19. Every stable matching solution x satisfies the comb inequality for any comb K(i, S(A i )):
x(K(i, S(A_i))) ≡ x(S(A_i)) + Σ_{a_j ∈ A_i} x(T(i, a_j)\{(i, a_j)}) ≥ |A_i|.
It takes a somewhat involved counting argument to prove this lemma. Here is the intuition about why the comb inequality captures the stability condition of a matching. The value of the tooth x(T(i, a)) reflects the "happiness" of the applicant a ∈ A_i. If x(T(i, a)) = 0, applicant a has reason to shift to institute i; on the other hand, the values collected from the shaft x(S(A_i)) indicate the "happiness" of institute i: whether it is getting enough high ranking applicants (of the "right" class). An overall small comb value x(K(i, S(A_i))) thus expresses the likelihood of a blocking group including i and some of the applicants in A_i. Now let K_i denote the set of all combs of institute i. We write down the linear program:
Σ_{i: (i,a) ∈ Γ} x_{ia} ≤ 1, ∀a ∈ A    (1)
Σ_{a: (i,a) ∈ Γ, a ∈ C^i_j} x_{ia} ≤ q^+(C^i_j), ∀i ∈ I, ∀C^i_j ∈ C(i)    (2)
x(K(i, S(A_i))) = Σ_{(i,a) ∈ K(i, S(A_i))} x_{ia} ≥ |A_i|, ∀K(i, S(A_i)) ∈ K_i, ∀i ∈ I    (3)
x_{ia} ≥ 0, ∀(i, a) ∈ Γ    (4)
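To make Definition 18 and Constraint (3) concrete, the sketch below evaluates the comb value x(K(i, S(A_i))) for one given feasible tuple, using the x(S(A_i)) + Σ x(T(i, a)\{(i, a)}) form of Lemma 19; the data-structure names (x, L_i, rank, path, members, q_plus, applicant_prefs) are our own assumptions.

def comb_value(i, A_i, x, L_i, rank, path, members, q_plus, applicant_prefs):
    """x: dict {(institute, applicant): value}; L_i: applicants on i's list in decreasing preference;
    rank[a]: a's position on L_i; path[a]: the classes of a(C(i));
    members[c], q_plus[c]: member set and upper bound of class c; applicant_prefs[a]: a's list."""
    chosen = set(A_i)
    shaft = 0.0
    for a2 in L_i:                                       # (i, a2) is in S(A_i) iff no class is already full
        in_shaft = all(
            len({b for b in chosen & members[c] if rank[b] < rank[a2]}) < q_plus[c]
            for c in path[a2])
        if in_shaft:
            shaft += x.get((i, a2), 0.0)
    teeth = 0.0
    for a in A_i:                                        # x(T(i,a)\{(i,a)}): institutes a strictly prefers to i
        for i2 in applicant_prefs[a]:
            if i2 == i:
                break
            teeth += x.get((i2, a), 0.0)
    return shaft + teeth

# Constraint (3) for this comb then reads: comb_value(...) >= len(A_i).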
Suppose there is no classification, i.e., Hospitals/Residents problem. Then this LP reduces to the one formulated by Baïou and Balinski [3]. However, it turns out that this polytope is not integral. The example in Figure 2 demonstrates the non-integrality of the polytope. In particular, observe that since µ is applicant-optimal, in all other stable matchings, applicant a 3 can only be matched to i 5 . However, the value x i 1 a 3 = 0.2 > 0 indicates that x is outside of the convex hull of integral stable matchings.
Here we make a critical observation. Suppose that in a certain matching µ φ , applicant a 3 is assigned to i 1 . Then a 2 cannot be assigned to i 1 due to the bound q + (C 1 1 ) (see Constraint (2)). If µ φ is to be stable, then a 2 must be assigned to some institute ranking higher than i 1 on his list (in this example there is none), otherwise, (i, µ φ (i 1 )| a 3 a 2 ) is bound to be a blocking group in µ φ . Thus, the required constraint to avoid this particular counter-example can be written as
x(T (i 1 , a 2 )\{i 1 , a 2 }) ≥ x i 1 a 3 .
We now formalize the above observation. Given any class C^i_j ∈ C(i), we define a class-tuple t^i_j = (a_{i1}, a_{i2}, · · · , a_{i q^+(C^i_j)}). Such a tuple fulfills the following two conditions:
1. t^i_j ⊆ C^i_j;
2. if C^i_j is a non-leaf class, then given any subclass C^i_k of C^i_j, |t^i_j ∩ C^i_k| ≤ q^+(C^i_k).
Fig. 2. An example showing that the polytope defined by Constraints (1)–(4) is not integral. Institute preferences, classifications and class bounds: i_1: a_1 a_6 a_7 a_2 a_3, C^1_1 = {a_2, a_3}, Q(i_1) = 2, q^+(C^1_1) = 1; i_2: a_4 a_7, Q(i_2) = 1; i_3: a_2 a_4, Q(i_3) = 1; i_4: a_5 a_6, Q(i_4) = 1; i_5: a_3 a_5 a_7 a_1, C^5_1 = {a_3, a_5}, Q(i_5) = 2, q^+(C^5_1) = 1. Since µ is applicant-optimal, in all other stable matchings applicant a_3 can only be matched to i_5; however, the value x_{i_1 a_3} = 0.2 > 0 indicates that x is outside of the convex hull of integral stable matchings.
Let L^i_{≺t^i_j} denote the set of applicants ranking lower than all applicants in t^i_j, and L^i_{⪰t^i_j} the set of applicants ranking at least as high as the lowest-ranking applicant in t^i_j.
Lemma 20. Every stable matching solution x satisfies the following inequality for any class-tuple t^i_j:
Σ_{a_{ij}∈t^i_j} x(T(i, a_{ij})\{(i, a_{ij})}) ≥ Σ_{a∈C^i_j ∩ L^i_{≺t^i_j}} x_{ia}.
As before, the proof takes a somewhat involved counting argument, but its basic idea is already portrayed in the above example. Now let T^i_j denote the set of class-tuples of class C^i_j ∈ C(i), and recall that L^i_{≺t^i_j} denotes the set of applicants ranking lower than all applicants in t^i_j. We add the following set of constraints.
Σ_{a_{ij}∈t^i_j} x(T(i, a_{ij})\{(i, a_{ij})}) ≥ Σ_{a∈C^i_j ∩ L^i_{≺t^i_j}} x_{ia}, ∀t^i_j ∈ T^i_j, ∀T^i_j (5)
Let P f sm denote the set of all solutions satisfying (1)-(5) and P sm the convex hull of all (integral) stable matchings. In this section, our main result is P f sm = P sm . We say (i, a) are matched under x if x ia > 0.
Definition 21. Let x ∈ P_fsm and let Ω_i(x) be the set of applicants that are matched to institute i under x. Let Ω_i(x) be composed of a_{i1}, a_{i2}, · · ·, ordered based on the decreasing preference of institute i.
1. Define H_i(x) as a tuple composed of applicants chosen by the following procedure: add a_{ij} greedily unless adding the next applicant into H_i(x) would cause H_i(x) to violate the quota of some class. Equivalently, a_{il} ∉ H_i(x) only if there exists a class C^i_j ∈ a_{il}(C(i)) such that |H_i(x) ∩ {a_{it}}_{t=1}^{l−1}| = q^+(C^i_j).
2. Define E_i(x) as the set of applicants a ∈ Ω_i(x) with x(T(i, a)\{(i, a)}) = 0, i.e., those for whom i is the highest-ranking institute to which they are matched under x.
Lemma 22. Let x ∈ P_fsm. Then for any institute i and any class C^i_j ∈ C(i), |E_i(x) ∩ C^i_j| ≤ q^+(C^i_j).
Proof. We proceed by induction on the height of C^i_j in the tree structure of C(i). The base case is a leaf class. If |E_i(x) ∩ C^i_j| > q^+(C^i_j), form a class-tuple by picking the first q^+(C^i_j) applicants in E_i(x) ∩ C^i_j; then Constraint (5) is violated by such a class-tuple. For the induction step, if |E_i(x) ∩ C^i_j| > q^+(C^i_j), again choose the q^+(C^i_j) highest-ranking applicants in E_i(x) ∩ C^i_j and we claim they form a class-tuple of C^i_j, the reason being that, by the induction hypothesis, |E_i(x) ∩ C^i_k| ≤ q^+(C^i_k) for any C^i_k ⊂ C^i_j. Now Constraint (5) is again violated by such a class-tuple. ⊓ ⊔
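A short sketch of the two constructions of Definition 21 follows, under the same assumed data structures as before; the characterisation of E_i(x) used here (i is the highest-ranking institute to which a member of E_i(x) is fractionally matched) is the property relied on in the proof of Lemma 23.

def _rank(pref, z):
    return pref.index(z) if z in pref else len(pref)

def greedy_H(i, x, inst_pref, classes, q_plus):
    # Scan Omega_i(x) in i's preference order and add an applicant unless doing
    # so would violate the upper bound of one of i's classes.
    H = []
    for a in inst_pref[i]:
        if x.get((i, a), 0.0) <= 0:
            continue
        if all(sum(1 for b in H if b in C) < q_plus[(i, C)]
               for C in classes[i] if a in C):
            H.append(a)
    return H

def E_set(i, x, app_pref):
    # Applicants fractionally matched to i that receive no mass from any
    # institute they strictly prefer to i.
    out = []
    for a in {b for (j, b), v in x.items() if j == i and v > 0}:
        above = sum(v for (j, b), v in x.items() if b == a and
                    _rank(app_pref[a], j) < _rank(app_pref[a], i))
        if above <= 0:
            out.append(a)
    return out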
Lemma 23. Suppose that x ∈ P_fsm. Then:
(i) For each institute i ∈ I, we can find two sets U and V of pairwise disjoint classes so that U ∪ V partitions L_i and all applicants in Ω_i(x)\H_i(x) belong to the classes in U. Moreover,
(ia) |H_i(x)| = Σ_{C^i_k∈U} q^+(C^i_k) + Σ_{C^i_k∈V} |H_i(x) ∩ C^i_k|;
(ib) for each class C^i_k ∈ U, |H_i(x) ∩ C^i_k| = |E_i(x) ∩ C^i_k| = q^+(C^i_k); for each class C^i_k ∈ V and each applicant a ∈ C^i_k, if x_{ia} > 0, then x_{ia} = 1;
(ic) for each class C^i_k ∈ U, Σ_{a∈C^i_k} x_{ia} = q^+(C^i_k).
(ii) For every applicant a ∈ H_i(x), x(T(i, a)) = Σ_{i′∈I} x_{i′a} = 1; moreover, given any two institutes i, i′ ∈ I, H_i(x) ∩ H_{i′}(x) = ∅.
(iii) |H_i(x)| = |E_i(x)| for all institutes i ∈ I.
(iv) Σ_{a∈A} x_{ia} = |E_i(x)| for all institutes i ∈ I.
Proof. For (i), given any applicant a ∈ Ω i (x)\H i (x), by Definition 21, there exists some class C i j ∈ a(C(i)) for which |H i (x) ∩ C i j | = q + (C i j ). Let B be the set of classes C i j which contain at least one applicant in Ω i (x)\H i (x) and |C i j ∩ H i (x)| = q + (C i j ). Let U := ℜ(B) and choose V in such a way so that U ∪ V partitions L i . Now (ia) is a consequence of counting. We will prove (ib)(ic) afterwards.
For (ii), by definition of H i (x), none of the applicants in Ω i (x)\H i (x) contributes to the shaft x(S(H i (x))). As a result, for Constraint (3) to hold for the comb K(i, S(H i (x))), every tooth-applicant a ∈ H i (x) must contribute at least 1, and indeed, by Constraint (1), exactly 1. So we have the first statement of (ii). The second statement holds because it cannot happen that x(T (i, a)) = x(T (i ′ , a)) = 1, given that x ia > 0 and x i ′ a > 0.
For (iii), By Definition 21, all sets E i (x) are disjoint; thus, every applicant who is matched under x belongs to exactly one E i (x) and at most one H i (x) by (ii). Therefore, i∈I |E i (x)| ≥ i∈I |H i (x)| and we just need to show that for each institute i, |E i (x)| ≤ |H i (x)|, and this follows by using (ia):
|H_i(x)| = Σ_{C^i_k∈U} q^+(C^i_k) + Σ_{C^i_k∈V} |H_i(x) ∩ C^i_k| ≥ Σ_{C^i_k∈U} |E_i(x) ∩ C^i_k| + Σ_{C^i_k∈V} |E_i(x) ∩ C^i_k| = |E_i(x)|, (6)
where the inequality follows from Lemma 22 and the fact all applicants in Ω i (x)\H i (x) are in classes in U . So this establishes (iii). Moreover, as Inequality (6) must hold with equality throughout, for each class C i k ∈ V , if applicant a ∈ C i k is matched to institute i under x, he must belong to both H i (x) and E i (x), implying x ia = 1; given any class
C i k ∈ U , |H i (x) ∩ C i k | = |E i (x) ∩ C i k | = q + (C i k ). So we have (ib).
For (iv), consider the comb K(i, S(E i (x))). By definition, x(T (i, a)\{(i, a)}) = 0 for each applicant a ∈ E i (x). So
x(K(i, S(E_i(x)))) = x(S(E_i(x))) = Σ_{C^i_k∈V} |E_i(x) ∩ C^i_k| + Σ_{C^i_k∈U} Σ_{a′∈C^i_k, (i,a′)∈S(E_i(x))} x_{ia′} ≤ Σ_{C^i_k∈V} |E_i(x) ∩ C^i_k| + Σ_{C^i_k∈U} q^+(C^i_k) = |E_i(x)|,
where the inequality follows from Constraint (2) and the rest can be deduced from (ib). By Constraint (3), the above inequality must hold with equality. So for each class
C^i_k ∈ U, Σ_{a′∈C^i_k, (i,a′)∈S(E_i(x))} x_{ia′} = Σ_{a′∈C^i_k} x_{ia′} = q^+(C^i_k), giving us (ic) and implying that there is no applicant in a class C^i_k ∈ U who is matched to institute i under x and ranks lower than all applicants in E_i(x) ∩ C^i_k. The proof of (iv) follows from
Σ_{a∈A} x_{ia} = Σ_{C^i_k∈V} Σ_{a∈C^i_k} x_{ia} + Σ_{C^i_k∈U} Σ_{a∈C^i_k} x_{ia} = Σ_{C^i_k∈V} |E_i(x) ∩ C^i_k| + Σ_{C^i_k∈U} q^+(C^i_k) = |E_i(x)|.
⊓ ⊔
Packing Algorithm
We now introduce a packing algorithm to establish the integrality of the polytope. Our algorithm is generalized from that proposed by Sethuraman, Teo, and Qian [22]. Given x ∈ P f sm , for each institute i, we create |E i (x)| "bins," each of size (height) 1; each bin is indexed by (i, j), where 1 ≤ j ≤ |E i (x)|. Each x ia > 0 is an "item" to be packed into the bins. Bins are filled from the bottom to the top. When the context is clear, we often refer to those items x ia as simply applicants; if applicant a ∈ C i j , then the item x ia is said to belong to the class C i j . In Phase 0, each institute i puts the items x ia , if a ∈ H i (x), into each of its |E i (x)| bins. In the following phase, t = 1, 2, · · · , our algorithm proceeds by first finding out the set L t of bins with maximum available space;
then assigning each of the bins in L t one item.
The assignment in each phase proceeds by steps, indexed by l = 1, 2, · · · , |L t |. The order of the bins in L t to be examined does not matter. How the institute i chooses the items to be put into its bins is the crucial part in which our algorithm differs from that of Sethuraman, Teo, and Qian. We maintain the following invariant.
Invariant C: The collection of the least preferred items in the |E_i(x)| bins (i.e., the items currently on top of institute i's bins) should respect the quotas of the classes in C(i).
Subject to this invariant, institute i chooses the best remaining item and adds it into the bin (i, j), which has the maximum available space in the current phase. This unavoidably raises another issue: how can we be sure that there is at least one remaining item for institute i to put into the bin (i, j) without violating Invariant C? We will address this issue in our proof.
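The following self-contained Python sketch mirrors this packing procedure under the same assumed data structures as before (x a dict over (institute, applicant) pairs, preference lists best-first, classes as frozensets). Termination and exact filling of the bins are only guaranteed when x lies in the polytope, as Theorem 24 below shows; outside the polytope the loop may stall.

def pack(x, inst_pref, app_pref, classes, q_plus, eps=1e-9):
    def rank(pref, z):
        return pref.index(z) if z in pref else len(pref)
    institutes = {i for (i, _) in x}
    bins, level, remaining = {}, {}, {}
    # Phase 0: seed one bin per member of H_i(x) (greedy tops respecting class bounds).
    for i in institutes:
        remaining[i] = [a for a in inst_pref[i] if x.get((i, a), 0.0) > 0]
        H = []
        for a in list(remaining[i]):
            if all(sum(1 for b in H if b in C) < q_plus[(i, C)]
                   for C in classes[i] if a in C):
                H.append(a)
        for j, a in enumerate(H):
            bins[(i, j)] = [a]
            level[(i, j)] = x[(i, a)]
            remaining[i].remove(a)
    def respects_invariant_C(i, cand, skip_bin):
        # Tops of i's bins, with skip_bin's top replaced by cand, must respect quotas.
        tops = [bins[b][-1] for b in bins if b[0] == i and b != skip_bin] + [cand]
        return all(sum(1 for a in tops if a in C) <= q_plus[(i, C)]
                   for C in classes[i])
    # Phases: repeatedly give one item to every bin with maximum available space.
    while any(level[b] < 1 - eps for b in bins):
        gap = max(1 - level[b] for b in bins)
        for b in [b for b in bins if 1 - level[b] > gap - eps]:
            i, _ = b
            for a in remaining[i]:                     # i's best remaining item first
                if respects_invariant_C(i, a, b):
                    bins[b].append(a)
                    level[b] += x[(i, a)]
                    remaining[i].remove(a)
                    break
    return bins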
Theorem 24. Let x ∈ P f sm . Let M i,j be the set of applicants assigned to bin (i, j) at the end of any step of the packing procedure and a i,j be the lowest-ranking applicant of institute i in bin (i, j) (implying x ia i,j is on top of bin (i, j)). Then (i) In any step, suppose that the algorithm is examining bin (i, j). Then institute i can find at least one item in its remaining items to add into bin (i, j) without violating Invariant C;
(ii) For all bins (i, j), x(M_{i,j}\{a_{i,j}}) + x(T(i, a_{i,j})) = x(M_{i,j}) + x(T(i, a_{i,j})\{(i, a_{i,j})}) = 1;
(iii) At the end of any step, institute i can organize a comb K(i, S(A_i)), where A_i is composed of applicants in {a_{i,j′}}_{j′=1}^{|E_i(x)|}, so that x(K(i, S(A_i))) = Σ_{j′=1}^{|E_i(x)|} x(M_{i,j′}) + Σ_{j′=1}^{|E_i(x)|} x(T(i, a_{i,j′})\{(i, a_{i,j′})}) = |E_i(x)|;
(iv) At the end of any step, an item x_{ia} is not put into institute i's bins if and only if there exists a class C^i_{at} ∈ a(C(i)) so that |{a_{i,j′}}_{j′=1}^{|E_i(x)|} ∩ C^i_{at} ∩ L^i_{≻a}| = q^+(C^i_{at});
(v) If x_{ia} is packed and x_{i′a} is not, then i′ ≻_a i;
(vi) At the end of any phase, the a i,j in all bins are distinct. In particular, for any applicant a who is matched under x, there exists some bin (i, j) such that a = a i,j .
Proof. We first assume that (ii) holds and prove (i). Observe that (ii) implies that given any applicant a ∈ E i (x), its corresponding item x ia , if already put into a bin, must be on its top and fills it completely. Since (i, j) currently has available space, at least one applicant in E i (x) is not in institute i's bins yet. We claim that there exists at least one remaining applicant in E i (x) that can be added into bin (i, j). Suppose not. Let the set of applicants in E i (x) that are not put into i's bins be G. Given any applicant a ∈ G, there must exist some class
C^i_k ∈ a(C(i)) for which |{a_{i,j′} : 1 ≤ j′ ≤ |E_i(x)|, j′ ≠ j} ∩ C^i_k| = q^+(C^i_k). Let B be the set of classes C^i_k that contain at least one applicant in G and satisfy |{a_{i,j′} : 1 ≤ j′ ≤ |E_i(x)|, j′ ≠ j} ∩ C^i_k| = q^+(C^i_k). Let G′ be (E_i(x)\G) \ ∪_{C^i_k∈ℜ(B)} C^i_k, the subset of applicants in E_i(x) that are already put into the bins but do not belong to any class in ℜ(B). Note that none of the applicants in G′ can be in the bin (i, j). Thus, by counting the number of bins other than (i, j), we have
|E_i(x)| − 1 ≥ |G′| + Σ_{C^i_k∈ℜ(B)} |{a_{i,j′}}_{j′=1, j′≠j}^{|E_i(x)|} ∩ C^i_k| = |G′| + Σ_{C^i_k∈ℜ(B)} q^+(C^i_k).
Note that all applicants in E i (x)\G ′ are in some class in ℜ(B) (either they are already put into the bins or not). Then by the pigeonhole principle, there is at least one class C i k ∈ ℜ(B) for which
|(E i (x)\G ′ ) ∩ C i k | > q + (C i k ), contradicting Lemma 22.
We now prove (ii)-(vi) by induction on the number of phases. In the beginning, (ii)(v)(vi) holds by Lemma 23(ii)(iii). (iii)(iv) hold by setting A i := H i (x) and observation Definition 21 and Lemma 23(ii).
Suppose that the theorem holds up to Phase t. Let α be the maximum available space in Phase t + 1. Suppose that the algorithm is examining bin (i, j) and institute i chooses item x ia to be put into this bin. From (vi) of the induction hypothesis, applicant a is on top of another bin (i ′ , j ′ ), where i ′ = i, in the beginning of phase t + 1. Then by (ii)(v) of the induction hypothesis,
x(T (i, a)) ≤ x(T (i ′ , a)) − x i ′ a = 1 − x(M i ′ ,j ′ ) ≤ α,(7)
where the last inequality follows from our assumption that in Phase t + 1 the maximum available space is α. Note also that
if x(T(i, a)) = α, then (i′, j′) ∈ L_{t+1} (bin (i′, j′) is also examined in Phase t + 1). (8)
Assume that A_i is the tuple composed of the applicants in {a_{i,j′}}_{j′=1}^{|E_i(x)|}; for our induction step, let A_i := A_i|^a_{a_{i,j}}, the tuple obtained by replacing a_{i,j} with a.
We first prove (iv). Since x_{ia} is not put into the bins before this step, by (iv) of the induction hypothesis there exists some class C^i_{al} ∈ a(C(i)) for which |{a_{i,j′}}_{j′=1}^{|E_i(x)|} ∩ C^i_{al} ∩ L^i_{≻a}| = q^+(C^i_{al}).
Let C i al be the smallest such class. Since x ia is allowed to put on top of x ia i,j , a ij ≻ i a and a ij ∈ C i al , otherwise, Invariant C regarding q + (C i al ) is violated. Now we show that all other items x ia ′ fulfill the condition stated in (iv). There are two cases.
-Suppose that x ia ′ is not put into the bins yet.
• Suppose that a i,j ≻ i a ′ ≻ i a. We claim that it cannot happen that for all classes C i a ′ t ∈ a ′ (C(i)),
|A i ∩ C i a ′ t ∩ L i ≻a ′ | < q + (C i a ′ t )
, otherwise, A i | a a ′ is still feasible, in which case institute i would have chosen x ia ′ , instead of x ia to put into bin (i, j), a contradiction.
• Suppose that a_{i,j} ≻_i a ≻_i a′. By (iv) of the induction hypothesis, there exists a class C^i_{a′l′} ∈ a′(C(i)) for which |A_i ∩ C^i_{a′l′} ∩ L^i_{≻a′}| = q^+(C^i_{a′l′}). If C^i_{a′l′} ⊄ C^i_{al}, it is easy to see that this count is unaffected by the replacement and |A_i ∩ C^i_{a′l′} ∩ L^i_{≻a′}| = q^+(C^i_{a′l′}) still holds; if C^i_{a′l′} ⊂ C^i_{al}, then C^i_{al} ∈ a′(C(i)) and we have |A_i ∩ C^i_{al} ∩ L^i_{≻a′}| = q^+(C^i_{al}). In both situations, the condition of (iv) regarding x_{ia′} is satisfied.
- Suppose that x_{ia′} is already put into the bins. It is trivial if a′ ≻_i a, so assume that a ≻_i a′. We claim that none of the classes C^i_{a′t} ∈ a′(C(i)) can be a subclass of C^i_{al} or C^i_{al} itself. Otherwise, C^i_{al} ∈ a′(C(i)), and we have q^+(C^i_{al}) = |A_i ∩ C^i_{al} ∩ L^i_{≻a}| ≤ |A_i ∩ C^i_{al} ∩ L^i_{≻a′}|, a contradiction to (iv) of the induction hypothesis. Now since every class C^i_{a′t} ∈ a′(C(i)) satisfies C^i_{a′t} ⊄ C^i_{al} and C^i_{a′t} ≠ C^i_{al}, the count |A_i ∩ C^i_{a′t} ∩ L^i_{≻a′}| is unchanged by the replacement and remains < q^+(C^i_{a′t}), where the strict inequality is due to the induction hypothesis.
We notice that the quantity Σ_{j′=1}^{|E_i(x)|} x(M_{i,j′}) is exactly the value of the shaft (x(S(A_i)) before x_{ia} is added, and the shaft of the updated tuple after x_{ia} is added), by observing (iv). Below let x(M_{i,j}) and x(M̄_{i,j}) denote the total size of the items in bin (i, j) before and after x_{ia} is added into it, so that x(M̄_{i,j}) = x(M_{i,j}) + x_{ia}. Now we can derive the following:
x(K(i, S(A_i))) = x(S(A_i)) + x(T(i, a)\{(i, a)}) + Σ_{j′=1, j′≠j}^{|E_i(x)|} x(T(i, a_{i,j′})\{(i, a_{i,j′})})
= x(M_{i,j}) + x_{ia} + x(T(i, a)\{(i, a)}) + Σ_{j′=1, j′≠j}^{|E_i(x)|} [x(M_{i,j′}) + x(T(i, a_{i,j′})\{(i, a_{i,j′})})]
= x(M_{i,j}) + x(T(i, a)) + |E_i(x)| − 1 (by (ii) of the induction hypothesis)
≥ |E_i(x)| (by Constraint (3))
For the above inequality to hold,
x(M i,j ) + x(T (i, a)) ≥ 1.(9)
Since x(M i,j ) = 1− α and x(T (i, a)) ≤ α by Inequality (7), Inequality (9) must hold with equality, implying that x(K(i, S(A i ))) = |E i (x)|, giving us (iii).
Since institute i puts x ia into bin (i, j), the "new" M i,j and the "new" a i,j (=a) satisfies
x(M i,j ) + x(T (i, a)\{(i, a)}) = 1.
This establishes (ii). (v) follows because Inequality (7) must hold with equality throughout. Therefore, there is no institute i ′′ which ranks strictly between i and i ′ and x i ′′ a > 0.
Finally for (vi), note that x(T (i, a)) = α if the item x ia is put into some bin in Phase t+1. All such items are the least preferred items in their respective "old" bins (immediately before Phase t + 1), it means the items on top of the newly-packed bins are still distinct. Moreover, from (8), if a bin (i, j) is not examined in Phase t + 1, then its least preferred applicant cannot be packed in phase t + 1 either.
⊓ ⊔
We define an assignment µ α based on a number α ∈ [0, 1) as follows. Assume that there is a line of height α "cutting through" all the bins horizontally. If an item x ia whose position in i's bins intersects α, applicant a is assigned to institute i. In the case this cutting line of height α intersects two items in the same bin, we choose the item occupying the higher position. More precisely:
Given α ∈ [0, 1), for each institute i ∈ I, we define an assignment as follows: µ_α(i) = {a : 1 − x(T(i, a)) ≤ α < 1 − x(T(i, a)) + x_{ia}}.
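A direct transcription of this rule, under the same assumed data structures (x as a dict over (institute, applicant) pairs):

def mu_alpha(alpha, x, app_pref):
    def rank(pref, z):
        return pref.index(z) if z in pref else len(pref)
    def tooth(i, a):  # x(T(i, a))
        return sum(v for (j, b), v in x.items()
                   if b == a and rank(app_pref[a], j) <= rank(app_pref[a], i))
    assignment = {}
    for (i, a), v in x.items():
        if v > 0:
            t = tooth(i, a)
            if 1 - t <= alpha < 1 - t + v:
                assignment.setdefault(i, set()).add(a)
    return assignment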
Theorem 25. The polytope determined by Constraints (1)-(5) is integral.
Proof. We generate uniformly at random a number α ∈ [0, 1) and use it to define an assignment µ α . To facilitate the discussion, we choose the largest α ′ ≤ α so that µ α ′ = µ α . Intuitively, this can be regarded as lowering the cutting line from α to α ′ without modifying the assignment, and 1 − α ′ is exactly the maximum available space in the beginning of a certain phase l during the execution of our packing algorithm. Note that the assignment µ α is then equivalent to giving those applicants (items) on top of institute i's bins to i at the end of phase l.
We now argue that µ α is a stable matching. First, it is a matching by Theorem 24(vi). The matching respects the quota of all classes since Invariant C is maintained. What remains to be argued is the stability of µ α . Suppose, for a contradiction, (i, a φ ) is a blocking pair. We consider the possible cases.
-Suppose that x ia φ > 0 and x ia φ is not put into the bins yet at the end of Phase l. Then by Theorem 24(iv) and the definition of blocking pairs, (i, a φ ) cannot block µ α . -Suppose that x ia φ > 0 and x ia φ is already put into the bins at the end of Phase l. If µ α (a φ ) = i, there is nothing to prove. So assume µ α (a φ ) = i and this means that the item x ia φ is "buried" under some other item on top of some of i's bins at the end of Phase l. Then by Theorem 24(v), a φ is assigned to some other institute ranking higher than i, contradicting the assumption that (i, a φ ) is a blocking pair. -Suppose that x ia φ = 0. There are two subcases.
• Suppose that for each of the classes C^i_{a_φ t} ∈ a_φ(C(i)), |µ_α(i) ∩ C^i_{a_φ t}| < q^+(C^i_{a_φ t}). Then we can form a new feasible tuple µ_α(i)|a_φ (i.e., µ_α(i) with a_φ appended). It can be inferred from the definition of the shaft that x(S(µ_α(i)|a_φ)) ≤ x(S(µ_α(i))). Moreover, by Theorem 24(iii), we have x(K(i, S(µ_α(i)))) = |E_i(x)|. Now by Constraint (3),
|E_i(x)| + 1 ≤ x(K(i, S(µ_α(i)|a_φ))) ≤ x(S(µ_α(i))) + x(T(i, a_φ)\{(i, a_φ)}) + Σ_{a∈µ_α(i)} x(T(i, a)\{(i, a)}) = x(K(i, S(µ_α(i)))) + x(T(i, a_φ)\{(i, a_φ)}) = |E_i(x)| + x(T(i, a_φ)\{(i, a_φ)}).
As a result, x(T(i, a_φ)\{(i, a_φ)}) = 1, implying that µ_α(a_φ) ≻_{a_φ} i, a contradiction to the assumption that (i, a_φ) blocks µ_α.
• Suppose that there exists a class C^i_{a_φ l_φ} ∈ a_φ(C(i)) for which |µ_α(i) ∩ C^i_{a_φ l_φ}| = q^+(C^i_{a_φ l_φ}). Let C^i_{a_φ l_φ} be the smallest such class. By the definition of blocking pairs, there must exist an applicant a† ∈ µ_α(i) ∩ C^i_{a_φ l_φ} who ranks lower than a_φ. Choose a† to be the lowest-ranking such applicant in µ_α(i). We make the following critical observation:
x(S(µ α (i)| a † a φ )) ≤ x(S(µ α (i))) − x ia † .(10)
To see this, we first argue that, given an item x_{ia} > 0, if it does not contribute to the shaft S(µ_α(i)), then it cannot contribute to the shaft S(µ_α(i)|^{a_φ}_{a†}) either. It is trivial if a ≻_i a†, so assume that a† ≻_i a. First suppose that a ∉ C^i_{a_φ l_φ}. Then given any class C^i_{at} ∈ a(C(i)),
|µ_α(i) ∩ C^i_{at} ∩ L^i_{≻a}| = |µ_α(i)|^{a_φ}_{a†} ∩ C^i_{at} ∩ L^i_{≻a}|,
and Theorem 24(iv) states that there is a class C^i_{al} ∈ a(C(i)) such that |µ_α(i) ∩ C^i_{al} ∩ L^i_{≻a}| = q^+(C^i_{al}). Secondly, suppose that a ∈ C^i_{a_φ l_φ}. Observe that q^+(C^i_{a_φ l_φ}) = |µ_α(i)|^{a_φ}_{a†} ∩ C^i_{a_φ l_φ} ∩ L^i_{≻a†}| = |µ_α(i)|^{a_φ}_{a†} ∩ C^i_{a_φ l_φ} ∩ L^i_{≻a}| (the first equality follows from the choice of a†). In both cases, we conclude that x_{ia} cannot contribute to the shaft S(µ_α(i)|^{a_φ}_{a†}). The term x_{ia†} does not contribute to the shaft S(µ_α(i)|^{a_φ}_{a†}) by the same argument. Now using Constraint (3), Theorem 24(iii), and Inequality (10), we have
|E_i(x)| ≤ x(K(i, S(µ_α(i)|^{a_φ}_{a†}))) ≤ x(S(µ_α(i))) − x_{ia†} + x(T(i, a_φ)\{(i, a_φ)}) + Σ_{a∈µ_α(i)\{a†}} x(T(i, a)\{(i, a)}) = |E_i(x)| − x(T(i, a†)) + x(T(i, a_φ)).
(Note that x ia φ = 0).
Therefore,
x(T (i, a φ )) ≥ x(T (i, a † )) ≥ 1 − α ′ ≥ 1 − α.
So µ_α(a_φ) ≻_{a_φ} i, again a contradiction to the assumption that (i, a_φ) blocks µ_α. We have thus established that the generated assignment µ_α is a stable matching. Now the remaining proof is the same as in [23]. Assume that µ_α(i, a) = 1 if and only if applicant a is assigned to institute i under µ_α. Then Exp[µ_α(i, a)] = x_{ia}, i.e., x_{ia} = ∫_0^1 µ_α(i, a) dα, and x can be written as a convex combination of the µ_α as α varies over the interval [0, 1). The integrality of the polytope thus follows.
⊓ ⊔
Optimal Stable Matching
Since our polytope is integral, we can write suitable objective functions to target for various optimal stable matchings using Ellipsoid algorithm [10]. As the proposed LP has an exponential number of constraints, we also design a separation oracle to get a polynomial time algorithm. The basic idea of our oracle is based on dynamic programming.
Median-Choice Stable Matching
An application of our polyhedral result is the following.
Theorem 26. Suppose that in the given instance, all classifications are laminar families and there is no lower bound, q − (C i j ) = 0 for any class C i j . Let µ 1 , µ 2 , · · · , µ k be stable matchings. If we assign every applicant to his median choice among all the k matchings, the outcome is a stable matching.
Proof. Let x^{µ_t} be the solution based on µ_t for 1 ≤ t ≤ k, and apply our packing algorithm to the fractional solution x = (Σ_{t=1}^{k} x^{µ_t})/k. Then let α = 0.5 and let µ_{0.5} be the stable matching resulting from the cutting line of height α = 0.5. We make the following observation based on Theorem 24:
Suppose that applicant a is matched under x and the institutes with which he is matched are i_1, i_2, · · · , i_{k′}, ordered based on their rankings on a's preference list. Assume that he is matched to i_l exactly n_l times among the k given stable matchings. At the termination of the packing algorithm, each item x_{i_l a}, 1 ≤ l ≤ k′, appears in institute i_l's bins and its position spans from (Σ_{t=1}^{l−1} n_t)/k to (Σ_{t=1}^{l} n_t)/k. That µ_{0.5} gives every applicant his median choice now follows easily from this observation.
⊓ ⊔
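A small sketch of the resulting rule (matchings given as dicts from applicants to institutes; we assume, as in the theorem, that every applicant is matched in all k stable matchings). For even k, index (k − 1)//2 picks the better of the two middle outcomes, which corresponds to choosing the higher item at the α = 0.5 cut; this tie-breaking convention is our reading of the proof.

def median_choice(matchings, app_pref):
    k = len(matchings)
    result = {}
    for a in matchings[0]:
        outcomes = sorted((m[a] for m in matchings),
                          key=lambda i: app_pref[a].index(i))   # best first
        result[a] = outcomes[(k - 1) // 2]                      # median choice
    return result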
Using similar ideas, we can show that an applicant-optimal stable matching must be institute-(lexicographical)-pessimal and, similarly, an applicant-pessimal stable matching must be institute-(lexicographical)-optimal: take x as the average of all stable matchings and consider the two matchings µ_ε and µ_{1−ε} for arbitrarily small ε > 0. Hence, it is tempting to conjecture that the median choice stable matching is also a lexicographical median outcome for the institutes. Somewhat surprisingly, it turns out not to be the case, and a counter-example can be found in the appendix.
Polytope for Many-to-Many "Unclassified" Stable Matching
In the many-to-many stable matching problem, each entity e ∈ I ∪ A has a quota Q(e) ∈ Z^+ and a preference list over a subset of the other side. A matching µ is feasible if, given any entity e ∈ I ∪ A, (1) |µ(e)| ≤ Q(e), and (2) µ(e) is a subset of the entities on e's preference list. A feasible matching µ is stable if there is no blocking pair (i, a), which means that i prefers a to one of its assignments in µ(i), or |µ(i)| < Q(i) and a ∉ µ(i); and similarly a prefers i to one of his assignments in µ(a), or |µ(a)| < Q(a) and i ∉ µ(a).
We now transform the problem into (many-to-one) LCSM. For each applicant a ∈ A, we create Q(a) copies, each of which retains the original preference of a. All institutes replace the applicants by their clones on their lists. To break ties, all institutes rank the clones of the same applicant in an arbitrary but fixed manner. Finally, each institute treats the clones of the same applicant as a class with upper bound 1. It can be shown that the stable matchings in the original instance and in the transformed LCSM instance have a one-one correspondence. Thus, we can use Constraints (1)- (5) to describe the former 6 .
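A sketch of this cloning transformation follows; the function and field names are our own, and the clone identifiers are purely illustrative.

def clone_instance(inst_pref, app_pref, Q_app):
    """Reduce many-to-many stable matching to many-to-one LCSM by splitting each
    applicant a into Q(a) clones; each institute treats the clones of a as a
    class with upper bound 1 and breaks ties among clones in a fixed order."""
    clones = {a: ["%s#%d" % (a, t) for t in range(Q_app[a])] for a in app_pref}
    new_app_pref = {c: list(app_pref[a]) for a in app_pref for c in clones[a]}
    new_inst_pref, classes, q_plus = {}, {}, {}
    for i, prefs in inst_pref.items():
        new_inst_pref[i] = [c for a in prefs for c in clones[a]]  # fixed tie-breaking
        classes[i] = [frozenset(clones[a]) for a in prefs]
        for a in prefs:
            q_plus[(i, frozenset(clones[a]))] = 1
    return new_inst_pref, new_app_pref, classes, q_plus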
Conclusion and Future Work
In this paper, we introduce classified stable matching and present a dichotomy theorem to draw a line between its polynomial solvability and NP-completeness. We also study the problem using the polyhedral approach and propose polynomial time algorithms to obtain various optimal matchings.
We choose the terms "institutes" and "applicants" in our problem definition, instead of the more conventional hospitals and residents, for a reason. We are aware that in real-world academics, many departments not only have ranking over their job candidates but also classify them based on their research areas. When they make their hiring decision, they have to take the quota of the classes into consideration. And in fact, we were originally motivated by this common practice.
Classified stable matching has also arisen in the real world: in a hospitals/residents matching program in Scotland, certain hospitals declared that they did not want more than one female physician. Roth [16] proposed an algorithm to show that stable matchings always exist in this setting.
There are quite a few questions that remain open. The obvious one would be to write an LP to describe LCSM with both upper bounds and lower bounds. Even though we can obtain various optimal stable matchings, the Ellipsoid algorithm can be inefficient. It would be nicer to have fast combinatorial algorithms. The rotation structure of Gusfield and Irving [11] seems the way to go.
A An Example for Section 2.2
In contrast to the generalized rural hospitals theorem in LCSM, if some institutes use intersecting classes, stable matching sizes may differ. Figure 3 is an example.
Institute Preferences, Classifications, Quota:
i1: a1 a2 a3; C^1_1 = {a1, a2}, C^1_2 = {a1, a3}; Q(i1) = 2, q^+(C^1_1) = 1, q^+(C^1_2) = 1
i2: a2 a1 a3 a4; C^2_1 = {a2, a1}, C^2_2 = {a2, a3}, C^2_3 = {a2, a4}; Q(i2) = 2, q^+(C^2_1) = 1, q^+(C^2_2) = 1, q^+(C^2_3) = 1
Fig. 3. An instance in which stable matching sizes differ because some institutes use intersecting classes.
B Missing Proofs of Section 3
In this section, we prove Theorem 16. We assume that the set of posets P = {P 1 , P 2 , · · · , P k } contains a poset which is not a downward forest. Moreover, we assume that there is no lower bound on the classes. Without loss of generality, we assume that P 1 is not a downward forest. Such a poset must have a "V." By definition, there exists institute i whose class inclusion poset P (i) is isomorphic to P 1 . This implies that institute i must have two intersecting classes in C(i). In the following, we will present a reduction in which all institutes use at most two classes (that can be intersecting). It is straightforward to use some dummy institutes and applicants to "pad" our reduction so that every poset P j ∈ P is isomorphic to some class inclusion poset of the institutes in the derived instance. Our reduction is from one-in-three-sat. We will use an instance in which there is no negative literal. (NP-completeness still holds under this restriction [9].)
The overall goal is to design a reduction so that the derived P-classified stable matching instance allows a stable matching if and only if the given instance φ = c 1 ∧ c 2 ∧ · · · ∧ c k is satisfiable. We will build a set of clause gadgets to represent each clause c j . For every pair of literals which belong to the same clause, we create a literal-pair gadget. Such a gadget will guarantee that at most one literal it represents can be "activated" (set to TRUE). The clause gadget interacts with the literalpair gadgets in such a way that if the clause is to be satisfied, exactly one literal it contains can be activated.
Literal-Pair Gadget Suppose that x j i and x j i ′ both belong to the same clause c j . We create a gadget Υ j i,i ′ composed of four applicants {a j i,t } 2 t=1 ∪ {a j i ′ ,t } 2 t=1 and two institutes {I j i , I j i ′ } whose preferences and classifications are summarized below.
Applicants:
a^j_{i,1}: I^j_i ≻ Γ(a^j_{i,1}) ≻ I^j_{i′};  a^j_{i,2}: I^j_{i′} ≻ I^j_i;  a^j_{i′,1}: I^j_i ≻ Γ(a^j_{i′,1}) ≻ I^j_{i′};  a^j_{i′,2}: I^j_{i′} ≻ I^j_i.
Institutes:
I^j_i: a^j_{i,2} ≻ a^j_{i,1} ≻ a^j_{i′,2} ≻ a^j_{i′,1} ≻ Ψ(I^j_i); classes C^{I^j_i}_1 = {a^j_{i,1}, a^j_{i,2}}, C^{I^j_i}_2 = {a^j_{i,1}, a^j_{i′,1}}; Q(I^j_i) = 2, q^+(C^{I^j_i}_1) = 1, q^+(C^{I^j_i}_2) = 1.
I^j_{i′}: a^j_{i,1} ≻ a^j_{i,2} ≻ a^j_{i′,1} ≻ a^j_{i′,2}; class C^{I^j_{i′}}_1 = {a^j_{i,1}, a^j_{i,2}}; Q(I^j_{i′}) = 2, q^+(C^{I^j_{i′}}_1) = 1.
We postpone the explanation of the Γ and Ψ functions for the time being. We first make the following claim.
Claim B: Suppose that in a stable matching µ, the only possible assignments for
{a j i,1 , a j i,2 , a j i ′ ,1 , a j i ′ ,2 } are {I j i , I j i ′ }.
Then there can only be three possible outcomes in µ.
1. µ(a^j_{i,1}) = I^j_i, µ(a^j_{i,2}) = I^j_{i′}, µ(a^j_{i′,1}) = I^j_{i′}, µ(a^j_{i′,2}) = I^j_i. (In this case, we say x_i is activated while x_{i′} remains deactivated.)
2. µ(a^j_{i,1}) = I^j_{i′}, µ(a^j_{i,2}) = I^j_i, µ(a^j_{i′,1}) = I^j_i, µ(a^j_{i′,2}) = I^j_{i′}. (In this case, we say x_{i′} is activated while x_i remains deactivated.)
3. µ(a^j_{i,1}) = I^j_{i′}, µ(a^j_{i,2}) = I^j_i, µ(a^j_{i′,1}) = I^j_{i′}, µ(a^j_{i′,2}) = I^j_i. (In this case, neither x_i nor x_{i′} is activated.)
The outcome µ(a^j_{i,1}) = I^j_i, µ(a^j_{i,2}) = I^j_{i′}, µ(a^j_{i′,1}) = I^j_i, µ(a^j_{i′,2}) = I^j_{i′} will not happen due to the quota q^+(C^{I^j_i}_2). This case corresponds to the situation where x_i and x_{i′} are both activated and is what we want to avoid.
We now explain how to realize the supposition in Claim B about the fixed potential assignments for {a j i,t } 2 t=1 ∪ {a j i ′ ,t } 2 t=1 in a stable matching. It can be easily checked that if a j i,1 is matched to some institute in Γ (a j i,1 ), or either of {a j i,1 , a j i,2 } is unmatched; or if either of {a j i ′ ,1 , a j i ′ ,2 } is unmatched, then there must exist a blocking group involving a subset of
{I j i , I j i ′ , {a j i,t } 2 t=1 , {a j i ′ ,t } 2 t=1 }.
However, the following matching µ φ can happen in which a j i ′ ,1 is matched to some institute in Γ (a j i ′ ,1 ) but there is no blocking group : µ φ (a j i,1 ) = I j i , µ φ (a j i,2 ) = µ φ (a j i ′ ,2 ) = I j i ′ , µ φ (a j i ′ ,1 ) ∈ Γ (a j i ′ ,1 ). 7 To prevent the above scenario from happening (i.e., we want µ φ to be unstable), we introduce another gadget Υ j i , associated with I j i , to guarantee a blocking group will appear. We now list the preferences and classifications of the members of Υ j i below.
Applicant preferences:
a^j_{i,1}: I^j_{i,4} ≻ I^j_{i,1} ≻ I^j_{i,3} ≻ I^j_{i,2};  a^j_{i,2}: I^j_{i,3} ≻ I^j_{i,4} ≻ I^j_{i,2} ≻ I^j_{i,1};  a^j_{i,3}: I^j_{i,4} ≻ I^j_{i,3} ≻ I^j_{i,1} ≻ I^j_{i,2};  a^j_{i,4}: I^j_{i,4} ≻ I^j_{i,1} ≻ I^j_{i,2} ≻ I^j_{i,3};  a^j_{i,5}: I^j_{i,2} ≻ I^j_{i,4} ≻ I^j_{i,3} ≻ I^j_{i,1};  a^j_{i,6}: I^j_{i,2} ≻ I^j_{i,4} ≻ I^j_{i,3} ≻ I^j_{i,1}.
Institute preferences, classifications and bounds:
I^j_{i,1}: a^j_{i,5} ≻ a^j_{i,2} ≻ a^j_{i,4} ≻ a^j_{i,6} ≻ a^j_{i,3} ≻ a^j_{i,1}; Q(I^j_{i,1}) = 2.
I^j_{i,2}: a^j_{i,4} ≻ a^j_{i,6} ≻ a^j_{i,2} ≻ a^j_{i,3} ≻ a^j_{i,1} ≻ a^j_{i,5}; C^{I^j_{i,2}}_1 = {a^j_{i,1}, a^j_{i,2}, a^j_{i,3}}, C^{I^j_{i,2}}_2 = {a^j_{i,3}, a^j_{i,4}, a^j_{i,5}}; Q(I^j_{i,2}) = 2, q^+(C^{I^j_{i,2}}_1) = 1, q^+(C^{I^j_{i,2}}_2) = 1.
I^j_{i,3}: a^j_{i,4} ≻ a^j_{i,5} ≻ a^j_{i,6} ≻ a^j_{i,3} ≻ a^j_{i,1} ≻ a^j_{i,2}; C^{I^j_{i,3}}_1 = {a^j_{i,1}, a^j_{i,2}, a^j_{i,3}}, C^{I^j_{i,3}}_2 = {a^j_{i,3}, a^j_{i,4}, a^j_{i,5}}; Q(I^j_{i,3}) = 2, q^+(C^{I^j_{i,3}}_1) = 1, q^+(C^{I^j_{i,3}}_2) = 1.
I^j_{i,4}: a^j_{i,4} ≻ a^j_{i,1} ≻ a^j_{i,6} ≻ a^j_{i,2} ≻ a^j_{i,3} ≻ a^j_{i,4}; C^{I^j_{i,4}}_1 = {a^j_{i,1}, a^j_{i,2}, a^j_{i,3}}, C^{I^j_{i,4}}_2 = {a^j_{i,3}, a^j_{i,4}, a^j_{i,5}}; Q(I^j_{i,4}) = 2, q^+(C^{I^j_{i,4}}_1) = 1, q^+(C^{I^j_{i,4}}_2) = 1.
7 It can be verified that if a^j_{i′,1} is matched to some institute in Γ(a^j_{i′,1}), the above assignment is the only possibility in which no blocking group arises.
The above instance Υ j i has the following features, every one of which is crucial in our construction. 1. In a matching µ φ , suppose that institute I j i is only assigned a j i,1 while a j i ′ ,1 is assigned to some institutes in Γ (a j i ′ ,1 ) (the problematic case we discussed above). As a result, institute I j i can take one more applicant from the set {a j i,t } 6 t=1 . By Feature A, there must exist a blocking group involving the members in Υ j i . More importantly, this blocking group need not be composed of I j i and two applicants from {a j i,t } 6 t=1 . 2. In a matching µ φ , suppose that institutes I j i is assigned two applicants from the set {a j i,t , a j i ′ ,t } 2 t=1 . Then I j i,1 can be regarded as being removed from the instance Υ j i . And there exists a stable matching among the other members of the instance Υ j i . This explains the necessity of Feature B. 3. Finally, since I j i already uses two intersecting classes, I j i,1 should not use any more classes. This explains the reason why Feature C is necessary.
We incorporate the gadget Υ^j_i into the gadget Υ^j_{i,i′}. To be precise, let Ψ(I^j_i) = a^j_{i,5} ≻ a^j_{i,2} ≻ a^j_{i,4} ≻ a^j_{i,6} ≻ a^j_{i,3} ≻ a^j_{i,1}.
We have left the functions Γ (a j i,1 ) and Γ (a j i ′ ,1 ) unexplained so far. They contain institutes belonging to the clauses gadgets, which will be the final component in our construction.
Clause Gadget Suppose that c j = x j 1 ∨ x j 2 ∨ x j 3 .
We create a clause gadgetΥ j composed of two institutes {Î j t } 2 t=1 and six applicants {â j t } 6 t=1 . Their preferences and classifications are summarized below.
We now explain how the Λ functions in the clause gadgets interact with the Γ functions in the literal-pair gadgets. The former is composed of applicants in the literal-pair gadgets while the latter is composed of institutes in the clause gadgets. Our intuition is that the only possible stable matchings in the clause gadgets will enforce exactly one of its three literals to be activated. To be precise, let π(X) denote an arbitrary order among the elements in the set X. Then:
â^j_1: Î^j_2 ≻ Î^j_1.  Î^j_1: â^j_5 ≻ â^j_1 ≻ â^j_2 ≻ Λ(x^j_1) ≻ â^j_6 ≻ Λ(x^j_2) ≻ â^j_3 ≻ Λ(x^j_3) ≻ â^j_4. (The remaining preference lists of the clause gadget were not recovered.)
Finally, we remark that the three possible outcomes in µ listed in the lemma will guarantee that exactly one of the three literals in clause c_j can be activated. The reason is again the same as in the last two cases that we just explained. This completes the proof of Claim C. ⊓ ⊔
Now by Claim C, we establish Theorem 16.
C Missing Proofs of Section 4
Lemma 17. In LCSM, if there is no lower bound, i.e., given any class C^i_j, q^−(C^i_j) = 0, then a stable matching as defined in Definition 2 can be equivalently defined as follows. A feasible matching µ is stable if and only if there is no blocking pair. A pair (i, a) is blocking, given that µ(i) = (a_{i1}, a_{i2}, · · · , a_{ik}), k ≤ Q(i), if
- i ≻_a µ(a);
- for every class C^i_{at} ∈ a(C(i)), |L^i_{≻a} ∩ µ(i) ∩ C^i_{at}| < q^+(C^i_{at}).
Proof. If we have a blocking group (i; g), institute i and the highest ranking applicant in g\µ(i) must be a blocking pair. Conversely, given a blocking pair (i; a), assuming that |µ(i)| = Q(i) (the case that |µ(i)| < Q(i) follows a similar argument), we can form a blocking group (i; µ(i)| a † a), where a † is chosen as follows: (1) if there exists a class C i at ∈ a(C(i)) such that |µ(i) ∩ C i at | = q + (C i at ), choose the smallest such class C i at ∈ a(C(i)) and let a † be the lowest ranking applicant in µ(i) ∩ C i at ; (2) otherwise, a † is simply the lowest ranking applicant in µ(i).
⊓ ⊔ Lemma 19. Every stable matching solution x satisfies the comb inequality for any comb K(i, S(A i )):
x(K(i, S(A_i))) ≡ x(S(A_i)) + Σ_{a_j∈A_i} x(T(i, a_j)\{(i, a_j)}) ≥ |A_i|.
We use the following notation to facilitate the proof. Give a tuple A i , we define y ia as follows:
y_{ia} = 1 if either a ∈ A_i and x(T(i, a)) = 1, or a ∉ A_i, x_{ia} = 1, and (i, a) ∈ S(A_i); and y_{ia} = 0 otherwise.
Let y(C^i_j) = Σ_{a∈L_i∩C^i_j} y_{ia}. This quantity indicates how much a class C^i_j contributes to the comb value x(K(i, S(A_i))). Thus, if U is a set of classes in C(i) partitioning L_i, then x(K(i, S(A_i))) = Σ_{C^i_j∈U} y(C^i_j).
Proof. We prove the lemma by showing that if x(K(i, S(A_i))) < |A_i|, there exists a blocking pair (i, a†), where a† ∈ A_i. We proceed by contradiction. First note that there exists a non-empty subset G ⊆ A_i of applicants a for whom x(T(i, a)) = 0; otherwise, x(K(i, S(A_i))) ≥ |A_i|, an immediate contradiction. For each applicant a ∈ G, there must exist a class C^i_{al} ∈ a(C(i)) for which Σ_{a′∈L^i_{≻a}∩C^i_{al}} x_{ia′} = q^+(C^i_{al}); otherwise, (i, a) is a blocking pair and we are done. Now for each applicant a ∈ G, choose the smallest class C^i_{al} for which Σ_{a′∈L^i_{≻a}∩C^i_{al}} x_{ia′} = q^+(C^i_{al}) and denote this class by C_a. We introduce a procedure to organize a set U of disjoint classes.
Let G be composed of a_1, a_2, · · · , a_{|G|}, ordered based on their decreasing rankings on L_i.
For l = 1 To |G|:
    if a_l ∈ C for some C ∈ U, then do nothing
    else
        U := U \ {C | C ∈ U, C ⊂ C_{a_l}}   // C_{a_l} may be a superclass of some classes in U
        U := U ∪ {C_{a_l}}                   // add C_{a_l} into U
Claim. The output U of the above procedure comprises a disjoint set of classes containing all applicants in G, and for each class C^i_j ∈ U, y(C^i_j) ≥ q^+(C^i_j). We will prove the claim shortly. Now
x(K(i, S(A_i))) = Σ_{C^i_j∈U} y(C^i_j) + |A_i \ ∪_{C^i_j∈U} C^i_j| ≥ Σ_{C^i_j∈U} q^+(C^i_j) + |A_i \ ∪_{C^i_j∈U} C^i_j| ≥ |A_i|,
a contradiction. ⊓ ⊔ Proof of the Claim. It is easy to see that the classes in U are disjoint and contain all applicants in G. Below we show that during the execution of the procedure, if C i j ∈ U , then y(C i j ) ≥ q + (C i j ). We proceed by induction on the number of times U is updated. In the base case U is an empty set so there is nothing to prove.
For the induction step, assume that a l is being examined and C a l is about to be added into U . Observe that even though a∈L i ≻a l ∩Ca l x ia = q + (C a l ), there is no guarantee that if x ia = 1, then
y ia = 1 for each a ∈ L i ≻a l ∩ C a l .
The reason is that there may exist some class C^i_j ∈ a(C(i)) for which |A_i ∩ C^i_j ∩ L^i_{≻a}| = q^+(C^i_j) and a ∉ A_i. Then (i, a) is not part of the shaft S(A_i) and y_{ia} = 0. To deal with the above situation, we need to do some case analysis. Let B be the set of subclasses
is not part of the shaft x(S(A i )) and y ia = 0. To deal with the above situation, we need to do some case analysis. Let B be the set of subclasses
C i j of C a l for which |A i ∩C i j ∩L i ≻a l | = q + (C i j )
. Choose D to be the subclasses of C a l so that ℜ(B∪U )∪D partitions C a l . We make three observations below.
(i) For each class C^i_j ∈ ℜ(B ∪ U) with C^i_j ∈ U, y(C^i_j) ≥ q^+(C^i_j) ≥ Σ_{a∈L^i_{≻a_l}∩C^i_j} x_{ia}.
(ii) For each class C^i_j ∈ D, if a ∈ L^i_{≻a_l} ∩ C^i_j and x_{ia} = 1, then y_{ia} = 1.
(iii) For each class C^i_j ∈ ℜ(B ∪ U) with C^i_j ∉ U, for each applicant a ∈ L^i_{≻a_l} ∩ C^i_j ∩ A_i, either a ∈ G and a ∈ C for some C ∈ U, or a ∉ G (implying that x(T(i, a)) = 1). Moreover, y(C^i_j) ≥ Σ_{a∈L^i_{≻a_l}∩C^i_j} x_{ia}.
(i) is because of the induction hypothesis and the feasibility of x. (ii) follows from the fact that a ranks higher than a_l and the way we define a class in D. For (iii), first notice that if C^i_j ∈ ℜ(B ∪ U) and C^i_j ∉ U, then such a class C^i_j must be part of ℜ(B) and C^i_j may contain some classes in U. Now suppose that some a ∈ G ∩ L^i_{≻a_l} does not belong to any class in U. Then our procedure would have added the class C_a into U before examining a_l, a contradiction. To see the last statement of (iii), let G′ be the set of applicants in L^i_{≻a_l} ∩ C^i_j ∩ A_i who do not belong to any class in U. Then
y(C^i_j) ≥ Σ_{C^i_k∈U, C^i_k⊂C^i_j} y(C^i_k) + |G′| ≥ Σ_{C^i_k∈U, C^i_k⊂C^i_j} q^+(C^i_k) + |G′| ≥ q^+(C^i_j) ≥ Σ_{a∈L^i_{≻a_l}∩C^i_j} x_{ia},
where the first inequality follows from the first part of (iii), the second inequality the induction hypothesis, the third the fact that C i j ∈ ℜ(B) (thus |L i ≻a l ∩ C i j ∩ A i | = q + (C i j )), and the fourth the feasibility assumption of x. Now combining all the three observations, we conclude that
y(C_{a_l}) = Σ_{C^i_j∈ℜ(B∪U)} y(C^i_j) + Σ_{C^i_k∈D} y(C^i_k) ≥ Σ_{C^i_k∈ℜ(B∪U)∪D} Σ_{a∈L^i_{≻a_l}∩C^i_k} x_{ia} = q^+(C_{a_l}),
and the induction step is completed. ⊓ ⊔ Lemma 20. Every stable matching solution x satisfies the following inequality for any class-tuple t i j :
Σ_{a_{ij}∈t^i_j} x(T(i, a_{ij})\{(i, a_{ij})}) ≥ Σ_{a∈C^i_j∩L^i_{≺t^i_j}} x_{ia} (*)
Proof. We prove the lemma by contradiction. Suppose that for a given class-tuple t^i_j, (*) does not hold. We will show that we can find a blocking pair (i, a†), where a† ∈ t^i_j. Let the set of applicants a ∈ t^i_j with x(T(i, a)) = 0 be G, let α = Σ_{a′∈L^i_{≺t^i_j}∩C^i_j} x_{ia′} > 0, and let β = Σ_{a′∈t^i_j} x_{ia′}.
By assumption, at most α − 1 applicants a ∈ t i j have x(T (i, a)\{(i, a)}) = 1. Thus,
|G| ≥ q + (C i j ) − β − α + 1.(11)
Claim: At least one applicant a† ∈ G belongs to a sequence of classes C^i_{a†t} ∈ a†(C(i)) such that if C^i_{a†t} ⊆ C^i_j, then Σ_{a′∈L^i_{≻a†}∩C^i_{a†t}} x_{ia′} < q^+(C^i_{a†t}).
We will prove the claim shortly. Observe that given any class C^i_k ⊃ C^i_j, Σ_{a′∈L^i_{≻a†}∩C^i_k} x_{ia′} < q^+(C^i_k): as α > 0, some applicant a_φ ∈ C^i_k ranking lower than a† has x_{ia_φ} = 1, and Constraint (2) enforces that Σ_{a′∈L_i∩C^i_k} x_{ia′} ≤ q^+(C^i_k).
Combining the above facts, we conclude that (i, a†) is a blocking pair. ⊓ ⊔
Proof of the Claim. We prove the claim by contradiction. Suppose that for every applicant a ∈ G, there exists some class C^i_{at} ∈ a(C(i)) with C^i_{at} ⊆ C^i_j and Σ_{a′∈L^i_{≻a}∩C^i_{at}} x_{ia′} = q^+(C^i_{at}). Let B be the set of classes C^i_k ⊆ C^i_j such that C^i_k contains an applicant a ∈ G and Σ_{a′∈L^i_{≻a}∩C^i_k} x_{ia′} = q^+(C^i_k) (which then equals Σ_{a′∈L^i_{⪰t^i_j}∩C^i_k} x_{ia′} due to Constraint (2)). For each class C^i_k ∈ ℜ(B),
Σ_{a∈L^i_{⪰t^i_j}∩C^i_k} x_{ia} = q^+(C^i_k) ≥ |t^i_j ∩ C^i_k| = Σ_{a′∈L^i_{⪰t^i_j}∩t^i_j∩C^i_k} x_{ia′} + |G ∩ C^i_k|, (12)
where the first inequality follows from the definition of the class-tuple. Now we have
q^+(C^i_j) − α − β ≥ Σ_{C^i_k∈ℜ(B)} Σ_{a′∈(L^i_{≻a‡}∩C^i_k)\t^i_j} x_{ia′} ≥ Σ_{C^i_k∈ℜ(B)} |G ∩ C^i_k| = |G| ≥ q^+(C^i_j) − α − β + 1,
a contradiction. Note that the first inequality follows from Constraint (2), the second inequality from (12), the equality right after is because every applicant in G belongs to some class in B, and the last inequality is due to (11). ⊓ ⊔
D Separation Oracle in Section 4.1
It is clear that Constraints (1)(2)(4) can be separated in polynomial time. So we assume that x satisfies these constraints and focus on finding a violated Constraint (3) and/or Constraint (5).
Separating Constraint (3)
We first make an observation. For each institute i, it suffices to check whether all the combs with exactly Q(i) teeth satisfy Constraint (3). To see this, suppose that there is a feasible tuple A_i with fewer than Q(i) applicants and x(K(i, S(A_i))) < |A_i|. Then we can add suitable applicants into A_i to get a feasible tuple Ā_i with exactly Q(i) applicants. Noticing that x(S(Ā_i)) ≤ x(S(A_i)), we have
x(K(i, S(Ā_i))) ≤ x(S(A_i)) + Σ_{a∈Ā_i} x(T(i, a)\{(i, a)}) ≤ x(K(i, S(A_i))) + |Ā_i| − |A_i| < |A_i| + |Ā_i| − |A_i| = |Ā_i|,
where the second inequality follows from our assumption that x satisfies Constraint (1).
To illustrate our idea, we first explain how to deal with the case that the original classification C(i) is just a partition over L i (before we add the pseudo root class C i ♯ ). We want to find out the tuple A i of length Q(i), whose lowest ranking applicant is a † , which gives the smallest x(K(i, S(A i ))). If we have this information for all possible a † , we are done. Note that because of our previous discussion, if there is no feasible tuple of length Q(i) whose lowest ranking applicant is a † , we can ignore those cases.
Our main idea is to decompose the value of x(K(i, S(A_i))) based on the classes and to use dynamic programming to find the combinations of tooth-applicants that give the smallest comb values. More precisely:
Definition 27. Assume that A^i_j ⊆ C^i_j, 0 ≤ |A^i_j| ≤ q^+(C^i_j), and all applicants in A^i_j rank higher than a†. Let x(A^i_j, a†) be the total value Σ x_{ia} over the pairs (i, a) with a ∈ C^i_j ∩ L^i_{≻a†} and (i, a) ∈ S(A^i_j), and let Z(C^i_j, s_j, a†) be the minimum, over all such tuples A^i_j with |A^i_j| = s_j, of Σ_{a∈A^i_j} x(T(i, a)\{(i, a)}) + x(A^i_j, a†).
Note that this definition requires that if x_{ia} contributes to x(A^i_j, a†), then a has to rank higher than a†, belong to C^i_j, and (i, a) has to be part of the shaft S(A^i_j). Suppose that we have properly stored all the possible values of Z(C^i_j, s_j, a†) and assume that a† ∈ C^i_{j′}. Then, with 0 ≤ s_j ≤ q^+(C^i_j) for each class C^i_j ≠ C^i_{j′} and 0 ≤ s_{j′} ≤ q^+(C^i_{j′}) − 1 for class C^i_{j′}, the tuple A_i with lowest-ranking applicant a† that gives the smallest comb value satisfies
x(K(i, S(A_i))) = x(T(i, a†)) + min_{s_j: Σ_{C^i_j∈C(i)} s_j = Q(i)−1} Σ_{C^i_j∈C(i)} Z(C^i_j, s_j, a†).
The above quantity can be calculated using standard dynamic programming technique. So the question boils down to how to calculate Z(C i j , s j , a † ). There are two cases.
For the induction step, let C i j be a non-leaf class and assume that a ‡ ∈ C i k ′ ∈ c(C i j ). To calculate Z(C i j , s j , a ‡ , a † ), we need to find out a feasible tuple A i j of size s j , all of whose applicants rank at least as high as a ‡ so that x(A i j , a † ) is minimized. Observe that a feasible tuple A i j can be decomposed into a set of tuples
A i j = C i k ∈c(C i j ) A i k , where A i k ⊆ C i k ∈ c(C i j ).
1. Suppose that s_j < q^+(C^i_j). Then by definition, x(S(A^i_j), a†) = Σ_{C^i_k∈c(C^i_j)} x(S(A^i_k), a†), so x(A^i_j, a†) decomposes over the subclasses. For each class C^i_k ∈ c(C^i_j), the minimum quantity Σ_{a∈A^i_k} x(T(i, a)\{(i, a)}) + x(S(A^i_k), a†) is exactly Z(C^i_k, s_k, a‡, a†). As a result, with 0 ≤ s_k ≤ q^+(C^i_k) for each class C^i_k ≠ C^i_{k′} and 0 ≤ s_{k′} ≤ q^+(C^i_{k′}) − 1 for class C^i_{k′}:
Z(C^i_j, s_j, a‡, a†) = x(T(i, a‡)) + min_{s_k: Σ s_k = s_j − 1} Σ_{C^i_k∈c(C^i_j)} Z(C^i_k, s_k, a‡, a†).
Thus, we can find out Z(C i j , s j , a ‡ , a † ) by dynamic programming. 2. Suppose that s j = q + (C i j ). Note that this time since the class C i j will be "saturated", the term x(S(A i j ), a † ) does not get any positive values x ia , provided that a ∈ C i j ∩ (L i ≻a † ∩ L i ≺a ‡ ). So x(S(A i j ), a † ) = C i k ∈c(C i j ) x(S(A i k ), a ‡ ) and this implies that
x(A i j , a † ) = Let a ‡ be the lowest ranking applicant that ranks higher than a ‡ . Then for each class, C i k ∈ c(C i j ), the minimum quantity a∈A i k x(T (i, a)\{(i, a)}) + x(S(A i k ), a ‡ ) is exactly Z(C i k , s k , a ‡ , a ‡ ). Assuming that for each class C i k = C i k ′ , let 0 ≤ s k ≤ q + (C i k ), and let 0 ≤ s k ′ ≤ q + (C i k ′ ) − 1, we have Z(C i j , s j , a ‡ , a † ) = x(T (i, a ‡ )) + min s k :
P s k =s j −1 C i k ∈c(C i j )
Z(C i k , s k , a ‡ , a ‡ ).
As before, this can be calculated by dynamic programming. ⊓ ⊔ Now choose the smallest Z(C i ♯ , Q(i) − 1, a ‡ , a † ) among all possible a ‡ who rank higher than a † and assume that A i ♯ is the corresponding tuple. It is easy to see that among all feasible tuples A i of length Q(i) whose lowest ranking applicant is a † , the one has the smallest comb value x(K(i, S(A i )), is exactly the tuple A i ♯ ∪ {a † }.
Separating Constraint (5) We again make use of dynamic programming. The idea is similar to the previous one and the task is much simpler, so we will be brief. Suppose that we are checking all the class-tuples T^i_j corresponding to class C^i_j. Let T^i_{j,a†} ⊆ T^i_j be the subset of class-tuples whose lowest-ranking applicant is a†. We need to find the class-tuple t^i_{j,a†} ∈ T^i_{j,a†} with the smallest value
x(T(i, a†)\{(i, a†)}) + Σ_{a∈t^i_{j,a†}\{a†}} x(T(i, a)\{(i, a)}),
and check whether this value is no less than Σ_{a∈C^i_j∩L^i_{≺a†}} x_{ia}. If it is, then we are sure that all class-tuples in T^i_{j,a†} satisfy Constraint (5); otherwise, we have found a violated constraint. The above quantity can be easily calculated by dynamic programming as before.
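To make this check concrete, here is a brute-force reference implementation of the Constraint-(5) separation for a single class, under the same assumed data structures as before. It enumerates class-tuples directly and is exponential in the class bound, unlike the dynamic program sketched above, so it is meant only as a specification to test against.

from itertools import combinations

def violated_class_tuple(i, Cj, x, inst_pref, app_pref, subclasses, q_plus, tol=1e-9):
    """Return a violating class-tuple of class Cj (a frozenset) for institute i,
    or None.  subclasses lists Cj together with all of its subclasses."""
    def rank(pref, z):
        return pref.index(z) if z in pref else len(pref)
    def tooth_minus(a):  # x(T(i, a) \ {(i, a)})
        return sum(v for (j, b), v in x.items() if b == a and
                   rank(app_pref[b], j) < rank(app_pref[b], i))
    members = [a for a in inst_pref[i] if a in Cj]        # i's preference order
    size = q_plus[(i, Cj)]
    for idx, a_dag in enumerate(members):
        below_mass = sum(x.get((i, b), 0.0) for b in members[idx + 1:])
        for rest in combinations(members[:idx], size - 1):
            tup = set(rest) | {a_dag}
            if any(len(tup & C) > q_plus[(i, C)] for C in subclasses):
                continue                                   # not a valid class-tuple
            if sum(tooth_minus(a) for a in tup) < below_mass - tol:
                return tup
    return None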
E A Counter Example for Section 4.2
The example shown in Figure 4 contains five stable matchings. If we apply the median choice operation on all of them, we get the stable matching µ 2 , which does not give institutes i 1 and i 2 their lexicographical median outcome.
Institute Preferences, Classifications, Class Bounds:
i1: ax ay a1 a2 a3 a4; C^1_1 = {a1, a2}, C^1_2 = {a3, a4}; Q(i1) = 2, q^+(C^1_1) = 1, q^+(C^1_2) = 1
i2: az aw a2 a1 a4 a3; C^2_1 = {a1, a2}, C^2_2 = {a3, a4}; Q(i2) = 2, q^+(C^2_1) = 1, q^+(C^2_2) = 1
i3: a1 a2 a3 a4 ax ay az aw; Q(i3) = 4
Applicant Preferences:
a1: i2 i1 i3;  a2: i1 i2 i3;  a3: i2 i1 i3;  a4: i1 i2 i3;  ax: i3 i1;  ay: i3 i1;  az: i3 i2;  aw: i3 i2
Stable Matchings:
µ1 = {(i1; ax, ay), (i2; az, aw), (i3; a1, a2, a3, a4)}
µ2 = {(i1; a1, a3), (i2; a2, a4), (i3; ax, ay, az, aw)}
µ3 = {(i1; a1, a4), (i2; a2, a3), (i3; ax, ay, az, aw)}
µ4 = {(i1; a2, a3), (i2; a1, a4), (i3; ax, ay, az, aw)}
µ5 = {(i1; a2, a4), (i2; a1, a3), (i3; ax, ay, az, aw)}
Fig. 4. An example of a median choice stable matching which does not give the institutes their lexicographically median outcome.
| 23,073 |
0906.4026
|
1564940531
|
Even the best information retrieval model cannot always identify the most useful answers to a user query. This is in particular the case with web search systems, where it is known that users tend to minimise their effort to access relevant information. It is, however, believed that the interaction between users and a retrieval system, such as a web search engine, can be exploited to provide better answers to users. Interactive Information Retrieval (IR) systems, in which users access information through a series of interactions with the search system, are concerned with building models for IR, where interaction plays a central role. There are many possible interactions between a user and a search system, ranging from query (re)formulation to relevance feedback. However, capturing them within a single framework is difficult and previously proposed approaches have mostly focused on relevance feedback. In this paper, we propose a general framework for interactive IR that is able to capture the full interaction process in a principled way. Our approach relies upon a generalisation of the probability framework of quantum physics, whose strong geometric component can be a key towards a successful interactive IR model.
|
More general and holistic models have been proposed to build an interactive retrieval system (e.g. Fuhr @cite_12 and @cite_5 ) that rely on a decision-theoretic framework to determine what is the best next action the system should perform, i.e. what documents should be presented to the user. In such approaches, decisions are made based on the relevance of documents when considering past interactions. In this paper, we focus on the latter problem and do not discuss how to select the best next action the system has to perform.
|
{
"abstract": [
"Information retrieval systems (e.g., web search engines) are critical for overcoming information overload. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance. For example, a tourist and a programmer may use the same word \"java\" to search for different information, but the current search systems would return the same results. In this paper, we study how to infer a user's interest from the user's search context and use the inferred implicit user model for personalized search. We present a decision theoretic framework and develop techniques for implicit user modeling in information retrieval. We develop an intelligent client-side web search agent (UCAIR) that can perform eager implicit feedback, e.g., query expansion based on previous queries and immediate result reranking based on clickthrough information. Experiments on web search show that our search agent can improve search accuracy over the popular Google search engine.",
"The classical Probability Ranking Principle (PRP) forms the theoretical basis for probabilistic Information Retrieval (IR) models, which are dominating IR theory since about 20 years. However, the assumptions underlying the PRP often do not hold, and its view is too narrow for interactive information retrieval (IIR). In this article, a new theoretical framework for interactive retrieval is proposed: The basic idea is that during IIR, a user moves between situations. In each situation, the system presents to the user a list of choices, about which s he has to decide, and the first positive decision moves the user to a new situation. Each choice is associated with a number of cost and probability parameters. Based on these parameters, an optimum ordering of the choices can the derived--the PRP for IIR. The relationship of this rule to the classical PRP is described, and issues of further research are pointed out."
],
"cite_N": [
"@cite_5",
"@cite_12"
],
"mid": [
"2108168165",
"1973693034"
]
}
| 0 |
||
0906.4026
|
1564940531
|
Even the best information retrieval model cannot always identify the most useful answers to a user query. This is in particular the case with web search systems, where it is known that users tend to minimise their effort to access relevant information. It is, however, believed that the interaction between users and a retrieval system, such as a web search engine, can be exploited to provide better answers to users. Interactive Information Retrieval (IR) systems, in which users access information through a series of interactions with the search system, are concerned with building models for IR, where interaction plays a central role. There are many possible interactions between a user and a search system, ranging from query (re)formulation to relevance feedback. However, capturing them within a single framework is difficult and previously proposed approaches have mostly focused on relevance feedback. In this paper, we propose a general framework for interactive IR that is able to capture the full interaction process in a principled way. Our approach relies upon a generalisation of the probability framework of quantum physics, whose strong geometric component can be a key towards a successful interactive IR model.
|
The most closely related work in that field is that of Melucci @cite_19 , which computes the probability of having a given context @math , where @math is the probability distribution generated by the document vector @math , and the subspace @math is identified with the context and is built through user interaction. More specifically, given a set of documents deemed relevant, either using user feedback or pseudo-relevance feedback, one can compute a subspace @math corresponding to the principal components of the subspace spanned by those documents. A document vector fully included in this subspace will be fully relevant (probability of 1), while an orthogonal one will be fully irrelevant (zero probability). Melucci's approach is dual to ours, in the sense that instead of representing users in an IN space, he considers documents in a contextual space. Our approach, which relies on an IN space, facilitates the use of the different quantum evolution mechanisms to model the interaction between the user and the retrieval system.
|
{
"abstract": [
"Information retrieval (IR) models based on vector spaces have been investigated for a long time. Nevertheless, they have recently attracted much research interest. In parallel, context has been rediscovered as a crucial issue in information retrieval. This article presents a principled approach to modeling context and its role in ranking information objects using vector spaces. First, the article outlines how a basis of a vector space naturally represents context, both its properties and factors. Second, a ranking function computes the probability of context in the objects represented in a vector space, namely, the probability that a contextual factor has affected the preparation of an object."
],
"cite_N": [
"@cite_19"
],
"mid": [
"2015419638"
]
}
| 0 |
||
0906.3461
|
2150496197
|
A sensor network is a collection of wireless devices that are able to monitor physical or environmental conditions. These devices are expected to operate autonomously, be battery powered and have very limited computational capabilities. This makes the task of protecting a sensor network against misbehavior or possible malfunction a challenging problem. In this document we discuss performance of Artificial immune systems (AIS) when used as the mechanism for detecting misbehavior. We concentrate on performance of respective genes; genes are necessary to measure a network's performance from a sensor's viewpoint. We conclude that the choice of genes has a profound influence on the performance of the AIS. We identified a specific MAC layer based gene that showed to be especially useful for detection. We also discuss implementation details of AIS when used with sensor networks.
|
In @cite_20 @cite_7 the authors introduced an AIS-based misbehavior detection system for ad hoc wireless networks. They used Glomosim for simulating data traffic; their setup was an area of 800 @math 600m with 40 mobile nodes (speed 1 m/s), of which 5-20 are misbehaving; the routing protocol was DSR. Four genes were used to capture local behavior at the network layer. The misbehavior implemented is a subset of the misbehavior introduced in this paper; their observed detection rate is about 55%. In @cite_10 the authors describe an AIS able to detect anomalies at the transport layer of the OSI protocol stack; only a wired TCP/IP network is considered. Self is defined as normal pairwise connections. Each detector is represented as a 49-bit string. The pattern matching is based on r-contiguous bits with a fixed @math .
|
{
"abstract": [
"We describe an artificial immune system (AIS) that is distributed, robust, dynamic, diverse and adaptive. It captures many features of the vertebrate immune system and places them in the context of the problem of protecting a network of computers from illegal intrusions.",
"In mobile ad-hoc networks, nodes act both as terminals and information relays, and participate in a common routing protocol, such as Dynamic Source Routing (DSR). The network is vulnerable to routing misbehavior, due to faulty or malicious nodes. Misbehavior detection systems aim at removing this vulnerability. In this paper we investigate the use of an Artificial Immune System (AIS) to detect node misbehavior in a mobile ad-hoc network using DSR. The system is inspired by the natural immune system of vertebrates. Our goal is to build a system that, like its natural counterpart, automatically learns and detects new misbehavior. We describe the first step of our design; it employs negative selection, an algorithm used by the natural immune system. We define how we map the natural immune system concepts such as self, antigen and antibody to a mobile ad-hoc network, and give the resulting algorithm for misbehavior detection. We implemented the system in the network simulator Glomosim; we present detection results and discuss how the system parameters impact the results. Further steps will extend the design by using an analogy to the innate system, danger signals, costimulation and memory cells.",
"In mobile ad-hoc networks, nodes act both as terminals and information relays, and they participate in a common routing protocol, such as Dynamic Source Routing (DSR). The networks are vulnerable to routing misbehavior, due to faulty or malicious nodes. Misbehavior detection systems aim at removing this vulnerability. For this purpose, we use an Artificial Immune System (AIS), a system inspired by the human immune system (HIS). Our goal is to build a system that, like its natural counterpart, automatically learns and detects new misbehavior."
],
"cite_N": [
"@cite_10",
"@cite_7",
"@cite_20"
],
"mid": [
"2495965124",
"2107301538",
"2137046464"
]
}
|
AIS for Misbehavior Detection in Wireless Sensor Networks: Performance and Design Principles
|
Sensor networks [21] can be described as a collection of wireless devices with limited computational abilities which are, due to their ad-hoc communication manner, vulnerable to misbehavior and malfunction. It is therefore necessary to support them with a simple, computationally friendly protection system.
Due to the limitations of sensor networks, there has been an on-going interest in providing them with a protection solution that would fulfill several basic criteria. The first criterion is the ability of self-learning and self-tuning. Because maintenance of ad hoc networks by a human operator is expected to be sporadic, they have to have a built-in autonomous mechanism for identifying user behavior that could be potentially damaging to them. This learning mechanism should itself minimize the need for human intervention, therefore it should be self-tuning to the maximum extent. It must also be computationally conservative and meet the usual condition of a high detection rate. The second criterion is the ability to undertake an action against one or several misbehaving users. This should be understood in the wider context of co-operating wireless devices acting in collusion in order to suppress or minimize the adverse impact of such misbehavior. Such co-operation should have a low message complexity because both the bandwidth and the battery life are scarce. The third and last criterion requires that the protection system does not itself introduce new weaknesses to the systems that it should protect.
An emerging solution that could facilitate implementation of the above criteria is the Artificial immune system (AIS). AIS are based on principles adapted from the Human immune system (HIS) [18,5,17]; the basic ability of the HIS is the efficient detection of potentially harmful foreign agents (viruses, bacteria, etc.). The goal of an AIS, in our setting, is the identification of nodes whose behavior could negatively impact the stated mission of the sensor network.
One of the key design challenges of AIS is to define a suitable set of efficient genes. Genes form a basis for deciding whether a node misbehaves. They can be characterized as measures that describe a network's performance from a node's viewpoint. Given their purpose, they must be easy to compute and robust against deception.
Misbehavior in wireless sensor networks can take on different forms: packet dropping, modification of data structures important for routing, modification of packets, skewing of the network's topology or creating fictitious nodes (see [13] for a more complete list). The reasons for sensors (possibly fully controlled by an attacker) to execute any form of misbehavior can range from the desire to save battery power to making a given wireless sensor network non-functional. Malfunction can also be considered a type of unwanted behavior.
Artificial Immune Systems
Learning
The process of T-cell maturation in the thymus is used as an inspiration for learning in AIS. The maturation of T-cells (detectors) in the thymus is the result of a pseudorandom process. After a T-cell is created (see Fig. 1), it undergoes a censoring process called negative selection. During negative selection, T-cells that bind self are destroyed. The remaining T-cells are introduced into the body. The recognition of non-self is then done by simply comparing T-cells that survived negative selection with a suspected non-self. This process is depicted in Fig. 2. It is possible that the self set is incomplete while a T-cell matures in the thymus (the tolerization period). This could lead to producing T-cells that should have been removed from the thymus and can cause an autoimmune reaction, i.e. it leads to false positives.
A deficiency of the negative selection process is that alone it is not sufficient for assessing the damage that a non-self antigen could cause. For example, many bacteria that enter our body are not harmful, therefore an immune reaction is not necessary. T-cells, actors of the adaptive immune system, require co-stimulation from the innate immune system in order to start acting. The innate immune system is able to recognize the presence of harmful non-self antigens and tissue damage, and signal this to certain actors of the adaptive immune system.
The random-generate-and-test approach for producing T-cells (detectors) described above is analyzed in [11]. In general, the number of candidate detectors needs to be exponential in the size of the self set (if a matching rule with a fixed matching probability is used). Another problem is a consistent underfitting of the non-self set; there exist "holes" in the non-self set that are undetectable. In theory, for some matching rules, the number of holes can be very unfavorable [28]. In practical terms, the effect of holes depends on the characteristics of the non-self set, the representation and the matching rule [15]. The advantage of this algorithm is its simplicity and good experimental results in cases when the number of detectors to be produced is fixed and small [26]. A review of other approaches to detector computation can be found in [2].
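As a concrete illustration of the random-generate-and-test process just described, the fragment below is a minimal Python sketch (ours, with invented names) of negative selection: random candidate detectors are censored against the self set, and the survivors are later used to classify antigens. The matching rule is passed in as a function; the r-contiguous rule used later in this paper is one possible choice.

```python
import random

def generate_detectors(self_set, n_detectors, n_bits, matches, max_tries=1_000_000):
    # Censoring: keep only candidates that match no self string (negative selection).
    detectors = []
    tries = 0
    while len(detectors) < n_detectors and tries < max_tries:
        tries += 1
        candidate = ''.join(random.choice('01') for _ in range(n_bits))
        if not any(matches(candidate, s) for s in self_set):
            detectors.append(candidate)
    return detectors

def is_non_self(antigen, detectors, matches):
    # Detection: an antigen is flagged as non-self if any surviving detector matches it.
    return any(matches(d, antigen) for d in detectors)
```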
Sensor Networks
A sensor network can be defined in a graph theoretic framework as follows: a sensor network is a net N = (n(t), e(t)) where n(t), e(t) are the sets of nodes and edges at time t, respectively. Nodes correspond to sensors that wish to communicate with each other. An edge between two nodes A and B is said to exist when A is within the radio transmission range of B and vice versa. The imposed symmetry of edges is a usual assumption of many mainstream protocols. The change in the cardinality of the sets n(t), e(t) can be caused by switching on/off one of the sensors, failure, malfunction, removal, signal propagation, link reliability and other factors.
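The definition above maps directly onto code. The sketch below (an illustration under an assumed unit-disc radio model, with our own names) builds the node and edge sets from a snapshot of node positions at time t; edges are symmetric by construction, as assumed by the protocols mentioned here. For the scenario used later in this paper, radio_range would be 100 m.

```python
import math

def build_network(positions, radio_range):
    # positions: dict mapping a node id to its (x, y) coordinates at time t.
    # Returns (n, e): the set of nodes and the set of undirected edges.
    nodes = set(positions)
    edges = set()
    for a in nodes:
        for b in nodes:
            if a < b and math.dist(positions[a], positions[b]) <= radio_range:
                edges.add((a, b))        # A and B are mutually within radio range
    return nodes, edges
```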
Data exchange in a point-to-point (uni-cast) scenario usually proceeds as follows: a user initiated data exchange leads to a route query at the network layer of the OSI stack. A routing protocol at that layer attempts to find a route to the data exchange destination. This request may result in a path of non-unit length. This means that a data packet, in order to reach the destination, has to rely on successive forwarding by intermediate nodes on the path. An example of an on-demand routing protocol often used in sensor networks is DSR [20]. Route search in this protocol is started only when a route to a destination is needed. This is done by flooding the network with RREQ (Route Request) control packets. The destination node or an intermediate node that knows a route to the destination will reply with a RREP (Route Reply) control packet. This RREP follows the route back to the source node and updates routing tables at each node that it traverses. A RERR (Route Error) packet is sent to the connection originator when a node finds out that the next node on the forwarding path is not replying.
At the MAC layer of the OSI protocol stack, the medium reservation is often contention based. In order to transmit a data packet, the IEEE 802.11 MAC protocol uses carrier sensing with an RTS-CTS-DATA-ACK handshake (RTS = Ready to send, CTS = Clear to send, ACK = Acknowledgment). Should the medium not be available or the handshake fail, an exponential back-off algorithm is used. This is combined with a mechanism that makes it easier for neighboring nodes to estimate transmission durations. This is done by an exchange of duration values and their subsequent storing in a data structure known as the Network allocation vector (NAV). With the goal of saving battery power, researchers have suggested that a sleep-wake-up schedule for nodes would be appropriate. This means that nodes do not listen continuously to the medium, but switch themselves off and wake up again after a predetermined period of time. Such a sleep and wake-up schedule is, similarly to the duration values, exchanged among nodes. An example of a MAC protocol, designed specifically for sensor networks, that uses such a schedule is the S-MAC [29]. A sleep and wake-up schedule can severely limit operation of a node in promiscuous mode. In promiscuous mode, a node listens to the on-going traffic in the neighborhood and collects information from the overheard packets. This technique is used e.g. in DSR for improved propagation of routing information. Movement of nodes can be modeled by means of a mobility model. A well-known mobility model is the Random waypoint model [20]. In this model, nodes move from the current position to a new randomly generated position at a predetermined speed. After reaching the new destination a new random position is computed. Nodes pause at the current position for a time period t before moving to the new random position.
For more information on sensor networks, we refer the reader to [21].
Summary of Results
Motivated by the positive results reported in [17,26] we have undertaken a detailed performance study of AIS with focus on sensor networks. The general conclusions that can be drawn from the study presented in this document are:
1. Given the ranges of input parameters that we used and considering the computational capabilities of current sensor devices, we conclude that AIS based misbehavior detection offers a decent detection rate.
2. One of the main challenges in designing well performing AIS for sensor networks is the set of "genes". This is similar to observations made in [24].
3. Our results suggest that to increase the detection performance, an AIS should benefit from information available at all layers of the OSI protocol stack; this also includes detection performance with regard to a simplistic flavor of misbehavior such as packet dropping. This supports ideas briefly discussed in [30], where the authors suggest that information available at the application layer deserves more attention. 4. We observed that, somewhat surprisingly, a gene based purely on the MAC layer significantly contributed to the overall detection performance. This gene poses fewer limitations when a MAC protocol with a sleep-wake-up schedule such as the S-MAC [29] is used.
5. It is desirable to use genes that are "complementary" with respect to each other. We demonstrated that two genes, one that measures correct forwarding of data packets and one that indirectly measures the medium contention, have exactly this property. 6. We only used a single instance of the learning and detection mechanism per node. This is different from the approach used in [17,26], where one instance was used for each of m possible neighbors. Our performance results show that the approach in [17,26] may not be feasible for sensor networks. It may allow for an easy Sybil attack and, in general, m = n − 1 instances might be necessary, where n is the total number of sensors in the network. Instead, we suggest that flagging a node as misbehaving should, if possible, be based on detection at several nodes. 7. Fewer than 5% of the detectors were used in detecting misbehavior. This suggests that many of the detectors do not comply with constraints imposed by the communications protocols; this is an important fact when designing an AIS for sensor networks because the memory capacity at sensors is expected to be very limited.
8. The data traffic properties seem not to impact the performance. This is demonstrated by similar detection performance when data traffic is modeled as a constant bit rate stream and as a Poisson distributed data packet stream, respectively. 9. We were unable to distinguish between nodes that misbehave (e.g. deliberately drop data packets) and nodes with a behavior resembling misbehavior (e.g. that drop data packets due to medium contention). This motivates the use of danger signals as described in [1,16]. The approach applied in [26] does not, however, completely fit sensor networks since these might implement only a simplified version of the transport layer.
AIS for Sensor Networks: Design Principles
In our approach, each node produces and maintains its own set of detectors. This means that we applied a direct one-to-one mapping between a human body with a thymus and a node. We represent self, non-self and detector strings as bit-strings. The matching rule employed is the r-contiguous bits matching rule. Two bitstrings of equal length match under the r-contiguous matching rule if there exists a substring of length r at position p in each of them and these substrings are identical. Detectors are produced by the process shown in Fig. 1, i.e. by means of negative selection when detectors are created randomly and tested against a set of self strings. Each antigen consists of several genes. Genes are performance measures that a node can acquire locally without the help from another node. In practical terms this means that an antigen consists of x genes; each of them encodes a performance measure, averaged in our case over a time window. An antigen is then created by concatenating the x genes.
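The r-contiguous bits matching rule just described can be implemented in a few lines; the sketch below is our illustration and can be plugged into the negative-selection fragment shown earlier (bit-strings are plain Python strings of '0' and '1').

```python
def r_contiguous_match(a, b, r):
    # True iff the two equal-length bit-strings contain an identical substring
    # of length r starting at the same position p.
    assert len(a) == len(b)
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0   # length of the current run of agreeing bits
        if run >= r:
            return True
    return False
```

For example, r_contiguous_match('1101011001', '0101010111', 5) returns True because the two strings agree on five contiguous positions.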
The choice of suitable genes is limited by the simplified OSI protocol stack of sensors. For example, Mica2 sensors [9] using the TinyOS operating system do not guarantee any end-to-end connection reliability (transport layer), leaving only data traffic at the lower layers for consideration.
Let us assume that the routing protocol finds for a connection the path s s , s 1 , ..., s i , s i+1 , s i+2 , ..., s d from the source node s s to the destination node s d , where s s ≠ s d and s i+1 ≠ s d . We have used the following genes to capture certain aspects of MAC and routing layer traffic information (we averaged over a time period (window size) of 500 seconds):
MAC Layer:
#1 Ratio of complete MAC layer handshakes between nodes s i and s i+1 to the number of RTS packets sent by s i to s i+1. If there is no traffic between two nodes this ratio is set to ∞ (a large number). This ratio is averaged over a time period. A complete handshake is defined as a completed sequence of RTS, CTS, DATA, ACK packets between s i and s i+1.
#2 Ratio of data packets sent from s i to s i+1 that were subsequently forwarded by s i+1 to s i+2. If there is no traffic between two nodes this ratio is set to ∞ (a large number). This ratio is computed by s i in promiscuous mode and, as in the previous case, averaged over a time period. This gene was adapted from the watchdog idea in [25].
#3 Time delay that a data packet spends at s i+1 before being forwarded to s i+2 . The time delay is observed by s i in promiscuous mode. If there is no traffic between two nodes the time delay is set to zero. This measure is averaged over a time period. This gene is a quantitative extension of the previous gene.
Routing Layer:
#4 The same ratio as in #2 but computed separately for RERR routing packets.
#5 The same delay as in #3 but computed separately for RERR routing packets.
Gene #1 can be characterized as MAC layer quality oriented - it indirectly measures the medium contention level. The remaining genes are watchdog oriented. This means that they fit a certain kind of misbehavior more strictly. Gene #2 can help detect whether packets get correctly forwarded; Gene #3 can help detect whether forwarding of packets gets intentionally delayed. As we will show later, for the particular type of misbehavior (packet dropping) that we applied, the first two genes come out as "the strongest". The disadvantage of the watchdog based genes is that, due to limited battery power, nodes could operate using a sleep-wake-up schedule similar to the one used in the S-MAC. This would mean that the node s i has to stay awake until the node s i+1 (the monitored node) correctly transmits to s i+2 . The consequence would be a longer wake-up time and possible restrictions in publishing sleep-wake-up schedules.
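To make the gene definitions concrete, the following sketch (ours; the counter names and the INF sentinel are assumptions, not the authors' code) computes Genes #1 and #2 from counters that node s_i can maintain locally during one 500-second window.

```python
INF = float('inf')   # sentinel for "no traffic in this window", per the definitions above

def gene_1(complete_handshakes, rts_sent):
    # Gene #1: complete RTS-CTS-DATA-ACK handshakes with s_{i+1} / RTS packets sent to s_{i+1}.
    return INF if rts_sent == 0 else complete_handshakes / rts_sent

def gene_2(data_sent_to_next_hop, data_overheard_forwarded):
    # Gene #2 (watchdog): data packets handed to s_{i+1} that s_i later overheard
    # being forwarded to s_{i+2}, relative to all data packets handed to s_{i+1}.
    return INF if data_sent_to_next_hop == 0 else data_overheard_forwarded / data_sent_to_next_hop
```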
In [24] the authors applied a different set of genes, based only on the DSR routing protocol. The observed set of events was the following: A = RREQ sent, B = RREP sent, C = RERR sent, D = DATA sent and the IP source address is not of the monitored (neighboring) node, E = RREQ received, F = RREP received, G = RERR received, H = DATA received and the IP destination address is not of the monitored node. The events D and H take into consideration that the source and destination nodes of a connection might appear as misbehaving, as they seem to "deliberately" create and delete data packets. Their four genes are then defined as counts of specific sequences over these events, written with the Kleene star operator * (zero or more occurrences of any event(s) are possible); the time period (window size) in their case was 10s. Similar to our watchdog genes, these genes impose additional requirements on MAC protocols such as the S-MAC. Their dependence on the operation in promiscuous mode is, however, more pronounced as a node has to continuously observe packet events at all monitored nodes.
The research in the area of what and to what extent can be or should be locally measured at a node, is independent of the learning mechanism used (negative selection in both cases). Performance of an AIS can partly depend on the ordering and the number of used genes. Since longer antigens (consisting of more genes) indirectly imply more candidate detectors, the number of genes should be carefully considered. Given x genes, it is possible to order them in x! different ways. In our experience, the rules for ordering genes and the number of genes can be summed up as follows:
1) Keep the number of genes small. In our experiments, we show that with respect to the learning mechanism used and the expected deployment (sensor networks), 2-3 genes are enough for detecting a basic type of misbehavior.
2) Order genes either randomly or use a predetermined fixed order. Defining a utility relation between genes and ordering genes with respect to it can, in general, lead to problems that are considered intractable. Our results suggest, however, that it is important to understand the relations between different genes, since genes are able to complement each other; this can lead to their increased mutual strength. On the other hand, random ordering adds to the robustness of the underlying AIS: it is more difficult for an attacker to deceive the system, since he does not know how the genes are being used. It is currently an open question how to strike a balance between the two.
3) Genes cannot be considered in isolation. Our experiments show that when a detector matched an antigen under the r-contiguous matching rule, the match usually spanned several genes. This motivates the design of matching rules that do not limit matching to a few neighboring genes and offer more flexibility, while still requiring that a gene remain a partly atomic unit.
Learning and Detection
Learning and detection are done by applying the mechanisms shown in Figs. 1 and 2. The detection itself is very straightforward. In the learning phase, a misbehavior-free period (see [1] on possibilities for circumventing this problem) is necessary so that nodes get a chance to learn what the normal behavior is. When implementing the learning phase, the designer gets to choose from two possibilities: 1) Learning and detection at a node are implemented for each neighboring node separately. This means that different antigens have to be computed for each neighboring node, detector computation is different for each neighboring node and, subsequently, detection is different for each neighboring node. The advantage of this approach is that the node is able to directly determine which neighboring node misbehaves; the disadvantage is that m instances (m is the number of neighbors, or node degree) of the negative selection mechanism have to be executed; this can be computationally prohibitive for sensor networks as m can, in general, be equal to the total number of sensors. This allows for an easy Sybil attack [13] in which a neighbor would create several identities; the node would then be unable to recognize that these identities belong to the same neighbor. This approach was used in [26,24].
2) Learning and detection at a node get implemented in a single instance for all neighboring nodes. This means a node is able to recognize anomaly (misbehavior) but it may be unable to determine which one from the m neighboring nodes misbehaves. This implies that nodes would have to cooperate when detecting a misbehaving node, exchange anomaly information and be able to draw a conclusion from the obtained information. An argument for this approach is that in order to detect nodes that misbehave in collusion, it might be necessary to rely to some extent on information exchange among nodes, thus making this a natural solution to the problem. We have used this approach; a postprocessing phase (using the list of misbehaving nodes) was necessary to determine whether a node was correctly flagged as misbehaving or not.
We find the second approach to be better suited for wireless sensor networks, since it is less computationally demanding. We are unable, at this time, to estimate how often the complete detector set would need to be recomputed.
[Figure 3: A four-layer architecture aimed at protecting sensor networks against misbehavior and abuse. Its layers are Data Collection and Preprocessing, Learning, Local and Cooperative Detection, and Local and Cooperative Response.]
Both approaches can be classified within the four-layer architecture (Fig. 3) that we introduced in [14]. The lowermost layer, Data collection and preprocessing, corresponds to the genes' computation and antigen construction. The Learning layer corresponds to the negative selection process. The next layer, Local and co-operative detection, suggests that an AIS should benefit from both local and cooperative detection. Both our setup and the setup described in [26,24] apply only local detection. The uppermost layer, Local and cooperative response, implies that an AIS should also have the capability to undertake an action against one or several misbehaving nodes; this should be understood in the wider context of co-operating wireless devices acting in collusion in order to suppress or minimize the adverse impact of such misbehavior. To our best knowledge, there is currently no AIS implementation for sensor networks taking advantage of this layer.
Which r is the correct one? An interesting technical problem is to tune the r parameter for the r-contiguous matching rule so that the underlying AIS offers good detection and false positives rates. One possibility is a lengthy simulation study such as this one. Through multiparameter simulation we were able to show that r = 10 offers the best performance for our setup. In [12] we experimented with the idea of "growing" and "shrinking" detectors; this idea was motivated by [19]. The initial r_0 for a growing detector can be chosen as r_0 = ⌈l/2⌉, where l is the detector length. The goal is to find the smallest r such that a candidate detector does not match any self antigen. This means that, initially, a larger (more specific) r is chosen; the smallest r that fulfills the above condition can then be found through binary search. For shrinking detectors, the approach is reciprocal. Our goal was to show that such growing or shrinking detectors would offer a better detection or false positives rate. Short of proving this in a statistically significant manner, we observed that the growing detectors can be used for self-tuning the r parameter. The average r value was close to the r determined through simulation (the setup in that case was different from the one described in this document).
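The binary search for the smallest valid r can be sketched as follows (our illustration of the idea, not the implementation from [12]); it reuses the r_contiguous_match helper defined earlier. Since a detector that matches a self string for some r also matches it for every smaller r, the predicate is monotone in r and binary search applies.

```python
def smallest_valid_r(candidate, self_set):
    # Returns the smallest r such that `candidate` matches no self antigen under the
    # r-contiguous rule, or None if even r = len(candidate) still matches some self string.
    lo, hi = 1, len(candidate)
    best = None
    while lo <= hi:
        r = (lo + hi) // 2                      # the first probe is close to ceil(l/2)
        if any(r_contiguous_match(candidate, s, r) for s in self_set):
            lo = r + 1                          # still matches self: r must grow (more specific)
        else:
            best = r                            # valid detector: try a smaller (more general) r
            hi = r - 1
    return best
```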
Further Optimizations
Our experiments show that only a small number of detectors ever get used (less than 5%). The reason is that they are produced in a random way, without considering the structure of the protocols. For example, a detector that is able to detect whether i) data packets got correctly transmitted and ii) 100% of all MAC layer handshakes were incomplete is superfluous, as this case should never happen. In [8], the authors conclude: "... uniform coverage of non-self space is not only unnecessary, it is impractical; non-self space is too big". Application driven knowledge can be used to set up a rule based system that would exclude infeasible detectors; see [10] for a rule based system aimed at improved coverage of the non-self set. In [17], it is suggested that unused detectors should get deleted and the lifetime of useful detectors should be extended.
Misbehavior
In a companion paper [13], we have reviewed different types of misbehavior at the MAC, network and transport layers of the OSI protocol stack. We note that solutions to many of these attacks have already been proposed; these are, however, specific to a given attack. Additionally, due to the limitations of sensor networks, these solutions cannot be directly transferred.
The appeal of AIS based misbehavior detection rests on its simplicity and applicability in an environment that is extremely computationally and bandwidth limited. Misbehavior in sensor networks does not have to be executed by sensors themselves; one or several computationally more powerful platforms (laptops) can be used for the attack. On the other hand, a protection using such more advanced computational platforms is, due to e.g. the need to supply them continuously with electric power, harder to imagine. It would also create a point of special interest for the possible attackers.
Experimental Setup
The purpose of our experiments was to show that AIS are a viable approach for detecting misbehavior in sensor networks. Furthermore, we wanted to cast light on internal performance of an AIS designed to protect sensor networks. One of our central goals was to provide an in-depth analysis of relative usefulness of genes.
Definitions of input and output parameters: The input parameters for our experiments were: r parameter for the r-contiguous matching rule, the (desired) number of detectors and misbehavior level. Misbehavior was modeled as random packet dropping at selected nodes.
The performance (output) measures were arithmetic averages and 95% confidence intervals ci 95% of the detection rate, number of false positives, real time to compute detectors, data traffic rate at nodes, number of iterations to compute detectors (number of random tries), number of non-valid detectors, number of different (unique) antigens in a run or a time window, and number of matches for each gene. The detection rate dr is defined as dr = dns/ns, where dns is the number of detected non-self strings and ns is the total number of non-self strings. A false positive in our definition is a string that is not self but can still be the result of an anomaly whose effects are identical to those of a misbehavior. A non-valid detector is a candidate detector that matches a self string and must therefore be removed.
The number of matches for each gene was evaluated using the r-contiguous matching rule; we considered two cases: i) the two bit-strings are matched from left to right and only the first such match is reported (matching gets interrupted), ii) the two bit-strings are matched from left to right and all possible matches are reported. The time complexity of these two approaches is O(r(l − r)) and Θ(r(l − r)), respectively; r ≤ l, where l is the bit-string length. The first approach is exactly what we used when computing the real time necessary for negative selection; the second approach was used when our goal was to evaluate the relative usefulness of each gene.
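The two evaluation modes can be sketched as follows (our reconstruction of the bookkeeping, not the authors' code); given a match position and the 10-bit gene width used below, the genes spanned by the match can then be credited.

```python
def match_positions(detector, antigen, r, first_only=True):
    # Positions p at which detector and antigen share an identical substring of length r.
    positions = []
    for p in range(len(antigen) - r + 1):
        if detector[p:p + r] == antigen[p:p + r]:
            positions.append(p)
            if first_only:           # mode i): stop at the first match
                break
    return positions                 # mode ii): first_only=False reports all matches

def genes_spanned(position, r, gene_bits=10):
    # Indices of the genes covered by a match that starts at `position`.
    first = position // gene_bits
    last = (position + r - 1) // gene_bits
    return list(range(first, last + 1))
```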
Scenario description: We wanted to capture "self" and "non-self" packet traffic in a large enough synthetic static sensor network and test whether, using an AIS, we are able to recognize non-self, i.e. misbehavior. The topology of this network was determined by making a snapshot of 1,718 mobile nodes (each with a 100m radio radius) moving in an area of 2,900m×2,950m as prescribed by the random waypoint mobility model; see Figure 5(a). The motivation for using this movement model and then creating a snapshot comes from the results in our previous paper [7], which deals with the structural robustness of sensor networks. Our preference was to use a slightly bigger network than might be necessary, rather than a network with unknown properties. The computational overhead is negligible; simulation real time mainly depends on the number of events that require processing. Idle nodes increase memory requirements, but memory availability at the computers was in our case not a bottleneck.
We chose source and destination pairs for each connection so that several alternative independent routes exist; the idea was to benefit from route repair and route acquisition mechanisms of the DSR routing protocol, so that the added value of AIS based misbehavior detection is obvious.
We used 10 CBR (Constant bit rate) connections. The connections were chosen so that their length is ∼7 hops and so that these connections share some common intermediate nodes; see Figure 5(b). For each packet received or sent by a node we have captured the following information: IP header type (UDP, 802.11 or DSR in this case), MAC frame type (RTS, CTS, DATA, ACK in the case of 802.11), current simulation clock, node address, next hop destination address, data packet source and destination address and packet size.
Encoding of self and non-self antigens: Each of the five genes was transformed into a 10-bit signature where each bit defines an interval of a gene-specific value range. We created self and non-self antigen strings by concatenation of the defined genes. Each self and non-self antigen therefore has a size of 50 bits. The interval representation was chosen in order to avoid carry-bits (Gray coding is an alternative solution).
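A possible reading of this encoding is sketched below (our assumption: one bit per interval, set for the interval containing the value; the paper does not give the interval boundaries, so equal-width bins over an assumed per-gene range are used, with the "no traffic" sentinel mapped to the last interval).

```python
def encode_gene(value, lo, hi, bits=10):
    # 10-bit signature: bit k is set iff `value` falls into the k-th of `bits`
    # equal-width intervals of [lo, hi]; out-of-range values are clamped.
    if value == float('inf'):
        k = bits - 1                                   # 'no traffic' sentinel
    else:
        frac = (min(max(value, lo), hi) - lo) / (hi - lo)
        k = min(int(frac * bits), bits - 1)
    return ''.join('1' if i == k else '0' for i in range(bits))

def encode_antigen(gene_values, gene_ranges):
    # Concatenating the five 10-bit gene signatures yields the 50-bit antigen.
    return ''.join(encode_gene(v, lo, hi) for v, (lo, hi) in zip(gene_values, gene_ranges))
```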
Constructing the self and non-self sets: We have randomly chosen 28 non-overlapping 500-second windows in our 4-hour simulation. In each 500-second window self and non-self antigens are computed for each node. This was repeated 20 times for independent Glomosim runs.
Misbehavior modeling: Misbehavior is modeled as random data packet dropping (implemented at the network layer); data packets include both data packets from the transport layer as well as routing protocol packets (a packet that should get dropped will simply not be inserted into the IP queue). We have randomly chosen 236 nodes and these were forced to drop {10, 30, 50%} of data packets. However, there were only 3-10 nodes with misbehavior and with a statistically significant number of packets for forwarding in each simulation run; see constraint C2 in Section 7.
Detection: A neighboring node gets flagged as misbehaving if a detector from the detector set matches an antigen. Since we used a single learning phase, we had to complement this process with some routing information analysis. This allowed us to determine which of the neighboring nodes is actually the misbehaving one. In the future, we plan to rely on co-operative detection in order to replace such a post-analysis.
Simulation phases: The experiment was done in four phases.
1. 20 independent Glomosim runs were done for one of {10, 30, 50%} misbehavior levels and "normal" traffic. Normal means that no misbehavior took place.
2. Self and non-self antigen computation (encoding).
3. The 20 "normal" traffic runs were used to compute detectors. Given the 28 windows and 20 runs, the sample size was 20×28 = 560, i.e. detectors at each node were discriminated against 560 self antigens.
4. Using the runs with {10, 30, 50%} misbehavior levels, the process shown in Fig. 2 was used for detection; we restricted ourselves to nodes that had in both the normal and misbehavior traffic at least a certain number of data packets to forward (packet threshold).
The experiment was then repeated with different r, desired number of detectors and misbehavior level.
The parameters for this experiment are summarized in Fig. 4. The injection rate and packet sizes were chosen in order to comply with usual data rates of sensors (e.g. 38.4kbps for Mica2; see [9]). We chose the Glomosim simulator [3] over other options (most notably ns2) because of its better scaling characteristics [6] and our familiarity with the tool.
Results Evaluation
When evaluating our results we define two additional constraints: C1. We define a node to be detected as misbehaving if it gets flagged in at least 14 out of the 28 possible windows. This notion indirectly defines the time until a node is pronounced to be misbehaving. We call this a window threshold.
[Figure captions (Figs. 6-8): (b) Rate of non-valid detectors; for r ≤ 13, ci 95% < 1%; for r ≥ 16 the sample size is not significant. (c) Number of iterations needed in order to compute the desired number of detectors; for r ≥ 10, ci 95% < 1%; for r = 7, ci 95% < 2%. (a) Detection rate vs packet threshold; confidence interval ranges: for misbehavior level 10%, ci 95% = 3.8-19.8%; for 30%, ci 95% = 11.9-15.9%; for 50%, ci 95% = 11.0-14.2%. (b) Number of unique detectors that matched an antigen in a run; confidence interval range for 7 ≤ r ≤ 13 is ci 95% = 6.5-10.1%. (c) Number of unique detectors that matched an antigen in a window; each run has 28 windows; confidence interval range: ci 95% < 0.16%.]
C2. A node s i has to forward on average at least m packets over the 20 runs, in both the "normal" and the misbehavior cases, in order to be included in our statistics. This constraint was set in order to make the detection process more reliable. It is dubious to flag a neighboring node of s i as misbehaving if the decision is based on "normal" runs, or runs with misbehavior, in which node s i had no data packets to forward (it was not on a routing path). We call this a packet threshold; m was in our simulations chosen from {500, 1000, 2000, 4000}. Example: for a fixed set of input parameters, a node forwarded on average 1,250 packets in the "normal" runs and 750 packets in the misbehavior runs (with e.g. level 30%). The node s i would be considered for misbehavior detection if m = 500, but not if m ≥ 1000. In other words, a node has to get a chance to learn what is "normal" and then to use this knowledge on a non-empty packet stream.
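The two constraints amount to simple bookkeeping; the sketch below (ours, with assumed names) applies them. With packet_threshold = 1000, the example node above (1,250 packets in the normal runs, 750 in the misbehavior runs) would be excluded, as described.

```python
def flagged_as_misbehaving(windows_flagged, window_threshold=14):
    # C1: detected if flagged in at least `window_threshold` of the 28 windows.
    return windows_flagged >= window_threshold

def included_in_statistics(avg_packets_normal, avg_packets_misbehavior, packet_threshold):
    # C2: a node must have had enough packets to forward in both traffic cases.
    return (avg_packets_normal >= packet_threshold and
            avg_packets_misbehavior >= packet_threshold)
```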
Overall Performance
The results related to the computation of detectors are shown in Figure 6. In our experiments we have considered the desired number of detectors to be at most 4,000; over this threshold the computational requirements might be too high for current sensor devices. We remind the reader that each time the r parameter is incremented by 1, the number of detectors should double in order to make the two cases comparable. Figure 6(a) shows the real time needed to compute the desired set of detectors. We can see that the real time necessary increases proportionally with the desired number of detectors; this complies with the theoretical results presented in [11]. Figure 6(b) shows the percentage of non-valid detectors, i.e. candidate detectors that were found to match a self string (see Figure 1). This result points to where the optimal operating point of an AIS might lie with respect to the choice of the r parameter and the choice of a fixed number of detectors to compute. We remind the reader that the larger the r parameter, the smaller the probability that a detector will match a self string. Therefore, the overhead connected with choosing the r parameter prohibitively small should be considered when designing an AIS. Figure 6(c) shows the total number of generate-and-test tries needed for the computation of a detector set of a fixed size; the 95% confidence interval is less than 2%.
In Figure 7(a) we show the dependence of the detection rate on the packet threshold. We conclude that, except for some extremely low threshold values (not shown), the detection rate stays constant. This figure also shows that when the misbehavior level was set very low, i.e. 10%, the AIS struggled to detect misbehaving nodes. This is partly a result of our coarse encoding with only 10 different levels.
At the 30 and 50% misbehavior levels the detection rate stays solid at about 70-85%. The range of the 95% confidence interval of the detection rate is 3.8-19.8%. The fact that the detection rate did not get closer to 100% suggests that either the implemented genes are not sufficient, detection should be extended to protocols at other layers of the OSI protocol stack, a different ordering of genes should have been applied, or our ten-level encoding was too coarse. It also implies that watchdog based genes (though they perfectly fit the implemented misbehavior) should not be used in isolation and, in general, that the choice of genes has to be very careful. Figure 7(b) shows the impact of r on the detection rate. When r = {7, 10} the AIS performs well; for r > 10 the detection rate decreases. This is caused by the inadequate number of detectors used at higher levels of r (we limited ourselves to max. 4,000 detectors). Figure 7(c) shows the number of false positives. We remind the reader that in our definition false positives are both nodes that do not drop any packets and nodes that drop packets for reasons other than misbehavior.
In a separate experiment we studied whether the 4-hour (560 samples) simulation time was enough to capture the diversity of the self behavior. This was done by trying to detect misbehavior in 20 independent misbehavior-free Glomosim runs (different from those used to compute detectors). We report that we did not observe a single case of an autoimmune reaction.
Detailed Performance
In Fig. 8(a) we show the total number of runs in which a node was identified as misbehaving. The steep decline for values r > 10 (in this and other figures) documents that in these cases it would have been necessary to produce a higher number of detectors in order to cover the non-self antigen space. The higher the r, the higher the specificity of a detector; this means that it is able to match a smaller set of non-self antigens.
In Fig. 8(b) and (c) we show the number of detectors that got matched during the detection phase (see Fig. 2). Fig. 8(b) shows the number of detectors matched per run; Fig. 8(c) shows the number of detectors matched per window. Fig. 8(b) is an upper estimate on the number of unique detectors needed in a single run. Given that the total number of detectors was 2,000, fewer than 5% of the detectors would get used in the detection phase. The tight confidence intervals (shown, for practical reasons, only for 7 ≤ r ≤ 13) for the number of unique detectors matched per window (see Fig. 8(c)) are a direct consequence of the small variability of antigens shown in Fig. 9(a). Fig. 9(a) shows the number of unique antigens that were subject to classification into self or non-self. The average for r = {7, 10} is about 1.5. This fact does not directly imply that the variability of the data traffic would be inadequate. It is rather a direct consequence of our choice of genes and their encoding (we only used 10 value levels for encoding). Fig. 9(b) shows the number of matches between a detector and an antigen in the following way. When a detector under the r-contiguous matching rule matches only a single gene within an antigen, we increment the "single" counter. Otherwise, we increment the "multiple" counter. It is obvious that with increasing r, it gets more and more probable that a detector matches more than a single gene. The interesting fact is that the detection rate for both r = 7 and r = 10 is about 80% (see Fig. 7(a)) while the rate of non-valid detectors is very different (see Fig. 6(b)). This means that an interaction between genes has positively affected the latter performance measure without sacrificing the former one. This leads to the conclusion that genes should not be considered in isolation. Fig. 9(c) shows the performance of Gene #1. The number of matches shows that this gene contributed to the overall detection performance of our AIS. Figs. 10(a-c) sum up the performance of the five genes for different values of r. Again, an interesting fact is the contribution of Gene #1 to the overall detection performance. The usefulness of Gene #2 was largely expected, as this gene was tailored to the kind of misbehavior that we implemented. The other three genes came out as marginally useful. The importance of the somewhat surprising performance of Gene #1 is that it can be computed in a simple way and does not require continuous operation of a node.
The Impact of Data Traffic Pattern
In an additional experiment, we examined the impact of the data traffic pattern on the performance. We used two different data traffic models: constant bit rate (CBR) and Poisson distributed data traffic. In many scenarios, sensors are expected to take measurements at constant intervals and, subsequently, send them out for processing. This would create constant bit rate traffic. Poisson distributed traffic could be the result of sensors taking measurements in an event-driven fashion. For example, a sensor would take a measurement only when a target object (e.g. a person) happens to be in its vicinity.
The setup for this experiment was similar to that presented in Fig. 4 with the additional fact that the data traffic model would now become an input parameter. With the goal to reduce complexity of the experimental setup, we fixed r = 10 and we only considered cases with 500 and 2000 detectors. In order to match the CBR traffic rate, the Poisson distributed data traffic model had a mean arrival expectation of 1 packet per second (λ = 1.0). As in the case with CBR, we computed the detection rate and the rate of false positives with the associated arithmetic averages and 95% confidence intervals.
The results based on these two traffic models were similar; in fact, we could not find the difference between them to be statistically significant. This indicates that the detection process is robust against some variation in data traffic. This conclusion also reflects positively on the usefulness of the chosen genes. More importantly, it helped dispel our worries that the results presented in this experimental study could be unacceptably data traffic dependent.
Related Work
In [26,24] the authors introduced an AIS based misbehavior detection system for ad hoc wireless networks. They used Glomosim for simulating data traffic, their setup was an area of 800×600m with 40 mobile nodes (speed 1 m/s) of which 5-20 are misbehaving; the routing protocol was DSR. Four genes were used to capture local behavior at the network layer. The misbehavior implemented is a subset of misbehavior introduced in this paper; their observed detection rate is about 55%. Additionally, a co-stimulation in the form of a danger signal was used in order to inform nodes on a forwarding path about misbehavior, thus propagating information about misbehaving nodes around the network.
In [17] the authors describe an AIS able to detect anomalies at the transport layer of the OSI protocol stack; only a wired TCP/IP network is considered. Self is defined as normal pairwise connections. Each detector is represented as a 49-bit string. The pattern matching is based on r-contiguous bits with a fixed r = 12.
Ref. [23] discusses a network intrusion system that aims at detecting misbehavior by capturing TCP packet headers. They report that their AIS is unsuitable for detecting anomalies in communication networks. This result is questioned in [4] where it is stated that this is due to the choice of problem representation and due to the choice of matching threshold r for r-contiguous bits matching.
To overcome the deficiencies of the generate-and-test approach, a different method is outlined in [22]. Several signals, each having a different function, are employed in order to detect a specific misbehavior in wireless sensor networks. Unfortunately, no performance analysis was presented and the properties of these signals were not evaluated with respect to their possible misuse.
The main distinguishing factor between our work and the works briefly discussed above is that we carefully considered the hardware parameters of current sensor devices, the set of input parameters was designed to specifically target sensor networks, and our simulation setup reflects the structural qualities of such networks with regard to the existence of multiple independent routing paths. In comparison to [26,24], we showed that in the case of static sensor networks it is reasonable to expect the detection rate to be above 80%.
Conclusions and Future Work
Although we answered some basic questions on the suitability and feasibility of AIS for detecting misbehavior in sensor networks, a few questions remain open.
The key question in the design of AIS is the quantity, quality and ordering of the genes that are used for measuring behavior at nodes. To answer this question, a detailed formal analysis of communications protocols will be needed. The set of genes should be as "complete" as possible with respect to any possible misbehavior. The choice of genes should impose a high degree of sensor network survivability, defined as the capability of a system to fulfill its mission in a timely manner, even in the presence of attacks, failures or accidents [27]. It is therefore of paramount importance that the sensor network's mission is clearly defined and achievable under normal operating conditions. We showed the influence and usefulness of certain genes for detecting misbehavior and the impact of the r parameter on the detection process. In general, the results in Fig. 10 show that Genes #1 and #2 obtained the best results of all genes, with Gene #2 always showing the best results. The contribution of Gene #1 suggests that observing the MAC layer and the ratio of complete handshakes to the number of RTS packets sent is useful for the implemented misbehavior.
Gene #2 fits perfectly for the implemented misbehavior. It therefore comes as no surprise that this gene showed the best results in the detection process. The question which remains open is whether the two genes are still as useful when exposed to different attack patterns.
It is currently unclear whether genes that performed well with negative selection will also be appropriate for generating different flavors of signals as suggested within the danger theory [1,16]. It is our opinion that any set of genes, whether used with negative selection or for generating any such signal, should aim at capturing intrinsic properties of the interaction among different components of a given sensor network. This contradicts the approaches applied in [26,22], where the genes are closely coupled with a given protocol. The reason for this statement is the combined performance of Genes #1 and #2. Their interaction can be understood as follows: data packet dropping implies less medium contention since there are fewer data packets to be forwarded. Fewer data packets to forward, on the other hand, implies easier access to the medium, i.e. the number of complete MAC handshakes should increase. This is an interesting complementary relationship since, in order to deceive these two genes, a misbehaving node has to appear to be correctly forwarding data packets and, at the same time, it should not significantly modify the "game" of medium access.
It is improbable that the misbehaving node alone would be able to estimate the impact of dropped packets on the contention level. Therefore, he lacks an important feedback mechanism that would allow him to keep the contention level unchanged. For that, he would need to act in collusion with other nodes. The property of complementarity moves the burden of excessive communication from normally behaving nodes to misbehaving nodes, thus, exploiting the ad hoc (local) nature of sensor networks. Our results thus imply, a "good" mixture of genes should be able to capture interactions that a node is unable to influence when acting alone. It is an open question whether there exist other useful properties of genes, other than complementarity.
We conclude that the random-generate-and-test process, with no knowledge of the used protocols and their behavior, creates many detectors which may prove to be superfluous in detecting misbehavior. A process with some basic knowledge of protocol limitations might lead to an improved quality of detectors. In [28] the authors stated that the random-generate-and-test process "is inefficient, since a vast number of randomly generated detectors need to be discarded, before the required number of the suitable ones are obtained". Our results show that at r = 10, the rate of discarded detectors is less than 4%. Hence, at least in our setting, we could not confirm the above statement. A disturbing fact is, however, that the size of the self set in our setting was probably too small to justify the use of negative selection. A counter-balancing argument here is the realistic setup of our simulations and a decent detection rate.
We would like to point out that the Fisher iris and biomedical data sets, used in [28] to argue about the appropriateness of negative selection for anomaly detection, could be very different from the data sets generated by our simulations. Our experiments show that anomaly (misbehavior) data sets based on sensor networks can in general be very sparse. This effect can be due to the limiting nature of communications protocols. Since the Fisher iris and biomedical data sets were not evaluated in [28] with respect to some basic properties, e.g. the degree of clustering, it is hard to compare our results with the results presented therein.
In order to better understand the effects of misbehavior (e.g. the propagation of certain adverse effects), we are currently developing a general framework for AIS to be used within the JiST/SWANS network simulator [6].
| 8,253 |
0906.3461
|
2150496197
|
A sensor network is a collection of wireless devices that are able to monitor physical or environmental conditions. These devices are expected to operate autonomously, be battery powered and have very limited computational capabilities. This makes the task of protecting a sensor network against misbehavior or possible malfunction a challenging problem. In this document we discuss the performance of Artificial immune systems (AIS) when used as the mechanism for detecting misbehavior. We concentrate on the performance of individual genes; genes are necessary to measure a network's performance from a sensor's viewpoint. We conclude that the choice of genes has a profound influence on the performance of the AIS. We identified a specific MAC layer based gene that proved to be especially useful for detection. We also discuss implementation details of AIS when used with sensor networks.
|
Ref. @cite_13 discusses a network intrusion system that aims at detecting misbehavior by capturing TCP packet headers. They report that their AIS is unsuitable for detecting anomalies in communication networks. This result is questioned in @cite_9 where it is stated that this is due to the choice of problem representation and due to the choice of matching threshold @math for @math -contiguous bits matching.
|
{
"abstract": [
"This paper studies a simplified form of LISYS, an artificial immune system for network intrusion detection. The paper describes results based on a new, more controlled data set than that used for earlier studies. The paper also looks at which parameters appear most important for minimizing false positives, as well as the trade-offs and relationships among parameter settings.",
"This paper investigates the role of negative selection in an artificial immune system (AIS) for network intrusion detection. The work focuses on the use of negative selection as a network traffic anomaly detector. The results of the negative selection algorithm experiments show a severe scaling problem for handling real network traffic data. The paper concludes by suggesting that the most appropriate use of negative selection in the AIS is as a filter for invalid detectors, not the generation of competent detectors."
],
"cite_N": [
"@cite_9",
"@cite_13"
],
"mid": [
"1849415316",
"1533021960"
]
}
|
AIS for Misbehavior Detection in Wireless Sensor Networks: Performance and Design Principles
|
Sensor networks [21] can be described as a collection of wireless devices with limited computational abilities which are, due to their ad-hoc communication manner, vulnerable to misbehavior and malfunction. It is therefore necessary to support them with a simple, computationally friendly protection system.
Due to the limitations of sensor networks, there has been an on-going interest in providing them with a protection solution that would fulfill several basic criteria. The first criterion is the ability of self-learning and self-tuning. Because maintenance of ad hoc networks by a human operator is expected to be sporadic, they have to have a built-in autonomous mechanism for identifying user behavior that could be potentially damaging to them. This learning mechanism should itself minimize the need for human intervention, therefore it should be self-tuning to the maximum extent. It must also be computationally conservative and meet the usual condition of a high detection rate. The second criterion is the ability to undertake an action against one or several misbehaving users. This should be understood in the wider context of co-operating wireless devices acting in collusion in order to suppress or minimize the adverse impact of such misbehavior. Such co-operation should have a low message complexity because both the bandwidth and the battery life are scarce. The third and last criterion requires that the protection system does not itself introduce new weaknesses to the systems that it should protect.
An emerging solution that could facilitate implementation of the above criteria is the Artificial immune system (AIS). AIS are based on principles adapted from the Human immune system (HIS) [18,5,17]; the basic ability of the HIS is the efficient detection of potentially harmful foreign agents (viruses, bacteria, etc.). The goal of an AIS, in our setting, is the identification of nodes whose behavior could negatively impact the stated mission of the sensor network.
One of the key design challenges of AIS is to define a suitable set of efficient genes. Genes form a basis for deciding whether a node misbehaves. They can be characterized as measures that describe a network's performance from a node's viewpoint. Given their purpose, they must be easy to compute and robust against deception.
Misbehavior in wireless sensor networks can take on different forms: packet dropping, modification of data structures important for routing, modification of packets, skewing of the network's topology or creating fictitious nodes (see [13] for a more complete list). The reasons for sensors (possibly fully controlled by an attacker) to execute any form of misbehavior can range from the desire to save battery power to making a given wireless sensor network non-functional. Malfunction can also be considered a type of unwanted behavior.
Artificial Immune Systems
Learning
The process of T-cell maturation in the thymus is used as an inspiration for learning in AIS. The maturation of T-cells (detectors) in the thymus is the result of a pseudo-random process. After a T-cell is created (see Fig. 1), it undergoes a censoring process called negative selection. During negative selection, T-cells that bind self are destroyed. The remaining T-cells are introduced into the body. The recognition of non-self is then done by simply comparing T-cells that survived negative selection with a suspected non-self. This process is depicted in Fig. 2. It is possible that the self set is incomplete while a T-cell matures in the thymus (the tolerization period). This can lead to the release of T-cells that should have been removed from the thymus and can cause an autoimmune reaction, i.e. it leads to false positives.
A deficiency of the negative selection process is that alone it is not sufficient for assessing the damage that a non-self antigen could cause. For example, many bacteria that enter our body are not harmful, therefore an immune reaction is not necessary. T-cells, actors of the adaptive immune system, require co-stimulation from the innate immune system in order to start acting. The innate immune system is able to recognize the presence of harmful non-self antigens and tissue damage, and signal this to certain actors of the adaptive immune system.
The random-generate-and-test approach for producing T-cells (detectors) described above is analyzed in [11]. In general, the number of candidate detectors needs to be exponential in the size of the self set (if a matching rule with a fixed matching probability is used). Another problem is a consistent underfitting of the non-self set; there exist "holes" in the non-self set that are undetectable. In theory, for some matching rules, the number of holes can be very unfavorable [28]. In practical terms, the effect of holes depends on the characteristics of the non-self set, the representation and the matching rule [15]. The advantage of this algorithm is its simplicity and good experimental results in cases when the number of detectors to be produced is fixed and small [26]. A review of other approaches to detector computation can be found in [2].
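As an illustration of this random-generate-and-test idea, a minimal Python sketch (not code from the cited works; the matching rule is passed in as a function, for instance the r-contiguous rule used later in this text):

import random

def generate_detectors(self_set, matches, n_detectors, bits=50, max_tries=10**6):
    """Random-generate-and-test negative selection: keep only those randomly
    generated candidate detectors that match no string in the self set."""
    detectors, tries = [], 0
    while len(detectors) < n_detectors and tries < max_tries:
        tries += 1
        candidate = "".join(random.choice("01") for _ in range(bits))
        if not any(matches(candidate, s) for s in self_set):
            detectors.append(candidate)      # candidate survived censoring
    return detectors, tries

def is_nonself(antigen, detectors, matches):
    """Detection: an antigen is flagged as non-self if any detector matches it."""
    return any(matches(d, antigen) for d in detectors)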
Sensor Networks
A sensor network can be defined in a graph-theoretic framework as follows: a sensor network is a net N = (n(t), e(t)), where n(t), e(t) are the set of nodes and edges at time t, respectively. Nodes correspond to sensors that wish to communicate with each other. An edge between two nodes A and B is said to exist when A is within the radio transmission range of B and vice versa. The imposed symmetry of edges is a usual assumption of many mainstream protocols. The change in the cardinality of the sets n(t), e(t) can be caused by switching on/off one of the sensors, failure, malfunction, removal, signal propagation, link reliability and other factors.
Data exchange in a point-to-point (uni-cast) scenario usually proceeds as follows: a user initiated data exchange leads to a route query at the network layer of the OSI stack. A routing protocol at that layer attempts to find a route to the data exchange destination. This request may result in a path of non-unit length. This means that a data packet, in order to reach the destination, has to rely on successive forwarding by intermediate nodes on the path. An example of an on-demand routing protocol often used in sensor networks is DSR [20]. Route search in this protocol is started only when a route to a destination is needed. This is done by flooding the network with RREQ (Route Request) control packets. The destination node or an intermediate node that knows a route to the destination will reply with an RREP (Route Reply) control packet. This RREP follows the route back to the source node and updates routing tables at each node that it traverses. An RERR (Route Error) packet is sent to the connection originator when a node finds out that the next node on the forwarding path is not replying.
At the MAC layer of the OSI protocol stack, the medium reservation is often contention based. In order to transmit a data packet, the IEEE 802.11 MAC protocol uses carrier sensing with an RTS-CTS-DATA-ACK handshake (RTS = Ready to send, CTS = Clear to send, ACK = Acknowledgment). Should the medium not be available or the handshake fail, an exponential back-off algorithm is used. This is combined with a mechanism that makes it easier for neighboring nodes to estimate transmission durations. This is done by an exchange of duration values and their subsequent storing in a data structure known as the Network allocation vector (NAV). With the goal of saving battery power, researchers have suggested that a sleep-wake-up schedule for nodes would be appropriate. This means that nodes do not listen continuously to the medium, but switch themselves off and wake up again after a predetermined period of time. Such a sleep and wake-up schedule is, similarly to the duration values, exchanged among nodes. An example of a MAC protocol designed specifically for sensor networks that uses such a schedule is the S-MAC [29]. A sleep and wake-up schedule can severely limit operation of a node in promiscuous mode. In promiscuous mode, a node listens to the on-going traffic in the neighborhood and collects information from the overheard packets. This technique is used e.g. in DSR for improved propagation of routing information. Movement of nodes can be modeled by means of a mobility model. A well-known mobility model is the Random waypoint model [20]. In this model, nodes move from the current position to a new randomly generated position at a predetermined speed. After reaching the new destination a new random position is computed. Nodes pause at the current position for a time period t before moving to the new random position.
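As an illustration of the Random waypoint model just described, a minimal Python sketch (the time step and the return format are our own choices):

import random

def random_waypoint(area_x, area_y, speed, pause, steps, dt=1.0):
    """Generate a trace of (time, x, y) positions for one node: move toward a
    random destination at a fixed speed, pause, then pick a new destination."""
    t, x, y = 0.0, random.uniform(0, area_x), random.uniform(0, area_y)
    trace = [(t, x, y)]
    for _ in range(steps):
        dest_x, dest_y = random.uniform(0, area_x), random.uniform(0, area_y)
        dist = ((dest_x - x) ** 2 + (dest_y - y) ** 2) ** 0.5
        travel_time = dist / speed
        n = int(travel_time / dt)
        for k in range(1, n + 1):          # intermediate positions along the leg
            frac = k * dt / travel_time
            trace.append((t + k * dt, x + frac * (dest_x - x), y + frac * (dest_y - y)))
        t += travel_time
        x, y = dest_x, dest_y
        trace.append((t, x, y))
        t += pause                         # pause before choosing the next waypoint
        trace.append((t, x, y))
    return trace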
For more information on sensor networks, we refer the reader to [21].
Summary of Results
Motivated by the positive results reported in [17,26] we have undertaken a detailed performance study of AIS with focus on sensor networks. The general conclusions that can be drawn from the study presented in this document are:
1. Given the ranges of input parameters that we used and considering the computational capabilities of current sensor devices, we conclude that AIS based misbehavior detection offers a decent detection rate.
2. One of the main challenges in designing a well-performing AIS for sensor networks is the set of "genes". This is similar to observations made in [24].
3. Our results suggest that to increase the detection performance, an AIS should benefit from information available at all layers of the OSI protocol stack; this also holds for the detection of a simplistic flavor of misbehavior such as packet dropping. This supports ideas briefly discussed in [30], where the authors suggest that information available at the application layer deserves more attention. 4. We observed that, somewhat surprisingly, a gene based purely on the MAC layer significantly contributed to the overall detection performance. This gene poses fewer limitations when a MAC protocol with a sleep-wake-up schedule such as the S-MAC [29] is used.
5. It is desirable to use genes that are "complementary" with respect to each other. We demonstrated that two genes, one that measures correct forwarding of data packets and one that indirectly measures the medium contention, have exactly this property. 6. We only used a single instance of the learning and detection mechanism per node. This is different from the approach used in [17,26], where one instance was used for each of m possible neighbors. Our performance results show that the approach in [17,26] may not be feasible for sensor networks. It may allow for an easy Sybil attack and, in general, m = n − 1 instances might be necessary, where n is the total number of sensors in the network. Instead, we suggest that flagging a node as misbehaving should, if possible, be based on detection at several nodes. 7. Fewer than 5% of the detectors were used in detecting misbehavior. This suggests that many of the detectors do not comply with constraints imposed by the communications protocols; this is an important fact when designing an AIS for sensor networks because the memory capacity at sensors is expected to be very limited.
8. The data traffic properties seem not to impact the performance. This is demonstrated by a similar detection performance when data traffic is modeled as a constant bit rate and as a Poisson distributed data packet stream, respectively. 9. We were unable to distinguish between nodes that misbehave (e.g. deliberately drop data packets) and nodes with a behavior resembling a misbehavior (e.g. drop data packets due to medium contention). This motivates the use of danger signals as described in [1,16]. The approach applied in [26] does not, however, completely fit sensor networks since these might implement only a simplified version of the transport layer.
AIS for Sensor Networks: Design Principles
In our approach, each node produces and maintains its own set of detectors. This means that we applied a direct one-to-one mapping between a human body with a thymus and a node. We represent self, non-self and detector strings as bit-strings. The matching rule employed is the r-contiguous bits matching rule. Two bitstrings of equal length match under the r-contiguous matching rule if there exists a substring of length r at position p in each of them and these substrings are identical. Detectors are produced by the process shown in Fig. 1, i.e. by means of negative selection when detectors are created randomly and tested against a set of self strings. Each antigen consists of several genes. Genes are performance measures that a node can acquire locally without the help from another node. In practical terms this means that an antigen consists of x genes; each of them encodes a performance measure, averaged in our case over a time window. An antigen is then created by concatenating the x genes.
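For concreteness, a small Python sketch of the r-contiguous matching rule over bit-strings represented as strings of '0'/'1' characters (an illustrative representation, not the authors' implementation):

def r_contiguous_match(a, b, r):
    """Two equal-length bit-strings match under the r-contiguous rule if they
    agree on at least r contiguous positions."""
    assert len(a) == len(b)
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

# The two strings below agree on positions 1-7 (0-indexed), so they match
# for any r <= 7 and do not match for r >= 8.
print(r_contiguous_match("0110100101", "1110100110", r=7))   # True
print(r_contiguous_match("0110100101", "1110100110", r=8))   # False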
When choosing the correct genes, the choice is limited due to the simplified OSI protocol stack of sensors. For example, Mica2 sensors [9] using the TinyOS operating system do not guarantee any end-to-end connection reliability (transport layer), leaving only data traffic at the lower layers for consideration.
Let us assume that the routing protocol finds for a connection the path s_s, s_1, ..., s_i, s_{i+1}, s_{i+2}, ..., s_d from the source node s_s to the destination node s_d, where s_s ≠ s_d and s_{i+1} ≠ s_d. We have used the following genes to capture certain aspects of MAC and routing layer traffic information (we averaged over a time period (window size) of 500 seconds):
MAC Layer:
#1 Ratio of complete MAC layer handshakes between nodes s_i and s_{i+1} and RTS packets sent by s_i to s_{i+1}. If there is no traffic between two nodes this ratio is set to ∞ (a large number). This ratio is averaged over a time period. A complete handshake is defined as a completed sequence of RTS, CTS, DATA, ACK packets between s_i and s_{i+1}.
#2 Ratio of data packets sent from s_i to s_{i+1} and then subsequently forwarded by s_{i+1} to s_{i+2}. If there is no traffic between two nodes this ratio is set to ∞ (a large number). This ratio is computed by s_i in promiscuous mode and, as in the previous case, averaged over a time period. This gene was adapted from the watchdog idea in [25].
#3 Time delay that a data packet spends at s_{i+1} before being forwarded to s_{i+2}. The time delay is observed by s_i in promiscuous mode. If there is no traffic between two nodes the time delay is set to zero. This measure is averaged over a time period. This gene is a quantitative extension of the previous gene.
Routing Layer:
#4 The same ratio as in #2 but computed separately for RERR routing packets.
#5 The same delay as in #3 but computed separately for RERR routing packets.
The Gene #1 can be characterized as MAC layer quality oriented: it indirectly measures the medium contention level. The remaining genes are watchdog oriented. This means that they more strictly fit a certain kind of misbehavior. The Gene #2 can help detect whether packets get correctly forwarded; the Gene #3 can help detect whether forwarding of packets gets intentionally delayed. As we will show later, for the particular type of misbehavior (packet dropping) that we applied, the first two genes come out as "the strongest". The disadvantage of the watchdog based genes is that, due to limited battery power, nodes could operate using a sleep-wake-up schedule similar to the one used in the S-MAC. This would mean that the node s_i has to stay awake until the node s_{i+1} (the monitored node) correctly transmits to s_{i+2}. The consequence would be a longer wake-up time and possible restrictions in publishing sleep-wake-up schedules.
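To make the gene definitions concrete, a small Python sketch of how Gene #1 and Gene #2 could be computed from counters collected during one 500-second window (the counter names and the use of a large constant for the no-traffic case are our reading of the text, not the authors' code):

INF = 1e9   # stands in for "infinity (a large number)" when there is no traffic

def gene1_handshake_ratio(complete_handshakes, rts_sent):
    """Gene #1: complete RTS-CTS-DATA-ACK handshakes with the next hop divided
    by the RTS packets sent to it, over the current window."""
    return complete_handshakes / rts_sent if rts_sent > 0 else INF

def gene2_forwarding_ratio(sent_to_next_hop, overheard_forwarded):
    """Gene #2 (watchdog): packets sent to s_{i+1} that were subsequently
    overheard (in promiscuous mode) being forwarded to s_{i+2}, divided by
    the packets sent to s_{i+1}."""
    return overheard_forwarded / sent_to_next_hop if sent_to_next_hop > 0 else INF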
In [24] the authors applied a different set of genes, based only on the DSR routing protocol. The observed set of events was the following: A = RREQ sent, B = RREP sent, C = RERR sent, D = DATA sent and IP source address is not of the monitored (neighboring) node, E = RREQ received, F = RREP received, G = RERR received, H = DATA received and the IP destination address is not of the monitored node. The events D and H take into consideration that the source and destination nodes of a connection might appear as misbehaving as they seem to "deliberately" create and delete data packets. Their four genes are then defined as patterns over these events (see [24] for the concrete definitions); the time period (window size) in their case was 10 s, and * is the Kleene star operator (zero or more occurrences of any event(s) are possible). Similar to our watchdog genes, these genes impose additional requirements on MAC protocols such as the S-MAC. Their dependence on operation in promiscuous mode is, however, more pronounced, as a node has to continuously observe packet events at all monitored nodes.
The research in the area of what and to what extent can be or should be locally measured at a node, is independent of the learning mechanism used (negative selection in both cases). Performance of an AIS can partly depend on the ordering and the number of used genes. Since longer antigens (consisting of more genes) indirectly imply more candidate detectors, the number of genes should be carefully considered. Given x genes, it is possible to order them in x! different ways. In our experience, the rules for ordering genes and the number of genes can be summed up as follows:
1) Keep the number of genes small. In our experiments, we show that with respect to the learning mechanism used and the expected deployment (sensor networks), 2-3 genes are enough for detecting a basic type of misbehavior.
2) Order genes either randomly or use a predetermined fixed order. Defining a utility relation between genes, and ordering genes with respect to it, can in general lead to problems that are considered intractable. Our results suggest, however, that it is important to understand the relations between different genes, since genes are able to complement each other; this can lead to their increased mutual strength. On the other hand, random ordering adds to the robustness of the underlying AIS: it is namely more difficult for an attacker to deceive the system, since he does not know how the genes are being used. It is currently an open question how to impose a balanced solution.
3) Genes cannot be considered in isolation. Our experiments show that when a detector matched an antigen under the r-contiguous matching rule, the match usually spanned several genes. This motivates the design of matching rules that would not limit matching to a few neighboring genes and would offer more flexibility, but would still require that a gene remains a partly atomic unit.
Learning and Detection
Learning and detection is done by applying the mechanisms shown in Figs. 1 and 2. The detection itself is very straightforward. In the learning phase, a misbehavior-free period (see [1] on possibilities for circumventing this problem) is necessary so that nodes get a chance to learn what is the normal behavior. When implementing the learning phase, the designer gets to choose from two possibilities: 1) Learning and detection at a node get implemented for each neighboring node separately. This means that different antigens have to get computed for each neighboring node, detector computation is different for each neighboring node and, subsequently, detection is different for each neighboring node. The advantage of this approach is that the node is able to directly determine which neighboring node misbehaves; the disadvantage is that m instances (m is the number of neighbors or node degree) of the negative selection mechanism have to get executed; this can be computationally prohibitive for sensor networks as m can, in general, be equal to the total number of sensor. This allows for an easy Sybil attack [13] in which a neighbor would create several identities; the node would then be unable to recognize that these identities belong to the same neighbor. This approach was used in [26,24].
2) Learning and detection at a node get implemented in a single instance for all neighboring nodes. This means a node is able to recognize anomaly (misbehavior) but it may be unable to determine which one from the m neighboring nodes misbehaves. This implies that nodes would have to cooperate when detecting a misbehaving node, exchange anomaly information and be able to draw a conclusion from the obtained information. An argument for this approach is that in order to detect nodes that misbehave in collusion, it might be necessary to rely to some extent on information exchange among nodes, thus making this a natural solution to the problem. We have used this approach; a postprocessing phase (using the list of misbehaving nodes) was necessary to determine whether a node was correctly flagged as misbehaving or not.
We find the second approach to be better suited for wireless sensor networks, as it is less computationally demanding. We are unable, at this time, to estimate how frequently a complete detector set would have to be recomputed.
Both approaches can be classified within the four-layer architecture (Fig. 3) that we introduced in [14].
Figure 3: A four-layer architecture aimed at protecting sensor networks against misbehavior and abuse; its layers, from bottom to top, are Data collection and preprocessing, Learning, Local and cooperative detection, and Local and cooperative response.
The lowermost layer, Data collection and preprocessing, corresponds to genes' computation and antigen construction. The Learning layer corresponds to the negative selection process. The next layer, Local and cooperative detection, suggests that an AIS should benefit from both local and cooperative detection. Both our setup and the setup described in [26,24] only apply local detection. The uppermost layer, Local and cooperative response, implies that an AIS should also have the capability to undertake an action against one or several misbehaving nodes; this should be understood in a wider context of co-operating wireless devices acting in collusion in order to suppress or minimize the adverse impact of such misbehavior. To the best of our knowledge, there is currently no AIS implementation for sensor networks taking advantage of this layer.
Which r is the correct one? An interesting technical problem is to tune the r parameter for the r-contiguous matching rule so that the underlying AIS offers good detection and false positives rates. One possibility is a lengthy simulation study such as this one. Through multiparameter simulation we were able to show that r = 10 offers the best performance for our setup. In [12] we experimented with the idea of "growing" and "shrinking" detectors; this idea was motivated by [19]. The initial r_0 for a growing detector can be chosen as r_0 = ⌈l/2⌉, where l is the detector length. The goal is to find the smallest r such that a candidate detector does not match any self antigen. This means that, initially, a larger (more specific) r is chosen; the smallest r that fulfills the above condition can then be found through binary search. For shrinking detectors, the approach is reciprocal. Our goal was to show that such growing or shrinking detectors would offer a better detection or false positives rate. Short of proving this in a statistically significant manner, we observed that the growing detectors can be used for self-tuning the r parameter. The average r value was close to the r determined through simulation (the setup in that case was different from the one described in this document).
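A sketch of the growing-detector search in Python (our reading of the procedure, with helper names of our own): since a match on r contiguous bits implies a match on r-1 contiguous bits, the predicate "the candidate matches no self antigen" is monotone in r, and the smallest such r can be found by binary search.

def longest_common_run(a, b):
    """Length of the longest run of contiguous positions on which two
    equal-length bit-strings agree."""
    best = run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        best = max(best, run)
    return best

def matches_no_self(candidate, self_set, r):
    return all(longest_common_run(candidate, s) < r for s in self_set)

def smallest_valid_r(candidate, self_set, l):
    """Smallest r for which the candidate detector matches no self antigen;
    r = l + 1 trivially matches nothing, so the search always terminates."""
    lo, hi = 1, l + 1
    while lo < hi:
        mid = (lo + hi) // 2        # the first midpoint is close to l/2
        if matches_no_self(candidate, self_set, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo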
Further Optimizations
Our experiments show that only a small number of detectors ever get used (less than 5%). The reason is that they get produced in a random way, without considering the structure of the protocols. For example, a detector that is able to detect whether i) data packets got correctly transmitted and ii) 100% of all MAC layer handshakes were incomplete is superfluous, as this case should never happen. In [8], the authors conclude: "... uniform coverage of non-self space is not only unnecessary, it is impractical; non-self space is too big". Application-driven knowledge can be used to set up a rule-based system that would exclude infeasible detectors; see [10] for a rule-based system aimed at improved coverage of the non-self set. In [17], it is suggested that unused detectors should get deleted and the lifetime of useful detectors should be extended.
Misbehavior
In a companion paper [13], we have reviewed different types of misbehavior at the MAC, network and transport layers of the OSI protocol stack. We note that solutions to many of these attacks have already been proposed; these are however specific to a given attack. Additionally, due to the limitations of sensor networks, these solutions cannot be directly transferred.
The appeal of AIS based misbehavior detection rests on its simplicity and applicability in an environment that is extremely computationally and bandwidth limited. Misbehavior in sensor networks does not have to be executed by sensors themselves; one or several computationally more powerful platforms (laptops) can be used for the attack. On the other hand, a protection using such more advanced computational platforms is, due to e.g. the need to supply them continuously with electric power, harder to imagine. It would also create a point of special interest for the possible attackers.
Experimental Setup
The purpose of our experiments was to show that AIS are a viable approach for detecting misbehavior in sensor networks. Furthermore, we wanted to cast light on internal performance of an AIS designed to protect sensor networks. One of our central goals was to provide an in-depth analysis of relative usefulness of genes.
Definitions of input and output parameters: The input parameters for our experiments were: r parameter for the r-contiguous matching rule, the (desired) number of detectors and misbehavior level. Misbehavior was modeled as random packet dropping at selected nodes.
The performance (output) measures were arithmetic averages and 95% confidence intervals (ci_{95%}) of the detection rate, the number of false positives, the real time to compute detectors, the data traffic rate at nodes, the number of iterations to compute detectors (number of random tries), the number of non-valid detectors, the number of different (unique) antigens in a run or a time window, and the number of matches for each gene. The detection rate dr is defined as dr = dns/ns, where dns is the number of detected non-self strings and ns is the total number of non-self strings. A false positive in our definition is a string that is not self but can still be a result of an anomaly that is identical with the effects of a misbehavior. A non-valid detector is a candidate detector that matches a self string and must therefore be removed.
The number of matches for each gene was evaluated using the r-contiguous matching rule; we considered two cases: i) two bit-strings get matched from the left to the right and the first such match is reported (matching gets interrupted), ii) two bit-strings get matched from the left to the right and all possible matches are reported. The time complexity of these two approaches is O(r(l − r)) and Θ(r(l − r)), respectively; r ≤ l, where l is the bit-string length. The first approach is exactly what we used when computing the real time necessary for negative selection; the second approach was used when our goal was to evaluate the relative usefulness of each gene.
Scenario description: We wanted to capture "self" and "non-self" packet traffic in a large enough synthetic static sensor network and test whether, using an AIS, we are able to recognize non-self, i.e. misbehavior. The topology of this network was determined by making a snapshot of 1,718 mobile nodes (each with 100m radio radius) moving in a square area of 2,900m×2,950m as prescribed by the random waypoint mobility model; see Figure 5(a). The motivation for using this movement model and then creating a snapshot are the results in our previous paper [7], which deals with the structural robustness of sensor networks. Our preference was to use a slightly bigger network than might be necessary, rather than using a network with unknown properties. The computational overhead is negligible; simulation real time mainly depends on the number of events that require processing. Idle nodes increase memory requirements, but memory availability at computers was in our case not a bottleneck.
We chose source and destination pairs for each connection so that several alternative independent routes exist; the idea was to benefit from route repair and route acquisition mechanisms of the DSR routing protocol, so that the added value of AIS based misbehavior detection is obvious.
We used 10 CBR (Constant bit rate) connections. The connections were chosen so that their length is ∼7 hops and so that these connections share some common intermediate nodes; see Figure 5(b). For each packet received or sent by a node we have captured the following information: IP header type (UDP, 802.11 or DSR in this case), MAC frame type (RTS, CTS, DATA, ACK in the case of 802.11), current simulation clock, node address, next hop destination address, data packet source and destination address and packet size.
Encoding of self and non-self antigens: Each of the five genes was transformed into a 10-bit signature where each bit defines an interval of a gene-specific value range. We created self and non-self antigen strings by concatenation of the defined genes. Each self and non-self antigen therefore has a size of 50 bits. The interval representation was chosen in order to avoid carry-bits (the Gray coding is an alternative solution).
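One plausible reading of this interval encoding, sketched in Python (the per-gene value ranges below are hypothetical, since the paper does not list them, and a thermometer-style coding would be an alternative reading):

def encode_gene(value, lo, hi, bits=10):
    """Interval ('one-hot') encoding: the range [lo, hi) is split into `bits`
    equal intervals and the bit of the interval containing the value is set.
    Values outside the range are clamped to the boundary intervals."""
    if value <= lo:
        idx = 0
    elif value >= hi:
        idx = bits - 1
    else:
        idx = int((value - lo) / (hi - lo) * bits)
    return "0" * idx + "1" + "0" * (bits - idx - 1)

def encode_antigen(gene_values, ranges):
    """Concatenate the per-gene signatures into one 50-bit antigen string."""
    return "".join(encode_gene(v, lo, hi) for v, (lo, hi) in zip(gene_values, ranges))

# Example with hypothetical value ranges for the five genes:
ranges = [(0.0, 1.0), (0.0, 1.0), (0.0, 0.5), (0.0, 1.0), (0.0, 0.5)]
antigen = encode_antigen([0.93, 0.71, 0.02, 1.0, 0.0], ranges)
print(len(antigen))   # 50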
Constructing the self and non-self sets: We have randomly chosen 28 non-overlapping 500-second windows in our 4-hour simulation. In each 500-second window self and non-self antigens are computed for each node. This was repeated 20 times for independent Glomosim runs.
Misbehavior modeling: Misbehavior is modeled as random data packet dropping, implemented at the network layer (data packets include both data packets from the transport layer and routing protocol packets; a packet that should get dropped is simply not inserted into the IP queue). We have randomly chosen 236 nodes and these were forced to drop {10, 30, 50%} of data packets. However, there were only 3-10 nodes with misbehavior and with a statistically significant number of packets for forwarding in each simulation run; see constraint C2 in Section 7.
Detection: A neighboring node gets flagged as misbehaving, if a detector from the detector set matches an antigen. Since we used a single learning phase, we had to complement this process with some routing information analysis. This allowed us to determine, which one from the neighboring nodes is actually the misbehaving one. In the future, we plan to rely on co-operative detection in order to replace such a post-analysis.
Simulation phases: The experiment was done in four phases.
1. 20 independent Glomosim runs were done for one of {10, 30, 50%} misbehavior levels and "normal" traffic. Normal means that no misbehavior took place.
2. Self and non-self antigen computation (encoding).
3. The 20 "normal" traffic runs were used to compute detectors. Given the 28 windows and 20 runs, the sample size was 20×28 = 560, i.e. detectors at each node were discriminated against 560 self antigens.
4. Using the runs with {10, 30, 50%} misbehavior levels, the process shown in Fig. 2 was used for detection; we restricted ourselves to nodes that had in both the normal and misbehavior traffic at least a certain number of data packets to forward (packet threshold).
The experiment was then repeated with different r, desired number of detectors and misbehavior level.
The parameters for this experiment are summarized in Fig. 4. The injection rate and packet sizes were chosen in order to comply with usual data rates of sensors (e.g. 38.4kbps for Mica2; see [9]). We chose the Glomosim simulator [3] over other options (most notably ns2) because of its better scaling characteristics [6] and our familiarity with the tool.
Results Evaluation
When evaluating our results we define two additional constraints:
C1. We define a node to be detected as misbehaving if it gets flagged in at least 14 out of the 28 possible windows. This notion indirectly defines the time until a node is pronounced to be misbehaving. We call this a window threshold.
C2. A node s_i has to forward on average at least m packets over the 20 runs in both the "normal" and the misbehavior cases in order to be included in our statistics. This constraint was set in order to make the detection process more reliable. It is dubious to flag a neighboring node of s_i as misbehaving if this is based on "normal" runs or runs with misbehavior in which node s_i had no data packets to forward (it was not on a routing path). We call this a packet threshold; m was in our simulations chosen from {500, 1000, 2000, 4000}. Example: for a fixed set of input parameters, a node forwarded in the "normal" runs on average 1,250 packets and in the misbehavior runs (with e.g. level 30%) 750 packets. The node would be considered for misbehavior detection if m = 500, but not if m ≥ 1000. In other words, a node has to get a chance to learn what is "normal" and then to use this knowledge on a non-empty packet stream.
Panel captions for Figures 6-8: Fig. 6(b) rate of non-valid detectors (ci_{95%} < 1% for r ≤ 13; sample size not significant for r ≥ 16); Fig. 6(c) number of iterations needed to compute the desired number of detectors (ci_{95%} < 1% for r ≥ 10, < 2% for r = 7); Fig. 7(a) detection rate vs packet threshold (ci_{95%} ranges: 3.8-19.8% at misbehavior level 10%, 11.9-15.9% at 30%, 11.0-14.2% at 50%); Fig. 8(b) number of unique detectors that matched an antigen in a run (ci_{95%} = 6.5-10.1% for 7 ≤ r ≤ 13); Fig. 8(c) number of unique detectors that matched an antigen in a window, 28 windows per run (ci_{95%} < 0.16%).
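For illustration, the two constraints C1 and C2 can be written down directly; a short Python sketch (illustrative only, with our own function names):

def is_detected(flags_per_window, window_threshold=14):
    """C1: a node counts as detected if it was flagged in at least
    `window_threshold` of the 28 windows."""
    return sum(flags_per_window) >= window_threshold

def eligible(avg_packets_normal, avg_packets_misbehavior, packet_threshold):
    """C2: include a node in the statistics only if it forwarded at least
    `packet_threshold` packets on average in both the normal and the
    misbehavior runs."""
    return (avg_packets_normal >= packet_threshold and
            avg_packets_misbehavior >= packet_threshold)

# Example from the text: averages of 1,250 (normal) and 750 (misbehavior)
# packets pass the m = 500 threshold but fail m >= 1000.
print(eligible(1250, 750, 500), eligible(1250, 750, 1000))   # True False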
Overall Performance
The results related to the computation of detectors are shown in Figure 6. In our experiments we have considered the desired number of detectors to be at most 4,000; over this threshold the computational requirements might be too high for current sensor devices. We remind the reader that each time the r parameter is incremented by 1, the number of detectors should double in order to make the two cases comparable. Figure 6(a) shows the real time needed to compute the desired set of detectors. We can see that the real time necessary increases proportionally with the desired number of detectors; this complies with the theoretical results presented in [11]. Figure 6(b) shows the percentage of non-valid detectors, i.e. candidate detectors that were found to match a self string (see Figure 1). This result points to where the optimal operating point of an AIS might lie with respect to the choice of the r parameter and the choice of a fixed number of detectors to compute. We remind the reader that the larger the r parameter, the smaller the probability that a detector will match a self string. Therefore, the overhead connected with choosing the r parameter prohibitively small should be considered when designing an AIS. Figure 6(c) shows the total number of generate-and-test tries needed for the computation of a detector set of a fixed size; the 95% confidence interval is less than 2%.
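The remark that the number of detectors should roughly double each time r is incremented by 1 can be illustrated with the standard estimates for the r-contiguous rule from the negative-selection literature; the Python sketch below is ours and purely illustrative (the failure probability is an arbitrary choice, and the absolute numbers are not taken from the paper):

import math

def p_match(l, r):
    """Standard approximation for the probability that two random bit-strings
    of length l match under the r-contiguous rule: 2^-r * ((l - r)/2 + 1)."""
    return 2.0 ** (-r) * ((l - r) / 2.0 + 1.0)

def detectors_needed(l, r, p_fail=0.1):
    """Approximate number of detectors so that a random non-self string
    escapes detection with probability at most p_fail."""
    return -math.log(p_fail) / p_match(l, r)

for r in (9, 10, 11):
    # p_match roughly halves for each increment of r, so the detector
    # count needed for the same coverage roughly doubles.
    print(r, p_match(50, r), round(detectors_needed(50, r)))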
In Figure 7(a) we show the dependence of detection ratio on the packet threshold. We conclude that except for some extremely low threshold values (not shown) the detection rate stays constant. This figure also shows that when misbehavior level was set very low, i.e. 10%, the AIS struggled to detect misbehaving nodes. This is partly a result of our coarse encoding with only 10 different levels.
At the 30 and 50% misbehavior levels the detection rate stays solid at about 70-85%. The range of the 95% confidence interval of the detection rate is 3.8-19.8%. The fact that the detection rate did not get closer to 100% suggests that either the implemented genes are not sufficient, detection should be extended to protocols at other layers of the OSI protocol stack, a different ordering of genes should have been applied, or our ten-level encoding was too coarse. It also implies that watchdog based genes (though they perfectly fit the implemented misbehavior) should not be used in isolation and, in general, that the choice of genes has to be very careful. Figure 7(b) shows the impact of r on the detection rate. When r = {7, 10} the AIS performs well; for r > 10 the detection rate decreases. This is caused by the inadequate numbers of detectors used at higher levels of r (we limited ourselves to max. 4,000 detectors). Figure 7(c) shows the number of false positives. We remind the reader that in our definition false positives are both nodes that do not drop any packets and nodes that drop packets due to other reasons than misbehavior.
In a separate experiment we studied whether the 4-hour (560 samples) simulation time was enough to capture the diversity of the self behavior. This was done by trying to detect misbehavior in 20 independent misbehavior-free Glomosim runs (different from those used to compute detectors). We report that we did not observe a single case of an autoimmune reaction.
Detailed Performance
In Fig. 8(a) we show the total number of runs in which a node was identified as misbehaving. The steep decline for values r > 10 (in this and other figures) documents that in these cases it was necessary to produce a higher number of detectors in order to cover the non-self antigen space. The higher the r, the higher the specificity of a detector; this means that it is able to match a smaller set of non-self antigens.
In Fig. 8(b) and (c) we show the number of detectors that got matched during the detection phase (see Fig. 2). Fig. (b) shows the number of detectors matched per run, Fig. (c) shows the number of detectors matched per window. Fig. (b) is an upper estimate on the number of unique detectors needed in a single run. Given that the total number of detectors was 2,000, fewer than 5% of the detectors would get used in the detection phase. The tight confidence intervals for the number of unique detectors matched per window (see Fig. (c); for practical reasons we show ci_{95%} only for 7 ≤ r ≤ 13) are a direct consequence of the small variability of antigens as shown in Fig. 9(a). Fig. 9(a) shows the number of unique antigens that were subject to classification into self or non-self. The average for r = {7, 10} is about 1.5. This fact does not directly imply that the variability of the data traffic would be inadequate. It is rather a direct consequence of our choice of genes and their encoding (we only used 10 value levels for encoding). Fig. 9(b) shows the number of matches between a detector and an antigen in the following way. When a detector under the r-contiguous matching rule matches only a single gene within an antigen, we would increment the "single" counter. Otherwise, we would increment the "multiple" counter. It is obvious that with increasing r, it gets more and more probable that a detector would match more than a single gene. The interesting fact is that the detection rate for both r = 7 and r = 10 is about 80% (see Fig. 7(a)) and that the rate of non-valid detectors is very different (see Fig. 6(b)). This means that an interaction between genes has positively affected the latter performance measure, without sacrificing the former one. This leads to the conclusion that genes should not be considered in isolation. Fig. 9(c) shows the performance of Gene #1. The number of matches shows that this gene contributed to the overall detection performance of our AIS. Figs. 10(a-c) sum up the performance of the five genes for different values of r. Again, an interesting fact is the contribution of Gene #1 to the overall detection performance. The usefulness of Gene #2 was largely expected as this gene was tailored for the kind of misbehavior that we implemented. The other three genes came out as marginally useful. The importance of the somewhat surprising performance of Gene #1 is that it can be computed in a simplistic way and does not require continuous operation of a node.
The Impact of Data Traffic Pattern
In an additional experiment, we examined the impact of the data traffic pattern on the performance. We used two different data traffic models: constant bit rate (CBR) and Poisson distributed data traffic. In many scenarios, sensors are expected to take measurements at constant intervals and, subsequently, send them out for processing. This would create constant bit rate traffic. Poisson distributed traffic could be a result of sensors taking measurements in an event-driven fashion. For example, a sensor would take a measurement only when a target object (e.g. a person) happens to be in its vicinity.
The setup for this experiment was similar to that presented in Fig. 4 with the additional fact that the data traffic model would now become an input parameter. With the goal to reduce complexity of the experimental setup, we fixed r = 10 and we only considered cases with 500 and 2000 detectors. In order to match the CBR traffic rate, the Poisson distributed data traffic model had a mean arrival expectation of 1 packet per second (λ = 1.0). As in the case with CBR, we computed the detection rate and the rate of false positives with the associated arithmetic averages and 95% confidence intervals.
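For illustration, packet arrival times for the two traffic models can be generated as follows (a Python sketch; the CBR rate of one packet per second matches the λ = 1.0 used for the Poisson model):

import random

def cbr_arrivals(rate, duration):
    """Constant bit rate: one packet every 1/rate seconds."""
    step = 1.0 / rate
    return [k * step for k in range(1, int(duration * rate) + 1)]

def poisson_arrivals(lam, duration):
    """Poisson process with rate lam: exponentially distributed
    inter-arrival times, here lam = 1.0 packet per second on average."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(lam)
        if t > duration:
            return times
        times.append(t)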
The results based on these two traffic models were similar; in fact, we could not find the difference between them to be statistically significant. This points out that the detection process is robust against some variation in the data traffic. This conclusion also reflects positively on the usefulness of the genes used. More importantly, it helped dispel our worries that the results presented in this experimental study could be unacceptably dependent on the data traffic.
Related Work
In [26,24] the authors introduced an AIS based misbehavior detection system for ad hoc wireless networks. They used Glomosim for simulating data traffic, their setup was an area of 800×600m with 40 mobile nodes (speed 1 m/s) of which 5-20 are misbehaving; the routing protocol was DSR. Four genes were used to capture local behavior at the network layer. The misbehavior implemented is a subset of misbehavior introduced in this paper; their observed detection rate is about 55%. Additionally, a co-stimulation in the form of a danger signal was used in order to inform nodes on a forwarding path about misbehavior, thus propagating information about misbehaving nodes around the network.
In [17] the authors describe an AIS able to detect anomalies at the transport layer of the OSI protocol stack; only a wired TCP/IP network is considered. Self is defined as normal pairwise connections. Each detector is represented as a 49-bit string. The pattern matching is based on r-contiguous bits with a fixed r = 12.
Ref. [23] discusses a network intrusion system that aims at detecting misbehavior by capturing TCP packet headers. They report that their AIS is unsuitable for detecting anomalies in communication networks. This result is questioned in [4] where it is stated that this is due to the choice of problem representation and due to the choice of matching threshold r for r-contiguous bits matching.
To overcome the deficiencies of the generate-and-test approach a different approach is outlined in [22]. Several signals each having a different function are employed in order to detect a specific misbehavior in sensor wireless networks. Unfortunately, no performance analysis was presented and the properties of these signals were not evaluated with respect to their misuse.
The main discerning factor between our work and the works shortly discussed above is that we carefully considered hardware parameters of current sensor devices, that the set of input parameters was designed to specifically target sensor networks, and that our simulation setup reflects structural qualities of such networks with regard to the existence of multiple independent routing paths. In comparison to [26,24] we showed that in the case of static sensor networks it is reasonable to expect the detection rate to be above 80%.
Conclusions and Future Work
Although we answered some basic questions on the suitability and feasibility of AIS for detecting misbehavior in sensor networks, a few questions remain open.
The key question in the design of an AIS is the quantity, quality and ordering of the genes that are used for measuring behavior at nodes. To answer this question a detailed formal analysis of communications protocols will be needed. The set of genes should be as "complete" as possible with respect to any possible misbehavior. The choice of genes should impose a high degree of sensor network survivability, defined as the capability of a system to fulfill its mission in a timely manner, even in the presence of attacks, failures or accidents [27]. It is therefore of paramount importance that the sensor network's mission is clearly defined and achievable under normal operating conditions. We showed the influence and usefulness of certain genes for detecting misbehavior and the impact of the r parameter on the detection process. In general, the results in Fig. 10 show that Genes #1 and #2 obtained the best results of all genes, with Gene #2 always showing the best results. The contribution of Gene #1 suggests that observing the MAC layer and the ratio of complete handshakes to the number of RTS packets sent is useful for the implemented misbehavior.
Gene #2 fits perfectly for the implemented misbehavior. It therefore comes as no surprise that this gene showed the best results in the detection process. The question which remains open is whether the two genes are still as useful when exposed to different attack patterns.
It is currently unclear whether genes that performed well with negative selection will also be appropriate for generating different flavors of signals as suggested within the danger theory [1,16]. It is our opinion that any set of genes, whether used with negative selection or for generating any such signal, should aim at capturing intrinsic properties of the interaction among different components of a given sensor network. This contradicts the approaches applied in [26,22], where the genes are closely coupled with a given protocol. The reason for this statement is the combined performance of Genes #1 and #2. Their interaction can be understood as follows: data packet dropping implies less medium contention since there are fewer data packets to be forwarded. Fewer data packets to forward, on the other hand, implies easier access to the medium, i.e. the number of complete MAC handshakes should increase. This is an interesting complementary relationship since, in order to deceive these two genes, a misbehaving node has to appear to be correctly forwarding data packets and, at the same time, should not significantly modify the "game" of medium access.
It is improbable that the misbehaving node alone would be able to estimate the impact of dropped packets on the contention level. Therefore, he lacks an important feedback mechanism that would allow him to keep the contention level unchanged. For that, he would need to act in collusion with other nodes. The property of complementarity moves the burden of excessive communication from normally behaving nodes to misbehaving nodes, thus, exploiting the ad hoc (local) nature of sensor networks. Our results thus imply, a "good" mixture of genes should be able to capture interactions that a node is unable to influence when acting alone. It is an open question whether there exist other useful properties of genes, other than complementarity.
We conclude that the random-generate-and-test process, with no knowledge of the used protocols and their behavior, creates many detectors which might prove to be superfluous in detecting misbehavior. A process with some basic knowledge of protocol limitations might lead to an improved quality of detectors. In [28] the authors stated that the random-generate-and-test process "is inefficient, since a vast number of randomly generated detectors need to be discarded, before the required number of the suitable ones are obtained". Our results show that at r = 10, the rate of discarded detectors is less than 4%. Hence, at least in our setting, we could not confirm the above statement. A disturbing fact is, however, that the size of the self set in our setting was probably too small to justify the use of negative selection. A counter-balancing argument here is the realistic setup of our simulations and the decent detection rate.
We would like to point out that the Fisher iris and biomedical data sets, used in [28] to argue about the appropriateness of negative selection for anomaly detection, could be very different from the data sets generated by our simulations. Our experiments show that anomaly (misbehavior) data sets based on sensor networks can in general be very sparse. This effect can be due to the limiting nature of communications protocols. Since the Fisher iris and biomedical data sets were in [28] not evaluated with respect to some basic properties, e.g. the degree of clustering, it is hard to compare our results with the results presented therein.
In order to understand the effects of misbehavior better (e.g. the propagation of certain adverse effects), we currently develop a general framework for AIS to be used within the JiST/SWANS network simulator [6].
| 8,253 |
0906.3461
|
2150496197
|
A sensor network is a collection of wireless devices that are able to monitor physical or environmental conditions. These devices are expected to operate autonomously, be battery powered and have very limited computational capabilities. This makes the task of protecting a sensor network against misbehavior or possible malfunction a challenging problem. In this document we discuss performance of Artificial immune systems (AIS) when used as the mechanism for detecting misbehavior. We concentrate on performance of respective genes; genes are necessary to measure a network's performance from a sensor's viewpoint. We conclude that the choice of genes has a profound influence on the performance of the AIS. We identified a specific MAC layer based gene that showed to be especially useful for detection. We also discuss implementation details of AIS when used with sensor networks.
|
To overcome the deficiencies of the generate-and-test approach a different approach is outlined in @cite_4 . Several signals each having a different function are employed in order to detect a specific misbehavior in sensor wireless networks. Unfortunately, no performance analysis was presented and the properties of these signals were not evaluated with respect to their misuse.
|
{
"abstract": [
"There is a list of unique immune features that are currently absent from the existing artificial immune systems and other intelligent paradigms. We argue that some of AIS features can be inherent in an application itself, and thus this type of application would be a more appropriate substrate in which to develop and integrate the benefits brought by AIS. We claim here that sensor networks are such an application area, in which the ideas from AIS can be readily applied. The objective of this paper is to illustrate how closely a Danger Theory based AIS – in particular the Dendritic Cell Algorithm matches the structure and functional requirements of sensor networks. This paper also introduces a new sensor network attack called an Interest Cache Poisoning Attack and discusses how the DCA can be applied to detect this attack."
],
"cite_N": [
"@cite_4"
],
"mid": [
"2139772053"
]
}
|
AIS for Misbehavior Detection in Wireless Sensor Networks: Performance and Design Principles
|
Sensor networks [21] can be described as a collection of wireless devices with limited computational abilities which are, due to their ad hoc manner of communication, vulnerable to misbehavior and malfunction. It is therefore necessary to support them with a simple, computationally friendly protection system.
Due to the limitations of sensor networks, there has been an on-going interest in providing them with a protection solution that would fulfill several basic criteria. The first criterion is the ability of self-learning and self-tuning. Because maintenance of ad hoc networks by a human operator is expected to be sporadic, they have to have a built-in autonomous mechanism for identifying user behavior that could be potentially damaging to them. This learning mechanism should itself minimize the need for human intervention, therefore it should be self-tuning to the maximum extent. It must also be computationally conservative and meet the usual condition of a high detection rate. The second criterion is the ability to undertake an action against one or several misbehaving users. This should be understood in the wider context of co-operating wireless devices acting in collusion in order to suppress or minimize the adverse impact of such misbehavior. Such co-operation should have a low message complexity because both bandwidth and battery life are scarce. The third and last criterion requires that the protection system does not itself introduce new weaknesses to the systems that it should protect.
An emerging solution that could facilitate implementation of the above criteria is the Artificial immune system (AIS). AIS are based on principles adapted from the Human immune system (HIS) [18,5,17]; the basic ability of HIS is an efficient detection of potentially harmful foreign agents (viruses, bacteria, etc.). The goal of AIS, in our setting, is the identification of nodes with behavior that could possibly negatively impact the stated mission of the sensor network.
One of the key design challenges of AIS is to define a suitable set of efficient genes. Genes form a basis for deciding whether a node misbehaves. They can be characterized as measures that describe a network's performance from a node's viewpoint. Given their purpose, they must be easy to compute and robust against deception.
Misbehavior in wireless sensor networks can take different forms: packet dropping, modification of data structures important for routing, modification of packets, skewing of the network's topology or creating fictitious nodes (see [13] for a more complete list). The reason for sensors (possibly fully controlled by an attacker) to execute any form of misbehavior can range from the desire to save battery power to making a given wireless sensor network non-functional. Malfunction can also be considered a type of unwanted behavior.
Artificial Immune Systems
Learning
The process of T-cell maturation in the thymus is used as an inspiration for learning in AIS. The maturation of T-cells (detectors) in the thymus is the result of a pseudo-random process. After a T-cell is created (see Fig. 1), it undergoes a censoring process called negative selection. During negative selection, T-cells that bind self are destroyed. The remaining T-cells are introduced into the body. The recognition of non-self is then done by simply comparing T-cells that survived negative selection with a suspected non-self. This process is depicted in Fig. 2. It is possible that the self set is incomplete while a T-cell matures in the thymus (the tolerization period). This can lead to the release of T-cells that should have been removed from the thymus and can cause an autoimmune reaction, i.e. it leads to false positives.
A deficiency of the negative selection process is that alone it is not sufficient for assessing the damage that a non-self antigen could cause. For example, many bacteria that enter our body are not harmful, therefore an immune reaction is not necessary. T-cells, actors of the adaptive immune system, require co-stimulation from the innate immune system in order to start acting. The innate immune system is able to recognize the presence of harmful non-self antigens and tissue damage, and signal this to certain actors of the adaptive immune system.
The random-generate-and-test approach for producing T-cells (detectors) described above is analyzed in [11]. In general, the number of candidate detectors needs to be exponential in the size of the self set (if a matching rule with a fixed matching probability is used). Another problem is a consistent underfitting of the non-self set; there exist "holes" in the non-self set that are undetectable. In theory, for some matching rules, the number of holes can be very unfavorable [28]. In practical terms, the effect of holes depends on the characteristics of the non-self set, the representation and the matching rule [15]. The advantage of this algorithm is its simplicity and good experimental results in cases when the number of detectors to be produced is fixed and small [26]. A review of other approaches to detector computation can be found in [2].
Sensor Networks
A sensor network can be defined in a graph-theoretic framework as follows: a sensor network is a net N = (n(t), e(t)), where n(t), e(t) are the set of nodes and edges at time t, respectively. Nodes correspond to sensors that wish to communicate with each other. An edge between two nodes A and B is said to exist when A is within the radio transmission range of B and vice versa. The imposed symmetry of edges is a usual assumption of many mainstream protocols. The change in the cardinality of the sets n(t), e(t) can be caused by switching on/off one of the sensors, failure, malfunction, removal, signal propagation, link reliability and other factors.
Data exchange in a point-to-point (uni-cast) scenario usually proceeds as follows: a user initiated data exchange leads to a route query at the network layer of the OSI stack. A routing protocol at that layer attempts to find a route to the data exchange destination. This request may result in a path of non-unit length. This means that a data packet, in order to reach the destination, has to rely on successive forwarding by intermediate nodes on the path. An example of an on-demand routing protocol often used in sensor networks is DSR [20]. Route search in this protocol is started only when a route to a destination is needed. This is done by flooding the network with RREQ (Route Request) control packets. The destination node or an intermediate node that knows a route to the destination will reply with an RREP (Route Reply) control packet. This RREP follows the route back to the source node and updates routing tables at each node that it traverses. An RERR (Route Error) packet is sent to the connection originator when a node finds out that the next node on the forwarding path is not replying.
At the MAC layer of the OSI protocol stack, the medium reservation is often contention based. In order to transmit a data packet, the IEEE 802.11 MAC protocol uses carrier sensing with an RTS-CTS-DATA-ACK handshake (RTS = Ready to send, CTS = Clear to send, ACK = Acknowledgment). Should the medium not be available or the handshake fail, an exponential back-off algorithm is used. This is combined with a mechanism that makes it easier for neighboring nodes to estimate transmission durations. This is done by an exchange of duration values and their subsequent storing in a data structure known as the Network allocation vector (NAV). With the goal of saving battery power, researchers have suggested that a sleep-wake-up schedule for nodes would be appropriate. This means that nodes do not listen continuously to the medium, but switch themselves off and wake up again after a predetermined period of time. Such a sleep and wake-up schedule is, similarly to the duration values, exchanged among nodes. An example of a MAC protocol designed specifically for sensor networks that uses such a schedule is the S-MAC [29]. A sleep and wake-up schedule can severely limit operation of a node in promiscuous mode. In promiscuous mode, a node listens to the on-going traffic in the neighborhood and collects information from the overheard packets. This technique is used e.g. in DSR for improved propagation of routing information. Movement of nodes can be modeled by means of a mobility model. A well-known mobility model is the Random waypoint model [20]. In this model, nodes move from the current position to a new randomly generated position at a predetermined speed. After reaching the new destination a new random position is computed. Nodes pause at the current position for a time period t before moving to the new random position.
For more information on sensor networks, we refer the reader to [21].
Summary of Results
Motivated by the positive results reported in [17,26] we have undertaken a detailed performance study of AIS with focus on sensor networks. The general conclusions that can be drawn from the study presented in this document are:
1. Given the ranges of input parameters that we used and considering the computational capabilities of current sensor devices, we conclude that AIS based misbehavior detection offers a decent detection rate.
2. One of the main challenges in designing well performing AIS for sensor networks is the set of "genes". This is similar to observations made in [24].
3. Our results suggest that to increase detection performance, an AIS should benefit from information available at all layers of the OSI protocol stack; this also applies to the detection of a simplistic flavor of misbehavior such as packet dropping. This supports ideas briefly discussed in [30], where the authors suggest that information available at the application layer deserves more attention.
4. We observed that, somewhat surprisingly, a gene based purely on the MAC layer significantly contributed to the overall detection performance. This gene poses fewer limitations when a MAC protocol with a sleep-wake-up schedule, such as the S-MAC [29], is used.
5. It is desirable to use genes that are "complementary" with respect to each other. We demonstrated that two genes, one that measures correct forwarding of data packets and one that indirectly measures the medium contention, have exactly this property.
6. We only used a single instance of the learning and detection mechanism per node. This is different from the approach used in [17,26], where one instance was used for each of m possible neighbors. Our performance results show that the approach in [17,26] may not be feasible for sensor networks. It may allow for an easy Sybil attack and, in general, m = n − 1 instances might be necessary, where n is the total number of sensors in the network. Instead, we suggest that flagging a node as misbehaving should, if possible, be based on detection at several nodes.
7. Less than 5% of the detectors were used in detecting misbehavior. This suggests that many of the detectors do not comply with constraints imposed by the communications protocols; this is an important fact when designing AIS for sensor networks because the memory capacity at sensors is expected to be very limited.
8. The data traffic properties seem not to impact the performance. This is demonstrated by similar detection performance when data traffic is modeled as a constant bit rate stream and as a Poisson distributed data packet stream, respectively.
9. We were unable to distinguish between nodes that misbehave (e.g. deliberately drop data packets) and nodes with a behavior resembling misbehavior (e.g. nodes that drop data packets due to medium contention). This motivates the use of danger signals as described in [1,16]. The approach applied in [26] does, however, not completely fit sensor networks, since these might implement only a simplified version of the transport layer.
AIS for Sensor Networks: Design Principles
In our approach, each node produces and maintains its own set of detectors. This means that we applied a direct one-to-one mapping between a human body with a thymus and a node. We represent self, non-self and detector strings as bit-strings. The matching rule employed is the r-contiguous bits matching rule. Two bitstrings of equal length match under the r-contiguous matching rule if there exists a substring of length r at position p in each of them and these substrings are identical. Detectors are produced by the process shown in Fig. 1, i.e. by means of negative selection when detectors are created randomly and tested against a set of self strings. Each antigen consists of several genes. Genes are performance measures that a node can acquire locally without the help from another node. In practical terms this means that an antigen consists of x genes; each of them encodes a performance measure, averaged in our case over a time window. An antigen is then created by concatenating the x genes.
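To make the matching rule and the generate-and-test process concrete, the following sketch illustrates both; it is a simplified illustration rather than the exact implementation used in our simulator, and the parameter values (bit-string length, number of detectors, retry limit) are placeholders.

```python
import random

def r_contiguous_match(a, b, r):
    """True if the equal-length bit-strings a and b agree on r contiguous
    bits at the same position (the r-contiguous matching rule)."""
    assert len(a) == len(b)
    return any(a[p:p + r] == b[p:p + r] for p in range(len(a) - r + 1))

def negative_selection(self_set, num_detectors, l, r, max_tries=1_000_000):
    """Random generate-and-test: keep only candidates that match no self string."""
    detectors = []
    tries = 0
    while len(detectors) < num_detectors and tries < max_tries:
        tries += 1
        candidate = ''.join(random.choice('01') for _ in range(l))
        if not any(r_contiguous_match(candidate, s, r) for s in self_set):
            detectors.append(candidate)
    return detectors

def is_non_self(antigen, detectors, r):
    """An antigen is classified as non-self if any detector matches it."""
    return any(r_contiguous_match(d, antigen, r) for d in detectors)
```

With l = 50 (five 10-bit genes) and r = 10, this corresponds to the configuration evaluated later in this document.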
When choosing the correct genes, the choice is limited due to the simplified OSI protocol stack of sensors. For example, Mica2 sensors [9] using the TinyOS operating system do not guarantee any end-to-end connection reliability (transport layer), leaving only data traffic at the lower layers for consideration.
Let us assume that the routing protocol finds for a connection the path s s , s 1 , ..., s i , s i+1 , s i+2 , ..., s d from the source node s s to the destination node s d , where s s ≠ s d and s i+1 ≠ s d . We have used the following genes to capture certain aspects of MAC and routing layer traffic information (we averaged over a time period (window size) of 500 seconds):
MAC Layer:
#1 Ratio of complete MAC layer handshakes between nodes s i and s i+1 to RTS packets sent by s i to s i+1 . If there is no traffic between the two nodes, this ratio is set to ∞ (a large number). This ratio is averaged over a time period. A complete handshake is defined as a completed sequence of RTS, CTS, DATA, ACK packets between s i and s i+1 .
#2 Ratio of data packets sent from s i to s i+1 and subsequently forwarded by s i+1 to s i+2 , to all data packets sent from s i to s i+1 . If there is no traffic between the two nodes, this ratio is set to ∞ (a large number). This ratio is computed by s i in promiscuous mode and, as in the previous case, averaged over a time period. This gene was adapted from the watchdog idea in [25].
#3 Time delay that a data packet spends at s i+1 before being forwarded to s i+2 . The time delay is observed by s i in promiscuous mode. If there is no traffic between two nodes the time delay is set to zero. This measure is averaged over a time period. This gene is a quantitative extension of the previous gene.
Routing Layer:
#4 The same ratio as in #2 but computed separately for RERR routing packets.
#5 The same delay as in #3 but computed separately for RERR routing packets.
Gene #1 can be characterized as MAC layer quality oriented: it indirectly measures the medium contention level. The remaining genes are watchdog oriented. This means that they fit a certain kind of misbehavior more strictly. Gene #2 can help detect whether packets get correctly forwarded; Gene #3 can help detect whether forwarding of packets gets intentionally delayed. As we will show later, for the particular type of misbehavior (packet dropping) that we applied, the first two genes come out as "the strongest". The disadvantage of the watchdog based genes is that, due to limited battery power, nodes could operate using a sleep-wake-up schedule similar to the one used in S-MAC. This would mean that the node s i has to stay awake until the node s i+1 (the monitored node) correctly transmits to s i+2 . The consequence would be a longer wake-up time and possible restrictions in publishing sleep-wake-up schedules.
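As an illustration of how Genes #1 and #2 could be computed from locally observable counters, consider the sketch below; the counter names and the treatment of the no-traffic case as a large constant are our own simplifying assumptions, not the exact bookkeeping of our implementation.

```python
INF = 1e9  # placeholder for the "infinity" value used when there is no traffic

def gene1(complete_handshakes, rts_sent):
    """Gene #1: complete RTS-CTS-DATA-ACK handshakes with the next hop
    divided by RTS packets sent to it, averaged over a time window."""
    return INF if rts_sent == 0 else complete_handshakes / rts_sent

def gene2(packets_sent_to_next_hop, packets_forwarded_by_next_hop):
    """Gene #2 (watchdog): data packets the monitored node forwarded onward
    divided by the data packets it received from us, observed in
    promiscuous mode over a time window."""
    if packets_sent_to_next_hop == 0:
        return INF
    return packets_forwarded_by_next_hop / packets_sent_to_next_hop
```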
In [24] the authors applied a different set of genes, based only on the DSR routing protocol. The observed set of events was the following: A = RREQ sent, B = RREP sent, C = RERR sent, D = DATA sent and IP source address is not of the monitored (neighboring) node, E = RREQ received, F = RREP received, G = RERR received, H = DATA received and the IP destination address is not of the monitored node. The events D and H take into consideration that the source and destination nodes of a connection might appear as misbehaving as they seem to "deliberately" create and delete data packets. Their four genes are then defined as combinations (sequences) of these events; the time period (window size) in their case was 10s, and * is the Kleene star operator (zero or more occurrences of any event(s) are possible). Similar to our watchdog genes, these genes impose additional requirements on MAC protocols such as the S-MAC. Their dependence on the operation in promiscuous mode is, however, more pronounced, as a node has to continuously observe packet events at all monitored nodes.
The research in the area of what and to what extent can be or should be locally measured at a node, is independent of the learning mechanism used (negative selection in both cases). Performance of an AIS can partly depend on the ordering and the number of used genes. Since longer antigens (consisting of more genes) indirectly imply more candidate detectors, the number of genes should be carefully considered. Given x genes, it is possible to order them in x! different ways. In our experience, the rules for ordering genes and the number of genes can be summed up as follows:
1) Keep the number of genes small. In our experiments, we show that with respect to the learning mechanism used and the expected deployment (sensor networks), 2-3 genes are enough for detecting a basic type of misbehavior.
2) Order genes either randomly or use a predetermined fixed order. Defining a utility relation between genes and ordering genes with respect to it can, in general, lead to problems that are considered intractable. Our results suggest, however, that it is important to understand the relations between different genes, since genes are able to complement each other; this can lead to their increased mutual strength. On the other hand, random ordering adds to the robustness of the underlying AIS: it is more difficult for an attacker to deceive the system, since he does not know how the genes are being used. It is currently an open question how to arrive at a balanced solution.
3) Genes cannot be considered in isolation. Our experiments show that when a detector matched an antigen under the r-contiguous matching rule, the match usually spanned several genes. This motivates the design of matching rules that would not limit matching to a few neighboring genes and would offer more flexibility, while still requiring that a gene remains a partly atomic unit.
Learning and Detection
Learning and detection are done by applying the mechanisms shown in Figs. 1 and 2. The detection itself is very straightforward. In the learning phase, a misbehavior-free period (see [1] on possibilities for circumventing this problem) is necessary so that nodes get a chance to learn what normal behavior is. When implementing the learning phase, the designer gets to choose from two possibilities: 1) Learning and detection at a node get implemented for each neighboring node separately. This means that different antigens have to be computed for each neighboring node, detector computation is different for each neighboring node and, subsequently, detection is different for each neighboring node. The advantage of this approach is that the node is able to directly determine which neighboring node misbehaves; the disadvantage is that m instances (m is the number of neighbors, i.e. the node degree) of the negative selection mechanism have to be executed; this can be computationally prohibitive for sensor networks, as m can, in general, be as large as the total number of sensors. This allows for an easy Sybil attack [13] in which a neighbor would create several identities; the node would then be unable to recognize that these identities belong to the same neighbor. This approach was used in [26,24].
2) Learning and detection at a node get implemented in a single instance for all neighboring nodes. This means a node is able to recognize anomaly (misbehavior) but it may be unable to determine which one from the m neighboring nodes misbehaves. This implies that nodes would have to cooperate when detecting a misbehaving node, exchange anomaly information and be able to draw a conclusion from the obtained information. An argument for this approach is that in order to detect nodes that misbehave in collusion, it might be necessary to rely to some extent on information exchange among nodes, thus making this a natural solution to the problem. We have used this approach; a postprocessing phase (using the list of misbehaving nodes) was necessary to determine whether a node was correctly flagged as misbehaving or not.
We find the second approach to be better suited for wireless sensor networks, namely because it is less computationally demanding. We are unable, at this time, to estimate how frequently a complete detector set would have to be recomputed.
Both approaches can be classified within the four-layer architecture (Fig. 3) that we introduced in [14]. [Figure 3: A four-layer architecture aimed at protecting sensor networks against misbehavior and abuse; its layers, from bottom to top, are Data Collection and Preprocessing, Learning, Local and Cooperative Detection, and Local and Cooperative Response.] The lowermost layer, Data collection and preprocessing, corresponds to the genes' computation and antigen construction. The Learning layer corresponds to the negative selection process. The next layer, Local and cooperative detection, suggests that an AIS should benefit from both local and cooperative detection. Both our setup and the setup described in [26,24] apply only local detection. The uppermost layer, Local and cooperative response, implies that an AIS should also have the capability to undertake an action against one or several misbehaving nodes; this should be understood in the wider context of co-operating wireless devices acting in collusion in order to suppress or minimize the adverse impact of such misbehavior. To the best of our knowledge, there is currently no AIS implementation for sensor networks taking advantage of this layer.
Which r is the correct one? An interesting technical problem is to tune the r parameter for the r-contiguous matching rule so that the underlying AIS offers good detection and false positives rates. One possibility is a lengthy simulation study such as this one. Through multi-parameter simulation we were able to show that r = 10 offers the best performance for our setup. In [12] we experimented with the idea of "growing" and "shrinking" detectors; this idea was motivated by [19]. The initial r 0 for a growing detector can be chosen as r 0 = ⌈l/2⌉, where l is the detector length. The goal is to find the smallest r such that a candidate detector does not match any self antigen. This means that, initially, a larger (more specific) r is chosen; the smallest r that fulfills the above condition can then be found through binary search. For shrinking detectors, the approach is reciprocal. Our goal was to show that such growing or shrinking detectors would offer a better detection or false positives rate. Short of proving this in a statistically significant manner, we observed that the growing detectors can be used for self-tuning the r parameter. The average r value was close to the r determined through simulation (the setup in that case was different from the one described in this document).
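A minimal sketch of the "growing detector" idea follows; as a simplification it searches over the whole range [1, l] instead of starting from r 0 = ⌈l/2⌉, and the matching helper is a plain re-statement of the r-contiguous rule.

```python
def matches_r_contiguous(a, b, r):
    """r-contiguous rule, re-stated here so the sketch is self-contained."""
    return any(a[p:p + r] == b[p:p + r] for p in range(len(a) - r + 1))

def smallest_safe_r(candidate, self_set):
    """Smallest r such that `candidate` matches no self antigen.
    The predicate is monotone in r (a match of length r + 1 contains a
    match of length r), so binary search applies.  Returns None when the
    candidate is identical to some self string."""
    l = len(candidate)
    def self_free(r):
        return not any(matches_r_contiguous(candidate, s, r) for s in self_set)
    if not self_free(l):
        return None
    lo, hi = 1, l            # invariant: self_free(hi) holds
    while lo < hi:
        mid = (lo + hi) // 2
        if self_free(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```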
Further Optimizations
Our experiments show that only a small number of detectors ever get used (less than 5%). The reason is that they are produced in a random way, without considering the structure of the protocols. For example, a detector that is able to detect whether i) data packets got correctly transmitted and ii) 100% of all MAC layer handshakes were incomplete is superfluous, as this case should never happen. In [8], the authors conclude: "... uniform coverage of non-self space is not only unnecessary, it is impractical; non-self space is too big". Application driven knowledge can be used to set up a rule based system that would exclude infeasible detectors; see [10] for a rule based system aimed at improved coverage of the non-self set. In [17], it is suggested that unused detectors should get deleted and the lifetime of useful detectors should be extended.
Misbehavior
In a companion paper [13], we have reviewed different types of misbehavior at the MAC, network and transport layers of the OSI protocol stack. We note that solutions to many of these attacks have already been proposed; these are however specific to a given attack. Additionally, due to the limitations of sensor networks, these solutions cannot be directly transferred.
The appeal of AIS based misbehavior detection rests on its simplicity and applicability in an environment that is extremely computationally and bandwidth limited. Misbehavior in sensor networks does not have to be executed by sensors themselves; one or several computationally more powerful platforms (laptops) can be used for the attack. On the other hand, protecting the network by means of such more advanced computational platforms is harder to imagine, due to e.g. the need to supply them continuously with electric power. They would also create a point of special interest for possible attackers.
Experimental Setup
The purpose of our experiments was to show that AIS are a viable approach for detecting misbehavior in sensor networks. Furthermore, we wanted to cast light on internal performance of an AIS designed to protect sensor networks. One of our central goals was to provide an in-depth analysis of relative usefulness of genes.
Definitions of input and output parameters: The input parameters for our experiments were: r parameter for the r-contiguous matching rule, the (desired) number of detectors and misbehavior level. Misbehavior was modeled as random packet dropping at selected nodes.
The performance (output) measures were arithmetic averages and 95% confidence intervals ci 95% of detection rate, number of false positives, real time to compute detectors, data traffic rate at nodes, number of iterations to compute detectors (number of random tries), number of non-valid detectors, number of different (unique) antigens in a run or a time window, and number of matches for each gene. The detection rate dr is defined as dr = dns/ns, where dns is the number of detected non-self strings and ns is the total number of non-self strings. A false positive in our definition is a string that is not self but can still be a result of an anomaly that is identical with the effects of a misbehavior. A non-valid detector is a candidate detector that matches a self string and must therefore be removed.
The number of matches for each gene was evaluated using the r-contiguous matching rule; we considered two cases: i) two bit-strings get matched from the left to the right and the first such match will get reported (matching gets interrupted), ii) two bit-strings get matched from the left to the right and all possible matches will get reported. The time complexity of these two approaches is O(r(l − r)) and Θ(r(l − r)), respectively; r ≤ l, where l is the bit-string length. The first approach is exactly what we used when computing the real time necessary for negative selection; the second approach was used when our goal was to evaluate the relative usefulness of each gene.
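One possible way to attribute a match to individual genes, assuming (as in our encoding) that every gene occupies a fixed 10-bit slice of the antigen, is sketched below; the function and variable names are ours.

```python
def matching_windows(detector, antigen, r, first_only=False):
    """Start positions where detector and antigen agree on r contiguous bits.
    first_only=True mirrors the early-exit variant used for timing; otherwise
    all matches are reported, as used for the per-gene evaluation."""
    positions = []
    for p in range(len(antigen) - r + 1):
        if detector[p:p + r] == antigen[p:p + r]:
            positions.append(p)
            if first_only:
                break
    return positions

def genes_hit(position, r, gene_bits=10, num_genes=5):
    """Indices of the genes that a match starting at `position` spans,
    assuming each gene occupies a fixed gene_bits-wide slice."""
    first = position // gene_bits
    last = min((position + r - 1) // gene_bits, num_genes - 1)
    return list(range(first, last + 1))
```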
Scenario description: We wanted to capture "self" and "non-self" packet traffic in a large enough synthetic static sensor network and test whether, using an AIS, we are able to recognize non-self, i.e. misbehavior. The topology of this network was determined by making a snapshot of 1,718 mobile nodes (each with a 100m radio radius) moving in an area of 2,900m×2,950m as prescribed by the Random waypoint mobility model; see Figure 5(a). The motivation for using this movement model and then creating a snapshot are the results in our previous paper [7], which deals with the structural robustness of sensor networks. Our preference was to use a slightly bigger network than might be necessary, rather than a network with unknown properties. The computational overhead is negligible; simulation real time mainly depends on the number of events that require processing. Idle nodes increase memory requirements, but memory availability at computers was in our case not a bottleneck.
We chose source and destination pairs for each connection so that several alternative independent routes exist; the idea was to benefit from route repair and route acquisition mechanisms of the DSR routing protocol, so that the added value of AIS based misbehavior detection is obvious.
We used 10 CBR (Constant bit rate) connections. The connections were chosen so that their length is ∼7 hops and so that these connections share some common intermediate nodes; see Figure 5(b). For each packet received or sent by a node we have captured the following information: IP header type (UDP, 802.11 or DSR in this case), MAC frame type (RTS, CTS, DATA, ACK in the case of 802.11), current simulation clock, node address, next hop destination address, data packet source and destination address, and packet size.
Encoding of self and non-self antigens: Each of the five genes was transformed into a 10-bit signature, where each bit corresponds to an interval of the gene-specific value range. We created self and non-self antigen strings by concatenation of the defined genes. Each self and non-self antigen therefore has a size of 50 bits. The interval representation was chosen in order to avoid carry-bits (Gray coding is an alternative solution).
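The interval encoding can be read as a one-hot indicator of the interval a gene value falls into; the sketch below follows this reading, and the bin boundaries shown are placeholders rather than the boundaries actually used per gene.

```python
def encode_gene(value, boundaries):
    """Map a gene value to a 10-bit signature with exactly one bit set,
    indicating which of the 10 value intervals the value falls into.
    `boundaries` holds the 9 inner interval edges."""
    assert len(boundaries) == 9
    bin_index = sum(value > b for b in boundaries)   # 0 .. 9
    bits = ['0'] * 10
    bits[bin_index] = '1'
    return ''.join(bits)

def encode_antigen(gene_values, boundaries_per_gene):
    """Concatenate the five 10-bit gene signatures into a 50-bit antigen."""
    return ''.join(encode_gene(v, b) for v, b in zip(gene_values, boundaries_per_gene))

# Made-up boundaries for a ratio-valued gene in [0, 1]:
example_bounds = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
print(encode_gene(0.37, example_bounds))  # -> '0001000000'
```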
Constructing the self and non-self sets: We have randomly chosen 28 non-overlapping 500-second windows in our 4-hour simulation. In each 500-second window, self and non-self antigens were computed for each node. This was repeated for 20 independent Glomosim runs.
Misbehavior modeling: Misbehavior is modeled as random data packet dropping (implemented at the network layer; a packet that should get dropped is simply not inserted into the IP queue); data packets include both data packets from the transport layer as well as routing protocol packets. We have randomly chosen 236 nodes and these were forced to drop {10, 30, 50%} of data packets. However, there were only 3-10 nodes with misbehavior and with a statistically significant number of packets for forwarding in each simulation run; see constraint C2 in Section 7.
Detection: A neighboring node gets flagged as misbehaving, if a detector from the detector set matches an antigen. Since we used a single learning phase, we had to complement this process with some routing information analysis. This allowed us to determine, which one from the neighboring nodes is actually the misbehaving one. In the future, we plan to rely on co-operative detection in order to replace such a post-analysis.
Simulation phases: The experiment was done in four phases.
1. 20 independent Glomosim runs were done for one of {10, 30, 50%} misbehavior levels and "normal" traffic. Normal means that no misbehavior took place.
2. Self and non-self antigen computation (encoding).
3. The 20 "normal" traffic runs were used to compute detectors. Given the 28 windows and 20 runs, the sample size was 20×28 = 560, i.e. detectors at each node were discriminated against 560 self antigens.
4. Using the runs with {10, 30, 50%} misbehavior levels, the process shown in Fig. 2 was used for detection; we restricted ourselves to nodes that had in both the normal and misbehavior traffic at least a certain number of data packets to forward (packet threshold).
The experiment was then repeated with different r, desired number of detectors and misbehavior level.
The parameters for this experiment are summarized in Fig. 4. The injection rate and packet sizes were chosen in order to comply with usual data rates of sensors (e.g. 38.4kbps for Mica2; see [9]). We chose the Glomosim simulator [3] over other options (most notably ns2) because of its better scaling characteristics [6] and our familiarity with the tool.
Results Evaluation
When evaluating our results we define two additional constraints:
C1. We define a node to be detected as misbehaving if it gets flagged in at least 14 out of the 28 possible windows. This notion indirectly defines the time until a node is pronounced to be misbehaving. We call this the window threshold.
[Displaced subfigure captions: (b) Rate of non-valid detectors; for r ≤ 13, ci 95% < 1%; for r ≥ 16 the sample size is not significant. (c) Number of iterations needed in order to compute the desired number of detectors; for r ≥ 10, ci 95% < 1%; for r = 7, ci 95% < 2%. (a) Detection rate vs packet threshold; confidence interval ranges: for misbehavior level 10%, ci 95% = 3.8-19.8%; for 30%, ci 95% = 11.9-15.9%; for 50%, ci 95% = 11.0-14.2%. (b) Number of unique detectors that matched an antigen in a run; confidence interval range for 7 ≤ r ≤ 13: ci 95% = 6.5-10.1%. (c) Number of unique detectors that matched an antigen in a window; each run has 28 windows; ci 95% < 0.16%.]
C2. A node s i has to forward on average at least m packets over the 20 runs in both the "normal" and misbehavior cases in order to be included in our statistics. This constraint was set in order to make the detection process more reliable. It is dubious to flag a neighboring node of s i as misbehaving if this is based on "normal" runs, or runs with misbehavior, in which node s i had no data packets to forward (it was not on a routing path). We call this the packet threshold; m was in our simulations chosen from {500, 1000, 2000, 4000}. Example: for a fixed set of input parameters, a node forwarded in the "normal" runs on average 1,250 packets and in the misbehavior runs (with e.g. level 30%) 750 packets. The node s i would be considered for misbehavior detection if m = 500, but not if m ≥ 1000. In other words, a node has to get a chance to learn what is "normal" and then to use this knowledge on a non-empty packet stream.
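The two constraints can be summarized in a small decision helper (a sketch; the function and variable names are ours, the thresholds are the ones stated above):

```python
def flagged_as_misbehaving(windows_flagged, window_threshold=14):
    """C1: report a node as misbehaving if it was flagged in at least
    `window_threshold` of the 28 observation windows."""
    return windows_flagged >= window_threshold

def included_in_statistics(avg_packets_normal, avg_packets_misbehavior, packet_threshold):
    """C2: evaluate only nodes that forwarded at least `packet_threshold`
    packets on average in both the normal and the misbehavior runs."""
    return (avg_packets_normal >= packet_threshold and
            avg_packets_misbehavior >= packet_threshold)

# The example from the text: 1,250 packets in the normal runs and 750 in the
# 30% misbehavior runs -> included at m = 500, excluded at m >= 1000.
assert included_in_statistics(1250, 750, 500)
assert not included_in_statistics(1250, 750, 1000)
```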
Overall Performance
The results related to the computation of detectors are shown in Figure 6. In our experiments we have considered the desired number of detectors to be at most 4,000; over this threshold the computational requirements might be too high for current sensor devices. We remind the reader that each time the r parameter is incremented by 1, the number of detectors should double in order to keep the two cases comparable. Figure 6(a) shows the real time needed to compute the desired set of detectors. We can see that the real time necessary increases proportionally with the desired number of detectors; this complies with the theoretical results presented in [11]. Figure 6(b) shows the percentage of non-valid detectors, i.e. candidate detectors that were found to match a self string (see Figure 1). This result points to where the optimal operating point of an AIS might lie with respect to the choice of the r parameter and the choice of a fixed number of detectors to compute. We remind the reader that the larger the r parameter, the smaller the probability that a detector will match a self string. Therefore the overhead connected with choosing the r parameter prohibitively small should be considered when designing an AIS. Figure 6(c) shows the total number of generate-and-test tries needed for the computation of a detector set of a fixed size; the 95% confidence interval is less than 2%.
In Figure 7(a) we show the dependence of detection ratio on the packet threshold. We conclude that except for some extremely low threshold values (not shown) the detection rate stays constant. This figure also shows that when misbehavior level was set very low, i.e. 10%, the AIS struggled to detect misbehaving nodes. This is partly a result of our coarse encoding with only 10 different levels.
At the 30 and 50% misbehavior levels the detection rate stays solid at about 70-85%. The range of the 95% confidence interval of the detection rate is 3.8-19.8%. The fact that the detection rate did not get closer to 100% suggests that either the implemented genes are not sufficient, that detection should be extended to protocols at other layers of the OSI protocol stack, that a different ordering of genes should have been applied, or that our ten-level encoding was too coarse. It also implies that watchdog based genes (though they perfectly fit the implemented misbehavior) should not be used in isolation and, in general, that the choice of genes has to be very careful. Figure 7(b) shows the impact of r on the detection rate. When r = {7, 10} the AIS performs well; for r > 10 the detection rate decreases. This is caused by the inadequate number of detectors used at higher levels of r (we limited ourselves to at most 4,000 detectors). Figure 7(c) shows the number of false positives. We remind the reader that in our definition false positives are both nodes that do not drop any packets and nodes that drop packets due to reasons other than misbehavior.
In a separate experiment we studied whether the 4-hour (560 samples) simulation time was enough to capture the diversity of the self behavior. This was done by trying to detect misbehavior in 20 independent misbehavior-free Glomosim runs (different from those used to compute detectors). We report that we did not observe a single case of an autoimmune reaction.
Detailed Performance
In Fig. 8(a) we show the total number of runs in which a node was identified as misbehaving. The steep decline for values r > 10 (in this and other figures) documents that in these cases it was necessary to produce a higher number of detectors in order to cover the non-self antigen space. The higher the r, the higher the specificity of a detector; this means that it is able to match a smaller set of non-self antigens.
In Fig. 8(b) and (c) we show the number of detectors that got matched during the detection phase (see Fig. 2). Fig. 8(b) shows the number of detectors matched per run, Fig. 8(c) shows the number of detectors matched per window. Fig. 8(b) is an upper estimate on the number of unique detectors needed in a single run. Given that the total number of detectors was 2,000, less than 5% of the detectors ever got used in the detection phase. The tight confidence intervals for the number of unique detectors matched per window (see Fig. 8(c); for practical reasons we show ci 95% only for 7 ≤ r ≤ 13) are a direct consequence of the small variability of antigens as shown in Fig. 9(a). Fig. 9(a) shows the number of unique antigens that were subject to classification into self or non-self. The average for r = {7, 10} is about 1.5. This fact does not directly imply that the variability of the data traffic would be inadequate. It is rather a direct consequence of our choice of genes and their encoding (we only used 10 value levels for encoding). Fig. 9(b) shows the number of matches between a detector and an antigen in the following way. When a detector under the r-contiguous matching rule matches only a single gene within an antigen, we increment the "single" counter. Otherwise, we increment the "multiple" counter. It is obvious that with increasing r, it gets more and more probable that a detector will match more than a single gene. The interesting fact is that the detection rate for both r = 7 and r = 10 is about 80% (see Fig. 7(a)) and that the rate of non-valid detectors is very different (see Fig. 6(b)). This means that an interaction between genes has positively affected the latter performance measure, without sacrificing the former one. This leads to the conclusion that genes should not be considered in isolation. Fig. 9(c) shows the performance of Gene #1. The number of matches shows that this gene contributed to the overall detection performance of our AIS. Figs. 10(a-c) sum up the performance of the five genes for different values of r. Again, an interesting fact is the contribution of Gene #1 to the overall detection performance. The usefulness of Gene #2 was largely expected, as this gene was tailored for the kind of misbehavior that we implemented. The other three genes came out as marginally useful. The importance of the somewhat surprising performance of Gene #1 is that it can be computed in a simplistic way and does not require continuous operation of a node.
The Impact of Data Traffic Pattern
In an additional experiment, we examined the impact of the data traffic pattern on the performance. We used two different data traffic models: constant bit rate (CBR) and Poisson distributed data traffic. In many scenarios, sensors are expected to take measurements at constant intervals and, subsequently, send them out for processing. This would create constant bit rate traffic. Poisson distributed traffic could be a result of sensors taking measurements in an event-driven fashion. For example, a sensor would take a measurement only when a target object (e.g. a person) happens to be in its vicinity.
The setup for this experiment was similar to that presented in Fig. 4, with the addition that the data traffic model now became an input parameter. With the goal of reducing the complexity of the experimental setup, we fixed r = 10 and we only considered cases with 500 and 2,000 detectors. In order to match the CBR traffic rate, the Poisson distributed data traffic model had a mean arrival expectation of 1 packet per second (λ = 1.0). As in the CBR case, we computed the detection rate and the rate of false positives with the associated arithmetic averages and 95% confidence intervals.
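A sketch of how the two packet arrival processes can be generated is given below (fixed inter-arrival times for CBR, exponential inter-arrival times with λ = 1.0 for the Poisson stream); the Glomosim traffic configuration itself is not reproduced here.

```python
import random

def cbr_arrivals(duration_s, rate_pps=1.0):
    """Constant bit rate: one packet every 1/rate_pps seconds."""
    step = 1.0 / rate_pps
    times, t = [], 0.0
    while t < duration_s:
        times.append(t)
        t += step
    return times

def poisson_arrivals(duration_s, lam=1.0):
    """Poisson stream: exponential inter-arrival times with mean 1/lam."""
    times, t = [], 0.0
    while True:
        t += random.expovariate(lam)
        if t >= duration_s:
            return times
        times.append(t)
```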
The results based on these two traffic models were similar; in fact, we could not find the difference between them to be statistically significant. This indicates that the detection process is robust against some variation in the data traffic. This conclusion also reflects positively on the usefulness of the used genes. More importantly, it helped dispel our worries that the results presented in this experimental study could be unacceptably data traffic dependent.
Related Work
In [26,24] the authors introduced an AIS based misbehavior detection system for ad hoc wireless networks. They used Glomosim for simulating data traffic, their setup was an area of 800×600m with 40 mobile nodes (speed 1 m/s) of which 5-20 are misbehaving; the routing protocol was DSR. Four genes were used to capture local behavior at the network layer. The misbehavior implemented is a subset of misbehavior introduced in this paper; their observed detection rate is about 55%. Additionally, a co-stimulation in the form of a danger signal was used in order to inform nodes on a forwarding path about misbehavior, thus propagating information about misbehaving nodes around the network.
In [17] the authors describe an AIS able to detect anomalies at the transport layer of the OSI protocol stack; only a wired TCP/IP network is considered. Self is defined as normal pairwise connections. Each detector is represented as a 49-bit string. The pattern matching is based on r-contiguous bits with a fixed r = 12.
Ref. [23] discusses a network intrusion system that aims at detecting misbehavior by capturing TCP packet headers. They report that their AIS is unsuitable for detecting anomalies in communication networks. This result is questioned in [4] where it is stated that this is due to the choice of problem representation and due to the choice of matching threshold r for r-contiguous bits matching.
To overcome the deficiencies of the generate-and-test approach a different approach is outlined in [22]. Several signals each having a different function are employed in order to detect a specific misbehavior in sensor wireless networks. Unfortunately, no performance analysis was presented and the properties of these signals were not evaluated with respect to their misuse.
The main discerning factor between our work and the works briefly discussed above is that we carefully considered the hardware parameters of current sensor devices, the set of input parameters was designed to target specifically sensor networks, and our simulation setup reflects structural qualities of such networks with regard to the existence of multiple independent routing paths. In comparison to [26,24] we showed that in the case of static sensor networks it is reasonable to expect the detection rate to be above 80%.
Conclusions and Future Work
Although we answered some basic questions on the suitability and feasibility of AIS for detecting misbehavior in sensor networks, a few questions remain open.
The key question in the design of AIS is the quantity, quality and ordering of genes that are used for measuring behavior at nodes. To answer this question, a detailed formal analysis of communications protocols will be needed. The set of genes should be as "complete" as possible with respect to any possible misbehavior. The choice of genes should support a high degree of the sensor network's survivability, defined as the capability of a system to fulfill its mission in a timely manner, even in the presence of attacks, failures or accidents [27]. It is therefore of paramount importance that the sensor network's mission is clearly defined and achievable under normal operating conditions. We showed the influence and usefulness of certain genes for detecting misbehavior and the impact of the r parameter on the detection process. In general, the results in Fig. 10 show that Genes #1 and #2 obtained the best results of all genes, with Gene #2 always performing best. The contribution of Gene #1 suggests that observing the MAC layer and the ratio of complete handshakes to the number of RTS packets sent is useful for the implemented misbehavior.
Gene #2 fits perfectly for the implemented misbehavior. It therefore comes as no surprise that this gene showed the best results in the detection process. The question which remains open is whether the two genes are still as useful when exposed to different attack patterns.
It is currently unclear whether genes that performed well with negative selection will also be appropriate for generating different flavors of signals as suggested within the danger theory [1,16]. It is our opinion that any set of genes, whether used with negative selection or for generating any such signal, should aim at capturing intrinsic properties of the interaction among different components of a given sensor network. This contradicts the approaches applied in [26,22], where the genes are closely coupled with a given protocol. The reason for this statement is the combined performance of Genes #1 and #2. Their interaction can be understood as follows: data packet dropping implies less medium contention, since there are fewer data packets to forward. Fewer data packets to forward, on the other hand, implies easier access to the medium, i.e. the number of complete MAC handshakes should increase. This is an interesting complementary relationship, since in order to deceive these two genes, a misbehaving node has to appear to be correctly forwarding data packets and, at the same time, it should not significantly modify the "game" of medium access.
It is improbable that a misbehaving node alone would be able to estimate the impact of dropped packets on the contention level. It therefore lacks an important feedback mechanism that would allow it to keep the contention level unchanged. For that, it would need to act in collusion with other nodes. The property of complementarity moves the burden of excessive communication from normally behaving nodes to misbehaving nodes, thus exploiting the ad hoc (local) nature of sensor networks. Our results thus imply that a "good" mixture of genes should be able to capture interactions that a node is unable to influence when acting alone. It is an open question whether there exist other useful properties of genes, other than complementarity.
We conclude that the random-generate-and-test process, with no knowledge of the used protocols and their behavior, creates many detectors which might prove superfluous in detecting misbehavior. A process with some basic knowledge of protocol limitations might lead to an improved quality of detectors. In [28] the authors stated that the random-generate-and-test process "is inefficient, since a vast number of randomly generated detectors need to be discarded, before the required number of the suitable ones are obtained". Our results show that at r = 10, the rate of discarded detectors is less than 4%. Hence, at least in our setting, we could not confirm the above statement. A disturbing fact is, however, that the size of the self set in our setting was probably too small to justify the use of negative selection. A counter-balancing argument here is the realistic setup of our simulations and a decent detection rate.
We would like to point out that the Fisher iris and biomedical data sets, used in [28] to argue about the appropriateness of negative selection for anomaly detection, could be very different from the data sets generated by our simulations. Our experiments show that anomaly (misbehavior) data sets based on sensor networks can in general be very sparse. This effect can be due to the limiting nature of communications protocols. Since the Fisher iris and biomedical data sets were not evaluated in [28] with respect to some basic properties, e.g. the degree of clustering, it is hard to compare our results with the results presented therein.
In order to understand the effects of misbehavior better (e.g. the propagation of certain adverse effects), we are currently developing a general framework for AIS to be used within the JiST/SWANS network simulator [6].
| 8,253 |
0906.3461
|
2150496197
|
A sensor network is a collection of wireless devices that are able to monitor physical or environmental conditions. These devices are expected to operate autonomously, be battery powered and have very limited computational capabilities. This makes the task of protecting a sensor network against misbehavior or possible malfunction a challenging problem. In this document we discuss the performance of Artificial immune systems (AIS) when used as the mechanism for detecting misbehavior. We concentrate on the performance of the respective genes; genes are necessary to measure a network's performance from a sensor's viewpoint. We conclude that the choice of genes has a profound influence on the performance of the AIS. We identified a specific MAC layer based gene that proved to be especially useful for detection. We also discuss implementation details of AIS when used with sensor networks.
|
The main discerning factor between our work and the works briefly discussed above is that we carefully considered the hardware parameters of current sensor devices, the set of input parameters was designed to target specifically sensor networks, and our simulation setup reflects structural qualities of such networks with regard to the existence of multiple independent routing paths. In comparison to @cite_20 @cite_7 we showed that in the case of static sensor networks it is reasonable to expect the detection rate to be above 80%.
|
{
"abstract": [
"In mobile ad-hoc networks, nodes act both as terminals and information relays, and participate in a common routing protocol, such as Dynamic Source Routing (DSR). The network is vulnerable to routing misbehavior, due to faulty or malicious nodes. Misbehavior detection systems aim at removing this vulnerability. In this paper we investigate the use of an Artificial Immune System (AIS) to detect node misbehavior in a mobile ad-hoc network using DSR. The system is inspired by the natural immune system of vertebrates. Our goal is to build a system that, like its natural counterpart, automatically learns and detects new misbehavior. We describe the first step of our design; it employs negative selection, an algorithm used by the natural immune system. We define how we map the natural immune system concepts such as self, antigen and antibody to a mobile ad-hoc network, and give the resulting algorithm for misbehavior detection. We implemented the system in the network simulator Glomosim; we present detection results and discuss how the system parameters impact the results. Further steps will extend the design by using an analogy to the innate system, danger signals, costimulation and memory cells.",
"In mobile ad-hoc networks, nodes act both as terminals and information relays, and they participate in a common routing protocol, such as Dynamic Source Routing (DSR). The networks are vulnerable to routing misbehavior, due to faulty or malicious nodes. Misbehavior detection systems aim at removing this vulnerability. For this purpose, we use an Artificial Immune System (AIS), a system inspired by the human immune system (HIS). Our goal is to build a system that, like its natural counterpart, automatically learns and detects new misbehavior."
],
"cite_N": [
"@cite_7",
"@cite_20"
],
"mid": [
"2107301538",
"2137046464"
]
}
|
AIS for Misbehavior Detection in Wireless Sensor Networks: Performance and Design Principles
|
Sensor networks [21] can be described as a collection of wireless devices with limited computational abilities which are, due to their ad-hoc communication manner, vulnerable to misbehavior and malfunction. It is therefore necessary to support them with a simple, computationally friendly protection system.
Due to the limitations of sensor networks, there has been an on-going interest in providing them with a protection solution that would fulfill several basic criteria. The first criterion is the ability of self-learning and self-tuning. Because maintenance of ad hoc networks by a human operator is expected to be sporadic, they have to have a built-in autonomous mechanism for identifying user behavior that could be potentially damaging to them. This learning mechanism should itself minimize the need for human intervention, therefore it should be self-tuning to the maximum extent. It must also be computationally conservative and meet the usual requirement of a high detection rate. The second criterion is the ability to undertake an action against one or several misbehaving users. This should be understood in the wider context of co-operating wireless devices acting in collusion in order to suppress or minimize the adverse impact of such misbehavior. Such co-operation should have a low message complexity, because both the bandwidth and the battery life are scarce. The third and last criterion requires that the protection system does not itself introduce new weaknesses to the systems that it should protect.
An emerging solution that could facilitate implementation of the above criteria are Artificial immune systems (AIS). AIS are based on principles adapted from the Human immune system (HIS) [18,5,17]; the basic ability of HIS is an efficient detection of potentially harmful foreign agents (viruses, bacteria, etc.). The goal of AIS, in our setting, is the identification of nodes with behavior that could possibly negatively impact the stated mission of the sensor network.
One of the key design challenges of AIS is to define a suitable set of efficient genes. Genes form a basis for deciding whether a node misbehaves. They can be characterized as measures that describe a network's performance from a node's viewpoint. Given their purpose, they must be easy to compute and robust against deception.
Misbehavior in wireless sensor networks can take different forms: packet dropping, modification of data structures important for routing, modification of packets, skewing of the network's topology or creating fictitious nodes (see [13] for a more complete list). The reason for sensors (possibly fully controlled by an attacker) to execute any form of misbehavior can range from the desire to save battery power to making a given wireless sensor network non-functional. Malfunction can also be considered a type of unwanted behavior.
Artificial Immune Systems
Learning
The process of T-cell maturation in the thymus is used as an inspiration for learning in AIS. The maturation of T-cells (detectors) in the thymus is the result of a pseudo-random process. After a T-cell is created (see Fig. 1), it undergoes a censoring process called negative selection. During negative selection, T-cells that bind self are destroyed. The remaining T-cells are introduced into the body. The recognition of non-self is then done by simply comparing T-cells that survived negative selection with a suspected non-self. This process is depicted in Fig. 2. It is possible that the self set is incomplete while a T-cell matures (tolerization period) in the thymus. This can lead to producing T-cells that should have been removed from the thymus and can cause an autoimmune reaction, i.e. it leads to false positives.
A deficiency of the negative selection process is that alone it is not sufficient for assessing the damage that a non-self antigen could cause. For example, many bacteria that enter our body are not harmful, therefore an immune reaction is not necessary. T-cells, actors of the adaptive immune system, require co-stimulation from the innate immune system in order to start acting. The innate immune system is able to recognize the presence of harmful non-self antigens and tissue damage, and signal this to certain actors of the adaptive immune system.
The random-generate-and-test approach for producing T-cells (detectors) described above is analyzed in [11]. In general, the number of candidate detectors needs to grow exponentially with the size of the self set (if a matching rule with a fixed matching probability is used). Another problem is a consistent underfitting of the non-self set; there exist "holes" in the non-self set that are undetectable. In theory, for some matching rules, the number of holes can be very unfavorable [28]. In practical terms, the effect of holes depends on the characteristics of the non-self set, the representation and the matching rule [15]. The advantage of this algorithm is its simplicity and good experimental results in cases when the number of detectors to be produced is fixed and small [26]. A review of other approaches to detector computation can be found in [2].
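The exponential growth can be made plausible with a back-of-the-envelope estimate, under the idealizing assumption that a random candidate matches each of the N_S self strings independently with a fixed probability P_M; this is only a sketch, not the exact analysis of [11].

```latex
% Probability that a random candidate survives negative selection against a
% self set of size N_S, and the expected number of random candidates needed
% to obtain N_R valid detectors (independence of matches assumed):
P_{\mathrm{survive}} = (1 - P_M)^{N_S},
\qquad
\mathbb{E}[\text{candidates}] \approx \frac{N_R}{(1 - P_M)^{N_S}}
  = N_R \, e^{\,N_S \ln \frac{1}{1 - P_M}} .
```

For a fixed matching probability P_M, the required number of candidates therefore grows exponentially in the self set size N_S.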
Sensor Networks
A sensor network can be defined in a graph-theoretic framework as follows: a sensor network is a net N = (n(t), e(t)), where n(t) and e(t) are the sets of nodes and edges at time t, respectively. Nodes correspond to sensors that wish to communicate with each other. An edge between two nodes A and B is said to exist when A is within the radio transmission range of B and vice versa. The imposed symmetry of edges is a usual assumption of many mainstream protocols. The change in the cardinality of the sets n(t), e(t) can be caused by switching one of the sensors on or off, failure, malfunction, removal, signal propagation, link reliability and other factors.
Data exchange in a point-to-point (uni-cast) scenario usually proceeds as follows: a user-initiated data exchange leads to a route query at the network layer of the OSI stack. A routing protocol at that layer attempts to find a route to the data exchange destination. This request may result in a path of non-unit length. This means that a data packet, in order to reach the destination, has to rely on successive forwarding by intermediate nodes on the path. An example of an on-demand routing protocol often used in sensor networks is DSR [20]. Route search in this protocol is started only when a route to a destination is needed. This is done by flooding the network with RREQ (Route Request) control packets. The destination node or an intermediate node that knows a route to the destination will reply with a RREP (Route Reply) control packet. This RREP follows the route back to the source node and updates routing tables at each node that it traverses. A RERR (Route Error) packet is sent to the connection originator when a node finds out that the next node on the forwarding path is not replying.
At the MAC layer of the OSI protocol stack, medium reservation is often contention based. In order to transmit a data packet, the IEEE 802.11 MAC protocol uses carrier sensing with an RTS-CTS-DATA-ACK (Request to Send, Clear to Send, Data, Acknowledgment) handshake. Should the medium not be available or the handshake fail, an exponential back-off algorithm is used. This is combined with a mechanism that makes it easier for neighboring nodes to estimate transmission durations. This is done by exchanging duration values and subsequently storing them in a data structure known as the Network allocation vector (NAV). With the goal of saving battery power, researchers have suggested that a sleep-wake-up schedule for nodes would be appropriate. This means that nodes do not listen continuously to the medium, but switch themselves off and wake up again after a predetermined period of time. Such a sleep-wake-up schedule is, similarly to duration values, exchanged among nodes. An example of a MAC protocol designed specifically for sensor networks that uses such a schedule is S-MAC [29]. A sleep-wake-up schedule can severely limit the operation of a node in promiscuous mode. In promiscuous mode, a node listens to the ongoing traffic in its neighborhood and collects information from the overheard packets. This technique is used, e.g., in DSR for improved propagation of routing information. Movement of nodes can be modeled by means of a mobility model. A well-known mobility model is the Random waypoint model [20]. In this model, nodes move from the current position to a new randomly generated position at a predetermined speed. After reaching the new destination, a new random position is computed. Nodes pause at the current position for a time period t before moving to the new random position.
For more information on sensor networks, we refer the reader to [21].
Summary of Results
Motivated by the positive results reported in [17,26] we have undertaken a detailed performance study of AIS with focus on sensor networks. The general conclusions that can be drawn from the study presented in this document are:
1. Given the ranges of input parameters that we used and considering the computational capabilities of current sensor devices, we conclude that AIS based misbehavior detection offers a decent detection rate.
2. One of the main challenges in designing well performing AIS for sensor networks is the set of "genes". This is similar to observations made in [24].
3. Our results suggest that to increase detection performance, an AIS should benefit from information available at all layers of the OSI protocol stack; this also applies to the detection of a simplistic flavor of misbehavior such as packet dropping. This supports ideas briefly discussed in [30], where the authors suggest that information available at the application layer deserves more attention.
4. We observed that, somewhat surprisingly, a gene based purely on the MAC layer significantly contributed to the overall detection performance. This gene poses fewer limitations when a MAC protocol with a sleep-wake-up schedule, such as the S-MAC [29], is used.
5. It is desirable to use genes that are "complementary" with respect to each other. We demonstrated that two genes, one that measures correct forwarding of data packets and one that indirectly measures the medium contention, have exactly this property.
6. We only used a single instance of the learning and detection mechanism per node. This is different from the approach used in [17,26], where one instance was used for each of m possible neighbors. Our performance results show that the approach in [17,26] may not be feasible for sensor networks. It may allow for an easy Sybil attack and, in general, m = n − 1 instances might be necessary, where n is the total number of sensors in the network. Instead, we suggest that flagging a node as misbehaving should, if possible, be based on detection at several nodes.
7. Less than 5% of the detectors were used in detecting misbehavior. This suggests that many of the detectors do not comply with constraints imposed by the communications protocols; this is an important fact when designing AIS for sensor networks because the memory capacity at sensors is expected to be very limited.
8. The data traffic properties seem not to impact the performance. This is demonstrated by similar detection performance when data traffic is modeled as a constant bit rate stream and as a Poisson distributed data packet stream, respectively.
9. We were unable to distinguish between nodes that misbehave (e.g. deliberately drop data packets) and nodes with a behavior resembling misbehavior (e.g. nodes that drop data packets due to medium contention). This motivates the use of danger signals as described in [1,16]. The approach applied in [26] does, however, not completely fit sensor networks, since these might implement only a simplified version of the transport layer.
AIS for Sensor Networks: Design Principles
In our approach, each node produces and maintains its own set of detectors. This means that we applied a direct one-to-one mapping between a human body with a thymus and a node. We represent self, non-self and detector strings as bit-strings. The matching rule employed is the r-contiguous bits matching rule. Two bitstrings of equal length match under the r-contiguous matching rule if there exists a substring of length r at position p in each of them and these substrings are identical. Detectors are produced by the process shown in Fig. 1, i.e. by means of negative selection when detectors are created randomly and tested against a set of self strings. Each antigen consists of several genes. Genes are performance measures that a node can acquire locally without the help from another node. In practical terms this means that an antigen consists of x genes; each of them encodes a performance measure, averaged in our case over a time window. An antigen is then created by concatenating the x genes.
The choice of suitable genes is limited by the simplified OSI protocol stack of sensors. For example, Mica2 sensors [9] using the TinyOS operating system do not guarantee any end-to-end connection reliability (transport layer), leaving only data traffic at the lower layers for consideration.
Let us assume that the routing protocol finds for a connection the path s_s, s_1, ..., s_i, s_{i+1}, s_{i+2}, ..., s_d from the source node s_s to the destination node s_d, where s_s ≠ s_d and s_{i+1} ≠ s_d. We have used the following genes to capture certain aspects of MAC and routing layer traffic information (we averaged over a time period (window size) of 500 seconds):
MAC Layer:
#1 Ratio of complete MAC layer handshakes between nodes s_i and s_{i+1} to RTS packets sent by s_i to s_{i+1}. If there is no traffic between the two nodes, this ratio is set to ∞ (a large number). The ratio is averaged over a time period. A complete handshake is defined as a completed sequence of RTS, CTS, DATA, ACK packets between s_i and s_{i+1}.
#2 Ratio of data packets sent from s_i to s_{i+1} that are subsequently forwarded by s_{i+1} to s_{i+2}. If there is no traffic between the two nodes, this ratio is set to ∞ (a large number). The ratio is computed by s_i in promiscuous mode and, as in the previous case, averaged over a time period. This gene was adapted from the watchdog idea in [25].
#3 Time delay that a data packet spends at s_{i+1} before being forwarded to s_{i+2}. The delay is observed by s_i in promiscuous mode. If there is no traffic between the two nodes, the delay is set to zero. This measure is averaged over a time period. This gene is a quantitative extension of the previous gene.
Routing Layer:
#4 The same ratio as in #2 but computed separately for RERR routing packets.
#5 The same delay as in #3 but computed separately for RERR routing packets.
Gene #1 can be characterized as MAC layer quality oriented: it indirectly measures the medium contention level. The remaining genes are watchdog oriented, meaning that they fit a particular kind of misbehavior more strictly. Gene #2 can help detect whether packets get correctly forwarded; Gene #3 can help detect whether forwarding of packets is intentionally delayed. As we show later, for the particular type of misbehavior (packet dropping) that we applied, the first two genes come out as "the strongest". The disadvantage of the watchdog based genes is that, due to limited battery power, nodes could operate using a sleep-wake-up schedule similar to the one used in S-MAC. This would mean that node s_i has to stay awake until node s_{i+1} (the monitored node) correctly transmits to s_{i+2}. The consequence would be a longer wake-up time and possible restrictions on publishing sleep-wake-up schedules.
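As an illustration, the two "strongest" genes reduce to simple ratios over per-window counters. The sketch below assumes hypothetical counter names collected per neighbor pair over the 500-second window; the ∞ placeholder for the no-traffic case follows the gene definitions above.

```python
INF = 1e9  # stands in for the "large number" used when there is no traffic

def gene1_handshake_ratio(complete_handshakes: int, rts_sent: int) -> float:
    """Gene #1: complete MAC handshakes (RTS-CTS-DATA-ACK) per RTS packet
    sent from s_i to s_{i+1}; set to a large number when there was no traffic."""
    if rts_sent == 0:
        return INF
    return complete_handshakes / rts_sent

def gene2_forwarding_ratio(sent_to_next: int, overheard_forwarded: int) -> float:
    """Gene #2 (watchdog): fraction of data packets sent from s_i to s_{i+1}
    that s_i later overheard s_{i+1} forwarding on to s_{i+2}."""
    if sent_to_next == 0:
        return INF
    return overheard_forwarded / sent_to_next
```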
In [24] the authors applied a different set of genes, based only on the DSR routing protocol. The observed set of events was the following: A = RREQ sent, B = RREP sent, C = RERR sent, D = DATA sent and IP source address is not of the monitored (neighboring) node, E = RREQ received, F = RREP received, G = RERR received, H = DATA received and the IP destination address is not of the monitored node. The events D and H take into consideration that the source and destination nodes of a connection might appear to be misbehaving, as they seem to "deliberately" create and delete data packets. Their four genes were then defined as patterns over these events. The time period (window size) in their case was 10 s; * is the Kleene star operator (zero or more occurrences of any event(s) are possible). Similar to our watchdog genes, these genes impose additional requirements on MAC protocols such as S-MAC. Their dependence on operation in promiscuous mode is, however, more pronounced, as a node has to continuously observe packet events at all monitored nodes.
Research into what can, and to what extent should, be measured locally at a node is independent of the learning mechanism used (negative selection in both cases). The performance of an AIS can partly depend on the ordering and the number of genes used. Since longer antigens (consisting of more genes) indirectly imply more candidate detectors, the number of genes should be chosen carefully. Given x genes, it is possible to order them in x! different ways. In our experience, the rules for ordering genes and choosing their number can be summed up as follows:
1) Keep the number of genes small. Our experiments show that, with respect to the learning mechanism used and the expected deployment (sensor networks), 2-3 genes are enough for detecting a basic type of misbehavior.
2) Order genes either randomly or use a predetermined fixed order. Defining a utility relation between genes and ordering genes with respect to it can, in general, lead to problems that are considered intractable. Our results suggest, however, that it is important to understand the relations between different genes, since genes are able to complement each other; this can increase their mutual strength. On the other hand, random ordering adds to the robustness of the underlying AIS: it is more difficult for an attacker to deceive, since he does not know how the genes are being used. How to strike a balance between these two options is currently an open question.
3) Genes cannot be considered in isolation. Our experiments show that when a detector matched an antigen under the r-contiguous matching rule, the match usually spanned several genes. This motivates the design of matching rules that would not limit matching to a few neighboring genes and would offer more flexibility, while still requiring that a gene remains a partly atomic unit.
Learning and Detection
Learning and detection are done by applying the mechanisms shown in Figs. 1 and 2. The detection itself is very straightforward. In the learning phase, a misbehavior-free period (see [1] for possibilities of circumventing this requirement) is necessary so that nodes get a chance to learn what normal behavior is. When implementing the learning phase, the designer can choose between two possibilities: 1) Learning and detection at a node are implemented for each neighboring node separately. This means that different antigens have to be computed for each neighboring node, detector computation is different for each neighboring node and, subsequently, detection is different for each neighboring node. The advantage of this approach is that the node is able to directly determine which neighboring node misbehaves; the disadvantage is that m instances (m being the number of neighbors, i.e. the node degree) of the negative selection mechanism have to be executed. This can be computationally prohibitive for sensor networks, as m can, in general, be equal to the total number of sensors. It also allows for an easy Sybil attack [13], in which a neighbor creates several identities; the node would then be unable to recognize that these identities belong to the same neighbor. This approach was used in [26,24].
2) Learning and detection at a node are implemented in a single instance for all neighboring nodes. This means a node is able to recognize an anomaly (misbehavior) but may be unable to determine which of its m neighboring nodes misbehaves. Nodes therefore have to cooperate when detecting a misbehaving node, exchange anomaly information and be able to draw a conclusion from the obtained information. An argument for this approach is that, in order to detect nodes that misbehave in collusion, it might be necessary to rely to some extent on information exchange among nodes, which makes this a natural solution to the problem. We have used this approach; a postprocessing phase (using the list of misbehaving nodes) was necessary to determine whether a node was correctly flagged as misbehaving or not.
We find the second approach to be better suited for wireless sensor networks, since it is less computationally demanding. We are unable, at this time, to estimate how often the complete detector set would have to be recomputed.
Both approaches can be classified within the four-layer architecture (Fig. 3) that we introduced in [14].

Figure 3: A four-layer architecture aimed at protecting sensor networks against misbehavior and abuse (layers, bottom to top: Data Collection and Preprocessing, Learning, Local and Cooperative Detection, Local and Cooperative Response).

The lowermost layer, Data collection and preprocessing, corresponds to the computation of genes and the construction of antigens. The Learning layer corresponds to the negative selection process. The next layer, Local and cooperative detection, suggests that an AIS should benefit from both local and cooperative detection. Both our setup and the setup described in [26,24] apply only local detection. The uppermost layer, Local and cooperative response, implies that an AIS should also have the capability to undertake an action against one or several misbehaving nodes; this should be understood in the wider context of co-operating wireless devices acting in collusion to suppress or minimize the adverse impact of such misbehavior. To the best of our knowledge, there is currently no AIS implementation for sensor networks that takes advantage of this layer.
Which r is the correct one? An interesting technical problem is to tune the r parameter of the r-contiguous matching rule so that the underlying AIS offers good detection and false positives rates. One possibility is a lengthy simulation study such as this one; through multiparameter simulation we were able to show that r = 10 offers the best performance for our setup. In [12] we experimented with the idea of "growing" and "shrinking" detectors; this idea was motivated by [19]. The initial r_0 for a growing detector can be chosen as r_0 = ⌈l/2⌉, where l is the detector length. The goal is to find the smallest r such that a candidate detector does not match any self antigen. This means that, initially, a larger (more specific) r is chosen; the smallest r that fulfills the above condition can then be found through binary search. For shrinking detectors, the approach is reciprocal. Our goal was to show that such growing or shrinking detectors offer a better detection or false positives rate. Short of proving this in a statistically significant manner, we observed that growing detectors can be used for self-tuning the r parameter: the average r value was close to the r determined through simulation (the setup in that case was different from the one described in this document).
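A minimal sketch of the "growing detector" idea follows, assuming the r-contiguous matching helper sketched earlier. The binary search over the full range [1, l] probes roughly l/2 first, which matches the choice r_0 = ⌈l/2⌉; the monotonicity it relies on (a larger r makes a match strictly harder) follows from the definition of the rule.

```python
def smallest_valid_r(candidate, self_set, r_contiguous_match):
    """Binary-search the smallest r such that the candidate detector matches
    no self antigen under r-contiguous matching. Returns None if even r = l
    (i.e. a full-string match) still hits a self antigen."""
    l = len(candidate)
    lo, hi, best = 1, l, None
    while lo <= hi:
        r = (lo + hi) // 2                  # first probe is roughly l/2, as in r0 = ceil(l/2)
        if any(r_contiguous_match(candidate, s, r) for s in self_set):
            lo = r + 1                      # still matches self: grow r (more specific)
        else:
            best, hi = r, r - 1             # valid detector; try a smaller r
    return best
```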
Further Optimizations
Our experiments show that only a small number of detectors ever get used (less than 5%). The reason is that they are produced randomly, without considering the structure of the protocols. For example, a detector that is able to detect whether i) data packets got correctly transmitted and ii) 100% of all MAC layer handshakes were incomplete is superfluous, as this case should never happen. In [8], the authors conclude: "... uniform coverage of non-self space is not only unnecessary, it is impractical; non-self space is too big". Application driven knowledge can be used to set up a rule based system that excludes infeasible detectors; see [10] for a rule based system aimed at improved coverage of the non-self set. In [17], it is suggested that unused detectors should get deleted and the lifetime of useful detectors should be extended.
Misbehavior
In a companion paper [13], we reviewed different types of misbehavior at the MAC, network and transport layers of the OSI protocol stack. We note that solutions to many of these attacks have already been proposed; they are, however, specific to a given attack. Additionally, due to the limitations of sensor networks, these solutions cannot be directly transferred.
The appeal of AIS based misbehavior detection rests on its simplicity and applicability in an environment that is extremely computationally and bandwidth limited. Misbehavior in sensor networks does not have to be executed by the sensors themselves; one or several computationally more powerful platforms (laptops) can be used for the attack. On the other hand, protection by means of such more capable platforms is harder to imagine, due to e.g. the need to supply them continuously with electric power. It would also create a point of special interest for potential attackers.
Experimental Setup
The purpose of our experiments was to show that AIS are a viable approach for detecting misbehavior in sensor networks. Furthermore, we wanted to shed light on the internal performance of an AIS designed to protect sensor networks. One of our central goals was to provide an in-depth analysis of the relative usefulness of the genes.
Definitions of input and output parameters: The input parameters for our experiments were: the r parameter of the r-contiguous matching rule, the (desired) number of detectors, and the misbehavior level. Misbehavior was modeled as random packet dropping at selected nodes.
The performance (output) measures were arithmetic averages and 95% confidence intervals ci_95% of the detection rate, the number of false positives, the real time to compute detectors, the data traffic rate at nodes, the number of iterations to compute detectors (number of random tries), the number of non-valid detectors, the number of different (unique) antigens in a run or a time window, and the number of matches for each gene. The detection rate is defined as dr = dns/ns, where dns is the number of detected non-self strings and ns is the total number of non-self strings. A false positive in our definition is a string that is not self but can still be the result of an anomaly whose effects are identical to those of a misbehavior. A non-valid detector is a candidate detector that matches a self string and must therefore be removed.
The number of matches for each gene was evaluated using the r-contiguous matching rule; we considered two cases: i) the two bit-strings are matched from left to right and the first match is reported (matching is then interrupted), ii) the two bit-strings are matched from left to right and all possible matches are reported. The time complexity of these two approaches is O(r(l − r)) and Θ(r(l − r)), respectively, with r ≤ l, where l is the bit-string length. The first approach is exactly what we used when computing the real time necessary for negative selection; the second approach was used when evaluating the relative usefulness of each gene.
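The two matching variants can be sketched as follows. The second variant additionally records which genes each matching window overlaps, which is how per-gene usefulness can be attributed; the 10-bit gene width is taken from the encoding described below, and the helper names are illustrative.

```python
def first_match(detector: str, antigen: str, r: int):
    """Variant (i): stop at the first r-contiguous match, O(r(l - r))."""
    for p in range(len(antigen) - r + 1):
        if detector[p:p + r] == antigen[p:p + r]:
            return p
    return None

def all_matches_with_genes(detector: str, antigen: str, r: int, gene_bits: int = 10):
    """Variant (ii): report every matching position and the set of gene
    indices that each matching window overlaps, Theta(r(l - r))."""
    hits = []
    for p in range(len(antigen) - r + 1):
        if detector[p:p + r] == antigen[p:p + r]:
            genes = set(range(p // gene_bits, (p + r - 1) // gene_bits + 1))
            hits.append((p, genes))
    return hits
```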
Scenario description: We wanted to capture "self" and "non-self" packet traffic in a sufficiently large synthetic static sensor network and test whether, using an AIS, we are able to recognize non-self, i.e. misbehavior. The topology of this network was determined by making a snapshot of 1,718 mobile nodes (each with a 100m radio radius) moving in a square area of 2,900m×2,950m as prescribed by the random waypoint mobility model; see Figure 5(a). The motivation for using this movement model and then creating a snapshot is the set of results in our previous paper [7], which deals with the structural robustness of sensor networks. Our preference was to use a slightly bigger network than might be necessary, rather than a network with unknown properties. The computational overhead is negligible; simulation real time mainly depends on the number of events that require processing. Idle nodes increase memory requirements, but memory availability at the computers was not a bottleneck in our case.
We chose the source and destination pairs for each connection so that several alternative independent routes exist; the idea was to benefit from the route repair and route acquisition mechanisms of the DSR routing protocol, so that the added value of AIS based misbehavior detection is obvious.
We used 10 CBR (constant bit rate) connections. The connections were chosen so that their length is ∼7 hops and so that they share some common intermediate nodes; see Figure 5(b). For each packet received or sent by a node we captured the following information: IP header type (UDP, 802.11 or DSR in this case), MAC frame type (RTS, CTS, DATA, ACK in the case of 802.11), current simulation clock, node address, next hop destination address, data packet source and destination address, and packet size.
Encoding of self and non-self antigens: Each of the five genes was transformed into a 10-bit signature where each bit corresponds to an interval of the gene-specific value range. We created self and non-self antigen strings by concatenating the defined genes. Each self and non-self antigen therefore has a size of 50 bits. The interval representation was chosen in order to avoid carry bits (Gray coding is an alternative solution).
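A sketch of this interval encoding is given below: each gene value is mapped to one of ten intervals of its value range and written as a 10-bit signature, and the five signatures are concatenated into the 50-bit antigen. The exact interval boundaries and the choice of a one-hot interval code (a thermometer code would equally avoid carry bits) are assumptions.

```python
def encode_gene(value: float, lo: float, hi: float, bits: int = 10) -> str:
    """Map a gene value to a 10-bit signature with the bit of its interval set."""
    if value >= hi:
        idx = bits - 1
    elif value <= lo:
        idx = 0
    else:
        idx = int((value - lo) / (hi - lo) * bits)   # interval index in [0, bits-1]
    return ''.join('1' if i == idx else '0' for i in range(bits))

def encode_antigen(gene_values, gene_ranges) -> str:
    """Concatenate the five 10-bit gene signatures into a 50-bit antigen."""
    return ''.join(encode_gene(v, lo, hi) for v, (lo, hi) in zip(gene_values, gene_ranges))
```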
Constructing the self and non-self sets: We randomly chose 28 non-overlapping 500-second windows in our 4-hour simulation. In each 500-second window, self and non-self antigens are computed for each node. This was repeated for 20 independent Glomosim runs.
Misbehavior modeling: Misbehavior is modeled as random data packet dropping (implemented at the network layer); data packets include both data packets from the transport layer and routing protocol packets (a packet that should get dropped is simply not inserted into the IP queue). We randomly chose 236 nodes and forced them to drop {10, 30, 50}% of data packets. However, there were only 3-10 nodes with misbehavior and with a statistically significant number of packets to forward in each simulation run; see constraint C2 in Section 7.
Detection: A neighboring node gets flagged as misbehaving if a detector from the detector set matches an antigen. Since we used a single learning phase, we had to complement this process with some routing information analysis. This allowed us to determine which of the neighboring nodes is actually the misbehaving one. In the future, we plan to rely on co-operative detection in order to replace such a post-analysis.
Simulation phases: The experiment was done in four phases.
1. 20 independent Glomosim runs were done for one of the {10, 30, 50}% misbehavior levels and for "normal" traffic. Normal means that no misbehavior took place.
2. Self and non-self antigen computation (encoding).
3. The 20 "normal" traffic runs were used to compute detectors. Given the 28 windows and 20 runs, the sample size was 20×28 = 560, i.e. detectors at each node were discriminated against 560 self antigens.
4. Using the runs with the {10, 30, 50}% misbehavior levels, the process shown in Fig. 2 was used for detection; we restricted ourselves to nodes that had at least a certain number of data packets to forward (packet threshold) in both the normal and the misbehavior traffic.
The experiment was then repeated with different r, desired number of detectors and misbehavior level.
The parameters for this experiment are summarized in Fig. 4. The injection rate and packet sizes were chosen in order to comply with usual data rates of sensors (e.g. 38.4kbps for Mica2; see [9]). We chose the Glomosim simulator [3] over other options (most notably ns2) because of its better scaling characteristics [6] and our familiarity with the tool.
Results Evaluation
When evaluating our results we define two additional constraints:

C1. We define a node to be detected as misbehaving if it gets flagged in at least 14 out of the 28 possible windows. This notion indirectly defines the time until a node is pronounced misbehaving. We call this the window threshold.

[Figure panel captions: (b) Rate of non-valid detectors; ci_95% < 1% for r ≤ 13, sample size not significant for r ≥ 16. (c) Number of iterations needed to compute the desired number of detectors; ci_95% < 1% for r ≥ 10, ci_95% < 2% for r = 7. (a) Detection rate vs. packet threshold; ci_95% = 3.8-19.8% for misbehavior level 10%, 11.9-15.9% for 30%, 11.0-14.2% for 50%. (b) Number of unique detectors that matched an antigen in a run; ci_95% = 6.5-10.1% for 7 ≤ r ≤ 13. (c) Number of unique detectors that matched an antigen in a window (each run has 28 windows); ci_95% < 0.16%.]

C2. A node s_i has to forward on average at least m packets over the 20 runs in both the "normal" and the misbehavior cases in order to be included in our statistics. This constraint was set in order to make the detection process more reliable: it is dubious to flag a neighboring node of s_i as misbehaving based on "normal" runs, or runs with misbehavior, in which node s_i had no data packets to forward (it was not on a routing path). We call this the packet threshold; m was chosen from {500, 1000, 2000, 4000} in our simulations. Example: for a fixed set of input parameters, a node forwarded on average 1,250 packets in the "normal" runs and 750 packets in the misbehavior runs (with e.g. level 30%). The node s_i would be considered for misbehavior detection if m = 500, but not if m ≥ 1000. In other words, a node has to get a chance to learn what is "normal" and then to use this knowledge on a non-empty packet stream.
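Both constraints translate directly into a post-processing filter over per-window flags and per-run packet counts; the data layout assumed below (one boolean per window, one packet count per run) is illustrative.

```python
def flagged_as_misbehaving(window_flags, window_threshold=14):
    """C1: report a node as misbehaving if it was flagged in at least
    window_threshold of the 28 windows."""
    return sum(bool(f) for f in window_flags) >= window_threshold

def eligible_for_statistics(normal_pkts, misbehavior_pkts, packet_threshold):
    """C2: include a node only if it forwarded, on average over the 20 runs,
    at least packet_threshold packets in both normal and misbehavior runs."""
    avg = lambda xs: sum(xs) / len(xs)
    return (avg(normal_pkts) >= packet_threshold and
            avg(misbehavior_pkts) >= packet_threshold)
```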
Overall Performance
The results related to the computation of detectors are shown in Figure 6. In our experiments we considered the desired number of detectors to be at most 4,000; above this threshold the computational requirements might be too high for current sensor devices. We remind the reader that each time the r parameter is incremented by 1, the number of detectors should double in order to keep the two cases comparable. Figure 6(a) shows the real time needed to compute the desired set of detectors. We can see that the real time necessary increases proportionally with the desired number of detectors; this complies with the theoretical results presented in [11]. Figure 6(b) shows the percentage of non-valid detectors, i.e. candidate detectors that were found to match a self string (see Figure 1). This result points to where the optimal operating point of an AIS might lie with respect to the choice of the r parameter and the choice of a fixed number of detectors to compute. We remind the reader that the larger the r parameter, the smaller the probability that a detector matches a self string. The overhead connected with choosing the r parameter prohibitively small should therefore be considered when designing an AIS. Figure 6(c) shows the total number of generate-and-test tries needed to compute a detector set of a fixed size; the 95% confidence interval is less than 2%.
In Figure 7(a) we show the dependence of the detection rate on the packet threshold. We conclude that, except for some extremely low threshold values (not shown), the detection rate stays constant. This figure also shows that when the misbehavior level was set very low, i.e. 10%, the AIS struggled to detect misbehaving nodes. This is partly a result of our coarse encoding with only 10 different levels.
At the 30 and 50% misbehavior levels the detection rate stays solid at about 70-85%. The range of the 95% confidence interval of the detection rate is 3.8-19.8%. The fact that the detection rate did not get closer to 100% suggests that the implemented genes are not sufficient, that detection should be extended to protocols at other layers of the OSI protocol stack, that a different ordering of genes should have been applied, or that our ten-level encoding was too coarse. It also implies that watchdog based genes (though they perfectly fit the implemented misbehavior) should not be used in isolation and, in general, that the choice of genes has to be made very carefully. Figure 7(b) shows the impact of r on the detection rate. For r = {7, 10} the AIS performs well; for r > 10 the detection rate decreases. This is caused by the inadequate numbers of detectors used at higher values of r (we limited ourselves to at most 4,000 detectors). Figure 7(c) shows the number of false positives. We remind the reader that, in our definition, false positives are both nodes that do not drop any packets and nodes that drop packets for reasons other than misbehavior.
In a separate experiment we studied whether the 4-hour simulation time (560 samples) was enough to capture the diversity of the self behavior. This was done by trying to detect misbehavior in 20 independent misbehavior-free Glomosim runs (different from those used to compute detectors). We did not observe a single case of an autoimmune reaction.
Detailed Performance
In Fig. 8(a) we show the total number of runs in which a node was identified as misbehaving. The steep decline for values r > 10 (in this and other figures) documents that in these cases it was necessary to produce a higher number of detectors in order to cover the non-self antigen space. The higher the r, the higher the specificity of a detector, meaning that it matches a smaller set of non-self antigens.
In Figs. 8(b) and (c) we show the number of detectors that got matched during the detection phase (see Fig. 2). Fig. 8(b) shows the number of detectors matched per run, Fig. 8(c) the number of detectors matched per window. Fig. 8(b) is an upper estimate of the number of unique detectors needed in a single run. Given that the total number of detectors was 2,000, fewer than 5% of the detectors were used in the detection phase. The tight confidence intervals for the number of unique detectors matched per window (see Fig. 8(c); for practical reasons we show ci_95% only for 7 ≤ r ≤ 13) are a direct consequence of the small variability of antigens shown in Fig. 9(a). Fig. 9(a) shows the number of unique antigens that were subject to classification into self or non-self. The average for r = {7, 10} is about 1.5. This does not directly imply that the variability of the data traffic is inadequate; it is rather a direct consequence of our choice of genes and their encoding (we used only 10 value levels for encoding). Fig. 9(b) shows the number of matches between a detector and an antigen, counted as follows: when a detector under the r-contiguous matching rule matches only a single gene within an antigen, we increment the "single" counter; otherwise, we increment the "multiple" counter. It is obvious that with increasing r it becomes more and more probable that a detector matches more than a single gene. The interesting fact is that the detection rate for both r = 7 and r = 10 is about 80% (see Fig. 7(a)) while the rate of non-valid detectors is very different (see Fig. 6(b)). This means that an interaction between genes positively affected the latter performance measure without sacrificing the former. This leads to the conclusion that genes should not be considered in isolation. Fig. 9(c) shows the performance of Gene #1. The number of matches shows that this gene contributed to the overall detection performance of our AIS. Figs. 10(a-c) sum up the performance of the five genes for different values of r. Again, an interesting fact is the contribution of Gene #1 to the overall detection performance. The usefulness of Gene #2 was largely expected, as this gene was tailored to the kind of misbehavior that we implemented. The other three genes came out as only marginally useful. The importance of the somewhat surprising performance of Gene #1 is that it can be computed in a simple way and does not require continuous operation of a node.
The Impact of Data Traffic Pattern
In an additional experiment, we examined the impact of the data traffic pattern on the performance. We used two different data traffic models: constant bit rate (CBR) and Poisson distributed data traffic. In many scenarios, sensors are expected to take measurements at constant intervals and, subsequently, send them out for processing; this creates constant bit rate traffic. Poisson distributed traffic could be the result of sensors taking measurements in an event-driven fashion: for example, a sensor takes a measurement only when a target object (e.g. a person) happens to be in its vicinity.
The setup for this experiment was similar to that presented in Fig. 4, with the addition that the data traffic model became an input parameter. With the goal of reducing the complexity of the experimental setup, we fixed r = 10 and only considered cases with 500 and 2000 detectors. In order to match the CBR traffic rate, the Poisson distributed data traffic model had a mean arrival expectation of 1 packet per second (λ = 1.0). As in the CBR case, we computed the detection rate and the rate of false positives with the associated arithmetic averages and 95% confidence intervals.
The results based on these two traffic models were similar; in fact, we could not find the difference between them to be statistically significant. This indicates that the detection process is robust against some variation in data traffic. This conclusion also reflects positively on the usefulness of the genes used. More importantly, it helped dispel our worries that the results presented in this experimental study could be unacceptably dependent on the data traffic.
Related Work
In [26,24] the authors introduced an AIS based misbehavior detection system for ad hoc wireless networks. They used Glomosim for simulating data traffic; their setup was an area of 800×600m with 40 mobile nodes (speed 1 m/s), of which 5-20 are misbehaving, and the routing protocol was DSR. Four genes were used to capture local behavior at the network layer. The misbehavior implemented is a subset of the misbehavior introduced in this paper; their observed detection rate is about 55%. Additionally, a co-stimulation in the form of a danger signal was used in order to inform nodes on a forwarding path about misbehavior, thus propagating information about misbehaving nodes around the network.
In [17] the authors describe an AIS able to detect anomalies at the transport layer of the OSI protocol stack; only a wired TCP/IP network is considered. Self is defined as normal pairwise connections. Each detector is represented as a 49-bit string. The pattern matching is based on r-contiguous bits with a fixed r = 12.
Ref. [23] discusses a network intrusion detection system that aims at detecting misbehavior by capturing TCP packet headers. The authors report that their AIS is unsuitable for detecting anomalies in communication networks. This result is questioned in [4], where it is stated that this is due to the choice of problem representation and the choice of the matching threshold r for r-contiguous bits matching.
To overcome the deficiencies of the generate-and-test approach, a different approach is outlined in [22]. Several signals, each having a different function, are employed in order to detect a specific misbehavior in wireless sensor networks. Unfortunately, no performance analysis was presented, and the properties of these signals were not evaluated with respect to possible misuse.
The main distinguishing factor between our work and the works discussed above is that we carefully considered the hardware parameters of current sensor devices, the set of input parameters was designed to target sensor networks specifically, and our simulation setup reflects structural qualities of such networks with regard to the existence of multiple independent routing paths. In comparison to [26,24], we showed that in the case of static sensor networks it is reasonable to expect the detection rate to be above 80%.
Conclusions and Future Work
Although we have answered some basic questions on the suitability and feasibility of AIS for detecting misbehavior in sensor networks, a few questions remain open.
The key question in the design of an AIS is the quantity, quality and ordering of the genes that are used for measuring behavior at nodes. To answer this question, a detailed formal analysis of the communication protocols will be needed. The set of genes should be as "complete" as possible with respect to any possible misbehavior. The choice of genes should support a high degree of sensor network survivability, defined as the capability of a system to fulfill its mission in a timely manner, even in the presence of attacks, failures or accidents [27]. It is therefore of paramount importance that the sensor network's mission is clearly defined and achievable under normal operating conditions. We showed the influence and usefulness of certain genes for detecting misbehavior, and the impact of the r parameter on the detection process. In general, the results in Fig. 10 show that Genes #1 and #2 obtained the best results of all genes, with Gene #2 consistently performing best. The contribution of Gene #1 suggests that observing the MAC layer, specifically the ratio of complete handshakes to the number of RTS packets sent, is useful for the implemented misbehavior.
Gene #2 fits the implemented misbehavior perfectly. It therefore comes as no surprise that this gene showed the best results in the detection process. The question which remains open is whether the two genes are still as useful when exposed to different attack patterns.
It is currently unclear whether genes that performed well with negative selection will also be appropriate for generating the different flavors of signals suggested within the danger theory [1,16]. It is our opinion that any set of genes, whether used with negative selection or for generating any such signal, should aim at capturing intrinsic properties of the interaction among different components of a given sensor network. This contradicts the approaches applied in [26,22], where the genes are closely coupled with a given protocol. The reason for this statement is the combined performance of Genes #1 and #2. Their interaction can be understood as follows: data packet dropping implies less medium contention, since there are fewer data packets to forward. Fewer data packets to forward, in turn, implies easier access to the medium, i.e. the number of complete MAC handshakes should increase. This is an interesting complementary relationship, since in order to deceive these two genes a misbehaving node has to appear to be correctly forwarding data packets and, at the same time, must not significantly modify the "game" of medium access.
It is improbable that the misbehaving node alone would be able to estimate the impact of dropped packets on the contention level. It therefore lacks an important feedback mechanism that would allow it to keep the contention level unchanged; for that, it would need to act in collusion with other nodes. The property of complementarity moves the burden of excessive communication from normally behaving nodes to misbehaving nodes, thus exploiting the ad hoc (local) nature of sensor networks. Our results thus imply that a "good" mixture of genes should be able to capture interactions that a node is unable to influence when acting alone. It is an open question whether there exist other useful properties of genes besides complementarity.
We conclude that the random-generate-and-test process, with no knowledge of the protocols used and their behavior, creates many detectors which may turn out to be superfluous for detecting misbehavior. A process with some basic knowledge of protocol limitations might lead to improved quality of detectors. In [28] the authors stated that the random-generate-and-test process "is inefficient, since a vast number of randomly generated detectors need to be discarded, before the required number of the suitable ones are obtained". Our results show that at r = 10 the rate of discarded detectors is less than 4%. Hence, at least in our setting, we could not confirm the above statement. A disturbing fact is, however, that the size of the self set in our setting was probably too small to justify the use of negative selection. A counter-balancing argument is the realistic setup of our simulations and a decent detection rate.
We would like to point out that the Fisher iris and biomedical data sets, used in [28] to argue about the appropriateness of negative selection for anomaly detection, could be very different from the data sets generated by our simulations. Our experiments show that anomaly (misbehavior) data sets based on sensor networks can in general be very sparse. This effect can be due to the limiting nature of communication protocols. Since the Fisher iris and biomedical data sets were not evaluated in [28] with respect to basic properties such as the degree of clustering, it is hard to compare our results with the results presented therein.
In order to better understand the effects of misbehavior (e.g. the propagation of certain adverse effects), we are currently developing a general framework for AIS to be used within the JiST/SWANS network simulator [6].
| 8,253 |
0906.0872
|
2143172016
|
An approach to the acceleration of parametric weak classifier boosting is proposed. A weak classifier is called parametric if it has a fixed number of parameters and, thus, can be represented as a point in a multidimensional space. A genetic algorithm is used instead of exhaustive search to learn the parameters of such a classifier. The proposed approach also takes into account cases where an effective algorithm for learning some of the classifier parameters exists. Experiments confirm that such an approach can dramatically decrease classifier training time while keeping both training and test errors small.
|
The use of a genetic algorithm for weak learner acceleration has already been proposed in several works. For example, in @cite_0 a genetic weak learner with special crossover and mutation operators was used to learn a classifier based on an extended haar feature set. In @cite_7 a genetic algorithm was used to select a few thousand weak classifiers with the smallest error on the unweighed training set before the boosting process starts; exhaustive search over the selected classifiers was then performed on each boosting iteration to select the one with minimal weighed loss. In @cite_3 the boosting procedure was completely integrated with a genetic algorithm: a few classifiers were selected on each boosting iteration from the solution population and added to the strong classifier, and those selected classifiers were then used to produce new population members by applying genetic operators. Finally, in @cite_4 the authors used as a weak learner a special evolutionary algorithm they call Evolutionary Hill-Climbing. The crossover operator was not used in it; instead, @math different mutations were applied to every population member on each algorithm iteration, and the result of each mutation was rejected when it did not improve the fitness function value.
|
{
"abstract": [
"Recently, P. Viola and M.J. Jones (2001) presented a method for real-time object detection in images using a boosted cascade of simple features. In This work we show how an evolutionary algorithm can be used within the Adaboost framework to find new features providing better classifiers. The evolutionary algorithm replaces the exhaustive search over all features so that even very large feature sets can be searched in reasonable time. Experiments on two different sets of images prove that by the use of evolutionary search we are able to find object detectors that are faster and have higher detection rates.",
"have introduced a fast object detection scheme based on a boosted cascade of haar-like features. In this paper, we introduce a novel ternary feature that enriches the diversity and the flexibility significantly over haar-like features. We also introduce a new genetic algorithm based method for training effective ternary features. Experimental results showed that the rejection rate can reach at 98.5 with only 16 features at the first layer of the cascade detector. We confirmed that the training time can be significantly shortened while the performance of the resulted cascade detector is comparable to the previous methods.",
"This paper presents an efficient method for automatic training of performant visual object detectors, and its successful application to training of a back-view car detec- tor. Our method for training detectors is adaBoost applied to a very general family of visual features (called “control-point” features), with a specific feature-selection weak-learner: evo-HC, which is a hybrid of Hill-Climbing and evolutionary-search. Very good results are obtained for the car-detection application: 95 positive car detection rate with less than one false positive per image frame, computed on an independant validation video. It is also shown that our original hybrid evo-HC weak-learner allows to obtain detection performances that are unreachable in rea- sonable training time with a crude random search. Finally our method seems to be potentially efficient for training detectors of very different kinds of objects, as it was already previously shown to provide state-of-art performance for pedestrian-detection tasks.",
""
],
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_4",
"@cite_7"
],
"mid": [
"2118399788",
"2153485504",
"2026182474",
""
]
}
|
Fast Weak Learner Based on Genetic Algorithm
|
Boosting is one of the commonly used classifier learning approaches. It is a machine learning meta-algorithm that iteratively learns an additive model consisting of weighed weak classifiers belonging to some classifier family W. In the case of a two-class classification problem (which we consider in this paper), the boosted classifier usually has the form
s(y) = \operatorname{sgn}\left( \sum_{i=1}^{N} \alpha_i w_i(y) \right). \qquad (1)
Here y ∈ Y is a sample to classify, w_i ∈ W are weak classifiers learned during the boosting procedure, α_i are weak classifier weights, w_i(y) ∈ {−1, 1}, and s(y) ∈ {−1, 1}. The set W is referred to as the weak classifier family, because its elements are only required to have an error rate slightly better than random guessing. This expresses the key idea of boosting: a strong classifier can be built on top of many weak ones.
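In code, evaluating the boosted classifier of Eq. (1) is just a weighted vote. The sketch below treats each weak classifier as a callable returning ±1; the tie-breaking choice of returning +1 for a zero score is an assumption.

```python
def strong_classify(y, weak_classifiers, alphas):
    """Evaluate s(y) = sgn(sum_i alpha_i * w_i(y)) for weak classifiers w_i
    that return values in {-1, +1}."""
    score = sum(alpha * w(y) for w, alpha in zip(weak_classifiers, alphas))
    return 1 if score >= 0 else -1
```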
There are many boosting procedures that differ in the type of loss being optimized for the final classifier. But no matter what kind of boosting procedure is used, on each iteration it should select (learn) a weak classifier with minimal weighed loss from the family W, using a special algorithm called the weak learner. Fast and accurate optimization methods are often not applicable there (especially in the case of discrete classifier parameters), so exhaustive search over the weak classifier parameter space is used as the weak learner. Unfortunately, exhaustive search can take a lot of time. For example, learning a cascade of boosted classifiers based on haar features with AdaBoost and exhaustive search over the classifier parameter space took several weeks in the famous work [Viola and Jones 2001]. That is why it is often very important to decrease weak classifier learning time using an appropriate numerical optimization approach.
One of the widely used approaches to numerical optimization is the genetic algorithm [Goldberg 1989]. It is based on ideas from biological evolution. A solution to the optimization problem is coded as a chromosome vector. An initial population of solutions is created using a random number generator. A fitness function is then used to assign a fitness value to every population member. Solutions with the biggest fitness values are selected for the next step. In the next step, genetic operators (usually crossover and mutation) are applied to the selected chromosomes to produce new solutions and to modify existing ones slightly. The modified solutions form a new generation, and the described process repeats. That is how evolution is modeled. It continues until a global or suboptimal solution is found or the time allowed for evolution is over. Genetic algorithms are often used for global extremum search in big and complicated search spaces. This makes the genetic algorithm a good candidate for a weak classifier learner.
Population member
Let W be some parametric family of weak classifiers. This means that every weak classifier w ∈ W can be described by a set of its n real-valued parameters x_1, ..., x_n. Let us also assume that for the last l parameters (l can be equal to zero) there exists some effective learning algorithm L_E : R^{n−l} → R^l. We will refer to such parameters as linked. For given values of the parameters x_1, ..., x_{n−l}, called free, L_E finds optimal values for the linked parameters that minimize the loss function E : R^n → R_+. Our task is therefore to find the values of the free parameters that minimize the loss function E[x_1, ..., x_{n−l}, L_E(x_1, ..., x_{n−l})]. So, a set of parameters x_1, ..., x_{n−l} represents a solution to our optimization problem and forms a member of the genetic algorithm population.
Fitness function
It is natural to assume that a classifier with a small error on the training set should have a greater probability of getting to the next generation of the genetic algorithm. This allows us to introduce the fitness function F : R^{n−l} → R_+ as follows:
F(x_1, \dots, x_{n-l}) = \frac{1}{E[x_1, \dots, x_{n-l}, L_E(x_1, \dots, x_{n-l})]}. \qquad (2)
We do not consider the E = 0 case. A classifier cannot be called weak if it has zero error on the training set. If such a classifier is present in a weak classifier family, we can simply select that classifier alone as the result of the whole boosting procedure.
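A direct transcription of Eq. (2): the fitness of a free-parameter vector is the reciprocal of the weighed loss obtained after the linked parameters have been learned by L_E. Both callables below are placeholders for problem-specific code, and the zero-fitness fallback for invalid parameter vectors anticipates the convention described in the next paragraph.

```python
def fitness(free_params, learn_linked, weighted_loss):
    """F(x_1..x_{n-l}) = 1 / E[x_1..x_{n-l}, L_E(x_1..x_{n-l})].
    learn_linked plays the role of L_E; weighted_loss plays the role of E."""
    linked = learn_linked(free_params)
    if linked is None:                 # no classifier corresponds to this point
        return 0.0
    loss = weighted_loss(free_params, linked)
    return 1.0 / loss                  # E = 0 is excluded by assumption
```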
Genetic representation
Every approach that allows us to code a set of free parameters is appropriate for representing a population member. In this work we selected the binary string representation, which has been confirmed to be effective in function optimization problems. Some alternative representations can be found, for example, in [Goldberg 1989].
To form the binary string classifier representation, each classifier parameter should be first represented as a binary string of fixed length, using fixed-precision encoding. Then all the parameters can be simply concatenated to form the final binary string of fixed length.
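A possible fixed-precision encoding is sketched below: each free parameter is quantized to `bits` bits within a known range and the chunks are concatenated. The bit width and the parameter ranges are assumptions for illustration, not values from the paper.

```python
def encode_params(params, ranges, bits=8):
    """Quantize each free parameter into `bits` bits and concatenate."""
    chunks = []
    for x, (lo, hi) in zip(params, ranges):
        q = int(round((min(max(x, lo), hi) - lo) / (hi - lo) * (2**bits - 1)))
        chunks.append(format(q, f'0{bits}b'))
    return ''.join(chunks)

def decode_params(chromosome, ranges, bits=8):
    """Inverse mapping from the binary string back to parameter values."""
    params = []
    for i, (lo, hi) in enumerate(ranges):
        q = int(chromosome[i * bits:(i + 1) * bits], 2)
        params.append(lo + q / (2**bits - 1) * (hi - lo))
    return params
```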
Sometimes a point p ∈ R^n has no corresponding classifier. For the different families of image region classifiers this is possible, for example, when one of the free parameters representing the top-left corner of a classifier window is below zero. In this case the fitness function value for the population member representing that point can be forced to be zero; that is how such situations were dealt with in the experiments described in Section 4. Another possible approach is to select the representation and the genetic operators in a way that simply does not allow such points to appear, but that approach is less general.
Genetic operators
In this work we used the two most common genetic operators: mutation and crossover. For the binary string representation they are usually defined as follows:
• The crossover operator selects a random position in the binary string and then swaps all the bits to the right of the selected position between the two chromosomes. Such a crossover implementation is called 1-point crossover.
• The mutation operator flips the value of a randomly chosen chromosome bit.
In our case, the crossover operator produces two new solutions from the two given chromosomes as follows: some of the parameters (those to the left of the selected position) are taken from the first classifier and some (those to the right) from the second, while one parameter may possibly be assembled from bits of both classifiers. The mutation operator simply produces a new solution by changing the value of a random classifier parameter.
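The two operators on binary strings can be sketched as follows; cut points and bit positions are chosen uniformly at random, which is an assumption rather than the authors' exact implementation.

```python
import random

def one_point_crossover(a: str, b: str):
    """Swap the tails of two equal-length chromosomes after a random cut point."""
    assert len(a) == len(b)
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(chromosome: str) -> str:
    """Flip one randomly chosen bit."""
    i = random.randrange(len(chromosome))
    flipped = '1' if chromosome[i] == '0' else '0'
    return chromosome[:i] + flipped + chromosome[i + 1:]
```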
Algorithm summary
Algorithm 1 Genetic weak learner
1: Generate an initial population of N random binary strings;
2: for i = 1, . . . , K_max do
3:   Add ⌈N R_c⌉ members to the population by applying the crossover operator to pairs of the best population members;
4:   Apply the mutation operator to ⌈N R_m⌉ random population members;
5:   Calculate the value of (2) for each population member;
6:   Remove all population members except the N best (the ones with the highest value of (2));
7: end for
8: return the weak classifier associated with the point represented by the best population member;
Algorithm 1 uses elitism as the population member selection approach. It has 4 parameters (a code sketch of the procedure follows the list below):
• N > 0 -population size.
• Kmax > 0 -number of generations.
• Rc ∈ (0, 1] -crossover rate.
• Rm ∈ (0, 1] -mutation rate.
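Putting the pieces together, the following is a sketch of Algorithm 1 with elitist selection. It reuses the illustrative helpers sketched above (fitness, crossover, mutation, random chromosome generation) and is not the authors' implementation; in practice fitness values would be cached rather than recomputed on every sort.

```python
import math
import random

def genetic_weak_learner(fitness, random_chromosome, crossover, mutate,
                         N=50, K_max=10, R_c=0.5, R_m=0.2):
    """Elitist GA over binary chromosomes (Algorithm 1).
    fitness(c) -> float, random_chromosome() -> str, crossover(a, b) -> (c1, c2),
    mutate(c) -> str."""
    population = [random_chromosome() for _ in range(N)]
    for _ in range(K_max):
        # Step 3: add ceil(N * R_c) members by crossing pairs of the best members.
        best = sorted(population, key=fitness, reverse=True)
        offspring = []
        while len(offspring) < math.ceil(N * R_c):
            a, b = random.sample(best[:max(2, N // 2)], 2)
            offspring.extend(crossover(a, b))
        population.extend(offspring[:math.ceil(N * R_c)])
        # Step 4: mutate ceil(N * R_m) random members in place.
        for i in random.sample(range(len(population)), math.ceil(N * R_m)):
            population[i] = mutate(population[i])
        # Steps 5-6: keep only the N fittest members (elitism).
        population = sorted(population, key=fitness, reverse=True)[:N]
    # Step 8: return the best chromosome; the caller decodes it into a weak classifier.
    return max(population, key=fitness)
```

With the bit-string helpers above, `random_chromosome` can simply generate a random 0/1 string of the chromosome length, and `fitness` is the reciprocal-loss function of Eq. (2).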
Discussion
The advantage of the proposed method lies in the fact that the computational complexity of the weak learner does not depend on the size of the weak classifier family. One can balance training time against classifier performance simply by changing the values of N, K_max and S (discussed later). A similar effect can be achieved by shrinking the weak classifier family itself, but in most cases prior knowledge about weak classifier performance in boosting is simply not available.
One of the main disadvantages of the proposed weak learner is the fact that many potentially interesting weak classifiers cannot be represented as a parameter vector of constant length. For example, decision trees, widely used in boosting, can have a variable number of nodes. The misclassification loss we want to optimize should also be more or less stable as a function of the classifier's free parameters. If small perturbations of the free parameter vector lead to unpredictable changes in the loss function value, genetic optimization does not make much sense and becomes just a random search. Unfortunately, that situation happens quite often, especially if the classifier parameter count is small. A common example is a situation where one of the free parameters represents a feature number and features with close numbers are not correlated at all.
Experiments
Algorithms for experiments
Two boosting-based algorithms were implemented to compare the proposed genetic weak learner with the original learners proposed by the algorithm authors. The Viola-Jones detector [Viola and Jones 2001] and face alignment via a boosted ranking model were selected for this purpose because both algorithms use parametric weak classifiers applied to image regions. These algorithms are based on distinct boosting procedures (AdaBoost and GentleBoost), so the loss, sample weight and classifier weight functions used in them differ a lot. Another difference between the selected algorithms is the problem they solve: two-class classification in [Viola and Jones 2001] and ranking in the boosted ranking model work. The training time of a naive implementation is quite long for both algorithms, so acceleration of the boosting process is necessary.
Weak classifiers used in both algorithms are based on haar features and have a common set of adjustable parameters. So, a weak classifier in both problems can be represented as w_i = (x_i, y_i, width_i, height_i, type_i, g_i, t_i), where x_i, y_i, width_i and height_i describe the image region, type_i encodes the haar feature type, g_i is the haar feature sign, and t_i represents the weak classifier threshold. The parameters g_i and t_i are linked because both algorithms have an effective algorithm for learning them. The parameter type_i was also made linked: changing the feature type during genetic optimization does not make much sense, because it can change the fitness function value significantly after just one mutation or crossover. Instead, a separate algorithm run was performed for each feature type and the best result over all runs was then selected. We used the same 5 haar feature types as in the boosted ranking model work for training both classifiers.
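For the linked parameters (g_i, t_i), the standard weighted decision-stump fit serves as a stand-in for the paper's "effective algorithm" L_E: scan thresholds in sorted feature order and pick the sign/threshold pair with the smallest weighed error. The paper does not spell out its exact routine, and the classification convention assumed here (predict g when the feature value exceeds t) is an assumption.

```python
def learn_sign_and_threshold(feature_values, labels, weights):
    """Learn linked parameters (g, t) of a haar-feature stump by a weighted
    threshold scan; labels are in {-1, +1}. Returns (g, t, weighted_error)."""
    order = sorted(range(len(feature_values)), key=lambda i: feature_values[i])
    total_pos = sum(w for w, y in zip(weights, labels) if y == +1)
    total_neg = sum(w for w, y in zip(weights, labels) if y == -1)
    pos_below = neg_below = 0.0
    # Threshold below every sample: g=+1 predicts +1 for all (error = total_neg),
    # g=-1 predicts -1 for all (error = total_pos).
    best_g, best_t, best_err = +1, float('-inf'), total_neg
    if total_pos < total_neg:
        best_g, best_err = -1, total_pos
    for i in order:
        if labels[i] == +1:
            pos_below += weights[i]
        else:
            neg_below += weights[i]
        t = feature_values[i]
        # g=+1: mistakes are positives at or below t plus negatives above t.
        err_plus = pos_below + (total_neg - neg_below)
        # g=-1: mistakes are negatives at or below t plus positives above t.
        err_minus = neg_below + (total_pos - pos_below)
        if err_plus < best_err:
            best_g, best_t, best_err = +1, t, err_plus
        if err_minus < best_err:
            best_g, best_t, best_err = -1, t, err_minus
    return best_g, best_t, best_err
```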
Run patterns
A comparison of two different genetic algorithm run patterns was also performed in this work. One pattern considered was running the genetic optimization once with a big population size. The other pattern was running the optimization algorithm multiple times (the number of runs is denoted S) with a small population size and then selecting the best found classifier. When the population size is small, the final solution depends heavily on the initial population, so considerably different results can be obtained in different algorithm runs. While this run pattern produces worse classifiers, it can be implemented on multiprocessor and multicore architectures very efficiently: each processing unit can run its own genetic simulation. This makes perfect parallel acceleration of the algorithm possible.
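The second run pattern maps naturally onto a process pool: S independent small-population runs, keeping the best result. A minimal sketch follows; `single_run` must be a module-level function so the pool can pickle it, and `genetic_weak_learner`, `fitness`, `random_chromosome`, `one_point_crossover` and `mutate` are the illustrative helpers sketched earlier, not library functions.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def single_run(seed):
    """One small-population GA run; returns (fitness_value, chromosome)."""
    random.seed(seed)
    best = genetic_weak_learner(fitness, random_chromosome,
                                one_point_crossover, mutate,
                                N=25, K_max=10, R_c=0.5, R_m=0.2)
    return fitness(best), best

def best_of_s_runs(S=8, workers=4):
    """Run S independent searches in parallel and keep the fittest result."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(single_run, range(S)))
    return max(results)[1]
```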
Training and test sets
As in [Treptow and Zell 2004], the [Carbonetto 2002] human faces database was used to train and test the classifier for the Viola-Jones algorithm. The database was divided in half to form the training and test sets. Each sample has a size of 24 × 24 pixels.
Face images with landmarks from the FG-NET aging database were used to form the database for learning the face alignment ranker proposed in the boosted ranking model work. 600 face images were selected from the database and then resized to 40×40 pixels; 400 images were used to produce the training set and the other 200 for testing. 10 sequential 6-step random landmark position perturbations were then applied to the selected face images to produce images of misaligned faces, as described in the original paper. Training and test set samples were then made of pairs of images with increasing alignment quality.
Hardware
All the experiments were performed on a PC equipped with a 2.33 GHz Intel Core 2 Quad processor and 2 GB of DDR2 RAM.
Results
Tables 1 and 3 show the average duration of one boosting iteration together with a comparison to exhaustive search. Tables 2 and 4 show the error rates of the final classifiers on the training and test sets. We did not train any classifier using exhaustive search for the boosted ranking model because it would take about a year to finish the process on our training set.
Experiments with the Viola-Jones object detector showed that a classifier trained using the genetic weak learner performs only slightly worse than a classifier trained using exhaustive search over the classifier space. For N = 400 the final classifier even shows better performance. A classifier trained with S = 1, N = 50 and K_max = 10 accelerates boosting nearly 300 times compared to exhaustive search while still performing well on the test set. Classifiers trained with small N and big S values (using the second run pattern) perform worse than any other, but, as mentioned before, such classifiers can be trained on multiprocessor or multicore systems very efficiently.
Experiments with face alignment via the boosted ranking model showed how exactly classifier performance depends on the values of S, N and K_max. Increasing the value of each parameter results in increased training time, but also in increased classifier performance. Nevertheless, the difference in training time is much more significant than the difference in prediction error. The classifier with S = 1, N = 25, K_max = 10 was trained 50 times faster than the best obtained classifier for BRM, but its error is only 1.2 times worse. This makes such a classifier a perfect candidate for the preliminary experiments that usually take place before training of the final classifier starts.
Conclusion
An approach to boosting procedure acceleration was proposed in this work. The approach is based on using a special genetic weak learner for learning the weak classifier on each boosting iteration. The genetic weak learner uses a genetic algorithm with binary chromosomes, designed to solve the optimization problem of selecting the weak classifier with the smallest weighed loss from some parametric classifier family. The proposed method was generalized to the case when there exists an effective algorithm for learning some of the parameters of a weak classifier. Experiments have shown that such an approach allows us to accelerate the training process dramatically for practical tasks while keeping the prediction error small.
The genetic weak learner proposed in this work cannot be used to boost tree-based classifiers. That fact limits its usage in many scenarios, because stump weak classifiers cannot represent any relationships between different object features. So, in future work we plan to generalize our approach to accelerating tree-based boosting.
Another option for future research is performing additional experiments with classifiers not related to haar features in any way. That would confirm the proposed algorithm's benefit in computer vision problems not biased towards haar feature usage. In fact, it would be useful to determine other parametric classifier families that can be efficiently boosted using the proposed weak learner.
| 2,412 |
0906.0872
|
2143172016
|
An approach to the acceleration of parametric weak classifier boosting is proposed. A weak classifier is called parametric if it has a fixed number of parameters and, thus, can be represented as a point in a multidimensional space. A genetic algorithm is used instead of exhaustive search to learn the parameters of such a classifier. The proposed approach also takes into account cases where an effective algorithm for learning some of the classifier parameters exists. Experiments confirm that such an approach can dramatically decrease classifier training time while keeping both training and test errors small.
|
There were two main reasons for using genetic search instead of other approaches in these works. Most of the classifiers used in the works mentioned above were extensions of the haar classifier family originally proposed in @cite_1. The huge size of such a weak classifier family does not allow exhaustive-search-based optimization to be applied, and the complicated discrete structure of a weak classifier blocks most other optimization options.
|
{
"abstract": [
"This paper describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features and yields extremely efficient classifiers [4]. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. A set of experiments in the domain of face detection are presented. The system yields face detection performance comparable to the best previous systems [16, 11, 14, 10, 1]. Implemented on a conventional desktop, face detection proceeds at 15 frames per second."
],
"cite_N": [
"@cite_1"
],
"mid": [
"1761390164"
]
}
|
Fast Weak Learner Based on Genetic Algorithm
|
Boosting is one of the commonly used classifier learning approaches. It is a machine learning meta-algorithm that iteratively learns an additive model consisting of weighted weak classifiers that belong to some classifier family W. In the case of a two-class classification problem (which we consider in this paper) the boosted classifier usually has the form
s(y) = sgn( Σ_{i=1}^{N} α_i w_i(y) ).   (1)
Here y ∈ Y is a sample to classify, w_i ∈ W are weak classifiers learned during the boosting procedure, α_i are weak classifier weights, w_i(y) ∈ {−1, 1}, and s(y) ∈ {−1, 1}. The set W is referred to as the weak classifier family, because its elements only need an error rate slightly better than random guessing. This expresses the key idea of boosting: a strong classifier can be built on top of many weak ones.
There are many boosting procedures that differ in the type of loss being optimized for the final classifier. But no matter what kind of boosting procedure is used, on each iteration it must select (learn) a weak classifier with minimal weighted loss from the family W using a special algorithm called a weak learner. Fast and accurate optimization methods are often not applicable there (especially in the case of discrete classifier parameters), so exhaustive search over the weak classifier parameter space is used as a weak learner. Unfortunately, exhaustive search can take a lot of time. For example, learning a cascade of boosted classifiers based on Haar features with AdaBoost and exhaustive search over the classifier parameter space took several weeks in the famous work [Viola and Jones 2001]. That is why it is often very important to decrease weak classifier learning time using an appropriate numerical optimization approach.
One of the widely used approaches to numerical optimization is the genetic algorithm [Goldberg 1989], which is based on ideas from biological evolution. A solution to the optimization problem is coded as a chromosome vector. An initial population of solutions is created using a random number generator. A fitness function is then used to assign a fitness value to every population member. Solutions with the highest fitness values are selected for the next step, in which genetic operators (usually crossover and mutation) are applied to the selected chromosomes to produce new solutions and to modify existing ones slightly. These modified solutions form a new generation, and the described process repeats. This is how evolution is modeled. It continues until a global or suboptimal solution is found or the time allowed for evolution is over. Genetic algorithms are often used for global extremum search in big and complicated search spaces, which makes the genetic algorithm a good candidate for a weak learner.
Population member
Let W be some parametric family of weak classifiers. This means that every weak classifier w ∈ W can be described by the set of its n real-valued parameters x_1, . . . , x_n. Let us also assume that for the last l parameters (l can be equal to zero) there exists some effective learning algorithm L_E : R^{n−l} → R^l. We will refer to such parameters as linked. For given values of the parameters x_1, . . . , x_{n−l}, called free, L_E finds optimal values of the linked parameters that minimize the loss function E : R^n → R_+. Our task is therefore to find values of the free parameters that deliver the minimum of the loss function E[x_1, . . . , x_{n−l}, L_E(x_1, . . . , x_{n−l})]. So, a set of free parameters x_1, . . . , x_{n−l} represents a solution to our optimization problem and forms a member of the genetic algorithm population.
Fitness function
It is natural to assume that a classifier with a small error on the training set should have a greater probability of getting into the next generation of the genetic algorithm. This allows us to introduce the fitness function F : R^{n−l} → R_+ as follows:
F(x_1, . . . , x_{n−l}) = 1 / E[x_1, . . . , x_{n−l}, L_E(x_1, . . . , x_{n−l})].   (2)
We do not consider the E = 0 case. A classifier cannot be called weak if it has zero error on the training set. If such a classifier is present in a weak classifier family, we can simply select that classifier as the result of the whole boosting procedure.
Genetic representation
Any approach that allows us to code a set of free parameters is appropriate for the population member representation. In this work we have selected the binary string representation, which has been confirmed to be effective in function optimization problems. Some alternative representations can be found, for example, in [Goldberg 1989].
To form the binary string classifier representation, each classifier parameter should be first represented as a binary string of fixed length, using fixed-precision encoding. Then all the parameters can be simply concatenated to form the final binary string of fixed length.
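As an illustration of this fixed-precision encoding — our own sketch, not the authors' code — the helpers below quantize each free parameter over an assumed known range into a fixed number of bits and concatenate the results; the parameter ranges (`bounds`) and `bits_per_param` are hypothetical choices.

```python
def encode_params(params, bounds, bits_per_param=16):
    """Encode real-valued free parameters as one fixed-length bit list.

    params : list of parameter values x_1 ... x_{n-l}
    bounds : list of (lo, hi) ranges, one per parameter
    """
    chromosome = []
    for x, (lo, hi) in zip(params, bounds):
        # Fixed-precision quantization of x into [0, 2^bits - 1].
        level = int(round((x - lo) / (hi - lo) * (2 ** bits_per_param - 1)))
        level = max(0, min(2 ** bits_per_param - 1, level))
        chromosome += [(level >> b) & 1 for b in reversed(range(bits_per_param))]
    return chromosome

def decode_params(chromosome, bounds, bits_per_param=16):
    """Inverse of encode_params: bit list back to real-valued parameters."""
    params = []
    for i, (lo, hi) in enumerate(bounds):
        bits = chromosome[i * bits_per_param:(i + 1) * bits_per_param]
        level = 0
        for b in bits:
            level = (level << 1) | b
        params.append(lo + level / (2 ** bits_per_param - 1) * (hi - lo))
    return params
```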
Sometimes a point p ∈ R^n has no corresponding classifier. For the different families of image region classifiers this is possible, for example, when one of the free parameters representing the top-left corner of a classifier window is below zero. In this case the fitness function value for the population member representing that point can be forced to be zero. That is how such situations were dealt with in the experiments described in section 4. Another possible approach is to select the representation and genetic operators in a way that simply does not allow such points to appear, but that approach is less general.
Genetic operators
In this work we have used the two most common genetic operators: mutation and crossover. For the binary string representation, mutation and crossover are usually defined as follows:
• The crossover operator selects a random position in the binary string. It then swaps all the bits to the right of the selected position between two chromosomes. Such a crossover implementation is called 1-point crossover.
• The mutation operator changes the value of a random chromosome bit to the opposite.
In our case, the crossover operator produces two new solutions from the two given chromosomes as follows: some of the parameters (placed to the left of the selected position) are taken from the first classifier, and some of the parameters (placed to the right) from the second. One parameter can possibly be made from both the first and the second classifier. The mutation operator simply produces a new solution by changing the value of a random classifier parameter.
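A minimal sketch of these two operators on bit-list chromosomes, assuming the fixed-length encoding from the previous sketch; the function names are ours.

```python
import random

def one_point_crossover(parent_a, parent_b):
    """Swap all bits to the right of a random cut point between two chromosomes."""
    cut = random.randrange(1, len(parent_a))
    child_a = parent_a[:cut] + parent_b[cut:]
    child_b = parent_b[:cut] + parent_a[cut:]
    return child_a, child_b

def mutate(chromosome):
    """Flip one randomly chosen bit to the opposite value."""
    mutant = list(chromosome)
    pos = random.randrange(len(mutant))
    mutant[pos] = 1 - mutant[pos]
    return mutant
```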
Algorithm summary
Algorithm 1 Genetic weak learner
1: Generate an initial population of N random binary strings;
2: for i = 1, . . . , Kmax do
3:   Add ⌈N Rc⌉ members to the population by applying the crossover operator to pairs of the best population members;
4:   Apply the mutation operator to ⌈N Rm⌉ random population members;
5:   Calculate the value of (2) for each population member;
6:   Remove all population members except the N best (the ones with the highest value of (2));
7: end for
8: return the weak classifier associated with the point represented by the best population member.
Algorithm 1 uses elitism as a population member selection approach. It has 4 parameters:
• N > 0 -population size.
• Kmax > 0 -number of generations.
• Rc ∈ (0, 1] -crossover rate.
• Rm ∈ (0, 1] -mutation rate.
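Putting the pieces together, a sketch of Algorithm 1 under these parameters might look as follows; it reuses the operator sketch above, and `fitness` is assumed to implement equation (2), i.e. the reciprocal of the weighted loss after the linked parameters have been fitted by L_E. All names and defaults are illustrative, not taken from the paper.

```python
import math
import random

def genetic_weak_learner(fitness, chrom_len, N=50, K_max=10, R_c=0.8, R_m=0.2):
    """Return the best chromosome found by the elitist GA of Algorithm 1.

    fitness   : callable mapping a chromosome (bit list) to a value of (2)
    chrom_len : number of bits per chromosome
    """
    population = [[random.randint(0, 1) for _ in range(chrom_len)] for _ in range(N)]
    for _ in range(K_max):
        # Crossover: add ceil(N * R_c) children bred from the best members.
        ranked = sorted(population, key=fitness, reverse=True)
        children = []
        while len(children) < math.ceil(N * R_c):
            a = ranked[len(children) % len(ranked)]
            b = ranked[(len(children) + 1) % len(ranked)]
            children.extend(one_point_crossover(a, b))
        population += children[:math.ceil(N * R_c)]
        # Mutation: perturb ceil(N * R_m) random population members.
        for _ in range(math.ceil(N * R_m)):
            idx = random.randrange(len(population))
            population[idx] = mutate(population[idx])
        # Elitism: keep only the N fittest members.
        population = sorted(population, key=fitness, reverse=True)[:N]
    return max(population, key=fitness)
```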
Discussion
The advantage of the proposed method lies in the fact that the computational complexity of the weak learner does not depend on the size of the weak classifier family. One can balance training time against classifier performance simply by changing the values of N, Kmax and S (discussed later). A similar effect can be achieved by shrinking the weak classifier family itself, but in most cases prior knowledge about weak classifier performance in boosting is simply not available.
One of the main disadvantages of the proposed weak learner is the fact that many potentially interesting weak classifiers cannot be represented as a parameter vector of constant length. For example, decision trees, widely used in boosting, can have a variable number of nodes. The misclassification loss we want to optimize should also be more or less stable as a function of the classifier's free parameters. If small perturbations of the free parameter vector lead to unpredictable changes in the loss function value, genetic optimization does not make much sense, becoming just a random search. Unfortunately, that situation happens quite often, especially if the classifier parameter count is small. A common example is a situation when one of the free parameters represents a feature number and features with close numbers are not correlated at all.
Experiments
Algorithms for experiments
Two boosting-based algorithms were implemented to compare the proposed genetic weak learner with the original learners proposed by the algorithm authors. Viola-Jones [Viola and Jones 2001] and face alignment via a boosted ranking model were selected for that purpose because both algorithms use parametric weak classifiers applied to image regions. These algorithms are based on distinct boosting procedures (AdaBoost and GentleBoost), so the loss, sample weight and classifier weight functions used in them differ a lot. Another difference between the selected algorithms is the problem they solve: two-class classification for Viola-Jones and ranking for the boosted ranking model. The training time of a naive implementation is quite long for both algorithms, so acceleration of the boosting process is necessary.
Weak classifiers used in both algorithms are based on Haar features and have a common set of adjustable parameters. So, a weak classifier in both problems can be represented as w_i = (x_i, y_i, width_i, height_i, type_i, g_i, t_i). Here x_i, y_i, width_i and height_i describe the image region, type_i encodes the Haar feature type, g_i is the Haar feature sign and t_i represents the weak classifier threshold. The parameters g_i and t_i are linked because both algorithms have an effective algorithm for learning them. The parameter type_i was also made linked: changing the feature type during genetic optimization does not make much sense because it can change the fitness function value significantly after just one mutation or crossover. Instead, a separate algorithm run was performed for each feature type and the best result from all the runs was then selected. We used the same five Haar feature types for training both classifiers.
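For the linked parameters g_i and t_i, the usual effective learner sorts the feature responses once and scans for the threshold and polarity with the smallest weighted error. The sketch below is our reconstruction of such a routine, not the authors' implementation; the stump form h(y) = g · sign(f(y) − t) and the variable names are assumptions.

```python
import numpy as np

def fit_threshold_and_sign(responses, labels, weights):
    """Best threshold t and sign g for a stump h(y) = g * sign(f(y) - t).

    responses : Haar-feature values f(y) for the training samples
    labels    : class labels in {+1, -1}
    weights   : current boosting weights (non-negative)
    Returns (weighted_error, threshold, sign).
    """
    responses = np.asarray(responses, dtype=float)
    labels = np.asarray(labels)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(responses)
    r, y, w = responses[order], labels[order], weights[order]
    pos_left = np.concatenate(([0.0], np.cumsum(w * (y == 1))))
    neg_left = np.concatenate(([0.0], np.cumsum(w * (y == -1))))
    total_pos, total_neg = pos_left[-1], neg_left[-1]
    best = (np.inf, 0.0, 1)
    for k in range(len(r) + 1):
        # With g = +1: predict -1 for the k smallest responses, +1 for the rest.
        err_plus = pos_left[k] + (total_neg - neg_left[k])
        err_minus = (total_pos + total_neg) - err_plus  # flipping g flips the error
        t = r[k - 1] if k > 0 else r[0] - 1.0
        if err_plus < best[0]:
            best = (err_plus, t, 1)
        if err_minus < best[0]:
            best = (err_minus, t, -1)
    return best
```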
Run patterns
A comparison of two different genetic algorithm run patterns was also performed in this work. One pattern considered was running the genetic optimization once with a big population size. The other pattern was running the optimization algorithm multiple times (denoted as S) with a small population size and then selecting the best found classifier. When the population size is small, the final solution depends strongly on the initial population, so considerably different results can be obtained for different algorithm runs. While this run pattern produces worse classifiers, it can be implemented on multiprocessor and multicore architectures very efficiently: each processing unit can run its own genetic simulation. That makes perfect parallel acceleration of the algorithm possible.
Training and test sets
As in the work [Treptow and Zell 2004], the [Carbonetto 2002] human faces database was used to train and test the classifier for the Viola-Jones algorithm. The database was divided in half to form the training and test sets. Each sample has a size of 24 × 24 pixels.
Face images with landmarks from the FG-NET aging database were used to form the database for learning the face alignment ranker. 600 face images were selected from the database and then resized to 40 × 40 pixels. 400 images were used to produce the training set and the other 200 for testing. 10 sequential 6-step random landmark position perturbations were then applied to the selected face images to produce images of misaligned faces, as described in the original paper. Training and test set samples were then made of pairs of images with increasing alignment quality.
Hardware
All the experiments were performed on PC equipped with 2.33 GHz Intel Core 2 Quad processor and 2 GB of DDR2 RAM.
Results
Tables 1 and 3 show the average duration of one boosting iteration together with a comparison to exhaustive search. Tables 2 and 4 show the error rate of the final classifiers on the training and test sets. We did not train any classifier using exhaustive search for the boosted ranking model because it would take about a year to finish the process on our training set.
Experiments with the Viola-Jones object detector showed that a classifier trained using the genetic weak learner performs only slightly worse than a classifier trained using exhaustive search over the classifier space. For N = 400 the final classifier even shows better performance. A classifier trained with S = 1, N = 50 and Kmax = 10 accelerates boosting by a factor of nearly 300 compared to exhaustive search while still performing well on the test set. Classifiers trained with small N and big S values (using the second run pattern) perform worse than any other. But, as was mentioned before, such classifiers can be trained on multiprocessor or multicore systems very efficiently.
Experiments with face alignment via the boosted ranking model showed how exactly classifier performance depends on the values of S, N and Kmax. Increasing the value of each parameter results in increased training time, but also in increased classifier performance. Nevertheless, the difference in training time is much more significant than the difference in prediction error. The classifier with S = 1, N = 25, Kmax = 10 was trained 50 times faster than the best obtained classifier for BRM, but its error is only 1.2 times worse. This makes such a classifier a perfect candidate for the preliminary experiments that usually take place before training of the final classifier starts.
Conclusion
An approach to accelerating the boosting procedure was proposed in this work. The approach is based on a special genetic weak learner for learning the weak classifier on each boosting iteration. The genetic weak learner uses a genetic algorithm with binary chromosomes, designed to solve the optimization problem of selecting the weak classifier with the smallest weighted loss from some parametric classifier family. The proposed method was generalized to the case when there exists an effective algorithm for learning some of the parameters of a weak classifier. Experiments have shown that such an approach allows us to accelerate the training process dramatically for practical tasks while keeping the prediction error small.
The genetic weak learner proposed in this work cannot be used to boost tree-based classifiers. This limits its usage in many scenarios because stump weak classifiers cannot represent relationships between different object features. So, in future work we plan to generalize our approach to accelerating tree-based boosting.
Another option for future research is performing additional experiments with classifiers not related to Haar features in any way. That would confirm the proposed algorithm's benefit in computer vision problems not biased towards Haar feature usage. It would also be useful to identify further parametric classifier families that can be efficiently boosted using the proposed weak learner.
| 2,412 |
0905.4887
|
1967991235
|
Nonlinear dimensionality reduction (NLDR) algorithms such as Isomap, LLE and Laplacian Eigenmaps address the problem of representing high-dimensional nonlinear data in terms of low-dimensional coordinates which represent the intrinsic structure of the data. This paradigm incorporates the assumption that real-valued coordinates provide a rich enough class of functions to represent the data faithfully and efficiently. On the other hand, there are simple structures which challenge this assumption: the circle, for example, is one-dimensional but its faithful representation requires two real coordinates. In this work, we present a strategy for constructing circle-valued functions on a statistical data set. We develop a machinery of persistent cohomology to identify candidates for significant circle-structures in the data, and we use harmonic smoothing and integration to obtain the circle-valued coordinate functions themselves. We suggest that this enriched class of coordinate functions permits a precise NLDR analysis of a broader range of realistic data sets.
|
There have been other attempts to address the problem of finding good coordinate representations of simple non-Euclidean data spaces. One approach @cite_7 is to use modified versions of multidimensional scaling specifically devised to find the best embedding of a data set into the cylinder, the sphere and so on. The target space has to be chosen in advance. Another class of approaches @cite_11 @cite_8 involves cutting the data manifold along arcs and curves until it has trivial topology. The resulting configuration can then be embedded in Euclidean space in the usual way. In our approach, the number of circular coordinates is not fixed in advance, but is determined experimentally after a persistent homology calculation. Moreover, there is no cutting involved; the coordinate functions respect the original topology of the data.
|
{
"abstract": [
"Manifold learning has become an important tool to characterize high-dimensional data that vary nonlinearly due to a few parameters. Applications to the analysis of medical imagery and human motion patterns have been successful despite the lack of effective tools to parameterize cyclic data sets. This paper offers an initial approach to this problem, and provides for a minimal parameterization of points that are drawn from cylindrical manifolds-data whose (unknown) generative model includes a cyclic and a non-cyclic parameter. Solving for this special case is important for a number of current, practical applications and provides a start toward a general approach to cyclic manifolds. We offer results on synthetic and real data sets and illustrate an application to de-noising cardiac ultrasound images.",
"",
"Numerous methods or algorithms have been designed to solve the problem of nonlinear dimensionality reduction (NLDR). However, very few among them are able to embed efficiently 'circular' manifolds like cylinders or tori, which have one or more essential loops. This paper presents a simple and fast procedure that can tear or cut those manifolds, i.e. break their essential loops, in order to make their embedding in a low-dimensional space easier. The key idea is the following: starting from the available data points, the tearing procedure represents the underlying manifold by a graph and then builds a maximum subgraph with no loops anymore. Because it works with a graph, the procedure can preprocess data for all NLDR techniques that uses the same representation. Recent techniques using geodesic distances (Isomap, geodesic Sammon's mapping, geodesic CCA, etc.) or K-ary neighborhoods (LLE, hLLE, Laplacian eigenmaps) fall in that category. After describing the tearing procedure in details, the paper comments a few experimental results. (c) 2005 Elsevier B.V. All rights reserved."
],
"cite_N": [
"@cite_8",
"@cite_7",
"@cite_11"
],
"mid": [
"2145050861",
"1484191163",
"2076137473"
]
}
|
Persistent Cohomology and Circular Coordinates
|
Nonlinear dimensionality reduction (nldr) algorithms address the following problem: given a high-dimensional collection of data points X ⊂ R N , find a low-dimensional embedding φ : X → R n (for some n ≪ N ) which faithfully preserves the 'intrinsic' structure of the data. For instance, if the data have been obtained by sampling from some unknown manifold M ⊂ R N -perhaps the parameter space of some physical system -then φ might correspond to an n-dimensional coordinate system on M . If M is completely and non-redundantly parametrized by these n coordinates, then the nldr is regarded as having succeeded completely.
Principal components analysis, or linear regression, is the simplest form of dimensionality reduction; the embedding function φ is taken to be a linear projection. This is closely related to (and sometimes identified with) classical multidimensional scaling [2].
When there are no satisfactory linear projections, it becomes necessary to use nldr. Prominent algorithms for nldr include Locally Linear Embedding [14], Isomap [16], Laplacian Eigenmaps [1], Hessian Eigenmaps [5], and many more.
These techniques share an implicit assumption that the unknown manifold M is well-described by a finite set of coordinate functions φ1, φ2, . . . , φn : M → R. Explicitly, some of the correctness theorems in these studies depend on the hypothesis that M has the topological structure of a convex domain in some R n . This hypothesis guarantees that good coordinates exist, and shifts the burden of proof onto showing that the algorithm recovers these coordinates.
In this paper we ask what happens when this assumption fails. The simplest space which challenges the assumption is the circle, which is one-dimensional but requires two real coordinates for a faithful embedding. Other simple examples include the annulus, the torus, the figure eight, and the 2-sphere, the last three of which present topological obstructions to being embedded in the Euclidean space of their natural dimension. We propose that an appropriate response to the problem is to enlarge the class of coordinate functions to include circle-valued coordinates θ : M → S^1. In a physical setting, circular coordinates occur naturally as angular and phase variables. Spaces like the annulus and the torus are well described by a combination of real and circular coordinates. (The 2-sphere is not so lucky, and must await its day.)
The goal of this paper is to describe a natural procedure for constructing circular coordinates on a nonlinear data set using techniques from classical algebraic topology and its 21st-century grandchild, persistent topology. We direct the reader to [9] as a general reference for algebraic topology, and to [17] for a streamlined account of persistent homology.
Overview
The principle behind our algorithm is the following equation from homotopy theory, valid for topological spaces X with the homotopy type of a cell complex (which covers everything we normally encounter):
[X, S^1] = H^1(X; Z)   (1)
The left-hand side denotes the set of equivalence classes of continuous maps from X to the circle S 1 ; two maps are equivalent if they are homotopic (meaning that one map can be deformed continuously into the other); the right-hand side denotes the 1-dimensional cohomology of X, taken with integer coefficients. In other language: S 1 is the classifying space for H 1 , or equivalently S 1 is the Eilenberg-MacLane space K(Z, 1). See section 4.3 of [9]. If X is a contractible space (such as a convex subset of R n ), then H 1 (X; Z) = 0 and Equation (1) tells us not to bother looking for circular functions: all such functions are homotopic to a constant function. On the other hand, if X has nontrivial topology then there may well exist a nonzero cohomology class [α] ∈ H 1 (X; Z); we can then build a continuous function X → S 1 which in some sense reveals [α].
Our strategy divides into the following steps.
1. Represent the given discrete data set as a simplicial complex or filtered simplicial complex.
2. Use persistent cohomology to identify a 'significant' cohomology class in the data. For technical reasons, we carry this out with coefficients in the field Fp of integers modulo p, for some prime p. This gives us [αp] ∈ H 1 (X; Fp).
3. Lift [α_p] to a cohomology class with integer coefficients: [α] ∈ H^1(X; Z).
4. Smoothing: replace the integer cocycle α by a harmonic cocycle in the same cohomology class: ᾱ ∈ C^1(X; R).
5. Integrate the harmonic cocycle ᾱ to a circle-valued function θ : X → S^1.
The paper is organized as follows. In Section 2.1, we derive what we need of equation (1). Steps (1)-(5) of the algorithm are addressed in Sections 2.2-2.6, respectively. In Section 3 we report some experimental results.
ALGORITHM DETAILS
Cohomology and circular functions
Let X be a finite simplicial complex. Let X 0 , X 1 , X 2 denote the sets of vertices, edges and triangles of X, respectively. We suppose that the vertices are totally ordered (in an arbitrary way). If a < b then the edge between vertices a, b is always written ab and not ba. Similarly, if a < b < c then the triangle with vertices a, b, c is always written abc.
Cohomology can be defined as follows. Let A be a commutative ring (for example A = Z, Fp, R). We define 0-cochains, 1-cochains, and 2-cochains as follows:
C^0 = C^0(X; A) = { functions f : X_0 → A }
C^1 = C^1(X; A) = { functions α : X_1 → A }
C^2 = C^2(X; A) = { functions A : X_2 → A }
These are modules over A. We now define coboundary maps d_0 : C^0 → C^1 and d_1 : C^1 → C^2 by
(d_0 f)(ab) = f(b) − f(a),   (d_1 α)(abc) = α(bc) − α(ac) + α(ab).
Let α ∈ C 1 . If d1α = 0 we say that α is a cocycle. If d0f = α admits a solution f ∈ C 0 we say that α is a coboundary. The solution f , if it exists, can be thought of as the discrete integral of α. It is unique up to adding constants on each connected component of X.
It is easily verified that d1d0f = 0 for any f ∈ C 0 . Thus, coboundaries are always cocycles, or equivalently Im(d0) ⊆ Ker(d1). We can measure the difference between coboundaries and cocycles by defining the 1-cohomology of X to be the quotient module H 1 (X; A) = Ker(d1)/ Im(d0).
We say that two cocycles α, β are cohomologous if α − β is a coboundary.
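In computations it is convenient to store d_0 and d_1 as sparse matrices indexed by the ordered vertices, edges and triangles; the sketch below (our own illustration using scipy, not code from the paper) does exactly that. A cochain is then just a vector, a cocycle lies in the kernel of d_1, and a coboundary lies in the image of d_0.

```python
from scipy.sparse import lil_matrix

def coboundary_matrices(vertices, edges, triangles):
    """Sparse d0 : C^0 -> C^1 and d1 : C^1 -> C^2 for an ordered simplicial complex.

    vertices  : list of vertex labels, in their total order
    edges     : list of pairs (a, b) with a < b
    triangles : list of triples (a, b, c) with a < b < c
    """
    v_index = {v: i for i, v in enumerate(vertices)}
    e_index = {e: i for i, e in enumerate(edges)}
    d0 = lil_matrix((len(edges), len(vertices)))
    for i, (a, b) in enumerate(edges):
        d0[i, v_index[b]] = 1.0    # (d0 f)(ab) = f(b) - f(a)
        d0[i, v_index[a]] = -1.0
    d1 = lil_matrix((len(triangles), len(edges)))
    for i, (a, b, c) in enumerate(triangles):
        d1[i, e_index[(b, c)]] = 1.0    # (d1 x)(abc) = x(bc) - x(ac) + x(ab)
        d1[i, e_index[(a, c)]] = -1.0
        d1[i, e_index[(a, b)]] = 1.0
    return d0.tocsr(), d1.tocsr()
```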
We now consider integer coefficients. The following proposition fulfils part of the promise of equation (1), by producing circle-valued functions from integer cocycles. It will be helpful to think of S^1 as the quotient group R/Z.
Proposition 1. Let α ∈ C^1(X; Z) be a cocycle. Then there exists a continuous function θ : X → R/Z which maps each vertex to 0, and each edge ab around the entire circle with winding number α(ab).
Proof. We can define θ inductively on the vertices, edges, triangles, . . . of X. The vertices and edges follow the prescription in the statement of the proposition. To extend θ to the triangles, it is necessary that the winding number of θ along the boundary of each triangle abc is zero. And indeed this is α(bc) − α(ac) + α(ab) = d1α(abc) = 0. Since the higher homotopy groups of S 1 are all zero ( [9], section 4.3), θ can then be extended to the higher cells of X without obstruction.
The construction in Proposition 1 is unsatisfactory in the sense that all vertices are mapped to the same point. All variation in the circle parameter takes place in the interior of the edges (and higher cells). This is rather unsmooth. For more leeway, we consider real coefficients.
Proposition 2. Let ᾱ ∈ C^1(X; R) be a cocycle. Suppose we can find α ∈ C^1(X; Z) and f ∈ C^0(X; R) such that ᾱ = α + d_0 f. Then there exists a continuous function θ : X → R/Z which maps each edge ab linearly to an interval of length ᾱ(ab), measured with sign.
In other words, we can construct a circle-valued function out of any real cocycle ᾱ whose cohomology class [ᾱ] lies in the image of the natural homomorphism H^1(X; Z) → H^1(X; R).
Proof. Define θ on the vertices of X by setting θ(a) to be f(a) mod Z. For each edge ab, we have
θ(b) − θ(a) = f(b) − f(a) = d_0 f(ab) = ᾱ(ab) − α(ab),
which is congruent to ᾱ(ab) mod Z, since α(ab) is an integer.
It follows that θ can be taken to map ab linearly onto an interval of signed length ᾱ(ab). Since ᾱ is a cocycle, θ can be extended to the triangles as before; then to the higher cells.
Proposition 2 suggests the following tactic: from an integer cocycle α we construct a cohomologous real cocycle ᾱ = α + d_0 f, and then define θ = f mod Z on the vertices of X. If we can construct ᾱ so that the edge-lengths |ᾱ(ab)| are small, then the behaviour of θ will be apparent from its restriction to the vertices. See Section 2.5.
Point-cloud data to simplicial complex
We now begin describing the workflow in detail. The input is a point-cloud data set: in other words, a finite set S ⊂ R N or more generally a finite metric space. The first step is to convert S into a simplicial complex and to identify a stablelooking integer cohomology class. This will occupy the next three subsections.
The first lesson of point-cloud topology [7] is that point clouds are best represented by 1-parameter nested families of simplicial complexes. There are several candidate constructions: the Vietoris-Rips complex X_ǫ = Rips(S, ǫ) has vertex set S and includes a k-simplex whenever all k + 1 vertices lie pairwise within distance ǫ of each other. The witness complex X_ǫ = Witness(L, S, ǫ) uses a smaller vertex set L ⊂ S and includes a k-simplex when the k + 1 vertices lie close to other points of S, in a certain precise sense (see [3, 8]). In both cases, X_ǫ ⊆ X_ǫ′ whenever ǫ ≤ ǫ′. Either of these constructions will serve our purposes, but the witness complex has the computational advantage of being considerably smaller.
We determine X ǫ only up to its 2-skeleton, since we are interested in H 1 .
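For concreteness, the 2-skeleton of Rips(S, ǫ) can be assembled directly from pairwise distances, as in the following sketch (a straightforward illustration of the definition, not the jPlex implementation; it enumerates all pairs and triples, so it is only intended for small examples).

```python
import numpy as np
from itertools import combinations

def rips_2_skeleton(points, eps):
    """Vertices, edges and triangles of Rips(S, eps), up to dimension 2."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    close = dist <= eps
    vertices = list(range(n))
    edges = [(a, b) for a, b in combinations(vertices, 2) if close[a, b]]
    triangles = [(a, b, c) for a, b, c in combinations(vertices, 3)
                 if close[a, b] and close[a, c] and close[b, c]]
    return vertices, edges, triangles
```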
Persistent cohomology
Having constructed a 1-parameter family {X ǫ }, we apply the principle of persistence to identify cocycles that are stable across a large range for ǫ. Suppose that ǫ1, ǫ2, . . . , ǫm are the critical values where the complex X ǫ gains new cells. The family can be represented as a diagram
X_{ǫ_1} −→ X_{ǫ_2} −→ · · · −→ X_{ǫ_m}
of simplicial complexes and inclusion maps. For any coefficient field F, the cohomology functor H 1 (−; F) converts this diagram into a diagram of vector spaces and linear maps over F; the arrows are reversed:
H^1(X_{ǫ_1}; F) ←− H^1(X_{ǫ_2}; F) ←− · · · ←− H^1(X_{ǫ_m}; F)
According to the theory of persistence [6,17], such a diagram decomposes as a direct sum of 1-dimensional terms indexed by half-open intervals of the form [ǫi, ǫj ). Each such term corresponds to a cochain α ∈ C i (X ǫ ) that satisfies the cocycle condition for ǫ < ǫj and becomes a coboundary for ǫ < ǫi. The collection of intervals can be displayed graphically as a persistence diagram, by representing each interval [ǫi, ǫj ) as a point (ǫi, ǫj ) in the Cartesian plane above the main diagonal. We think of long intervals as representing trustworthy (i.e. stable) topological information.
Choice of coefficients. The persistence decomposition theorem applies to diagrams of vector spaces over a field. When we work over the ring of integers Z, however, the result is known to fail: there need not be an interval decomposition. This is unfortunate, since we require integer cocycles to construct circle maps. To finesse this problem, we pick an arbitrary prime number p (such as p = 47) and carry out our persistence calculations over the finite field F = Fp. The resulting Fp cocyle must then be converted to integer coefficients: we address this in Section 2.4.
In principle we can use the ideas in [17] to calculate the persistent cohomology intervals and then select a long interval [ǫi, ǫj) and a specific δ ∈ [ǫi, ǫj). We then let X = X δ and take α to be the cocycle in C 1 (X; F) corresponding to the interval.
Explicitly, persistent cocycles can be calculated in the following way. We thank Dmitriy Morozov for this algorithm. Suppose that the simplices in the filtered complex are totally ordered, and labelled σ1, σ2, . . . , σm so that σi arrives at time ǫi. For k = 0, 1, . . . , m we maintain the following information:
• a set of indices I k ⊆ {1, 2, . . . , k} associated with 'live' cocycles;
• a list of cocycles (αi : i ∈ I k ) in C * (X ǫ k ; F).
The cocycle αi involves only σi and those simplices of the same dimension that appear later in the filtration sequence (thus only σj with j ≥ i).
Initially I0 = ∅ and the list of cocycles is empty. To update from k − 1 to k, we compute the coboundaries of the cocycles (αi : i ∈ I k−1 ) of X ǫ k−1 within the larger complex X ǫ k obtained by including the simplex σ k . In fact, these coboundaries must be multiples of the elementary cocycle α = [σ k ] defined by α(σ k ) = 1, and α(σj ) = 0 otherwise. We can write dαi = ci[σ k ]. If all the ci are zero, then we have one new cocycle: let I k = I k−1 ∪{k} and define α k = [σ k ]. Otherwise, we must lose a cocycle. Let j ∈ I k−1 be the largest index for which cj = 0. We delete αj by setting I k = I k−1 \ {j}, and we restore the earlier cocycles by setting αi ← αi − (ci/cj )αj. In this latter case, we write the persistence interval [ǫj , ǫ k ) to the output.
At the end of the process, surviving cocycles are associated with semi-infinite intervals: [ǫi, ∞) for i ∈ Im.
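A direct transcription of this update rule might look as follows. This is our own sketch over F_p, with cochains stored as dictionaries from simplices (sorted vertex tuples) to coefficients; the filtration bookkeeping is deliberately simplified and p is assumed prime so that nonzero coefficients are invertible.

```python
def persistent_cocycles(filtration, p):
    """Persistent cohomology over F_p with explicit representative cocycles.

    filtration : list of (time, simplex) pairs, simplices as sorted vertex tuples,
                 ordered so that every face appears before every coface.
    Returns (intervals, live): finite intervals as (birth, death, cocycle) triples
    and the still-live cocycles, which correspond to semi-infinite intervals.
    """
    live = []        # list of [birth_time, cocycle], cocycle = dict simplex -> F_p
    intervals = []
    for time, sigma in filtration:
        # Codimension-1 faces of sigma with their signs in the coboundary formula.
        faces = ([((-1) ** j, sigma[:j] + sigma[j + 1:]) for j in range(len(sigma))]
                 if len(sigma) > 1 else [])
        # c_i = (d alpha_i)(sigma) for every live cocycle alpha_i.
        coeffs = [sum(s * alpha.get(tau, 0) for s, tau in faces) % p
                  for _, alpha in live]
        nonzero = [i for i, c in enumerate(coeffs) if c != 0]
        if not nonzero:
            live.append([time, {sigma: 1}])      # one new cocycle [sigma]
        else:
            j = nonzero[-1]                      # youngest live cocycle with c_j != 0
            birth_j, alpha_j = live[j]
            inv_cj = pow(coeffs[j], p - 2, p)    # modular inverse of c_j
            for i in nonzero[:-1]:
                alpha_i = live[i][1]
                factor = (coeffs[i] * inv_cj) % p
                # alpha_i <- alpha_i - (c_i / c_j) alpha_j, restoring the cocycles
                for tau, val in alpha_j.items():
                    alpha_i[tau] = (alpha_i.get(tau, 0) - factor * val) % p
            intervals.append((birth_j, time, alpha_j))
            live.pop(j)
    return intervals, live
```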
Remark. The reader may be more familiar with persistence diagrams in homology rather than cohomology. In fact, the universal coefficient theorem [9] implies that the two diagrams are identical. The salient point is that cohomology is the vector-space dual of homology, when working with field coefficients. That said, we cannot simply use the usual algorithm for persistent homology: we are interested in obtaining explicit cocycles, whereas the classical algorithm [17] returns cycles.
We will establish the correctness of this algorithm in the archival version of this paper. The expert reader may regard this as an exercise in the theory of persistence.
Lifting to integer coefficients
We now have a simplicial complex X = X δ and a cocycle αp ∈ C 1 (X; Fp). The next step is to 'lift' αp by constructing an integer cocycle α which reduces to αp modulo p.
To show that this is (almost) always possible, note that the short exact sequence of coefficient rings 0 → Z →(·p) Z → F_p → 0 gives rise to a long exact sequence, called the Bockstein sequence (see Section 3.E of [9]). Here is the relevant section of the sequence:
· · · → H^1(X; Z) → H^1(X; F_p) →(β) H^2(X; Z) →(·p) H^2(X; Z) → · · ·
By exactness, the Bockstein homomorphism β induces an isomorphism between the cokernel of H 1 (X; Z) → H 1 (X; Fp) and the kernel of H 2 (X; Z) ·p → H 2 (X; Z), and this kernel is precisely the set of p-torsion elements of H 2 (X; Z). If there is no p-torsion, then it follows immediately that the cokernel of the first map is zero. In other words H 1 (X; Z) → H 1 (X; Fp) is surjective; any cocycle αp ∈ C 1 (X; Fp) can be lifted to a cocycle α ∈ C 1 (X; Z).
If we are unluckily sabotaged by p-torsion, then we pick another prime and redo the calculation from scratch: it is enough to pick a prime that does not divide the order of the torsion subgroup of H 2 (X; Z), so almost any prime will do.
In practice, we construct α by taking the coefficients of αp in Fp and replacing them with integers in the correct congruence class modulo p. The default choice is to choose coefficients close to zero. If d1α = 0 then we are done; otherwise it becomes necessary to do some repair work. Certainly d1α ≡ 0 modulo p, so we can write d1α = pη for some η ∈ C 2 (X; Z). In the absence of p-torsion, we can then solve η = d1ζ for ζ ∈ C 1 (X; Z), and then the required lift is α − pζ. Fortunately, this has not proved necessary in any of our examples.
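The near-zero lift described here is easy to write down explicitly; the following sketch (ours, not the authors' code) replaces each F_p coefficient by its representative closest to zero and reports whether the result is already an integer cocycle, leaving the η/ζ repair step to the caller.

```python
def lift_to_integers(cocycle_mod_p, p, triangles):
    """Lift an F_p 1-cocycle (dict: edge -> coefficient in {0,...,p-1}) to Z.

    Each coefficient is replaced by the representative closest to zero; we then
    check d1(alpha) = 0 over the integers on the given triangles.
    """
    alpha = {e: (c if c <= p // 2 else c - p) for e, c in cocycle_mod_p.items()}
    def value(e):
        return alpha.get(e, 0)
    is_cocycle = all(value((u, w)) - value((t, w)) + value((t, u)) == 0
                     for (t, u, w) in triangles)
    return alpha, is_cocycle
```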
Remark. We expect that p-torsion is extremely rare in 'real' data sets, since it is symptomatic of rather subtle topological phenomena. For instance, the simplest examples which exhibit 2-torsion are the nonorientable closed surfaces (such as the projective plane and the Klein bottle).
Harmonic smoothing
Given an integer cocycle α ∈ C^1(X; Z), or indeed a real cocycle α ∈ C^1(X; R), we wish to find the 'smoothest' real cocycle ᾱ ∈ C^1(X; R) cohomologous to α. It turns out that what we want is the harmonic cocycle representing the cohomology class [α].
We define smoothness. Each of the spaces C i (X; R) comes with a natural Euclidean metric:
‖f‖² = Σ_{a ∈ X_0} |f(a)|²,   ‖α‖² = Σ_{ab ∈ X_1} |α(ab)|²,   ‖A‖² = Σ_{abc ∈ X_2} |A(abc)|².
A circle-valued function θ is 'smooth' if its total variation across the edges of X is small. The terms |α(ab)|² capture the variation across individual edges; therefore what we must minimize is ‖ᾱ‖².
Proposition 3. Let α ∈ C^1(X; R). There is a unique solution ᾱ to the least-squares minimization problem
argmin_ᾱ { ‖ᾱ‖² | ∃ f ∈ C^0(X; R), ᾱ = α + d_0 f }.   (2)
Moreover, ᾱ is characterized by the equation d_0^* ᾱ = 0, where d_0^* is the adjoint of d_0 with respect to the inner products on C^0, C^1.
Proof. Note that if d_0^* ᾱ = 0 then for any f ∈ C^0 we have
‖ᾱ + d_0 f‖² = ‖ᾱ‖² + 2⟨ᾱ, d_0 f⟩ + ‖d_0 f‖² = ‖ᾱ‖² + 2⟨d_0^* ᾱ, f⟩ + ‖d_0 f‖² = ‖ᾱ‖² + ‖d_0 f‖²,
which implies that such an ᾱ must be the unique minimizer. For existence, note that d_0^* α + d_0^* d_0 f = 0 certainly has a solution f if Im(d_0^*) = Im(d_0^* d_0). But this is a standard fact in finite-dimensional linear algebra: Im(Aᵀ) = Im(Aᵀ A) for any real matrix A; this follows from the singular value decomposition, for instance.
Remark. It is customary to construct the Laplacian ∆ = d_1^* d_1 + d_0 d_0^*. The twin equations d_1 ᾱ = 0 and d_0^* ᾱ = 0 immediately imply (and conversely, can be deduced from) the single equation ∆ᾱ = 0; in other words, ᾱ is harmonic.
Integration
The least-squares problem in equation (2) can be solved using a standard algorithm such as LSQR [12]. By Proposition 2 we can use the solution parameter f to define the circular coordinate θ on the vertices of X. This works because the original cocycle α has integer coefficients.
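Combining Proposition 3 with this step, one possible implementation solves the least-squares problem d_0 f ≈ −α with SciPy's LSQR and reads off θ = f mod 1 on the vertices. The sketch below is an illustration, not the authors' MATLAB/jPlex code; it assumes the sparse d_0 built earlier and an integer cocycle α given as a vector over the edges in the same order as the rows of d_0.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def circular_coordinate(d0, alpha):
    """Harmonic smoothing and integration of an integer 1-cocycle.

    d0    : sparse |edges| x |vertices| coboundary matrix
    alpha : integer cocycle as a vector over the edges
    Returns (theta, alpha_bar): vertex coordinates theta = f mod Z in [0, 1)
    and the harmonic representative alpha_bar = alpha + d0 f.
    """
    alpha = np.asarray(alpha, dtype=float)
    f = lsqr(d0, -alpha)[0]        # least-squares solution of d0 f = -alpha
    alpha_bar = alpha + d0 @ f     # harmonic cocycle in the class [alpha]
    theta = np.mod(f, 1.0)         # circle-valued coordinate on the vertices
    return theta, alpha_bar
```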
More generally, if ᾱ is an arbitrary real cocycle such that [ᾱ] ∈ Im(H^1(X; Z) → H^1(X; R)), it is a straightforward matter to integrate ᾱ to a circle-valued function θ on the vertex set X_0. Suppose that X is connected (if not, each connected component can be treated separately) and pick a starting vertex x_0 and assign θ(x_0) = 0. One can use Dijkstra's algorithm to find shortest paths to each remaining vertex from x_0. When a new vertex b enters the structure via an edge ab, we assign θ(b) = θ(a) + ᾱ(ab) (or θ(a) − ᾱ(ba) if the edge is correctly identified as ba). If a vertex a is connected to x_0 by multiple paths then the different possible values of θ(a) differ by an integer; this is where we use the hypothesis that ᾱ is cohomologous to an integer cocycle.
EXPERIMENTS
Software
The following experiments were carried out using the Javabased jPlex simplicial complex software [15], with high-level scripting and numerical analysis in MATLAB. We ran a development version of jPlex to obtain explicit persistent cohomology cocycles. We expect to include the code in the next release of jPlex. We used Paige and Saunders' implementation of LSQR [11] for the least-squares problem in the harmonic smoothing step. Timings were determined using MATLAB's built-in 'tic' and 'toc' commands, and are included for relative comparison against each other.
General procedure
We tested our methods on several synthetic data sets with known topology, ranging from the humble circle itself to a genus-2 surface ('double torus'). Most of the examples were embedded in R^2 or R^3, with the exception of a sample from a complex projective curve (embedded in CP^2) and a synthetic image-like data set (embedded in R^120000).
In each case we selected vertices for the filtered simplicial complex: either the whole set, or a smaller well-distributed subset of 'landmarks' selected by iterative furthest-point sampling. We then built a Rips or witness complex, with maximum radius generally chosen to ensure around 10^5 simplices in the complex.
In most cases, we show the persistence diagram produced by the cocycle computation. The chosen value δ is marked on the diagonal, with its upper-left quadrant indicated in green lines. The persistent cocycles available at that parameter value are precisely those contained in that quadrant. Each of those cocycles then produces a circular coordinate.
There are various figures associated with each example. Most important are the correlation scatter plots: each scatter plot compares two circular coordinate functions. These may be functions produced by the computation ('inferred coordinates') or known parameters. These scatter plots are drawn in the unit square, which is of course really a torus S 1 × S 1 .
When the original data are embedded in R 2 or R 3 , we also display the circular coordinates directly on the data set, plotting each point in color according to its coordinate value interpreted on the standard hue-circle. This works less well in grayscale reproductions, of course.
Finally, in certain cases we plot coordinate values against frequency, as a histogram. This distributional information can sometimes be useful in the absence of other information.
Remark. When the goal is to infer the topology of a data set whose structure is unknown, we do not have any 'known parameters' available to us. We can still construct correlation scatter plots between pairs of inferred coordinates, and the distributional histograms for each coordinate individually. We exhort the reader to view the following examples through the lens of the topological inference problem: what structures can be distinguished using scatter plots and histograms (and persistence diagrams) alone?
Noisy circle
We begin with the circle itself, and its tautological circlevalued coordinate.
We picked 400 points distributed along the unit circle. We added a uniform random variable from [0.0, 0.4] to each coordinate. A Rips complex was constructed with maximal radius 0.5, resulting in 23475 simplices. The computation of cohomology finished in 237 seconds.
Parametrizing at 0.4 yielded a single coordinate function, which very closely reproduces the tautological angle function. Parametrizing at 0.14 yielded several possible cocycles. We selected one of those with low persistence; this produced a parametrization which 'snags' around a small gap in the data.
See Figure 1. The left panel in each row shows the histogram of coordinate values; the middle panel shows the correlation scatter plot against the known angle function; the right panel displays the coordinate using color. The high-persistence ('global') coordinate correlates with the angle function with topological degree 1. Variation in that coordinate is uniformly distributed, as seen in the histogram. In contrast, the low-persistence ('local') coordinate has a spiky distribution.
Trefoil torus knot
Another example with circle topology: see Figure 2. We picked 400 points distributed along the (2, 3) torus knot on a torus with radii 2.0 and 1.0. We jittered them by a uniform random variable from [0.0, 0.2] added to each coordinate. We generated a Rips complex up to radius 1.0, acquiring 36936 simplices. We computed persistent cohomology in 70 seconds. As expected, the inferred coordinate correlates strongly with the known parameter with topological degree 1. The histogram shows three 'bulges' corresponding to the three high-density regions of the sampled curve, which occur when the curve approaches the central axis of the torus.
Rotating cube
For a more elaborate data set with S^1-topology, we generated a sequence of 657 rendered images of a colorful cube rotating around one axis. Each image was regarded as a vector in the Euclidean space R^{200·200·3}. From this data we built a witness complex with 50 landmark points and constructed a single circular coordinate. Interpolating the resulting function linearly between the landmarks gave us coordinates for all the points in the family.
See Figure 3. The frequency distribution is comparatively smooth (by which we mean that there are no large spikes in the histogram), which indicates that the coordinate does not have large static regions. The correlation plot of the inferred coordinate against the original known sequence of the cube images shows a correlation with topological degree 1. We show the progression of the animation on an evenly-spaced sample of representative points around the circle.
Pair of circles
See Figure 4 for these two examples. Conjoined circles: we picked 400 points distributed along circles in the plane with radius 1 and with centres at (±1, 0). The points were then jittered by adding noise to each coordinate taken uniformly randomly from the interval [0.0, 0.3]. A Rips complex was constructed with maximal radius 0.5, resulting in 76763 simplices. The cohomology was computed in 378 seconds.
Disjoint circles: 400 points were distributed on circles of radius 1 centered around (±2, 0) in the plane. These points were subsequently disturbed by a uniform random variable from [0.0, 0.5]. We constructed a Rips complex with maximum radius 0.5, which gave us 45809 simplices. The cohomology computation finished in about 117 seconds.
In both cases, our method detects the two most natural circle-valued functions. The scatter plots appear very similar. In the conjoined case, there is some interference between the two circles, near their meeting point.
Torus
See Figure 5. We picked 400 points at random in the unit square, and then used a standard parametrization to map the points onto a torus with inner and outer radii 1.0 and 3.0. These were subsequently jittered by adding a uni- form random variable from [0.0, 0.2] to each coordinate. We constructed a Rips complex with maximal radius √ 3, resulting in 61522 simplices. The corresponding cohomology was computed in 209 seconds.
The two inferred coordinates in this (fairly typical) experimental run recover the original coordinates essentially perfectly: the first inferred coordinate correlates with the meridional coordinate with topological degree −1, while the second inferred coordinate correlates with the longitudinal coordinate with degree 1.
When the original coordinates are unavailable, the important figure is the inferred-versus-inferred scatter plot. In this case the scatter plot is fairly uniformly distributed over the entire coordinate square (i.e. torus). In other words, the two coordinates are decorrelated. This is slightly truer (and more clearly apparent in the scatter plot) for the two original coordinates. Contrast these with the corresponding scatter plots for a pair of circles (conjoined or disjoint).
Elliptic curve
See Figure 6. For fun, we repeated the previous experiment with a torus abstractly defined as the zero set of a homogeneous cubic polynomial in three variables, interpreted as a complex projective curve. We picked 400 points at random on S 5 ⊂ C 3 , subject to the cubic equation
x 2 y + y 2 z + z 2 x = 0.
To interpret these as points in CP 2 , we used the projectively invariant metric d(ξ, η) = cos −1 (|ξ · η|) for all pairs ξ, η ∈ S 5 . With this metric we built a Rips complex with maximal radius 0.15. The resulting complex had 44184 simplices, and the cohomology was computed in 56 seconds. We found two dominant coclasses that survived beyond radius 0.15, and we computed our parametrizations at the 0.15 mark.
The resulting correlation plot quite clearly exhibits the decorrelation which is characteristic of the torus.
Double torus
See Figure 7. We constructed a genus-2 surface by generating 1600 points on a torus with inner and outer radii 1.0 and 3.0; slicing off part of the data set by a plane at distance 3.7 from the axis of the torus, and reflecting the remaining points in that plane. The resulting data set has 3120 points. Out of these, we pick 400 landmark points, and construct a witness complex with maximal radius 0.6. The landmark set yields a covering radius rmax = 0.9982 and a complex with 70605 simplices. The computation took 748 seconds active computer time. We identified the four most significant cocycles.
Note that coordinates 1 and 4 are 'coupled' in the sense that they are supported over the same subtorus of the double torus. The scatter plot shows that the two coordinates appear to be completely decorrelated except for a large mass concentrated at a single point. This mass corresponds to the other subtorus, on which coordinates 1 and 4 are essentially constant. A similar discussion holds for coordinates 2 and 3.
The uncoupled coordinate pairs (1,2), (1,3), (2,4), (3,4) produce scatter plots reminiscent of two conjoined or disjoint circles.
[Figure: (a) Persistence diagram (left); first inferred coordinate (middle); second inferred coordinate (right). (b) Correlation scatter plots between the two original and two inferred coordinates.]
ACKNOWLEDGEMENTS
We are immensely grateful to Dmitriy Morozov: he has given us considerable assistance in implementing the algorithms in this paper. In particular we thank him for the persistent cocycle algorithm.
Thanks also to Jennifer Kloke for sharing her analysis of a visual image data set; this example did not make the present version of this paper. Finally, we thank Gunnar Carlsson, for his support and encouragement as leader of the topological data analysis research group at Stanford; and Robert Ghrist, as leader of the DARPA-funded project Sensor Topology and Minimal Planning (SToMP).
| 5,320 |
0905.3720
|
2952439423
|
Voting is a simple mechanism to aggregate the preferences of agents. Many voting rules have been shown to be NP-hard to manipulate. However, a number of recent theoretical results suggest that this complexity may only be in the worst-case since manipulation is often easy in practice. In this paper, we show that empirical studies are useful in improving our understanding of this issue. We demonstrate that there is a smooth transition in the probability that a coalition can elect a desired candidate using the veto rule as the size of the manipulating coalition increases. We show that a rescaled probability curve displays a simple and universal form independent of the size of the problem. We argue that manipulation of the veto rule is asymptotically easy for many independent and identically distributed votes even when the coalition of manipulators is critical in size. Based on this argument, we identify a situation in which manipulation is computationally hard. This is when votes are highly correlated and the election is "hung". We show, however, that even a single uncorrelated voter is enough to make manipulation easy again.
|
There have been a number of other recent theoretical results about the computational complexity of manipulating elections. For instance, Procaccia and Rosenschein give a simple greedy procedure that will find a manipulation of a scoring rule for any "junta" distribution of weighted votes in polynomial time with a probability of failure that is an inverse polynomial in @math @cite_6 . A "junta" distribution is concentrated on the hard instances.
|
{
"abstract": [
"Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Recently, computational complexity has been suggested as a means of precluding strategic behavior. Previous studies have shown that some voting protocols are hard to manipulate, but used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average-case. For this purpose, we augment the existing theory of average-case complexity with some new concepts. In particular, we consider elections distributed with respect to junta distributions, which concentrate on hard instances. We use our techniques to prove that scoring protocols are susceptible to manipulation by coalitions, when the number of candidates is constant."
],
"cite_N": [
"@cite_6"
],
"mid": [
"1493942848"
]
}
|
Where are the really hard manipulation problems? The phase transition in manipulating the veto rule *
|
The Gibbard-Satterthwaite theorem proves that, under some simple assumptions, most voting rules are manipulable. That is, it may pay for an agent not to report their preferences truthfully. One possible escape from this result was proposed by Bartholdi, Tovey and Trick [Bartholdi et al., 1989]. Whilst a manipulation may exist, perhaps it is computationally too difficult to find. Many results have subsequently been proven showing that various voting rules are NP-hard to manipulate under different assumptions including: an unbounded number of candidates; a small number of candidates but weighted votes; and uncertainty in the distribution of votes. See, for instance, [Bartholdi et al., 1989; Bartholdi and Orlin, 1991; Conitzer et al., 2007]. There is, however, increasing concern that worst-case results like these may not reflect the difficulty of manipulation in practice. Indeed, a number of recent theoretical results suggest that manipulation may often be computationally easy [Conitzer and Sandholm, 2006; Procaccia and Rosenschein, 2007b; Xia and Conitzer, 2008a; Friedgut et al., 2008; Xia and Conitzer, 2008b].
* NICTA is funded by the Australian Government through the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.
In this paper we show that, in addition to attacking this question theoretically, we can profitably study it empirically. There are several reasons why empirical analysis is useful. First, theoretical analysis is often asymptotic so does not show the size of hidden constants. In addition, elections are typically bounded in size. Can we be sure that asymptotic behaviour is relevant for the finite sized electorates met in practice? Second, theoretical analysis is often restricted to particular distributions (e.g. independent and identically distributed votes). Manipulation may be very different in practice due to correlations between votes. For instance, if all preferences are single-peaked then there are voting rules which cannot be manipulated. It is in the best interests of all agents to state their true preferences. Third, many of these theoretical results about the easiness of manipulation have been hard won and are limited in their scope. For instance, Friedgut et al.
were not able to extend their result beyond three candidates [Friedgut et al., 2008]. An empirical study may quickly suggest if the result extends to more candidates. Finally, empirical studies may suggest new avenues for theoretical study. For example, the experiments reported here suggest a simple and universal form for the probability that a coalition is able to elect a desired candidate. It would be interesting to try to derive this form theoretically.
Finding manipulations
We focus on the veto rule. This is a scoring rule in which each agent gets to cast a veto against one candidate. The candidate with the fewest vetoes wins. We suppose that tie-breaking is in favor of the manipulators. However, it is easy to relax this assumption. There are several reasons why we start this investigation into the complexity of manipulation with the veto rule. First, the veto rule is very simple to reason about. This can be contrasted with other voting rules that are computationally hard to manipulate. For example, the STV rule is NP-hard to manipulate [Bartholdi and Orlin, 1991] but this complexity appears to come from reasoning about what happens between the different rounds. Second, the veto rule is on the borderline of tractability since constructive manipulation of the rule by a coalition of weighted agents is NP-hard but destructive manipulation is polynomial [Conitzer et al., 2007]. Third, as the next theorem shows, number partitioning algorithms can be used to compute a successful manipulation of the veto rule. More precisely, manipulation of an election with 3 candidates and weighted votes (which is NP-hard [Conitzer et al., 2007]) can be directly reduced to 2-way number partitioning. We therefore compute manipulations in our experiments using the efficient CKK algorithm [Korf, 1995].
Theorem 1 There exists a successful manipulation of an election with 3 candidates by a weighted coalition using the veto rule iff there exists a partitioning of W ∪ {|a − b|} into two bags such that the difference between their two sums is less than or equal to a + b − 2c + ∑_{i∈W} i, where W is the multiset of weights of the manipulating coalition, a, b and c are the weights of vetoes assigned to the three candidates by the non-manipulators and the manipulators wish the candidate with weight c to win.
Proof: It never helps a coalition manipulating the veto rule to veto the candidate that they wish to win. The coalition does, however, need to decide how to divide their vetoes between the candidates that they wish to lose. Consider the case a ≥ b. Suppose the partition has weights w − ∆/2 and w + ∆/2, where 2w = ∑_{i∈W∪{|a−b|}} i and ∆ is the difference between the two sums. Such a partition of vetoes is a successful manipulation iff the winning candidate has no more vetoes than the next best candidate. That is, c ≤ b + (w − ∆/2). Hence ∆ ≤ 2w + 2b − 2c = (a − b) + ∑_{i∈W} i + 2b − 2c = (a + b − 2c) + ∑_{i∈W} i. In the other case, a < b and ∆ ≤ (b + a − 2c) + ∑_{i∈W} i. Thus ∆ ≤ a + b − 2c + ∑_{i∈W} i. □
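Theorem 1 turns the manipulability question into a pure partitioning question. The sketch below (illustrative names and toy numbers, not the authors' code) decides manipulability with a brute-force exact partitioner; the CKK sketch above could be dropped in instead for larger coalitions.

```python
from itertools import combinations

def min_partition_diff(nums):
    """Exact minimum 2-way partition difference by enumerating all subsets."""
    total = sum(nums)
    return min(abs(total - 2 * sum(s))
               for r in range(len(nums) + 1)
               for s in combinations(nums, r))

def veto_manipulable(W, a, b, c):
    """Theorem 1: with coalition veto weights W and non-manipulator veto
    totals a, b (candidates to beat) and c (favoured candidate), a successful
    manipulation exists iff the best partition of W + [|a - b|] has
    difference at most a + b - 2*c + sum(W)."""
    return min_partition_diff(list(W) + [abs(a - b)]) <= a + b - 2 * c + sum(W)

# Toy instance: coalition weights [3, 5, 2]; the favoured candidate currently
# holds 7 units of vetoes, the other two hold 4 and 6.
print(veto_manipulable([3, 5, 2], a=4, b=6, c=7))   # True
```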
Similar arguments can be given to show that the manipulation of a veto election with p candidates can be reduced to finding a (p − 1)-way partition of numbers, and that manipulation of any scoring rule with 3 candidates and weighted votes can be reduced to 2-way number partitioning. However, manipulating elections with more than 3 candidates and scoring rules other than veto or plurality appears to require other computational approaches.
Uniform votes
We consider the case that the n agents veto uniformly at random one of the 3 possible candidates, and vetoes carry weights drawn uniformly from (0, k]. When the coalition is small in size, it has too little weight to be able to change the result. On the other hand, when the coalition is large in size, it is sure to be able to make a favored candidate win. There is thus a transition in the manipulability of the problem as the coalition size increases (see Figure 1).
Based on [Procaccia and Rosenschein, 2007a; Xia and Conitzer, 2008a], we expect the critical coalition size to increase as √n. In Figure 2, we see that the phase transition displays a simple and universal form when plotted against m/√n. The phase transition appears to be smooth, with the probability varying slowly and not approaching a step function as problem size increases. We obtained a good fit with 1 − (2/3)e^(−m/√n). Other smooth phase transitions have been seen with 2-coloring [Achlioptas, 1999], 1-in-2 satisfiability and Not-All-Equal 2-satisfiability [Walsh, 2002]. It is interesting to note that all these decision problems are polynomial.
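To illustrate the kind of experiment behind these curves (a minimal Monte Carlo sketch, not the authors' code; the election sizes and trial count are illustrative and agreement with the fit is only approximate), one can estimate the manipulability probability for uniformly random weighted vetoes and compare it with 1 − (2/3)e^(−m/√n):

```python
import math
import random
from itertools import combinations

def manipulable(W, a, b, c):
    """Theorem 1 check via exhaustive subset enumeration (fine for small m)."""
    nums = list(W) + [abs(a - b)]
    total = sum(nums)
    best = min(abs(total - 2 * sum(s))
               for r in range(len(nums) + 1)
               for s in combinations(nums, r))
    return best <= a + b - 2 * c + sum(W)

def estimate_prob(n, m, k=2**8, trials=200):
    """Fraction of random elections (n weighted vetoes cast uniformly over 3
    candidates) that a coalition of m weighted vetoes can manipulate."""
    wins = 0
    for _ in range(trials):
        a = b = c = 0.0
        for _ in range(n):              # non-manipulators veto at random
            w = random.uniform(0.0, k)
            r = random.randrange(3)
            if r == 0:
                a += w
            elif r == 1:
                b += w
            else:
                c += w                  # c: the coalition's favoured candidate
        W = [random.uniform(0.0, k) for _ in range(m)]
        wins += manipulable(W, a, b, c)
    return wins / trials

n, m = 64, 12
print(estimate_prob(n, m), 1 - (2 / 3) * math.exp(-m / math.sqrt(n)))
```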
The theoretical results mentioned earlier leave open how hard it is to compute whether a manipulation is possible when the coalition size is critical. Figure 3 displays the computational cost to find a manipulation (or prove none exists) using the efficient CKK algorithm. Even in the critical region where problems may or may not be manipulable, it is easy to compute whether the problem is manipulable. All problems are solved in a few branches. This contrasts with phase transition behaviour in problems like satisfiability [Cheeseman et al., 1991; Mitchell et al., 1992; Gent and Walsh, 1994], constraint satisfaction [Gent et al., 1995], number partitioning [Gent and Walsh, 1996a] and the traveling salesman problem [Gent and Walsh, 1996b], where the hardest problems occur around the phase transition.
Why hard problems are rare
Based on our reduction of manipulation problems to number partitioning, we give a heuristic argument for why hard manipulation problems become vanishingly rare as n → ∞ and m = Θ(√n). The basic idea is simple: by the time the coalition is large enough to be able to change the result, the variance in scores between the candidates is likely to be so large that computing a successful manipulation or proving none is possible will be easy.
Suppose that the manipulators want candidates A and B to lose so that C wins, and that the non-manipulators have cast vetoes of weight a, b and c for A, B and C respectively. Without loss of generality we suppose that a ≥ b. There are three cases to consider. In the first case, a ≥ c and b ≥ c. It is then easy for the manipulators to make C win since C wins whether they veto A or B. In the second case, a ≥ c > b. Again, it is easy for the manipulators to decide if they can make C win. They all veto B. There is a successful manipulation iff C now wins. In the third case, a < c and b < c. The manipulators must partition their m vetoes between A and B so that both A and B end up with at least as many vetoes as C. Let d be the combined deficit in weight of A and B relative to C. That is,
d = (c − a) + (c − b) = 2c − a − b.
We can model d as the sum of n random variables drawn uniformly with probability 1/3 from [0, 2k] and with probability 2/3 from [−k, 0]. These variables have mean 0 and variance 2k²/3. By the Central Limit Theorem, d tends to a normal distribution with mean 0 and variance s² = 2nk²/3. For a manipulation to be possible, d must be less than w, the sum of the weights of the vetoes of the manipulators. By the Central Limit Theorem, w also tends to a normal distribution with mean µ = mk/2 and variance σ² = 2mk²/3. A simple heuristic argument due to [Karmarkar et al., 1986], also based on the Central Limit Theorem, upper bounds the optimal partition difference of m numbers from (0, k] by O(k√m / 2^m). In addition, based on the phase transition in number partitioning, we expect partitioning problems to be easy unless log2(k) = Θ(m). Combining these two observations, we expect hard manipulation problems when 0 ≤ w − d ≤ α√m for some constant α. The probability of this occurring is:
$$\int_0^\infty \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \int_{x-\alpha\sqrt{m}}^{x} \frac{1}{\sqrt{2\pi}\,s}\, e^{-\frac{y^2}{2s^2}} \, dy \, dx$$
By substituting for s, µ and σ, we get:
$$\int_0^\infty \frac{1}{\sqrt{4\pi m k^2/3}}\, e^{-\frac{(x-mk/2)^2}{4mk^2/3}} \int_{x-\alpha\sqrt{m}}^{x} \frac{1}{\sqrt{4\pi n k^2/3}}\, e^{-\frac{y^2}{4nk^2/3}} \, dy \, dx$$
For n → ∞, this tends to:
$$\int_0^\infty \frac{1}{\sqrt{4\pi m k^2/3}}\, e^{-\frac{(x-mk/2)^2}{4mk^2/3}} \cdot \frac{\alpha\sqrt{m}}{\sqrt{4\pi n k^2/3}}\, e^{-\frac{x^2}{4nk^2/3}} \, dx$$
As e^(−z) ≤ 1 for z > 0, this is upper bounded by:
$$\frac{\alpha\sqrt{m}}{\sqrt{4\pi n k^2/3}} \int_0^\infty \frac{1}{\sqrt{4\pi m k^2/3}}\, e^{-\frac{(x-mk/2)^2}{4mk^2/3}} \, dx$$
Since the integral is bounded by 1, m = Θ(√n) and log2(k) = Θ(m), this upper bound varies as O(1/(√m 2^m)). Thus, we expect hard instances of manipulation problems to be exponentially rare. Since even a brute force manipulation algorithm takes O(2^m) time in the worst-case, we do not expect the hard instances to have a significant impact on the average-case as n (and thus m) grows. We stress this is only a heuristic argument. It makes assumptions about the complexity of manipulation problems (in particular that hard instances should lie within the narrow interval 0 ≤ w − d ≤ α√m). These assumptions are currently only supported by empirical observation and informal argument. However, the experimental results reported in Figure 3 support these conclusions.
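As a rough numerical sanity check (illustrative parameter values only), the closed-form upper bound α√m / √(4πnk²/3) obtained just before the final substitutions can be evaluated directly:

```python
import math

def hard_prob_bound(n, m, k, alpha=1.0):
    """Upper bound alpha*sqrt(m) / sqrt(4*pi*n*k**2/3) on the probability of
    landing in the narrow window 0 <= w - d <= alpha*sqrt(m); the remaining
    Gaussian integral over x is at most 1."""
    return alpha * math.sqrt(m) / math.sqrt(4 * math.pi * n * k**2 / 3)

n = 10_000
m = int(math.sqrt(n))       # m = Theta(sqrt(n))
k = 2.0 ** m                # log2(k) = Theta(m)
print(hard_prob_bound(n, m, k))   # a vanishingly small number
```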
Varying weights
The theoretical analyses of manipulation in [Procaccia and Rosenschein, 2007a; Xia and Conitzer, 2008a] suggest that the probability of an election being manipulable is largely independent of k, the size of the weights attached to the vetoes. Figure 4 demonstrates that this indeed appears to be the case in practice. When weights are varied in size from 2^8 to 2^16, the probability does not appear to change. In fact, the probability curve fits the same simple and universal form plotted in Figure 2. We also observed that the cost of computing a manipulation or proving that none is possible did not change as the weights were varied in size.
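This k-independence is also what the reduction predicts: multiplying every weight by a constant scales both sides of the Theorem 1 inequality by the same factor, so the manipulability decision is unchanged. As a hedged usage example (it assumes the estimate_prob sketch given earlier is in scope), the two estimates below should agree up to sampling noise:

```python
# Assumes estimate_prob from the Monte Carlo sketch in the uniform-votes section.
# Drawing weights from (0, 2**16] is just a rescaling of drawing them from
# (0, 2**8], so both estimates target the same underlying probability.
print(estimate_prob(64, 12, k=2**8), estimate_prob(64, 12, k=2**16))
```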
Normally distributed votes
What happens with other distributions of votes? The theoretical analyses of manipulation in [Procaccia and Rosenschein, 2007a; Xia and Conitzer, 2008a] suggest that there is a critical coalition size that increases as Θ(√n) for many types of independent and identically distributed random votes. Similarly, our heuristic argument about why hard manipulation problems are vanishingly rare depends on application of the Central Limit Theorem. It therefore works with other types of independent and identically distributed random votes. [Figure 5: The probability that a coalition of m agents can elect a chosen candidate where n agents have already voted. Vetoes are weighted and drawn from a normal distribution with mean 2^8 and standard deviation 2^7. The x-axis is scaled by √n.]
We shall therefore consider another type of independent and identically distributed vote. In particular, we study an election in which weights are independently drawn from a normal distribution. Figure 5 shows that there is again a smooth phase transition in manipulability. We also plotted Figure 5 on top of Figures 2 and 4. All curves appear to fit the same simple and universal form. As with uniform weights, the computational cost of deciding if an election is manipulable was small even when the coalition size was critical. Finally, we varied the parameters of the normal distribution. The probability of electing a chosen candidate, as well as the cost of computing a manipulation, did not appear to depend on the mean or variance of the distribution.
Correlated votes
We conjecture that one place to find hard manipulation problems is where votes are more correlated. For example, consider a "hung" election where all n agents veto the candidate that the manipulators wish to win, but the m manipulators have exactly twice the weight of vetoes of the n agents. This election is finely balanced. The favored candidate of the manipulators wins iff the manipulators perfectly partition their vetoes between the two candidates that they wish to lose. In Figure 6, we plot the probability that the m manipulators can make their preferred candidate win in such a "hung" election as we vary the size of their weights k. Similar to number partitioning, we see a rapid transition in manipulability around log2(k)/m ≈ 1. In Figure 7, we observe that there is a rapid increase in the computational complexity of computing a manipulation around this point. [Figure 6: Manipulation of an election where votes are highly correlated and the result is "hung". We plot the probability that a coalition of m agents can elect a chosen candidate. Vetoes of the manipulators are weighted and uniformly drawn from (0, k], the other agents have all vetoed the candidate that the manipulators wish to win, and the sum of the weights of the manipulators is twice that of the non-manipulators.]
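In this hung setting the manipulation question collapses to asking whether the coalition's weights admit a perfect 2-way partition. A self-contained sketch (an illustrative construction with integer weights, not necessarily the exact experimental setup) can check this with a subset-sum bitset:

```python
import random

def perfect_partition_exists(weights):
    """Can the integer weights be split into two bags of equal total?
    Uses a bitset of reachable subset sums (pseudo-polynomial in the total)."""
    total = sum(weights)
    if total % 2:
        return False
    reachable = 1                      # bit i set <=> some subset sums to i
    for w in weights:
        reachable |= reachable << w
    return bool((reachable >> (total // 2)) & 1)

# 'Hung' election: every non-manipulator vetoes the favoured candidate C and
# the coalition's total weight is exactly twice theirs, so a = b = 0 and
# c = sum(W) / 2.  By Theorem 1 the coalition succeeds iff the minimum
# partition difference of W is 0, i.e. iff a perfect partition exists.
W = [random.randint(1, 2**20) for _ in range(20)]
print("manipulable:", perfect_partition_exists(W))
```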
What happens when the votes are less correlated? We consider an election which is perfectly hung as before except for one agent who votes at random between the three candidates. In Figure 8, we plot the cost of computing a manipulation as the weight of this single random veto increases. Even one uncorrelated vote is enough to make manipulation easy if it has the same magnitude in weight as the vetoes of the manipulators. This suggests that we will only find hard manipulation problems when votes are highly correlated.
Other related work
There have been a number of other recent theoretical results about the computational complexity of manipulating elections. For instance, Procaccia and Rosenschein give a simple greedy procedure that will find a manipulation of a scoring rule for any "junta" distribution of weighted votes in polynomial time with a probability of failure that is an inverse polynomial in n [Procaccia and Rosenschein, 2007b]. A "junta" distribution is concentrated on the hard instances.
As a second example, Friedgut, Kalai and Nisan prove that if the voting rule is neutral and far from dictatorial and there are 3 candidates then there exists an agent for whom a random manipulation succeeds with probability Ω(1/n) [Friedgut et al., 2008]. Xia and Conitzer showed that, starting from different assumptions, a random manipulation would succeed with probability Ω(1/n) for 3 or more candidates for STV, for 4 or more candidates for any scoring rule and for 5 or more candidates for Copeland [Xia and Conitzer, 2008b].
Coleman and Teague provide polynomial algorithms to compute a manipulation for the STV rule when either the number of voters or the number of candidates is fixed [Coleman and Teague, 2007]. They also conducted an empirical study which demonstrates that only relatively small coalitions are needed to change the elimination order of the STV rule. They observe that most uniform and random elections are not trivially manipulable using a simple greedy heuristic. [Figure 7: The cost to decide if a hung election can be manipulated. We plot the cost for the CKK algorithm to decide if a coalition of m agents can manipulate a veto election. Vetoes of the manipulators are weighted and uniformly drawn from (0, k], the other agents have all vetoed the candidate that the manipulators wish to win, and the sum of the weights of the manipulators is twice that of the non-manipulators.] Finally, similar phenomena have been observed in the phase transition for the Hamiltonian cycle problem [Frank et al., 1998; Vandegriend and Culberson, 1998]. If the number of edges is small, there is likely to be a node of degree smaller than 2. There cannot therefore be any Hamiltonian cycle. By the time that there are enough edges for all nodes to be of degree 2, there are likely to be many possible Hamiltonian cycles and even a simple heuristic can find one. Thus, the phase transition in the existence of a Hamiltonian cycle is not associated with hard instances of the problem. The behavior seen here is similar. By the time the coalition is large enough to manipulate the result, the variance in scores between the candidates is likely to be so large that computing a successful manipulation or proving none is possible is easy.
Conclusions
We have studied whether computational complexity is a barrier to manipulation of the veto rule. We showed that there is a smooth transition in the probability that a coalition can elect a desired candidate as the size of the manipulating coalition is varied. We demonstrated that a rescaled probability curve displays a simple universal form independent of problem size. Unlike phase transitions for other NP-complete problems, hard problems are not associated with this transition. Finally, we studied the impact of correlation between votes. We showed that manipulation is hard when votes are highly correlated and the election is "hung". However, even one uncorrelated voter was enough to make manipulation easy again.
What lessons can be learnt from this study? First, there appears to be a universal form for the probability that a coalition can manipulate the result. Can we derive this theoretically? Second, whilst we have focused on the veto rule, similar behavior is likely with other voting rules. It would, for instance, be interesting to study a more complex rule like STV which is NP-hard to manipulate without weights. Third, is there a connection between the smoothness of the phase transition and problem hardness? Sharp phase transitions like that for satisfiability are associated with hard decision problems, whilst smooth transitions are associated with easy instances of NP-hard problems and with polynomial problems like 2-colorability. Fourth, these results demonstrate that empirical studies improve our understanding of manipulation. It would be interesting to consider similar studies for related problems like preference elicitation [Walsh, 2007; Walsh, 2008; Pini et al., 2008]. [Figure 8: The impact of one random voter on the manipulability of a hung election. We plot the cost for the CKK algorithm to decide if a coalition of m agents can manipulate a veto election. Vetoes of the manipulators are weighted and uniformly drawn from (0, k], the non-manipulating agents have all vetoed the candidate that the manipulators wish to win, and the sum of the weights of the manipulators is twice that of the non-manipulators, except for one random non-manipulating agent whose weight is uniformly drawn from (0, k']. When the veto of the one random voter has the same weight as the other voters, it is computationally easy to decide if the election can be manipulated.]
| 3,536 |
0905.3720
|
2952439423
|
Voting is a simple mechanism to aggregate the preferences of agents. Many voting rules have been shown to be NP-hard to manipulate. However, a number of recent theoretical results suggest that this complexity may only be in the worst-case since manipulation is often easy in practice. In this paper, we show that empirical studies are useful in improving our understanding of this issue. We demonstrate that there is a smooth transition in the probability that a coalition can elect a desired candidate using the veto rule as the size of the manipulating coalition increases. We show that a rescaled probability curve displays a simple and universal form independent of the size of the problem. We argue that manipulation of the veto rule is asymptotically easy for many independent and identically distributed votes even when the coalition of manipulators is critical in size. Based on this argument, we identify a situation in which manipulation is computationally hard. This is when votes are highly correlated and the election is "hung". We show, however, that even a single uncorrelated voter is enough to make manipulation easy again.
|
Coleman and Teague provide polynomial algorithms to compute a manipulation for the STV rule when either the number of voters or the number of candidates is fixed @cite_7 . They also conducted an empirical study which demonstrates that only relatively small coalitions are needed to change the elimination order of the STV rule. They observe that most uniform and random elections are not trivially manipulable using a simple greedy heuristic.
|
{
"abstract": [
"We study the manipulation of voting schemes, where a voter lies about their preferences in the hope of improving the election's outcome. All voting schemes are potentially manipulable. However, some, such as the Single Transferable Vote (STV) scheme used in Australian elections, are resistant to manipulation because it is NP-hard to compute the manipulating vote(s). We concentrate on STV and some natural generalisations of it called Scoring Elimination Protocols. We show that the hardness result for STV is true only if both the number of voters and the number of candidates are unbounded---we provide algorithms for a manipulation if either of these is fixed. This means that manipulation would not be hard in practice when either number is small. Next we show that the weighted version of the manipulation problem is NP-hard for all Scoring Elimination Protocols except one, which we provide an algorithm for manipulating. Finally we experimentally test a heuristic for solving the manipulation problem and conclude that it would not usually be effective."
],
"cite_N": [
"@cite_7"
],
"mid": [
"1618180659"
]
}
|
Where are the really hard manipulation problems? The phase transition in manipulating the veto rule *
|
| 3,536 |
0905.3720
|
2952439423
|
Voting is a simple mechanism to aggregate the preferences of agents. Many voting rules have been shown to be NP-hard to manipulate. However, a number of recent theoretical results suggest that this complexity may only be in the worst-case since manipulation is often easy in practice. In this paper, we show that empirical studies are useful in improving our understanding of this issue. We demonstrate that there is a smooth transition in the probability that a coalition can elect a desired candidate using the veto rule as the size of the manipulating coalition increases. We show that a rescaled probability curve displays a simple and universal form independent of the size of the problem. We argue that manipulation of the veto rule is asymptotically easy for many independent and identically distributed votes even when the coalition of manipulators is critical in size. Based on this argument, we identify a situation in which manipulation is computationally hard. This is when votes are highly correlated and the election is "hung". We show, however, that even a single uncorrelated voter is enough to make manipulation easy again.
|
Finally, similar phenomena have been observed in the phase transition for the Hamiltonian cycle problem @cite_22 @cite_11 . If the number of edges is small, there is likely to be a node of degree smaller than 2. There cannot therefore be any Hamiltonian cycle. By the time that there are enough edges for all nodes to be of degree 2, there are likely to be many possible Hamiltonian cycles and even a simple heuristic can find one. Thus, the phase transition in the existence of a Hamiltonian cycle is not associated with hard instances of the problem. The behavior seen here is similar. By the time the coalition is large enough to manipulate the result, the variance in scores between the candidates is likely to be so large that computing a successful manipulation or proving none is possible is easy.
|
{
"abstract": [
"Asymptotic and finite size parameters for phase transitions : Hamiltonian circuit as a case study",
"Using an improved backtrack algorithm with sophisticated pruning techniques, we revise previous observations correlating a high frequency of hard to solve Hamiltonian cycle instances with the Gn,m phase transition between Hamiltonicity and non-Hamiltonicity. Instead all tested graphs of 100 to 1500 vertices are easily solved. When we artificially restrict the degree sequence with a bounded maximum degree, although there is some increase in difficulty, the frequency of hard graphs is still low. When we consider more regular graphs based on a generalization of knight's tours, we observe frequent instances of really hard graphs, but on these the average degree is bounded by a constant. We design a set of graphs with a feature our algorithm is unable to detect and so are very hard for our algorithm, but in these we can vary the average degree from O(1) to O(n). We have so far found no class of graphs correlated with the Gn,m phase transition which asymptotically produces a high frequency of hard instances."
],
"cite_N": [
"@cite_22",
"@cite_11"
],
"mid": [
"2045652154",
"1938603953"
]
}
|
Where are the really hard manipulation problems? The phase transition in manipulating the veto rule *
|
The Gibbard-Satterthwaite theorem proves that, under some simple assumptions, most voting rules are manipulable. That is, it may pay for an agent not to report their preferences truthfully. One possible escape from this result was proposed by Bartholdi, Tovey and Trick [Bartholdi et al., 1989]. Whilst a manipulation may exist, perhaps it is computationally too difficult to find. Many results have subsequently been proven showing that various voting rules are NP-hard to manipulate under different assumptions including: an unbounded number of candidates; a small number of candidates but weighted votes; and uncertainty in the distribution of votes. See, for instance, [Bartholdi et al., 1989;Bartholdi and Orlin, 1991; * NICTA is funded by the Australian Government through the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program. Conitzer et al., 2007]. There is, however, increasing concern that worst-case results like these may not reflect the difficulty of manipulation in practice. Indeed, a number of recent theoretical results suggest that manipulation may often be computationally easy [Conitzer and Sandholm, 2006;Procaccia and Rosenschein, 2007b;Xia and Conitzer, 2008a;Friedgut et al., 2008;Xia and Conitzer, 2008b].
In this paper we show that, in addition to attacking this question theoretically, we can profitably study it empirically. There are several reasons why empirical analysis is useful. First, theoretical analysis is often asymptotic so does not show the size of hidden constants. In addition, elections are typically bounded in size. Can we be sure that asymptotic behaviour is relevant for the finite sized electorates met in practice? Second, theoretical analysis is often restricted to particular distributions (e.g. independent and identically distributed votes). Manipulation may be very different in practice due to correlations between votes. For instance, if all preferences are single-peaked then there are voting rules which cannot be manipulated. It is in the best interests of all agents to state their true preferences. Third, many of these theoretical results about the easiness of manipulation have been hard won and are limited in their scope. For instance, Friedgut et al.
were not able to extend their result beyond three candidates [Friedgut et al., 2008]. An empirical study may quickly suggest if the result extends to more candidates. Finally, empirical studies may suggest new avenues for theoretical study. For example, the experiments reported here suggest a simple and universal form for the probability that a coalition is able to elect a desired candidate. It would be interesting to try to derive this form theoretically.
Finding manipulations
We focus on the veto rule. This is a scoring rule in which each agent gets to cast a veto against one candidate. The candidate with the fewest vetoes wins. We suppose that tie-breaking is in favor of the manipulators. However, it is easy to relax this assumption. There are several reason why we start this investigation into the complexity of manipulation with the veto rule. First, the veto rule is very simple to reason about. This can be contrasted with other voting rules that are computationally hard to manipulate. For example, the STV rule is NP-hard to manipulate [Bartholdi and Orlin, 1991] but this complexity appears to come from reasoning about what happens between the different rounds. Second, the veto rule is on the borderline of tractability since constructive manipulation of the rule by a coalition of weighted agents is NP-hard but destructive manipulation is polynomial [Conitzer et al., 2007]. Third, as the next theorem shows, number partitioning algorithms can be used to compute a successful manipulation of the veto rule. More precisely, manipulation of an election with 3 candidates and weighted votes (which is NP-hard [Conitzer et al., 2007]) can be directly reduced to 2-way number partitioning. We therefore compute manipulations in our experiments using the efficient CKK algorithm [Korf, 1995].
Theorem 1 There exists a successful manipulation of an election with 3 candidates by a weighted coalition using the veto rule iff there exists a partitioning of W ∪ {|a − b|} into two bags such that the difference between their two sums is less than or equal to a+b−2c+ i∈W i where W is the multiset of weights of the manipulating coalition, a, b and c are the weights of vetoes assigned to the three candidates by the non-manipulators and the manipulators wish the candidate with weight c to win.
Proof: It never helps a coalition manipulating the veto rule to veto the candidate that they wish to win. The coalition does, however, need to decide how to divide their vetoes between the candidates that they wish to lose. Consider the case a ≥ b. Suppose the partition has weights w − ∆/2 and w + ∆/2 where 2w = i∈W ∪{|a−b|} i and ∆ is the difference between the two sums. The same partition of vetoes is a successful manipulation iff the winning candidate has no more vetoes than the next best candidate. That is,
c ≤ b + (w − ∆/2). Hence ∆ ≤ 2w + 2b − 2c = (a − b) + 2b − 2c + i∈W i = (a + b − 2c) + 2 i∈W i. In the other case, a < b and ∆ ≤ (b + a − 2c) + i∈W i. Thus ∆ ≤ a + b − 2c + i∈W i. 2
Similar arguments can be given to show that the manipulation of a veto election of p candidates can be reduced to finding a p − 1-way partition of numbers, and that manipulation of any scoring rule with 3 candidates and weighted votes can be reduced to 2-way number partitioning. However, manipulating elections with greater than 3 candidates and scoring rules other than veto or plurality appears to require other computational approaches.
Uniform votes
We consider the case that the n agents veto uniformly at random one of the 3 possible candidates, and vetoes carry weights drawn uniformly from (0, k]. When the coalition is small in size, it has too little weight to be able to change the result. On the other hand, when the coalition is large in size, it is sure to be able to make a favored candidate win. There is thus a transition in the manipulability of the problem as the coalition size increases (see Figure 1).
Based on [Procaccia and Rosenschein, 2007a;Xia and Conitzer, 2008a], we expect the critical coalition size to increase as √ n. In Figure 2, we see that the phase transition displays a simple and universal form when plotted against m/ √ n. The phase transition appears to be smooth, with the probability varying slowly and not approaching a step function as problem size increases. We obtained a good fit with 1− 2 3 e −m/ √ n . Other smooth phase transitions have been seen with 2-coloring [Achlioptas, 1999], 1-in-2 satisfiability and Not-All-Equal 2-satisfiability [Walsh, 2002]. It is interesting to note that all these decision problems are polynomial.
The theoretical results mentioned earlier leave open how hard it is to compute whether a manipulation is possible when the coalition size is critical. Figure 3 displays the computational cost to find a manipulation (or prove none exists) using the efficient CKK algorithm. Even in the critical region where problems may or may not be manipulable, it is easy to compute whether the problem is manipulable. All problems are solved in a few branches. This contrasts with phase transition behaviour in problems like satisfiability [Cheeseman et al., 1991;Mitchell et al., 1992; The x-axis is scaled by 1/ √ n. Gent and Walsh, 1994], constraint satisfaction [Gent et al., 1995], number partitioning [Gent and Walsh, 1996a;] and the traveling salesman problem [Gent and Walsh, 1996b] where the hardest problems occur around the phase transition.
Why hard problems are rare
Based on our reduction of manipulation problems to number partitioning, we give a heuristic argument why hard manipulation problems become vanishing rare as n ; ∞ and m = Θ( √ n). The basic idea is simple: by the time the coalition is large enough to be able to change the result, the variance in scores between the candidates is likely to be so large that computing a successful manipulation or proving none is possible will be easy.
Suppose that the manipulators want candidates A and B to lose so that C wins, and that the non-manipulators have cast vetoes of weight a, b and c for A, B and C respectively. Without loss of generality we suppose that a ≥ b. There are three cases to consider. In the first case, a ≥ c and b ≥ c. It is then easy for the manipulators to make C win since C wins whether they veto A or B. In the second case, a ≥ c > b. Again, it is easy for the manipulators to decide if they can make C win. They all veto B. There is a successful manipulation iff C now wins. In the third case, a < c and b < c. The manipulators must partition their m vetoes between A and B so that the total vetoes received by A and B exceeds those for C. Let d be the deficit in weight between A and C and between B and C. That is,
d = (c − a) + (c − b) = 2c − a − b.
We can model d as the sum of n random variables drawn uniformly with probability 1/3 from [0, 2k] and with probability 2/3 from [−k, 0]. These variables have mean 0 and variance 2k 2 /3. By the Central Limit Theorem, d tends to a normal distribution with mean 0, and variance s 2 = 2nk 2 /3. For a manipulation to be possible, d must be less than w, the sum of the weights of the vetoes of the manipulators. By the Central Limit Theorem, w also tends to a normal distribution with mean µ = mk/2, and variance σ 2 = 2mk 2 /3. A simple heuristic argument due to [Karmarkar et al., 1986] and also based on the Central Limit Theorem upper bounds the optimal partition difference of m numbers from (0, k] by O(k √ m/2 m ). In addition, based on the phase transition in number partitioning ], we expect partitioning problems to be easy unless log 2 (k) = Θ(m). Combining these two observations, we expect hard manipulation problems when 0 ≤ w − d ≤ α √ m for some constant α. The probability of this occurring is:
∞ 0 1 √ 2πσ e − (x−µ) 2 2σ 2 x x−α √ m 1 √ 2πs e − y 2 2s 2 dy dx
By substituting for s, µ and σ, we get:
∞ 0 1 4πmk 2 /3 e − (x−mk/2) 2 4mk 2 /3 x x−α √ m 1 4πnk 2 /3 e − y 2
4nk 2 /3 dy dx For n ; ∞, this tends to:
∞ 0 1 4πmk 2 /3 e − (x−mk/2) 2 4mk 2 /3 α √ m 4πnk 2 /3 e − x 2 4nk 2 /3 dx As e −z ≤ 1 for z > 0, this is upper bounded by: α √ m 4πnk 2 /3 ∞ 0 1 4πmk 2 /3 e − (x−mk/2) 2 4mk 2 /3 dx
Since the integral is bounded by 1, m = Θ( √ n) and log 2 (k) = Θ(m), this upper bound varies as:
O( 1 √
m2 m ) Thus, we expect hard instances of manipulation problems to be exponentially rare. Since even a brute force manipulation algorithm takes O(2 m ) time in the worst-case, we do not expect the hard instances to have a significant impact on the average-case as n (and thus m) grows. We stress this is only a heuristic argument. It makes assumptions about the complexity of manipulation problems (in particular that hard instances should lie within the narrow interval 0 ≤ w − d ≤ α √ m). These assumptions are currently only supported by empirical observation and informal argument. However, the experimental results reported in Figure 3 support these conclusions.
Varying weights
The theoretical analyses of manipulation in [Procaccia and Rosenschein, 2007a;Xia and Conitzer, 2008a] suggest that the probability of an election being manipulable is largely independent of k, the size of the weights attached to the vetoes. Figure 4 demonstrates that this indeed appears to be the case in practice. When weights are varied in size from 2 8 to 2 16 , the probability does not appear to change. In fact, the probability curve fits the same simple and universal form plotted in Figure 2. We also observed that the cost of computing a manipulation or proving that none is possible did not change as the weights were varied in size.
Normally distributed votes
What happens with other distributions of votes? The theoretical analyses of manipulation in [Procaccia and Rosenschein, 2007a;Xia and Conitzer, 2008a] suggest that there is a critical coalition size that increases as Θ( √ n) for many types of independent and identically distributed random votes. Similarly, our heuristic argument about why hard manipulation problems are vanishingly rare depends on application of the Central Limit Theorem. It therefore works with other types of independent and identically distributed random votes. We plot the probability that a coalition of m agents can elect a chosen candidate where n agents have already voted. Vetoes are weighted and drawn from a normal distribution with mean 2 8 and standard deviation 2 7 . The x-axis is scaled by √ n.
We shall consider therefore another type of independent and identically distributed vote. In particular, we study an election in which weights are independently drawn from a normal distribution. Figure 5 shows that there is again a smooth phase transition in manipulability. We also plotted Figure 5 on top of Figures 2 and 4. All curves appear to fit the same simple and universal form. As with uniform weights, the computational cost of deciding if an election is manipulable was small even when the coalition size was critical. Finally, we varied the parameters of the normal distribution. The probability of electing a chosen candidate as well as the cost of computing a manipulation did not appear to depend on the mean or variance of the distribution.
Correlated votes
We conjecture that one place to find hard manipulation problems is where votes are more correlated. For example, consider a "hung" election where all n agents veto the candidate that the manipulators wish to win, but the m manipulators have exactly twice the total veto weight of the n agents. This election is finely balanced: the favored candidate of the manipulators wins iff the manipulators perfectly partition their vetoes between the two candidates that they wish to lose. In Figure 6, we plot the probability that the m manipulators can make their preferred candidate win in such a "hung" election as we vary the size of their weights k. Similar to number partitioning, we see a rapid transition in manipulability around log2(k)/m ≈ 1. In Figure 7, we observe a rapid increase in the computational cost of computing a manipulation around this point.
Figure 6: Manipulation of an election where votes are highly correlated and the result is "hung". We plot the probability that a coalition of m agents can elect a chosen candidate. Vetoes of the manipulators are weighted and uniformly drawn from (0, k], the other agents have all vetoed the candidate that the manipulators wish to win, and the sum of the weights of the manipulators is twice that of the non-manipulators.
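To make the winning condition in such a "hung" election concrete, the sketch below (our own illustration, not the code used in the experiments above, which rely on the more sophisticated complete CKK search) uses a simple subset-sum dynamic program to check whether a coalition can split its veto weights into two halves of equal total weight, which is exactly when the manipulators' favored candidate wins. The weight values in the example are hypothetical.

```python
def can_split_evenly(weights):
    """Return True iff the veto weights can be partitioned into two subsets
    of equal total weight -- the winning condition in the 'hung' election
    described above."""
    total = sum(weights)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}                       # subset sums reachable so far
    for w in weights:
        reachable |= {s + w for s in reachable if s + w <= target}
        if target in reachable:
            return True
    return False

if __name__ == "__main__":
    # Hypothetical coalition of m = 6 manipulators with weights drawn from (0, k].
    coalition = [13, 7, 22, 9, 18, 11]
    print(can_split_evenly(coalition))    # True: e.g. {22, 18} vs {13, 7, 9, 11}
```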
What happens when the votes are less correlated? We consider an election which is perfectly hung as before except for one agent who votes at random between the three candidates. In Figure 8, we plot the cost of computing a manipulation as the weight of this single random veto increases. Even one uncorrelated vote is enough to make manipulation easy if it has the same magnitude in weight as the vetoes of the manipulators. This suggests that we will only find hard manipulation problems when votes are highly correlated.
Other related work
There have been a number of other recent theoretical results about the computational complexity of manipulating elections. For instance, Procaccia and Rosenschein give a simple greedy procedure that will find a manipulation of a scoring rule for any "junta" distribution of weighted votes in polynomial time with a probability of failure that is an inverse polynomial in n [Procaccia and Rosenschein, 2007b]. A "junta" distribution is concentrated on the hard instances.
As a second example, Friedgut, Kalai and Nisan prove that if the voting rule is neutral and far from dictatorial and there are 3 candidates, then there exists an agent for whom a random manipulation succeeds with probability Ω(1/n) [Friedgut et al., 2008]. Xia and Conitzer showed that, starting from different assumptions, a random manipulation would succeed with probability Ω(1/n) for 3 or more candidates for STV, for 4 or more candidates for any scoring rule, and for 5 or more candidates for Copeland [Xia and Conitzer, 2008b].
Coleman and Teague provide polynomial algorithms to compute a manipulation for the STV rule when either the number of voters or the number of candidates is fixed [Coleman and Teague, 2007]. They also conducted an empirical study which demonstrates that only relatively small coalitions are needed to change the elimination order of the STV rule. They observe that most uniform and random elections are not trivially manipulable using a simple greedy heuristic. Finally, similar phenomena have been observed in the phase transition for the Hamiltonian cycle problem [Frank et al., 1998; Vandegriend and Culberson, 1998]. If the number of edges is small, there is likely to be a node of degree smaller than 2, so there cannot be any Hamiltonian cycle. By the time that there are enough edges for all nodes to be of degree 2, there are likely to be many possible Hamiltonian cycles, and even a simple heuristic can find one. Thus, the phase transition in the existence of a Hamiltonian cycle is not associated with hard instances of the problem. The behavior seen here is similar: by the time the coalition is large enough to manipulate the result, the variance in scores between the candidates is likely to be so large that computing a successful manipulation, or proving that none is possible, is easy.
Figure 7: The cost to decide if a hung election can be manipulated. We plot the cost for the CKK algorithm to decide if a coalition of m agents can manipulate a veto election. Vetoes of the manipulators are weighted and uniformly drawn from (0, k], the other agents have all vetoed the candidate that the manipulators wish to win, and the sum of the weights of the manipulators is twice that of the non-manipulators.
Conclusions
We have studied whether computational complexity is a barrier to manipulation of the veto rule. We showed that there is a smooth transition in the probability that a coalition can elect a desired candidate as the size of the manipulating coalition is varied. We demonstrated that a rescaled probability curve displays a simple universal form independent of problem size. Unlike phase transitions for other NP-complete problems, hard problems are not associated with this transition. Finally, we studied the impact of correlation between votes. We showed that manipulation is hard when votes are highly correlated and the election is "hung". However, even one uncorrelated voter was enough to make manipulation easy again.
What lessons can be learnt from this study? First, there appears to be a universal form for the probability that a coalition can manipulate the result. Can we derive this theoretically? Second, whilst we have focused on the veto rule, similar behavior is likely with other voting rules. It would, for instance, be interesting to study a more complex rule like STV, which is NP-hard to manipulate even without weights. Third, is there a connection between the smoothness of the phase transition and problem hardness? Sharp phase transitions like that for satisfiability are associated with hard decision problems, whilst smooth transitions are associated with easy instances of NP-hard problems and with polynomial problems like 2-colorability. Fourth, these results demonstrate that empirical studies improve our understanding of manipulation. It would be interesting to consider similar studies for related problems like preference elicitation [Walsh, 2007; Walsh, 2008; Pini et al., 2008].
Figure 8: The impact of one random voter on the manipulability of a hung election. We plot the cost for the CKK algorithm to decide if a coalition of m agents can manipulate a veto election. Vetoes of the manipulators are weighted and uniformly drawn from (0, k], the non-manipulating agents have all vetoed the candidate that the manipulators wish to win, and the sum of the weights of the manipulators is twice that of the non-manipulators except for one random non-manipulating agent whose weight is uniformly drawn from (0, k]. When the veto of the one random voter has the same weight as the other voters, it is computationally easy to decide if the election can be manipulated.
| 3,536 |
0904.4041
|
1488291195
|
The typical content-based image retrieval problem is to find images within a database that are similar to a given query image. This paper presents a solution to a different problem, namely that of content-based sub-image retrieval, i.e., finding images from a database that contain another image. Note that this is different from finding a region in a (segmented) image that is similar to another image region given as a query. We present a technique for CBsIR that explores relevance feedback, i.e., the user's input on intermediary results, in order to improve retrieval efficiency. Upon modeling images as a set of overlapping and recursive tiles, we use a tile re-weighting scheme that assigns penalties to each tile of the database images and updates the tile penalties for all relevant images retrieved at each iteration using both the relevant and irrelevant images identified by the user. Each tile is modeled by means of its color content using a compact but very efficient method which can, indirectly, capture some notion of texture as well, despite the fact that only color information is maintained. Performance evaluation on a largely heterogeneous dataset of over 10,000 images shows that the system can achieve a stable average recall value of 70% within the top 20 retrieved (and presented) images after only 5 iterations, with each such iteration taking about 2 seconds on an off-the-shelf desktop computer.
|
The paper by Leung and Ng @cite_3 investigates the idea of either enlarging the query sub-image to match the size of an image block obtained by the four-level multiscale representation of the database images, or conversely contracting the image blocks of the database images so that they become as small as the query sub-image. The paper presents an analytical cost model and focuses on avoiding I/O overhead during query processing time. To find a good strategy to search multiple resolutions, four techniques are investigated: the branch-and-bound algorithm, Pure Vertical (PV), Pure Horizontal (PH) and Horizontal-and-Vertical (HV). The HV strategy is argued to be the best considering efficiency. However, the authors do not report clear conclusions regarding the effectiveness (e.g., Precision and/or Recall) of their approach.
|
{
"abstract": [
"Many database management systems support whole-image matching. However, users may only remember certain subregions of the images. In this paper, we develop Padding and Reduction Algorithms to support subimage queries of arbitrary size based on local color information. The idea is to estimate the best- case lower bound to the dissimilarity measure between the query and the image. By making use of multiresolution representation, this lower bound becomes tighter as the scale becomes finer. Because image contents are usually pre- extracted and stored, a key issue is how to determine the number of levels used in the representation. We address this issue analytically by estimating the CPU and I O costs, and experimentally by comparing the performance and accuracy of the outcomes of various filtering schemes. Our findings suggest that a 3-level hierarchy is preferred. We also study three strategies for searching multiple resolutions. Our studies indicate that the hybrid strategy with horizontal filtering on the coarse level and vertical filtering on remaining levels is the best choice when using Padding and Reduction Algorithms in the preferred 3-level multiresolution representation. The best 10 desired images can be retrieved efficiently and effectively from a collection of a thousand images in about 3.5 seconds.© (1997) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only."
],
"cite_N": [
"@cite_3"
],
"mid": [
"2059783351"
]
}
|
Content-Based Sub-Image Retrieval with Relevance Feedback
|
Most of the content-based image retrieval (CBIR) systems perform retrieval based on a full image comparison, i.e., given a query image the system returns overall similar images. This is not useful if users are also interested in images from the database that contain an image (perhaps an object) similar to a query image. We call this searching process Content-Based sub-Image Retrieval (CBsIR), and it is defined as follows [18]: given an image query Q and an image database S, retrieve from S those images which contain Q according to some notion of similarity. To illustrate this, consider Figure 1, which displays an example query image and its relevant answer set. Figure 2 shows the three images of such an answer set, and their respective ranks, retrieved within the top 20 matches after CBsIR is performed. Note that the other 17 images returned are considered non-relevant to the query. Now assume that the user is given the opportunity to mark those 3 images as relevant and all other 17 as irrelevant, i.e., the user is allowed to provide relevance feedback. Figure 3 shows the relevant images retrieved (along with their rank) after taking such feedback into account. Note that all images previously obtained were ranked higher and also new images were found and ranked high as well. The sub-image retrieval problem we consider is similar to region-based image retrieval (RBIR), e.g. [1,9], since the goal may also be to retrieve images at object-level. However, there is a fundamental difference between these two. The CBsIR problem is to search for an image, given as a whole, which is contained within another image, whereas in RBIR one is searching for a region, possibly the result of some image segmentation. The former is more intuitive since users can provide a query image as in traditional CBIR, and unlike the latter, it does not rely on any type of segmentation preprocessing. Unfortunately, automatic image segmentation algorithms usually lead to inaccurate segmentation of the image when trying to achieve homogeneous visual properties. Sometimes the obtained regions are only parts of a real object and should be combined with some neighbor regions so as to represent a meaningful object. Thus, complex distance functions are generally used to compare segmented images at query time. Also, the number and size of regions per image are variable, and a precise representation of the obtained regions may be storage-wise expensive. Furthermore, since region-based queries are usually performed after the image segmentation and region description steps, this clearly puts some restriction on the user's expression of his/her information need, depending on how well the segmentation results match the semantics of the images, even though the user can explicitly select any detected region as the query region. In those image retrieval systems where images are heterogeneous, rich in texture, very irregular and variable in contents, accurate regions are hard to obtain, making RBIR likely to perform poorly. The main contribution of this paper is to realize CBsIR by employing relevance feedback, in order to capture the user's intentions at query time. As we discuss in the next section, relevance feedback is an interactive learning technique which has already been demonstrated to boost performance in CBIR and RBIR systems. Despite the great potential shown by relevance feedback, to the best of our knowledge there is no published research that uses it in the context of CBsIR, thus positioning our work as unique in this domain.
The remainder of this paper is organized as follows. In the next section we discuss some related work. We also summarize the BIC method [19] for CBIR and how we adopt it for the CBsIR system we propose. (As we shall discuss, BIC is used as a building block when modeling images within our proposed approach.) Our retrieval strategy uses query refinement as well as the incorporation of the user's judgement, via relevance feedback, into the image similarity measure. This forms the core contribution of this paper and is detailed in Section 3. In Section 4 we present and discuss experimental results, which support our claim of improved retrieval effectiveness. Finally, Section 5 concludes the paper and offers directions for future work.
Relevance Feedback within Traditional CBIR
The key issue in relevance feedback is how to use positive and negative examples to refine the query and/or to adjust the similarity measure. Early relevance feedback schemes for CBIR were adopted from feedback schemes developed for classical textual document retrieval. These schemes fall into two categories: query point movement (query refinement) and re-weighting (similarity measure refinement), both based on the well-known vector model.
The query point movement methods aim at improving the estimate of the "ideal query point" by moving it towards positive example points and away from the negative example points in the query space. One frequently used technique to iteratively update the query is Rocchio's formula [13]. It is used in the MARS system [16], replacing the document vector by visual feature vectors. Another approach is to update the query space by selecting feature models. The best way for effective retrieval is argued to be using a "society" of feature models determined by a learning scheme, since each feature model is supposed to represent one aspect of the image content more accurately than others.
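As a concrete illustration of query point movement, the sketch below applies Rocchio's formula to generic feature vectors. The weighting constants alpha, beta and gamma are hypothetical defaults chosen for the example, not values prescribed by MARS or any of the systems discussed here.

```python
def rocchio_update(query, positives, negatives, alpha=1.0, beta=0.75, gamma=0.25):
    """Move the query vector towards the centroid of the positive examples
    and away from the centroid of the negative examples (Rocchio's formula)."""
    dim = len(query)

    def centroid(vectors):
        if not vectors:
            return [0.0] * dim
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

    pos_c, neg_c = centroid(positives), centroid(negatives)
    return [alpha * query[i] + beta * pos_c[i] - gamma * neg_c[i] for i in range(dim)]

# Example with 3-dimensional color features (hypothetical values).
q_new = rocchio_update([0.2, 0.5, 0.3],
                       positives=[[0.3, 0.4, 0.3], [0.1, 0.6, 0.3]],
                       negatives=[[0.9, 0.0, 0.1]])
```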
Re-weighting methods enhance the importance of a feature's dimensions, helping to retrieve relevant images while also reducing the importance of the dimensions that hinder the process. This is achieved by updating the weights of feature vectors in the distance metric. The refinement of the re-weighting method in the MARS system is called the standard deviation method.
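For re-weighting, a minimal sketch of the standard-deviation heuristic is given below (our own illustration, not MARS code): each feature dimension is weighted by the inverse of its standard deviation over the positive examples, so dimensions on which the relevant images agree contribute more to the distance.

```python
def std_dev_weights(positives, eps=1e-9):
    """Weight each feature dimension by the inverse standard deviation of that
    dimension over the positive examples (standard-deviation re-weighting)."""
    dim = len(positives[0])
    weights = []
    for i in range(dim):
        values = [v[i] for v in positives]
        mean = sum(values) / len(values)
        std = (sum((x - mean) ** 2 for x in values) / len(values)) ** 0.5
        weights.append(1.0 / (std + eps))
    return weights

def weighted_distance(a, b, weights):
    """Weighted L1 distance used to rank images with the learned weights."""
    return sum(w * abs(x - y) for w, x, y in zip(weights, a, b))
```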
Recent work has proposed more computationally robust methods that perform global feature optimization. The MindReader retrieval system [5] formulates a minimization problem on the parameter estimation process. Using a distance function that is not necessarily aligned with the coordinate axes, the MindReader system allows correlations between attributes in addition to different weights on each component. A further improvement over the MindReader approach [14] uses a unified framework to achieve the optimal query estimation and weighting functions. By minimizing the total distances of the positive examples from the revised query, the weighted average and a whitening transform in the feature space are found to be the optimal solutions. However, this algorithm does not use the negative examples to update the query and image similarity measure, and initially the user needs to input the critical data of training vectors and the relevance matrix into the system.
Tasks that can be improved as a result of experience can be considered machine-learning tasks. Therefore, relevance feedback can be considered a learning method: the system learns from the examples provided as feedback by a user, i.e., his/her experience, to refine the retrieval results. The aforementioned query-movement method represented by Rocchio's formula and the re-weighting method are both simple learning methods. However, as users are usually reluctant to provide a large number of feedback examples, the number of training samples is very small. Furthermore, the number of feature dimensions in CBIR systems is also usually high. Thus, learning from small training samples in a very high-dimensional feature space makes many learning methods, such as decision tree learning and artificial neural networks, unsuitable for CBIR.
There are several key issues in addressing relevance feedback in CBIR as a small sample learning problem. First, how to quickly learn from small sets of feedback samples to improve the retrieval accuracy effectively; second, how to accumulate the knowledge learned from the feedback; and third, how to integrate low-level visual and high-level semantic features in the query. Most of the research in the literature has focused on the first issue. In that respect, Bayesian learning has been explored and has been shown advantageous compared with other learning methods, e.g., [21]. Active learning methods have been used to actively select samples which maximize the information gain, or minimize entropy/uncertainty in decision-making. These methods enable fast convergence of the retrieval result, which in turn increases user satisfaction. Chen et al. [2] use Monte Carlo sampling to search for the set of samples that will minimize the expected number of future iterations. Tong and Chang [20] propose the use of an SVM active learning algorithm to select the sample which maximizes the reduction in the size of the version space in which the class boundary lies. Without knowing a priori the class of a candidate, the best search is to halve the search space each time. In their work, the points near the SVM boundary are used to approximate the most-informative points, and the most-positive images are chosen as the ones farthest from the boundary on the positive side in the feature space.
Relevance Feedback within RBIR
Relevance feedback has been introduced in RBIR systems for a performance improvement as it does for the image retrieval systems using global representations.
In [6], the authors introduce several learning algorithms using adjusted global image representations for RBIR. First, the query point movement technique is considered by assembling all the segmented regions of positive examples together and resizing the regions to emphasize the latest positive examples in order to form a composite image as the new query. Second, the application of support vector machines (SVM) [20] in relevance feedback for RBIR is discussed. Both the one-class SVM as a class distribution estimator and the two-class SVM as a classifier are investigated. Third, a region re-weighting algorithm is proposed, corresponding to feature re-weighting. It assumes that important regions should appear more times in the positive images and fewer times in all the images of the database. For each region, measures of region frequency RF and inverse image frequency IIF (analogous to the TF and IDF in text retrieval [22]) are introduced for the region importance. Thus the region importance is defined as its region frequency RF weighted by the inverse image frequency IIF, and normalized over all regions in an image. Also, the feedback judgement is memorized for future use by calculating the cumulative region importance. However, this algorithm only considers positive examples, ignoring the effect of the negative examples in each iteration of the retrieval results. Nevertheless, experimental results on a general-purpose image database demonstrate the effectiveness of those proposed learning methods in RBIR.
CBsIR without Relevance Feedback
The paper by Leung and Ng [8] investigates the idea of either enlarging the query sub-image to match the size of an image block obtained by the four-level multiscale representation of the database images, or conversely contracting the image blocks of the database images so that they become as small as the query sub-image. The paper presents an analytical cost model and focuses on avoiding I/O overhead during query processing time. To find a good strategy to search multiple resolutions, four techniques are investigated: the branch-and-bound algorithm, Pure Vertical (PV), Pure Horizontal (PH) and Horizontal-and-Vertical (HV). The HV strategy is argued to be the best considering efficiency. However, the authors do not report clear conclusions regarding the effectiveness (e.g., Precision and/or Recall) of their approach.
The authors of [18] consider global feature extraction to capture the spatial information within image regions. The average color and the covariance matrix of the color channels in L*a*b color space are used to represent the color distribution. They apply a three level non-recursive hierarchical partition to achieve multiscale representation of database images by overlapping regions within them. Aiming at reducing the index size of these global features, a compact abstraction for the global features of a region is introduced. As well, a new distance measure between such abstractions is introduced for efficiently searching through the tiles from the multi-scale partition strategy. This distance is called inter hierarchical distance (IHD) since it is taken between feature vectors of different hierarchical levels of the image partition. The IHD index is a two dimensional vector which consumes small storage space. The search strategy is a simple linear scan of the index file, which assesses the similarity between the query image and a particular database image as well as all its sub-regions using their IHD vectors. Finally, the minimum distance found is used to rank this database image.
In [11] a new method called HTM (Hierarchical Tree Matching) for the CBsIR problem was proposed. It has three main components: (1) a tree structure that models a hierarchical partition of images into tiles using color features, (2) an index sequence to represent the tree structure (allowing fast access during the search phase), and (3) a search strategy based on the tree structures of both database images and the query image. Since the tree structure presented in [11] is re-used in our work, we detail it in the following. To model an image, a grid is laid on it yielding a hierarchical partition and tiles. Although granularity could be arbitrary, we have obtained good results using a 4×4 grid resulting in a three-level multiscale representation of the image (similarly to what was done in [8] and [18]). The hierarchical partition of an image and its resulting tree structure are illustrated in Figure 4. There are three levels in the hierarchical structure. The highest level is the image itself. For the second level the image is decomposed into 3×3 rectangles with each side having half the length of the whole image, yielding 9 overlapping tiles. The lowest level consists of 4×9=36 rectangles, since each tile of the second level is partitioned into 4 non-overlapping sub-tiles. Note that, to exclude redundancy in the CBsIR system, only the indices of the 4×4=16 unique tiles in the lowest level are stored, together with a small structure for relationship information. This tiling scheme is obviously not unique and, as long as a well-formed hierarchy of tiles is used to model the image, the technique we propose can still be applied after corresponding adjustments. The average color of the image tiles in the RGB color space is associated with the nodes in the tree structures for images. Thus, every database image is represented as a series of tiles, each of which is mapped to a subtree of the tree modeling the image.
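The following sketch enumerates the tile bounding boxes of the three-level partition just described (the full image, the 9 overlapping half-size tiles, and the 16 unique quarter-size tiles of the 4×4 grid) and computes an average color per tile. It is our own illustration of the scheme, not the original HTM implementation, and it assumes the image is given as a 2D list of (r, g, b) tuples.

```python
def tile_boxes(width, height):
    """(left, top, right, bottom) boxes of the three-level hierarchical
    partition: 1 full tile + 9 overlapping half-size tiles + 16 unique
    quarter-size tiles (the 4x4 grid described above)."""
    boxes = [(0, 0, width, height)]            # level 1: the whole image
    hw, hh = width // 2, height // 2           # half-size tile dimensions
    qw, qh = width // 4, height // 4           # quarter-size tile dimensions
    for row in range(3):                       # level 2: 3x3 overlapping tiles
        for col in range(3):
            left, top = col * qw, row * qh
            boxes.append((left, top, left + hw, top + hh))
    for row in range(4):                       # level 3: the 16 unique sub-tiles
        for col in range(4):
            left, top = col * qw, row * qh
            boxes.append((left, top, left + qw, top + qh))
    return boxes

def average_color(pixels, box):
    """Average RGB color of the pixels inside a box; pixels[y][x] is (r, g, b)."""
    left, top, right, bottom = box
    acc, count = [0.0, 0.0, 0.0], 0
    for y in range(top, bottom):
        for x in range(left, right):
            for c in range(3):
                acc[c] += pixels[y][x][c]
            count += 1
    return tuple(v / count for v in acc)
```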
An index sequence representing the predefined parent-child relationship (given by the predefined order of entries in the index) for the tree structure is stored on secondary storage and used for fast retrieval. Details about the index sequence structure can be found elsewhere [11]; in short, it resembles a priority tree where the relative order among the tree nodes reflects the relative order of the entries, and it can be efficiently mapped onto an array structure. Such a structure allows one to efficiently traverse the necessary indices for computing (sub)image similarity. The searching process is accomplished by "floating" the tree structure of the query image over the full tree structure of the candidate database image, shrinking the query's tree structure so that it is comparable with the candidate database image's trees at each level of the hierarchical structure. The minimum distance from tree comparisons at all hierarchical levels, indicating the best matching tile from a database image, is used as the distance between the database image and the query. Differently from [18], the HTM search strategy considers local information of images' tiles represented by leaf nodes in the subtree structures. The average of the distance values among the corresponding leaf nodes is taken as the distance between the tree structures of the query image and a certain tile of the database image at any hierarchical level.
Even though different datasets were used, experiments detailed in [11] strongly suggest that the proposed approach yields better retrieval accuracy compared to [18], at the cost of small storage overhead.
The BIC-based Image Abstraction
A straightforward way to model an image is to use its average color. This is obviously not effective in any non-trivial situation. Another simple, and in many situations cost-effective, means is to use a global color histogram (GCH) (cf. [10]). A common critique of GCHs is that they are unable to capture any notion of spatial distribution. To address this, several other approaches have been proposed, but they add complexity as a trade-off in order to gain effectiveness. Nevertheless, the use of color only, without any notion of spatial distribution, may be effective if one is able to capture other features of the images, e.g., texture. That is exactly the advantage of the BIC technique proposed in [19], which we re-use within our proposal.
The image analysis algorithm of BIC classifies each pixel as interior, when its quantized color is the same as that of its neighbors, and as border otherwise; two normalized histograms are then computed, considering only the interior pixels and the border pixels respectively. That is, for each color two histogram bins exist: one in the border-pixel histogram and one in the interior-pixel histogram. This allows a more informed color distribution abstraction and captures, implicitly, a notion of texture.
To illustrate the idea, consider two images, one composed of two equally sized solid color blocks of different colors, say C1 and C2, and another one where half of the pixels have color C1 and are randomly distributed; likewise, the other half of the pixels have color C2 and are also randomly distributed. Clearly the BIC histograms of those images are quite different: one will have almost only interior pixels and the other will have almost only border pixels. This will yield a low similarity measure, which is indeed the case. Note that the global color histogram, a standard CBIR technique, for both images would be identical, misleading one to think the images were very similar. Note also that the difference in the histograms suggests a very different texture in the images, which, on top of the possible color differences, enhances the capability of distinguishing among images even further. For histogram comparison within BIC, the dLog distance function is used to diminish the effect that a large value in a single histogram bin dominates the distance between histograms, no matter the relative importance of this single value [10,12]. The basic motivation behind this is the observation that classical techniques based on global color histograms treat all colors equally, regardless of their relative concentration. However, the perception of a stimulus, color in images in particular, is believed to follow a "sigmoidal" curve [12]. A relative increment in a stimulus is perceived more clearly when the intensity of the stimulus is small than when it is large. For instance, a change from 10% to 20% of a color is perceived more clearly than a change from 85% to 95%. Indeed, this is a well-observed phenomenon regarding how sensitive one is (including animals) to different stimuli [3]. Thus, the distance function is defined as:

dLog(a, b) = Σ_{i=0}^{M} |f(a[i]) − f(b[i])|, where f(x) = 0 if x = 0; 1 if 0 < x ≤ 1; and ⌈log2(x)⌉ + 1 otherwise.

The same logarithmic compression is also applied when storing the histogram bin values [19], requiring only 4 bits of storage per histogram bin. This allows substantial reduction in storage, and yet a reasonably fine discretization of the bins.
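A small sketch of the border/interior classification and the dLog comparison is shown below. It is our own illustration rather than the original BIC implementation: the image is assumed to be already color-quantized (a 2D list of color indices), and we assume the normalized bins are rescaled to the integer range [0, 255] before the logarithmic compression is applied.

```python
import math

def bic_histograms(img, n_colors):
    """Split the color histogram of a quantized image into border and interior
    histograms: a pixel is 'interior' when its 4-neighbours all have the same
    quantized color (pixels on the image boundary count as border)."""
    h, w = len(img), len(img[0])
    border, interior = [0] * n_colors, [0] * n_colors
    for y in range(h):
        for x in range(w):
            c = img[y][x]
            neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            same = all(0 <= ny < h and 0 <= nx < w and img[ny][nx] == c
                       for ny, nx in neighbours)
            (interior if same else border)[c] += 1
    total_b, total_i = sum(border) or 1, sum(interior) or 1
    return [b / total_b for b in border], [i / total_i for i in interior]

def _f(x):
    """Logarithmic compression of an integer bin value."""
    if x == 0:
        return 0
    if x <= 1:
        return 1
    return math.ceil(math.log2(x)) + 1

def dlog(a, b):
    """dLog distance between two normalized histograms (bins in [0, 1])."""
    return sum(abs(_f(round(x * 255)) - _f(round(y * 255))) for x, y in zip(a, b))
```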
The BIC approach was shown in [19] to outperform several other CBIR approaches and, as such, we adopt it in our CBsIR proposal to extract and compare the visual feature of each tile with the goal of improving the retrieval accuracy.
Relevance Feedback for CBsIR
Despite the great potential of relevance feedback shown in CBIR systems using global representations and in RBIR systems, to the best of our knowledge there is no research that uses it within CBsIR systems. In this section we present our solution for CBsIR by using relevance feedback to learn the user's intention. Our relevance feedback approach has three main components: (1) a tile re-weighting scheme that assigns penalties to each tile of database images and updates those tile penalties for all relevant images retrieved at each iteration using both the relevant (positive) and irrelevant (negative) images identified by the user; (2) a query refinement strategy that is based on the tile re-weighting scheme to approach the most informative query according to the user's intention; (3) an image similarity measure that refines the final ranking of images using the user's feedback information. Each of these components is explained in detail in the following subsections.
Tile Re-Weighting Scheme
Research in RBIR [7,6] has proposed region re-weighting schemes for relevance feedback. In this work, we design a tile re-weighting scheme that specializes the technique presented in [7] to accommodate our tile-oriented (not region-oriented) HTM approach for CBsIR. It should be emphasized that, instead of considering all the images in the database to compute the parameters for region weights [6] (which is computationally expensive), our tile re-weighting scheme uses only the positive and negative examples identified by the user to update the tile penalties of the positive images only, which is much more efficient. Moreover, the region re-weighting scheme in [7] uses a predefined similarity threshold to determine whether a region and an image are similar or not; otherwise, the comparison of region pairs would become too expensive, since images might consist of different and large numbers of regions. This threshold is sensitive and subject to change for different kinds of image datasets. Thus, how to obtain the right threshold is yet another challenge for the relevance feedback method in RBIR. However, our RF method for the CBsIR problem does not need any threshold, because the number of obtained tiles is the same (and small) for each database image and there exists an implicit relationship between the tiles, which makes it easier to compare them.
In our system, the user provides feedback information by identifying positive and negative examples from the retrieved images. The basic assumption is that important tiles should appear more often in positive images than unimportant tiles, e.g., "background tiles" should yield to "theme tiles" in positive images. On the other hand, important tiles should appear less often in negative images than unimportant tiles. Following the principle of "more similar means better matched, thus less penalty", we assign a penalty to every tile that represents the database image for the matching process. The user's feedback information is used to estimate the "tile penalties" for all positive images, which also refines the final ranking of images. During the feedback iterations, the user does not need to specify which tile of a certain positive image is similar to the query; this would make the problem simpler to solve, but at an additional cost to the user.
Next, we introduce some definitions used to determine the tile penalty and to formalize the overall relevance feedback process.
Definition 1: The distance between two tiles T_a and T_b, from images I_a and I_b respectively, is:

DT(T_a, T_b) = (1/m) Σ_{i=1}^{m} d(Feature(T_{a,i}), Feature(T_{b,i}))

where T_{a,i} and T_{b,i} are sub-tiles of T_a and T_b respectively, m is the number of unique leaf nodes in the tiles' tree structures at any hierarchical level (if already at the leaf level, m = 1), and the distance function d is to be instantiated with some particular measure based on the result of the feature extraction done by the Feature function on the tiles, e.g., BIC's dLog() function defined in the previous section. •
Definition 2: The penalty for a certain tile i from a database image after k iterations is defined as TP_i(k), i = 0, ..., N_T, where N_T + 1 is the number of tiles per database image, and TP_i(0) is initialized as 1/(N_T + 1). •
For instance, in Figure 4, N_T + 1 = 1 + 9 + 16, i.e., it is equal to the number of nodes in the tree structure representing the hierarchical partition of a database image; for the lowest level, only unique nodes count.
Definition 3: For each tile from a positive image, we define a measure of the distance DTS between a tile T and an image set IS = {I_1, I_2, ..., I_n}. This reflects the extent to which the tile is consistent with the other positive images in the feature space. Intuitively, the smaller this value, the more important this tile is in representing the user's intention.
DTS(T, IS) = Σ_{i=1}^{n} exp(DT(T, I_i^0)), if T is at the full-tree level;
DTS(T, IS) = Σ_{i=1}^{n} exp(min_{j=1..N_T} DT(T, I_i^j)), if T is at a subtree level,

where N_T in this case is the number of tiles at the current subtree level and I_i^j denotes the j-th tile of image I_i (with j = 0 denoting the full image). •
Assuming that I is one of the identified positive example images, we can compute the tile penalties of image I, which consists of tiles {T_0, T_1, ..., T_{N_T}}. The user provides positive and negative example images during each k-th iteration of feedback, denoted respectively as IS^+(k) = {I_1^+(k), ..., I_p^+(k)} and IS^-(k) = {I_1^-(k), ..., I_q^-(k)}, where p + q is typically much smaller than the size of the database.
Based on the above preparations, we now come to the definition of tile penalty.
Definition 4: For all positive images, the tile penalty of T_i after k iterations is computed (and normalized) as:

TP_i(k) = W_i × DTS(T_i, IS^+(k)) / Σ_{j=0}^{N_T} (W_j × DTS(T_j, IS^+(k)))

where the weight W_i, the inverse of the tile's distance from the negative image set, acts as a penalty, reflecting the influence of the negative examples. •
This captures the intuition that a tile from a positive example image should be penalized if it is similar to negative examples. Basically, we compute the distances DTS between a particular tile T and the positive image set IS^+ as well as the negative image set IS^- respectively to update the penalty of that tile from a positive example image. The inverse of the tile's distance from the negative image set is used to weight its corresponding distance from the positive image set.
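A minimal sketch of this tile re-weighting step follows. For readability it flattens the hierarchy: each image is a list of tile feature vectors with index 0 standing for the full image and the remaining indices for the subtree-level tiles, and a plain L1 distance stands in for the dLog distance of the actual system. All function and variable names are ours.

```python
import math

def l1(a, b):
    """Feature distance between two tiles; the real system would use dLog."""
    return sum(abs(x - y) for x, y in zip(a, b))

def dts(tile_feat, image_set, full_level):
    """Distance between a tile and an image set (Definition 3).  Each image is
    a list of tile features; index 0 is the full image."""
    if full_level:
        return sum(math.exp(l1(tile_feat, img[0])) for img in image_set)
    return sum(math.exp(min(l1(tile_feat, t) for t in img[1:])) for img in image_set)

def tile_penalties(pos_image, positives, negatives):
    """Updated, normalized tile penalties of one positive image (Definition 4);
    W_i is the inverse of the tile's distance from the negative image set."""
    raw = []
    for i, tile_feat in enumerate(pos_image):
        full = (i == 0)
        w = 1.0 / dts(tile_feat, negatives, full) if negatives else 1.0
        raw.append(w * dts(tile_feat, positives, full))
    total = sum(raw) or 1.0
    return [r / total for r in raw]
```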
Let us now illustrate the above methodology with a simple example, which also motivates the notion of tile penalty. For simplicity, assume that the color palette consists of only three colors: black, grey and white. Figure 6 shows the top 3 retrieved images and the user's feedback judgement. Image I_1 is marked as a positive example since it actually contains the query image, which exactly represents the sub-image retrieval problem we are dealing with. Image I_2 is also marked as a positive example because it is an enlargement of the query image (and therefore contains it as well). For the sake of illustration, assume a two-level multi-scale representation of database images is used, as in Figure 7.
The tile penalties for each database image are initialized as 0.1 for the 10 tiles, i.e., TP_i(0) = 0.1, i ∈ [0, 9]. Now, take tile T_1 for example. According to Definition 3, we need to compute the distances DTS between T_1 and the positive/negative image sets. In order to do this, firstly, the distances between T_1 and all tiles at the corresponding subtree levels of all the images in the positive/negative image sets should be obtained by Definition 1. Then, using Definition 4, the new penalty of T_1 is updated from 0.1 to 0.090 accordingly. The penalties for the other tiles are updated in the same way during each feedback iteration. We illustrate the new values of all tile penalties for database image I_1 as a positive example after one feedback iteration in Figure 7. We can see that, after the user provides feedback information, some tiles lose some weight while others gain. For instance, T_1, T_2, T_3 and T_9 receive smaller penalties now because they only contain the colors grey and/or black, which are also in the query. T_0, T_4, T_5, T_7 and T_8 are penalized more since they all contain the color white. The new weights for these tiles generally follow the trend that the higher the percentage of white, the higher the penalty. T_6, which is a rotation of the query image, maintains its weight for this iteration. This means that our system is to some extent also capable of perceiving changes such as rotation. Besides, taking a closer look at the updated tile penalties of positive image I_1, T_1 receives more penalty than T_3 now, although they are similar to the query image to the same degree. Note that, according to Definition 4, both the positive and the negative example images are used to calculate new tile penalties, and we penalize a tile more if it is also somewhat more similar to the negative example images compared with other tiles in the positive example image. Thus it is reasonable that the tile penalty for T_1 appears higher than that for T_3 after feedback learning, since T_1 contains some black color, which is also in the negative example image I_3, while T_3 contains only grey.
Query Feature Update
The relevance feedback process using the query refinement strategy is based on the tile re-weighting scheme and on all positive and negative example images. The main concern is that we need to maintain as much as possible of the original feature of the query image while introducing new feature elements that would capture more new relevant images. Considering the hierarchical tree structure of the query image, we use the most similar tile (the one with minimum tile penalty) at every subtree level of each positive image to update the query feature at the corresponding subtree level. Definition 5: The updated query feature after k iterations is:
qn_l^k[j] = ( Σ_{i=1}^{p} (1 − TPmin_{i,l}(k)) × Pos_{i,l}^k[j] ) / ( Σ_{i=1}^{p} (1 − TPmin_{i,l}(k)) )

where qn_l^k is the new feature, with M dimensions, for a subtree (tile) at the l-th level of the tree structure of the query image after k iterations; TPmin_{i,l}(k) is the minimum tile penalty for a subtree (tile) found at the l-th level of the tree structure of the i-th positive image after k iterations; Pos_{i,l}^k is the feature of the subtree (tile) with minimum tile penalty at the l-th level of the i-th positive image's tree structure after k iterations; and p is the number of positive images given by the user at this iteration. • Intuitively, we use a weighted average to update the feature for a subtree (tile) of the query, based on the features of those tiles that have minimum tile penalties within the respective positive images. In this way, we try to approach the optimal query that carries the most information needed to retrieve as many images relevant to the query as possible.
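The sketch below implements this weighted average for a single hierarchy level; handling all levels simply repeats it per level. It assumes the tile penalties of the positive images have already been computed (e.g. with the tile_penalties sketch above), and the names are ours.

```python
def update_query_feature(positive_images, penalty_lists):
    """Weighted average of the minimum-penalty tile of each positive image
    (Definition 5), for one level of the hierarchy.

    positive_images -- list of images, each a list of tile feature vectors
    penalty_lists   -- matching list of tile-penalty lists
    """
    numerator, denominator = None, 0.0
    for tiles, penalties in zip(positive_images, penalty_lists):
        best = min(range(len(tiles)), key=lambda i: penalties[i])
        weight = 1.0 - penalties[best]          # low penalty -> high weight
        if numerator is None:
            numerator = [0.0] * len(tiles[best])
        numerator = [n + weight * f for n, f in zip(numerator, tiles[best])]
        denominator += weight
    return [n / denominator for n in numerator] if denominator else numerator
```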
Image Similarity
With the updated query feature and the tile penalties for positive images, we can now define the distance between images and the query for ranking evaluation at each feedback iteration. In order to locate the best match to the query sub-image, our image similarity measure tries to find the minimum of the distances between the database image tiles and the query (recall that both the database image and the query sub-image have been modeled by the tree structure in the same way) at the corresponding hierarchical level in the tree structure, weighted by the tile penalty of the corresponding database image tiles.
Definition 6: The distance between the (updated) query image Q and a database image I at the k-th iteration is:

DI^k(I, Q) = min_{i=0..N_T} { TP_i(k−1) × DT(I_i, Q_j) }

where N_T + 1 is the number of all subtrees in the tree structure (tiles) of a database image, and TP_i(k−1) is the tile penalty for the i-th tile of image I after k−1 iterations. •
For the comparison of full tree structures, i = 0 and j = 0, indicating the full tree structures of the database image and the query image respectively. For the comparison of subtree structures, i = 1..N_l for each 1 ≤ j ≤ (L − 1), where N_l is the number of subtree structures at the l-th level of the tree structure and L is the number of levels of the tree structure, mapped from the hierarchical partition; j indicates the subtree structure at a particular level of the query image's tree structure, obtained by shrinking the original query tree structure so that the comparison with the subtree structures of database images is possible.
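A simplified ranking sketch based on this definition follows. It pairs database tiles and query tiles by index (i.e., it glosses over the shrinking of the query tree to the matching level) and takes the distance function as a parameter; it is an illustration of the scoring rule, not the system's actual search code.

```python
def image_distance(db_tiles, penalties, query_tiles, dist):
    """Distance between a database image and the query (Definition 6): the
    minimum penalty-weighted tile distance over all tiles."""
    return min(penalties[i] * dist(db_tiles[i], query_tiles[i])
               for i in range(len(db_tiles)))

def rank_images(database, all_penalties, query_tiles, dist):
    """Return database image indices sorted by increasing distance to the query."""
    scored = sorted((image_distance(tiles, pens, query_tiles, dist), idx)
                    for idx, (tiles, pens) in enumerate(zip(database, all_penalties)))
    return [idx for _, idx in scored]
```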
Finally, the overall relevance feedback process for the CBsIR system can be summarized in the following algorithm:
1. The user submits a query (sub-)image. 2. The system retrieves the initial set of images using the proposed similarity measure, which consists of database images containing tiles similar to the query sub-image. 3. The user marks each of the presented images as a positive (relevant) or negative (irrelevant) example. 4. The system updates the tile penalties of the positive images using both the positive and negative examples (Definitions 1-4). 5. The system refines the query feature using the updated tile penalties of the positive images (Definition 5). 6. The revised query and the new tile penalties for database images are used to compute the ranking score for each image and sort the results.
7. Show the new retrieval results and, if the user wishes to continue, go to step 3.
Experiments and Results
Before going further, let us define the metrics we use to measure retrieval effectiveness. For certain applications, it is more useful that the system brings new relevant images (found due to the update of the query feature from previous feedback) forward into the top range rather than keeping those already retrieved relevant images again in the current iteration. For other applications, however, the opposite situation applies: the user is more interested in obtaining more relevant images during each iteration, keeping those s/he has already seen before. Given these observations, we use two complementary pairs of measures for precision and recall: the actual recall and actual precision, which consider all relevant images presented at the current iteration, and the new recall and new precision, which consider only those relevant images presented at the current iteration that had not been presented in any previous iteration. The new recall and precision explicitly measure the learning aptitude of the system; ideally it retrieves more new relevant images as soon as possible. Moreover, we also measure the total number of distinct relevant images the system can find during all the feedback iterations. This is a history-based measure that implicitly includes some relevant images "lost" (out of the currently presented images) in the process. We call these measures cumulative recall and cumulative precision, defined as follows:
1. Cumulative Recall: the percentage of distinct relevant images from all iterations so far (not necessarily shown at the current iteration) over the number of relevant images in the predefined answer set. 2. Cumulative Precision: the percentage of distinct relevant images from all iterations so far over the number of images returned (presented) at each iteration. Table 1 exemplifies the measures mentioned above, assuming the answer set for a query contains 3 images A, B, C and the number of returned (presented) images is 5.
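A small sketch of these measures is given below; the exact denominators (the answer-set size for recall, the number of presented images for precision) follow our reading of the text above, and the set-based bookkeeping of previously seen relevant images is our own simplification.

```python
def feedback_metrics(presented, answer_set, seen_before):
    """Per-iteration retrieval measures.

    presented   -- image ids shown to the user at this iteration
    answer_set  -- ids of the relevant images for the query
    seen_before -- relevant ids already presented in earlier iterations
    """
    hits = set(presented) & set(answer_set)
    new_hits = hits - set(seen_before)
    cumulative = hits | set(seen_before)
    return {
        "actual_recall": len(hits) / len(answer_set),
        "actual_precision": len(hits) / len(presented),
        "new_recall": len(new_hits) / len(answer_set),
        "new_precision": len(new_hits) / len(presented),
        "cumulative_recall": len(cumulative) / len(answer_set),
        "cumulative_precision": len(cumulative) / len(presented),
    }
```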
In addition to the above measures, we also evaluate storage overhead and query processing time.
We test the proposed relevance feedback approach using a heterogeneous image dataset consisting of 10,150 color JPEG images: a mixture of the public Stanford10k dataset and some images from one of COREL's CD-ROMs, each of which falls into a particular category; we use 21 such categories. Some categories do not have rotated or translated images, but others do. On average, each answer set has 11 images, and none of the answer sets has more than 20 images, which is the number of images we present to the user for feedback during each iteration. It is important to note that the queries and answer sets are not part of the Stanford10k dataset, in order to minimize the probability that other images, not contained in the expected answer set, could also be part of the answer but not accounted for. We manually crop part of a certain image from each of the above categories to form a query image set of 21 queries (one for each category). Images of the same categories serve as the answer sets for the queries (one sample query and its corresponding answer set are shown in Figure 1). The size of the query image varies, being on average 18% of the size of the database images. The following performance results are collected from the online demo available at http://db.cs.ualberta.ca/mn/CBsIR.html. (A sample of the two initial iterations using our system is presented in the Appendix.)
In our experiments, the maximum number of iterations explored is set to 10 (users will give feedback 9 times by pointing out which images are relevant (positive)/irrelevant (negative) to the query) and we present the top 20 retrieved images at each iteration. While within the same query session, the information collected at one step of the relevance feedback phase is used in the next step (as indicated in the definitions presented in Section 3), the information collected across different query sessions is not integrated into the search for the next queries -even if the very same query is submitted to the system again. I.e., we assume query sessions are independent; more specifically, once the user goes to the initial page, all accumulated learning is cleared. This consideration is based on the observation of the subjectivity of human perception and the fact that even the same person could perceive the same retrieval result differently at different times. As discussed earlier we use BIC histograms to model the contents of an image tile. The number of quantized colors in such histograms is therefore a parameter for BIC. We use two different values for this parameter, 16 and 64 colors, in order to evaluate the influence of the underlying tile model on the overall retrieval effectiveness. Table 2 shows how many, on average, iterations were necessary to have the original image (the one from which the query sub-image was extracted) placed within the top 20 images. It is clear that using 64 quantized colors is more efficient, as the hit rate of the original images is almost optimal. Even though this trend, i.e., the more colors the better the retrieval, is fairly intuitive, it is interesting to see that this advantage does not grow linearly with the number of colors across all experiments. That is to say, that even using a low number of colors one can still obtain fairly good results.
The retrieval accuracy using 64 quantized colors is shown in Figures 8 and 9. As can be clearly seen, after 5 iterations the system has already learned most of the information it could learn, i.e., the information gain (given by the new recall and new precision curves) is nearly null. On the other hand, after only 5 iterations the actual recall and actual precision values increased by 55% and 60% respectively. It is also noteworthy that the stable actual precision value of nearly 40% is not as low as it may seem at first. The answer sets have an average of 11 images and, since the user is presented with 20 images, the maximum precision one could get (on average) would be about 50%, as almost half of the displayed images could not be considered relevant by construction. This interpretation leads to the proposal of the following measure:
• Normalized Precision: the actual precision over the maximum possible actual precision value.
Interestingly enough, careful consideration of such a measure shows that it is equivalent to the usual notion of (actual) recall. Indeed, consider R and A to be the sets of relevant answers and the retrieved answers with respect to a given query. The actual precision is then defined as |R ∩ A|/|A|. The maximum precision value one can obtain is |R|/|A|. When the former is divided by the latter, one obtains |R ∩ A|/|R|, which is precisely the definition of actual recall. This leads to the argument that precision-based measures are not well suited for this type of scenario, where non-relevant images are very likely to be included in the retrieved set by construction. The actual recall, being concerned only with the relevant images, is a more realistic measure. Under this argument, 70% of stable actual recall (or normalized precision) after 5 iterations seems quite reasonable.
We also obtained about 85% for cumulative recall and about 50% for cumulative precision. The reason these values are higher than those for actual recall and actual precision is that some relevant images that may be "lost" in subsequent iterations are always accounted for in these measures.
Using 16 quantized colors, as one would expect, yields less accuracy than using 64 quantized colors. However, an interesting aspect shown in Figures 10 and 11 is that even though the amount of information (i.e., number of colors) was reduced by 75%, the effectiveness was reduced by at most 10% compared to the values in Figures 8 and 9. The cost of the loss of information is clearer when looking at the "learning aptitude": using 16 colors required twice as many iterations to bring the curves to a stable state. Still, this shows a sublinear dependence on the number of colors: using 4 times more colors yields only 10% more effectiveness and 2 times faster learning.
Another interesting observation, which supports the main advantage of using more colors for tile abstraction, can be seen when comparing the new precision and recall curves for different numbers of colors directly (Figures 12 and 13). Up until the 4th or 5th iteration, using 64 colors yields higher values, meaning that it is learning faster; after that point, it has learned basically all it could have learned. On the other hand, the curve for using 16 colors shows that the method is still learning. Figure 14 shows the average time required to process a query during each iteration, i.e., to access all disk-resident data, complete the learning from the user's feedback at the current iteration (not applicable to the first iteration), obtain the distance between the query image and database images and sort them by their resulting ranks. The first iteration takes, on average, slightly less than 2 seconds when using 64 quantized colors and 0.6 second when using 16 quantized colors, whereas each subsequent iteration requires about 2.5 seconds and 1 second respectively for the two feature representations. This slight increase is due to the overhead of computing and updating the tile penalties at each iteration. As well, note that the cost in speed is roughly proportional to the number of colors used, i.e., using 64 colors is about 2.5 times slower per iteration than using only 16 quantized colors. Extracting image features from the image database, applying the BIC method, and generating the metadata file requires about 0.15 secs/image on a computer running Linux 2.4.20 with an AMD Athlon XP 1900+ CPU and 1GB of main memory, and is independent of the number of colors used; this procedure can be done off-line and should not be considered part of query processing overhead.
Finally, the storage cost for the disk-resident metadata using 64 quantized colors is 10.5 MB (only about 20% of the size of the image database), while using 16 quantized colors needs proportionally less storage, namely 2.7 MB, again in line with the representation overhead.
Conclusions
In this paper we have shown, for the first time, how relevance feedback can be used to improve the performance of CBsIR. We presented a relevance feedback-based technique, which is based on a tile re-weighting scheme that assigns penalties to each tile of database images and updates those of all relevant images using both the positive and negative examples identified by the user. The user's feedback is used to refine the image similarity measure by weighting the tile distances between the query and the database image tiles with their corresponding tile penalties. We combine this learning method with the BIC approach for image modeling to improve the performance of content-based sub-image retrieval. Our results on an image database of over 10,000 images suggest that the learning method is quite effective for CBsIR. While using fewer colors within BIC reduces storage overhead and speeds up query processing, it does not substantially affect retrieval effectiveness in the long term. The main drawback is that the system takes longer to "learn", making the overall retrieval task longer. A few possible avenues for further investigation include the design of a disk-based access structure for the hierarchical tree (to enhance scalability for larger databases), the use of better (more powerful yet compact) representations for the tile features (possibly removing the background of the images), and the incorporation of more sophisticated machine learning techniques to shorten the gap between low-level image features and high-level semantic contents of images, so as to better understand the user's intention.
| 7,482 |
0904.4041
|
1488291195
|
The typical content-based image retrieval problem is to find images within a database that are similar to a given query image. This paper presents a solution to a different problem, namely that of content-based sub-image retrieval, i.e., finding images from a database that contain another image. Note that this is different from finding a region in a (segmented) image that is similar to another image region given as a query. We present a technique for CBsIR that explores relevance feedback, i.e., the user's input on intermediary results, in order to improve retrieval efficiency. Upon modeling images as a set of overlapping and recursive tiles, we use a tile re-weighting scheme that assigns penalties to each tile of the database images and updates the tile penalties for all relevant images retrieved at each iteration using both the relevant and irrelevant images identified by the user. Each tile is modeled by means of its color content using a compact but very efficient method which can, indirectly, capture some notion of texture as well, despite the fact that only color information is maintained. Performance evaluation on a largely heterogeneous dataset of over 10,000 images shows that the system can achieve a stable average recall value of 70% within the top 20 retrieved (and presented) images after only 5 iterations, with each such iteration taking about 2 seconds on an off-the-shelf desktop computer.
|
In @cite_10 a new method called HTM (Hierarchical Tree Matching) for the CBsIR problem was proposed. It has three main components: (1) a tree structure that models a hierarchical partition of images into tiles using color features, (2) an index sequence to represent the tree structure (allowing fast access during the search phase), and (3) a search strategy based on the tree structures of both database images and the query image. Since the tree structure presented in @cite_10 is re-used in our work, we detail it in the following.
|
{
"abstract": [
"This paper deals with the problem of finding images that contain a given query image, the so-called content-based sub-image retrieval. We propose an approach based on a hierarchical tree that encodes the color feature of image tiles which are in turn stored as an index sequence. The index sequences of both candidate images and the query sub-image are then compared in order to rank the database images suitability with respect to the query. In our experiments, using 10,000 images and disk-resident metadata, for 60Σ (80Σ) of the queries the relevant image, i.e., the one where the query sub-image was extracted from, was found among the first 10 (50) retrieved images in about 0.15 sec."
],
"cite_N": [
"@cite_10"
],
"mid": [
"2007872281"
]
}
|
Content-Based Sub-Image Retrieval with Relevance Feedback
|
Most of the content-based image retrieval (CBIR) systems perform retrieval based on a full image comparison, i.e., given a query image the system returns overall similar images. This is not useful if users are also interested in images from the database that contain an image (perhaps an object) similar to a query image. We call this searching process Content-Based sub-Image Retrieval (CBsIR), and it is defined as follows [18]: given a query image Q and an image database S, retrieve from S those images which contain Q according to some notion of similarity. To illustrate this consider Figure 1, which displays an example query image and its relevant answer set. Figure 2 shows 3 images of such an answer set, and their respective ranks, retrieved within the top 20 matches after CBsIR is performed. Note that the other 17 images returned are considered non-relevant to the query. Now assume that the user is given the opportunity to mark those 3 images as relevant and all other 17 as irrelevant, i.e., the user is allowed to provide relevance feedback. Figure 3 shows the relevant images retrieved (along with their rank) after taking such feedback into account. Note that all images previously obtained were ranked higher and also new images were found and ranked high as well. The sub-image retrieval problem we consider is similar to region-based image retrieval (RBIR), e.g. [1,9], since the goal may also be to retrieve images at object-level. However, there is a fundamental difference between these two. The CBsIR problem is to search for an image, given as a whole, which is contained within another image, whereas in RBIR one is searching for a region, possibly the result of some image segmentation. The former is more intuitive since users can provide a query image as in traditional CBIR, and unlike the latter, it does not rely on any type of segmentation preprocessing. Unfortunately, automatic image segmentation algorithms usually lead to inaccurate segmentation of the image when trying to achieve homogeneous visual properties. Sometimes the obtained regions are only parts of a real object and should be combined with some neighbor regions so as to represent a meaningful object. Thus, complex distance functions are generally used to compare segmented images at query time. Also, the number and size of regions per image are variable and a precise representation of the obtained regions may be storage-wise expensive. Furthermore, since region-based queries are usually performed after the image segmentation and region description steps, it clearly puts some restriction on the user's expression of his/her information need depending on how well the segmentation results match the semantics of images, even though the user can explicitly select any detected region as query region. In those image retrieval systems where images are heterogeneous, rich in texture, very irregular and variable in contents, accurate regions are hard to obtain, making RBIR likely to perform poorly. The main contribution of this paper is to realize CBsIR by employing relevance feedback, in order to capture the user's intentions at query time. As we discuss in the next section, relevance feedback is an interactive learning technique which has already been demonstrated to boost performance in CBIR and RBIR systems. Despite the great potential shown by relevance feedback, to the best of our knowledge there is no published research that uses it in the context of CBsIR, thus positioning our work as unique in this domain.
The remainder of this paper is organized as follows. In the next section we discuss some related work. We also summarize the BIC method [19] for CBIR and how we adopt it for the CBsIR system we propose. (As we shall discuss, BIC is used as a building block when modeling images within our proposed approach.) Our retrieval strategy uses query refinement as well as the incorporation of the user's judgement, via relevance feedback, into the image similarity measure. This forms the core contribution of this paper and is detailed in Section 3. In Section 4 we present and discuss experimental results, which support our claim of improved retrieval effectiveness. Finally, Section 5 concludes the paper and offers directions for future work.
Relevance Feedback within Traditional CBIR
The key issue in relevance feedback is how to use positive and negative examples to refine the query and/or to adjust the similarity measure. Early relevance feedback schemes for CBIR were adopted from feedback schemes developed for classical textual document retrieval. These schemes fall into two categories: query point movement (query refinement) and re-weighting (similarity measure refinement), both based on the well-known vector model.
The query point movement methods aim at improving the estimate of the "ideal query point" by moving it towards positive example points and away from the negative example points in the query space. One frequently used technique to iteratively update the query is the Rocchio's formula [13]. It is used in the MARS system [16], replacing the document vector by visual feature vectors. Another approach is to update query space by selecting feature models. The best way for effective retrieval is argued to be using a "society" of feature models determined by a learning scheme since each feature model is supposed to represent one aspect of the image content more accurately than others.
Re-weighting methods enhance the importance of a feature's dimensions, helping to retrieve relevant images while also reducing the importance of the dimensions that hinder the process. This is achieved by updating the weights of feature vectors in the distance metric. The refinement of the re-weighting method in the MARS system is called the standard deviation method.
Recent work has proposed more computationally robust methods that perform global feature optimization. The MindReader retrieval system [5] formulates a minimization problem on the parameter estimating process. Using a distance function that is not necessarily aligned with the coordinate axis, the MindReader system allows correlations between attributes in addition for different weights on each component. A further improvement over the MindReader approach [14] uses a unified framework to achieve the optimal query estimation and weighting functions. By minimizing the total distances of the positive examples from the revised query, the weighted average and a whitening transform in the feature space are found to be the optimal solutions. However, this algorithm does not use the negative examples to update the query and image similarity measure; and initially the user needs to input the critical data of training vectors and the relevance matrix into the system.
A task that can be improved as a result of experience can be considered a machine-learning task. Therefore, relevance feedback can be considered a learning method: the system learns from the examples provided as feedback by a user, i.e., his/her experience, to refine the retrieval results. The aforementioned query-movement method, represented by Rocchio's formula, and the re-weighting method are both simple learning methods. However, users are usually reluctant to provide a large number of feedback examples, i.e., the number of training samples is very small. Furthermore, the number of feature dimensions in CBIR systems is also usually high. Thus, learning from small training samples in a very high-dimensional feature space makes many learning methods, such as decision tree learning and artificial neural networks, unsuitable for CBIR.
There are several key issues in addressing relevance feedback in CBIR as a small sample learning problem. First, how to quickly learn from small sets of feedback samples to improve the retrieval accuracy effectively; second, how to accumulate the knowledge learned from the feedback; and third, how to integrate low-level visual and high-level semantic features in the query. Most of the research in literature has focused on the first issue. In that respect Bayesian learning has been explored and has been shown advantageous compared with other learning methods, e.g., [21]. Active learning methods have been used to actively select samples which maximize the information gain, or minimize entropy/uncertainty in decision-making. These methods enable fast convergence of the retrieval result which in turn increases user satisfaction. Chen et al [2] use Monte carlo sampling to search for the set of samples that will minimize the expected number of future iterations. Tong and Chang [20] propose the use of SVM active learning algorithm to select the sample which maximizes the reduction in the size of the version space in which the class boundary lies. Without knowing apriori the class of a candidate, the best search is to halve the search space each time. In their work, the points near the SVM boundary are used to approximate the most-informative points; and the most-positive images are chosen as the ones farthest from the boundary on the positive side in the feature space.
Relevance Feedback within RBIR
Relevance feedback has been introduced in RBIR systems for a performance improvement as it does for the image retrieval systems using global representations.
In [6], the authors introduce several learning algorithms using the adjusted global image representation to RBIR. First, the query point movement technique is considered by assembling all the segmented regions of positive examples together and resizing the regions to emphasize the latest positive examples in order to form a composite image as the new query. Second, the application of support vector machine (SVM) [20] in relevance feedback for RBIR is discussed. Both the one class SVM as a class distribution estimator and two classes SVM as a classifier are investigated. Third, a region re-weighting algorithm is proposed corresponding to feature re-weighting. It assumes that important regions should appear more times in the positive images and fewer times in all the images of the database. For each region, measures of region frequency RF and inverse image frequency IIF (analogous to the TF and IDF in text retrieval [22]) are introduced for the region importance. Thus the region importance is defined as its region frequency RF weighted by the inverse image frequency IIF, and normalized over all regions in an image. Also, the feedback judgement is memorized for future use by calculating the cumulate region importance. However, this algorithm only consider positive examples while ignoring the effect of the negative examples in each iteration of the retrieval results. Nevertheless, experimental results on a general-purpose image database demonstrate the effectiveness of those proposed learning methods in RBIR.
CBsIR without Relevance Feedback
The paper by Leung and Ng [8] investigates the idea of either enlarging the query sub-image to match the size of an image block obtained by the four-level multiscale representation of the database images, or conversely contracting the image blocks of the database images so that they become as small as the query sub-image. The paper presents an analytical cost model and focuses on avoiding I/O overhead during query processing time. To find a good strategy to search multiple resolutions, four techniques are investigated: the branch-and-bound algorithm, Pure Vertical (PV), Pure Horizontal (PH) and Horizontal-and-Vertical (HV). The HV strategy is argued to be the best considering efficiency. However, the authors do not report clear conclusions regarding the effectiveness (e.g., Precision and/or Recall) of their approach.
The authors of [18] consider global feature extraction to capture the spatial information within image regions. The average color and the covariance matrix of the color channels in L*a*b color space are used to represent the color distribution. They apply a three level non-recursive hierarchical partition to achieve multiscale representation of database images by overlapping regions within them. Aiming at reducing the index size of these global features, a compact abstraction for the global features of a region is introduced. As well, a new distance measure between such abstractions is introduced for efficiently searching through the tiles from the multi-scale partition strategy. This distance is called inter hierarchical distance (IHD) since it is taken between feature vectors of different hierarchical levels of the image partition. The IHD index is a two dimensional vector which consumes small storage space. The search strategy is a simple linear scan of the index file, which assesses the similarity between the query image and a particular database image as well as all its sub-regions using their IHD vectors. Finally, the minimum distance found is used to rank this database image.
In [11] a new method called HTM (Hierarchical Tree Matching) for the CBsIR problem was proposed. It has three main components: (1) a tree structure that models a hierarchical partition of images into tiles using color features, (2) an index sequence to represent the tree structure (allowing fast access during the search phase), and (3) a search strategy based on the tree structures of both database images and the query image. Since the tree structure presented in [11] is re-used in our work, we detail it in the following. To model an image, a grid is laid on it yielding a hierarchical partition and tiles. Although the granularity could be arbitrary, we have obtained good results using a 4x4 grid resulting in a three-level multiscale representation of the image (similarly to what was done in [8] and [18]). The hierarchical partition of an image and its resulting tree structure are illustrated in Figure 4. There are three levels in the hierarchical structure. The highest level is the image itself. For the second level the image is decomposed into 3x3 rectangles with each side having half the length of the whole image, yielding 9 overlapping tiles. The lowest level consists of 4x9=36 rectangles, since each tile of the second level is partitioned into 4 non-overlapping sub-tiles. Note that, to exclude redundancy in the CBsIR system, only the indices of the 4x4=16 unique tiles in the lowest level are stored with a small structure for relationship information. This tiling scheme is obviously not unique and, as long as a well-formed hierarchy of tiles is used to model the image, the technique we propose can still be applied after corresponding adjustments. The average color of the image tiles in the RGB color space is associated with the nodes in the tree structures for images. Thus, every database image is represented as a series of tiles, each of which is mapped to a subtree of the tree modeling the image.
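As an illustration of the partition scheme just described, the fragment below is a minimal sketch (not the authors' implementation) of how the 1 + 9 + 16 tiles of the three-level hierarchy could be generated for an image held as a NumPy array; the function name, the use of NumPy, and the handling of image sizes not divisible by four are our own assumptions.

```python
import numpy as np

def tile_hierarchy(img: np.ndarray):
    """Return the 1 + 9 + 16 tiles of the three-level partition.

    Level 0: the whole image.
    Level 1: 9 overlapping tiles, each half the image side, anchored on a 3x3 grid.
    Level 2: 16 unique non-overlapping quarter-side tiles on a 4x4 grid (each
             level-1 tile is the union of 4 of them).
    """
    h, w = img.shape[:2]
    tiles = {0: [img], 1: [], 2: []}

    # Level 1: half-sized tiles whose top-left corners step by a quarter of the image.
    th, tw = h // 2, w // 2
    for r in range(3):
        for c in range(3):
            y, x = r * h // 4, c * w // 4
            tiles[1].append(img[y:y + th, x:x + tw])

    # Level 2: the 4x4 grid of unique quarter tiles, stored without repetition.
    qh, qw = h // 4, w // 4
    for r in range(4):
        for c in range(4):
            tiles[2].append(img[r * qh:(r + 1) * qh, c * qw:(c + 1) * qw])
    return tiles

if __name__ == "__main__":
    demo = np.zeros((128, 128, 3), dtype=np.uint8)   # placeholder image
    t = tile_hierarchy(demo)
    print(len(t[0]), len(t[1]), len(t[2]))           # prints: 1 9 16
```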
An index sequence representing the predefined parent-child relationship (given by the predefined order of sequence in the index) for the tree structure is stored on secondary storage and used for fast retrieval. Details about the index sequence structure can be found elsewhere [11]; in short, it resembles a priority tree where the relative order among the tree nodes reflects the relative order of the entries, and which can be efficiently mapped onto an array structure. Such a structure allows one to efficiently traverse the necessary indices for computing (sub)image similarity. The searching process is accomplished by "floating" the tree structure of the query image over the full tree structure of the candidate database image, shrinking the query's tree structure so that it is comparable with the candidate database image's trees at each level of the hierarchical structure. The minimum distance from tree comparisons at all hierarchical levels, indicating the best matching tile from a database image, is used as the distance between the database image and the query. Differently from [18], the HTM search strategy considers local information of the images' tiles represented by leaf nodes in the subtree structures. The average of the distance values among the corresponding leaf nodes is taken as the distance between the tree structures of the query image and a certain tile of the database image at any hierarchical level.
Even though different datasets were used, experiments detailed in [11] strongly suggest that the proposed approach yields better retrieval accuracy compared to [18], at the cost of small storage overhead.
The BIC-based Image Abstraction
A straightforward way to model an image is to use its average color. This is obviously not effective in any non-trivial situation. Another simple, and in many situations cost-effective, means is to use a global color histogram (GCH) (cf. [10]). A common criticism of GCHs is that they are unable to capture any notion of spatial distribution. To address this, several other approaches have been proposed, but they add complexity as a trade-off in order to gain effectiveness. Nevertheless, the use of color only, without any notion of spatial distribution, may be effective if one is able to capture other features of the images, e.g., texture. That is exactly the advantage of the BIC technique proposed in [19], which we re-use within our proposal.
The image analysis algorithm of BIC classifies each pixel as either interior, when its color is the same as that of its neighbors, or otherwise as border, and two normalized histograms are computed considering only the interior pixels and the border pixels, respectively. That is, for each color two histogram bins exist: one in the border pixel histogram and one in the interior pixel histogram. This allows a more informed color distribution abstraction and captures, implicitly, a notion of texture.
To illustrate the idea consider two images: one composed of two equally sized solid color blocks of different colors, say C1 and C2, and another one where half of the pixels have color C1 and are randomly distributed, while the other half of the pixels have color C2 and are also randomly distributed. Clearly the BIC histograms of these images are quite different: the first will have almost only interior pixels and the second will have almost only border pixels. This will yield a low similarity measure, which is indeed the case. Note that the global color histogram, a standard CBIR technique, would be identical for both images, misleading one to think the images were very similar. Note also that the difference in the histograms suggests a very different texture in the images, which, on top of the possible color differences, enhances the capability of distinguishing among images even further.
For histogram comparison within BIC, the dLog distance function is used to diminish the effect that a large value in a single histogram bin dominates the distance between histograms, no matter the relative importance of this single value [10,12]. The basic motivation is the observation that classical techniques based on global color histograms treat all colors equally, regardless of their relative concentration. However, the perception of a stimulus, color in images in particular, is believed to follow a "sigmoidal" curve [12]: a relative increment in a stimulus is perceived more clearly when the intensity of the stimulus is smaller than when it is larger. For instance, a change from 10% to 20% of a color is perceived more clearly than a change from 85% to 95%. Indeed, this has been well observed in many other phenomena involving how sensitive one is (including animals) to different stimuli [3]. Thus, the distance function is defined as
dLog(a, b) = Σ_{i=0}^{M} |f(a[i]) − f(b[i])|, where f(x) = 0 if x = 0, f(x) = 1 if 0 < x ≤ 1, and f(x) = ⌈log_2 x⌉ + 1 otherwise,
with M being the number of histogram bins. The compressed values f(x) lie in the range [0, 9], requiring only 4 bits of storage per histogram bin. This allows a substantial reduction in storage, and yet a reasonably fine discretization of the bins.
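To make the BIC abstraction and the dLog comparison concrete, the following is a small sketch under several assumptions of ours: pixels are already quantized to color indices, histogram bins are scaled to [0, 255] so that f(x) stays within the 4-bit range [0, 9], image-boundary pixels are treated as if their missing neighbours matched them, and the ceiling in f is assumed so that the compressed values are integers.

```python
import numpy as np

def bic_histograms(quantized: np.ndarray, n_colors: int) -> np.ndarray:
    """Concatenated border/interior histograms (bins scaled to [0, 255]) for an
    image whose pixels already hold color indices in [0, n_colors)."""
    same = np.ones(quantized.shape, dtype=bool)
    # A pixel is 'interior' if its existing 4-connected neighbours all share its color;
    # missing neighbours at the image boundary are treated as matching (an assumption).
    same[:-1, :] &= quantized[:-1, :] == quantized[1:, :]
    same[1:, :]  &= quantized[1:, :]  == quantized[:-1, :]
    same[:, :-1] &= quantized[:, :-1] == quantized[:, 1:]
    same[:, 1:]  &= quantized[:, 1:]  == quantized[:, :-1]
    interior = np.bincount(quantized[same].ravel(), minlength=n_colors)
    border   = np.bincount(quantized[~same].ravel(), minlength=n_colors)
    interior = 255.0 * interior / max(interior.sum(), 1)   # normalize each histogram
    border   = 255.0 * border   / max(border.sum(), 1)
    return np.concatenate([border, interior])

def f(x: np.ndarray) -> np.ndarray:
    """Logarithmic compression of bin values: 0, 1, or ceil(log2 x) + 1."""
    x = np.asarray(x, dtype=float)
    return np.where(x == 0, 0.0, np.ceil(np.log2(np.maximum(x, 1.0))) + 1.0)

def dlog(a: np.ndarray, b: np.ndarray) -> float:
    """dLog distance between two BIC histograms with bins scaled to [0, 255]."""
    return float(np.abs(f(a) - f(b)).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy  = rng.integers(0, 16, size=(64, 64))               # mostly border pixels
    blocks = np.repeat(np.arange(16), 256).reshape(64, 64)    # solid stripes: mostly interior
    print(dlog(bic_histograms(noisy, 16), bic_histograms(blocks, 16)))
```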
The BIC approach was shown in [19] to outperform several other CBIR approaches and, as such, we adopt it in our CBsIR proposal to extract and compare the visual feature of each tile with the goal of improving the retrieval accuracy.
Relevance Feedback for CBsIR
Despite the great potential of relevance feedback shown in CBIR systems using global representations and in RBIR systems, to the best of our knowledge there is no research that uses it within CBsIR systems. In this section we present our solution for CBsIR by using relevance feedback to learn the user's intention. Our relevance feedback approach has three main components: (1) a tile re-weighting scheme that assigns penalties to each tile of database images and updates those tile penalties for all relevant images retrieved at each iteration using both the relevant (positive) and irrelevant (negative) images identified by the user; (2) a query refinement strategy that is based on the tile re-weighting scheme to approach the most informative query according to the user's intention; (3) an image similarity measure that refines the final ranking of images using the user's feedback information. Each of these components is explained in detail in the following subsections.
Tile Re-Weighting Scheme
Previous research in RBIR [7,6] has proposed region re-weighting schemes for relevance feedback. In this work, we design our tile re-weighting scheme by specializing the technique presented in [7] to accommodate our tile-oriented (not region-oriented) HTM approach for CBsIR. It should be emphasized that instead of considering all the images in the database to compute the parameters for region weight [6] (which is computationally expensive), our tile re-weighting scheme uses only the positive and negative examples identified by the user to update the tile penalty of the positive images only, which is much more efficient. Moreover, the region re-weighting scheme in [7] uses a predefined similarity threshold to determine whether the region and the image are similar or not; otherwise the comparison of region pairs would become too expensive since images might consist of different and large numbers of regions. This threshold is sensitive and subject to change for different kinds of image datasets. Thus, how to obtain the right threshold is yet another challenge for the relevance feedback method in RBIR. However, our RF method for the CBsIR problem does not need any threshold because the number of obtained tiles is the same (and small) for each database image and there exists an implicit relationship between the tiles, which makes it easier to compare them.
In our system, the user provides feedback information by identifying positive and negative examples from the retrieved images. The basic assumption is that important tiles should appear more often in positive images than unimportant tiles, e.g., "background tiles" should yield to "theme tiles" in positive images. On the other hand, important tiles should appear less often in negative images than unimportant tiles. Following the principle of "more similar means better matched, thus less penalty", we assign a penalty to every tile that represents the database image for the matching process. The user's feedback information is used to estimate the "tile penalties" for all positive images, which also refines the final ranking of images. During the feedback iterations, the user does not need to specify which tile of a certain positive image is similar to the query; doing so would make the problem simpler to solve, but only at an additional cost to the user.
Next, we introduce some definitions used to determine the tile penalty and formalize the overall relevance feedback process.
Definition 1: The distance between two tiles T_a and T_b from images I_a and I_b, respectively, is
DT(T_a, T_b) = (1/m) Σ_{i=1}^{m} d(Feature(T_{a,i}), Feature(T_{b,i})),
where T_{a,i} and T_{b,i} are sub-tiles of T_a and T_b respectively, m is the number of unique leaf nodes in the tiles' tree structures at any hierarchical level (if already at the leaf level, m = 1), and the distance function d is to be instantiated with some particular measure based on the result of the feature extraction done by the Feature function on the tiles, e.g., BIC's dLog() function defined in the previous section. •
Definition 2: The penalty for a certain tile i from a database image after k iterations is defined as TP_i(k), i = 0, ..., N_T, where N_T + 1 is the number of tiles per database image, and TP_i(0) is initialized as 1/(N_T + 1). • For instance, in Figure 4, N_T + 1 = 1 + 9 + 16, i.e., it is equal to the number of nodes in the tree structure representing the hierarchical partition of a database image; for the lowest level, only unique nodes count.
Definition 3: For each tile from a positive image, we define a measure of the distance DTS between a tile T and an image set IS = {I_1, I_2, ..., I_n}. This reflects the extent to which the tile is consistent with other positive images in the feature space. Intuitively, the smaller this value, the more important this tile is in representing the user's intention:
DTS(T, IS) = Σ_{i=1}^{n} exp(DT(T, I_i^0)) if T is at the full tree level, and DTS(T, IS) = Σ_{i=1}^{n} exp(min_{j=1..N_T} DT(T, I_i^j)) if T is at a subtree level,
where N_T in this case is the number of tiles at the current subtree level, and I_i^j denotes the j-th tile of image I_i (I_i^0 being the full image). •
Assuming that I is one of the identified positive example images, we can compute the tile penalty of image I, which consists of tiles {T_0, T_1, ..., T_{N_T}}. The user provides positive and negative example images during each k-th iteration of feedback, denoted respectively as IS^+(k) = {I_1^+(k), ..., I_p^+(k)} and IS^-(k) = {I_1^-(k), ..., I_q^-(k)}, where p + q is typically much smaller than the size of the database.
Based on the above preparations, we now come to the definition of tile penalty.
Definition 4: For all (positive) images, the tile penalty of T_i after k iterations of feedback is computed (and normalized) as
TP_i(k) = (W_i × DTS(T_i, IS^+(k))) / (Σ_{j=0}^{N_T} W_j × DTS(T_j, IS^+(k))),
where the weight W_i = 1/DTS(T_i, IS^-(k)) acts as a penalty, reflecting the influence of the negative examples. • This captures the intuition that a tile from a positive example image should be penalized if it is similar to negative examples. Basically, we compute the distances DTS between a particular tile T and the positive image set IS^+ as well as the negative image set IS^-, respectively, to update the penalty of that tile from a positive example image. The inverse of the tile's distance from the negative image set is used to weight its corresponding distance from the positive image set.
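A minimal sketch of this tile re-weighting step (Definitions 1-4) follows; it is not the authors' code. Definition 1's DT (the average distance over corresponding leaf sub-tiles) is abstracted into a pluggable tile distance d, a tiled image is assumed to be stored as a dictionary mapping each hierarchy level to its list of tile features, and d is assumed to return values in a small range so that the exponentials do not overflow.

```python
import math
from typing import Callable, Dict, List, Sequence

Feature = Sequence[float]
TiledImage = Dict[int, List[Feature]]   # level -> list of tile feature vectors

def dts(tile: Feature, level: int, images: List[TiledImage],
        d: Callable[[Feature, Feature], float]) -> float:
    """DTS of Definition 3: distance between one tile and a set of images."""
    total = 0.0
    for img in images:
        if level == 0:                                    # full-tree level
            total += math.exp(d(tile, img[0][0]))
        else:                                             # subtree level: best-matching tile
            total += math.exp(min(d(tile, t) for t in img[level]))
    return total

def update_tile_penalties(pos_img: TiledImage,
                          positives: List[TiledImage],
                          negatives: List[TiledImage],
                          d: Callable[[Feature, Feature], float]) -> Dict[int, List[float]]:
    """Tile penalties of one positive image after a feedback round (Definition 4).
    Assumes at least one positive example was given; with no negatives the weight
    W_i defaults to 1."""
    weighted: Dict[int, List[float]] = {}
    for level, tiles in pos_img.items():
        weighted[level] = []
        for t in tiles:
            w = 1.0 / dts(t, level, negatives, d) if negatives else 1.0   # W_i
            weighted[level].append(w * dts(t, level, positives, d))       # W_i * DTS+
    norm = sum(sum(vals) for vals in weighted.values())
    return {lvl: [v / norm for v in vals] for lvl, vals in weighted.items()}
```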
Let us now illustrate the above methodology with a simple example, which also motivates the notion of tile penalty. For simplicity, assume that the color palette consists of only three colors: black, grey and white. Figure 6 shows the top 3 retrieved images and the user's feedback judgement. Image I_1 is marked as a positive example since it actually contains the query image, which exactly represents the sub-image retrieval problem we are dealing with. Image I_2 is also marked as a positive example because it is an enlargement of the query image (and therefore contains it as well). For the sake of illustration, assume a two-level multi-scale representation of database images is used, as in Figure 7.
The tile penalties for the tiles of each database image are initialized as 0.1 for the 10 tiles, i.e., TP_i(0) = 0.1, i ∈ [0, 9]. Now, take tile T_1 for example. According to Definition 3, we need to compute the distances DTS between T_1 and the positive/negative image sets. In order to do this, firstly, the distances between T_1 and all tiles at the corresponding subtree levels of all the images in the positive/negative image sets should be obtained by Definition 1. Then, using Definition 4, the new penalty of T_1 is updated from 0.1 to 0.090 correspondingly. The penalties of the other tiles are updated in the same way during each feedback iteration. We illustrate the new values of all tile penalties for database image I_1 as a positive example after one feedback iteration in Figure 7. We can see that after the user provides feedback information, some tiles lose some weight while others gain. For instance, T_1, T_2, T_3 and T_9 receive less penalty now because they only contain the colors grey and/or black, which are also in the query. T_0, T_4, T_5, T_7 and T_8 are penalized more since they all contain the color white. The new weights for these tiles generally follow the trend that the higher the percentage of white color, the higher the penalty. T_6, which is a rotation of the query image, maintains its weight for this iteration. This means that our system is to some extent also capable of perceiving changes such as rotation. Besides, taking a closer look at the updated tile penalties of positive image I_1, T_1 now receives more penalty than T_3 although they are similar to the query image to the same degree. Note that, according to Definition 4, both the positive and the negative example images are used to calculate the new tile penalties, and we penalize a tile more if it is also somewhat more similar to the negative example images compared with other tiles in the positive example image. Thus it is reasonable that the tile penalty for T_1 appears higher than that for T_3 after feedback learning, since T_1 contains some black color, which is also in the negative example image I_3, while T_3 contains only the grey color.
Query Feature Update
The relevance feedback process using the query refinement strategy is based on the tile re-weighting scheme and all positive and negative example images. The main concern is that we need to maintain as much as possible the original feature of the query image while introducing new feature elements that would capture more new relevant images. Considering the hierarchical tree structure of the query image, we use the most similar tile (with minimum tile penalty) at every subtree level of each positive image to update the query feature at the corresponding subtree level.
Definition 5: The updated query feature after k iterations is
qn_l^k[j] = ( Σ_{i=1}^{p} (1 − TPmin_{i,l}(k)) × Pos_{i,l}^k[j] ) / ( Σ_{i=1}^{p} (1 − TPmin_{i,l}(k)) ),
where qn_l^k is the new feature with M dimensions (indexed by j) for a subtree (tile) at the l-th level of the tree structure of the query image after k iterations, TPmin_{i,l}(k) is the minimum tile penalty for a subtree (tile) found at the l-th level of the tree structure of the i-th positive image after k iterations, Pos_{i,l}^k is the feature of the subtree (tile) with minimum tile penalty at the l-th level of the i-th positive image's tree structure after k iterations, and p is the number of positive images given by the user at this iteration. • Intuitively, we use a weighted average to update the feature of each subtree (tile) of the query, based on the features of the tiles that have minimum tile penalties within the respective positive images. In this way, we try to approach the optimal query that carries the most information needed to retrieve as many images relevant to the query as possible.
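The query refinement of Definition 5 could then be sketched as below, re-using the tiled-image and penalty dictionaries assumed in the previous fragment; the function name and the NumPy representation of features are ours.

```python
import numpy as np

def update_query_feature(level: int, positives: list, penalties: list) -> np.ndarray:
    """Definition 5: weighted average of the minimum-penalty tile, per level.
    positives[i] is a tiled-image dict (level -> tile features) and penalties[i]
    the matching penalty dict for the i-th positive image; at least one positive
    example is assumed."""
    num, den = 0.0, 0.0
    for img, pen in zip(positives, penalties):
        i_min = int(np.argmin(pen[level]))           # tile with minimum penalty at this level
        w = 1.0 - pen[level][i_min]                  # weight (1 - TPmin)
        num = num + w * np.asarray(img[level][i_min], dtype=float)
        den += w
    return num / den if den > 0 else np.asarray(num)
```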
Image Similarity
With the updated query feature and tile penalties for positive images, we can now define the distance between images and the query for ranking evaluation at each feedback iteration. In order to locate the best match to the query sub-image, our image similarity measure finds the minimum of the distances between the database image tiles and the query (recall that both the database image and the query sub-image are modeled by the same tree structure) at the corresponding hierarchical level, weighted by the tile penalty of the corresponding database image tiles.
Definition 6: The distance between the (updated) query image Q and a database image I at the k-th iteration is
DI^k(I, Q) = min_{i=0..N_T} TP_i(k − 1) × DT(I_i, Q_j),
where N_T + 1 is the number of all subtrees (tiles) in the tree structure of a database image, and TP_i(k − 1) is the tile penalty of the i-th tile of image I after k − 1 iterations. • For the comparison of full tree structures, i = 0 and j = 0, indicating the full tree structures of the database image and the query image. For the comparison of subtree structures, i = 1..N_l for each 1 ≤ j ≤ (L − 1), where N_l is the number of subtree structures at the l-th level and L is the number of levels of the tree structure mapped from the hierarchical partition; j indicates the subtree structure at a particular level of the query image's tree structure, obtained by shrinking the original query tree structure so that it is comparable with the subtree structures of database images.
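A sketch of the ranking step of Definition 6 under the same assumed representation is given below; in particular, letting query[level][0] stand for the query tree shrunk to a given level is our simplification of the (i, j) index pairs in the definition.

```python
def image_distance(db_img: dict, penalties: dict, query: dict, d) -> float:
    """Definition 6: the minimum penalty-weighted distance between the query and
    any tile of the database image. query[level][0] is assumed to hold the feature
    of the query tree shrunk to be comparable with tiles at that level."""
    best = float("inf")
    for level, tiles in db_img.items():
        q_feat = query[level][0]
        for t, tp in zip(tiles, penalties[level]):
            best = min(best, tp * d(t, q_feat))
    return best

def rank_database(database: list, all_penalties: list, query: dict, d, top_k: int = 20):
    """Rank database images by increasing distance to the query and keep the top k."""
    scores = sorted((image_distance(img, pen, query, d), idx)
                    for idx, (img, pen) in enumerate(zip(database, all_penalties)))
    return [idx for _, idx in scores[:top_k]]
```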
Finally, the overall relevance feedback process for the CBsIR system can be summarized in the following algorithm:
1. The user submits a query (sub-)image.
2. The system retrieves the initial set of images using the proposed similarity measure, which consists of database images containing tiles similar to the query sub-image.
3. The user marks the presented images as positive (relevant) or negative (irrelevant) examples.
4. The system updates the tile penalties of the positive images using both the positive and negative examples (Definition 4).
5. The system refines the query feature using the minimum-penalty tiles of the positive images (Definition 5).
6. The revised query and the new tile penalties for database images are used to compute the ranking score for each image and sort the results (a minimal sketch of this loop is given right after the list).
7. Show the new retrieval results and, if the user wishes to continue, go to step 3.
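The loop in steps 1-7 could be driven by a small generic routine such as the one below; the function signatures are our own, and the concrete rank and learn functions are assumed to be the ones outlined in the previous sketches.

```python
def feedback_session(query, database, rank_fn, learn_fn, get_feedback, max_iter=10):
    """Driver for steps 1-7: rank, collect feedback, learn, re-rank.
    rank_fn(query, database, state) -> ordered list of image ids (top 20 are shown)
    learn_fn(query, database, state, positives, negatives) -> (new_query, new_state)
    get_feedback(shown) -> (positive ids, negative ids), e.g. collected from a UI
    `state` stands for whatever the learner accumulates (here, the tile penalties)."""
    state = None                                    # None is taken to mean uniform penalties
    ranking = rank_fn(query, database, state)       # steps 1-2: initial retrieval
    for _ in range(max_iter - 1):                   # up to max_iter - 1 feedback rounds
        positives, negatives = get_feedback(ranking[:20])               # step 3
        if not positives and not negatives:
            break
        query, state = learn_fn(query, database, state, positives, negatives)  # steps 4-5
        ranking = rank_fn(query, database, state)                       # step 6
    return ranking                                                      # step 7 (display)
```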
Experiments and Results
Before going further let us define the metrics we use to measure retrieval effectiveness. For certain applications, it is more useful that the system brings new relevant images (found due to the update of the query feature from previous feedback) forward into the top range rather than presenting the already retrieved relevant images again in the current iteration. For other applications, however, the opposite situation applies: the user is more interested in obtaining more relevant images during each iteration, keeping those s/he has already seen before. Given these observations, we use two complementary pairs of measures: the actual recall and precision, computed over all relevant images among the currently presented results, and the new recall and precision, computed only over the relevant images retrieved for the first time at the current iteration. The new recall and precision explicitly measure the learning aptitude of the system; ideally it retrieves more new relevant images as soon as possible. Moreover, we also measure the total number of distinct relevant images the system can find during all the feedback iterations. This is a history-based measure that implicitly includes some relevant images "lost" (out of the currently presented images) in the process. We call them cumulative recall and cumulative precision, defined as follows:
1. Cumulative Recall: the percentage of distinct relevant images from all iterations so far (not necessarily shown at the current iteration) over the number of relevant images in the predefined answer set. 2. Cumulative Precision: the percentage of distinct relevant images from all iterations so far over the number of returned (presented) images. Table 1 exemplifies the measures mentioned above, assuming the answer set for a query contains 3 images A, B, C and the number of returned (presented) images is 5.
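The cumulative measures can be computed in a few lines; the sketch below assumes the per-iteration lists of presented image ids are available, and its denominator for cumulative precision (the number of images presented per iteration) follows the reading given above.

```python
def cumulative_measures(relevant: set, shown_per_iteration: list, presented: int = 20):
    """Cumulative recall/precision over a sequence of feedback iterations.
    shown_per_iteration holds, for each iteration, the ids of the presented images."""
    seen_relevant = set()
    history = []
    for shown in shown_per_iteration:
        seen_relevant |= (set(shown) & relevant)               # distinct relevant so far
        cum_recall = len(seen_relevant) / len(relevant)        # over the answer set size
        cum_precision = len(seen_relevant) / presented         # over the images presented
        history.append((cum_recall, cum_precision))
    return history

# Example in the spirit of Table 1: answer set {A, B, C}, 5 images presented per iteration.
print(cumulative_measures({"A", "B", "C"},
                          [["A", "x", "y", "z", "w"], ["B", "A", "x", "u", "v"]],
                          presented=5))
```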
In addition to the above measures, we also evaluate storage overhead and query processing time.
We test the proposed relevance feedback approach using a heterogeneous image dataset consisting of 10,150 color JPEG images: a mixture of the public Stanford10k dataset and some images from one of COREL's CD-ROMs, each of which falls into a particular category; we use 21 such categories. Some categories do not have rotated or translated images, but others do. On average, each answer set has 11 images, and none of the answer sets has more than 20 images, which is the number of images we present to the user for feedback during each iteration. It is important to note that the queries and answer sets are not part of the Stanford10k dataset, in order to minimize the probability that other images, not contained in the expected answer set, could also be part of the answer but not accounted for. We manually crop part of a certain image from each of the above categories to form a query image set of 21 queries (one for each category). Images of the same categories serve as the answer sets for the queries (one sample query and its corresponding answer set are shown in Figure 1). The size of the query image varies, being on average 18% of the size of the database images. The following performance results are collected from the online demo available at http://db.cs.ualberta.ca/mn/CBsIR.html. (A sample of the two initial iterations using our system is presented in the Appendix.)
In our experiments, the maximum number of iterations explored is set to 10 (users will give feedback 9 times by pointing out which images are relevant (positive)/irrelevant (negative) to the query) and we present the top 20 retrieved images at each iteration. While within the same query session, the information collected at one step of the relevance feedback phase is used in the next step (as indicated in the definitions presented in Section 3), the information collected across different query sessions is not integrated into the search for the next queries -even if the very same query is submitted to the system again. I.e., we assume query sessions are independent; more specifically, once the user goes to the initial page, all accumulated learning is cleared. This consideration is based on the observation of the subjectivity of human perception and the fact that even the same person could perceive the same retrieval result differently at different times. As discussed earlier we use BIC histograms to model the contents of an image tile. The number of quantized colors in such histograms is therefore a parameter for BIC. We use two different values for this parameter, 16 and 64 colors, in order to evaluate the influence of the underlying tile model on the overall retrieval effectiveness. Table 2 shows how many, on average, iterations were necessary to have the original image (the one from which the query sub-image was extracted) placed within the top 20 images. It is clear that using 64 quantized colors is more efficient, as the hit rate of the original images is almost optimal. Even though this trend, i.e., the more colors the better the retrieval, is fairly intuitive, it is interesting to see that this advantage does not grow linearly with the number of colors across all experiments. That is to say, that even using a low number of colors one can still obtain fairly good results.
The retrieval accuracy using 64 quantized colors is shown in Figure 8 and Figure 9. As it can be clearly seen, after 5 iterations the system has already learned most of the information it could learn, i.e., the information gain (given by the new recall and new precision curves) is nearly null. On the other hand, after only 5 iterations the actual recall and actual precision values increased by 55% and 60% respectively. It is also noteworthy to mention that the stable actual precision value of nearly 40% is not as low as it may seem at first. The answer sets have an average of 11 images and since the user is presented with 20 images, the maximum precision one could get (on average) would be about 50% as almost half of the displayed images could not be considered relevant by construction. This interpretation leads to the proposal of the following measure:
• Normalized Precision: the actual precision over the maximum possible actual precision value.
Interestingly enough, careful consideration of such a measure shows that it is equivalent to the usual notion of (actual) recall. Indeed, consider R and A to be the sets of relevant answers and the retrieved answers with respect to a given query. The actual precision is then defined as |R ∩ A|/|A|. The maximum precision value one can obtain is |R|/|A|. When the former is divided by the latter one obtains |R ∩ A|/|R|, which is precisely the definition of actual recall. This leads to the argument that precision-based measures are not well suited for this type of scenario, where non-relevant images are very likely to be included in the retrieved set by construction. The actual recall, being concerned only with the relevant images, is a more realistic measure. Under this argument, 70% of stable actual recall (or normalized precision) after 5 iterations seems quite reasonable.
We also obtained about 85% for cumulative recall and about 50% for cumulative precision. The reason for the higher values than those for actual recall and actual precision is because some relevant images that may be "lost" in subsequent iterations are always accounted for in these measures.
Using 16 quantized colors, as one would expect, yields less accuracy than using 64 quantized colors. However, an interesting aspect shown in Figures 10 and 11 is that even though the amount of information (i.e., number of colors) was reduced by 75%, the effectiveness was reduced by at most 10% compared to the values in Figures 8 and 9. The cost of the loss of information is clearer when looking at the "learning aptitude": using 16 colors required twice as many iterations in order to bring the curves to a stable state. Still, this shows a sublinear dependence on the number of colors: using 4 times more colors yields only 10% more effectiveness and 2 times faster learning.
Another interesting observation, which supports the main advantage of using more colors for tile abstraction, can be seen when comparing the new precision and recall curves for different numbers of colors directly (Figures 12 and 13). Up until the 4th or 5th iteration, using 64 colors yields higher values, meaning that it is learning faster; after that point, it has learned basically what it could have learned. On the other hand, the curve for 16 colors shows that the method is still learning. Figure 14 shows the average time required to process a query during each iteration, i.e., to access all disk-resident data, complete the learning from the user's feedback at the current iteration (not applicable to the first iteration), obtain the distance between the query image and the database images, and sort them by their resulting ranks. The first iteration takes, on average, slightly less than 2 seconds when using 64 quantized colors and 0.6 second when using 16 quantized colors, whereas each subsequent iteration requires about 2.5 seconds and 1 second respectively for the two feature representations. This slight increase is due to the overhead of computing and updating the tile penalties at each iteration. As well, note that the gain in speed is proportional to the smaller number of colors used, i.e., using 64 colors is correspondingly slower than using only 16 quantized colors. Extracting image features from the image database, applying the BIC method, and generating the metadata file requires about 0.15 secs/image on a computer running Linux 2.4.20 with an AMD Athlon XP 1900+ CPU and 1GB of main memory, and is independent of the number of colors used; this procedure can be done off-line and should not be considered part of query processing overhead.
Finally, the storage cost for the disk-resident metadata is 10.5 MB (only about 20% the size of the image database), while using 16 quantized colors needs proportionally less storage, namely 2.7 MB, again proportional to the representation overhead.
Conclusions
In this paper we have shown, for the first time, how relevance feedback can be used to improve the performance of CBsIR. We presented a relevance feedback-based technique, based on a tile re-weighting scheme that assigns penalties to each tile of database images and updates those of all relevant images using both the positive and negative examples identified by the user. The user's feedback is used to refine the image similarity measure by weighting the tile distances between the query and the database image tiles with their corresponding tile penalties. We combine this learning method with the BIC approach for image modeling to improve the performance of content-based sub-image retrieval. Our results on an image database of over 10,000 images suggest that the learning method is quite effective for CBsIR. While using fewer colors within BIC reduces storage overhead and speeds up query processing, it does not substantially affect retrieval effectiveness in the long term. The main drawback is that the system takes longer to "learn", making the overall retrieval task a longer one. A few possible avenues for further investigation include the design of a disk-based access structure for the hierarchical tree (to enhance the scalability for larger databases), the use of better (more powerful yet compact) representations for the tile features, possibly removing the background of the images, and the incorporation of more sophisticated machine learning techniques to shorten the gap between low-level image features and high-level semantic contents of images so as to better understand the user's intention.
| 7,482 |
0904.4041
|
1488291195
|
The typical content-based image retrieval problem is to find images within a database that are similar to a given query image. This paper presents a solution to a different problem, namely that of content based sub-image retrieval, i.e., finding images from a database that contain another image. Note that this is different from finding a region in a (segmented) image that is similar to another image region given as a query. We present a technique for CBsIR that explores relevance feedback, i.e., the user's input on intermediary results, in order to improve retrieval efficiency. Upon modeling images as a set of overlapping and recursive tiles, we use a tile re-weighting scheme that assigns penalties to each tile of the database images and updates the tile penalties for all relevant images retrieved at each iteration using both the relevant and irrelevant images identified by the user. Each tile is modeled by means of its color content using a compact but very efficient method which can, indirectly, capture some notion of texture as well, despite the fact that only color information is maintained. Performance evaluation on a largely heterogeneous dataset of over 10,000 images shows that the system can achieve a stable average recall value of 70% within the top 20 retrieved (and presented) images after only 5 iterations, with each such iteration taking about 2 seconds on an off-the-shelf desktop computer.
|
A straightforward way to model an image is to use its average color. This is obviously not effective in any non-trivial situation. Another simple, and in many situations cost-effective, means is to use a global color histogram (GCH) (cf. @cite_14 ). A common criticism of GCHs is that they are unable to capture any notion of spatial distribution. To address this, several other approaches have been proposed (a comprehensive survey thereof is beyond the scope of this paper), but they add complexity as a trade-off in order to gain effectiveness. Nevertheless, the use of color only, without any notion of spatial distribution, may be effective if one is able to capture other features of the images, e.g., texture. That is exactly the advantage of the BIC technique proposed in @cite_0 , which we re-use within our proposal.
|
{
"abstract": [
"This paper presents (Border Interior pixel Classification), a compact and efficient CBIR approach suitable for broad image domains. It has three main components: (1) a simple and powerful image analysis algorithm that classifies image pixels as either border or interior, (2) a new logarithmic distance (dLog) for comparing histograms, and (3) a compact representation for the visual features extracted from images. Experimental results show that the BIC approach is consistently more compact, more efficient and more effective than state-of-the-art CBIR approaches based on sophisticated image analysis algorithms and complex distance functions. It was also observed that the dLog distance function has two main advantages over vectorial distances (e.g., L 1 ): (1) it is able to increase substantially the effectiveness of (several) histogram-based CBIR approaches and, at the same time, (2) it reduces by 50 the space requirement to represent a histogram.",
"Introduction. Multimedia Data Types and Formats. Multimedia Database Design Issues. Text Document Indexing and Retrieval. Indexing and Retrieval of Audio. Image Indexing and Retrieval. Video Indexing and Retrieval. Integrated Multimedia Indexing and Retrieval. Techniques and Data Structures for Efficient Multimedia Similarity Search. System Support for Distributed Multimedia Databases. Measurement of Multimedia Information Retrieval Effectiveness. Products, Applications, and New Developments."
],
"cite_N": [
"@cite_0",
"@cite_14"
],
"mid": [
"2039388884",
"2108486992"
]
}
|
Content-Based Sub-Image Retrieval with Relevance Feedback
|
Most of the content-based image retrieval (CBIR) systems perform retrieval based on a full image comparison, i.e., given a query image the system returns overall similar images. This is not useful if users are also interested in images from the database that contain an image (perhaps an object) similar to a query image. We call this searching process Content-Based sub-Image Retrieval (CBsIR), and it is defined as follows [18]: given a query image Q and an image database S, retrieve from S those images which contain Q according to some notion of similarity. To illustrate this consider Figure 1, which displays an example query image and its relevant answer set. Figure 2 shows 3 images of such an answer set, and their respective ranks, retrieved within the top 20 matches after CBsIR is performed. Note that the other 17 images returned are considered non-relevant to the query. Now assume that the user is given the opportunity to mark those 3 images as relevant and all other 17 as irrelevant, i.e., the user is allowed to provide relevance feedback. Figure 3 shows the relevant images retrieved (along with their rank) after taking such feedback into account. Note that all images previously obtained were ranked higher and also new images were found and ranked high as well. The sub-image retrieval problem we consider is similar to region-based image retrieval (RBIR), e.g. [1,9], since the goal may also be to retrieve images at object-level. However, there is a fundamental difference between these two. The CBsIR problem is to search for an image, given as a whole, which is contained within another image, whereas in RBIR one is searching for a region, possibly the result of some image segmentation. The former is more intuitive since users can provide a query image as in traditional CBIR, and unlike the latter, it does not rely on any type of segmentation preprocessing. Unfortunately, automatic image segmentation algorithms usually lead to inaccurate segmentation of the image when trying to achieve homogeneous visual properties. Sometimes the obtained regions are only parts of a real object and should be combined with some neighbor regions so as to represent a meaningful object. Thus, complex distance functions are generally used to compare segmented images at query time. Also, the number and size of regions per image are variable and a precise representation of the obtained regions may be storage-wise expensive. Furthermore, since region-based queries are usually performed after the image segmentation and region description steps, it clearly puts some restriction on the user's expression of his/her information need depending on how well the segmentation results match the semantics of images, even though the user can explicitly select any detected region as query region. In those image retrieval systems where images are heterogeneous, rich in texture, very irregular and variable in contents, accurate regions are hard to obtain, making RBIR likely to perform poorly. The main contribution of this paper is to realize CBsIR by employing relevance feedback, in order to capture the user's intentions at query time. As we discuss in the next section, relevance feedback is an interactive learning technique which has already been demonstrated to boost performance in CBIR and RBIR systems. Despite the great potential shown by relevance feedback, to the best of our knowledge there is no published research that uses it in the context of CBsIR, thus positioning our work as unique in this domain.
The remainder of this paper is organized as follows. In the next section we discuss some related work. We also summarize the BIC method [19] for CBIR and how we adopt it for the CBsIR system we propose. (As we shall discuss, BIC is used as a building block when modeling images within our proposed approach.) Our retrieval strategy uses query refinement as well as the incorporation of the user's judgement, via relevance feedback, into the image similarity measure. This forms the core contribution of this paper and is detailed in Section 3. In Section 4 we present and discuss experimental results, which support our claim of improved retrieval effectiveness. Finally, Section 5 concludes the paper and offers directions for future work.
Relevance Feedback within Traditional CBIR
The key issue in relevance feedback is how to use positive and negative examples to refine the query and/or to adjust the similarity measure. Early relevance feedback schemes for CBIR were adopted from feedback schemes developed for classical textual document retrieval. These schemes fall into two categories: query point movement (query refinement) and re-weighting (similarity measure refinement), both based on the well-known vector model.
The query point movement methods aim at improving the estimate of the "ideal query point" by moving it towards positive example points and away from the negative example points in the query space. One frequently used technique to iteratively update the query is the Rocchio's formula [13]. It is used in the MARS system [16], replacing the document vector by visual feature vectors. Another approach is to update query space by selecting feature models. The best way for effective retrieval is argued to be using a "society" of feature models determined by a learning scheme since each feature model is supposed to represent one aspect of the image content more accurately than others.
Re-weighting methods enhance the importance of a feature's dimensions, helping to retrieve relevant images while also reducing the importance of the dimensions that hinder the process. This is achieved by updating the weights of feature vectors in the distance metric. The refinement of the re-weighting method in the MARS system is called the standard deviation method.
Recent work has proposed more computationally robust methods that perform global feature optimization. The MindReader retrieval system [5] formulates a minimization problem on the parameter estimating process. Using a distance function that is not necessarily aligned with the coordinate axis, the MindReader system allows correlations between attributes in addition for different weights on each component. A further improvement over the MindReader approach [14] uses a unified framework to achieve the optimal query estimation and weighting functions. By minimizing the total distances of the positive examples from the revised query, the weighted average and a whitening transform in the feature space are found to be the optimal solutions. However, this algorithm does not use the negative examples to update the query and image similarity measure; and initially the user needs to input the critical data of training vectors and the relevance matrix into the system.
A task that can be improved as a result of experience can be considered a machine-learning task. Therefore, relevance feedback can be considered a learning method: the system learns from the examples provided as feedback by a user, i.e., his/her experience, to refine the retrieval results. The aforementioned query-movement method, represented by Rocchio's formula, and the re-weighting method are both simple learning methods. However, users are usually reluctant to provide a large number of feedback examples, i.e., the number of training samples is very small. Furthermore, the number of feature dimensions in CBIR systems is also usually high. Thus, learning from small training samples in a very high-dimensional feature space makes many learning methods, such as decision tree learning and artificial neural networks, unsuitable for CBIR.
There are several key issues in addressing relevance feedback in CBIR as a small sample learning problem. First, how to quickly learn from small sets of feedback samples to improve the retrieval accuracy effectively; second, how to accumulate the knowledge learned from the feedback; and third, how to integrate low-level visual and high-level semantic features in the query. Most of the research in literature has focused on the first issue. In that respect Bayesian learning has been explored and has been shown advantageous compared with other learning methods, e.g., [21]. Active learning methods have been used to actively select samples which maximize the information gain, or minimize entropy/uncertainty in decision-making. These methods enable fast convergence of the retrieval result which in turn increases user satisfaction. Chen et al [2] use Monte carlo sampling to search for the set of samples that will minimize the expected number of future iterations. Tong and Chang [20] propose the use of SVM active learning algorithm to select the sample which maximizes the reduction in the size of the version space in which the class boundary lies. Without knowing apriori the class of a candidate, the best search is to halve the search space each time. In their work, the points near the SVM boundary are used to approximate the most-informative points; and the most-positive images are chosen as the ones farthest from the boundary on the positive side in the feature space.
Relevance Feedback within RBIR
Relevance feedback has been introduced into RBIR systems to improve performance, just as it has been for image retrieval systems that use global representations.
In [6], the authors introduce several learning algorithms, adjusted from the global image representation, to RBIR. First, the query point movement technique is considered by assembling all the segmented regions of positive examples together and resizing the regions to emphasize the latest positive examples, in order to form a composite image as the new query. Second, the application of support vector machines (SVM) [20] in relevance feedback for RBIR is discussed. Both the one-class SVM as a class distribution estimator and the two-class SVM as a classifier are investigated. Third, a region re-weighting algorithm is proposed, corresponding to feature re-weighting. It assumes that important regions should appear more often in the positive images and less often in all the images of the database. For each region, measures of region frequency RF and inverse image frequency IIF (analogous to TF and IDF in text retrieval [22]) are introduced for the region importance. Thus the region importance is defined as its region frequency RF weighted by the inverse image frequency IIF, and normalized over all regions in an image. Also, the feedback judgement is memorized for future use by calculating the cumulative region importance. However, this algorithm considers only positive examples, ignoring the effect of the negative examples in each iteration of the retrieval results. Nevertheless, experimental results on a general-purpose image database demonstrate the effectiveness of the proposed learning methods in RBIR.
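As a rough illustration of the RF/IIF idea, a sketch follows; the "similar" predicate, the +1 smoothing and the normalization are assumptions made here for illustration, and the exact definitions used in [6] may differ.

```python
import math

def region_importance(regions, positive_images, database_images, similar):
    """regions: descriptors of the regions of one image.
    similar(r, img): True if img contains a region similar to r (assumed given).
    Returns one normalized importance score per region."""
    scores = []
    for r in regions:
        rf = sum(1 for img in positive_images if similar(r, img))   # region frequency
        n_r = sum(1 for img in database_images if similar(r, img))  # images containing r
        iif = math.log(len(database_images) / (1 + n_r))            # inverse image frequency
        scores.append(rf * iif)
    total = sum(scores) or 1.0
    return [s / total for s in scores]
```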
CBsIR without Relevance Feedback
The paper by Leung and Ng [8] investigates the idea of either enlarging the query sub-image to match the size of an image block obtained by the four-level multiscale representation of the database images, or conversely contracting the image blocks of the database images so that they become as small as the query sub-image. The paper presents an analytical cost model and focuses on avoiding I/O overhead during query processing time. To find a good strategy to search multiple resolutions, four techniques are investigated: the branch-and-bound algorithm, Pure Vertical (PV), Pure Horizontal (PH) and Horizontal-and-Vertical (HV). The HV strategy is argued to be the best considering efficiency. However, the authors do not report clear conclusions regarding the effectiveness (e.g., Precision and/or Recall) of their approach.
The authors of [18] consider global feature extraction to capture the spatial information within image regions. The average color and the covariance matrix of the color channels in L*a*b color space are used to represent the color distribution. They apply a three level non-recursive hierarchical partition to achieve multiscale representation of database images by overlapping regions within them. Aiming at reducing the index size of these global features, a compact abstraction for the global features of a region is introduced. As well, a new distance measure between such abstractions is introduced for efficiently searching through the tiles from the multi-scale partition strategy. This distance is called inter hierarchical distance (IHD) since it is taken between feature vectors of different hierarchical levels of the image partition. The IHD index is a two dimensional vector which consumes small storage space. The search strategy is a simple linear scan of the index file, which assesses the similarity between the query image and a particular database image as well as all its sub-regions using their IHD vectors. Finally, the minimum distance found is used to rank this database image.
In [11] a new method called HTM (Hierarchical Tree Matching) for the CBsIR problem was proposed. It has three main components: (1) a tree structure that models a hierarchical partition of images into tiles using color features, (2) an index sequence to represent the tree structure (allowing fast access during the search phase), and (3) a search strategy based on the tree structures of both database images and the query image. Since the tree structure presented in [11] is re-used in our work, we detail it in the following. To model an image, a grid is laid on it, yielding a hierarchical partition into tiles. Although the granularity could be arbitrary, we have obtained good results using a 4×4 grid resulting in a three-level multiscale representation of the image (similarly to what was done in [8] and [18]). The hierarchical partition of an image and its resulting tree structure are illustrated in Figure 4. There are three levels in the hierarchical structure. The highest level is the image itself. For the second level the image is decomposed into 3×3 rectangles, each side having half the length of the whole image, yielding 9 overlapping tiles. The lowest level consists of 4×9=36 rectangles, since each tile of the second level is partitioned into 4 non-overlapping sub-tiles. Note that, to exclude redundancy in the CBsIR system, only the indices of the 4×4=16 unique tiles in the lowest level are stored, with a small structure for relationship information. This tiling scheme is obviously not unique and, as long as a well-formed hierarchy of tiles is used to model the image, the technique we propose can still be applied after corresponding adjustments. The average color of the image tiles in the RGB color space is associated to the nodes in the tree structures for images 2 . Thus, every database image is represented as a series of tiles, each of which is mapped to a subtree of the tree modeling the image.
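A minimal sketch of how the 1 + 9 + 16 tiles of this hierarchy could be enumerated as (x, y, width, height) boxes is given below; tile_hierarchy is a hypothetical helper, and the actual bookkeeping (including the parent-child index sequence) in [11] is more involved.

```python
def tile_hierarchy(width, height):
    w2, h2 = width // 2, height // 2     # half-size tiles (second level)
    w4, h4 = width // 4, height // 4     # quarter-size tiles (lowest level)
    tiles = {1: [(0, 0, width, height)], 2: [], 3: []}
    for y in (0, h4, h2):                # 3x3 placement of overlapping tiles
        for x in (0, w4, w2):
            tiles[2].append((x, y, w2, h2))
    for y in (0, h4, h2, h2 + h4):       # 4x4 grid of unique sub-tiles
        for x in (0, w4, w2, w2 + w4):
            tiles[3].append((x, y, w4, h4))
    return tiles

t = tile_hierarchy(256, 256)
assert (len(t[1]), len(t[2]), len(t[3])) == (1, 9, 16)
```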
An index sequence representing the predefined parent-child relationship (given by the predefined order of sequence in the index) for the tree structure is stored on secondary storage and used for fast retrieval. Details about the index sequence structure can be found elsewhere [11]; in short, it resembles a priority tree where the relative order among the tree nodes reflects the relative order of the entries, and it can be efficiently mapped onto an array structure. Such a structure allows one to efficiently traverse the necessary indices for computing (sub)image similarity. The searching process is accomplished by "floating" the tree structure of the query image over the full tree structure of the candidate database image, shrinking the query's tree structure so that it is comparable with the candidate database image's trees at each level of the hierarchical structure. The minimum distance from the tree comparisons at all hierarchical levels, indicating the best matching tile from a database image, is used as the distance between the database image and the query. Differently from [18], the HTM search strategy considers local information of images' tiles represented by leaf nodes in the subtree structures. The average of the distance values among the corresponding leaf nodes is taken as the distance between the tree structures of the query image and a certain tile of the database image at any hierarchical level.
Even though different datasets were used, experiments detailed in [11] strongly suggest that the proposed approach yields better retrieval accuracy compared to [18], at the cost of small storage overhead.
The BIC-based Image Abstraction
A straightforward way to model an image is to use its average color. This is obviously not effective in any non-trivial situation. Another simple, and in many situations cost-effective, means is to use a global color histogram (GCH) (c.f., [10]). A common critique of GCHs is that they are unable to capture any notion of spatial distribution. To address this, several other approaches have been proposed 3 , but they add complexity as a trade-off in order to gain effectiveness. Nevertheless, the use of color only, without any notion of spatial distribution, may be effective if one is able to capture other features of the images, e.g., texture. That is exactly the advantage of the BIC technique proposed in [19], which we re-use within our proposal.
The image analysis algorithm of BIC classifies each pixel as interior, when its color is the same as that of its neighbors, or otherwise as border, and two normalized histograms are computed considering only the border pixels and the interior pixels respectively. That is, for each color two histogram bins exist: one in the border-pixel histogram and one in the interior-pixel histogram. This allows a more informed color distribution abstraction and captures, implicitly, a notion of texture.
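A sketch of this classification follows, assuming 4-connectivity and that pixels on the image frame count as border; the original BIC paper may treat the frame pixels slightly differently.

```python
import numpy as np

def bic_histograms(img, n_colors):
    """img: 2-D array of quantized color indices in [0, n_colors).
    Returns the normalized border and interior histograms."""
    h, w = img.shape
    border = np.zeros(n_colors)
    interior = np.zeros(n_colors)
    for y in range(h):
        for x in range(w):
            c = img[y, x]
            neighbors = [img[ny, nx]
                         for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w]
            if len(neighbors) == 4 and all(n == c for n in neighbors):
                interior[c] += 1        # same color as all 4 neighbors
            else:
                border[c] += 1          # frame pixel or color transition
    return border / img.size, interior / img.size

b, i = bic_histograms(np.zeros((8, 8), dtype=int), n_colors=4)
```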
To illustrate the idea, consider two images: one composed of two equally sized solid color blocks of different colors, say C1 and C2, and another one where half of the pixels have color C1 and are randomly distributed, while the other half have color C2 and are also randomly distributed. Clearly the BIC histograms of those images are quite different: one will have almost only interior pixels and the other will have almost only border pixels. This will yield a low similarity measure, which is indeed the case. Note that the global color histogram, a standard CBIR technique, would be identical for both images, misleading one to think the images were very similar. Note also that the difference in the histograms suggests a very different texture in the images, which, on top of the possible color differences, enhances the capability of distinguishing among images even further. For histogram comparison within BIC, the dLog distance function is used to diminish the effect that a large value in a single histogram bin dominates the distance between histograms, no matter the relative importance of this single value [10,12]. The basic motivation is the observation that classical techniques based on global color histograms treat all colors equally, regardless of their relative concentration. However, the perception of a stimulus, color in images in particular, is believed to follow a "sigmoidal" curve [12]: a relative increment in a stimulus is perceived more clearly when the intensity of the stimulus is small than when it is large. For instance, a change from 10% to 20% of a color is perceived more clearly than a change from 85% to 95%. Indeed, this has been well observed in many other phenomena involving how sensitive one (including animals) is to different stimuli [3]. Thus, the distance function is defined as

dLog(a, b) = sum_{i=0}^{M} |f(a[i]) − f(b[i])|, where
f(x) = 0 if x = 0; 1 if 0 < x ≤ 1; ceil(log2(x)) + 1 otherwise,

and f(x) maps each histogram bin value into the discrete range [0, 9], requiring only 4 bits of storage per histogram bin. This allows a substantial reduction in storage and yet a reasonably fine discretization of the bins.
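A small sketch of the dLog comparison, assuming bin values are normalized to a bounded range such as [0, 255] so that f() falls in [0, 9] and fits in 4 bits:

```python
import math

def f(x):
    if x == 0:
        return 0
    if x <= 1:
        return 1
    return math.ceil(math.log2(x)) + 1

def dlog(a, b):
    """dLog distance between two histograms of equal length."""
    return sum(abs(f(ai) - f(bi)) for ai, bi in zip(a, b))

# A large difference in a single bin no longer dominates the distance:
print(dlog([200, 0, 3], [10, 0, 3]))   # |9 - 5| = 4, instead of 190 under L1
```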
The BIC approach was shown in [19] to outperform several other CBIR approaches and, as such, we adopt it in our CBsIR proposal to extract and compare the visual feature of each tile with the goal of improving the retrieval accuracy.
Relevance Feedback for CBsIR
Despite the great potential of relevance feedback shown in CBIR systems using global representations and in RBIR systems, to the best of our knowledge there is no research that uses it within CBsIR systems. In this section we present our solution for CBsIR by using relevance feedback to learn the user's intention. Our relevance feedback approach has three main components: (1) a tile re-weighting scheme that assigns penalties to each tile of database images and updates those tile penalties for all relevant images retrieved at each iteration using both the relevant (positive) and irrelevant (negative) images identified by the user; (2) a query refinement strategy that is based on the tile re-weighting scheme to approach the most informative query according to the user's intention; (3) an image similarity measure that refines the final ranking of images using the user's feedback information. Each of these components is explained in detail in the following subsections.
Tile Re-Weighting Scheme
Research in RBIR [7,6] has proposed region re-weighting schemes for relevance feedback. In this work, we design a tile re-weighting scheme that specializes the technique presented in [7] to accommodate our tile-oriented (not region-oriented) HTM approach for CBsIR. It should be emphasized that instead of considering all the images in the database to compute the parameters for region weights [6] (which is computationally expensive), our tile re-weighting scheme uses only the positive and negative examples identified by the user to update the tile penalties of the positive images only, which is much more efficient. Moreover, the region re-weighting scheme in [7] uses a predefined similarity threshold to determine whether a region and an image are similar or not; otherwise the comparison of region pairs would become too expensive, since images might consist of different and large numbers of regions. This threshold is sensitive and subject to change for different kinds of image datasets. Thus, how to obtain the right threshold is yet another challenge for relevance feedback methods in RBIR. However, our RF method for the CBsIR problem does not need any threshold, because the number of obtained tiles is the same (and small) for each database image and there exists an implicit relationship between the tiles, which makes it easier to compare them.
In our system, the user provides feedback information by identifying positive and negative examples from the retrieved images. The basic assumption is that important tiles should appear more often in positive images than unimportant tiles, e.g., "background tiles" should yield to "theme tiles" in positive images. On the other hand, important tiles should appear less often in negative images than unimportant tiles. Following the principle of "more similar means better matched, thus less penalty", we assign a penalty to every tile that represents the database image for the matching process. The user's feedback information is used to estimate the "tile penalties" for all positive images, which also refines the final ranking of images. During the feedback iterations, the user does not need to specify which tile of a certain positive image is similar to the query; doing so would make the problem simpler to solve, but at an additional cost to the user.
Next, we introduce some definitions used to determine the tile penalty and formalize the overall relevance feedback process.

Definition 1: The distance between two tiles T_a and T_b, from images I_a and I_b respectively, is

DT(T_a, T_b) = (1/m) * sum_{i=1}^{m} d(Feature(T_{a,i}), Feature(T_{b,i})),

where T_{a,i} and T_{b,i} are sub-tiles of T_a and T_b respectively, m is the number of unique leaf nodes in the tiles' tree structures at any hierarchical level (if already at the leaf level, m = 1), and the distance function d is to be instantiated with some particular measure based on the result of the feature extraction done by the Feature function on the tiles, e.g., BIC's dLog() function defined in the previous section.

Definition 2: The penalty of tile i of a database image after k iterations of feedback is denoted TP_i(k), i = 0, ..., N_T, where N_T + 1 is the number of tiles per database image, and TP_i(0) is initialized as 1/(N_T + 1). For instance, in Figure 4, N_T + 1 = 1 + 9 + 16, i.e., equal to the number of nodes in the tree structure representing the hierarchical partition of a database image (for the lowest level, only unique nodes count).

Definition 3: For each tile from a positive image, we define a measure of the distance DTS between a tile T and an image set IS = {I_1, I_2, ..., I_n}. This reflects the extent to which the tile is consistent with the other positive images in the feature space. Intuitively, the smaller this value, the more important this tile is in representing the user's intention.
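A direct transcription of Definition 1 as a sketch, assuming each tile is represented by the list of its leaf-level sub-tile features (a single feature if the tile is itself a leaf) and that d is a feature distance such as BIC's dLog:

```python
def dt(tile_a, tile_b, d):
    """Average feature distance over corresponding leaf sub-tiles (Definition 1)."""
    assert len(tile_a) == len(tile_b)
    return sum(d(fa, fb) for fa, fb in zip(tile_a, tile_b)) / len(tile_a)
```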
DTS(T, IS) = sum_{i=1}^{n} exp(DT(T, I_i^0)), if T is at the full-tree level;
DTS(T, IS) = sum_{i=1}^{n} exp(min_{j=1..N_T} DT(T, I_i^j)), if T is at a subtree level,

where I_i^j denotes the j-th tile of image I_i and N_T in this case is the number of tiles at the current subtree level.

Assuming that I is one of the identified positive example images, we can compute the tile penalties of image I, which consists of tiles {T_0, T_1, ..., T_{N_T}}. The user provides positive and negative example images during each k-th iteration of feedback, denoted respectively as IS^+(k) = {I^+_1(k), ..., I^+_p(k)} and IS^-(k) = {I^-_1(k), ..., I^-_q(k)}, where p + q is typically much smaller than the size of the database.
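A sketch of Definition 3: each image of the set is given as the list of its tiles at the level being compared (a single full-image tile at the top level), in the representation expected by the dt() sketch above; with a one-element list, the min() reduces to the full-tree case of the definition.

```python
import math

def dts(tile, image_set, dt, d):
    return sum(math.exp(min(dt(tile, t, d) for t in tiles_of_image))
               for tiles_of_image in image_set)
```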
Based on the above preparations, we now come to the definition of the tile penalty.

Definition 4: For each positive example image, the tile penalty of T_i after k iterations of feedback is computed (and normalized) as

TP_i(k) = ( W_i × DTS(T_i, IS^+(k)) ) / ( sum_{j=0}^{N_T} W_j × DTS(T_j, IS^+(k)) ),

where the weight W_i = 1 / DTS(T_i, IS^-(k)) acts as a penalty factor reflecting the influence of the negative examples. This captures the intuition that a tile from a positive example image should be penalized if it is similar to the negative examples. Basically, we compute the distances DTS between a particular tile T and the positive image set IS^+ as well as the negative image set IS^- respectively, to update the penalty of that tile from a positive example image. The inverse of the tile's distance from the negative image set is used to weight its corresponding distance from the positive image set.
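A sketch of Definition 4 for one positive image. The per-level bookkeeping is glossed over here (each tile is assumed to be compared against the other images' tiles at its own hierarchical level), and the weight W_i = 1/DTS(T_i, IS^-) follows the stated intuition about the negative examples.

```python
def tile_penalties(image_tiles, pos_sets, neg_sets, dts, dt, d):
    """image_tiles[i]: the i-th tile of the positive image being updated;
    pos_sets[i] / neg_sets[i]: positive / negative example images in the
    per-level representation expected by dts() for tile i."""
    raw = []
    for i, tile in enumerate(image_tiles):
        w = 1.0 / dts(tile, neg_sets[i], dt, d)   # similar to negatives -> larger weight
        raw.append(w * dts(tile, pos_sets[i], dt, d))
    total = sum(raw)
    return [r / total for r in raw]               # normalized so penalties sum to 1
```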
Let us now illustrate the above methodology with a simple example, which also motivates the notion of tile penalty. For simplicity, assume that the color palette consists of only three colors: black, gray and white. Figure 6 shows the top 3 retrieved images and the user's feedback judgement. Image I_1 is marked as a positive example since it actually contains the query image, which exactly represents the sub-image retrieval problem we are dealing with. Image I_2 is also marked as a positive example because it is an enlargement of the query image (and therefore contains it as well). For the sake of illustration, assume a two-level multiscale representation of database images is used, as in Figure 7.
The tile penalties of each database image are initialized to 0.1 for its 10 tiles, i.e., TP_i(0) = 0.1, i ∈ [0, 9]. Now, take tile T_1 as an example. According to Definition 3, we need to compute the distances DTS between T_1 and the positive/negative image sets. In order to do this, the distances between T_1 and all tiles at the corresponding subtree levels of all the images in the positive/negative image sets are first obtained by Definition 1. Then, using Definition 4, the new penalty of T_1 is updated from 0.1 to 0.090. The penalties of the other tiles are updated in the same way during each feedback iteration. We illustrate the new values of all tile penalties for database image I_1, a positive example, after one feedback iteration in Figure 7. We can see that after the user provides feedback information, some tiles lose some weight while others gain. For instance, T_1, T_2, T_3 and T_9 receive smaller penalties now because they contain only grey and/or black, which also appear in the query. T_0, T_4, T_5, T_7 and T_8 are penalized more since they all contain white. The new weights for these tiles generally follow the trend that the higher the percentage of white, the higher the penalty. T_6, which is a rotation of the query image, maintains its weight for this iteration. This means that our system is, to some extent, also capable of perceiving changes such as rotation. Besides, taking a closer look at the updated tile penalties of positive image I_1, T_1 now receives a larger penalty than T_3, although they are similar to the query image to the same degree. Note that, according to Definition 4, both the positive and the negative example images are used to calculate the new tile penalties, and we penalize a tile more if it is somewhat more similar to the negative example images compared with the other tiles in the positive example image. Thus it is reasonable that the tile penalty of T_1 is higher than that of T_3 after feedback learning, since T_1 contains some black, which is also present in the negative example image I_3, while T_3 contains only grey.
Query Feature Update
The relevance feedback process using the query refinement strategy is based on the tile re-weighting scheme and all positive and negative example images. The main concern is that we need to preserve as much as possible the original feature of the query image while introducing new feature elements that would capture more new relevant images. Considering the hierarchical tree structure of the query image, we use the most similar tile (the one with minimum tile penalty) at every subtree level of each positive image to update the query feature at the corresponding subtree level.

Definition 5: The updated query feature after k iterations is

qn_l^k[j] = ( sum_{i=1}^{p} (1 − TPmin_l^i(k)) × Pos_l^{k,i}[j] ) / ( sum_{i=1}^{p} (1 − TPmin_l^i(k)) ),

where qn_l^k is the new feature, with M dimensions, for a subtree (tile) at the l-th level of the query image's tree structure after k iterations; TPmin_l^i(k) is the minimum tile penalty for a subtree (tile) found at the l-th level of the tree structure of the i-th positive image after k iterations; Pos_l^{k,i} is the feature of the subtree (tile) with minimum tile penalty at the l-th level of the i-th positive image's tree structure after k iterations; and p is the number of positive images given by the user at this iteration. Intuitively, we use a weighted average to update the feature of a subtree (tile) of the query, based on the features of the tiles that have minimum tile penalties within the respective positive images. In this way, we try to approach the optimal query that carries the most information needed to retrieve as many images relevant to the query as possible.
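A sketch of Definition 5 for one level of the query tree. Each entry of pos_examples pairs a positive image's minimum tile penalty at this level with the feature vector of that least-penalized tile.

```python
def update_query_level(pos_examples):
    """pos_examples: list of (min_tile_penalty, feature_vector) pairs."""
    weights = [1.0 - tp_min for tp_min, _ in pos_examples]
    dim = len(pos_examples[0][1])
    total = sum(weights)
    return [sum(w * feat[j] for w, (_, feat) in zip(weights, pos_examples)) / total
            for j in range(dim)]
```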
Image Similarity
With the updated query feature and the tile penalties for positive images, we can now define the distance between database images and the query, used for ranking at each feedback iteration. In order to locate the best match to the query sub-image, our image similarity measure takes the minimum of the distances between the database image tiles and the query (recall that both the database image and the query sub-image are modeled by the tree structure in the same way) at the corresponding hierarchical levels, with each distance weighted by the tile penalty of the corresponding database image tile.

Definition 6: The distance between the (updated) query image Q and a database image I at the k-th iteration is
DI_k(I, Q) = min_{i=0..N_T} ( TP_i(k−1) × DT(I_i, Q_j) ),

where N_T + 1 is the number of subtrees (tiles) in the tree structure of a database image, and TP_i(k−1) is the tile penalty of the i-th tile of image I after k−1 iterations. For the comparison of full tree structures, i = 0 and j = 0, indicating the full tree structures of the database image and of the query image. For the comparison of subtree structures, i = 1..N_l for each 1 ≤ j ≤ (L−1), where N_l is the number of subtree structures at the l-th level of the tree structure and L is the number of levels of the tree structure, mapped from the hierarchical partition. Here j indicates the subtree level of the query image's tree structure, obtained by shrinking the original query tree structure so that it can be compared with the subtree structures of database images.
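A sketch of Definition 6. The pairing query_for_tile[i] stands for the query (sub)tree shrunk to the level of the database image's i-th tile, in the representation expected by the dt() sketch above; penalties are the TP_i(k−1) values from the previous iteration.

```python
def image_distance(db_tiles, query_for_tile, penalties, dt, d):
    return min(penalties[i] * dt(db_tiles[i], query_for_tile[i], d)
               for i in range(len(db_tiles)))
```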
Finally, the overall relevance feedback process for the CBsIR system can be summarized in the following algorithm:
1. The user submits a query (sub)-image.
2. The system retrieves the initial set of images using the proposed similarity measure, which consists of database images containing tiles similar to the query sub-image.
3. The user marks the presented images as positive or negative examples of his/her intention.
4. The system updates the tile penalties of the positive images using both the positive and the negative examples (Definition 4).
5. The system refines the query feature based on the updated tile penalties of the positive images (Definition 5).
6. The revised query and the new tile penalties for database images are used to compute the ranking score for each image and sort the results.
7. Show the new retrieval results and, if the user wishes to continue, go to step 3.
Experiments and Results
Before going further, let us define the metrics we use to measure retrieval effectiveness. For certain applications, it is more useful that the system brings new relevant images (found due to the update of the query feature from previous feedback) forward into the top range, rather than presenting the already retrieved relevant images again in the current iteration. For other applications, however, the opposite situation applies: the user is more interested in obtaining more relevant images during each iteration, keeping those s/he has already seen before. Given these observations, we use two complementary pairs of recall and precision measures: the actual recall and precision, computed over the images presented at the current iteration, and the new recall and precision, which consider only the relevant images that had not been retrieved in previous iterations. The new recall and precision explicitly measure the learning aptitude of the system; ideally it retrieves more new relevant images as soon as possible. Moreover, we also measure the total number of distinct relevant images the system can find during all the feedback iterations. This is a history-based measure that implicitly includes some relevant images "lost" (i.e., out of the currently presented images) in the process. We call them cumulative recall and cumulative precision, defined as follows:

1. Cumulative Recall: the percentage of distinct relevant images from all iterations so far (not necessarily shown at the current iteration) over the number of relevant images in the predefined answer set.
2. Cumulative Precision: the percentage of distinct relevant images from all iterations so far over the number of returned (presented) images.

Table 1 exemplifies the measures mentioned above, assuming the answer set for a query contains 3 images A, B, C and the number of returned (presented) images is 5.
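A sketch of the per-iteration measures. The "new" recall and precision here count only relevant images that had not been retrieved in earlier iterations, which is the reading suggested by the discussion above; the exact bookkeeping used for Table 1 may differ.

```python
def iteration_metrics(presented, answer_set, seen_relevant):
    """presented: images shown this iteration; answer_set: the query's
    predefined relevant images; seen_relevant: relevant images retrieved in
    earlier iterations (a set, updated in place)."""
    hits = set(presented) & set(answer_set)
    new_hits = hits - seen_relevant
    seen_relevant |= hits
    return {
        "actual_recall": len(hits) / len(answer_set),
        "actual_precision": len(hits) / len(presented),
        "new_recall": len(new_hits) / len(answer_set),
        "new_precision": len(new_hits) / len(presented),
        "cumulative_recall": len(seen_relevant) / len(answer_set),
        "cumulative_precision": len(seen_relevant) / len(presented),
    }

seen = set()
print(iteration_metrics(["A", "B", "x", "y", "z"], ["A", "B", "C"], seen))
```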
In addition to the above measures, we also evaluate storage overhead and query processing time.
We test the proposed relevance feedback approach using a heterogeneous image dataset consisting of 10,150 color JPEG images: a mixture of the public Stanford10k 5 dataset and some images from one of COREL's CD-ROMs, each of which falls into a particular category, and we use 21 such categories 6 . Some categories do not have rotated or translated images, but others do. On average, each answer set has 11 images, and none of the answer sets has more than 20 images, which is the number of images we present to the user for feedback during each iteration. It is important to note that the queries and answer sets are not part of the Stanford10k dataset, in order to minimize the probability that other images, not contained in the expected answer set, could also be part of the answer but not accounted for. We manually crop part of a certain image from each of the above categories to form a query image set of 21 queries (one for each category). Images of the same categories serve as the answer sets for the queries (one sample query and its corresponding answer set are shown in Figure 1). The size of the query image varies, being on average 18% of the size of the database images. The following performance results are collected from the online demo available at http://db.cs.ualberta.ca/mn/CBsIR.html. (A sample of the two initial iterations using our system is presented in the Appendix.)
In our experiments, the maximum number of iterations explored is set to 10 (users will give feedback 9 times by pointing out which images are relevant (positive)/irrelevant (negative) to the query) and we present the top 20 retrieved images at each iteration. While, within the same query session, the information collected at one step of the relevance feedback phase is used in the next step (as indicated in the definitions presented in Section 3), the information collected across different query sessions is not carried over to subsequent queries, even if the very same query is submitted to the system again. That is, we assume query sessions are independent; more specifically, once the user goes to the initial page, all accumulated learning is cleared. This consideration is based on the subjectivity of human perception and the fact that even the same person could perceive the same retrieval result differently at different times. As discussed earlier, we use BIC histograms to model the contents of an image tile. The number of quantized colors in such histograms is therefore a parameter for BIC. We use two different values for this parameter, 16 and 64 colors, in order to evaluate the influence of the underlying tile model on the overall retrieval effectiveness. Table 2 shows how many iterations, on average, were necessary to have the original image (the one from which the query sub-image was extracted) placed within the top 20 images. It is clear that using 64 quantized colors is more effective, as the hit rate of the original images is almost optimal. Even though this trend, i.e., the more colors the better the retrieval, is fairly intuitive, it is interesting to see that this advantage does not grow linearly with the number of colors across all experiments. That is to say, even using a low number of colors one can still obtain fairly good results.
The retrieval accuracy using 64 quantized colors is shown in Figure 8 and Figure 9. As can be clearly seen, after 5 iterations the system has already learned most of what it could learn, i.e., the information gain (given by the new recall and new precision curves) is nearly null. On the other hand, after only 5 iterations the actual recall and actual precision values increased by 55% and 60%, respectively. It is also worth noting that the stable actual precision value of nearly 40% is not as low as it may seem at first. The answer sets have an average of 11 images and, since the user is presented with 20 images, the maximum precision one could get (on average) would be about 50%, as almost half of the displayed images could not be considered relevant by construction. This interpretation leads to the proposal of the following measure:
• Normalized Precision: the actual precision over the maximum possible actual precision value.
Interestingly enough, careful consideration of such a measure shows that it is equivalent to the usual notion of (actual) recall. Indeed, consider R and A to be the sets of relevant answers and retrieved answers with respect to a given query. The actual precision is then defined as |R ∩ A|/|A|. The maximum precision value one can obtain is |R|/|A|. When the former is divided by the latter, one obtains |R ∩ A|/|R|, which is precisely the definition of actual recall. This leads to the argument that precision-based measures are not well suited for this type of scenario, where non-relevant images are very likely to be included among the returned images simply because fewer relevant images exist than images presented. The actual recall, being concerned only with the relevant images, is a more realistic measure. Under this argument, 70% of stable actual recall (or normalized precision) after 5 iterations seems quite reasonable.
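In symbols, the argument above amounts to:

```latex
\[
\text{NormalizedPrecision}
  = \frac{|R \cap A| / |A|}{|R| / |A|}
  = \frac{|R \cap A|}{|R|}
  = \text{ActualRecall}.
\]
```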
We also obtained about 85% for cumulative recall and about 50% for cumulative precision. These values are higher than the actual recall and actual precision because relevant images that may be "lost" in subsequent iterations are still accounted for in these measures.
Using 16 quantized colors, as one would expect, yields less accuracy than using 64 quantized colors. However, an interesting aspect shown in Figures 10 and 11 is that, even though the amount of information (i.e., the number of colors) was reduced by 75%, the effectiveness was reduced by at most 10% compared to the values in Figures 8 and 9. The cost of the loss of information is clearer when looking at the "learning aptitude": using 16 colors required twice as many iterations in order to bring the curves to a stable state. Still, this shows a sublinear dependence on the number of colors: using 4 times more colors yields only 10% more effectiveness and 2 times faster learning.
Another interesting observation, which supports the main advantage of using more colors for the tile abstraction, can be seen when directly comparing the new precision and recall curves for the different numbers of colors (Figures 12 and 13). Up until the 4th or 5th iteration, using 64 colors yields higher values, meaning that it is learning faster; after that point, it has learned basically what it could have learned. On the other hand, the curve for 16 colors shows that the method is still learning. Figure 14 shows the average time required to process a query during each iteration, i.e., to access all disk-resident data, complete the learning from the user's feedback at the current iteration (not applicable to the first iteration), obtain the distance between the query image and the database images, and sort them by their resulting ranks. The first iteration takes, on average, slightly less than 2 seconds when using 64 quantized colors and 0.6 seconds when using 16 quantized colors, whereas each subsequent iteration requires about 2.5 seconds and 1 second, respectively, for the two feature representations. This slight increase is due to the overhead of computing and updating the tile penalties at each iteration. As well, note that the smaller number of colors also pays off in speed: using 64 colors makes query processing roughly two to three times slower than using only 16 quantized colors. Extracting image features from the image database, applying the BIC method, and generating the metadata file requires about 0.15 secs/image on a computer running Linux 2.4.20 with an AMD Athlon XP 1900+ CPU and 1 GB of main memory, and is independent of the number of colors used; this procedure can be done off-line and should not be considered part of the query processing overhead.
Finally, the storage cost for the disk-resident metadata is 10.5 MB when using 64 quantized colors (only about 20% of the size of the image database), while using 16 quantized colors needs proportionally less storage, namely 2.7 MB, again in proportion to the size of the representation.
Conclusions
In this paper we have shown, for the first time, how relevance feedback can be used to improve the performance of CBsIR. We presented a relevance feedback-based technique built on a tile re-weighting scheme that assigns penalties to each tile of the database images and updates those of all relevant images using both the positive and negative examples identified by the user. The user's feedback is used to refine the image similarity measure by weighting the tile distances between the query and the database image tiles with their corresponding tile penalties. We combine this learning method with the BIC approach for image modeling to improve the performance of content-based sub-image retrieval. Our results on an image database of over 10,000 images suggest that the learning method is quite effective for CBsIR. While using fewer colors within BIC reduces storage overhead and speeds up query processing, it does not substantially affect retrieval effectiveness in the long term; the main drawback is that the system takes longer to "learn", making the overall retrieval task a longer one. A few possible avenues for further investigation include the design of a disk-based access structure for the hierarchical tree (to enhance scalability for larger databases), the use of better (more powerful yet compact) representations for the tile features (possibly removing the background of the images), and the incorporation of more sophisticated machine learning techniques to shorten the gap between low-level image features and high-level semantic contents of images, so as to better understand the user's intention.
| 7,482 |
0904.4041
|
1488291195
|
The typical content-based image retrieval problem is to find images within a database that are similar to a given query image. This paper presents a solution to a different problem, namely that of content-based sub-image retrieval, i.e., finding images from a database that contain another image. Note that this is different from finding a region in a (segmented) image that is similar to another image region given as a query. We present a technique for CBsIR that explores relevance feedback, i.e., the user's input on intermediary results, in order to improve retrieval efficiency. Upon modeling images as a set of overlapping and recursive tiles, we use a tile re-weighting scheme that assigns penalties to each tile of the database images and updates the tile penalties for all relevant images retrieved at each iteration using both the relevant and irrelevant images identified by the user. Each tile is modeled by means of its color content using a compact but very efficient method which can, indirectly, capture some notion of texture as well, despite the fact that only color information is maintained. Performance evaluation on a largely heterogeneous dataset of over 10,000 images shows that the system can achieve a stable average recall value of 70% within the top 20 retrieved (and presented) images after only 5 iterations, with each such iteration taking about 2 seconds on an off-the-shelf desktop computer.
|
The BIC approach was shown in @cite_0 to outperform several other CBIR approaches and, as such, we adopt it in our CBsIR proposal to extract and compare the visual feature of each tile with the goal of improving the retrieval accuracy.
|
{
"abstract": [
"This paper presents (Border Interior pixel Classification), a compact and efficient CBIR approach suitable for broad image domains. It has three main components: (1) a simple and powerful image analysis algorithm that classifies image pixels as either border or interior, (2) a new logarithmic distance (dLog) for comparing histograms, and (3) a compact representation for the visual features extracted from images. Experimental results show that the BIC approach is consistently more compact, more efficient and more effective than state-of-the-art CBIR approaches based on sophisticated image analysis algorithms and complex distance functions. It was also observed that the dLog distance function has two main advantages over vectorial distances (e.g., L 1 ): (1) it is able to increase substantially the effectiveness of (several) histogram-based CBIR approaches and, at the same time, (2) it reduces by 50 the space requirement to represent a histogram."
],
"cite_N": [
"@cite_0"
],
"mid": [
"2039388884"
]
}
|
Content-Based Sub-Image Retrieval with Relevance Feedback
|
Most content-based image retrieval (CBIR) systems perform retrieval based on a full-image comparison, i.e., given a query image the system returns overall similar images. This is not useful if users are also interested in images from the database that contain an image (perhaps an object) similar to a query image. We call this searching process Content-Based sub-Image Retrieval (CBsIR), and it is defined as follows [18]: given an image query Q and an image database S, retrieve from S those images Q' which contain Q according to some notion of similarity. To illustrate this, consider Figure 1, which displays an example query image and its relevant answer set. Figure 2 shows 3 images of such an answer set, and their respective ranks, retrieved within the top 20 matches after CBsIR is performed 1 . Note that the other 17 images returned are considered non-relevant to the query. Now assume that the user is given the opportunity to mark those 3 images as relevant and all other 17 as irrelevant, i.e., the user is allowed to provide relevance feedback. Figure 3 shows the relevant images retrieved (along with their rank) after taking such feedback into account. Note that all images previously obtained were ranked higher and also new images were found and ranked high as well. The sub-image retrieval problem we consider is similar to region-based image retrieval (RBIR), e.g. [1,9], since the goal may also be to retrieve images at the object level. However, there is a fundamental difference between the two. The CBsIR problem is to search for an image, given as a whole, which is contained within another image, whereas in RBIR one is searching for a region, possibly the result of some image segmentation. The former is more intuitive since users can provide a query image as in traditional CBIR, and, unlike the latter, it does not rely on any type of segmentation preprocessing. Unfortunately, automatic image segmentation algorithms usually lead to inaccurate segmentation of the image when trying to achieve homogeneous visual properties. Sometimes the obtained regions are only parts of a real object and should be combined with some neighbor regions so as to represent a meaningful object. Thus, complex distance functions are generally used to compare segmented images at query time. Also, the number and size of regions per image are variable, and a precise representation of the obtained regions may be storage-wise expensive. Furthermore, since region-based queries are usually performed after the image segmentation and region description steps, this clearly puts some restriction on the user's expression of his/her information need, depending on how well the segmentation results match the semantics of the images, even though the user can explicitly select any detected region as the query region. In image retrieval systems where images are heterogeneous, rich in texture, very irregular and variable in content, accurate regions are hard to obtain, making RBIR likely to perform poorly. The main contribution of this paper is to realize CBsIR by employing relevance feedback, in order to capture the user's intentions at query time. As we discuss in the next section, relevance feedback is an interactive learning technique which has already been demonstrated to boost performance in CBIR and RBIR systems. Despite the great potential shown by relevance feedback, to the best of our knowledge there is no published research that uses it in the context of CBsIR, thus positioning our work as unique in this domain.
The remainder of this paper is organized as follows. In the next section we discuss some related work. We also summarize the BIC method [19] for CBIR and how we adopt it for the CBsIR technique we propose. (As we shall discuss, BIC is used as a building block when modeling images within our proposed approach.) Our retrieval strategy uses query refinement as well as the incorporation of the user's judgement, via relevance feedback, into the image similarity measure. This forms the core contribution of this paper and is detailed in Section 3. In Section 4 we present and discuss experimental results, which support our claim of improved retrieval effectiveness. Finally, Section 5 concludes the paper and offers directions for future work.
Relevance Feedback within Traditional CBIR
The key issue in relevance feedback is how to use positive and negative examples to refine the query and/or to adjust the similarity measure. Early relevance feedback schemes for CBIR were adopted from feedback schemes developed for classical textual document retrieval. These schemes fall into two categories: query point movement (query refinement) and re-weighting (similarity measure refinement), both based on the well-known vector model.
The query point movement methods aim at improving the estimate of the "ideal query point" by moving it towards positive example points and away from the negative example points in the query space. One frequently used technique to iteratively update the query is Rocchio's formula [13]. It is used in the MARS system [16], replacing the document vector by visual feature vectors. Another approach is to update the query space by selecting feature models. The best way for effective retrieval is argued to be using a "society" of feature models determined by a learning scheme, since each feature model is supposed to represent one aspect of the image content more accurately than others.
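A minimal sketch of Rocchio-style query point movement on visual feature vectors: move the query toward the centroid of the positive examples and away from the centroid of the negative ones. The alpha/beta/gamma weights are illustrative defaults, not values prescribed by [13] or [16].

```python
import numpy as np

def rocchio(query, positives, negatives, alpha=1.0, beta=0.75, gamma=0.15):
    q = alpha * np.asarray(query, dtype=float)
    if len(positives) > 0:
        q = q + beta * np.mean(np.asarray(positives, dtype=float), axis=0)
    if len(negatives) > 0:
        q = q - gamma * np.mean(np.asarray(negatives, dtype=float), axis=0)
    return q

print(rocchio([0.5, 0.5], [[0.9, 0.1], [0.8, 0.2]], [[0.1, 0.9]]))
```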
Re-weighting methods increase the importance of the feature dimensions that help retrieve relevant images while reducing the importance of the dimensions that hinder the process. This is achieved by updating the weights of the feature dimensions in the distance metric. The refinement of the re-weighting method in the MARS system is called the standard deviation method.
Recent work has proposed more computationally robust methods that perform global feature optimization. The MindReader retrieval system [5] formulates parameter estimation as a minimization problem. Using a distance function that is not necessarily aligned with the coordinate axes, the MindReader system allows correlations between attributes in addition to different weights on each component. A further improvement over the MindReader approach [14] uses a unified framework to achieve the optimal query estimation and weighting functions. By minimizing the total distance of the positive examples from the revised query, the weighted average and a whitening transform in the feature space are found to be the optimal solutions. However, this algorithm does not use the negative examples to update the query and the image similarity measure, and initially the user needs to input the critical data of training vectors and the relevance matrix into the system.
A task that can be improved as a result of experience can be considered a machine-learning task. Therefore, relevance feedback can be considered a learning method: the system learns from the examples provided as feedback by a user, i.e., his/her experience, to refine the retrieval results. The aforementioned query-movement method, represented by Rocchio's formula, and the re-weighting method are both simple learning methods. However, users are usually reluctant to provide a large number of feedback examples, i.e., the number of training samples is very small. Furthermore, the number of feature dimensions in CBIR systems is usually high. Thus, learning from small training samples in a very high-dimensional feature space makes many learning methods, such as decision tree learning and artificial neural networks, unsuitable for CBIR.
There are several key issues in addressing relevance feedback in CBIR as a small sample learning problem. First, how to quickly learn from small sets of feedback samples to improve the retrieval accuracy effectively; second, how to accumulate the knowledge learned from the feedback; and third, how to integrate low-level visual and high-level semantic features in the query. Most of the research in the literature has focused on the first issue. In that respect, Bayesian learning has been explored and shown to be advantageous compared with other learning methods, e.g., [21]. Active learning methods have been used to actively select samples which maximize the information gain, or minimize entropy/uncertainty in decision-making. These methods enable fast convergence of the retrieval results, which in turn increases user satisfaction. Chen et al. [2] use Monte Carlo sampling to search for the set of samples that will minimize the expected number of future iterations. Tong and Chang [20] propose the use of an SVM active learning algorithm to select the sample which maximizes the reduction in the size of the version space in which the class boundary lies. Without knowing a priori the class of a candidate, the best search is to halve the search space each time. In their work, the points near the SVM boundary are used to approximate the most-informative points, and the most-positive images are chosen as the ones farthest from the boundary on the positive side in the feature space.
Relevance Feedback within RBIR
Relevance feedback has been introduced into RBIR systems to improve performance, just as it has been for image retrieval systems that use global representations.
In [6], the authors introduce several learning algorithms, adjusted from the global image representation, to RBIR. First, the query point movement technique is considered by assembling all the segmented regions of positive examples together and resizing the regions to emphasize the latest positive examples, in order to form a composite image as the new query. Second, the application of support vector machines (SVM) [20] in relevance feedback for RBIR is discussed. Both the one-class SVM as a class distribution estimator and the two-class SVM as a classifier are investigated. Third, a region re-weighting algorithm is proposed, corresponding to feature re-weighting. It assumes that important regions should appear more often in the positive images and less often in all the images of the database. For each region, measures of region frequency RF and inverse image frequency IIF (analogous to TF and IDF in text retrieval [22]) are introduced for the region importance. Thus the region importance is defined as its region frequency RF weighted by the inverse image frequency IIF, and normalized over all regions in an image. Also, the feedback judgement is memorized for future use by calculating the cumulative region importance. However, this algorithm considers only positive examples, ignoring the effect of the negative examples in each iteration of the retrieval results. Nevertheless, experimental results on a general-purpose image database demonstrate the effectiveness of the proposed learning methods in RBIR.
CBsIR without Relevance Feedback
The paper by Leung and Ng [8] investigates the idea of either enlarging the query sub-image to match the size of an image block obtained by the four-level multiscale representation of the database images, or conversely contracting the image blocks of the database images so that they become as small as the query sub-image. The paper presents an analytical cost model and focuses on avoiding I/O overhead during query processing time. To find a good strategy to search multiple resolutions, four techniques are investigated: the branch-and-bound algorithm, Pure Vertical (PV), Pure Horizontal (PH) and Horizontal-and-Vertical (HV). The HV strategy is argued to be the best considering efficiency. However, the authors do not report clear conclusions regarding the effectiveness (e.g., Precision and/or Recall) of their approach.
The authors of [18] consider global feature extraction to capture the spatial information within image regions. The average color and the covariance matrix of the color channels in L*a*b color space are used to represent the color distribution. They apply a three level non-recursive hierarchical partition to achieve multiscale representation of database images by overlapping regions within them. Aiming at reducing the index size of these global features, a compact abstraction for the global features of a region is introduced. As well, a new distance measure between such abstractions is introduced for efficiently searching through the tiles from the multi-scale partition strategy. This distance is called inter hierarchical distance (IHD) since it is taken between feature vectors of different hierarchical levels of the image partition. The IHD index is a two dimensional vector which consumes small storage space. The search strategy is a simple linear scan of the index file, which assesses the similarity between the query image and a particular database image as well as all its sub-regions using their IHD vectors. Finally, the minimum distance found is used to rank this database image.
In [11] a new method called HTM (Hierarchical Tree Matching) for the CBsIR problem was proposed. It has three main components: (1) a tree structure that models a hierarchical partition of images into tiles using color features, (2) an index sequence to represent the tree structure (allowing fast access during the search phase), and (3) a search strategy based on the tree structures of both database images and the query image. Since the tree structure presented in [11] is re-used in our work, we detail it in the following. To model an image, a grid is laid on it, yielding a hierarchical partition into tiles. Although the granularity could be arbitrary, we have obtained good results using a 4×4 grid resulting in a three-level multiscale representation of the image (similarly to what was done in [8] and [18]). The hierarchical partition of an image and its resulting tree structure are illustrated in Figure 4. There are three levels in the hierarchical structure. The highest level is the image itself. For the second level the image is decomposed into 3×3 rectangles, each side having half the length of the whole image, yielding 9 overlapping tiles. The lowest level consists of 4×9=36 rectangles, since each tile of the second level is partitioned into 4 non-overlapping sub-tiles. Note that, to exclude redundancy in the CBsIR system, only the indices of the 4×4=16 unique tiles in the lowest level are stored, with a small structure for relationship information. This tiling scheme is obviously not unique and, as long as a well-formed hierarchy of tiles is used to model the image, the technique we propose can still be applied after corresponding adjustments. The average color of the image tiles in the RGB color space is associated to the nodes in the tree structures for images 2 . Thus, every database image is represented as a series of tiles, each of which is mapped to a subtree of the tree modeling the image.
An index sequence representing the predefined parent-child relationship (given by the predefined order of sequence in the index) for the tree structure is stored on secondary storage and used for fast retrieval. Details about the index sequence structure can be found elsewhere [11]; in short, it resembles a priority tree where the relative order among the tree nodes reflects the relative order of the entries, and it can be efficiently mapped onto an array structure. Such a structure allows one to efficiently traverse the necessary indices for computing (sub)image similarity. The searching process is accomplished by "floating" the tree structure of the query image over the full tree structure of the candidate database image, shrinking the query's tree structure so that it is comparable with the candidate database image's trees at each level of the hierarchical structure. The minimum distance from the tree comparisons at all hierarchical levels, indicating the best matching tile from a database image, is used as the distance between the database image and the query. Differently from [18], the HTM search strategy considers local information of images' tiles represented by leaf nodes in the subtree structures. The average of the distance values among the corresponding leaf nodes is taken as the distance between the tree structures of the query image and a certain tile of the database image at any hierarchical level.
Even though different datasets were used, experiments detailed in [11] strongly suggest that the proposed approach yields better retrieval accuracy compared to [18], at the cost of small storage overhead.
The BIC-based Image Abstraction
A straightforward way to model an image is to use its average color. This is obviously not effective in any non-trivial situation. Another simple, and in many situations cost-effective, means is to use a global color histogram (GCH) (c.f., [10]). A common critique of GCHs is that they are unable to capture any notion of spatial distribution. To address this, several other approaches have been proposed 3 , but they add complexity as a trade-off in order to gain effectiveness. Nevertheless, the use of color only, without any notion of spatial distribution, may be effective if one is able to capture other features of the images, e.g., texture. That is exactly the advantage of the BIC technique proposed in [19], which we re-use within our proposal.
The image analysis algorithm of BIC classifies each pixel as interior, when its color is the same as that of its neighbors, or otherwise as border, and two normalized histograms are computed considering only the border pixels and the interior pixels respectively. That is, for each color two histogram bins exist: one in the border-pixel histogram and one in the interior-pixel histogram. This allows a more informed color distribution abstraction and captures, implicitly, a notion of texture.
To illustrate the idea consider two images, one composed of two equally sized solid color blocks of different colors, say C1 and C2, and another one where half of pixels of color have color C1 and are randomly distributed. Likewise the other half of pixels have color C2 and are also randomly distributed. Clearly the BIC histograms of those images are quite different, one will have almost only interior pixels and the other will have almost only border pixels. This will yield a low similarity measure, which is indeed the case. Note that the global color histogram, a standard CBIR technique, for both images would be identical, misleading one to think the images were very similar. Note that the difference in the histogram suggests a very different texture in the images, which, on top of the possible color differences, enhances the capability of distinguishing among images even further. For histogram comparison within BIC, the dLog distance function is used to diminish the effect that a large value in a single histogram bin dominates the distance between histograms, no matter the relative importance of this single value [10,12]. The basic motivation behind this is based on the observation that classical techniques based on global color histograms treat all colors equally, despite of their relative concentration. However, the perception of stimulus, color in images in particular, is believed to follow a "sigmoidal" curve [12]. The more relative increment in a stimulus is perceived more clearly when the intensity of the stimulus is smaller than when it is larger. For instance, a change from 10% to 20% of a color is perceived more clearly than a change from 85% to 95%. Indeed, it has been a well observed phenomena regarding many other phenomena involving how sensitive one is (including animals) to different stimuli [3]. Thus, the distance function is defined as: 9], requiring only 4 bits of storage per histogram bin. This allows substantial reduction in storage, and yet a reasonably fine discretization of the bins.
dLog(a, b) = \sum_{i=0}^{M} |f(a[i]) − f(b[i])|, where f(x) = 0 if x = 0; f(x) = 1 if 0 < x ≤ 1; and f(x) = ⌈log_2 x⌉ + 1 otherwise. The values of f fall in the range [0, 9], requiring only 4 bits of storage per histogram bin. This allows a substantial reduction in storage, and yet a reasonably fine discretization of the bins.
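A minimal Python sketch of this comparison is given below; the function names are ours, and we assume the histogram bins have been scaled to values of at most 255 before f is applied, which the text above leaves open.

```python
import math

def f(x):
    # Discretized log scale used by dLog; for bins scaled to at most 255
    # the values fall in [0, 9] and fit in 4 bits.
    if x == 0:
        return 0
    if x <= 1:
        return 1
    return math.ceil(math.log2(x)) + 1

def dlog(hist_a, hist_b):
    # Sum of absolute differences of the log-discretized bin values;
    # each histogram concatenates the border and interior bins.
    return sum(abs(f(a) - f(b)) for a, b in zip(hist_a, hist_b))
```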
The BIC approach was shown in [19] to outperform several other CBIR approaches and, as such, we adopt it in our CBsIR proposal to extract and compare the visual feature of each tile with the goal of improving the retrieval accuracy.
Relevance Feedback for CBsIR
Despite the great potential of relevance feedback shown in CBIR systems using global representations and in RBIR systems, to the best of our knowledge there is no research that uses it within CBsIR systems. In this section we present our solution for CBsIR by using relevance feedback to learn the user's intention. Our relevance feedback approach has three main components: (1) a tile re-weighting scheme that assigns penalties to each tile of database images and updates those tile penalties for all relevant images retrieved at each iteration using both the relevant (positive) and irrelevant (negative) images identified by the user; (2) a query refinement strategy that is based on the tile re-weighting scheme to approach the most informative query according to the user's intention; (3) an image similarity measure that refines the final ranking of images using the user's feedback information. Each of these components is explained in detail in the following subsections.
Tile Re-Weighting Scheme
Research on RBIR [7,6] has proposed region re-weighting schemes for relevance feedback. Here, we design a tile re-weighting scheme that specializes the technique presented in [7] to accommodate our tile-oriented (rather than region-oriented) HTM approach for CBsIR. It should be emphasized that instead of considering all the images in the database to compute the parameters for the region weights [6] (which is computationally expensive), our tile re-weighting scheme uses only the positive and negative examples identified by the user to update the tile penalties of the positive images, which is much more efficient. Moreover, the region re-weighting scheme in [7] uses a predefined similarity threshold to determine whether a region and an image are similar; without such a threshold, the comparison of region pairs would become too expensive, since images might consist of different and large numbers of regions. This threshold is sensitive and subject to change for different kinds of image datasets, so obtaining the right threshold is yet another challenge for relevance feedback methods in RBIR. Our RF method for the CBsIR problem, in contrast, does not need any threshold, because the number of obtained tiles is the same (and small) for each database image and there is an implicit relationship between the tiles, which makes it easier to compare them.
In our system, the user provides feedback information by identifying positive and negative examples among the retrieved images. The basic assumption is that important tiles should appear more often in positive images than unimportant tiles, e.g., "background tiles" should yield to "theme tiles" in positive images. On the other hand, important tiles should appear less often in negative images than unimportant tiles. Following the principle of "more similar means better matched, thus less penalty", we assign a penalty to every tile that represents the database image for the matching process. The user's feedback information is used to estimate the "tile penalties" of all positive images, which also refines the final ranking of images. During the feedback iterations, the user does not need to specify which tile of a certain positive image is similar to the query; requiring this would make the problem simpler to solve, but at an additional cost to the user.
Next, we introduce some definitions used to determine the tile penalty and to formalize the overall relevance feedback process. Definition 1: The distance between two tiles T_a and T_b, from images I_a and I_b respectively, is:
DT(T_a, T_b) = \frac{1}{m} \sum_{i=1}^{m} d(Feature(T_{ai}), Feature(T_{bi}))

where T_{ai} and T_{bi} are sub-tiles of T_a and T_b respectively, m is the number of unique leaf nodes in the tiles' tree structures at any hierarchical level (if already at the leaf level, m = 1), and the distance function d is to be instantiated with some particular measure applied to the result of the feature extraction performed by the Feature function on the tiles, e.g., BIC's dLog() function defined in the previous section. • Definition 2: The penalty for a certain tile i of a database image after k iterations is defined as TP_i(k), i = 0, · · · , N_T, where N_T + 1 is the number of tiles per database image, and TP_i(0) is initialized as 1/(N_T + 1). • For instance, in Figure 4, N_T + 1 = 1 + 9 + 16, i.e., it is equal to the number of nodes in the tree structure representing the hierarchical partition of a database image; at the lowest level, only unique nodes count. Definition 3: For each tile of a positive image, we define a measure DTS of the distance between a tile T and an image set IS = {I_1, I_2, · · · , I_n}. This reflects the extent to which the tile is consistent with the other positive images in the feature space. Intuitively, the smaller this value, the more important this tile is in representing the user's intention.
DTS(T, IS) = \sum_{i=1}^{n} \exp(DT(T, I_i^0)), if T is at the full-tree level, and DTS(T, IS) = \sum_{i=1}^{n} \exp(\min_{j=1..N_T} DT(T, I_i^j)), if T is at a subtree level,
where N_T in this case is the number of tiles at the current subtree level. • Assuming that I is one of the identified positive example images, we can compute the tile penalties of image I, which consists of tiles {T_0, T_1, · · · , T_{N_T}}. The user provides positive and negative example images during each k-th iteration of feedback, denoted respectively as IS^+(k) = {I_1^+(k), · · · , I_p^+(k)} and IS^-(k) = {I_1^-(k), · · · , I_q^-(k)}, where p + q is typically much smaller than the size of the database.
Based on the above preparations, we now come to the definition of tile penalty. Definition 4: For every positive example image, the tile penalty of T_i after k iterations of feedback is computed (and normalized) as:

TP_i(k) = \frac{W_i × DTS(T_i, IS^+(k))}{\sum_{j=0}^{N_T} W_j × DTS(T_j, IS^+(k))}

where the weight W_i, the inverse of DTS(T_i, IS^-(k)), acts as a penalty factor reflecting the influence of the negative examples. • This captures the intuition that a tile from a positive example image should be penalized if it is similar to the negative examples. Basically, we compute the distances DTS between a particular tile T and the positive image set IS^+ as well as the negative image set IS^-, respectively, to update the penalty of that tile of a positive example image. The inverse of the tile's distance from the negative image set is used to weight its corresponding distance from the positive image set.
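The sketch below shows one way to realize Definitions 3 and 4 in Python. It simplifies Definition 3 by always taking the best-matching tile of each image at the relevant level, and the attribute names (image.tiles, tile_distance) as well as the choice W_i = 1/DTS(T_i, IS^-(k)) are our reading of the text rather than details fixed by it.

```python
import math

def dts(tile, image_set, tile_distance):
    # Definition 3 (simplified): for every image in the set, take the
    # best-matching tile at the corresponding level and accumulate exp(.)
    # of those distances.
    return sum(math.exp(min(tile_distance(tile, t) for t in img.tiles))
               for img in image_set)

def update_tile_penalties(image, positives, negatives, tile_distance):
    # Definition 4: weight each tile's positive-set distance by the inverse
    # of its negative-set distance, then normalize over all tiles of the image.
    weighted = []
    for tile in image.tiles:
        # small epsilon guards against an empty negative set
        w = 1.0 / max(dts(tile, negatives, tile_distance), 1e-12)
        weighted.append(w * dts(tile, positives, tile_distance))
    total = sum(weighted)
    return [v / total for v in weighted]
```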
Let us now illustrate the above methodology with a simple example, which also motivates the notion of tile penalty. For simplicity, assume that the color palette consists of only three colors: black, gray and white. Figure 6 shows the top 3 retrieved images and the user's feedback judgement. Image I_1 is marked as a positive example since it actually contains the query image, which is exactly the sub-image retrieval problem we are dealing with. Image I_2 is also marked as a positive example because it is an enlargement of the query image (and therefore contains it as well). For the sake of illustration, assume a two-level multi-scale representation of the database images is used, as in Figure 7.
The tile penalties per database image are initialized as 0.1 for each of the 10 tiles, i.e., TP_i(0) = 0.1, i ∈ [0, 9]. Now, take tile T_1 for example. According to Definition 3, we need to compute the distances DTS between T_1 and the positive/negative image sets. In order to do this, firstly, the distances between T_1 and all tiles at the corresponding subtree levels of all the images in the positive/negative image sets are obtained using Definition 1. Then, using Definition 4, the new penalty of T_1 is updated from 0.1 to 0.090. The penalties of the other tiles are updated in the same way during each feedback iteration. We illustrate the new values of all tile penalties for database image I_1, taken as a positive example, after one feedback iteration in Figure 7. We can see that after the user provides feedback information, some tiles lose some weight while others gain. For instance, T_1, T_2, T_3 and T_9 receive smaller penalties now because they only contain gray and/or black, colors which are also in the query. T_0, T_4, T_5, T_7 and T_8 are penalized more since they all contain white. The new weights for these tiles generally follow the trend that the larger the fraction of white, the larger the penalty. T_6, which is a rotation of the query image, maintains its weight in this iteration; this means that our system is, to some extent, also capable of coping with changes such as rotation. Besides, looking more closely at the updated tile penalties of positive image I_1, T_1 now receives a larger penalty than T_3, although they are similar to the query image to the same degree. Note that, according to Definition 4, both the positive and the negative example images are used to calculate the new tile penalties, and we penalize a tile more if it is also somewhat more similar to the negative example images compared with other tiles in the positive example image. Thus it is reasonable that the tile penalty for T_1 is higher than that for T_3 after feedback learning, since T_1 contains some black, which is also in the negative example image I_3, while T_3 contains only gray.
Query Feature Update
The relevance feedback process using the query refinement strategy is based on the tile re-weighting scheme and on all positive and negative example images. The main concern is that we need to preserve as much as possible the original feature of the query image while introducing new feature elements that can capture more relevant images. Considering the hierarchical tree structure of the query image, we use the most similar tile (i.e., the one with minimum tile penalty) at every subtree level of each positive image to update the query feature at the corresponding subtree level. Definition 5: The updated query feature after k iterations is:
qn_l^k[j] = \frac{\sum_{i=1}^{p} (1 − TPmin_{i_l}(k)) × Pos_{i_l}^k[j]}{\sum_{i=1}^{p} (1 − TPmin_{i_l}(k))}
where qn_l^k is the new feature, with M dimensions, for a subtree (tile) at the l-th level of the tree structure of the query image after k iterations; TPmin_{i_l}(k) is the minimum tile penalty found at the l-th level of the tree structure of the i-th positive image after k iterations; Pos_{i_l}^k is the feature of the subtree (tile) with minimum tile penalty at the l-th level of the i-th positive image's tree structure after k iterations; and p is the number of positive images given by the user at this iteration. • Intuitively, we use a weighted average to update the feature of each subtree (tile) of the query, based on the features of those tiles that have minimum tile penalties within the respective positive images. In this way, we try to approach the optimal query, i.e., the one carrying the most information needed to retrieve as many images relevant to the query as possible.
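A possible Python rendering of Definition 5 is sketched below; the accessors tiles_at(level), tile.penalty, and tile.feature are hypothetical names for whatever bookkeeping an implementation keeps for each positive image.

```python
def update_query_feature(query_levels, positives):
    # Definition 5 (sketch): for each level of the query's tree, average the
    # features of the minimum-penalty tiles of the positive images, weighted
    # by (1 - penalty).
    new_query = {}
    for level, old_feature in query_levels.items():
        numerator = [0.0] * len(old_feature)
        denominator = 0.0
        for img in positives:
            best = min(img.tiles_at(level), key=lambda t: t.penalty)
            w = 1.0 - best.penalty
            denominator += w
            for j, value in enumerate(best.feature):
                numerator[j] += w * value
        new_query[level] = ([v / denominator for v in numerator]
                            if denominator > 0 else old_feature)
    return new_query
```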
Image Similarity
With the updated query feature and the tile penalties of positive images, we can now define the distance between a database image and the query, which is used to refine the ranking of images at each feedback iteration. In order to locate the best match to the query sub-image, our image similarity measure takes the minimum of the distances between the database image's tiles and the query (recall that both the database image and the query sub-image are modeled by tree structures in the same way) at the corresponding hierarchical levels of the tree structure, each distance weighted by the tile penalty of the corresponding database image tile. Definition 6: The distance between the (updated) query image Q and a database image I at the k-th iteration is:
DI_k(I, Q) = \min_{i=0..N_T} TP_i(k − 1) × DT(I_i, Q_j)
where N_T + 1 is the number of subtrees (tiles) in the tree structure of a database image, and TP_i(k − 1) is the tile penalty of the i-th tile of image I after k − 1 iterations. • For the comparison of full tree structures, i = 0 and j = 0, indicating the full tree structures of the database image and of the query image. For the comparison of subtree structures, i = 1..N_l for each 1 ≤ j ≤ (L − 1), where N_l is the number of subtree structures at the l-th level of the tree structure and L is the number of levels of the tree structure mapped from the hierarchical partition; j indicates the subtree structure at a particular level of the query image's tree structure, obtained by shrinking the original query tree structure so that the comparison with the subtree structures of database images becomes meaningful.
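The ranking step can then be sketched as follows in Python; query.tile_for(level), tile.level, and img.id are placeholder names for the bookkeeping assumed here, not identifiers taken from the paper.

```python
def image_distance(db_image, query, penalties, tile_distance):
    # Definition 6 (sketch): the score of a database image is the minimum,
    # over its tiles, of the penalty-weighted distance between the tile and
    # the query shrunk to the corresponding hierarchical level.
    return min(penalties[i] * tile_distance(tile, query.tile_for(tile.level))
               for i, tile in enumerate(db_image.tiles))

def rank_images(database, query, all_penalties, tile_distance):
    # Sort the database by increasing distance to the (possibly refined) query.
    scored = [(image_distance(img, query, all_penalties[img.id], tile_distance), img)
              for img in database]
    scored.sort(key=lambda pair: pair[0])
    return [img for _, img in scored]
```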
Finally, the overall relevance feedback process for the CBsIR system can be summarized in the following algorithm:
1. The user submits a query (sub-)image.
2. The system retrieves an initial set of images using the proposed similarity measure, i.e., database images containing tiles similar to the query sub-image.
3. The user marks the presented images as positive or negative examples according to his/her intention.
4. The system updates the tile penalties of the positive example images based on this feedback (Definition 4).
5. The system refines the query feature using the updated tile penalties of the positive images (Definition 5).
6. The revised query and the new tile penalties of the database images are used to compute the ranking score of each image and to sort the results.
7. Show the new retrieval results and, if the user wishes to continue, go to step 3.
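Put together, the session loop can be sketched as the following Python function; all component routines are passed in (for instance, the ranking, re-weighting, and query-refinement sketches above, with their interfaces simplified), and get_user_feedback is a hypothetical callback standing for the user's judgements.

```python
def relevance_feedback_session(database, query, rank_images, update_tile_penalties,
                               refine_query, get_user_feedback, max_iterations=10):
    # Steps 1-7 of the algorithm above: rank, collect feedback, re-weight the
    # tiles of the positive images, refine the query, and repeat.
    penalties = {img.id: [1.0 / len(img.tiles)] * len(img.tiles) for img in database}
    ranking = rank_images(database, query, penalties)
    for _ in range(max_iterations):
        positives, negatives = get_user_feedback(ranking[:20])
        if not positives and not negatives:
            break
        for img in positives:
            penalties[img.id] = update_tile_penalties(img, positives, negatives)
        query = refine_query(query, positives)
        ranking = rank_images(database, query, penalties)
    return ranking
```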
Experiments and Results
Before going further, let us define the metrics we use to measure retrieval effectiveness. For certain applications, it is more useful that the system brings new relevant images (found due to the update of the query feature from previous feedback) forward into the top range, rather than presenting again relevant images that were already retrieved. For other applications the opposite holds: the user is more interested in obtaining more relevant images at each iteration, keeping those s/he has already seen before. Given these observations, we use two complementary pairs of precision and recall measures: actual recall and actual precision, which consider all relevant images presented at the current iteration (whether or not they were retrieved before), and new recall and new precision, which consider only the relevant images retrieved for the first time at the current iteration; in both cases recall is measured against the predefined answer set and precision against the number of presented images. The new recall and precision explicitly measure the learning aptitude of the system; ideally it retrieves more new relevant images as soon as possible. Moreover, we also measure the total number of distinct relevant images the system can find during all the feedback iterations. This is a history-based measure that implicitly includes relevant images "lost" (i.e., no longer among the currently presented images) along the process. We call these measures cumulative recall and cumulative precision, defined as follows:
1. Cumulative Recall: the percentage of distinct relevant images found over all iterations so far (not necessarily shown at the current iteration) over the number of relevant images in the predefined answer set.
2. Cumulative Precision: the percentage of distinct relevant images found over all iterations so far over the number of returned (presented) images.
Table 1 exemplifies the measures mentioned above, assuming the answer set for a query contains 3 images A, B, C and the number of returned (presented) images is 5.
In addition to the above measures, we also evaluate storage overhead and query processing time.
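For reference, the following Python sketch computes these quantities for a sequence of feedback iterations; the exact denominators (answer-set size for recall, number of presented images for precision) are our reading of the definitions above.

```python
def evaluate_session(answer_set, iterations, presented=20):
    # iterations: one list of retrieved image ids per feedback iteration.
    answer = set(answer_set)
    seen_relevant = set()
    rows = []
    for results in iterations:
        shown = set(results[:presented])
        relevant_now = shown & answer
        new_now = relevant_now - seen_relevant
        seen_relevant |= relevant_now
        rows.append({
            "actual_recall": len(relevant_now) / len(answer),
            "actual_precision": len(relevant_now) / presented,
            "new_recall": len(new_now) / len(answer),
            "new_precision": len(new_now) / presented,
            "cumulative_recall": len(seen_relevant) / len(answer),
            "cumulative_precision": len(seen_relevant) / presented,
        })
    return rows
```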
We test the proposed relevance feedback approach using a heterogeneous image dataset consisting of 10,150 color JPEG images: a mixture of the public Stanford10k dataset and some images from one of COREL's CD-ROMs, each of which falls into a particular category - we use 21 such categories. Some categories do not have rotated or translated images, but others do. On average, each answer set has 11 images, and none of the answer sets has more than 20 images, which is the number of images we present to the user for feedback during each iteration. It is important to note that the queries and answer sets are not part of the Stanford10k dataset, in order to minimize the probability that other images, not contained in the expected answer set, could also be part of the answer but not accounted for. We manually crop part of an image from each of the above categories to form a query set of 21 queries (one per category). Images of the same category serve as the answer set for each query (one sample query and its corresponding answer set are shown in Figure 1). The size of the query image varies, being on average 18% of the size of the database images. The following performance results are collected from the online demo available at http://db.cs.ualberta.ca/mn/CBsIR.html. (A sample of the two initial iterations using our system is presented in the Appendix.)
In our experiments, the maximum number of iterations explored is set to 10 (users give feedback 9 times by pointing out which images are relevant (positive)/irrelevant (negative) to the query) and we present the top 20 retrieved images at each iteration. While within the same query session the information collected at one step of the relevance feedback phase is used in the next step (as indicated in the definitions presented in Section 3), the information collected across different query sessions is not integrated into the search for the next queries - even if the very same query is submitted to the system again. I.e., we assume query sessions are independent; more specifically, once the user goes to the initial page, all accumulated learning is cleared. This consideration is based on the observation of the subjectivity of human perception and on the fact that even the same person may perceive the same retrieval result differently at different times. As discussed earlier, we use BIC histograms to model the contents of an image tile. The number of quantized colors in such histograms is therefore a parameter for BIC. We use two different values for this parameter, 16 and 64 colors, in order to evaluate the influence of the underlying tile model on the overall retrieval effectiveness. Table 2 shows how many iterations, on average, were necessary to have the original image (the one from which the query sub-image was extracted) placed within the top 20 images. It is clear that using 64 quantized colors is more effective, as the hit rate of the original images is almost optimal. Even though this trend, i.e., the more colors the better the retrieval, is fairly intuitive, it is interesting to see that this advantage does not grow linearly with the number of colors across all experiments. That is to say, even using a low number of colors one can still obtain fairly good results.
The retrieval accuracy using 64 quantized colors is shown in Figure 8 and Figure 9. As can be clearly seen, after 5 iterations the system has already learned most of the information it could learn, i.e., the information gain (given by the new recall and new precision curves) is nearly null. On the other hand, after only 5 iterations the actual recall and actual precision values increased by 55% and 60%, respectively. It is also noteworthy that the stable actual precision value of nearly 40% is not as low as it may seem at first. The answer sets have an average of 11 images and, since the user is presented with 20 images, the maximum precision one could get (on average) would be about 50%, as almost half of the displayed images could not be considered relevant by construction. This interpretation leads to the proposal of the following measure:
• Normalized Precision: the actual precision over the maximum possible actual precision value.
Interestingly enough, careful consideration of such a measure shows that it is equivalent to the usual notion of (actual) recall. Indeed, consider R and A to be the sets of relevant answers and of retrieved answers with respect to a given query. The actual precision is then defined as |R ∩ A|/|A|. The maximum precision value one can obtain is |R|/|A|. When the former is divided by the latter, one obtains |R ∩ A|/|R|, which is precisely the definition of actual recall. This leads to the argument that precision-based measures are not well suited for this type of scenario, where non-relevant images are very likely to be included among the presented results simply because more images are shown than there are relevant ones. The actual recall, being concerned only with the relevant images, is a more realistic measure. Under this argument, 70% of stable actual recall (or normalized precision) after 5 iterations seems quite reasonable.
We also obtained about 85% for cumulative recall and about 50% for cumulative precision. These values are higher than those for actual recall and actual precision because some relevant images that may be "lost" in subsequent iterations are always accounted for in the cumulative measures.
Using 16 quantized colors, as one would expect, yields less accuracy than using 64 quantized colors. However, an interesting aspect shown in Figures 10 and 11 is that even though the amount of information (i.e., the number of colors) was reduced by 75%, the effectiveness was reduced by at most 10% compared to the values in Figures 8 and 9. The cost of the loss of information is more evident when looking at the "learning aptitude": using 16 colors required twice as many iterations to bring the curves to a stable state. Still, this shows a sublinear dependence on the number of colors: using 4 times more colors yields only 10% more effectiveness and 2 times faster learning.
Another interesting observation, which supports the main advantage of using more colors for tile abstraction, can be seen when directly comparing the new precision and recall curves for different numbers of colors (Figures 12 and 13). Up until the 4th or 5th iteration, using 64 colors yields higher values, meaning that it is learning faster; after that point, it has basically learned what it could. The curve for 16 colors, on the other hand, shows that the method is still learning. Figure 14 shows the average time required to process a query during each iteration, i.e., to access all disk-resident data, complete the learning from the user's feedback at the current iteration (not applicable to the first iteration), obtain the distance between the query image and the database images, and sort them by their resulting ranks. The first iteration takes, on average, slightly less than 2 seconds when using 64 quantized colors and 0.6 seconds when using 16 quantized colors, whereas each subsequent iteration requires about 2.5 seconds and 1 second, respectively, for the two feature representations. This slight increase is due to the overhead of computing and updating the tile penalties at each iteration. Note also that the gain in speed follows the smaller number of colors used: with 64 colors, query processing is roughly 2.5 to 3 times slower than with only 16 quantized colors (2 vs. 0.6 seconds for the first iteration and 2.5 vs. 1 second for subsequent ones). Extracting image features from the image database, applying the BIC method, and generating the metadata file requires about 0.15 secs/image on a computer running Linux 2.4.20 with an AMD Athlon XP 1900+ CPU and 1GB of main memory, and is independent of the number of colors used - this procedure can be done off-line and should not be considered part of the query processing overhead.
Finally, the storage cost of the disk-resident metadata using 64 quantized colors is 10.5 MB (only about 20% of the size of the image database), while using 16 quantized colors needs proportionally less storage, namely 2.7 MB, again in line with the smaller representation overhead.
Conclusions
In this paper we have shown, for the first time, how relevance feedback can be used to improve the performance of CBsIR. We presented a relevance feedback-based technique built on a tile re-weighting scheme that assigns penalties to each tile of the database images and updates those of all relevant images using both the positive and negative examples identified by the user. The user's feedback is used to refine the image similarity measure by weighting the tile distances between the query and the database image tiles with their corresponding tile penalties. We combine this learning method with the BIC approach for image modeling to improve the performance of content-based sub-image retrieval. Our results on an image database of over 10,000 images suggest that the learning method is quite effective for CBsIR. While using fewer colors within BIC reduces storage overhead and speeds up query processing, it does not substantially affect retrieval effectiveness in the long term; the main drawback is that the system takes longer to "learn", making the overall retrieval task a longer one. A few possible avenues for further investigation include the design of a disk-based access structure for the hierarchical tree (to enhance scalability for larger databases), the use of better (more powerful yet compact) representations for the tile features, possibly removing the background of the images, and the incorporation of more sophisticated machine learning techniques to shorten the gap between low-level image features and the high-level semantic content of images, so as to better understand the user's intention.
| 7,482 |
0904.3093
|
2950663737
|
It is shown that one can count @math -edge paths in an @math -vertex graph and @math -set @math -packings on an @math -element universe, respectively, in time @math and @math , up to a factor polynomial in @math , @math , and @math ; in polynomial space, the bounds hold if multiplied by @math or @math , respectively. These are implications of a more general result: given two set families on an @math -element universe, one can count the disjoint pairs of sets in the Cartesian product of the two families with @math basic operations, where @math is the number of members in the two families and their subsets.
|
Concerning set packings the situation is analogous, albeit the research has been somewhat less extensive. Deciding whether a given family of @math subsets of an @math -element universe contains a @math -packing is known to be W[1]-hard @cite_5 , and thus it is unlikely that the problem is fixed parameter tractable, that is, solvable in time @math for some function @math and constant @math . If @math is fairly large, say exponential in @math , the fastest known algorithms actually count the packings by employing the inclusion--exclusion machinery @cite_23 @cite_3 and run in time @math . This bound holds also for the presented algorithm (cf. Theorem ).
|
{
"abstract": [
"",
"We present a fast algorithm for the subset convolution problem:given functions f and g defined on the lattice of subsets of ann-element set n, compute their subset convolution f*g, defined for S⊆ N by [ (f * g)(S) = [T ⊆ S] f(T) g(S T),,]where addition and multiplication is carried out in an arbitrary ring. Via Mobius transform and inversion, our algorithm evaluates the subset convolution in O(n2 2n) additions and multiplications, substanti y improving upon the straightforward O(3n) algorithm. Specifically, if the input functions have aninteger range [-M,-M+1,...,M], their subset convolution over the ordinary sum--product ring can be computed in O(2n log M) time; the notation O suppresses polylogarithmic factors.Furthermore, using a standard embedding technique we can compute the subset convolution over the max--sum or min--sum semiring in O(2n M) time. To demonstrate the applicability of fast subset convolution, wepresent the first O(2k n2 + n m) algorithm for the Steiner tree problem in graphs with n vertices, k terminals, and m edges with bounded integer weights, improving upon the O(3kn + 2k n2 + n m) time bound of the classical Dreyfus-Wagner algorithm. We also discuss extensions to recent O(2n)-time algorithms for covering and partitioning problems (Bjorklund and Husfeldt, FOCS 2006; Koivisto, FOCS 2006).",
"Given a set @math with @math elements and a family @math of subsets, we show how to partition @math into @math such subsets in @math time. We also consider variations of this problem where the subsets may overlap or are weighted, and we solve the decision, counting, summation, and optimization versions of these problems. Our algorithms are based on the principle of inclusion-exclusion and the zeta transform. In effect we get exact algorithms in @math time for several well-studied partition problems including domatic number, chromatic number, maximum @math -cut, bin packing, list coloring, and the chromatic polynomial. We also have applications to Bayesian learning with decision graphs and to model-based data clustering. If only polynomial space is available, our algorithms run in time @math if membership in @math can be decided in polynomial time. We solve chromatic number in @math time and domatic number in @math time. Finally, we present a family of polynomial space approximation algorithms that find a number between @math and @math in time @math ."
],
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_23"
],
"mid": [
"",
"2045326192",
"2074359677"
]
}
|
Counting Paths and Packings in Halves
|
Some combinatorial structures can be viewed as two halves that meet in the middle. For example, a k-edge path is a combination of two k/2-edge paths. Bidirectional search [10,25] finds such structures by searching the two halves simultaneously until the two search frontiers meet. In instantiations of this idea, it is crucial to efficiently join the two frontiers to obtain a valid or optimal solution. For instance, the meet-in-the-middle algorithm for the Subset Sum problem, by Horowitz and Sahni [15], implements the join operation via a clever pass through two sorted lists of subset sums.
In the present paper, we take the meet-in-the-middle approach to counting problems, in particular, to counting paths and packings. Here, the join operation amounts to consideration of pairs of disjoint subsets of a finite universe, each subset weighted by the number of structures that span the subset. We begin in Sect. 2 by formalizing this as the Disjoint Sum problem and providing an algorithm for it based on inclusion-exclusion techniques [5-7, 17, 18, 20]. In Sect. 3 we apply the method to count paths of k edges in a given n-vertex graph in time O*(\binom{n}{k/2}); throughout the paper, O* suppresses a factor polynomial in the mentioned parameters (here, n and k). In Sect. 4 we give another application, to count k-packings in a given family of m-element subsets of an n-element universe in time O*(\binom{n}{mk/2}). For both problems we also present slightly slower algorithms that require only polynomial space.
We note that an earlier report on this work under a different title [8] already introduces a somewhat more general technique and an application to counting paths. The report has been cited in some recent papers [1,26], which we, among other related previous work, discuss below.
The Disjoint Sum Problem
Given two set families A and B, and functions α and β that associate with each member of A and B, respectively, an element from a ring R, the Disjoint Sum problem is to find the sum of the products α(A)β(B) over all disjoint pairs of subsets (A, B) in the Cartesian product A × B; denote the sum by α ⊛ β. In applications, the ring R is typically the set of integers equipped with the usual addition and multiplication operations. Note that, had the condition of disjointness been removed, the problem could easily be solved using about |A| + |B| additions and one multiplication. However, to respect the disjointness condition, the straightforward algorithm appears to require about |A||B| ring operations and tests of disjointness.
In many cases, we fortunately can do better by applying the principle of inclusion and exclusion. The basic idea is to compute the sum over pairs (A, B) with A ∩ B = ∅ by subtracting the sum over pairs with A ∩ B = X = ∅ from the sum over pairs with no constraints. For a precise treatment, it is handy to denote by N the union of all the members in the families A and B, and extend the functions α and β to all subsets of N by letting them evaluate to 0 outside A and B, respectively. We also use the Iverson bracket notation: [P ] = 1 if P is true, and [P ] = 0 otherwise. Now, by elementary manipulation,
α ⊛ β = \sum_A \sum_B [A ∩ B = ∅] α(A) β(B) = \sum_A \sum_B \sum_X (−1)^{|X|} [X ⊆ A ∩ B] α(A) β(B) = \sum_X (−1)^{|X|} \sum_A \sum_B [X ⊆ A] [X ⊆ B] α(A) β(B) = \sum_X (−1)^{|X|} \Big( \sum_{A ⊇ X} α(A) \Big) \Big( \sum_{B ⊇ X} β(B) \Big). (1)
Here we understand that A, B, and X run through all subsets of N unless otherwise specified. Note also that the second equality holds because every nonempty set has exactly as many subsets of even size as subsets of odd size.
To analyze the complexity of evaluating the inclusion-exclusion expression (1), we define the lower set of a set family F, denoted by ↓F, as the family consisting of all the sets in F and their subsets. We first observe that in (1) it suffices to let X run over the intersection of ↓A and ↓B, for any other X has no supersets in A or in B. Second, we observe that the values
\hat{α}(X) := \sum_{A ⊇ X} α(A),
for all X ∈ ↓A, can be computed in a total of |↓A| · n ring and set operations, as follows. Let a_1, a_2, . . . , a_n be the n elements of N. For any i = 0, 1, . . . , n and X ∈ ↓A define \hat{α}_i(X) as the sum of the α(A) over all sets A ∈ ↓A with A ⊇ X and A ∩ {a_1, a_2, . . . , a_i} = X ∩ {a_1, a_2, . . . , a_i}. In particular, \hat{α}_n(X) = α(X) and \hat{α}_0(X) = \hat{α}(X). Furthermore, by induction on i one can prove the recurrence
\hat{α}_{i−1}(X) = \hat{α}_i(X) + [a_i ∉ X] [X ∪ {a_i} ∈ ↓A] \hat{α}_i(X ∪ {a_i});
for details, see the closely related recent work on trimmed zeta transforms and Moebius inversion [6,7]. Thus, for each i, the values \hat{α}_i(X) for all X ∈ ↓A can be computed with |↓A| ring and set operations. We have shown the following.
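As a concrete illustration of the procedure just described, the following Python sketch evaluates the disjoint sum over integer-valued families; the dictionary-based set representation and function names are ours, and the sketch favors clarity over matching the exact operation counts.

```python
from itertools import combinations

def lower_set(family):
    # Extend a {frozenset: value} map to its lower set, filling subsets with 0.
    closed = {}
    for S in family:
        for r in range(len(S) + 1):
            for sub in combinations(sorted(S), r):
                closed.setdefault(frozenset(sub), 0)
    for S, value in family.items():
        closed[S] += value
    return closed

def superset_sums(values, universe):
    # Trimmed zeta transform over supersets: afterwards hat[X] equals the sum
    # of values[A] over all A in the lower set with A a superset of X.
    hat = dict(values)
    for a in universe:
        for X in hat:
            if a not in X:
                Y = X | {a}
                if Y in hat:
                    hat[X] += hat[Y]  # Y contains a, so it is stable in this pass
    return hat

def disjoint_sum(alpha, beta, universe):
    # Identity (1), with X restricted to the intersection of the lower sets.
    a_hat = superset_sums(lower_set(alpha), universe)
    b_hat = superset_sums(lower_set(beta), universe)
    return sum((-1) ** len(X) * a_hat[X] * b_hat[X]
               for X in a_hat.keys() & b_hat.keys())
```

For instance, with alpha = beta = {frozenset({1}): 1, frozenset({2}): 1} over the universe {1, 2}, the routine returns 2, corresponding to the two ordered disjoint pairs ({1}, {2}) and ({2}, {1}).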
p_0(S, v) = [S = ∅],   p_j(S, v) = \sum_{u ∈ S} p_{j−1}(S \ {u}, u) [uv ∈ E]  for j > 0.
Alternatively, one may use the inclusion-exclusion formula [17,20]
p_j(S, v) = \sum_{Y ⊆ S} (−1)^{|S \ Y|} w_j(Y, v),
where w_j(Y, v) is the number of j-edge walks starting from v and visiting only vertices of Y, that is, sequences u_0 u_1 · · · u_j with u_0 = v, each u_{i−1} u_i ∈ E, and u_1, u_2, . . . , u_j ∈ Y. Note that for any given Y, v, and j, the term w_j(Y, v) can be computed in time polynomial in n. Using either of the above two formulas, the values p_j(S, v), for all v ∈ V and sets S ⊆ V \ {v} of size j, can be computed in time O*(\binom{n}{\downarrow j}); here and henceforth, \binom{q}{\downarrow r} denotes the sum of the binomial coefficients \binom{q}{0} + \binom{q}{1} + · · · + \binom{q}{r}. In particular, the number of k-edge paths in the graph is obtained as the sum of p_k(S, v) over all v ∈ V and S ⊆ V \ {v} of size k, in time O*(\binom{n}{\downarrow k}). However, meet-in-the-middle yields a much faster algorithm. Assuming for simplicity that k is even, the path has a mid-vertex, v_{k/2}, at which the path uniquely decomposes into two k/2-edge paths, namely v_0 v_1 · · · v_{k/2} and v_{k/2} v_{k/2+1} · · · v_k, with almost disjoint supports. Thus, for each choice of the mid-vertex v, the number of k-edge paths with that mid-vertex can be obtained as a disjoint sum of the half-path counts p_{k/2}(·, v), which can be evaluated with the technique of the previous section. In the remainder of this section we present a polynomial-space variant of the above described algorithm. Let the mid-vertex v be fixed. Then the task is to compute, for each X ⊆ V \ {v} of size at most k/2, the sum
\sum_{S ⊇ X} p_{k/2}(S, v) = \sum_{S ⊇ X} \sum_{Y ⊆ S} (−1)^{|S \ Y|} w_{k/2}(Y, v)
in space polynomial in n and k. If done in a straightforward manner, the running time, ignoring polynomial factors, becomes proportional to the number of triplets (X, S, Y) with X, Y ⊆ S ⊆ V \ {v} and |S| = k/2. This number is \binom{n−1}{k/2} 2^k, because there are \binom{n−1}{k/2} choices for S and, for any fixed S, there are 2^{k/2} choices for X and 2^{k/2} choices for Y.
A faster algorithm is obtained by reversing the order of summation:
\sum_{S ⊇ X} p_{k/2}(S, v) = \sum_Y w_{k/2}(Y, v) \sum_S (−1)^{|S \ Y|} [X, Y ⊆ S] = \sum_Y w_{k/2}(Y, v) (−1)^{k/2−|Y|} \binom{n − |X ∪ Y|}{k/2 − |X ∪ Y|};
here Y and S run through all subsets of V \ {v} of size at most k/2 and exactly k/2, respectively. The latter equality holds because S is of size k/2 and contains X ∪ Y. It remains to find in how many ways one can choose the sets X and Y such that the union U := X ∪ Y is of size at most k/2. This number is
\sum_{s=0}^{k/2} \binom{n − 1}{s} 3^s ≤ \frac{3}{2} \binom{n − 1}{k/2} 3^{k/2},
because there are \binom{n−1}{s} ways to choose U of size s, and one can put each element of U either into X or into Y or into both.
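For small instances, the dynamic program for p_{k/2}(S, v) and the meet-in-the-middle join can be sketched in Python as follows; the routine expects a disjoint_sum implementation such as the one sketched in Sect. 2, and it trades the stated time bounds for simplicity.

```python
def count_k_edge_paths(vertices, edges, k, disjoint_sum):
    # Count simple paths with k edges (k assumed even): build the half-path
    # counts p_{k/2}(S, v) by dynamic programming, then join the two halves at
    # every possible mid-vertex v via the Disjoint Sum of their supports.
    # Each undirected path is seen once per orientation, hence the // 2.
    assert k % 2 == 0 and k >= 2
    adj = {v: set() for v in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    # p[(S, v)] = number of paths with endpoint v whose other vertices are S
    p = {(frozenset(), v): 1 for v in vertices}
    for _ in range(k // 2):
        nxt = {}
        for (S, u), count in p.items():
            for v in adj[u] - S:
                key = (S | {u}, v)
                nxt[key] = nxt.get(key, 0) + count
        p = nxt
    total = 0
    for v in vertices:
        alpha = {S: c for (S, w), c in p.items() if w == v}
        total += disjoint_sum(alpha, alpha, set(vertices) - {v})
    return total // 2
```

For example, on the 4-cycle with vertices {1, 2, 3, 4} this returns 4 for k = 2, matching the four 2-edge paths of that graph.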
Set Packing
Next, consider packings in a set family F consisting of subsets of a universe N. We will assume that each member of F is of size m. A k-packing in F is a set of k mutually disjoint members of F. The members F_1, F_2, . . . , F_k of a k-packing can be ordered in k! different ways into an ordered k-packing F_1 F_2 · · · F_k. Define the support of an ordered k-packing as the union of its members. For any S ⊆ N, let π_j(S) denote the number of ordered j-packings in F with support S. The values can be computed by dynamic programming using the recurrence
π_0(S) = [S = ∅],   π_j(S) = \sum_{F ⊆ S} π_{j−1}(S \ F) [F ∈ F]  for j > 0.
Alternatively, one may use the inclusion-exclusion formula
π_j(S) = \sum_{Y ⊆ S} (−1)^{|S \ Y|} \Big( \sum_{F ⊆ Y} [F ∈ F] \Big)^{j}
(here we use the assumption that every member of F is of size m) [5,6]. Using the inclusion-exclusion formula, the values π_j(S) for all S ⊆ N of size mj can be computed in time O*(\binom{n}{\downarrow mj}), where n is the cardinality of N; a straightforward implementation of the dynamic programming algorithm yields the same bound, provided that m is a constant. In particular, the number of k-packings in F is obtained as the sum of π_k(S)/k! over all S ⊆ N of size mk, in time O*(\binom{n}{\downarrow mk}). Again, meet-in-the-middle gives a much faster algorithm. Assuming for simplicity that k is even, we observe that an ordered k-packing decomposes uniquely into two ordered k/2-packings F_1 F_2 · · · F_{k/2} and F_{k/2+1} F_{k/2+2} · · · F_k with disjoint supports. Thus the number of ordered k-packings in F is obtained, via the Disjoint Sum technique of Sect. 2, by summing the products of the two half-packing counts over disjoint pairs of supports. For a polynomial-space variant, the task is then to compute, for each X ⊆ N of size at most mk/2, the sum \sum_{S ⊇ X} π_{k/2}(S) in space polynomial in n, k, and m.
As with counting paths in the previous section, an algorithm faster than the straightforward one is obtained by reversing the order of summation:
\sum_{S ⊇ X} π_{k/2}(S) = \sum_Y \Big( \sum_{F ⊆ Y} [F ∈ F] \Big)^{k/2} \sum_S (−1)^{|S \ Y|} [X, Y ⊆ S] = \sum_Y \Big( \sum_{F ⊆ Y} [F ∈ F] \Big)^{k/2} (−1)^{mk/2−|Y|} \binom{n − |X ∪ Y|}{mk/2 − |X ∪ Y|};
here Y and S run through all subsets of N of size at most mk/2 and exactly mk/2, respectively. It remains to find the number of triplets (X, Y, F) satisfying |X ∪ Y| ≤ mk/2, |F| = m, and F ⊆ Y. This number is
\sum_{s=m}^{mk/2} \binom{n}{s} \binom{s}{m} 2^{m} 3^{s−m},   (2)
because there are \binom{n}{s} choices for the union U := X ∪ Y of size s, within which there are \binom{s}{m} choices for F; each element of F can be put either only into Y or into both X and Y, whereas each of the remaining s − m elements of U is put into either X or Y or both.
Theorem 5. The k-packings in a given family of m-element subsets of an n-element set can be counted in time O*(5^{mk/2} \binom{n}{mk/2}) in space polynomial in n, k, and m.
We remark that the upper bound (2) is rather crude for small values of m.
In particular, provided that m is a constant, we can replace the constant 5 by 3.
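Mirroring the path example, a small Python sketch of the packing count (again relying on a disjoint_sum routine such as the one in Sect. 2, and ignoring the stated complexity bounds) could look as follows; family members are assumed to be given as frozensets.

```python
import math

def count_k_packings(family, universe, k, disjoint_sum):
    # Count unordered k-packings (k assumed even): build the number of ordered
    # k/2-packings per support by dynamic programming, join the two halves via
    # the Disjoint Sum of their supports, and divide by k! to drop the order.
    assert k % 2 == 0 and k >= 2
    # pi[S] = number of ordered half-packings with support S
    pi = {frozenset(): 1}
    for _ in range(k // 2):
        nxt = {}
        for S, count in pi.items():
            # only members disjoint from the current support extend the packing
            for F in family:
                if not (S & F):
                    key = S | F
                    nxt[key] = nxt.get(key, 0) + count
        pi = nxt
    ordered = disjoint_sum(pi, pi, set(universe))
    return ordered // math.factorial(k)
```

For the family {{1,2}, {3,4}, {5,6}} with k = 2, the routine returns 3, the number of pairs of disjoint members.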
| 2,127 |
0904.3093
|
2950663737
|
It is shown that one can count @math -edge paths in an @math -vertex graph and @math -set @math -packings on an @math -element universe, respectively, in time @math and @math , up to a factor polynomial in @math , @math , and @math ; in polynomial space, the bounds hold if multiplied by @math or @math , respectively. These are implications of a more general result: given two set families on an @math -element universe, one can count the disjoint pairs of sets in the Cartesian product of the two families with @math basic operations, where @math is the number of members in the two families and their subsets.
|
The presented meet-in-the-middle approach resembles the randomized divide-and-conquer technique by Chen, Lu, Sze, and Zhang @cite_20 and the similar divide-and-color method by Kneis, Mölle, Richter, and Rossmanith @cite_6 , designed for parameterized decision problems. These can, in turn, be viewed as extensions of the recursive partitioning technique of Gurevich and Shelah @cite_10 for the Hamiltonian Path problem. That said, our contribution is rather in the observation that, in the counting context, the join operation can be done efficiently using the inclusion--exclusion machinery. While our formalization of the problem as the Disjoint Sum problem is new, the solution itself can, in essence, already be found in Kennes @cite_18 , even though in terms of possibility calculus and without the idea of "trimming," that is, restricting the computations to small subsets. Kennes's results were rediscovered in a dual form and extended to accommodate trimming in the authors' recent works @cite_23 @cite_3 @cite_21 .
|
{
"abstract": [
"",
"We study ways to expedite Yates’s algorithm for computing the zeta and Moebius transforms of a function defined on the subset lattice. We develop a trimmed variant of Moebius inversion that proceeds point by point, finishing the calculation at a subset before considering its supersets. For an n-element universe U and a family ℱ of its subsets, trimmed Moebius inversion allows us to compute the number of packings, coverings, and partitions of U with k sets from ℱ in time within a polynomial factor (in n) of the number of supersets of the members of ℱ. Relying on an projection theorem of (J. Comb. Theory Ser. A 43:23–37, 1986) to bound the sizes of set families, we apply these ideas to well-studied combinatorial optimisation problems on graphs with maximum degree Δ. In particular, we show how to compute the domatic number in time within a polynomial factor of (2Δ+1−2) n (Δ+1) and the chromatic number in time within a polynomial factor of (2Δ+1−Δ−1) n (Δ+1). For any constant Δ, these bounds are O((2−e) n ) for e>0 independent of the number of vertices n.",
"",
"We present a fast algorithm for the subset convolution problem:given functions f and g defined on the lattice of subsets of ann-element set n, compute their subset convolution f*g, defined for S⊆ N by [ (f * g)(S) = [T ⊆ S] f(T) g(S T),,]where addition and multiplication is carried out in an arbitrary ring. Via Mobius transform and inversion, our algorithm evaluates the subset convolution in O(n2 2n) additions and multiplications, substanti y improving upon the straightforward O(3n) algorithm. Specifically, if the input functions have aninteger range [-M,-M+1,...,M], their subset convolution over the ordinary sum--product ring can be computed in O(2n log M) time; the notation O suppresses polylogarithmic factors.Furthermore, using a standard embedding technique we can compute the subset convolution over the max--sum or min--sum semiring in O(2n M) time. To demonstrate the applicability of fast subset convolution, wepresent the first O(2k n2 + n m) algorithm for the Steiner tree problem in graphs with n vertices, k terminals, and m edges with bounded integer weights, improving upon the O(3kn + 2k n2 + n m) time bound of the classical Dreyfus-Wagner algorithm. We also discuss extensions to recent O(2n)-time algorithms for covering and partitioning problems (Bjorklund and Husfeldt, FOCS 2006; Koivisto, FOCS 2006).",
"Given a set @math with @math elements and a family @math of subsets, we show how to partition @math into @math such subsets in @math time. We also consider variations of this problem where the subsets may overlap or are weighted, and we solve the decision, counting, summation, and optimization versions of these problems. Our algorithms are based on the principle of inclusion-exclusion and the zeta transform. In effect we get exact algorithms in @math time for several well-studied partition problems including domatic number, chromatic number, maximum @math -cut, bin packing, list coloring, and the chromatic polynomial. We also have applications to Bayesian learning with decision graphs and to model-based data clustering. If only polynomial space is available, our algorithms run in time @math if membership in @math can be decided in polynomial time. We solve chromatic number in @math time and domatic number in @math time. Finally, we present a family of polynomial space approximation algorithms that find a number between @math and @math in time @math .",
"One way to cope with an NP-hard problem is to find an algorithm that is fact on average with respect to a natural probability distribution on inputs. We consider from that point of view the Hamilto...",
"Improved randomized and deterministic algorithms are presented for PATH, MATCHING, and PACKING problems. Our randomized algorithms are based on the divide-and-conquer technique, and improve previous best algorithms for these problems. For example, for the k-PATH problem, our randomized algorithm runs in time O(4kk3.42m) and space O(nklogk + m), improving the previous best randomized algorithm for the problem that runs in time O(5.44kkm) and space O(2kkn + m). To achieve improved deterministic algorithms, we study a number of previously proposed de-randomization schemes, and also develop a new derandomization scheme. These studies result in a number of deterministic algorithms: one of time O(4k+o(k)m) for the k-PATH problem, one of time O(2.803kk nlog2 n) for the 3-D MATCHING problem, and one of time O(43k+o(k)n) for the 3-SET PACKING problem. All these significantly improve previous best algorithms for the problems."
],
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_6",
"@cite_3",
"@cite_23",
"@cite_10",
"@cite_20"
],
"mid": [
"",
"2126436716",
"",
"2045326192",
"2074359677",
"2012473793",
"1970828859"
]
}
|
Counting Paths and Packings in Halves
|
Some combinatorial structures can be viewed as two halves that meet in the middle. For example, a k-edge path is a combination of two k/2-edge paths. Bidirectional search [10,25] finds such structures by searching the two halves simultaneously until the two search frontiers meet. In instantiations of this idea, it is crucial to efficiently join the two frontiers to obtain a valid or optimal solution. For instance, the meet-in-the-middle algorithm for the Subset Sum problem, by Horowitz and Sahni [15], implements the join operation via a clever pass through two sorted lists of subset sums.
In the present paper, we take the meet-in-the-middle approach to counting problems, in particular, to counting paths and packings. Here, the join operation amounts to consideration of pairs of disjoint subsets of a finite universe, each subset weighted by the number of structures that span the subset. We begin in Sect. 2 by formalizing this as the Disjoint Sum problem and providing an algorithm for it based on inclusion-exclusion techniques [5-7, 17, 18, 20]. In Sect. 3 we apply the method to count paths of k edges in a given n-vertex graph in time O*(\binom{n}{k/2}); throughout the paper, O* suppresses a factor polynomial in the mentioned parameters (here, n and k). In Sect. 4 we give another application, to count k-packings in a given family of m-element subsets of an n-element universe in time O*(\binom{n}{mk/2}). For both problems we also present slightly slower algorithms that require only polynomial space.
We note that an earlier report on this work under a different title [8] already introduces a somewhat more general technique and an application to counting paths. The report has been cited in some recent papers [1,26], which we, among other related previous work, discuss below.
The Disjoint Sum Problem
Given two set families A and B, and functions α and β that associate with each member of A and B, respectively, an element from a ring R, the Disjoint Sum problem is to find the sum of the products α(A)β(B) over all disjoint pairs of subsets (A, B) in the Cartesian product A × B; denote the sum by α ⊛ β. In applications, the ring R is typically the set of integers equipped with the usual addition and multiplication operations. Note that, had the condition of disjointness been removed, the problem could easily be solved using about |A| + |B| additions and one multiplication. However, to respect the disjointness condition, the straightforward algorithm appears to require about |A||B| ring operations and tests of disjointness.
In many cases, we fortunately can do better by applying the principle of inclusion and exclusion. The basic idea is to compute the sum over pairs (A, B) with A ∩ B = ∅ by subtracting the sum over pairs with A ∩ B = X = ∅ from the sum over pairs with no constraints. For a precise treatment, it is handy to denote by N the union of all the members in the families A and B, and extend the functions α and β to all subsets of N by letting them evaluate to 0 outside A and B, respectively. We also use the Iverson bracket notation: [P ] = 1 if P is true, and [P ] = 0 otherwise. Now, by elementary manipulation,
α ⊛ β = \sum_A \sum_B [A ∩ B = ∅] α(A) β(B) = \sum_A \sum_B \sum_X (−1)^{|X|} [X ⊆ A ∩ B] α(A) β(B) = \sum_X (−1)^{|X|} \sum_A \sum_B [X ⊆ A] [X ⊆ B] α(A) β(B) = \sum_X (−1)^{|X|} \Big( \sum_{A ⊇ X} α(A) \Big) \Big( \sum_{B ⊇ X} β(B) \Big). (1)
Here we understand that A, B, and X run through all subsets of N unless otherwise specified. Note also that the second equality holds because every nonempty set has exactly as many subsets of even size as subsets of odd size.
To analyze the complexity of evaluating the inclusion-exclusion expression (1), we define the lower set of a set family F, denoted by ↓F, as the family consisting of all the sets in F and their subsets. We first observe that in (1) it suffices to let X run over the intersection of ↓A and ↓B, for any other X has no supersets in A or in B. Second, we observe that the values
\hat{α}(X) := \sum_{A ⊇ X} α(A),
for all X ∈ ↓A, can be computed in a total of |↓A| · n ring and set operations, as follows. Let a_1, a_2, . . . , a_n be the n elements of N. For any i = 0, 1, . . . , n and X ∈ ↓A define \hat{α}_i(X) as the sum of the α(A) over all sets A ∈ ↓A with A ⊇ X and A ∩ {a_1, a_2, . . . , a_i} = X ∩ {a_1, a_2, . . . , a_i}. In particular, \hat{α}_n(X) = α(X) and \hat{α}_0(X) = \hat{α}(X). Furthermore, by induction on i one can prove the recurrence
\hat{α}_{i−1}(X) = \hat{α}_i(X) + [a_i ∉ X] [X ∪ {a_i} ∈ ↓A] \hat{α}_i(X ∪ {a_i});
for details, see the closely related recent work on trimmed zeta transforms and Moebius inversion [6,7]. Thus, for each i, the values \hat{α}_i(X) for all X ∈ ↓A can be computed with |↓A| ring and set operations. We have shown the following.
p_0(S, v) = [S = ∅],   p_j(S, v) = \sum_{u ∈ S} p_{j−1}(S \ {u}, u) [uv ∈ E]  for j > 0.
Alternatively, one may use the inclusion-exclusion formula [17,20]
p_j(S, v) = \sum_{Y ⊆ S} (−1)^{|S \ Y|} w_j(Y, v),
where w_j(Y, v) is the number of j-edge walks starting from v and visiting only vertices of Y, that is, sequences u_0 u_1 · · · u_j with u_0 = v, each u_{i−1} u_i ∈ E, and u_1, u_2, . . . , u_j ∈ Y. Note that for any given Y, v, and j, the term w_j(Y, v) can be computed in time polynomial in n. Using either of the above two formulas, the values p_j(S, v), for all v ∈ V and sets S ⊆ V \ {v} of size j, can be computed in time O*(\binom{n}{\downarrow j}); here and henceforth, \binom{q}{\downarrow r} denotes the sum of the binomial coefficients \binom{q}{0} + \binom{q}{1} + · · · + \binom{q}{r}. In particular, the number of k-edge paths in the graph is obtained as the sum of p_k(S, v) over all v ∈ V and S ⊆ V \ {v} of size k, in time O*(\binom{n}{\downarrow k}). However, meet-in-the-middle yields a much faster algorithm. Assuming for simplicity that k is even, the path has a mid-vertex, v_{k/2}, at which the path uniquely decomposes into two k/2-edge paths, namely v_0 v_1 · · · v_{k/2} and v_{k/2} v_{k/2+1} · · · v_k, with almost disjoint supports. Thus, for each choice of the mid-vertex v, the number of k-edge paths with that mid-vertex can be obtained as a disjoint sum of the half-path counts p_{k/2}(·, v), which can be evaluated with the technique of the previous section. In the remainder of this section we present a polynomial-space variant of the above described algorithm. Let the mid-vertex v be fixed. Then the task is to compute, for each X ⊆ V \ {v} of size at most k/2, the sum
\sum_{S ⊇ X} p_{k/2}(S, v) = \sum_{S ⊇ X} \sum_{Y ⊆ S} (−1)^{|S \ Y|} w_{k/2}(Y, v)
in space polynomial in n and k. If done in a straightforward manner, the running time, ignoring polynomial factors, becomes proportional to the number of triplets (X, S, Y) with X, Y ⊆ S ⊆ V \ {v} and |S| = k/2. This number is \binom{n−1}{k/2} 2^k, because there are \binom{n−1}{k/2} choices for S and, for any fixed S, there are 2^{k/2} choices for X and 2^{k/2} choices for Y.
A faster algorithm is obtained by reversing the order of summation:
\sum_{S ⊇ X} p_{k/2}(S, v) = \sum_Y w_{k/2}(Y, v) \sum_S (−1)^{|S \ Y|} [X, Y ⊆ S] = \sum_Y w_{k/2}(Y, v) (−1)^{k/2−|Y|} \binom{n − |X ∪ Y|}{k/2 − |X ∪ Y|};
here Y and S run through all subsets of V \ {v} of size at most k/2 and exactly k/2, respectively. The latter equality holds because S is of size k/2 and contains X ∪ Y. It remains to find in how many ways one can choose the sets X and Y such that the union U := X ∪ Y is of size at most k/2. This number is
\sum_{s=0}^{k/2} \binom{n − 1}{s} 3^s ≤ \frac{3}{2} \binom{n − 1}{k/2} 3^{k/2},
because there are \binom{n−1}{s} ways to choose U of size s, and one can put each element of U either into X or into Y or into both.
Set Packing
Next, consider packings in a set family F consisting of subsets of a universe N. We will assume that each member of F is of size m. A k-packing in F is a set of k mutually disjoint members of F. The members F_1, F_2, . . . , F_k of a k-packing can be ordered in k! different ways into an ordered k-packing F_1 F_2 · · · F_k. Define the support of an ordered k-packing as the union of its members. For any S ⊆ N, let π_j(S) denote the number of ordered j-packings in F with support S. The values can be computed by dynamic programming using the recurrence
π_0(S) = [S = ∅],   π_j(S) = \sum_{F ⊆ S} π_{j−1}(S \ F) [F ∈ F]  for j > 0.
Alternatively, one may use the inclusion-exclusion formula
π_j(S) = \sum_{Y ⊆ S} (−1)^{|S \ Y|} \Big( \sum_{F ⊆ Y} [F ∈ F] \Big)^{j}
(here we use the assumption that every member of F is of size m) [5,6]. Using the inclusion-exclusion formula, the values π_j(S) for all S ⊆ N of size mj can be computed in time O*(\binom{n}{\downarrow mj}), where n is the cardinality of N; a straightforward implementation of the dynamic programming algorithm yields the same bound, provided that m is a constant. In particular, the number of k-packings in F is obtained as the sum of π_k(S)/k! over all S ⊆ N of size mk, in time O*(\binom{n}{\downarrow mk}). Again, meet-in-the-middle gives a much faster algorithm. Assuming for simplicity that k is even, we observe that an ordered k-packing decomposes uniquely into two ordered k/2-packings F_1 F_2 · · · F_{k/2} and F_{k/2+1} F_{k/2+2} · · · F_k with disjoint supports. Thus the number of ordered k-packings in F is obtained, via the Disjoint Sum technique of Sect. 2, by summing the products of the two half-packing counts over disjoint pairs of supports. For a polynomial-space variant, the task is then to compute, for each X ⊆ N of size at most mk/2, the sum \sum_{S ⊇ X} π_{k/2}(S) in space polynomial in n, k, and m.
As with counting paths in the previous section, an algorithm faster than the straightforward one is obtained by reversing the order of summation:
\sum_{S ⊇ X} π_{k/2}(S) = \sum_Y \Big( \sum_{F ⊆ Y} [F ∈ F] \Big)^{k/2} \sum_S (−1)^{|S \ Y|} [X, Y ⊆ S] = \sum_Y \Big( \sum_{F ⊆ Y} [F ∈ F] \Big)^{k/2} (−1)^{mk/2−|Y|} \binom{n − |X ∪ Y|}{mk/2 − |X ∪ Y|};
here Y and S run through all subsets of N of size at most mk/2 and exactly mk/2, respectively. It remains to find the number of triplets (X, Y, F) satisfying |X ∪ Y| ≤ mk/2, |F| = m, and F ⊆ Y. This number is
\sum_{s=m}^{mk/2} \binom{n}{s} \binom{s}{m} 2^{m} 3^{s−m},   (2)
because there are \binom{n}{s} choices for the union U := X ∪ Y of size s, within which there are \binom{s}{m} choices for F; each element of F can be put either only into Y or into both X and Y, whereas each of the remaining s − m elements of U is put into either X or Y or both.
Theorem 5. The k-packings in a given family of m-element subsets of an n-element set can be counted in time O*(5^{mk/2} \binom{n}{mk/2}) in space polynomial in n, k, and m.
We remark that the upper bound (2) is rather crude for small values of m.
In particular, provided that m is a constant, we can replace the constant 5 by 3.
| 2,127 |
0903.3461
|
2949988283
|
This paper investigates under which conditions information can be reliably shared and consensus can be solved in unknown and anonymous message-passing networks that suffer from crash-failures. We provide algorithms to emulate registers and solve consensus under different synchrony assumptions. For this, we introduce a novel pseudo leader-election approach which allows a leader-based consensus implementation without breaking symmetry.
|
There have been several approaches to solve fault-tolerant consensus in anonymous networks deterministically. In @cite_5 , fault-tolerant consensus is solved under the assumption that failure detector @math @cite_7 exists, i.e. exactly one correct process eventually knows forever that it is the leader. In @cite_3 , fault-tolerant and obstruction-free consensus is solved if registers are available (for obstruction-free consensus, termination is only guaranteed if a process can take enough steps without being interrupted by other processes).
|
{
"abstract": [
"We present here two consensus algorithms in shared memory asynchronous systems with the eventual leader election failure detector *** . In both algorithms eventually only the leader given by failure detector *** will progress, and being eventually alone to make steps the leader will decide. The first algorithm uses an infinite number of multi-writer multi-reader atomic registers and works with an unbounded number of anonymous processes. The second uses only a finite number of single-writer multi-reader registers but assumes a finite number of processes with known unique identities.",
"",
"We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg [1996], it is shown that W , a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as W. Thus, W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes."
],
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_7"
],
"mid": [
"1796152689",
"",
"2077240273"
]
}
|
Fault-Tolerant Consensus in Unknown and Anonymous Networks
|
Most of the algorithms for distributed systems consider that the number of processes in the system is known and every process has a distinct ID. However, in some networks such as in wireless sensors networks, this is not necessarily true. Additionally, such networks are typically not totally synchronous and processes may suffer from failures such as crashes.
Designing protocols for such networks is especially intricate, since a process can never know if its messages have been received by all processes in the system. In this paper, we investigate under which conditions information can be reliably shared and consensus can be solved in such environments.
Typically, in systems where no hardware registers are available, one makes additional assumptions to be able to reliably share information, e.g. by assuming a correct majority of processes. However, these techniques assume also some knowledge about the total number of processes. With processes with distinct identities, the requirements to emulate a register have been precisely determined by showing that the quorum failure detector Σ is the weakest failure detector to simulate registers in asynchronous message passing systems [5]. But again, this approach fails due to the lack of identities in our anonymous environment.
To circumvent these problems, we assume that the system is not totally asynchronous, but assume the existence of some partial synchrony. We specify our environments by using the general round-based algorithm framework (GIRAF) of [11]. This has two advantages: (i) it is easy to precisely specify an environment and (ii) it makes it easy to emulate environments to show minimality results. (Carole Delporte-Gallet and Hugues Fauconnier were supported by grant ANR-08-VERSO-SHAMAN. Andreas Tielmann was supported by grants from Région Ile-de-France.)
We first define the moving source environment (MS) in which at every time at least one process (called the source) sends timely messages to all other processes, but this source may change over time and infinitely often. Although this environment is considerably weaker than a total synchronous environment, we show that it is still sufficient to implement registers, although it is not possible to implement the consensus abstraction. In fact, it can be emulated by hardware registers in totally asynchronous "known" networks for any number of process crashes. Therefore, if we would be able to implement consensus in this environment, we could contradict the famous FLP impossibility result [7]. This result states, that consensus cannot be implemented in asynchronous message passing networks, even if only one process may crash. Since we can emulate registers if only one process may crash [2], we can also emulate the MS environment and therefore cannot be more powerful.
To implement consensus, we consider some additional stronger synchrony assumptions. Our first consensus algorithm assumes that additionally to the assumptions of the MS environment, eventually all processes communicate timely. We call this environment the eventual synchronous (ES) environment. It resembles Dwork et al. [6]. In our second consensus algorithm, we consider a weaker environment and only assume that eventually always the same process is able to send timely to all other processes. We call it the eventual stable source environment (ESS). It resembles the model of [1] in which it is used to elect a leader, a classical approach to implement in turn consensus.
Due to the indistinguishability of several processes that behave identical, a true leader election is not possible in our anonymous environment. Therefore, in our second algorithm, we take benefit of the fact that it suffices for the implementation of consensus if all processes that consider itself as a leader behave the same way. We show how to eventually guarantee this using the history of the processes proposal values.
Furthermore, we consider the weak-set data-structure [4]. This data-structure comes along some problems that arise with registers in unknown and anonymous networks. Every process can add values to a weak-set and read the values written before. Contrary to a register, it allows for sharing information without knowing identities of other processes and without the risk of an overwritten value due to a concurrent write. Furthermore, we show that it precisely captures the power of the MS environment, i.e. we can show that it can be implemented in the MS environment and a weak-set can be used to emulate the MS ennvironment. Interestingly, in known networks, a weak-set is equivalent to the register abstraction and can thus be seen as a generalization for unknown and anonymous networks.
Furthermore, we show that although it is possible to emulate registers in our MS environment, it is not possible to emulate Σ [5], the weakest failure detector for registers. And this result is not only due to the anonymity of the processes, it holds even if the number of processes and their identities are known. Note that this is not a contradiction, since the result in [5] means only that Σ is the weakest of all failure detectors with which a register can be implemented and we have exhibited synchrony assumptions where the existence of a failure detector is not necessary at all.
Model and Definitions
We assume a network with an unknown (but finite) number of processes where the processes have no IDs (i.e. they are totally anonymous) and communicate using a broadcast primitive. The set of processes is denoted Π. We assume that the broadcast primitive is reliable, although it may not always deliver messages on time. Furthermore, any number of processes may crash and the processes do not recover. Processes that do not crash are called correct.
1. For obstruction-free consensus, termination is only guaranteed if a process can take enough steps without being interrupted by other processes.
We model an algorithm A as a set of deterministic automata, one for every process in the system. We assume only fair runs, i.e. every correct process executes infinitely many steps.
Consensus
In the consensus problem, the processes try to decide on one of some proposed values. Three properties have to be satisfied:
Validity: Every decided value has to be a proposed value. Termination: Eventually, every correct process decides. Agreement: No two processes decide different values.
An extension to GIRAF
Algorithm 1 presents an extension to the generic roundbased algorithm framework of [11] (GIRAF). It is extended to deal with the particularities of our model, namely the anonymity and unknown number of the processes. The framework is modeled as an I/O automaton. To implement a specific algorithm, the framework is instantiated with two functions: initialize() and compute(). The compute() function takes the round number and the messages received so far as parameters. We omit to specify a failure detector output as parameter (as in [11]), because we are not interested in failure detectors here. Both functions are non-blocking, i.e. they are not allowed to wait for any other event.
Our extension lies in the way we model the received messages. Since the processes have no IDs, we represent the messages that are received during one round as a set instead of an array.
The communication between the processes proceeds in rounds and the advancement of the rounds is controlled by the environment via the receive i and end-of-round i input actions. These actions may occur separately at each process p i and therefore rounds are not necessarily synchronized among processes. The framework can capture any asynchronous message passing algorithm (see [11]).
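To make the interface concrete, here is a minimal self-contained Python sketch of a round-based skeleton in the spirit of the framework just described (this is our own toy illustration, not the actual GIRAF I/O automaton of [11]; all names are hypothetical). A process is given by initialize() and compute() callbacks, and the messages received in a round are delivered as a set, reflecting the absence of sender identities.

```python
class Process:
    """A process is defined by two non-blocking callbacks, as in the text:
    initialize() returns the first message to broadcast, and compute(k, received)
    takes the round number and the *set* of messages received in round k
    (anonymity: no sender identities) and returns the next message to broadcast."""
    def __init__(self, initialize, compute):
        self.initialize = initialize
        self.compute = compute

def run_synchronous(processes, rounds):
    """Toy environment: every round, every message reaches every process.
    Real environments (MS, ES, ESS) would restrict which messages arrive on time."""
    outboxes = [p.initialize() for p in processes]
    for k in range(1, rounds + 1):
        received = frozenset(m for m in outboxes if m is not None)
        outboxes = [p.compute(k, received) for p in processes]
    return outboxes

# Example instantiation: anonymous "flood the minimum" -- after one round every
# process broadcasts the smallest value it has seen so far.
def make_min_flooder(value):
    state = {"v": value}
    def initialize():
        return state["v"]
    def compute(k, received):
        state["v"] = min(received | {state["v"]})
        return state["v"]
    return Process(initialize, compute)

procs = [make_min_flooder(v) for v in (7, 3, 9, 5)]
print(run_synchronous(procs, rounds=2))   # every process ends up broadcasting 3
```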
Environments are specified using round-based properties, restricting the message arrivals in each round.
Environments
We say that a process p i is in round k, if there have been k invocations of end-of-round i . A process p i has a timely link in round k, if end-of-round i occurs in round k and every correct process p j receives the round k message of p i in round k.
In this paper, we consider three different environments:
• In the first environment, which we call the moving source (MS) environment, we require that in every round k there exists a process p s (a source) that has a timely link in round k.
• In the second environment, which we call the eventual synchronous (ES) environment, we demand the same as in the MS environment, but additionally require that there is some round k such that in every round k′ ≥ k, all correct processes have timely links in round k′.
• In the third environment, which we call the eventually stable source (ESS) environment, we demand the same as in the MS environment, but additionally require that eventually the source process p s is always the same in every round. This means that there is some round k such that in every round k′ ≥ k, the same process p s has a timely link in round k′.
Implementing consensus in ES
Algorithm 2 implements consensus in the ES environment. The idea of the algorithm is to ensure safety by waiting until a value is contained in every message received in a round. In this way, one can ensure that a value has also been relayed by the current source and is therefore known by everybody (we say that the value is written). If a process evaluates Line 9 to true, then VAL is known by everybody (because it was written in the last round) and no other process will consider another value as written, because only a value which has also been relayed by a source can be in WRITTEN. But the relayed value of a source would also be in PROPOSED at every process.
To guarantee the liveness of the consensus algorithm, we use the fact that eventually, all proposal values in the system are received in every even round by everybody and everybody will select the same maximum in Line 12. Therefore, everybody will propose the same value in the next round and the algorithm will terminate.
Analysis
For all local variables VAR, we denote by VAR i the local variable of process p i (e.g., PROPOSED i ). For every variable VAR i , VAR k i is the value of this variable after process p i has executed Line 7 when compute has been invoked with parameter k (i.e. in round k).
Lemma 1.
If no process has decided yet and for some p i , v ∈ WRITTEN k i , then every process p j that enters round k has v ∈ PROPOSED k j . Proof: If a process p i has a value v in WRITTEN k i , then v has been contained in every message, which p i has received in round k (Line 6). This includes the message of the source, since by assumption the source has not yet terminated. But by definition, every other process p j that enters round k also has received the message of this source in this round and added it to its set PROPOSED k j (Line 7). Therefore, v is in PROPOSED k j . Lemma 2. If no process has decided yet and p i has v ∈ WRITTENOLD k i in an even round k, then every other process p j that enters round k has v ∈ WRITTEN k j . Proof: If a process p i has a value v in WRITTENOLD k i , then it has had v in WRITTEN k−1 i . Therefore, every other process p j that enters round k − 1 has v in PROPOSED k−1 j in the same odd round k − 1 (Lemma 1). Since no value is removed from a set PROPOSED in odd rounds, v will be contained in every set PROPOSED broadcast at the end of round k − 1 and therefore get into WRITTEN k j at every process p j that enters round k.
Theorem 1. Algorithm 2 implements consensus in the ES environment.
Proof: We have to prove the 3 properties of consensus. Validity is immediately clear, because VAL is always an initial value.
To prove termination, assume that the system has stabilized, i.e. all faulty processes have crashed and all messages are received in the round after which they have been sent. Then, all processes receive the same set of messages in every round. Therefore, the set PROPOSED and thus WRITTEN is the same at all correct processes and everybody will always select the same maximum in Line 12. In the next round all processes start with the same proposal value and this value will be written in every future round. Thus, everybody will evaluate Line 9 to true in the next round.
To prove agreement, assume p i is the first process that decides a value v in a round k. This means, that p i has evaluated Line 9 to true. If some other value than v would have been written anywhere in the system, this would contradict PROPOSED = {v} (Lemma 1), since p i is the first process that decides. Furthermore, v is in WRITTEN at every process in the system in round k, since it is also in WRITTENOLD (Lemma 2). Therefore, every other process decides v in the same round, or it will evaluate Line 11 to true and select v as new VAL. Thus, no other value will ever get into PROPOSED anywhere in the system, no other value will ever be written and no other value will ever be selected as VAL.
Implementing consensus in ESS
Algorithm 3 implements consensus in the ESS environment. For the safety part, the algorithm is very close to algorithm 2 (see Section 3).
To guarantee liveness, we use the fact that we have at least one process which is eventually a source forever. We use the idea of the construction of the leader failure detector Ω [3]. It elects a leader among the processes which is eventually stable. In "known" networks, with some eventual synchrony, Ω can be implemented by counting heartbeats of processes (e.g. in [1]). But we are not able to count heartbeats of different processes here, because in our model the processes have no IDs. To circumvent this problem, we identify processes with the history of their proposal values. If several processes have the same history, they either propose the same value, or their histories diverge and will never become identical again. Eventually, all processes will select the same history as maximal history and the processes with this history will propose in every round the same values.
Implementation
Every process maintains a list of the values it broadcasts in every round (specifically, its proposal values). This list is denoted by the variable HISTORY. In this way, two processes that propose in the same round different values will eventually have different HISTORY variables. Note that, although the space required by the variables may be unbounded, in every round they require only finite space. Thus, if we could ensure that eventually all processes that propose have in every round the same history (and at least one process proposes infinitely often), then the proposal values sent are indistinguishable from the proposal values of a single "classical" leader.
However, the history of a process permanently grows. Therefore, every process includes its current history in every message it broadcasts. Furthermore, it maintains a counter C for every history it has yet heard of (in such a way that no memory is allocated for histories it has not yet heard of). Then, it compares the histories it receives with the ones it has received in previous rounds. If some old history is a prefix of a new history, it assigns the counter of the new history the value of the counter of the old one, increased by one. Thus, the counter of a history that corresponds to an eventual source is eventually increased in every round.
In this way, it is possible to ensure that eventually only eventual sources that converge to the same infinite history consider itself as leader. In a classical approach, eventually only these leaders would propose values. But to meet our safety requirements, it is crucial to ensure that all processes propose in every round at least something to make sure that the value of the current source is received by everybody. Therefore, we let processes that do not consider itself as a leader propose the special value ⊥.
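The prefix-based counter bookkeeping can be sketched in a few lines of Python. The sketch below is only an illustration of the rule described above (histories as tuples of proposal values, one counter per history heard of); the function names are made up and the surrounding round structure of Algorithm 3 is omitted.

```python
def update_counters(counters, received_histories):
    """counters: dict mapping a history (a tuple of proposal values) to an int.
    received_histories: the histories attached to this round's messages.
    A received history inherits (old counter + 1) from any known history that is
    a prefix of it; histories never heard of before start at 0.  Counters of
    histories not received this round are kept unchanged."""
    updated = dict(counters)
    for h in received_histories:
        best = -1
        for old, c in counters.items():
            if len(old) <= len(h) and h[:len(old)] == old:
                best = max(best, c)
        updated[h] = max(updated.get(h, -1), best + 1)
    return updated

def is_leader(my_history, counters):
    """A process considers itself a leader if the counter of its own history is
    at least as large as every counter it maintains."""
    mine = counters.get(my_history, 0)
    return all(mine >= c for c in counters.values())

# A stable source keeps extending one history, so its counter keeps growing,
# while a history that stops being heard of stops being incremented.
counters = {}
for r in range(1, 5):
    heard = {tuple(range(r))}        # the stable source's ever-growing history
    if r <= 2:
        heard.add(("x",))            # a history that is no longer heard after round 2
    counters = update_counters(counters, heard)
print(is_leader(tuple(range(4)), counters))  # True: the stable history dominates
print(is_leader(("x",), counters))           # False
```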
Analysis
Similarly to Section 3, for every variable VAR i , VAR k i is the value of this variable after process p i has executed Line 9 in round k. Definition 1. We say, that p i has heard of p j 's round k message (m k j ), if p i has received m k j in round k, or if there exists another process p l such that p i has heard of p l 's round k ′ message for some k ′ > k and p l has heard of p j 's round k message.
Let process p s be an eventual source. We then identify three groups of processes:
out-connected: The processes that the eventual source p s has infinitely often heard of.
⋄-silent: The processes that are not out-connected.
⋄-proposer: The out-connected processes that eventually have in every round timely links towards all other out-connected processes.
leader: We say that a process p i is a leader in some round k (p i ∈ leader(k)), iff for every history H it holds that C_i^k[HISTORY_i^k] ≥ C_i^k[H]. If process p i is eventually a leader forever, i.e. there exists a k such that for all k′ ≥ k, p i ∈ leader(k′), then we simply write that p i ∈ leader. Note that it may be possible that there are several processes in leader.
The sets relate to each other in the following way:
{p s} ⊆ ⋄-proposer ⊆ out-connected ⊆ correct and ⋄-silent ∩ out-connected = ∅.
We will later show that leader ⊆ ⋄-proposer (Lemma 6).
Lemma 3.
Eventually, in every odd round k, for every ⋄-proposer p i , the set PROPOSED in m k i is a subset of the set WRITTEN at all out-connected processes in round k + 1.
More formally:
∃k, ∀k′ ≥ k with k′ mod 2 = 1, ∀p i ∈ ⋄-proposer, ∀p j ∈ out-connected:
m_i^{k′} = ⟨PROPOSED, −, −⟩ → PROPOSED ⊆ WRITTEN_j^{k′+1}
Proof: Follows directly from the definition of ⋄proposers and the fact that out-connected processes eventually do not receive any timely messages from ⋄-silent processes.
Lemma 4. Eventually, at all out-connected processes, the counters that correspond to histories of ⋄-proposers increase in every round by one. More formally:
∃k, ∀k′ ≥ k, ∀p i ∈ ⋄-proposer, ∀p j ∈ out-connected:
C_j^{k′+1}[HISTORY_i^{k′+1}] = C_j^{k′}[HISTORY_i^{k′}] + 1
Proof: Assume a time when the system has stabilized. This means, that all ⋄-proposers send timely messages to all out-connected processes in every round and no out-connected process receives timely messages from ⋄-silent processes. Then, let k be the number of the current round and for every ⋄-proposer p i let p j be an out-connected process, such that the counter C k j [HISTORY k i ] is minimal among all out-connected processes in round k. Then, the counter for p i 's history at p j will never decrease, because p j will never receive a message with a lower counter from any other process.
Since p i is a ⋄-proposer, the counter for p i 's history will increase by one at p j in every round. For every other out-connected process, since it receives also a message from p i in every round and it can only finitely often receive a lower counter corresponding to p i 's history (the lowest one is p j 's), the counter of p i 's history eventually increases in every round by one.
Lemma 5.
If a history of a process p j infinitely often corresponds to a maximal counter at a ⋄-proposer p i , then p j is a leader forever. More formally:
∀p i ∈ ⋄-proposer, ∀p j ∈ Π: (∀k, ∃k′ > k, ∀h: C_i^{k′}[HISTORY_j^{k′}] ≥ C_i^{k′}[h]) → p j ∈ leader
Proof: We first show that p j ∈ ⋄-proposer. Assume that it is not. Since p i ∈ ⋄-proposer, eventually the counter that corresponds to p i 's history is increased by one at every out-connected process (Lemma 4). Since p j ∈ ⋄-proposer, some out-connected process p l does not receive m k j in round k for infinitely many rounds k. Therefore, the counter at p l that corresponds to p j 's history is not increased by one in these rounds and is eventually strictly lower than the one that corresponds to p i 's history. Since every time some out-connected process has a lower counter than the others, eventually this counter propagates to all other out-connected processes, p i 's history will eventually be higher than p j 's at all out-connected processes. A contradiction.
If p i and p j are both ⋄-proposers, then eventually they receive their messages timely in every round k. Since p j 's history increases at all out-connected processes by one (Lemma 4), eventually C k j [HISTORY k j ] = C k i [HISTORY k j ]. Since by our assumption, in some future round k ′ , p j 's history is maximal at p i and a counter can increase by at most one and the counters that correspond to p j 's history increase always by one (Lemma 4), C k j [HISTORY k j ] is maximal forever and therefore p j is a leader forever. Lemma 6. Eventually, there exists a process p i ∈ leader and every leader is a ⋄-proposer. More formally:
∃k, ∃p i ∈ Π, ∀k′ ≥ k: p i ∈ leader(k′)   (1)
and
∀p i ∈ Π: (∀k, ∃k′ > k, p i ∈ leader(k′)) → p i ∈ ⋄-proposer   (2)
Proof: The eventual source p s is a ⋄-proposer. Therefore, there exists at least one ⋄-proposer. Either p s is also a leader forever, or there is another process whose history infinitely often corresponds to a higher counter at p s than p s 's history. Then, with Lemma 5 this process is a leader forever. This implies (1).
Assume a process p i is not a ⋄-proposer. Then, p i 's counter is increased by less than one in infinitely many rounds at some processes. Because eventually these counters propagate to all out-connected processes and the values of ⋄-proposers are increased in every round by at least one (Lemma 4), eventually the history of some ⋄-proposer is higher than that of p i . Therefore, p i cannot be a leader forever. This implies (2).
Lemma 7.
If no process has decided yet, then eventually only values of leaders and ⊥ get into a set WRITTEN anywhere. More formally:
∃k, ∀k′ ≥ k, ∀p i ∈ Π: WRITTEN_i^{k′} ⊆ {VAL_j^{k′} : p j ∈ leader(k′)} ∪ {⊥}
Proof: There is a time after which there exists at least one leader and all leaders are ⋄-proposers (Lemma 6) and since leaders propose their values always, all their values get into every set WRITTEN at all out-connected processes in every even round (Lemma 3).
Therefore, every set PROPOSED contains a value of a leader (compare Lemma 1) and no process that considers itself not as leader and has a value different from a leader will evaluate line 15 to true and add a different value to its set PROPOSED.
Theorem 2. Algorithm 3 implements consensus in ESS.
Proof: We have to prove the 3 properties of consensus. Validity is clear, since VAL is always an initial value.
To prove termination, assume there exists a run where no process ever decides. Then, eventually only non-⊥ values of leaders will get into a set WRITTEN anywhere (Lemma 7) and they will get into WRITTEN always in every even round (Lemma 3) and all out-connected processes select the same value (the maximum in Line 14). Therefore, only this value and ⊥ will be written in subsequent rounds and every out-connected process will select this value as value for PROPOSED in Line 16 (i.e., no out-connected process will select ⊥) and everybody will evaluate Line 11 to true in the next round. Therefore, eventually, every correct process will decide.
To prove agreement, assume p i is the first process that decides a value v in a round k. This means, that p i has evaluated Line 11 to true. Then, as PROPOSED ⊆ {v, ⊥}, no other value different from ⊥ is in a set WRITTEN anywhere in the system (compare Lemma 1) and v is in WRITTEN at every process in the system in round k, since it is also in WRITTENOLD (compare Lemma 2). Therefore, every other process decides v in the same round, or it will evaluate Line 13 to true and select v as new VAL and no other value different from ⊥ will ever get into PROPOSED anywhere in the system and therefore, no other value will ever be selected as VAL.
Weak-Sets
The weak-set data structure has been introduced by Delporte-Gallet and Fauconnier in [4].
A weak-set S is a shared data structure that contains a set of values. It is defined by two operations: the add S (v) operation to add a value v to the set and the get S operation which returns a subset of the values contained in the weakset. Note that we do not consider operations to remove values from the set. Every get S operation returns all values v where the corresponding add S (v) operation has completed before the beginning of the get S operation. Furthermore, no value v ′ where no add S (v ′ ) has started before the termination of the get S operation is returned. For add S operations concurrent with the get S operation, it may or may not return the values. Therefore, weak-sets are not necessarily linearizable 3 .
Weak-Sets and registers
A weak-set is clearly stronger than a (regular) register:
Proposition 1. A weak-set implements a (regular) multiplewriter multiple-reader register.
Proof: To write a value, every process reads the weakset and stores the content in a variable HISTORY. Then, every process adds the value to be written together with HISTORY to the weak-set.
To read a value, a process reads the weak-set and returns the highest value among all values accompanied by a HISTORY with maximal length.
This transformation satisfies the two properties of regular registers, namely termination and validity. Termination follows directly from the termination property of weak-sets.
If several processes write at the same time, two reads at two different processes may return different values, but after all writes have completed, the return value will be the same at all processes. To see that also validity holds, consider the value returned by a read. If there is no concurrent write, then the value returned is the last value written (i.e. the maximal value of all values concurrently written).
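A minimal in-memory Python sketch of this transformation may help (purely illustrative: the WeakSet class below is a single-address-space stand-in, whereas a real weak-set would itself be emulated over the message-passing layer, e.g. by Algorithm 4; all names are hypothetical).

```python
class WeakSet:
    """Toy single-address-space stand-in for the shared weak-set:
    add(v) inserts a value, get() returns the values added so far."""
    def __init__(self):
        self._values = set()
    def add(self, v):
        self._values.add(v)
    def get(self):
        return set(self._values)

class RegularRegisterFromWeakSet:
    """Register emulation following the proof of Proposition 1: a write tags the
    new value with the snapshot (HISTORY) read from the weak-set, and a read
    returns the largest value among the entries carrying a HISTORY of maximal size."""
    def __init__(self, weak_set):
        self.ws = weak_set
    def write(self, value):
        history = frozenset(self.ws.get())
        self.ws.add((value, history))
    def read(self):
        entries = self.ws.get()
        if not entries:
            return None
        longest = max(len(h) for _, h in entries)
        return max(v for v, h in entries if len(h) == longest)

reg = RegularRegisterFromWeakSet(WeakSet())
reg.write(5)
reg.write(2)          # this write sees the earlier entry, so its HISTORY is longer
print(reg.read())     # 2: the last completed write wins
```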
In [4], a weak-set is implemented using (atomic) registers in the following two cases:
Proposition 2. If the set of processes using the weak set is known (i.e. the IDs and the quantity), then weak-sets can be implemented with single-writer multiple-reader registers.
Proposition 3. If the set of possible values for the weak set is finite, then weak-sets can be implemented with multiple-writer multiple-reader registers.
Weak-Sets and the MS environment
Algorithm 4 shows how to implement a weak-set in the MS environment. Similarly to Section 3, for every variable VAR i , VAR k i is the value of this variable after process p i has executed Line 15 in round k (i.e. after compute is called with parameter k).
| 4,662 |
0903.2574
|
2949096794
|
Arrow's Impossibility Theorem states that any constitution which satisfies Independence of Irrelevant Alternatives (IIA) and Unanimity and is not a Dictator has to be non-transitive. In this paper we study quantitative versions of Arrow theorem. Consider @math voters who vote independently at random, each following the uniform distribution over the 6 rankings of 3 alternatives. Arrow's theorem implies that any constitution which satisfies IIA and Unanimity and is not a dictator has a probability of at least @math for a non-transitive outcome. When @math is large, @math is a very small probability, and the question arises if for large number of voters it is possible to avoid paradoxes with probability close to 1. Here we give a negative answer to this question by proving that for every @math , there exists a @math , which depends on @math only, such that for all @math , and all constitutions on 3 alternatives, if the constitution satisfies: The IIA condition. For every pair of alternatives @math , the probability that the constitution ranks @math above @math is at least @math . For every voter @math , the probability that the social choice function agrees with a dictatorship on @math at most @math . Then the probability of a non-transitive outcome is at least @math .
|
As noted in @cite_5 , there is an interesting connection between quantitative Arrow statements and the concept of testing introduced in @cite_12 @cite_16 which was studied and used extensively since. Roughly speaking a property of functions is testable if it is possible to perform a randomized test for the property such that if the probability that the function passes the test is close to @math , then the function has to be close to a function with the property (say in the hamming distance). In terms of testing, our result states that among all functions satisfying the IIA property, the Transitivity property is testable. Moreover, the natural test "works": i.e., in order to test for transitivity, one can pick a random input and check if the outcome is transitive.
|
{
"abstract": [
"",
"In this paper, we consider the question of determining whether a function f has property P or is e-far from any function with property P. A property testing algorithm is given a sample of the value of f on instances drawn according to some distribution. In some cases, it is also allowed to query f on instances of its choice. We study this question for different properties and establish some connections to problems in learning theory and approximation. In particular, we focus our attention on testing graph properties. Given access to a graph G in the form of being able to query whether an edge exists or not between a pair of vertices, we devise algorithms to test whether the underlying graph has properties such as being bipartite, k -Colorable, or having a p -Clique (clique of density p with respect to the vertex set). Our graph property testing algorithms are probabilistic and make assertions that are correct with high probability, while making a number of queries that is independent of the size of the graph. Moreover, the property testing algorithms can be used to efficiently (i.e., in time linear in the number of vertices) construct partitions of the graph that correspond to the property being tested, if it holds for the input graph.",
"The study of self-testing and self-correcting programs leads to the search for robust characterizations of functions. Here the authors make this notion precise and show such a characterization for polynomials. From this characterization, the authors get the following applications. Simple and efficient self-testers for polynomial functions are constructed. The characterizations provide results in the area of coding theory by giving extremely fast and efficient error-detecting schemes for some well-known codes. This error-detection scheme plays a crucial role in subsequent results on the hardness of approximating some NP-optimization problems."
],
"cite_N": [
"@cite_5",
"@cite_16",
"@cite_12"
],
"mid": [
"",
"1970630090",
"2018925011"
]
}
|
A Quantitative Arrow Theorem
|
• The IIA condition.
• For every pair of alternatives a, b, the probability that the constitution ranks a above b is at least ǫ.
• For every voter i, the probability that the social choice function agrees with a dictatorship on i is at most 1 − ǫ.
Then the probability of a non-transitive outcome is at least δ.
Our results generalize to any number k ≥ 3 of alternatives and to other distributions over the alternatives. We further derive a quantitative characterization of all social choice functions satisfying the IIA condition whose outcome is transitive with probability at least 1 − δ. Our results provide a quantitative statement of Arrow theorem and its generalizations and strengthen results of Kalai and Keller who proved quantitative Arrow theorems for k = 3 and for balanced constitutions only, i.e., for constitutions which satisfy for every pair of alternatives a, b, that the probability that the constitution ranks a above b is exactly 1/2.
The main novel technical ingredient of our proof is the use of inverse-hypercontractivity to show that if the outcome is transitive with high probability then there are no two different voters who are pivotal for two different pairwise preferences with non-negligible probability. Another important ingredient of the proof is the application of non-linear invariance to lower bound the probability of a paradox for constitutions where all voters have small probability for being pivotal.
Introduction
Notation and Quantitative Setup
We will assume voters vote independently and uniformly at random so each voter chooses one of the k! possible rankings with equal probability. We will write P for the underlying probability measure and E for the corresponding expected value. In this probabilistic setup, it is natural to measure transitivity as well as how close are two different constitutions.
• Given two constitutions F, G on n voters, we denote the statistical distance between F and G by D(F, G), so that:
D(F, G) = P[F(σ) ≠ G(σ)].
• Given a constitution F , we write T (F ) for the probability that the outcome of F is transitive and P (F ) for the probability that the outcome of F is non-transitive so (P stands for paradox):
T (F ) = P[F (σ) is transitive], P (F ) = 1 − T (F ).
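For intuition about T(F) and P(F), the small Python simulation below (an illustration of the definitions only; it plays no role in the proofs) estimates the probability of a non-transitive outcome when every pairwise comparison is decided by simple majority over n uniformly random voters, i.e. the classical Condorcet paradox probability.

```python
import random
from itertools import permutations

ALTERNATIVES = "abc"
RANKINGS = list(permutations(ALTERNATIVES))

def pairwise_majority_paradox_prob(n, trials=20000, seed=0):
    rng = random.Random(seed)
    paradoxes = 0
    for _ in range(trials):
        votes = [rng.choice(RANKINGS) for _ in range(n)]
        # majority tournament: outcome[(x, y)] = +1 iff x beats y
        outcome = {}
        for a, b in [("a", "b"), ("b", "c"), ("c", "a")]:
            margin = sum(+1 if v.index(a) < v.index(b) else -1 for v in votes)
            outcome[(a, b)] = 1 if margin > 0 else -1   # odd n: no ties
        # non-transitive iff the three pairwise decisions form a cycle
        if len(set(outcome.values())) == 1:
            paradoxes += 1
    return paradoxes / trials

print(pairwise_majority_paradox_prob(n=3))    # roughly 0.056 = 1/18 for three voters
print(pairwise_majority_paradox_prob(n=101))  # approaches about 0.088 as n grows
```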
Main Result
In our main result we show the following. Theorem 1.3. For every number of alternatives k ≥ 1 and ǫ > 0, there exists a δ = δ(ǫ), such that for every n ≥ 1, if F is a constitution on n voters and k alternatives satisfying:
• IIA and
• P (F ) < δ,
then there exists G ∈ F_k(n) satisfying D(F, G) < k^2 ǫ. Moreover, one may take:
δ = exp(−C/ǫ^{21}),   (1)
for some absolute constant 0 < C < ∞.
We therefore obtain the following result stated at the abstract:
Corollary 1.4.
For any number of alternatives k ≥ 3 and ǫ > 0, there exists a δ = δ(ǫ), such that for every n, if F is a constitution on n voters and k alternatives satisfying:
• IIA and
• F is k^2 ǫ-far from any dictator, so D(F, G) > k^2 ǫ for any dictator G,
• For every pair of alternatives a and b, the probability that F ranks a above b is at least k^2 ǫ, then the probability of a non-transitive outcome, P(F), is at least δ, where δ(ǫ) may be taken as in (1).
Proof. Assume by contradiction that P (F ) < δ. Then by Theorem 1.3 there exists a function G ∈ F n,k satisfying D(F, G) < k 2 ǫ. Note that for every pair of alternatives a and b it holds that:
P[G ranks a above b] ≥ P[F ranks a above b] − D(F, G) > 0.
Therefore for every pair of alternatives there is a positive probability that G ranks a above b. Thus by Theorem 1.2 it follows that G is a dictator which is a contradiction.
Remark 1.5. Note that if G ∈ F_k(n) and F is any constitution satisfying D(F, G) < k^2 ǫ then P(F) < k^2 ǫ.
Remark 1.6. The bounds stated in Theorem 1.3 and Corollary 1.4 in terms of k and ǫ is clearly not an optimal one. We expect that the true dependency has δ which is some fixed power of ǫ. Moreover we expect that the bound D(F, G) < k 2 ǫ should be improved to D(F, G) < ǫ.
Generalizations and Small Paradox Probability
Theorem 1.3 and Corollary 1.4 extend to more general product distributions. We call a distribution µ over the permutations of k elements S(k), symmetric if µ(−σ) = µ(σ) for all σ ∈ S(k). We will write α = α(µ) for min(µ(σ) : σ ∈ S(k)). We will write P and E for the probability and expected value according to the product measure µ n .
Theorem 1.7. Theorem 1.3 and Corollary 1.4 extend to the following setup where voters vote independently at random according to a symmetric distribution µ over the permutations of k elements. In this setup it suffices to take
δ = exp(−C_1(α)/ǫ^{C_2(α)}),   (2)
where 0 < C_1(α), C_2(α) < ∞. In particular one may take C_2(α) = 3 + 1/(2α^2).
The dependency of δ on ǫ in (1) and (2) is a bad one. For values of ǫ < O(n −1 ) it is possible to obtain better dependency, where δ is polynomial in ǫ. In Section 4 we prove the following. Theorem 1.8. Consider voting on k alternatives where voters vote uniformly at random from S n k . Let
1/324 > ǫ > 0.   (3)
For every n, if F is a constitution on n voters satisfying:
• IIA and
• P(F) < (1/36) ǫ^3 n^{-3},   (4)
then there exists G ∈ F_k(n) satisfying D(F, G) ≤ 10 k^2 ǫ. If each voter instead follows a symmetric voting distribution with minimal probability α, then the same statement holds where (3) is replaced with α^2/9 > ǫ > 0 and (4) is replaced with
P(F) < α^2 ǫ^{1/(2α)} n^{-3}.
Related Work
The first attempt at getting a quantitative version of Arrow's theorem is Theorem 1.2 in a beautiful paper by Kalai [10] which we state in our notation as follows.
Theorem 1.9. There exists a K > 0 such that the following holds: Consider voting on k = 3 alternatives where voters vote uniformly at random from S n 3 . Assume F is a balanced constitution, i.e., for every pair a, b, of alternatives, it holds that the probability that F ranks a above b is exactly 1/2. Then if P (F ) < ǫ, then D(F, G) < Kǫ for some dictator G.
Comparing Kalai's result to Theorem 1.3 we see that • Kalai obtains better dependency of δ in terms of ǫ.
• Kalai's result holds only for k = 3 alternatives, while ours hold for any number of alternatives.
• Kalai's result holds only when F is balanced while ours hold for all F . The approach of [10] is based on "direct" manipulation of the Fourier expression for probability of paradox. A number of unsuccessful attempts (including by the author of the current paper) have been made to extend this approach to a more general setup without assuming balance of the functions and to larger number of alternatives.
A second result of [10] proves that for balanced functions which are transitive the probability of a paradox is bounded away from zero. Transitivity is a strong assumption roughly meaning that all voters have the same power. We do not assume transitivity in the current paper. A related result [14,15] proved a conjecture of Kalai showing that among all balanced low influence functions, majority minimizes the probability of a paradox. The low influence condition is weaker than transitivity,but still requires that no single voter has strong influence on the outcome of the vote.
Keller [11] extended some of Kalai's result to symmetric distributions (still under the balance assumption). Keller [11] also provides lower bounds on the probability of a paradox in the case the functions are monotone and balanced.
We want to note of some natural limitation to the approach taken in [10] and [11] which is based on "direct" analysis of the probability of a paradox in terms of the Fourier expansion. First, this approach does not provide a proof of Arrow theorem nor does it ever use it (while our approach does). Second, it is easy to see that one can get small paradox probability by looking at constitutions on 3 alternatives which almost always rank one candidates at the top. Thus a quantitative version of Arrow theorem cannot be stated just in terms of distance to a dictator. Indeed an example in [11] (see Theorem 1.2) implies that for non-balanced functions the probability of a paradox cannot be related in a linear fashion to the distance from dictator or to other functions in F 3 (n).
As noted in [10], there is an interesting connection between quantitative Arrow statements and the concept of testing introduced in [18,9] which was studied and used extensively since. Roughly speaking a property of functions is testable if it is possible to perform a randomized test for the property such that if the probability that the function passes the test is close to 1, then the function has to be close to a function with the property (say in the hamming distance). In terms of testing, our result states that among all functions satisfying the IIA property, the Transitivity property is testable. Moreover, the natural test "works": i.e., in order to test for transitivity, one can pick a random input and check if the outcome is transitive.
We finally want to note that the special case of the quantitative Arrow theorem proved by Kalai [10] for balanced functions has been used to derive the first quantitative version of the Gibbard-Satterthwaite Theorem [8,19] in [7]. The results of [7] are limited in the sense that they require neutrality and apply only to 3 candidates. It is interesting to explore if the full quantitative version of Arrow theorem proven here will allow to obtain stronger quantitative version of the Gibbard-Satterthwaite Theorem.
Proof Ideas
We first recall the notion of influence of a voter. Recall that for f : {−1, 1} n → {−1, 1}, the influence of voter 1 ≤ i ≤ n is given by
I_i(f) = P[f(X_1, . . . , X_{i−1}, −1, X_{i+1}, . . . , X_n) ≠ f(X_1, . . . , X_{i−1}, 1, X_{i+1}, . . . , X_n)],
where X 1 , . . . , X n are distributed uniformly at random. The notion of influence is closely related to the notion of pivotal voter which was introduced in Barabera's proof of Arrow's Theorem [3]. Recall
that voter i is pivotal for f at x if f(x_1, . . . , x_{i−1}, 1, x_{i+1}, . . . , x_n) ≠ f(x_1, . . . , x_{i−1}, −1, x_{i+1}, . . . , x_n).
Thus the influence of voter i is the expected probability that voter i is pivotal.
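In code, the influence of voter i is simply the fraction of inputs on which flipping coordinate i changes the outcome. The brute-force Python helper below is only an illustration of the definition (the function names are our own):

```python
from itertools import product

def influence(f, n, i):
    """I_i(f) for f: {-1,1}^n -> {-1,1}, computed by exhaustive enumeration:
    the probability over a uniform x that flipping coordinate i changes f."""
    total = pivotal = 0
    for x in product((-1, 1), repeat=n):
        y = list(x)
        y[i] = -y[i]
        total += 1
        if f(x) != f(tuple(y)):
            pivotal += 1
    return pivotal / total

majority = lambda x: 1 if sum(x) > 0 else -1
dictator = lambda x: x[0]
print([influence(majority, 5, i) for i in range(5)])  # each voter: 6/16 = 0.375
print([influence(dictator, 5, i) for i in range(5)])  # [1.0, 0.0, 0.0, 0.0, 0.0]
```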
We discuss the main ideas of the proof for the case k = 3. By the IIA property, the pairwise preferences (a > b), (b > c) and (c > a) are decided by three different functions f, g and h depending on the corresponding pairwise preferences of the individual voters.
• The crucial and novel step is showing that for every ǫ > 0, there exists δ > 0, such that if two different voters i = j satisfy I i (f ) > ǫ and I j (g) > ǫ, then the probability of a non-transitive outcome is at least δ = ǫ C , for some C > 0. The proof of this step uses and generalizes the results of [16], which are based on inverse-hyper-contractive estimates [5]. We show that if I i (f ) > ǫ and I j (g) > ǫ then with probability at least ǫ C , over all voters but i and j, the restricted f and g, have i and j pivotal. We show how this implies that with probability ǫ C we may chose the rankings of i and j, leading to a non-transitive outcome. And therefore the probability of a paradox is at least ǫ C /36. This step may be viewed as a quantitative version of a result by Barbera [3]. The main step in Barbera's proof of Arrow theorem is proving that if two distinct voters are pivotal for two different pairwise preferences that the constitution has a non-rational outcome.
• The results above suffice to establish a quantitative Arrow theorem for ǫ = O(n^{-1}). This follows from the fact that if all the influences of a function are bounded by ǫn^{-1}, then the function is O(ǫ)-close to a constant function. The probability of paradox obtained here is of order ǫ^C.
• Next, we show that the statement of the theorem holds when n is large and all functions f, g, h are symmetric threshold functions. Note that in this case, since symmetric threshold functions are low influence functions, the conclusion of the theorem reads: if none of the alternatives is ranked at top/bottom with probability ≥ 1 − ǫ, then the probability of a paradox is at least δ.
• Using the Majority is stablest result [15] (see also [14]) in the strong form proven in [13] (see also [12]) we extend the result above as long as for any pair of functions say f, g there exist no variable for which both I i (f ) and I i (g) is large.
• The remaining case is where there exists a single voter i, such that I i (f ) is large for at least two of the functions and all other variables have low influences. By expanding the paradox probability in terms of the 6 possible ranking of voter i and using the previous case, we obtain the conclusion of the theorem, i.e., that in this case either there is a non-negligible probability of a paradox, or the function close to a dictator function on voter i.
Some notation and preliminaries are given in Section 2. The proof for the case where two different functions have two different influential voters is given in Section 3. This already allows us to establish, in Section 4, a quantitative Arrow theorem in the case where the constitution is very close to an element of F_k(n). The proof of the Gaussian Arrow Theorem is given in Section 5. Applying "strong" non-linear invariance, the result is obtained for low influence functions in Section 6. The result with one influential variable is then derived in Section 7. The proof of the main result for 3 alternatives is then given in Section 8. Section 9 concludes the proof by deriving the result for any number of alternatives. The combinatorial Theorem 1.2 is proven in Section 10. Section 11 provides the adjustments of the proofs needed to obtain the results for symmetric distributions.
Acknowledgement
Thanks to Marcus Issacson and Arnab Sen for interesting discussions. Thanks to Salvador Barbera for helpful comments on a manuscript of the paper.
Preliminaries
For the proof we introduce some notation and then follow the steps above.
Some Notation
The following notation will be useful for the proof. A social choice function is a function from a profile of n permutations, i.e., an element of S(k)^n, to a binary decision, for every pair of alternatives, of which one is preferable. The set of pairs of candidates is of size \binom{k}{2}. Therefore a social choice function is a map
F : S(k)^n → {−1, 1}^{\binom{k}{2}}, where F(σ) = (h^{a>b}(σ) : {a, b} a pair of alternatives) means that F ranks a above b if h^{a>b}(σ) = 1 and F ranks b above a if h^{a>b}(σ) = −1.
We will further use the convention that h a>b (σ) = −h b>a (σ).
The binary notation above is also useful to encode the individual preferences σ(1), . . . , σ(n) as follows. Given σ = σ(1), . . . , σ(n) we define binary vectors x a>b = x a>b (σ) in the following manner:
x^{a>b}(i) = 1 if voter i ranks a above b; x^{a>b}(i) = −1 if voter i ranks b above a.
The IIA condition implies that the pairwise preference between any pair of outcomes depends only on the individual pairwise preferences. Thus, if F satisfies the IIA property then there exists functions f a>b for every pair of candidates a and b such that
F(σ) = (f^{a>b}(x^{a>b}) : {a, b} a pair of alternatives).
We will also consider more general distributions over S(k). We call a distribution µ on S(k) symmetric if µ(−σ) = µ(σ) for all σ ∈ S(k). We will write α = α(µ) for min(µ(σ) : σ ∈ S(k)).
The Correlation Between x a>b and x b>c
For some of the derivations below will need the correlations between the random variables x a>b (i) and x b>c (i). We have the following easy fact: Lemma 2.1. Assume that voters vote uniformly at random from S(3). Then:
1. For all i = j and all a, b, c, d the variables x a>b (i) and x c>d (j) are independent.
2. If a, b, c are distinct then E[x a>b (i)x b>c (i)] = −1/3.
For the proof of part 2 of the Lemma, note that the expected value depends only on the distribution over the rankings of a, b, c which is uniform. It thus suffices to consider the case k = 3. In this case there are 4 permutations where x^{a>b}(i) ≠ x^{b>c}(i) and 2 permutations where x^{a>b}(i) = x^{b>c}(i).
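The −1/3 correlation is easy to confirm by enumerating the six rankings of {a, b, c}; the short Python check below is purely illustrative.

```python
from itertools import permutations

rankings = list(permutations("abc"))
def pref(r, x, y):            # +1 if ranking r puts x above y, else -1
    return 1 if r.index(x) < r.index(y) else -1

corr = sum(pref(r, "a", "b") * pref(r, "b", "c") for r in rankings) / len(rankings)
print(corr)                   # -1/3: 2 rankings agree on both pairs, 4 disagree
```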
We will also need the following estimate.
Lemma 2.2. Assume that voters vote uniformly at random from S(3). Let f = x^{c>a} and let
(Tf)(x^{a>b}, x^{b>c}) = E[f | x^{a>b}, x^{b>c}]. Then ‖Tf‖_2 = 1/√3.
Proof. There are two permutations where x^{a>b}, x^{b>c} determine x^{c>a}. For all other permutations x^{c>a} is equally likely to be −1 and 1 conditioned on x^{a>b} and x^{b>c}. We conclude that ‖Tf‖_2^2 = 1/3 and therefore ‖Tf‖_2 = 1/√3.
Inverse Hyper-contraction and Correlated Intersections Probabilities
We will use some corollaries of the inverse hyper-contraction estimates proven by Borell [4]. The following corollary is from [16].
Lemma 2.3. Let x, y ∈ {−1, 1}^n be distributed uniformly with the pairs (x_i, y_i) independent. Assume that E[x(i)] = E[y(i)] = 0 for all i and that E[x(i)y(i)] = ρ ≥ 0. Let B_1, B_2 ⊂ {−1, 1}^n be two sets and assume that
P[B_1] ≥ e^{−α^2},  P[B_2] ≥ e^{−β^2}.
Then:
P[x ∈ B_1, y ∈ B_2] ≥ exp(−(α^2 + β^2 + 2ραβ)/(1 − ρ^2)).
We will need to generalize the result above to negative ρ and further to different ρ value for different bits.
Lemma 2.4. Let x, y ∈ {−1, 1}^n be distributed uniformly with the pairs (x_i, y_i) independent. Assume that E[x(i)] = E[y(i)] = 0 for all i and that |E[x(i)y(i)]| ≤ ρ. Let B_1, B_2 ⊂ {−1, 1}^n be two sets and assume that P[B_1] ≥ e^{−α^2}, P[B_2] ≥ e^{−β^2}.
Then:
P[x ∈ B_1, y ∈ B_2] ≥ exp(−(α^2 + β^2 + 2ραβ)/(1 − ρ^2)).
In particular if P[B_1] ≥ ǫ and P[B_2] ≥ ǫ, then:
P[x ∈ B_1, y ∈ B_2] ≥ ǫ^{2/(1−ρ)}.   (5)
Proof. Take z so that the pairs (x_i, z_i) are independent and E[z_i] = 0 and E[x_i z_i] = ρ.
It is easy to see that there exists w = (w_1, . . . , w_n), independent of x and z, such that the joint distribution of (x, y) is the same as that of (x, z · w), where z · w = (z_1 w_1, . . . , z_n w_n). Now for each fixed w we have that
P[x ∈ B_1, z · w ∈ B_2] = P[x ∈ B_1, z ∈ w · B_2] ≥ exp(−(α^2 + β^2 + 2ραβ)/(1 − ρ^2)), where w · B_2 = {w · w′ : w′ ∈ B_2}.
Therefore taking expectation over w we obtain:
P[x ∈ B_1, y ∈ B_2] = E_w P[x ∈ B_1, z · w ∈ B_2] ≥ exp(−(α^2 + β^2 + 2ραβ)/(1 − ρ^2))
as needed. The conclusion (5) follows by simple substitution (note the difference with Corollary 3.5 in [16] for sets of equal size which is a typo).
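Inequality (5) can be illustrated numerically: for a small n one can compute P[x ∈ B_1, y ∈ B_2] exactly when y_i equals x_i with probability (1+ρ)/2 independently (so E[x_i y_i] = ρ) and compare it with ǫ^{2/(1−ρ)}. The Python sketch below is only such an illustration, with an arbitrarily chosen half-space event and made-up helper names.

```python
from itertools import product
from fractions import Fraction

def joint_prob(n, rho, in_B1, in_B2):
    """Exact P[x in B1, y in B2], P[B1], P[B2]: x is uniform on {-1,1}^n and,
    independently for each i, y_i = x_i with probability (1+rho)/2, so E[x_i y_i] = rho."""
    p_same = (1 + rho) / 2
    joint = p1 = p2 = Fraction(0)
    for x in product((-1, 1), repeat=n):
        px = Fraction(1, 2 ** n)
        if in_B1(x):
            p1 += px
        for y in product((-1, 1), repeat=n):
            flips = sum(xi != yi for xi, yi in zip(x, y))
            pxy = px * (1 - p_same) ** flips * p_same ** (n - flips)
            if in_B2(y):
                p2 += pxy
                if in_B1(x):
                    joint += pxy
    return joint, p1, p2

n, rho = 4, Fraction(1, 3)
B = lambda z: sum(z) >= 2                       # an arbitrary half-space event
joint, p1, p2 = joint_prob(n, rho, B, B)
eps = float(min(p1, p2))
print(float(joint), ">=", eps ** (2 / (1 - float(rho))))  # the bound of (5) holds here
```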
Applying the CLT and using [5] one obtains the same result for Gaussian random variables.
Lemma 2.5. Let N, M be N(0, I_n) with the pairs (N(i), M(i))_{i=1}^n independent. Assume that |E[N(i)M(i)]| ≤ ρ. Let B_1, B_2 ⊂ R^n be two sets and assume that P[B_1] ≥ e^{−α^2}, P[B_2] ≥ e^{−β^2}.
Then:
P[N ∈ B_1, M ∈ B_2] ≥ exp(−(α^2 + β^2 + 2ραβ)/(1 − ρ^2)).
In particular if P[B_1] ≥ ǫ and P[B_2] ≥ ǫ, then:
P[N ∈ B_1, M ∈ B_2] ≥ ǫ^{2/(1−ρ)}.   (6)
Proof. Fix the values of α and β and assume without loss of generality that max i |E[N (i)M (i)]| is obtained for i = 1. Then by [5] (see also [13]), the minimum of the quantity
P[N ∈ B 1 , M ∈ B 2 ]
under the constraints on the measures given by α and β is obtained in one dimension, where B 1 and B 2 are intervals I 1 , I 2 . Look at random variables
x(i), y(i), where E[x(i)] = E[y(i)] = 0 and E[x(i)y(i)] = E[M_1 N_1]. Let X_n = n^{−1/2} Σ_{i=1}^n x(i) and Y_n = n^{−1/2} Σ_{i=1}^n y(i). Then the CLT implies that P[X_n ∈ I_1] → P[N_1 ∈ B_1], P[Y_n ∈ I_2] → P[M_1 ∈ B_2], and P[X_n ∈ I_1, Y_n ∈ I_2] → P[N_1 ∈ B_1, M_1 ∈ B_2].
The proof now follows from the previous lemma.
Two Influential Voters
We begin the proof of Arrow theorem by considering the case of 3 candidates named a, b, c and two influential voters named 1 and 2. Note that for each voter there are 6 legal values for (x_i^{a>b}, x_i^{b>c}, x_i^{c>a}). These are all vectors different from (−1, −1, −1) and (1, 1, 1). Similarly, the constitution given by f^{a>b}, f^{b>c} and f^{c>a} has a non-transitive outcome if and only if
(f^{a>b}(x^{a>b}), f^{b>c}(x^{b>c}), f^{c>a}(x^{c>a})) ∈ {(−1, −1, −1), (1, 1, 1)}.
Two Pivots Imply Paradox
We will use the following Lemma which as kindly noted by Barbera was first proven in his paper [3].
Proposition 3.1. Consider a social choice function on 3 candidates a, b and c and two voters denoted 1 and 2. Assume that the social choice function satisfies the IIA condition and that voter 1 is pivotal for f^{a>b} and voter 2 is pivotal for f^{b>c}. Then there exists a profile for which
(f a>b (x a>b ), f b>c (x b>c ), f c>a (x c>a )) is non-transitive.
For completeness we provide a proof using the language of the current paper (the proof of [3] like much of the literature on Arrow's theorem uses binary relation notation).
Proof. Since voter 1 is pivotal for f a>b and voter 2 is pivotal for f b>c there exist x, y such that
f^{a>b}(−1, y) ≠ f^{a>b}(1, y),  f^{b>c}(x, −1) ≠ f^{b>c}(x, 1).
Look at the profile where
x a>b = (x * , y), x b>c = (x, y * ), x c>a = (−x, −y).
We claim that for all values of x*, y* these correspond to transitive rankings of the two voters. This follows from the fact that neither (x*, x, −x) nor (y, y*, −y) belongs to the set {(1, 1, 1), (−1, −1, −1)}. Note furthermore that we may choose x* and y* such that
f c>a (−x, −y) = f a>b (x * , y) = f b>c (x, y * ).
We have thus proved the existence of a non-transitive outcome as needed.
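Proposition 3.1 can also be checked mechanically in the two-voter setting: fix any IIA constitution on three alternatives in which voter 1 is pivotal for f^{a>b} and voter 2 is pivotal for f^{b>c}, and search the 36 profiles for a non-transitive outcome. The Python sketch below does this for one concrete (and arbitrarily chosen) family of pairwise functions; it is an illustration only.

```python
from itertools import permutations, product

RANKINGS = list(permutations("abc"))
def pref(r, x, y):
    return 1 if r.index(x) < r.index(y) else -1

# One concrete IIA constitution for two voters in which voter 1 is pivotal for
# f_ab and voter 2 is pivotal for f_bc (any such choice works for the search).
f_ab = lambda x1, x2: x1           # a-vs-b decided by voter 1
f_bc = lambda x1, x2: x2           # b-vs-c decided by voter 2
f_ca = lambda x1, x2: max(x1, x2)  # c-vs-a decided arbitrarily from both voters

paradoxes = []
for r1, r2 in product(RANKINGS, repeat=2):
    out = (f_ab(pref(r1, "a", "b"), pref(r2, "a", "b")),
           f_bc(pref(r1, "b", "c"), pref(r2, "b", "c")),
           f_ca(pref(r1, "c", "a"), pref(r2, "c", "a")))
    if len(set(out)) == 1:         # (1,1,1) or (-1,-1,-1): a non-transitive outcome
        paradoxes.append((r1, r2))

print(len(paradoxes) > 0, paradoxes[0])   # True: some profile yields a cycle
```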
Two influential Voters Implies Joint Pivotality
Next we establish the following result.
Lemma 3.2. Consider a social choice function on 3 candidates a, b and c and n voters denoted 1, . . . , n. Assume that the social choice function satisfies the IIA condition and that voters vote uniformly at random. Assume further that I_1(f^{a>b}) > ǫ and I_2(f^{b>c}) > ǫ. Let B = {σ : 1 is pivotal for f^{a>b}(x^{a>b}(σ)) and 2 is pivotal for f^{b>c}(x^{b>c}(σ))}.
Then
P[B] ≥ ǫ^3.
Proof. Let
B_1 = {σ : 1 is pivotal for f^{a>b}}, B_2 = {σ : 2 is pivotal for f^{b>c}}. Then P[B_1] ≥ ǫ and P[B_2] ≥ ǫ. Note that E[x^{a>b}(i)] = E[x^{b>c}(i)] = 0 and |E[x^{a>b}(i) x^{b>c}(i)]| = 1/3.
The proof now follows from Lemma 2.4.
Two Influential Voters Imply Non-Transitivity
We can now prove the main result of the section.
Theorem 3.3. Consider a constitution F on n voters, where the voters vote uniformly at random, satisfying:
• IIA and
• there exist three distinct alternatives a, b and c and two distinct voters i and j such that
I_i(f^{a>b}) > ǫ and I_j(f^{b>c}) > ǫ.
Then P(F) > (1/36) ǫ^3.
Proof. We look at F restricted to rankings of a, b and c. Note that in the uniform case each permutation has probability 1/6. Without loss of generality assume that i = 1 and j = 2 and consider first the case of the uniform distribution over rankings. Let B be the event from Lemma 3.2. By the lemma we have P[B] ≥ ǫ^3. Note that if σ ∈ S(3)^n satisfies σ ∈ B, then fixing σ(3), . . . , σ(n) we may apply Proposition 3.1 to conclude that there are values of σ*(1) and σ*(2) leading to a non-transitive outcome. Therefore:
P(F) ≥ P[{(σ*(1), σ*(2), σ(3), . . . , σ(n)) : σ ∈ B}] ≥ (1/36) P[B] ≥ (1/36) ǫ^3.
Arrow Theorem for Almost Transitive Functions
In this section we prove a quantitative Arrow Theorem in the case where the probability of a non-transitive outcome is inverse polynomial in n. In this case it is possible to obtain an easier quantitative proof which does not rely on invariance. We will use the following easy and well known Lemma.
Lemma 4.1. Let f : {−1, 1}^n → {−1, 1} and assume I_i(f) ≤ ǫn^{-1} for all i. Then there exists a constant function s ∈ {−1, 1} such that D(f, s) ≤ 2ǫ.
Similarly, let f : {−1, 1}^n → {−1, 1} and assume I_i(f) ≤ ǫn^{-1} for all i ≠ j. Then there exists a function g : {−1, 1} → {−1, 1} such that D(f, g(x_j)) ≤ 2ǫ.
Proof. For the first claim, use
(1/2) min(P[f = 1], P[f = −1]) ≤ P[f = 1] P[f = −1] ≤ Var[f] ≤ Σ_{i=1}^n I_i(f) ≤ ǫ.   (7)
For the second claim assume WLOG that j = 1. Let
f_1(x_2, . . . , x_n) = f(1, x_2, . . . , x_n) and f_{−1}(x_2, . . . , x_n) = f(−1, x_2, . . . , x_n). Apply (7) to choose s_1 so that D(f_1, s_1) ≤ Σ_{i>1} I_i(f_1).
Similarly, let s −1 be chosen so that
D(f_{−1}, s_{−1}) ≤ Σ_{i>1} I_i(f_{−1}).
Let g(1) = s 1 and g(−1) = s −1 . Then:
2D(f, g) = D(f_1, s_1) + D(f_{−1}, s_{−1}) ≤ Σ_{i>1} I_i(f_1) + Σ_{i>1} I_i(f_{−1}) = 2 Σ_{i>1} I_i(f) ≤ 2ǫ.
The proof follows.
Theorem 4.2. For every n, if F is a constitution on n voters satisfying:
• IIA and
• P(F) < (1/36) ǫ^3 n^{-3},   (9)
then there exists G ∈ F 3 (n) satisfying D(F, G) ≤ 10ǫ.
Proof. We prove the theorem for the uniform case. The proof for the symmetric case is identical. Let f^{a>b}, f^{b>c}, f^{c>a} : {−1, 1}^n → {−1, 1} be the pairwise preference functions of F and let η = ǫ n^{-1}. We consider the following three cases.
I. There exist two distinct voters i and j and two distinct functions f, g ∈ {f^{a>b}, f^{b>c}, f^{c>a}} such that I_i(f) > η and I_j(g) > η.
II. There exists a voter i such that for all j ≠ i and all f ∈ {f^{a>b}, f^{b>c}, f^{c>a}}, it holds that I_j(f) < η.
III. There exists two different functions f, g ∈ {f a>b , f b>c , f c>a } such that for all i it holds that I i (f ) < η and I i (g) < η.
Note that each F satisfies one of the three conditions above. Note further that in case I we have P(F) > (1/36) ǫ^3 n^{-3} by Theorem 3.3, which contradicts the assumption (9). So to conclude the proof it suffices to obtain D(F, G) ≤ 10ǫ assuming (9).
In case II. it follows from Lemma 4.1 that there exists functions g a>b , g b>c and g c>a of voter i only such that
D(f a>b , g a>b ) < 2ǫ, D(f b>c , g b>c ) < 2ǫ, D(f c>a , g c>a ) < 2ǫ.
Letting G be the constitution defined by the g's we therefore have D(F, G) ≤ 6ǫ and P (G) ≤ P (F ) + 6ǫ ≤ 9ǫ.
Furthermore if 9ǫ < 1/36 this implies that P(G) = 0. So D(F, F_3(n)) ≤ 6ǫ which is a contradiction.
In the remaining case III, assume WLOG that f^{a>b} and f^{b>c} have all influences small. By Lemma 4.1 it follows that f^{a>b} and f^{b>c} are 2ǫ-close to constant functions. There are now two subcases to consider. In the first case there exists an s ∈ {±1} such that D(f^{a>b}, s) ≤ 2ǫ and D(f^{b>c}, −s) ≤ 2ǫ. Note that in this case, letting
g a>b = s, g b>c = −s, g c>a = f c>a ,
and G be the constitution defined by the g's, we obtain that G ∈ F 3 (n) and D(F, G) ≤ 4ǫ.
We finally need to consider the case where D(f^{a>b}, s) ≤ 2ǫ and D(f^{b>c}, s) ≤ 2ǫ for some s ∈ {±1}. Let A(a, b) be the set of σ where f^{a>b} = −s, let A(b, c) be the set of σ where f^{b>c} = −s, and let A(a, c) be the set of σ where f^{c>a} = s. Then
P[A(a, b)] ≤ 2ǫ and P[A(b, c)] ≤ 2ǫ. Furthermore by transitivity P[A(a, c)] ≤ P[A(a, b)] + P[A(b, c)] + P(F) ≤ 6ǫ.
We thus conclude that D(f^{c>a}, −s) ≤ 6ǫ. Letting g^{a>b} = g^{b>c} = −g^{c>a} = s and G be the constitution defined by the g's, we have that D(F, G) ≤ 10ǫ. A contradiction. The proof follows.
It is now easy to prove Theorem 1.8 for the uniform distribution. The adaptations to symmetric distributions will be discussed in Section 11.
Proof. The proof follows by applying Theorem 4.2 to triplets of alternatives. We give the proof for the uniform case. Assume P(F) < (1/36) ǫ^3 n^{-3}. Note that if g_1, g_2 : {−1, 1}^n → {−1, 1} are two different functions, each of which is either a dictator or a constant function, then D(g_1, g_2) ≥ 1/2. Therefore for all a, b it holds that D(f^{a>b}, g) < 10ǫ for at most one function g which is either a dictator or a constant function. In case there exists such a function we let g^{a>b} = g, otherwise, we let g^{a>b} = f^{a>b}.
Let G be the social choice function defined by the functions g a>b . Clearly:
\[
D(F,G) < 10\binom{k}{2}\epsilon < 10k^2\epsilon.
\]
The proof would follow if we could show P (G) = 0 and therefore G ∈ F k (n).
To prove that $G \in F_k(n)$ it suffices to show that for every set $A$ of three alternatives, it holds that $G^A \in F_3(n)$. Since $P(F^A) \le P(F) < \frac{1}{36}\epsilon^3 n^{-3}$, Theorem 4.2 implies that there exists a function $H^A \in F_3(n)$ s.t. $D(H^A, F^A) < 10\epsilon$. There are two cases to consider:
• H A is a dictator. This implies that f a>b is 10ǫ close to a dictator for each a, b and therefore f a>b = g a>b for all pairs a, b, so G A = H A ∈ F 3 (n).
• There exists an alternative (say a) that H A always ranks at the top/bottom. In this case we have that f a>b and f c>a are at most ǫ far from the constant functions 1 and −1 (or −1 and 1). The functions g a>b and g c>a have to take the same constant values and therefore again we have that G A ∈ F 3 (n).
The proof follows.
The Gaussian Arrow Theorem
The next step is to consider a Gaussian version of the problem. The Gaussian version corresponds to a situation where the functions f a>b , f b>c , f c>a can only "see" averages of large subsets of the voters. We thus define a 3 dimensional normal vector N . The first coordinate of N is supposed to represent the deviation of the number of voters where a ranks above b from the mean. The second coordinate is for b ranking above c and the last coordinate for c ranking above a.
Since averaging maintains the expected value and covariances, we define:
\[
E[N_1^2] = E[N_2^2] = E[N_3^2] = 1, \qquad E[N_1 N_2] = E[N_2 N_3] = E[N_3 N_1] = -\tfrac{1}{3}, \tag{10}
\]
and let $N = (N(1), \dots, N(n))$ consist of independent copies of this $3$-dimensional vector, writing $N_i = (N(1)_i, \dots, N(n)_i)$ for $1 \le i \le 3$.
Theorem 5.1. For every $\epsilon > 0$ there exists a $\delta = \delta(\epsilon) > 0$ such that the following holds. Let $f_1, f_2, f_3 : \mathbb{R}^n \to \{-1,1\}$ and assume that for all $1 \le i \le 3$ and all $u \in \{-1,1\}$ it holds that
\[
P[f_i(N_i) = u,\; f_{i+1}(N_{i+1}) = -u] \le 1 - \epsilon. \tag{11}
\]
Then with the setup given in (10) it holds that:
P[f 1 (N 1 ) = f 2 (N 2 ) = f 3 (N 3 )] ≥ δ.
Moreover, one may take δ = (ǫ/2) 18 .
We note that the negation of condition (11) corresponds to having one of the alternatives at the top/bottom with probability at least 1 − ǫ. Therefore the theorem states that unless this is the case, the probability of a paradox is at least δ. Since the Gaussian setup excludes dictator functions in terms of the original vote, this is the result to be expected in this case.
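As a quick check of the covariance value $-1/3$ appearing in (10), consider a single voter drawing a uniformly random ranking of $\{a,b,c\}$:
\[
P[x^{a>b}(1) = x^{b>c}(1) = 1] = P[a > b > c] = \tfrac{1}{6}, \qquad
P[x^{a>b}(1) = x^{b>c}(1) = -1] = P[c > b > a] = \tfrac{1}{6},
\]
\[
E[x^{a>b}(1)\, x^{b>c}(1)] = P[x^{a>b}(1) = x^{b>c}(1)] - P[x^{a>b}(1) \ne x^{b>c}(1)] = \tfrac{1}{3} - \tfrac{2}{3} = -\tfrac{1}{3}.
\]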
Proof. We will consider two cases: either all the functions f i satisfy |Ef i | ≤ 1 − ǫ, or there exists at least one function with |Ef i | > 1 − ǫ.
Assume first that there exists a function $f_i$ with $|E f_i| > 1 - \epsilon$. Without loss of generality assume that $P[f_1 = 1] > 1 - \epsilon/2$. Note that by (11) it follows that $P[f_2 = 1] > \epsilon/2$ and $P[f_3 = 1] > \epsilon/2$. By Lemma 2.5, we have $P[f_2(N_2) = 1, f_3(N_3) = 1] > (\epsilon/2)^3$. We now look at the function $g = 1(f_2 = 1, f_3 = 1)$. Let
\[
M_1 = \frac{\sqrt{3}}{2}(N_2 + N_3), \qquad M_2 = \frac{\sqrt{3}}{2\sqrt{2}}(N_2 - N_3).
\]
Then it is easy to see that $M_2(i)$ is uncorrelated with, and therefore independent of, $N_1(i)$ and $M_1(i)$ for all $i$. Moreover, for all $i$ the covariance between $M_1(i)$ and $N_1(i)$ is $1/\sqrt{3}$ in absolute value (this also follows from Lemma 2.2), so that $N_1(i)$ can be written as a linear combination of $M_1(i)$ and $\sqrt{1 - 1/3}\, Z_i$, where $Z = (Z_1, \dots, Z_n)$ is a standard Gaussian vector independent of everything else. We obtain:
\[
P[f_1(N_1) = 1, f_2(N_2) = 1, f_3(N_3) = 1] = P[f_1(N_1) = 1,\; g(M_1, M_2) = 1] \ge \bigl((\epsilon/2)^{3}\bigr)^{2/(1/3)} \ge (\epsilon/2)^{18}.
\]
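For completeness, the normalization of $M_1$ and $M_2$ can be checked directly from (10), assuming $E[N_2N_3] = E[N_1N_2] = E[N_1N_3] = -1/3$:
\[
E[M_1^2] = \tfrac{3}{4}\bigl(E[N_2^2] + E[N_3^2] + 2E[N_2N_3]\bigr) = \tfrac{3}{4}\cdot\tfrac{4}{3} = 1, \qquad
E[M_2^2] = \tfrac{3}{8}\bigl(2 - 2E[N_2N_3]\bigr) = \tfrac{3}{8}\cdot\tfrac{8}{3} = 1,
\]
\[
E[M_2 N_1] = \tfrac{\sqrt{3}}{2\sqrt{2}}\bigl(E[N_1N_2] - E[N_1N_3]\bigr) = 0, \qquad
E[M_1 N_1] = \tfrac{\sqrt{3}}{2}\bigl(E[N_1N_2] + E[N_1N_3]\bigr) = -\tfrac{1}{\sqrt{3}}.
\]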
We next consider the case where all functions satisfy |Ef i | ≤ 1−ǫ. In this case at least two of the functions obtain the same value with probability at least a 1/2. Let's assume that P[f 1 = 1] ≥ 1/2 and P[f 2 = 1] ≥ 1/2. Then by Lemma 2.5 we obtain that
P[f 1 = 1, f 2 = 1] ≥ 1/8.
Again we define g = 1(f 1 = 1, f 2 = 1). Since P[f 3 = 1] > ǫ/2, we may apply Lemma 2.5 and obtain that:
P[f 1 = 1, f 2 = 1, f 3 = 1] = P[f 1 = 1, g = 1] ≥ (ǫ/2) 3 .
This concludes the proof.
Arrow Theorem for Low Influence Functions
Our next goal is to apply Theorem 5.1 along with invariance in order to obtain an Arrow theorem for low influence functions. Non-linear invariance principles were proven in [17] and later in [15] and [13]. We will use the two latter results, which have quantitative bounds in terms of the influences. The proof for uniform voting distributions follows in a straightforward manner from Theorem 5.1, Kalai's formula and the Majority is Stablest (MIST) result in the strong form stated in [6,13], where it is allowed that for each variable one of the functions has high influence. The proof follows since Kalai's formula allows us to write the probability of a paradox as a sum of correlation terms between pairs of functions, and each correlation factor is asymptotically minimized by symmetric monotone threshold functions. Therefore the overall expression is also minimized by symmetric monotone threshold functions. However, Theorem 5.1 provides a lower bound on the probability of paradox for symmetric threshold functions, so the proof follows. The case of symmetric distributions is much more involved and will be discussed in subsection 11.5.
We finally note that the application of invariance is the step of the proof where δ becomes very small (more than exponentially small in ǫ, instead of just polynomially small). A better error estimate in invariance principles in terms of influences will thus have a dramatic effect on the value of δ.
Arrow's theorem for low influence functions.
We first recall the following result from Kalai [10]. Lemma 6.1. Consider a constitution $F$ on three alternatives satisfying IIA and let $F$ be given by $f_{a>b}$, $f_{b>c}$ and $f_{c>a}$. Then:
\[
P(F) = \frac{1}{4}\Bigl(1 + E[f_{a>b}(x^{a>b}) f_{b>c}(x^{b>c})] + E[f_{b>c}(x^{b>c}) f_{c>a}(x^{c>a})] + E[f_{c>a}(x^{c>a}) f_{a>b}(x^{a>b})]\Bigr) \tag{12}
\]
Proof. Note that the outcome is non-transitive exactly when $f_{a>b}(x^{a>b}) = f_{b>c}(x^{b>c}) = f_{c>a}(x^{c>a})$. Moreover, the indicator of the event $x = y = z$ for $x, y, z \in \{-1,1\}$ is $s(x,y,z) = \frac{1}{4}(1 + xy + yz + zx)$. The proof follows.
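The identity for $s$ can be verified by checking the two possible sign configurations:
\[
x = y = z \;\Rightarrow\; xy = yz = zx = 1 \;\Rightarrow\; s(x,y,z) = \tfrac{1}{4}(1+1+1+1) = 1,
\]
\[
\text{otherwise exactly two of the three values agree} \;\Rightarrow\; \{xy, yz, zx\} = \{1,-1,-1\} \;\Rightarrow\; s(x,y,z) = \tfrac{1}{4}(1+1-1-1) = 0.
\]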
Theorem 6.2. For every $\epsilon > 0$ there exist $\delta = \delta(\epsilon) > 0$ and $\tau = \tau(\delta) > 0$ such that the following holds. Let $f_1, f_2, f_3 : \{-1,1\}^n \to \{-1,1\}$ and assume that for all $1 \le i \le 3$ and all $u \in \{-1,1\}$ it holds that
\[
P[f_i = u,\; f_{i+1} = -u] \le 1 - 2\epsilon \tag{13}
\]
and for all $j$ it holds that
\[
\bigl|\{1 \le i \le 3 : I_j(f_i) > \tau\}\bigr| \le 1. \tag{14}
\]
Then it holds that
P(f 1 , f 2 , f 3 ) ≥ δ.
Moreover, assuming the uniform distribution, one may take:
\[
\delta = \frac{1}{8}(\epsilon/2)^{20}, \qquad \tau = \tau(\delta), \quad \text{where } \tau(\delta) := \delta^{\,C\log(1/\delta)/\delta},
\]
for some absolute constant C.
Proof. Let $g_1, g_2, g_3 : \mathbb{R} \to \{-1,1\}$ be of the form $g_i(x) = \mathrm{sgn}(x - t_i)$, where $t_i$ is chosen so that $E[g_i] = E[f_i]$ (the first expected value being taken with respect to the Gaussian measure). Let $N_1, N_2, N_3 \sim N(0,1)$ be jointly Gaussian with $E[N_i N_{i+1}] = -1/3$. From Theorem 5.1 it follows that:
P (g 1 , g 2 , g 3 ) > 8δ,
and from the Majority is Stablest theorem as stated in Theorem 6.3 and Lemma 6.8 in [13], it follows that by choosing C in the definition of τ large enough, we have:
\[
E[f_1(x^{a>b}) f_2(x^{b>c})] \ge E[g_1(N_1) g_2(N_2)] - \delta, \quad
E[f_2(x^{b>c}) f_3(x^{c>a})] \ge E[g_2(N_2) g_3(N_3)] - \delta, \quad
E[f_3(x^{c>a}) f_1(x^{a>b})] \ge E[g_3(N_3) g_1(N_1)] - \delta.
\]
From (12) and (32) it now follows that:
P (f 1 , f 2 , f 3 ) ≥ P (g 1 , g 2 , g 3 ) − 3δ/4 > 7δ,
as needed.
One Influential Variable
The last case to consider is where there is a single influential variable. This case contains in particular the case of the dictator function. Indeed, our goal in this section will be to show that if there is a single influential voter and the probability of an irrational outcome is small, then the function must be close to a dictator function or to a function where one of the alternatives is always ranked at the bottom (top).
Theorem 7.1. For every $\epsilon > 0$ there exist $\delta = \delta(\epsilon) > 0$ and $\tau > 0$ such that the following holds. Let $f_1, f_2, f_3 : \{-1,1\}^n \to \{-1,1\}$ be the pairwise preference functions of a constitution $F$ satisfying IIA, and assume that for all $1 \le i \le 3$ and all $j \ge 2$ it holds that
\[
I_j(f_i) < \alpha\tau. \tag{15}
\]
Then either
P(f 1 , f 2 , f 3 ) ≥ αδ,(16)
or there exists a function G ∈ F 3 (n) such that D(F, G) ≤ 9ǫ. Moreover, assuming the uniform distribution, one may take: δ = (ǫ/2) 20 , τ = τ (δ).
Proof. Consider the functions $f_i^b$ for $1 \le i \le 3$ and $b \in \{-1,1\}$ defined by $f_i^b(x_2,\dots,x_n) = f_i(b, x_2,\dots,x_n)$.
Note that for all $b \in \{-1,1\}$, all $1 \le i \le 3$ and all $j > 1$ it holds that $I_j(f_i^b) < \tau$, and therefore we may apply Theorem 6.2. We obtain that for every $b = (b_1, b_2, b_3) \notin \{(1,1,1), (-1,-1,-1)\}$ either:
\[
P(f_1^{b_1}, f_2^{b_2}, f_3^{b_3}) \ge \delta, \tag{17}
\]
or there exist a $u(b,i) \in \{-1,1\}$ and an $i = i(b)$ such that
\[
\min\bigl(P[f_i^{b_i} = u(b,i)],\; P[f_{i+1}^{b_{i+1}} = -u(b,i)]\bigr) \ge 1 - 3\epsilon. \tag{18}
\]
Note that if there exists a vector $b = (b_1, b_2, b_3) \notin \{(1,1,1), (-1,-1,-1)\}$ for which (17) holds then (16) follows immediately.
It thus remains to consider the case where (18) holds for all 6 vectors $b$. In this case we will define new functions $g_i$ as follows. We let $g_i(b, x_2, \dots, x_n) = u$ if $P[f_i^b = u] \ge 1 - 3\epsilon$ for $u \in \{-1,1\}$, and $g_i(b, x_2, \dots, x_n) = f_i(b, x_2, \dots, x_n)$ otherwise. We let $G$ be the social choice function defined by $g_1, g_2$ and $g_3$. From (18) it follows that for every $b = (b_1, b_2, b_3) \notin \{(1,1,1), (-1,-1,-1)\}$ there exist two functions $g_i, g_{i+1}$ and a value $u$ s.t. $g_i(b_i, x_2, \dots, x_n)$ is the constant function $u$ and $g_{i+1}(b_{i+1}, x_2, \dots, x_n)$ is the constant function $-u$. So $P(g_1, g_2, g_3) = P[(g_1, g_2, g_3) \in \{(1,1,1), (-1,-1,-1)\}] = 0$, and therefore $G \in F_3(n)$. It is further easy to see that $D(f_i, g_i) \le 3\epsilon$ for all $i$ and therefore:
\[
D(F,G) \le D(f_1, g_1) + D(f_2, g_2) + D(f_3, g_3) \le 9\epsilon.
\]
The proof follows.
Quantitative Arrow Theorem for 3 Candidates
We now prove a quantitative version of Arrow theorem for 3 alternatives.
Theorem 8.1. Consider voting on 3 alternatives where voters vote uniformly at random from $S_3^n$. Let $\epsilon > 0$. Then there exists a $\delta = \delta(\epsilon)$ such that for every $n$, if $F$ is a constitution on $n$ voters satisfying:
• IIA and
• $P(F) < \delta$,
then there exists $G \in F_3(n)$ satisfying $D(F,G) < \epsilon$. Moreover, one can take
\[
\delta = \exp\Bigl(-\frac{C}{\epsilon^{21}}\Bigr). \tag{19}
\]
Proof. Let $f_{a>b}, f_{b>c}, f_{c>a} : \{-1,1\}^n \to \{-1,1\}$ be the three pairwise preference functions. Let $\eta$ be an influence threshold to be fixed below (its choice determines the value of the constant $C$ in (19)). We will consider three cases:
• There exist two voters $i \ne j \in [n]$ and two functions $f \ne g \in \{f_{a>b}, f_{b>c}, f_{c>a}\}$ such that
\[
I_i(f) > \eta, \qquad I_j(g) > \eta. \tag{20}
\]
• For every two functions $f \ne g \in \{f_{a>b}, f_{b>c}, f_{c>a}\}$ and every $i \in [n]$, it holds that
\[
\min(I_i(f), I_i(g)) < \eta. \tag{21}
\]
• There exists a voter $j'$ such that for all $j \ne j'$,
\[
\max(I_j(f_{a>b}), I_j(f_{b>c}), I_j(f_{c>a})) < \eta. \tag{22}
\]
First note that each F satisfies at least one of the three conditions (20), (21) or (22). Thus it suffices to prove the theorem for each of the three cases.
In case (20), we have by Theorem 3.3 that
\[
P(F) > \frac{1}{36}\eta^3.
\]
We thus obtain that $P(F) > \delta$, where $\delta$ is given in (19), by taking a larger value $C'$ for $C$.
In case (21), by Theorem 6.2 it follows that either there exists a function $G$ which always puts a candidate at the top/bottom and satisfies $D(F,G) < \epsilon$ (this happens when (13) fails), or $P(F) > C\epsilon^{20} \gg \delta$.
Similarly, in the remaining case (22), we have by Theorem 7.1 that either $D(F,G) < \epsilon$ or $P(F) > C\epsilon^{20} \gg \delta$. The proof follows.
Proof Concluded
We now conclude the proof.
Theorem 9.1. Consider voting on $k$ alternatives where voters vote uniformly at random from $S_k^n$. Let $\frac{1}{100} > \epsilon > 0$. Then there exists a $\delta = \delta(\epsilon)$ such that for every $n$, if $F$ is a constitution on $n$ voters satisfying:
• IIA and
• $P(F) < \delta$,
then there exists $G \in F_k(n)$ satisfying $D(F,G) < k^2\epsilon$.
Moreover, one can take
\[
\delta = \exp\Bigl(-\frac{C}{\epsilon^{21}}\Bigr). \tag{23}
\]
Proof. The proof follows by applying Theorem 8.1 to triplets of alternatives. Assume P (F ) < δ(ǫ).
Note that if $g_1, g_2 : \{-1,1\}^n \to \{-1,1\}$ are two different functions, each of which is either a dictator or a constant function, then $D(g_1, g_2) \ge 1/2$. Therefore for all $a, b$ it holds that $D(f_{a>b}, g) < \epsilon/10$ for at most one function $g$ which is either a dictator or a constant function. In case there exists such a function we let $g_{a>b} = g$; otherwise, we let $g_{a>b} = f_{a>b}$.
Let G be the social choice function defined by the functions g a>b . Clearly:
\[
D(F,G) < \binom{k}{2}\epsilon < k^2\epsilon.
\]
The proof would follow if we could show P (G) = 0 and therefore G ∈ F k (n).
To prove that $G \in F_k(n)$ it suffices to show that for every set $A$ of three alternatives, it holds that $G^A \in F_3(n)$. Since $P(F) < \delta$ implies $P(F^A) < \delta$, Theorem 8.1 implies that there exists a function $H^A \in F_3(n)$ s.t. $D(H^A, F^A) < \epsilon$. There are two cases to consider:
• H A is a dictator. This implies that f a>b is ǫ close to a dictator for each a, b and therefore f a>b = g a>b for all pairs a, b, so G A = H A ∈ F 3 (n).
• There exists an alternative (say a) that H A always ranks at the top/bottom. In this case we have that f a>b and f c>a are at most ǫ far from the constant functions 1 and −1 (or −1 and 1). The functions g a>b and g c>a have to take the same constant values and therefore again we have that G A ∈ F 3 (n).
The proof follows.
Remark 9.2. Note that this proof is generic in the sense that it takes the quantitative Arrow's result for 3 alternatives as a black box and produces a quantitative Arrow result for any k ≥ 3 alternatives.
10 The class F k (n)
In this section we prove Theorem 1.2. As noted before, Wilson [20] gave a partial characterization of functions satisfying IIA. Using a version of Barbera's lemma and the fact that we consider only strict orderings, we are able to give a complete characterization of the class $F_k(n)$. For the discussion below it will be useful to say that the constitution $F$ is degenerate if there exists an alternative $a$ such that for all profiles $F$ ranks $a$ at the top (bottom). The constitution $F$ is non-degenerate (ND) if it is not degenerate.
Different Pivots for Different Choices imply Non-Transitivity
We begin by considering the case of 3 candidates named $a, b, c$ and $n$ voters named $1, \dots, n$. We first state Barbera's lemma in this case.
Theorem 10.1. Consider a constitution $F$ on three alternatives satisfying IIA and given by $f_{a>b}, f_{b>c}, f_{c>a}$. If there exist two different voters $i \ne j$ and two different functions among $f_{a>b}, f_{b>c}, f_{c>a}$ such that $i$ is pivotal for one of them and $j$ is pivotal for the other, then $F$ is not transitive.
Proof. Without loss of generality assume that voter 1 is pivotal for $f_{a>b}$ and voter 2 is pivotal for $f_{b>c}$. Therefore there exist $x_2, \dots, x_n$ satisfying
f a>b (+1, x 2 , . . . , x n ) = f a>b (−1, x 2 , . . . , x n )(24)
and y 1 , y 3 , . . . , y n satisfying f b>c (y 1 , +1, y 3 , . . . , y n ) = f b>c (y 1 , −1, y 3 , . . . , y n ).
Let z 1 = −y 1 and z i = −x i for i ≥ 2. By (24) and (25) we may choose x 1 and y 2 so that
$f_{a>b}(x) = f_{b>c}(y) = f_{c>a}(z)$,
where x = (x 1 , . . . , x n ), y = (y 1 , . . . , y n ) and z = (z 1 , . . . , z n ). Note further, that by construction for all i it holds that
(x i , y i , z i ) / ∈ {(1, 1, 1), (−1, −1, −1)},
and therefore there exists a profile σ such that
x = x(σ), y = y(σ), z = z(σ).
The proof follows.
n voters, 3 Candidates
In order to prove Theorem 1.2 we need the following proposition regarding constitutions of a single voter.
Proposition 10.2. Consider a constitution F of a single voter and three alternatives {a, b, c} which satisfies IIA and transitivity. Then exactly one of the following conditions hold:
• F is constant. In other words, F (σ) = τ for all σ and some fixed τ ∈ S(3).
• There exists an alternative c such that c is always ranked at the top (bottom) of the ranking and f a>b (x) = x or f a>b (x) = −x.
• F (σ) = σ for all σ • F (σ) = −σ for all σ.
Proof. Assume $F$ is not constant. Then there exist two alternatives $a, b$ such that $f_{a>b}$ is not constant and therefore $f_{a>b}(x) = x$ or $f_{a>b}(x) = -x$. Let $c$ be the remaining alternative. If $c$ is always ranked at the bottom or the top the claim follows. Otherwise one of the functions $f_{a>c}$ or $f_{b>c}$ is not constant. We claim that in this case all three functions are non-constant. Suppose by way of contradiction that $f_{c>a}$ is the constant 1. This means that $c$ is always ranked on top of $a$. However, since $f_{a>b}$ is non-constant there exists a value $x$ such that $f_{a>b}(x) = 1$, and similarly there exists a value $y$ such that $f_{b>c}(y) = 1$. Let $\sigma$ be a ranking whose $a > b$ preference is given by $x$ and whose $b > c$ preference is given by $y$. Then $F(\sigma)$ satisfies that $a$ is preferred to $b$ and $b$ is preferred to $c$. Thus by transitivity it follows that $a$ is preferred to $c$, a contradiction. The same argument applies if $f_{c>a}$ is the constant $-1$ or if $f_{b>c}$ is a constant function.
We have thus established that all three functions f a>b , f b>c and f c>a are of the form f (x) = x of f (x) = −x. To conclude we want to show that all three functions are identical. Suppose otherwise. Then two of the functions have the same sign while the third has a different sign. Without loss of generality assume f a>b (x) = f b>c (x) = x and f c>a (x) = −x. Then looking at the profile a > b > c we see that σ ′ = F (σ) must satisfy a > b and b > c but also c > a a contradiction. A similar proof applies when f a>b (x) = f b>c (x) = −x and f c>a (x) = x. Theorem 10.3. Any constitution on three alternatives which satisfies Transitivity, IIA and ND is a dictator.
Proof. There are two cases to consider. The first case is where two of the functions f a>b , f b>c and f c>a are constant. Without loss of generality assume that f a>b and f b>c are constant. Note that if f a>b is the constant 1 and f b>c is the constant −1 then b is ranked at the bottom for all social outcomes in contradiction to the ND condition. A similar contradiction is derived if f a>b is the constant −1 and f b>c is the constant 1. We thus conclude that f a>b = f b>c . However by transitivity this implies that f c>a is also a constant function and f c>a = −f a>b .
The second case to consider is where at least two of the functions f a>b , f b>c and f c>a are not constant. Assume without loss of generality that f a>b , f b>c are non-constant. Therefore, each has at least one pivotal voter. From Theorem 10.1 it follows that there exists a single voter i such that each of the functions is either constant, or has a single pivotal voter i. We thus conclude that F is of the form F (σ) = G(σ(i)) for some function G. Applying Proposition 10.2 shows that either G(σ) = σ or G(σ) = −σ and concludes the proof.
The Characterization Theorem
We now prove Theorem 1.2. Given a set of alternatives A ′ ⊂ A and an alternative b / ∈ A, we write b ∼ A ′ if there exist two alternatives a, a ′ ∈ A s and two profiles σ and σ ′ s.t. F (σ) ranks b above a and F (σ ′ ) ranks a ′ above b. Note that if it does not hold that b ∼ A ′ then either {b} > F A ′ or A ′ > F {b}.
We will use the following lemmas.
Lemma 10.4. Let F be a transitive constitution satisfying IIA and A 1 , . . . , A r , {b} disjoint sets of alternatives satisfying A 1 > F A 2 > F . . . > F A r . Then either • There exists an 1 ≤ s ≤ r + 1 such
A 1 > F . . . > F A s−1 > {b} > F A s > F . . . > F A r ,(26)
or • There exist an 1 ≤ s ≤ r such that b ∼ A r and
A 1 > F . . . > F A s ∪ {b} > F A s+1 > F . . . > F A r .(27)
Proof. Consider first the case where for all s it does not hold that b ∼ A s . In this case for all
s either b > F A s or A s > F b. Since b > F A s implies b > F A s+1 > F . . . and A s ′ > F b implies . . . > F A s ′ −1 > F A s ′ > F b
for all s, s ′ by transitivity, equation (26) follows.
Next assume b ∼ A s . We argue that in this case
. . . > F A s−1 > F {b} > F A s+1 > F . . . ,
which implies (27).
Suppose by contradiction that b > F A s+1 does not hold. Then there exists an element a ∈ A s+1 and a profile σ where F (σ) ranks a above b. From the fact that b ∼ A s it follows that there exist c ∈ A s and a profile σ ′ where F (σ ′ ) ranks b above c above a. We now look at the constitution F restricted to B = {a, b, c}. For each of a, b, c there exist at least one profile where they are not at the top/bottom of the social outcome. It therefore follows that Theorem 10.3 applies to F B and that F B is a dictator. However, the assumption that A s > F A s+1 implies that c > F a. A contradiction. The proof that F s−1 > F b is identical.
Lemma 10.5. Let F be a constitution satisfying transitivity and IIA. Let A be a set of alternatives such that F A is a dictator and b ∼ A. Then F A∪{b} is a dictator.
Proof. Assume without loss of generality that F A (σ) = σ(i). Let a ∈ A be such that there exist a profile where F ranks a above b and c ∈ A be such there exists a profile where a is ranked below c. Let B = {a, b, c}. Then F B satisfies the condition of Theorem 10.3 and is therefore dictator. Moreover since the f a>c (x) = x(i) it follows that f a>b (x) = x(i) and f b>c (x) = x(i). Let d be any other alternative in A. Let B = {a, b, d}. Then since f a>b (x) = f a>d (x) = x(i), the conditions of Theorem 10.3 hold for F B and therefore f b>d (x) = x(i). We have thus concluded that F A∪{b} (σ) = σ for all σ as needed. The proof for the case where F A (σ) = −σ is identical. Theorem 10.3 also immediately implies the following: Lemma 10.6. Let F be a constitution satisfying transitivity and IIA. Let A be a set of two alternatives such that F A is not constant and b ∼ A. Then F A∪{b} is a dictator.
We can now prove Theorem 1.2.
Proof. The proof is by induction on the number of alternatives k. The case k = 2 is trivial. Either F always ranks a above b in which case {a} > F {b} as needed or F is a non-constant function in which case the set A = {a, b} satisfies the desired conclusion.
For the induction step assume the theorem holds for k alternatives and let F be a constitution on k + 1 alternatives which satisfies IIA and Transitivity. Let B be a subset of k of the alternatives and b = A \ B.
By by the induction hypothesis applied to F B , we may write B as a disjoint union of A 1 , . . . , A r such that A 1 > F A 2 > . . . > F A r and such that if A s is of size 3 or more then F As is a dictator and if F As is of size two then F As is non constant. We now apply Lemma 10.4. If (26) holds then the proof follows. If (27) holds then the proof would follow once we show that F C is of the desired form where C = A s ∪ {b}. If A s is of size 1 then from the definition of ∼ it follows that F As∪{b} is non-constant as needed. If A s is of size 2 then Lemma (10.6) implies that F As∪{b} is a dictator as needed and for the case of A s of size 3 or more this follows from Lemma (10.5). The proof follows.
Symmetric Distributions
In this section we provide some details on how to prove the results stated for general symmetric distributions. Most of the generalizations are straightforward. The main exception is Arrow theorem for low influences functions and the corresponding Gaussian result. These results require extension of the Invariance machinery and are developed in subsections 11.4 and 11.5.
The Correlation Between x a>b and x b>c
The same proof of Lemma 2.1 gives the following:
(T f )(x a>b , x b>c ) = E[f |x a>b , x b>c ]. Then |T f | 2 ≤ √ 1 − 4α.
Proof. The proof is identical to the previous proof.
Two Influential Voters
We briefly note that repeating the proofs of Lemma 3.2 and Theorem 3.3 we obtain the same results with
• In Lemma 3.2 we obtain the lower bound
$P[B] \ge \epsilon^{1/(2\alpha)}$.
• In Theorem 3.3 we obtain the lower bound $P(F) > \beta^2 \epsilon^{1/(2\alpha)}$, where $\beta = \alpha k!/6$.
Almost Transitive Functions
We note that the same proof of Theorem 4.2 gives the following result for symmetric distributions.
Theorem 11.3. Consider voting on 3 alternatives where each voter follows a symmetric voting distribution with minimal probability α. Let
\[
\frac{\alpha^2}{9} > \epsilon > 0. \tag{28}
\]
For every n, if F is a constitution on n voters satisfying:
• IIA and
• $P(F) < \alpha^2 \epsilon^3 n^{-1/(2\alpha)}$, (29)
then there exists G ∈ F 3 (n) satisfying D(F, G) ≤ 10ǫ.
Theorem 11.3 implies in turn the second assertion of Theorem 1.8.
The Gaussian Arrow Theorem
We start by proving a version of Theorem 5.1 for symmetric distributions.
Since averaging maintains the expected value and covariances, we define:
\[
E[N_1^2] = E[N_2^2] = E[N_3^2] = 1, \quad E[N_1N_2] = E[x^{a>b}(1)x^{b>c}(1)] =: \rho_{1,2}, \quad E[N_2N_3] = E[x^{b>c}(1)x^{c>a}(1)] =: \rho_{2,3}, \quad E[N_3N_1] = E[x^{c>a}(1)x^{a>b}(1)] =: \rho_{3,1}. \tag{30}
\]
We let N (1), . . . , N (n) be independent copies of N . We write N = (N (1), . . . , N (n)) and for 1 ≤ i ≤ 3 we write N i = (N (1) i , . . . , N (n) i ). The variant of Theorem 5.1 we prove is the following.
For every $\epsilon > 0$ there exists a $\delta = \delta(\epsilon) > 0$ such that the following holds. Let $f_1, f_2, f_3 : \mathbb{R}^n \to \{-1,1\}$ and assume that for all $1 \le i \le 3$ and all $u \in \{-1,1\}$ it holds that
\[
P[f_i(N_i) = u,\; f_{i+1}(N_{i+1}) = -u] \le 1 - \epsilon. \tag{31}
\]
Then with the setup given in (10) it holds that:
P[f 1 (N 1 ) = f 2 (N 2 ) = f 3 (N 3 )] ≥ δ.
Moreover, one may take $\delta = (\epsilon/2)^{1/(2\alpha^2)}$.
Proof. The proof is similar to the proof of Theorem 5.1. Note that if $P[f_2 = 1] > \epsilon/2$ and $P[f_3 = 1] > \epsilon/2$ then:
\[
P[f_2 = 1, f_3 = 1] > (\epsilon/2)^{1/(2\alpha)}.
\]
Again we define $M_1, M_2$ so that $M_2$ is uncorrelated with $N_1$. Using Lemma 11.2 we obtain that the correlation between $M_1(i)$ and $N_1(i)$ is at most $\sqrt{1-4\alpha}$. Using $1 - \sqrt{1-4\alpha} \ge 2\alpha$ one then obtains
\[
P[f_1 = 1, f_2 = 1, f_3 = 1] > (\epsilon/2)^{1/(2\alpha^2)}.
\]
Kalai's Formula and [−1, 1] Valued Votes.
In this subsection we will give a more detailed description of the functions that achieve minimum probability of a paradox in the Gaussian case. We will first generalize Lemma 6.1.
The same proof of Lemma 6.1 gives the following:
Lemma 11.5. Consider the setup of Theorem 5.1. Then:
P[f 1 (N 1 ) = f 2 (N 2 ) = f 3 (N 3 )] = 1 4 (1 + E[f 1 (N 1 )f 2 (N 2 )] + E[f 2 (N 2 )f 3 (N 3 )] + E[f 3 (N 3 )f 1 (N 1 )])(32)
Given the basic voting setup, we define $P(f_1, f_2, f_3)$ for three functions $f_1, f_2, f_3 : \{-1,1\}^n \to [-1,1]$ by letting
\[
P(f_1, f_2, f_3) = E\bigl[s\bigl(f_1(x^{a>b}), f_2(x^{b>c}), f_3(x^{c>a})\bigr)\bigr] = \frac{1}{4}\Bigl(1 + E[f_1(x^{a>b})f_2(x^{b>c})] + E[f_2(x^{b>c})f_3(x^{c>a})] + E[f_3(x^{c>a})f_1(x^{a>b})]\Bigr).
\]
Similarly, given three functions $f_1, f_2, f_3 : \mathbb{R}^n \to [-1,1]$ we define
\[
P(f_1, f_2, f_3) = E\bigl[s\bigl(f_1(N_1), f_2(N_2), f_3(N_3)\bigr)\bigr] = \frac{1}{4}\Bigl(1 + E[f_1(N_1)f_2(N_2)] + E[f_2(N_2)f_3(N_3)] + E[f_3(N_3)f_1(N_1)]\Bigr),
\]
where N i are define in (10).
We will use the following lemma.
\[
\min\bigl(u E[f_i],\; -u E[f_{i+1}]\bigr) \le 1 - 2\epsilon. \tag{33}
\]
Then with the setup given in (10) it holds that:
\[
P(f_1, f_2, f_3) \ge \delta.
\]
Moreover, in the uniform case one may take $\delta = (\epsilon/2)^{20}$. In the general case one may take $\delta = (\epsilon/2)^{2+1/(2\alpha^2)}$.
Proof. We will prove the claim for the uniform case. Again we consider two cases. By Lemma 11.6 on the event B, the value of s(f 1 , f 2 , f 3 ), is at least ǫ 2 /2, and on the complement of B, it is non-negative. We thus conclude
P (f 1 , f 2 , f 3 ) ≥ (ǫ/2) 20 .
In the second case, all functions satisfy P[f i ≤ 1 − ǫ] ≥ ǫ/2. Then, letting A i denote the event where f i ≤ 1 − ǫ and repeating the argument above, we obtain the same bound. The proof for non-uniform distributions is identical.
Remark 11.8. We briefly note that Theorem 11.7 and the other theorems proven in this section hold in further generality, where voters vote independently according to any (perhaps not symmetric) distribution over the rankings where the probability of any ranking is at least α. The proof of these extensions is identical to the proofs provided here.
Arrow Theorem for Low Influence Functions
The proof for symmetric distributions is way more involved than the proof for uniform distributions. The main difference between the two cases is while in the uniform case in the expansion (12) each correlation factor is asymptotically minimized by symmetric monotone threshold functions and therefore the overall expression is also minimized by symmetric monotone threshold functions.
In the general symmetric case, it is impossible to apply invariance to pairs of functions as one of the correlations parameters in (12) may be positive in which case it is maximized (rather than minimized) by monotone symmetric threshold functions. To deal with this case we therefore derive an appropriate extension of invariance which may be of independent interest. Roughly speaking the extension establishes a map Ψ mapping B = {f :
{−1, 1} n → [−1, 1]} to G = {f : R n → [−1, 1]} such that for functions with low influences E[f g] is close to E[Ψ(f )Ψ(g)]
where the first expected value is with respect to two correlated input from the uniform measure on {−1, 1} and the second is with respect to two correlated Gaussian in R n .
Symmetric Distributions
For the statement below we recall the notion of low-degree influence. For a function $f : \{-1,1\}^n \to \mathbb{R}$, where $\{-1,1\}^n$ is equipped with the uniform measure, the degree-$d$ influence of the $i$'th variable of $f$ is defined by:
\[
I_i^{\le d}(f) = \sum_{S : |S| \le d,\; i \in S} \hat{f}^2(S).
\]
Obviously $I_i^{\le d}(f) \le I_i(f)$. The usefulness of $I_i^{\le d}$, to be used in the next subsection, comes from the fact that
\[
\sum_i I_i^{\le d}(f) \le d \cdot \mathrm{Var}[f]. \tag{35}
\]
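Inequality (35) follows directly from the Fourier expression of the influences; the standard computation is included here for convenience:
\[
\sum_i I_i^{\le d}(f) = \sum_i \sum_{S : |S| \le d,\; i \in S} \hat{f}^2(S) = \sum_{0 < |S| \le d} |S|\, \hat{f}^2(S) \le d \sum_{S \ne \emptyset} \hat{f}^2(S) = d \cdot \mathrm{Var}[f].
\]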
Invariance Result
Our starting point will be the following extensions of results from [15] and [13].
Theorem 11.9. For all ǫ, −1 < ρ < 1 the following holds. Consider the space {−1, 1} n equipped with the uniform measure and the space R n equipped with the Gaussian measure. Then for every function f :
$\{-1,1\}^n \to [-1,1]$ there exists a function $\tilde{f} : \mathbb{R}^n \to [-1,1]$ such that the following hold. Consider $(X,Y)$ distributed in $\{-1,1\}^n \times \{-1,1\}^n$ where the pairs $(X_i, Y_i)$ are independent with $E[X_i] = E[Y_i] = 0$, $E[X_i^2] = E[Y_i^2] = 1$, $E[X_i Y_i] = \rho$. Consider $(N,M)$ jointly Gaussian and distributed in $\mathbb{R}^n \times \mathbb{R}^n$ with the pairs $(N_i, M_i)$ independent with $E[N_i] = E[M_i] = 0$, $E[N_i^2] = E[M_i^2] = 1$, $E[N_i M_i] = \rho$. Then
• For the constant functions $1$ and $-1$ it holds that $\tilde{1} = 1$ and $\widetilde{-1} = -1$.
• If $f$ and $g$ are two functions such that for all $i$ it holds that $\max\bigl(I_i^{\log(1/\tau)}(f), I_i^{\log(1/\tau)}(g)\bigr) < \tau$, then
\[
\bigl|E[f(X)g(Y)] - E[\tilde{f}(N)\tilde{g}(M)]\bigr| \le \epsilon \tag{36}
\]
if
\[
\tau \le \tau(\epsilon, |\rho|) := \epsilon^{\,C\log(1/\epsilon)/((1-|\rho|)\epsilon)}, \tag{37}
\]
for some absolute constant C.
Proof. We briefly explain how this follows from [15] and [13]. Given $f$, we take a small $\eta$ and look at $T_{1-\eta} f$, where $T$ is the Bonami-Beckner operator. For small $\eta$ and every two functions $f$ and $g$ it holds that $E[f(X)g(Y)]$ is $\epsilon/4$-close to $E[T_{1-\eta}f(X)\, T_{1-\eta}g(Y)]$. By Lemma 6.1 in [13] this can be done with
\[
\eta = C\,\frac{(1-|\rho|)\epsilon}{\log(1/\epsilon)}.
\]
T 1−η f is given by a multi-linear polynomial which we can also write in terms of Gaussian random variables. Let's call the Gaussian polynomial f ′ . The polynomial f ′ has the same expected value as f but in general it takes values in all of R. Similarly for different f and g we have
$E[f'g'] = E[T_{1-\eta}f(X)\, T_{1-\eta}g(Y)]$. We let $\tilde{f}(x) = f'(x)$ if $|f'(x)| \le 1$, and $\tilde{f}(x) = 1$ ($\tilde{f}(x) = -1$) if $f'(x) \ge 1$ ($f'(x) \le -1$). It is easy to see that $\tilde{1} = 1$ and $\widetilde{-1} = -1$.
By Theorem 3.20 in [15] it follows that $E[(f' - \tilde{f})^2] < \epsilon^2/16$ if all influences of $f$ are bounded by $\tau$ given in (37). An immediate application of Cauchy-Schwarz implies that $E[f'(N)g'(M)]$ is at most $\epsilon/2$ far from $E[\tilde{f}(N)\tilde{g}(M)]$. We thus obtain Theorem 11.9.
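For reference, the Bonami-Beckner (noise) operator used above acts on the Fourier expansion by (standard normalization assumed):
\[
T_\rho f = \sum_{S \subseteq [n]} \rho^{|S|} \hat{f}(S)\, \chi_S, \qquad \text{so that} \qquad
E\bigl[(T_{1-\eta}f - f)^2\bigr] = \sum_{S} \bigl(1 - (1-\eta)^{|S|}\bigr)^2 \hat{f}^2(S),
\]
which is small when $\eta$ is small compared to the degrees that carry most of the Fourier weight of $f$.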
Arrow Theorem for Low Influence Functions
We can prove a quantitative Arrow theorem for low influence functions. For the statement from this point on, it would be useful to denote
\[
\tau_\alpha(\delta) := \delta^{\,C\log(1/\delta)/(\alpha\delta)}, \qquad \tau(\delta) := \tau_{1/3}(\delta),
\]
for some absolute constant C.
Theorem 11.10. For every ǫ > 0 there exists a δ(ǫ) > 0 and a τ (δ) > 0 such that the following hold. Let
$f_1, f_2, f_3 : \{-1,1\}^n \to [-1,1]$. Assume that for all $1 \le i \le 3$ and all $u \in \{-1,1\}$ it holds that
\[
\min\bigl(u E[f_i],\; -u E[f_{i+1}]\bigr) \le 1 - 3\epsilon \tag{38}
\]
and for all 1 ≤ i ≤ 3 and 1 ≤ j ≤ n it holds that
I log(1/τ ) j (f i ) < τ,
Then it holds that
P(f 1 , f 2 , f 3 ) > δ.
Moreover, assuming the uniform distribution, one may take:
\[
\delta = \tfrac{1}{4}(\epsilon/2)^{20}, \qquad \tau = \tau(\delta, \tfrac{1}{3}).
\]
And assuming a general symmetric voting distribution with a minimal probability α for every permutation, one can take:
\[
\delta = \tfrac{1}{4}(\epsilon/2)^{2+1/(2\alpha^2)}, \qquad \tau = \tau(\delta, 1-4\alpha).
\]
Proof. Let g 1 =f 1 , g 2 =f 2 and g 3 =f 3 , the functions whose existence is guaranteed by Theorem 11.9. We will apply the theorem for the pairs of functions (f 1 , f 2 ), (f 2 , f 3 ) and (f 3 , f 1 ) and the correlations given in (10). Taking ρ = 0 and noting that T 0 1 = 1 and1 = 1, we conclude that for all i it holds that |E[f i ] − E[g i ]| < ǫ. It therefore follows from (38) that the functions g i satisfy (33) and therefore P (g 1 , g 2 , g 3 ) ≥ 4δ, where the correlations between the g i 's are given by (10).
Recall that:
\[
P(g_1, g_2, g_3) = \frac{1}{4}\Bigl(1 + E[g_1(N_1)g_2(N_2)] + E[g_2(N_2)g_3(N_3)] + E[g_3(N_3)g_1(N_1)]\Bigr).
\]
Applying theorem 11.9 we see that
|E[g 1 (N 1 )g 2 (N 2 )] − E[f 1 (x a>b )f 2 (x b>c )]| < δ,
and similarly for the other expectations. We therefore conclude that
P (f 1 , f 2 , f 3 ) > P (g 1 , g 2 , g 3 ) − 3δ/4 > δ,
as needed.
Arrow Theorem For Low Cross Influences Functions
Our final result in the low influence realm deals with the situation that for each coordinate, at most one function has large influence while the two others have small influences. Such a case occurs for example when one function is a function of a small number of voters while the two others are majority type functions. The main result of the current subsection shows that indeed is such situation there is a good probability of a paradox. The proof is based on extending an averaging argument from [13].
Theorem 11.11. For every $\epsilon > 0$ there exist a $\delta(\epsilon) > 0$ and a $\tau(\delta) > 0$ such that the following holds. Let $f_1, f_2, f_3 : \{-1,1\}^n \to [-1,1]$. Assume that for all $1 \le i \le 3$ and all $u \in \{-1,1\}$ it holds that
\[
\min\bigl(u E[f_i],\; -u E[f_{i+1}]\bigr) \le 1 - 3\epsilon \tag{39}
\]
and for all $j$ it holds that
\[
\bigl|\{1 \le i \le 3 : I_j^{\log^2(1/\tau)}(f_i) > \tau\}\bigr| \le 1. \tag{40}
\]
Then it holds that
P(f 1 , f 2 , f 3 ) ≥ δ.
Moreover, assuming the uniform distribution, one may take:
δ = 1 8 (ǫ/2) 20 , τ = τ (δ, 1/3).
Assuming a general symmetric voting distribution with a minimal distribution α for every permutation, one can take:
δ = 1 8 (ǫ/2) 2+1/(2α 2 ) , τ = τ (δ, 1 − 4α).
The proof will use the following lemma which is a special case of a lemma from [13].
g i (x) = E[f i (Y )|Y [n]\S = x [n]\S ].
Then the functions g i do not depend on the coordinates in S,
are [0, 1] valued, satisfy E[g i ] = E[f i ] and |E[f 1 (X)f 2 (Y )]] − E[g 1 (X)g 2 (Y )]| ≤ |S| √ ǫ,
where (X i , Y i ) are independent distributed according to µ.
Proof. Recall that averaging over a subset of the variables preserves expected value. It also maintains the property of taking values in [0, 1] and decreases influences. Thus it suffices to prove the claim for the case where |S| = 1. The general case then follows by induction.
So assume without loss of generality that S = {1} consists of the first coordinate only and that I 2 (f 2 ) ≤ ǫ, so that E[(f 2 − g 2 ) 2 ] ≤ ǫ. Then by Cauchy-Schwartz we have E[|f 2 − g 2 |] ≤ √ ǫ and using the fact that the functions are bounded in [0, 1] we obtain
|E[f 1 f 2 − f 1 g 2 ]| ≤ √ ǫ.(41)
Let us write E 1 for the expected value with respect to the first variable. Recalling that the g i do not depend on the first variable we obtain that
E 1 [f 1 g 2 ] = g 2 E 1 [f 1 ] = g 1 g 2 .
This implies that
E[f 1 g 2 ] = E[g 1 g 2 ],(42)
and the proof follows from (41) and (42).
We can now prove Theorem 11.11.
Proof. The proof will use the fact that the sum of low-degree influences (35) together with the fact that averaging makes (standard influences) smaller. In order to work with these two notions of influences simultaneously we begin by replacing each functions f i with the function T 1−η f i where as in Theorem 11.9 we let η = C 1 αδ log(1/δ) ,
where C 1 is large enough so that
Let
R = log(1/τ ), R ′ = log 2 (1/τ ). and choose C 2 and C 3 large enough so that
τ ′ = δ C 3 log(1/δ) αδ , satisfies 3R τ τ ′ + (1 − η) 2R ′ ≤ δ 16
.
Assume that f i satisfy (40), i.e., for all j:
|{1 ≤ i ≤ 3 : I R ′ j (f i ) > τ ′ }| ≤ 1.
We will show that the statement of the theorem holds for f i . For this let
S i = {i : I ≤R i (f i ) > τ },
and S = S 1 ∪ S 2 ∪ S 3 . Since R ′ ≥ R and τ ′ ≤ τ , the sets S i are disjoint.
Moreover,each of the sets S i is of size at most R τ . Also, if j ∈ S and I ≤R j (f i ) > τ then for i = i ′ it holds that I ≤R j (f i ′ ) < τ ′ and therefore I j (f i ′ ) ≤ τ ′ + (1 − γ) 2R ′ . In other words, for all j ∈ S we have that at least two of the functions f i satisfy I j (f i ) ≤ τ .
We now apply Lemma 11.12 with
f i (x) = E[f i (X)|X [n]\S = x [n]\S ].
We obtain that for any pair of functions f i , f i+1 it holds that
|E[f i f i+1 ] − E[f ifi+1 ]| ≤ 2R τ τ ′ + (1 − η) 2R ′ ≤ δ 16 .(43)
Note that the functionsf i satisfy that max i,j I j (f i )) ≤ τ . This implies that the results of Theorem 11.10 hold forf i . This together with (43) implies the desired result.
Remark 11.13. We note that Theorem 11.11 and the other theorems proven in this section hold in further generality, where voters vote independently according to any (perhaps not symmetric) distribution over the rankings where the probability of any ranking is at least α with bounds on τ that are somewhat worse than those obtained here. The proof of these extensions is similar to the proofs presented here (recall Remark 11.8). The main difference is since now the distributions of x a>b etc. are biased, the applications of invariance principle results in somewhat worse results.
One Influential Variable
We note that Theorem 7.1 holds as stated for symmetric distributions, with $\alpha$ being the minimum probability over all permutations and $\delta = (\epsilon/2)^{2+1/(2\alpha^2)}$, $\tau = \tau(\delta, 1-4\alpha)$.
The only difference in the proof is that instead of Theorem 6.2. we use Theorem 11.10.
Quantitative Arrow Theorem for 3 Candidates
We briefly state the generalization of Theorem 8.1 to symmetric distributions.
Theorem 11.14. The statement of the Theorem 8.1 holds true for symmetric distributions on 3 alternatives with minimum probability for each ranking α with
\[
\delta = \exp\Bigl(-\frac{C_1}{\alpha\,\epsilon^{C_2(\alpha)}}\Bigr), \tag{44}
\]
where C 2 (α) = 3 + 1/(2α 2 ).
Proof. The proof is identical. In case (20), we now have P (F ) > α 2 η 3 .
We thus obtain that P (F ) > δ where δ is given (44) by taking larger values C ′ 1 and C ′ 2 for C 1 and C 2 .
In case (21), by Theorem 11.11 it follows that either there exist a function G which always put a candidate at top / bottom and D(F, G) < ǫ (if (39) holds), or P (F ) > Cǫ C 2 (α) >> δ.
Similarly in the remaining case (22), we have by version of Theorem 7.1 for symmetric distributions that either D(F, G) < ǫ or P (F ) > Cǫ C 2 (α) >> δ. The proof follows.
Proof Concluded
The general version of Theorem 9.1 reads:
Theorem 11.15. Theorem 9.1 holds for symmetric distributions on k alternatives with minimum probability for each ranking β and α = k!β/6 and
\[
\delta = \exp\Bigl(-\frac{C_1}{\alpha\,\epsilon^{C_2(\alpha)}}\Bigr), \tag{45}
\]
where C 2 (α) = 3 + 1/(2α 2 ).
Proof. The proof follows by applying Theorem 11.14 to triplets of alternatives as before. Note that when restricting to 3 alternatives, the minimum probability assigned to each order is at least α.
Open Problems
As a conclusion we want to mention some natural open problems.
• We believe that the results obtained here hold also for non-symmetric distributions of rankings as long as the probability of every ranking is bounded below by some constant α. Recalling remarks 11.8 and 11.13, we see that the main challenge in extending the results to this setup is extending the proof for the case where two different functions f and g have two different influential voters. The problem in extending this result is the lack of inverse-hyper-contraction results for biased measures on {−1, 1} n . Deriving such estimates is of independent interest.
• A second natural problem is to attempt and obtain other quantitative results in social choice theory using Fourier methods. A natural candidate is the Gibbard-Satterthwaite Theorem [8,19]. A first quantitative estimates for 3 alternatives was obtained in [7]. As mentioned before, the results of [7] are limited in the sense that they require neutrality and apply only to 3 candidates. It is interesting to explore if the full quantitative version of Arrow theorem proven here will allow to obtain stronger quantitative version of the Gibbard-Satterthwaite Theorem.
| 14,686 |
0902.4157
|
2952291254
|
We present a novel geographical routing scheme for spontaneous wireless mesh networks. Greedy geographical routing has many advantages, but suffers from packet losses occurring at the border of voids. In this paper, we propose a flexible greedy routing scheme that can be adapted to any variant of geographical routing and works for any connectivity graph, not necessarily Unit Disk Graphs. The idea is to reactively detect voids, backtrack packets, and propagate information on blocked sectors to reduce packet loss. We also propose an extrapolating algorithm to reduce the latency of void discovery and to limit route stretch. Performance evaluation via simulation shows that our modified greedy routing avoids most of packet losses.
|
Geographic information can largely reduce the complexity of routing in spontaneous mesh networks. The simplest and most widely used protocol is greedy geographic routing @cite_4 @cite_1 @cite_6 @cite_12 : when a node receives a packet, it uses the following forwarding rule: forward the packet to the neighbor that offers the best improvement, where improvement is usually defined with respect to the distance towards the destination. Since the improvement is not negative, there are no routing loops. Moreover, routing is scalable, because all routing decisions are local.
|
{
"abstract": [
"Greedy geographic routing is attractive for large multi-hop wireless networks because of its simple and distributed operation. However, it may easily result in dead ends or hotspots when routing in a network with obstacles (regions without sufficient connectivity to forward messages). In this paper, we propose a distributed routing algorithm that combines greedy geographic routing with two non-Euclidian distance metrics, chosen so as to provide load balanced routing around obstacles and hotspots. The first metric, Local Shortest Path, is used to achieve high probability of progress, while the second metric, Weighted Distance Gain, is used to select a desirable node among those that provide progress. The proposed Load Balanced Local Shortest Path (LBLSP) routing algorithm provides loop freedom, guarantees delivery when a path exists, is able to efficiently route around obstacles, and provides good load balancing.",
"We consider routing problems in ad hoc wireless networks modeled as unit graphs in which nodes are points in the plane and two nodes can communicate if the distance between them is less than some fixed unit. We describe the first distributed algorithms for routing that do not require duplication of packets or memory at the nodes and yet guarantee that a packet is delivered to its destination. These algorithms can be extended to yield algorithms for broadcasting and geocasting that do not require packet duplication. A by product of our results is a simple distributed protocol for extracting a planar subgraph of a unit graph. We also present simulation results on the performance of our algorithms.",
"Geographic forwarding in wireless sensor networks (WSN) has long suffered from the problem of bypassing \"dead ends,\" i.e., those areas in the network where no node can be found in the direction of the data collection point (the sink). Solutions have been proposed to this problem, that rely on geometric techniques leading to the planarization of the network topology graph. In this paper, a novel method alternative to planarization is proposed, termed ALBA-R, that successfully routes packets to the sink transparently to dead ends. ALBA-R combines nodal duty cycles (awake asleep schedules), channel access and geographic routing in a cross-layer fashion. Dead ends are dealt with by enhancing geographic routing with a mechanism that is distributed, localized and capable of routing packets around connectivity holes. An extensive set of simulations is provided, that demonstrates that ALBA-R is scalable, generates negligible overhead, and outperforms similar solutions with respect to all the metrics of interest investigated, especially in sparse topologies, notoriously the toughest benchmark for geographic routing protocols.",
"With the development of ad hoc networks, some researchers proposed several geometric routing protocols which depend on the planarization of the network connectivity graph to guarantee the delivery of the packet between any pair of nodes in the network. In this paper, we proposed a new online routing algorithm GLNFR (greedy and local neighbor face routing) for finding paths between the nodes of the ad hoc networks by storing a small amount of local face information at each node. The localized Delaunay triangulation was used to be the backbone of wireless network on which the GLNFR routing algorithm runs. It has the better scalability and adaptability for the change of ad hoc networks. Experiment on NS have been conducted. The results show that the delivery rate of packets and routing protocol message cost under such novel routing protocols performs better than others proposed before."
],
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_12",
"@cite_6"
],
"mid": [
"1974718755",
"2156689181",
"2165424349",
"2118832075"
]
}
|
Efficient Greedy Geographical Non-Planar Routing with Reactive Deflection
|
We consider wireless mesh networks composed of a large number of wireless routers providing connectivity to mobile nodes. They begin to emerge in some regions to provide cheap network connectivity to a community of end users. Usually they grow in a spontaneous way when users or operators add more routers to increase capacity and coverage.
We assume that mesh routers benefit from abundant resources (memory, energy, computation power, GPS devices in some cases), may only move, quit, or join occasionally, so that the topology of a typical mesh networks stays fairly stable. The organization of mesh networks needs to be autonomic, because unlike the current Internet, they cannot rely on highly skilled personnel for configuring, connecting, and running mesh routers. Spontaneous growth of such networks may result in a dense and unplanned topology with some uncovered areas.
Unlike traditional approaches, geographical routing presents interesting properties for spontaneous wireless mesh networks: it does not require any information on the global topology since a node choses the next hop among its neighbor routers based of the destination location. Consequently, the routing scheme is scalable, because it only involves local decisions. Geographical routing is simple, because it does not require routing tables so that there is no overhead of their creation and maintenance. Joining the network is also simple, because a new mesh router only needs an address based on its geographical position. Such addresses can be obtained from a dedicated device (e.g. GPS) or with methods for deriving consistent location addresses based on the information from neighboring nodes about radio signal strength [4] or connectivity [16]. The most familiar variant of geographical routing is greedy forwarding in which a node forwards a packet to the neighbor closest to the destination [2,13]. Greedy forwarding guarantees loop-free operation, but packets may be dropped at blocked nodes that have only neighbors in the backward direction. Blocked nodes appear at some places near uncovered areas (voids) or close to obstacles to radio waves in a given direction.
Our main contribution is to propose a new greedy routing that correctly deals with voids. First, we define a new mechanism to reactively detect voids and surround them, which significantly reduces packet loss. Moreover, the information of detected voids propagates backwards so that subsequent packets to the same direction benefit from this reactive detection. Second, we propose a mechanism in which voids deviate packets and shorten the length of a route compared to classical approaches. Our routing scheme works in any network topology independently of whether it corresponds to a planar graph or not.
We start with the description of the related work on geographical routing in Section 2. Section 3 presents the details of the proposed new greedy routing protocol. Then, we evaluate its performance via simulation in Section 4 and conclude.
Reactive Deflection
Geographical routing is attractive for mesh networks, but suffers from two main drawbacks: blocked nodes can drop many packets and the route length may drastically increase when a surrounding mechanism tries to deviate a packet around a void (e.g. the left-hand rule in unit disk graphs). In this paper, we assume a general connectivity graph and propose to reactively detect blocked nodes and locally advertise blocked sectors to avoid packet losses. Such a technique is efficient in any type of networks and graphs since it does not assume any particular graph property.
Detection of blocked nodes can be done in a proactive way: locally flood information to detect voids. For example, we can discover the topology of the wireless mesh to detect elementary cycles in which no other node is located inside the ring. The location of nodes helps to surround voids. However, such an approach requires a complete knowledge of the mesh topology and is computationally intensive.
In contrast to this approach, we have chosen a reactive method: a node becomes blocked with respect to a given destination when it cannot forward a packet to any neighbor closer to the destination. Hence, the part of the network not involved in forwarding this packet does not generate any control traffic, so this approach is more scalable.
For advertising blocked directions, we propose to use the notion of blocked sectors: a node N advertises that it is blocked for any destination that falls in the sector S(N, angle_min, angle_max, dist_min). To limit the overhead, a node tries to merge all its blocked sectors before advertising them. It can only merge overlapping sectors having the same minimal distances (within some tolerance ∆d). Otherwise, the merged blocked sector may include nodes that are reachable: consider for instance the topology of figure 2: if node p merges sectors C and C′, node p1 may appear in the blocked sector. Thus, it would become unreachable from p2. Clearly, we must avoid such a merging. Only sectors with the same d_min will be merged; the tolerance ∆d allows some merging of sectors with approximately equal minimal distances.
Algorithm 1 ReactiveDeflection(D)
1: next ← ∅
2: for all n ∈ Neighbors do
3: if d(n, D) < d(N, D) and !BLOCKED(n, D) and d(n, D) < d(next, D) then
4: next ← n
5: end if
6: end for
7: if next = ∅ then
8: Blocked(N, D) ← true; next ← previous hop
9: end if
10: return next
More formally, node N executes Algorithm 1. Procedure ReactiveDeflection() finds the next hop for forwarding a packet to destination D: the next hop must be closer to the destination and must be unblocked for D. If it does not return any node, it means that node N becomes blocked for destination D (variable BLOCKED(N,D) becomes true). Thus, node N updates its blocked sectors and sends the packet backwards to the previous hop with its list of blocked sectors piggybacked onto the packet. This scheme is loop-free: when a node sends a packet backwards, the receiver will update its blocked sectors and it cannot choose the same next hop for subsequent packets, because Algorithm 1 does not forward packets to blocked nodes. In networks with a non unit disk graph topology, when a node becomes blocked and there is no other neighbor closer to the destination, the node needs to discover a node in a larger vicinity able to forward the packet to the destination. Usually, this consists of flooding a request in the k-neighborhood of the node, k being a parameter that limits the scope of flooding. In this case, the length of the route may increase, which is illustrated in Figure 3: border nodes need to forward the packet to reach a virtual next hop [2,9]. This increases both the load of the border nodes and the route length. We propose to limit the effect of such a behavior.
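To make the forwarding step concrete, here is a minimal Python sketch of the rule described above. The Sector and Node structures, the sector width recorded on failure, and the function names are illustrative assumptions, not part of the protocol specification.

from dataclasses import dataclass, field
import math

@dataclass
class Sector:
    angle_min: float
    angle_max: float
    dist_min: float

@dataclass
class Node:
    pos: tuple
    neighbors: list = field(default_factory=list)
    blocked_sectors: list = field(default_factory=list)

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def bearing(p, q):
    return math.atan2(q[1] - p[1], q[0] - p[0])

def is_blocked_for(node, dest):
    # dest is considered blocked for `node` if it falls in one of the sectors node advertised
    a, d = bearing(node.pos, dest), distance(node.pos, dest)
    return any(s.angle_min <= a <= s.angle_max and d >= s.dist_min
               for s in node.blocked_sectors)

def reactive_deflection(node, dest, previous_hop):
    best = None
    for n in node.neighbors:
        # a candidate must be strictly closer to the destination and not blocked for it
        if distance(n.pos, dest) < distance(node.pos, dest) and not is_blocked_for(n, dest):
            if best is None or distance(n.pos, dest) < distance(best.pos, dest):
                best = n
    if best is None:
        # no admissible neighbor: remember the blocked direction and backtrack
        node.blocked_sectors.append(
            Sector(bearing(node.pos, dest) - 0.1,   # sector width here is an arbitrary illustrative choice
                   bearing(node.pos, dest) + 0.1,
                   distance(node.pos, dest)))
        return previous_hop   # the real packet would carry the updated sector list
    return best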
Note that when we reduce packet loss with the previously described algorithm, we also reduce in the long term the route length. Indeed, the nodes around the void discover that they have blocked sectors. When they propagate the information about blocked sectors, nodes with all blocked neighbors also become blocked for this destination. Finally, each node discovers a blocked area and forwards packets outside this area. However, we need several useless packet transmissions and backtracking before the network converges, and blocked sectors are correctly constructed. We propose a mechanism to accelerate the convergence of this propagation process by extrapolating the location of a blocked area.
We propose to detect the border of a void based only on local neighborhood knowledge. We will show that even if a node has only local knowledge, i.e. about nodes at a limited distance, voids can be efficiently surrounded. When a node must transmit a packet backwards, it locally floods a hello packet containing the list of its neighbors and blocked sectors within a k-hop scope.
To detect the border of a void, node N first searches for the blocked k-neighbor BN closest to the direction of the destination D, i.e. the blocked node that minimizes the angular deviation from the direction (N,D) among all blocked nodes known in the k-neighborhood. Then, N constructs the maximum connected set of blocked nodes that contains BN: it adds BN to this set, and recursively adds all its blocked neighbors. Finally, N computes the forbidden sector that spans the maximum connected set; in this way it extrapolates the blocked area. Figure 4 illustrates void detection with knowledge of the 3-neighborhood topology. First, the source node checks whether it knows a node with a blocked direction and takes the one closest to the direction of the destination. In the example the blocked node is B. Then, the source constructs the connected set of blocked nodes that includes node B: it obtains the set {A, B, C, D}. Obviously, node F is not present in the set since it is not connected to A via other blocked nodes. In the same way, border node H is not in the set {A, B, C, D}, because it is 2 hops away: it borders another void. Finally, we obtain the forbidden sector for the destination. We can note that node E is not blocked since it can choose node F when A is blocked: E will never be blocked for this direction.
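A minimal sketch of this extrapolation step, assuming each node knows, for its k-neighborhood, which nodes are blocked for the destination and who their neighbors are (the graph representation and helper names below are illustrative):

import math

def blocked_connected_set(seed, blocked, adjacency):
    """Maximum connected set of blocked nodes containing `seed`.
    blocked: set of node ids currently blocked for the destination.
    adjacency: dict node id -> iterable of neighbor ids (k-hop local view)."""
    component, stack = set(), [seed]
    while stack:
        u = stack.pop()
        if u in component:
            continue
        component.add(u)
        stack.extend(v for v in adjacency.get(u, ()) if v in blocked and v not in component)
    return component

def forbidden_sector(source_pos, positions, component):
    """Angular span (and minimal distance) covered by the blocked component, seen from source."""
    angles, dists = [], []
    for u in component:
        dx = positions[u][0] - source_pos[0]
        dy = positions[u][1] - source_pos[1]
        angles.append(math.atan2(dy, dx))
        dists.append(math.hypot(dx, dy))
    # note: a full implementation must handle the angle wrap-around at +/- pi
    return min(angles), max(angles), min(dists)

The source would pick the blocked node closest to the direction of D as the seed, compute the component, and then route subsequent packets outside the returned sector.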
Algorithm 2 presents the modified protocol. Function ISINFORBIDDENSECTOR computes the forbidden sector and returns TRUE, if the node is located inside this sector. Function CLOSERTOSECTORLIMITS(P,Q) returns TRUE, if P is closer than Q to the forbidden sector limits.
In other words, if some next hops exist and do not lie in the computed forbidden sector, we choose the best one. Otherwise, if all possible next hops are in this forbidden sector, we choose the node closest to the limits of the forbidden sector. With this modified routing scheme, we forward packets outside the forbidden sector, because a void appears as something repellent to packets by creating forbidden sectors in a distributed manner while keeping routing loop-free.
Algorithm 2 ModifiedReactiveDeflection(D, ForbiddenSector)
1: next ← ∅
2: for all n ∈ Neighbors do
3: if d(n, D) < d(N, D) and !BLOCKED(n, D) then
4: if !ISINFORBIDDENSECTOR(n) and { ISINFORBIDDENSECTOR(next) or d(n, D) < d(next, D) } then
5: next ← n
6: else if ISINFORBIDDENSECTOR(next) and ISINFORBIDDENSECTOR(n) and CLOSERTOSECTORLIMITS(n, next) then
7: next ← n
8: end if
9: end if
10: end for
11: if next = ∅ then
12: Blocked(N, D) ← true; next ← previous hop
13: end if
14: return next
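The two predicates used in Algorithm 2 can be sketched as follows in Python; the sector representation follows the S(N, angle_min, angle_max, dist_min) notation used earlier, and wrap-around handling is omitted for brevity (all names are illustrative assumptions):

import math

def bearing(src, dst):
    return math.atan2(dst[1] - src[1], dst[0] - src[0])

def is_in_forbidden_sector(node_pos, source_pos, sector):
    """True if node_pos, seen from source_pos, falls inside the forbidden sector."""
    if node_pos is None:
        return True          # convention: an empty candidate is treated as worse than any real one
    angle_min, angle_max, dist_min = sector
    a = bearing(source_pos, node_pos)
    d = math.dist(source_pos, node_pos)
    return angle_min <= a <= angle_max and d >= dist_min

def closer_to_sector_limits(p_pos, q_pos, source_pos, sector):
    """True if p is angularly closer than q to the border of the forbidden sector."""
    angle_min, angle_max, _ = sector
    def gap(pos):
        a = bearing(source_pos, pos)
        return min(abs(a - angle_min), abs(a - angle_max))
    return gap(p_pos) < gap(q_pos)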
Performance evaluation
We have generated random meshes of 1000 nodes according to two models: Unit Disk Graphs [8] and what we call a proxi-graph (a graph based on proximity). In a proxi-graph, each node chooses a radio range following a Gaussian distribution centered at 1 with standard deviation Std depending on the radio range (we assume Std = 25% of the radio range in our simulations). We consider a proxi-graph with a rectangular void of size two fifths of the simulation disk radius in the center of the simulation area. Besides, we discard disconnected topologies and use a disk simulation area to reduce border effects. Data traffic consists of 1,000 flows of 10 packets each from a random source to a random destination. At first, to evaluate only the properties of routing itself, we assume ideal radio and MAC layers: packets do not experience any loss due to channel or MAC behavior. Then, we evaluate the performance of the proposed protocols with the ns2 simulator to take into account more realistic radio conditions. Finally, we assume that nodes advertise the list of blocked sectors to their neighbors and that a node is aware of the blocked nodes in its 3-neighborhood (hellos contain the list of neighbors), since this achieves the best tradeoff between performance and overhead, as shown in the simulations. We compare our routing algorithm with greedy geographic routing to quantify the reduction of packet loss. We use the classical version of greedy routing (the neighbor closest to the destination is chosen as next hop) since other versions (smallest angle deviation, closest neighbor that is closer to the destination than the current node) do not show a significant improvement. We mainly measure packet loss (the proportion of packets sent by a source that never reach the destination), route length (the average route length in hops for the delivered packets), and stretch factor (the average ratio of the route length for a packet and the length of the shortest route for the associated source/destination pair). We evaluated mainly the impact of density (average number of neighbors per node) on the routing performance. We plot the average values and the associated 95% confidence intervals.
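For reference, the three metrics can be computed from per-packet records as in the following sketch (the field names are illustrative):

def evaluate(records):
    """records: list of dicts with keys 'delivered' (bool), 'hops' (int or None),
    and 'shortest' (hop count of the shortest path for that source/destination pair)."""
    sent = len(records)
    delivered = [r for r in records if r['delivered']]
    packet_loss = 1.0 - len(delivered) / sent if sent else 0.0
    avg_route_length = (sum(r['hops'] for r in delivered) / len(delivered)) if delivered else float('nan')
    avg_stretch = (sum(r['hops'] / r['shortest'] for r in delivered) / len(delivered)) if delivered else float('nan')
    return packet_loss, avg_route_length, avg_stretch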
Performance for Unit Disk Graphs
In the first experiment, we have measured the route length obtained by deflection routing for different sizes of the k-neighborhood, with a density of 8 neighbors per node in Unit Disk Graphs (cf. Table 1). We can remark that a shorter route length is already obtained with k = 3. For larger values, deflection routing tends to overestimate the size of voids and increases the route length.
Then, the following experiment shows (cf. Figure 5) that packet loss for greedy routing decreases as density increases: the probability of having a large area without any node decreases, so voids are less likely to appear. However, more than 70% of packets are lost at low density. On the contrary, the proposed routing scheme lowers packet loss: almost no packet is lost (less than 4%) when density exceeds a small threshold (8 neighbors per node). Thus, nodes reroute fewer packets by means of reactive discovery, so the overhead is lower and the delay improved. We can also notice that the route length optimization has no impact on the delivery ratio. We have also measured the route length (cf. Figure 6(a)). Greedy routing fails to find routes when voids exist. Thus, the packet drop probability for greedy routing is larger when the destination is farther. Since the route length is only measured for delivered packets, this poor delivery ratio mechanically yields a lower average route length. To characterize the route length increase, we have measured the stretch factor (cf. Figure 6(b)). We can observe that our optimized algorithm succeeds in slightly reducing the route length. More importantly, route length optimization deviates packets away from voids and decreases the load on the void borders. The stretch factor is larger at low density since more voids are present and packets must be deflected and backtracked more often. The reader can note that greedy routing forwards a packet until it reaches a blocked node: this increases the load on these blocked nodes, even if they implement a void-bypassing mechanism. Finally, the route length is reduced only for very low densities with the optimized version of deflection routing, since the extrapolation of blocked sectors only works when voids are sufficiently large. Moreover, we could overestimate the forbidden sector by adding a guard angle around the blocked nodes: we would reduce the route length, but increase packet drops since we would overestimate the presence of voids.
Performance for a proxi-graph with one rectangular void
We have evaluated the packet loss rate in a proxi-graph with one central void (cf. Figure 7(b)). We can see that packet loss increases compared to unit disk graphs, particularly at high density: dead-ends are more probable. Moreover, since the graph is not a UDG, a node may choose a next hop in a greedy way although that next hop may not have any neighbor in the direction of the destination. Thus, increasing the density is not sufficient to surround voids. We can remark that our algorithms significantly reduce the packet loss ratio.
Finally, we have measured the route length (cf. Figure 7(a)). We can remark the same trend as for UDG and for a proxi-graph with a void. Obviously, the route length is longer, because packets have to surround the rectangular void.
Performance for more realistic channel conditions
We have implemented greedy and deflection routing in ns-2 (version 2.33) to test non-ideal MAC and PHY layers. We have only considered 200 nodes because of the scalability limits of ns-2. As above, we have placed one rectangular void in the center, and all the nodes are placed randomly in the remaining simulation area. We have discarded all disconnected nodes. Finally, we have sequentially activated flows between random pairs of source and destination nodes. A flow sends 10 packets of 512 bytes with an inter-packet interval of 0.25 s. In this way, we measure the ability of the routing protocol to discover a route rather than its robustness to the network load.
We first report on the packet loss ratio for greedy and deflection routing (cf. Figure 8). We can remark that deflection routing achieves a lower loss rate than greedy routing: it discovers more routes. However, the MAC layer is now not ideal: packets can be dropped because of collisions or transmission errors, especially if the route is long. This explains the larger packet loss compared to the previous simulations. This effect also suggests that IEEE 802.11 needs improvement for wireless mesh networks [12].
The corresponding plot represents packet losses due only to routing voids (i.e., there is no next hop according to the routing algorithm); these losses characterize the routing protocol itself and not the influence of the MAC layer (collisions, errors, etc.). We can observe the same trends as for the proxi-graph (cf. Figure 7(b)): greedy routing suffers much more from voids than deflection routing. Finally, we also measure the route length (cf. Figure 9): although routes can be longer than with ideal MAC and PHY layers, because for instance a node may not discover a neighbor, deflection routing discovers routes that are not much longer than those of greedy routing. Besides, the optimized version of deflection routing becomes efficient in surrounding voids and reducing the route length in very sparse networks.
Conclusion
We have proposed a scheme for greedy geographical routing with reactive defect detection. The idea is to reactively detect blocked nodes and propagate the defect information by computing a set of blocked sectors. To reduce the route length and accelerate void detection in dense mesh networks, we have also proposed a method to extrapolate void location. Simulation results show good performance of the proposed methods: packet loss as well as the route length decrease compared to greedy routing.
| 3,219 |
0902.4157
|
2952291254
|
We present a novel geographical routing scheme for spontaneous wireless mesh networks. Greedy geographical routing has many advantages, but suffers from packet losses occurring at the border of voids. In this paper, we propose a flexible greedy routing scheme that can be adapted to any variant of geographical routing and works for any connectivity graph, not necessarily Unit Disk Graphs. The idea is to reactively detect voids, backtrack packets, and propagate information on blocked sectors to reduce packet loss. We also propose an extrapolating algorithm to reduce the latency of void discovery and to limit route stretch. Performance evaluation via simulation shows that our modified greedy routing avoids most of packet losses.
|
Geographical routing requires addresses based on geographical coordinates: a node must obtain its location either with a dedicated physical device (e.g. GPS) or through a more complex algorithm, e.g. by estimating the position with respect to its neighbors. The authors of @cite_3 propose to construct a local coordinate system for each node and determine the coordinates of its neighbors. Then, they aggregate the local coordinate systems into global coordinates. The authors assume the distance to each neighbor to be known, but usually it is difficult to obtain. The authors of @cite_5 follow a similar approach, but base it on the angle of arrival of packets coming from neighbors. A pragmatic approach to this problem is to assume that a subset of mesh routers know their exact positions via GPS devices and that other nodes can compute their positions with respect to their neighbors @cite_8 .
|
{
"abstract": [
"Position information of individual nodes is useful in implementing functions such as routing and querying in ad-hoc networks. Deriving position information by using the capability of the nodes to measure time of arrival (TOA), time difference of arrival (TDOA), angle of arrival (AOA) and signal strength have been used to localize nodes relative to a frame of reference. The nodes in an ad-hoc network can have multiple capabilities and exploiting one or more of the capabilities can improve the quality of positioning. In this paper, we show how AOA capability of the nodes can be used to derive position information. We propose a method for all nodes to determine their orientation and position in an ad-hoc network where only a fraction of the nodes have positioning capabilities, under the assumption that each node has the AOA capability.",
"We consider the problem of node positioning in ad hoc networks. We propose a distributed, infrastructure-free positioning algorithm that does not rely on GPS (Global Positioning System). Instead, the algorithm uses the distances between the nodes to build a relative coordinate system in which the node positions are computed in two dimensions. Despite the distance measurement errors and the motion of the nodes, the algorithm provides sufficient location information and accuracy to support basic network functions. Examples of applications where this algorithm can be used include Location Aided Routing [10] and Geodesic Packet Forwarding [2]. Another example are sensor networks, where mobility is less of a problem. The main contribution of this work is to define and compute relative positions of the nodes in an ad hoc network without using GPS. We further explain how the proposed approach can be applied to wide area ad hoc networks.",
"Beacon placement strongly affects the quality of spatial localization, a critical service for context-aware applications in wireless sensor networks; yet this aspect of localization has received little attention. Fixed beacon placement approaches such as uniform and very dense placement are not always viable and will be inadequate in very noisy environments in which sensor networks may be expected to operate (with high terrain and propagation uncertainties). We motivate the need for empirically adaptive beacon placement and outline a general approach based on exploration and instrumentation of the terrain conditions by a mobile human or robot agent. We design, evaluate and analyze three novel adaptive beacon placement algorithms using this approach for localization based on RF-proximity. In our evaluation, we find that beacon density rather than noise level has a more significant impact on beacon placement algorithms. Our beacon placement algorithms are applicable to a low (beacon) density regime of operation. Noise makes moderate density regimes more improvable."
],
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_8"
],
"mid": [
"2130137998",
"2109764756",
"2100465929"
]
}
|
Efficient Greedy Geographical Non-Planar Routing with Reactive Deflection
|
We consider wireless mesh networks composed of a large number of wireless routers providing connectivity to mobile nodes. They begin to emerge in some regions to provide cheap network connectivity to a community of end users. Usually they grow in a spontaneous way when users or operators add more routers to increase capacity and coverage.
We assume that mesh routers benefit from abundant resources (memory, energy, computation power, GPS devices in some cases) and may only move, quit, or join occasionally, so that the topology of a typical mesh network stays fairly stable. The organization of mesh networks needs to be autonomic, because unlike the current Internet, they cannot rely on highly skilled personnel for configuring, connecting, and running mesh routers. Spontaneous growth of such networks may result in a dense and unplanned topology with some uncovered areas.
Unlike traditional approaches, geographical routing presents interesting properties for spontaneous wireless mesh networks: it does not require any information on the global topology, since a node chooses the next hop among its neighbor routers based on the destination location. Consequently, the routing scheme is scalable, because it only involves local decisions. Geographical routing is simple, because it does not require routing tables, so there is no overhead for their creation and maintenance. Joining the network is also simple, because a new mesh router only needs an address based on its geographical position. Such addresses can be obtained from a dedicated device (e.g. GPS) or with methods for deriving consistent location addresses based on information from neighboring nodes about radio signal strength [4] or connectivity [16]. The most familiar variant of geographical routing is greedy forwarding, in which a node forwards a packet to the neighbor closest to the destination [2,13]. Greedy forwarding guarantees loop-free operation, but packets may be dropped at blocked nodes that only have neighbors in the backward direction. Blocked nodes appear at some places near uncovered areas (voids) or close to obstacles to radio waves in a given direction.
Our main contribution is to propose a new greedy routing that correctly deals with voids. First, we define a new mechanism to reactively detect voids and surround them, which significantly reduces packet loss. Moreover, the information of detected voids propagates backwards so that subsequent packets to the same direction benefit from this reactive detection. Second, we propose a mechanism in which voids deviate packets and shorten the length of a route compared to classical approaches. Our routing scheme works in any network topology independently of whether it corresponds to a planar graph or not.
We start with the description of the related work on geographical routing in Section 2. Section 3 presents the details of the proposed new greedy routing protocol. Then, we evaluate its performance via simulation in Section 4 and conclude.
Reactive Deflection
Geographical routing is attractive for mesh networks, but suffers from two main drawbacks: blocked nodes can drop many packets and the route length may drastically increase when a surrounding mechanism tries to deviate a packet around a void (e.g. the left-hand rule in unit disk graphs). In this paper, we assume a general connectivity graph and propose to reactively detect blocked nodes and locally advertise blocked sectors to avoid packet losses. Such a technique is efficient in any type of networks and graphs since it does not assume any particular graph property.
Detection of blocked nodes can be done in a proactive way: locally flood information to detect voids. For example, we can discover the topology of the wireless mesh to detect elementary cycles in which no other node is located inside the ring. The location of nodes helps to surround voids. However, such an approach requires a complete knowledge of the mesh topology and is computationally intensive.
In opposition to this approach, we have chosen a reactive method: a node becomes blocked with respect to a given destination when it cannot forward a packet to any neighbor closer to the destination. Hence, the part of the network not concerned by the forwarding of this packet does not generate any control traffic, so this approach is more scalable. Let us first adopt the following notation: d(n, D) denotes the distance between node n and destination D, and BLOCKED(n, D) indicates that node n is known to be blocked for destination D. In our approach, a node chooses a neighbor closer to the destination and not blocked for this direction. If a node fails to forward a packet to a given destination, it will consider itself as blocked for this direction. It will advertise backwards a list of blocked directions so that its neighbors will not choose it as a next hop for these directions. If several non-blocked neighbors exist, the forwarder chooses the neighbor closest to the destination, i.e. the one with the best improvement.
For advertising blocked directions, we propose to use the notion of blocked sectors: a node N advertises that it is blocked for any destination that falls in a sector S(N, angle_min, angle_max, dist_min). Let us consider the topology illustrated in the corresponding figure.
The listing of Algorithm 1 ends with: end if; 6: end for; 7: if next = ∅ then; 8: Blocked(N,D) ← true; next ← previous hop; 9: end if; 10: return next.
To limit the overhead, a node tries to merge all its blocked sectors before advertising them. It can only merge overlapping sectors having the same minimal distances (within some tolerance ∆d). Otherwise, the merged blocked sector may include nodes that are reachable. Consider for instance the topology of Figure 2: if node p merges sectors C and C′, node p1 may appear in the blocked sector and would thus become unreachable from p2. Clearly, we must avoid such a merging. Only sectors with the same d_min will be merged; the tolerance ∆d allows merging sectors with approximately equal minimal distances.
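The merging step could be implemented roughly as follows; the sector encoding (angle_min, angle_max, dist_min), the overlap test and the function names are illustrative assumptions, and wrap-around at ±π is ignored for brevity.

def sectors_overlap(s1, s2):
    # Angular overlap test for two blocked sectors (angle_min, angle_max, dist_min)
    return not (s1[1] < s2[0] or s2[1] < s1[0])

def merge_blocked_sectors(sectors, delta_d):
    # Greedy merging of a node's blocked sectors before advertising them: only
    # overlapping sectors whose minimal distances differ by at most delta_d are merged,
    # which avoids covering nodes that are still reachable.
    merged = []
    for s in sorted(sectors):             # process by increasing angle_min
        if merged and sectors_overlap(merged[-1], s) and abs(merged[-1][2] - s[2]) <= delta_d:
            a_min, a_max, d_min = merged[-1]
            merged[-1] = (min(a_min, s[0]), max(a_max, s[1]), min(d_min, s[2]))
        else:
            merged.append(tuple(s))
    return merged

print(merge_blocked_sectors([(0.0, 0.5, 3.0), (0.4, 0.9, 3.1), (1.5, 2.0, 8.0)], delta_d=0.2))
# -> [(0.0, 0.9, 3.0), (1.5, 2.0, 8.0)]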
More formally, node N executes Algorithm 1. Procedure ReactiveDeflection() finds the next hop for forwarding a packet to destination D: the next hop must be closer to the destination and must be unblocked for D. If it does not return any node, node N becomes blocked for destination D (variable BLOCKED(N,D) becomes true). Thus, node N updates its blocked sectors and sends the packet backwards to the previous hop, with its list of blocked sectors piggybacked onto the packet. This scheme is loop-free: when a node sends a packet backwards, the receiver will update its blocked sectors and cannot choose the same next hop for subsequent packets, because Algorithm 1 does not forward packets to blocked nodes. In networks with a non-unit-disk-graph topology, when a node becomes blocked and there is no other neighbor closer to the destination, the node needs to discover a node in a larger vicinity able to forward the packet to the destination. Usually, this consists of flooding a request in a k-neighborhood of the node, k being a parameter that limits the scope of flooding. In this case, the length of the route may increase, as illustrated in Figure 3: border nodes need to forward the packet to reach a virtual next hop [2,9]. This increases both the load of the border nodes and the route length. We propose to limit the effect of such a behavior.
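Since only the tail of the listing of Algorithm 1 is reproduced above, the sketch below reconstructs the base procedure from this description; the geometric sector test, the width of the sector recorded when a node becomes blocked, and the helper names are assumptions of ours.

import math

def destination_in_sector(node, dest, sector):
    # True if 'dest' falls inside a blocked sector (angle_min, angle_max, dist_min)
    # advertised by 'node'; wrap-around at +/- pi is ignored for brevity.
    angle_min, angle_max, dist_min = sector
    angle = math.atan2(dest[1] - node[1], dest[0] - node[0])
    return angle_min <= angle <= angle_max and math.dist(node, dest) >= dist_min

def reactive_deflection(me, dest, neighbors, advertised, previous_hop, my_sectors):
    # Sketch of Algorithm 1: forward to the neighbor closest to the destination among
    # those closer than us and not advertised as blocked for this destination; if none
    # exists, record a blocked sector of our own (here a narrow sector toward dest,
    # an assumed rule) and send the packet back to the previous hop.
    candidates = [n for n in neighbors
                  if math.dist(n, dest) < math.dist(me, dest)
                  and not any(destination_in_sector(n, dest, s) for s in advertised.get(n, []))]
    if candidates:
        return min(candidates, key=lambda n: math.dist(n, dest))
    angle = math.atan2(dest[1] - me[1], dest[0] - me[0])
    my_sectors.append((angle - 0.1, angle + 0.1, math.dist(me, dest)))   # to piggyback backwards
    return previous_hop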
Note that when we reduce packet loss with the previously described algorithm, we also reduce the route length in the long term. Indeed, the nodes around the void discover that they have blocked sectors. When they propagate the information about blocked sectors, nodes whose neighbors are all blocked also become blocked for this destination. Finally, each node discovers a blocked area and forwards packets outside this area. However, several useless packet transmissions and backtrackings are needed before the network converges and blocked sectors are correctly constructed. We propose a mechanism to accelerate the convergence of this propagation process by extrapolating the location of a blocked area.
We propose to detect the border of a void based only on local neighborhood knowledge. We will show that even if a node has only local knowledge, i.e. about nodes at a limited distance, voids can be efficiently surrounded. When a node must transmit a packet backwards, it locally floods a hello packet containing the list of its neighbors and blocked sectors within a k-hop scope.
To detect the border of a void, node N first searches, among the blocked nodes BN in its k-neighborhood, for the one whose direction is closest to the direction of the destination D. Then, N constructs the Maximum Connected Set of blocked nodes that contains BN: it adds BN to this set, and recursively adds all its blocked neighbors. Finally, N computes the forbidden sector that spans the maximum connected set: it extrapolates the blocked area. Figure 4 illustrates void detection with knowledge of the 3-neighborhood topology. First, the source node checks whether it knows a node with a blocked direction and takes the one closest to the direction of the destination. In the example, the blocked node is B. Then, the source constructs the connected set of blocked nodes that includes node B: it obtains the set {A, B, C, D}. Obviously, node F is not in the set since it is not connected to A via other blocked nodes. In the same way, border node H is not in the set {A, B, C, D} because it is 2 hops away: it borders another void. Finally, we obtain the forbidden sector for the destination. We can note that node E is not blocked since it can choose node F when A is blocked: E will never be blocked for this direction.
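One possible realization of this extrapolation step is sketched below, assuming the k-hop view is available as a map from known nodes to their known neighbors; the angular-span computation ignores wrap-around, and all names are ours.

import math

def extrapolate_forbidden_sector(me, dest, k_hop_view, blocked_flags):
    # Take the known blocked node whose direction is closest to that of the destination,
    # grow the maximum connected set of blocked nodes around it, and return the angular
    # sector (seen from 'me') spanning that set together with its minimal distance.
    blocked = [n for n in k_hop_view if blocked_flags.get(n, False)]
    if not blocked:
        return None
    def direction(p):
        return math.atan2(p[1] - me[1], p[0] - me[0])
    target = direction(dest)
    seed = min(blocked, key=lambda n: abs(direction(n) - target))
    component, frontier = {seed}, [seed]
    while frontier:                        # iterative growth of the connected blocked set
        cur = frontier.pop()
        for nb in k_hop_view.get(cur, []):
            if blocked_flags.get(nb, False) and nb not in component:
                component.add(nb)
                frontier.append(nb)
    angles = [direction(n) for n in component]
    dists = [math.dist(me, n) for n in component]
    return (min(angles), max(angles), min(dists))    # the forbidden sector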
Algorithm 2 presents the modified protocol. Function ISINFORBIDDENSECTOR computes the forbidden sector and returns TRUE if the node is located inside this sector. Function CLOSERTOSECTORLIMITS(P,Q) returns TRUE if P is closer than Q to the limits of the forbidden sector.
In other words, if some next hops exist outside the computed forbidden sector, we choose the best one among them. Otherwise, if all possible next hops lie in the forbidden sector, we choose the node closest to the limits of the sector. With this modified routing scheme, packets are forwarded away from the forbidden sector: the void effectively repels packets, since forbidden sectors are built in a distributed manner, while routing remains loop-free.
Algorithm 2 ModifiedReactiveDeflection(D,ForbiddenSector)
1: next ← ∅
2: for all n ∈ Neighbors do
3:   if d(n, D) < d(S, D) and !BLOCKED(n, D) then
4:     if !ISINFORBIDDENSECTOR(n) and { ISINFORBIDDENSECTOR(next) or d(n, D) < d(next, D) } then
5:       next ← n
6:     else if ISINFORBIDDENSECTOR(next) and ISINFORBIDDENSECTOR(n) and CLOSERTOSECTORLIMITS(n, next) then
7:       next ← n
8:     end if
9:   end if
10: end for
11: if next = ∅ then
12:   Blocked(N, D) ← true; next ← previous hop
13: end if
14: return next
Performance evaluation
We have generated random meshes of 1000 nodes according to two models: Unit Disk Graphs [8] and what we call a proxi-graph (a graph based on proximity). In a proxi-graph, each node chooses a radio range following a Gaussian distribution centered at 1 with a standard deviation Std depending on the radio range (we assume Std = 25% of the radio range in our simulations). We consider a proxi-graph with a rectangular void, of size two fifths of the simulation disk radius, in the center of the simulation area. Besides, we discard disconnected topologies and use a disk-shaped simulation area to reduce border effects. Data traffic consists of 1,000 flows of 10 packets each, from a random source to a random destination. At the beginning, to evaluate only the properties of the routing itself, we assume ideal radio and MAC layers: packets do not experience any loss due to channel or MAC behavior. Then, we evaluate the performance of the proposed protocols with the ns-2 simulator to take more realistic radio conditions into account. Finally, we assume that nodes advertise their list of blocked sectors to their neighbors and that a node is aware of the blocked nodes in its 3-neighborhood (hellos contain the list of neighbors), since this achieves the best tradeoff between performance and overhead, as shown in the simulations. We compare our routing algorithm with greedy geographic routing to quantify the reduction of packet loss. We use the classical version of greedy routing (the neighbor closest to the destination is chosen as next hop), since other versions (smallest angle deviation, closest neighbor that is closer to the destination than the forwarding node) do not show a significant improvement. We mainly measure packet loss (the proportion of packets sent by a source that never reach the destination), route length (the average route length in hops for delivered packets), and stretch factor (the average ratio between the route length of a packet and the length of the shortest route for the associated source/destination pair). We mainly evaluate the impact of density (average number of neighbors per node) on the routing performance. We plot the average values and the associated 95% confidence intervals.
Performance for Unit Disk Graphs
In the first experiment, we have measured the route length obtained by deflection routing for different sizes of the k-neighborhood, with a density of 8 neighbors per node in Unit Disk Graphs (cf. Table 1). We can remark that a shorter route length is already obtained with k = 3. For larger values, deflection routing tends to overestimate the size of voids and increases the route length.
Then, the following experiment shows (cf. Figure 5) that packet loss for greedy routing decreases as density increases: the probability of having a large area without any node decreases, so voids are less likely to appear. However, more than 70% of packets are lost at low density. On the contrary, the proposed routing scheme lowers packet loss: almost no packet is lost (less than 4%) when density exceeds a small threshold (8 neighbors per node). Thus, nodes reroute fewer packets by means of reactive discovery, so the overhead is lower and the delay improved. We can also notice that the route length optimization has no impact on the delivery ratio. We have also measured the route length (cf. Figure 6(a)). Greedy routing fails to find routes when voids exist. Thus, the packet drop probability for greedy routing is larger when the destination is farther. Since the route length is only measured for delivered packets, this poor delivery ratio mechanically yields a lower average route length. To characterize the route length increase, we have measured the stretch factor (cf. Figure 6(b)). We can observe that our optimized algorithm succeeds in slightly reducing the route length. More importantly, route length optimization deviates packets away from voids and decreases the load on the void borders. The stretch factor is larger at low density since more voids are present and packets must be deflected and backtracked more often. The reader can note that greedy routing forwards a packet until it reaches a blocked node: this increases the load on these blocked nodes, even if they implement a void-bypassing mechanism. Finally, the route length is reduced only for very low densities with the optimized version of deflection routing, since the extrapolation of blocked sectors only works when voids are sufficiently large. Moreover, we could overestimate the forbidden sector by adding a guard angle around the blocked nodes: we would reduce the route length, but increase packet drops since we would overestimate the presence of voids.
Performance for a proxi-graph with one rectangular void
We have evaluated the packet loss rate in a proxi-graph with one central void (cf. Figure 7(b)). We can see that packet loss increases compared to unit disk graphs, particularly at high density: dead-ends are more probable. Moreover, since the graph is not a UDG, a node may choose a next hop in a greedy way although that next hop may not have any neighbor in the direction of the destination. Thus, increasing the density is not sufficient to surround voids. We can remark that our algorithms significantly reduce the packet loss ratio.
Finally, we have measured the route length (cf. Figure 7(a)). We can remark the same trend as for UDG and for a proxi-graph with a void. Obviously, the route length is longer, because packets have to surround the rectangular void.
Performance for more realistic channel conditions
We have implemented greedy and deflection routing in ns-2 (version 2.33) to test non-ideal MAC and PHY layers. We have only considered 200 nodes because of the scalability limits of ns-2. As above, we have placed one rectangular void in the center, and all the nodes are placed randomly in the remaining simulation area. We have discarded all disconnected nodes. Finally, we have sequentially activated flows between random pairs of source and destination nodes. A flow sends 10 packets of 512 bytes with an inter-packet interval of 0.25 s. In this way, we measure the ability of the routing protocol to discover a route rather than its robustness to the network load.
We first report on the packet loss ratio for greedy and deflection routing (cf. Figure 8). We can remark that deflection routing achieves a lower loss rate than greedy routing: it discovers more routes. However, the MAC layer is now not ideal: packets can be dropped because of collisions or transmission errors, especially if the route is long. This explains the larger packet loss compared to the previous simulations. This effect also suggests that IEEE 802.11 needs improvement for wireless mesh networks [12].
The corresponding plot represents packet losses due only to routing voids (i.e., there is no next hop according to the routing algorithm); these losses characterize the routing protocol itself and not the influence of the MAC layer (collisions, errors, etc.). We can observe the same trends as for the proxi-graph (cf. Figure 7(b)): greedy routing suffers much more from voids than deflection routing. Finally, we also measure the route length (cf. Figure 9): although routes can be longer than with ideal MAC and PHY layers, because for instance a node may not discover a neighbor, deflection routing discovers routes that are not much longer than those of greedy routing. Besides, the optimized version of deflection routing becomes efficient in surrounding voids and reducing the route length in very sparse networks.
Conclusion
We have proposed a scheme for greedy geographical routing with reactive defect detection. The idea is to reactively detect blocked nodes and propagate the defect information by computing a set of blocked sectors. To reduce the route length and accelerate void detection in dense mesh networks, we have also proposed a method to extrapolate void location. Simulation results show good performance of the proposed methods: packet loss as well as the route length decrease compared to greedy routing.
| 3,219 |
0901.2934
|
1512489499
|
In this paper, we first consider a channel that is contaminated by two independent Gaussian noises @math and @math . The capacity of this channel is computed when independent noisy versions of @math are known to the transmitter and/or receiver. It is shown that the channel capacity is greater than the capacity when @math is completely unknown, but is less than the capacity when @math is perfectly known at the transmitter or receiver. For example, if there is one noisy version of @math known at the transmitter only, the capacity is @math , where @math is the input power constraint and @math is the power of the noise corrupting @math . We then consider a Gaussian cognitive interference channel (IC) and propose a causal noisy dirty paper coding (DPC) strategy. We compute the achievable region using this noisy DPC strategy and quantify the regions when it achieves the upper bound on the rate.
|
One special case of this model is when a noisy version of @math is known only to the transmitter; our result in this case is a generalization of Costa's celebrated result @cite_6 . In @cite_6 , it is shown that the achievable rate when the noise @math is perfectly known at the transmitter is equivalent to the rate when @math is known at the receiver, and this rate does not depend on the variance of @math . A new coding strategy to achieve this capacity was also introduced in @cite_6 and is popularly referred to as dirty paper coding (DPC). We generalize Costa's result to the case of noisy interference knowledge. We show that the capacity with knowledge of a noisy version of @math at the transmitter is equal to the capacity with knowledge of a statistically equivalent noisy version of @math at the receiver. However, unlike @cite_6 where the capacity does not depend on the variance of @math , in the general noisy side information case, the capacity decreases as the variance of @math increases.
|
{
"abstract": [
"A channel with output Y = X + S + Z is examined, The state S N(0, QI) and the noise Z N(0, NI) are multivariate Gaussian random variables ( I is the identity matrix.). The input X R^ n satisfies the power constraint (l n) i=1 ^ n X_ i ^ 2 P . If S is unknown to both transmitter and receiver then the capacity is 1 2 (1 + P ( N + Q)) nats per channel use. However, if the state S is known to the encoder, the capacity is shown to be C^ = 1 2 (1 + P N) , independent of Q . This is also the capacity of a standard Gaussian channel with signal-to-noise power ratio P N . Therefore, the state S does not affect the capacity of the channel, even though S is unknown to the receiver. It is shown that the optimal transmitter adapts its signal to the state S rather than attempting to cancel it."
],
"cite_N": [
"@cite_6"
],
"mid": [
"1976109068"
]
}
|
Noisy DPC and Application to a Cognitive Channel
|
Consider a channel in which the received signal, Y is corrupted by two independent additive white Gaussian noise (AWGN) sequences, S ∼ N (0, QI n ) and Z 0 ∼ N (0, N 0 I n ), where I n is the identity matrix of size n. The received signal is of the form,
Y = X + S + Z_0 ,   (1)
where X is the transmitted sequence for n uses of the channel. Let the transmitter and receiver each have knowledge of independent noisy observations of S. We quantify the benefit of this additional knowledge by computing the capacity of the channel in (1) and presenting the coding scheme that achieves capacity. Our result indicates that the capacity is of the form C(P/(µQ + N_0)), where C(x) = 0.5 log(1 + x) and 0 ≤ µ ≤ 1 is the residual fraction (explicitly characterized in Sec. II-C) of the interference power Q that cannot be canceled with the noisy observations at the transmitter and receiver.
We then consider the network in Fig. 2 in which the primary transmitter (node A) is sending information to its intended receiver (node B). There is also a secondary transmitter (node C) who wishes to communicate with its receiver (node D) on the same frequency as the primary nodes. We focus on the case when nodes C and D are relatively closer to node A than node B. Such a scenario might occur for instance when node A is a cellular base station and nodes C and D are two nearby nodes, while node B is at the cell-edge.
Let node A communicate with its receiver node B at rate R using transmit power P A . Let the transmit power of node C equal P C . Since we assumed that node B is much farther away from the other nodes, we do not explicitly consider the
interference that P C causes at node B. A simple lower bound, R_CD-lb, on the rate at which nodes C and D can communicate is
R_CD-lb = C(|h_CD|^2 P_C / (N_D + |h_AD|^2 P_A)) ,   (2)
which is achieved by treating the signal from node A as noise at node D. Similarly, a simple upper bound on this rate is obtained (if either node C or node D has perfect, noncausal knowledge of node A's signal) as
R_CD-ub = C(|h_CD|^2 P_C / N_D) .   (3)
The channel model is depicted in Fig. 1. The transmitter sends an index, W ∈ {1, 2, . . . , K}, to the receiver in n uses of the channel at rate R = (1/n) log_2 K bits per transmission. The output of the channel in (1) is contaminated by two independent AWGN sequences, S ∼ N(0, Q I_n) and Z_0 ∼ N(0, N_0 I_n). Side information M_1 = S + Z_1, consisting of noisy observations of the interference, is available at the transmitter. Similarly, noisy side information M_2 = S + Z_2 is available at the receiver. The noise vectors are distributed as Z_1 ∼ N(0, N_1 I_n) and Z_2 ∼ N(0, N_2 I_n).
Based on the index W and M_1, the encoder transmits one codeword, X, from a (2^{nR}, n) codebook, which satisfies the average power constraint (1/n)‖X‖^2 ≤ P. Let Ŵ be the estimate of W at the receiver; an error occurs if Ŵ ≠ W.
C. Channel Capacity
Theorem 1: Consider a channel of the form (1) with an average transmit power constraint P . Let independent noisy observations M 1 = S + Z 1 and M 2 = S + Z 2 of the interference S be available, respectively, at the transmitter and receiver. The noise vectors have the following distributions:
Z_i ∼ N(0, N_i I_n), i = 0, 1, 2, and S ∼ N(0, Q I_n). The capacity of this channel equals C(P/(µQ + N_0)), where 0 ≤ µ = 1/(1 + Q/N_1 + Q/N_2) ≤ 1.
Remark: Clearly µ = 0 when either N_1 = 0 or N_2 = 0, and the capacity is C(P/N_0), which is consistent with [1] (Costa's result is the special case N_1 = 0 and N_2 = ∞). Further, µ = 1 when N_1 → ∞ and N_2 → ∞, and the capacity is C(P/(Q + N_0)), which is the capacity of a Gaussian channel with noise power Q + N_0. Thus, one can interpret µ as the residual fraction of the interference power that cannot be canceled by the noisy observations at the transmitter and receiver.
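These limiting cases are easy to check numerically; the short sketch below does so with arbitrary example values (function names and numbers are ours).

import math

def capacity(snr):
    # C(x) = 0.5 * log(1 + x), in nats
    return 0.5 * math.log1p(snr)

def noisy_dpc_capacity(P, Q, N0, N1, N2):
    # Theorem 1: C(P / (mu*Q + N0)) with mu = 1 / (1 + Q/N1 + Q/N2)
    mu = 1.0 / (1.0 + Q / N1 + Q / N2)
    return capacity(P / (mu * Q + N0))

P, Q, N0 = 10.0, 5.0, 1.0
print(noisy_dpc_capacity(P, Q, N0, N1=1e-9, N2=3.0), capacity(P / N0))        # mu -> 0
print(noisy_dpc_capacity(P, Q, N0, N1=1e9, N2=1e9), capacity(P / (Q + N0)))   # mu -> 1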
Proof: We first compute an outer bound on the capacity of this channel. It is clear that the channel capacity cannot exceed max_{p(x|m_1,m_2)} I(X; Y | M_1, M_2), which is the capacity when both M_1 and M_2 are known at the transmitter and the receiver. Thus, a capacity bound of the channel can be calculated as
I(X; Y | M_1, M_2) = I(X; Y, M_1, M_2) − I(X; M_1, M_2) ≤ I(X; Y, M_1, M_2)   (4)
= H(X) + H(Y, M_1, M_2) − H(X, Y, M_1, M_2)
= (1/2) log{ (2πe)^4 P · det[ [P + Q + N_0, Q, Q]; [Q, Q + N_1, Q]; [Q, Q, Q + N_2] ] }
  − (1/2) log{ (2πe)^4 det[ [P, P, 0, 0]; [P, P + Q + N_0, Q, Q]; [0, Q, Q + N_1, Q]; [0, Q, Q, Q + N_2] ] }
= C(P/(µQ + N_0)) ,   (5)
where µ = 1/(1 + Q/N_1 + Q/N_2). Note that the inequality in (4) is actually a strict equality since I(X; M_1, M_2) = 0.
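As a numerical sanity check of (5), the determinant expression and the closed form can be evaluated side by side; the parameter values below are arbitrary.

import numpy as np

def outer_bound_check(P, Q, N0, N1, N2):
    # Evaluate the determinant expression above and the closed form C(P/(mu*Q + N0))
    cov3 = np.array([[P + Q + N0, Q,      Q     ],
                     [Q,          Q + N1, Q     ],
                     [Q,          Q,      Q + N2]])            # cov of (Y, M1, M2)
    cov4 = np.array([[P,   P,          0.0,    0.0   ],
                     [P,   P + Q + N0, Q,      Q     ],
                     [0.0, Q,          Q + N1, Q     ],
                     [0.0, Q,          Q,      Q + N2]])       # cov of (X, Y, M1, M2)
    info = 0.5 * np.log(P * np.linalg.det(cov3) / np.linalg.det(cov4))
    mu = 1.0 / (1.0 + Q / N1 + Q / N2)
    return info, 0.5 * np.log1p(P / (mu * Q + N0))

print(outer_bound_check(P=10.0, Q=5.0, N0=1.0, N1=2.0, N2=3.0))   # the two values agree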
D. Achievability of Capacity
We now prove that (5) is achievable. The codebook generation and encoding method we use follow the principles in [2], [3]. The construction of the auxiliary variable is similar to [1].
Random codebook generation:
1) Generate 2^{n I(U;Y,M_2)} i.i.d. length-n codewords U, whose elements are drawn i.i.d. according to U ∼ N(0, P + α^2(Q + N_1)), where α is a coefficient to be optimized.
2) Randomly place the 2^{n I(U;Y,M_2)} codewords U into 2^{nR} cells in such a way that each of the cells has the same number of codewords. The codewords and their assignments to the 2^{nR} cells are revealed to both the transmitter and the receiver.
Encoding: 1) Given an index W and an observation, M_1 = M_1(i), of the Gaussian noise sequence S, the encoder searches among all the codewords U in the W-th cell to find a codeword that is jointly typical with M_1(i). It is easy to show using the joint asymptotic equipartition property (AEP) [8] that if the number of codewords in each cell is at least 2^{n I(U;M_1)}, the probability of finding such a codeword U = U(i) exponentially approaches 1 as n → ∞.
2) Once a jointly typical pair (U(i), M_1(i)) is found, the encoder calculates the codeword to be transmitted as X(i) = U(i) − αM_1(i). With high probability, X(i) will be a typical sequence which satisfies (1/n)‖X(i)‖^2 ≤ P. Decoding:
1) Given that X(i) is transmitted, the received signal is Y(i) = X(i) + S + Z_0 .
The decoder searches among all 2^{n I(U;Y,M_2)} codewords U for a sequence that is jointly typical with Y(i). By the joint AEP, the decoder will find U(i) as the only jointly typical codeword with probability approaching 1.
2) Based on the knowledge of the codeword assignment to the cells, the decoder estimates Ŵ as the index of the cell that U(i) belongs to.
Proof of achievability:
Let U = X + αM 1 = X + α(S + Z 1 ), Y = X + S + Z 0 and M 2 = S + Z 2 , where X ∼ N (0, P ), S ∼ N (0, Q) and Z i ∼ N (0, N i ), i = 0, 1, 2 are independent Gaussian random variables.
To ensure that, with high probability, at least one jointly typical pair of U and M_1 can be found in each of the 2^{nR} cells, the rate R, which is a function of α, must satisfy
R(α) ≤ I(U; Y, M_2) − I(U; M_1) .   (6)
The two mutual informations in (6) can be calculated as
I(U; Y, M_2) = H(U) + H(Y, M_2) − H(U, Y, M_2)
= (1/2) log{ (P + α^2(Q + N_1)) · det[ [P + Q + N_0, Q]; [Q, Q + N_2] ] }
  − (1/2) log det[ [P + α^2(Q + N_1), P + αQ, αQ]; [P + αQ, P + Q + N_0, Q]; [αQ, Q, Q + N_2] ]   (7)
and I(U; M_1) = (1/2) log( (P + α^2(Q + N_1)) / P ) .   (8)
Substituting (7) and (8) into (6), we find
R(α) ≤ (1/2) log{ P [(Q + P + N_0)(Q + N_2) − Q^2] }
  − (1/2) log{ α^2 [Q(P + N_0)(N_1 + N_2) + (Q + P + N_0) N_1 N_2] − 2αQP N_2 + P(Q N_0 + Q N_2 + N_0 N_2) } .   (9)
After simple algebraic manipulations, the optimal coefficient, α * , that maximizes the right hand side of (9) is found to be
α* = QP N_2 / [ Q(P + N_0)(N_1 + N_2) + (Q + P + N_0) N_1 N_2 ] .   (10)
Substituting for α * in (9), the maximal rate equals
R(α*) = C(P/(µQ + N_0)), with 1/µ = 1 + Q/N_1 + Q/N_2   (11)
, which equals the upper bound (5).
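The optimization over α can also be verified without expanding (9): the sketch below evaluates R(α) directly from the Gaussian covariances, scans α numerically, and compares the result with the closed-form α* of (10) and with the capacity (5). All parameter values are arbitrary.

import numpy as np

def achievable_rate(alpha, P, Q, N0, N1, N2):
    # R(alpha) = I(U; Y, M2) - I(U; M1) for U = X + alpha*(S + Z1), computed from the
    # Gaussian covariance matrices instead of the expanded expression (9)
    var_u = P + alpha**2 * (Q + N1)
    cov_uym2 = np.array([[var_u,         P + alpha * Q, alpha * Q],
                         [P + alpha * Q, P + Q + N0,    Q        ],
                         [alpha * Q,     Q,             Q + N2   ]])
    cov_ym2 = cov_uym2[1:, 1:]
    i_u_ym2 = 0.5 * np.log(var_u * np.linalg.det(cov_ym2) / np.linalg.det(cov_uym2))
    i_u_m1 = 0.5 * np.log(var_u / P)
    return i_u_ym2 - i_u_m1

P, Q, N0, N1, N2 = 10.0, 5.0, 1.0, 2.0, 3.0
alphas = np.linspace(0.0, 1.5, 20001)
rates = [achievable_rate(a, P, Q, N0, N1, N2) for a in alphas]
alpha_star = Q * P * N2 / (Q * (P + N0) * (N1 + N2) + (Q + P + N0) * N1 * N2)   # eq. (10)
mu = 1.0 / (1.0 + Q / N1 + Q / N2)
print(alphas[int(np.argmax(rates))], alpha_star)               # numerical argmax vs. (10)
print(max(rates), 0.5 * np.log1p(P / (mu * Q + N0)))           # best rate vs. capacity (5)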
E. Special cases
Noisy estimate at transmitter/receiver only: When the observation of S is available only at the transmitter or only at the receiver, the channel is equivalent to our original model with N_2 → ∞ or N_1 → ∞, respectively. The corresponding capacities are, respectively,
I(X; Y | M_1) = C( P / (Q[N_1/(Q + N_1)] + N_0) )   (12)
I(X; Y, M_2) = C( P / (Q[N_2/(Q + N_2)] + N_0) ) ,   (13)
Note that when N 1 = 0, the channel model further reduces to Costa's DPC channel model [1]. This paper extends that result to the case of noisy interference. Indeed, by setting N 1 = N 2 in (13) and (12), we can see that the capacity with noisy interference known to transmitter only equals the capacity with a statistically similar noisy interference known to receiver only.
From (12), one may intuitively interpret the effect of the knowledge of M_1 at the transmitter. Indeed, a fraction Q/(Q + N_1) of the interfering power can be canceled using the proposed coding scheme. The remaining fraction N_1/(Q + N_1) of the interfering power Q is treated as 'residual' noise. Thus, unlike Costa's result [1], the capacity in this case depends on the power Q of the interfering source: for a fixed N_1, as Q → ∞, the capacity decreases and approaches C(P/(N_1 + N_0)).
Multiple Independent Observations: Let there be n_1 independent observations M_1, M_2, . . . , M_{n_1} of S at the transmitter and n_2 independent observations M_{n_1+1}, M_{n_1+2}, . . . , M_{n_1+n_2} at the receiver. It can be easily shown that the capacity in this case is given by C(P/(μ̃Q + N_0)), where μ̃ = 1/(1 + Q/N_1 + Q/N_2 + ··· + Q/N_{n_1+n_2}) and N_1, N_2, . . . , N_{n_1+n_2} are the variances of the Gaussian noise variables corresponding to the n_1 + n_2 observations. The proof involves calculating maximum likelihood estimates (MLE) of the interference at both the transmit and receive nodes and using these estimates in Theorem 1. To avoid repetitive derivations, the proof is omitted.
It is easy to see that the capacity expression is symmetric in the noise variances at the transmitter and receiver. In other words, having all the n 1 + n 2 observations at the transmitter would result in the same capacity. Thus, the observations of S made at the transmitter and the receiver are equivalent in achievable rate, as long as the corrupting Gaussian noises have the same statistics.
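A one-line helper illustrates the combined residual fraction; the function name and the example values are ours.

def mu_multi(Q, noise_vars):
    # Residual interference fraction with several independent noisy observations of S;
    # how the observations are split between transmitter and receiver does not matter.
    return 1.0 / (1.0 + sum(Q / N for N in noise_vars))

print(mu_multi(Q=5.0, noise_vars=[2.0, 4.0, 3.0]))   # e.g. two at the transmitter, one at the receiver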
In this section, we assumed non-causal knowledge of the interference at the transmitter and receiver nodes. In the next section, we propose a simple and practical transmission scheme that uses causal knowledge of the interference to increase the achievable rate. Proof: Consider the various cases as follows: 1. Let |h_AD|^2 ≥ ((P_C|h_CD|^2 + N_D)/P_A)(e^{2R} − 1). Now, consider the multiple access channel from nodes A, C to node D. Clearly, node D can decode the signal transmitted by node A by treating the signal from node C as noise. Hence, it can easily subtract this signal from the received signal, and node C can achieve its rate upper bound C(|h_CD|^2 P_C / N_D).
III. APPLYING DPC TO A COGNITIVE CHANNEL
2. Consider the case |h_AD|^2 ≤ (N_D/P_A)(e^{2R} − 1) and |h_AC|^2 ≤ (N_C/P_A)(e^{2R} − 1). Now, neither node C nor node D can perfectly decode the signal from node A. Thus, an achievable rate of C(|h_CD|^2 P_C / (N_D + P_A|h_AD|^2)) for node C is obtained simply by treating the signal from node A as noise at node D.
3. Now, consider the case |h_AC|^2 ≥ (N_C/P_A)(e^{2R} − 1) and ((P_C|h_CD|^2 + N_D)/P_A)(e^{2R} − 1) ≥ |h_AD|^2 ≥ (N_D/P_A)(e^{2R} − 1).
In the following we construct a simple practical scheme in which nodes C and D obtain causal, noisy estimates of the signal being sent from node A. Using these estimates and Theorem 1, the nodes cancel out a part of the interference to achieve a higher transmission rate as follows.
R_CD =   (14)
C(|h_CD|^2 P_C / N_D), if |h_AD|^2 ≥ ((P_C|h_CD|^2 + N_D)/P_A)(e^{2R} − 1);
C(|h_CD|^2 P_C / (N_D + P_A|h_AD|^2)), if |h_AD|^2 ≤ (N_D/P_A)(e^{2R} − 1) and |h_AC|^2 ≤ (N_C/P_A)(e^{2R} − 1);
C(|h_CD|^2 P_C / (µ_r|h_AD|^2 P_A + N_D)), if |h_AC|^2 ≤ (N_C/P_A)(e^{2R} − 1) and ((P_C|h_CD|^2 + N_D)/P_A)(e^{2R} − 1) ≥ |h_AD|^2 ≥ (N_D/P_A)(e^{2R} − 1);
(1 − m/n) C(|h_CD|^2 P_C (n/(n−m)) / (µ_t|h_AD|^2 P_A + N_D)), if |h_AC|^2 ≥ (N_C/P_A)(e^{2R} − 1) and |h_AD|^2 ≤ (N_D/P_A)(e^{2R} − 1);
(1 − m/n) C(|h_CD|^2 P_C (n/(n−m)) / (µ_tr|h_AD|^2 P_A + N_D)), if |h_AC|^2 ≥ (N_C/P_A)(e^{2R} − 1) and ((P_C|h_CD|^2 + N_D)/P_A)(e^{2R} − 1) ≥ |h_AD|^2 ≥ (N_D/P_A)(e^{2R} − 1).
Let us assume that node A uses a code book of size (2 nR , n) where each element is i.i.d. Gaussian distributed. The transmit signal is denoted as X A (i), i = 1, 2, . . . n. Nodes C and D listen to the signal transmitted by node A for m symbols in each block of n symbols. Based on the received signal, nodes C and D decodes the code word transmitted by node A.
Let P e,C and P e,D denote, respectively, the probability of decoding error at nodes C and D: These error probabilities depend on the channel gains as well as m. In the remaining n − m symbols, nodes C and D use their estimate of X A (i), i = m + 1, . . . n to increase their transmission rate. Using Theorem 1, the achievable rate is given by
r = (1/2)(1 − m/n) log( 1 + |h_CD|^2 P_C (n/(n−m)) / (µ_tr|h_AD|^2 P_A + N_D) ) ,   (15)
where
1/µ_tr = 1 + |h_AD|^2 P_A / N_1 + |h_AD|^2 P_A / N_2 .   (16)
The transmit power at node C is increased over the n − m symbols during which it transmits, to meet the average power constraint P_C. The variances of the errors in the estimates of X_A at nodes C and D are given by N_1 and N_2, respectively. Because an i.i.d. Gaussian codebook is used, N_1 = 2P_{e,C} P_A |h_AD|^2 and N_2 = 2P_{e,D} P_A |h_AD|^2. The values of P_{e,C} and P_{e,D} can be obtained using the theory of error exponents. Specifically, using the random coding bound, we obtain P_{e,C} ≤ exp(−m E_C(R)) and P_{e,D} ≤ exp(−n E_D(R)) .   (17)
where E_C(R) and E_D(R) represent the random coding exponents. E_C(R) is derived in [9] and shown in (18) for easy reference (E_D(R) is similarly defined). In (18),
A_1 = |h_AC|^2 P_A / N_C, β = exp(2R), γ = 0.5(1 + A_1/2 + sqrt(1 + A_1^2/4)), and δ = 0.5 log(0.5 + A_1/4 + 0.5 sqrt(1 + A_1^2/4)).
Substituting for N_1 and N_2 into (16), one can obtain the rate given in (14).
Note that there is no constraint forcing node C to use codes of length n − m even though node A uses codes of length n. Node C can code over multiple codewords of A to achieve its desired probability of error.
The selection of m critically affects the achievable rate. On the one hand, increasing m leaves a smaller fraction of time available for actual data communication between nodes C and D, thus decreasing the rate. On the other hand, increasing m results in improved decoding of node A's signal at nodes C and D, consequently reducing P_{e,C} and P_{e,D} and increasing the achievable rate. The optimal value of m can be obtained by equating the derivative of (15) to 0. Due to the analytical intractability, we resort to a simple numerical optimization to find the optimal value of m. For a given n, we evaluate the rate r_CD for all values of m = 1, 2, . . . , n and then simply pick the largest value. We are currently trying to derive analytical expressions for the optimum value of m.
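The search over m can be organized as in the sketch below; the error-exponent functions are passed in as callables (and replaced by constants in the toy call) because the closed form (18) is lengthy, and all parameter values are illustrative.

import math

def capacity(snr):
    return 0.5 * math.log1p(snr)

def best_listening_length(n, R, PA, PC, hAD2, hCD2, ND, EC, ED):
    # Brute-force search over the listening length m described above. EC and ED stand
    # for the error-exponent functions E_C(R), E_D(R); N1, N2 follow the bounds (17).
    best_m, best_rate = 0, 0.0
    for m in range(1, n):
        Pe_C = math.exp(-m * EC(R))       # decoding-error bound at node C after m symbols
        Pe_D = math.exp(-n * ED(R))       # node D may use all n symbols, cf. (17)
        N1 = 2.0 * Pe_C * PA * hAD2
        N2 = 2.0 * Pe_D * PA * hAD2
        mu_tr = 1.0 / (1.0 + hAD2 * PA / N1 + hAD2 * PA / N2)
        rate = (1.0 - m / n) * capacity(hCD2 * PC * (n / (n - m)) / (mu_tr * hAD2 * PA + ND))
        if rate > best_rate:
            best_m, best_rate = m, rate
    return best_m, best_rate

# toy usage with constant placeholder exponents (not the expressions of (18))
print(best_listening_length(n=1000, R=0.5, PA=10.0, PC=2.0, hAD2=0.36, hCD2=1.0, ND=1.0,
                            EC=lambda R: 0.02, ED=lambda R: 0.02))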
4. Let |h_AC|^2 ≤ (N_C/P_A)(e^{2R} − 1) and ((P_C|h_CD|^2 + N_D)/P_A)(e^{2R} − 1) ≥ |h_AD|^2 ≥ (N_D/P_A)(e^{2R} − 1). In this case, the transmitter node C cannot decode node A's signal. However, node D uses all n received symbols to first decode node A's signal (with certain error probability) and then cancel its effect from the received signal. Subsequently, node D will decode node C's signal and the achievable rate is obtained from Theorem 1.
5. Finally, let |h_AC|^2 ≥ (N_C/P_A)(e^{2R} − 1) and |h_AD|^2 ≤ (N_D/P_A)(e^{2R} − 1). In this case, node D cannot decode node A's signal. However, node C uses the first m received symbols to first decode node A's signal (with certain error probability) and then employ a noisy DPC transmission strategy. Subsequently, the achievable rate is obtained from Theorem 1.
A. Numerical Results
In our numerical results, we fix the parameter values as P_A = 10, P_C = 2, N_C = N_D = 1. For simplicity we fix |h_CD| = 1 and vary h_AC and h_AD. Fig. 3 shows the variation of the achievable rate with m for different values of n. As n increases, the fractional penalty on the rate for larger m is offset by the gains due to better decoding. Thus, the optimum value of m increases. However,
E_C(R) =
0, if R > C(|h_AC|^2 P_A / N_C);
(A_1/(4β)) [ (β + 1) − (β − 1) sqrt(1 + 4β/(A_1(β − 1))) ] + (1/2) log[ β − (A_1(β − 1)/2)( sqrt(1 + 4β/(A_1(β − 1))) − 1 ) ], if δ ≤ R ≤ C(|h_AC|^2 P_A / N_C);
1 − γ + A_1/2 + (1/2) log(γ − A_1/2) + (1/2) log(γ) − R, if R < δ.   (18)
it turns out that the optimum ratio m/n decreases as n increases. We are currently trying to analytically compute the limit to which the optimum m converges as n → ∞. Fig. 4 shows the variation of the achievable rate r_CD with h_AD for different values of h_AC. Notice the nonmonotonic variation of r_CD with h_AD, which can be explained as follows. First consider the case where h_AC is small. In this case, the transmitter cannot reliably decode node A's signal. If, in addition, h_AD is also small, then node D cannot decode node A's signal either. Thus, as h_AD increases, the interference of node A at node D increases and the achievable rate r_CD decreases. Now, as h_AD increases beyond a certain value, node D can begin to decode node A's signal, and the probability of error is captured by Gallager's error exponents. In this scenario, as h_AD increases, the error probability decreases and thus node D can cancel out more and more of the interference from node A. Consequently, r_CD increases. Similar qualitative behavior occurs for other values of h_AC. However, for large h_AC, node C can decode (with some errors) the signal from node A and then use a noisy DPC scheme to achieve higher rates r_CD. Notice also that, as explained before, for large h_AD the outer bound on the rate is achieved for all values of h_AC.
The variation of r_CD with h_AC is given in Fig. 5. First consider the case |h_AD| = 0.2. In this case, node D cannot decode the signal of node A reliably. Now, for small values of |h_AC|, node C also cannot decode node A's signal. Hence, the achievable rate equals the lower bound R_CD-lb. As |h_AC| increases, node C can begin to decode node A's signal and cancel out a part of the interference using the noisy DPC scheme; hence r_CD begins to increase. Similar behavior is observed for |h_AD| = 0.6. However, when |h_AD| = 0.9, node D can decode node A's signal with some errors and cancel out part of the interference. Hence, in this case, even for small values of |h_AC| the achievable rate r_CD is greater than the lower bound. As before, r_CD increases with |h_AC| since node C can cancel out an increasing portion of the interference using the noisy DPC technique. Note, however, that a larger h_AD causes more interference at node D, which is reflected in the decrease of the lower bound. Thus, for a given |h_AC|, the achievable rate can be lower or higher depending on the value of |h_AD|.
| 4,165 |
0901.2934
|
1512489499
|
In this paper, we first consider a channel that is contaminated by two independent Gaussian noises @math and @math . The capacity of this channel is computed when independent noisy versions of @math are known to the transmitter and/or receiver. It is shown that the channel capacity is greater than the capacity when @math is completely unknown, but is less than the capacity when @math is perfectly known at the transmitter or receiver. For example, if there is one noisy version of @math known at the transmitter only, the capacity is @math , where @math is the input power constraint and @math is the power of the noise corrupting @math . We then consider a Gaussian cognitive interference channel (IC) and propose a causal noisy dirty paper coding (DPC) strategy. We compute the achievable region using this noisy DPC strategy and quantify the regions when it achieves the upper bound on the rate.
|
In @cite_6 , Costa adopted the random coding argument given by @cite_3 @cite_0 . Based on the channel capacity @math given in @cite_3 @cite_0 , Costa constructed the auxiliary variable @math as a linear combination of @math and @math and showed that this simple construction of @math achieves capacity.
|
{
"abstract": [
"A computer memory with defects is modeled as a discrete memoryless channel with states that are statistically determined. The storage capacity is found when complete defect information is given to the encoder or to the decoder, and when the defect information is given completely to the decoder but only partially to the encoder. Achievable storage rates are established when partial defect information is provided at varying rates to both the encoder and the decoder. Arimoto-Blahut type algorithms are used to compute the storage capacity.",
"",
"A channel with output Y = X + S + Z is examined, The state S N(0, QI) and the noise Z N(0, NI) are multivariate Gaussian random variables ( I is the identity matrix.). The input X R^ n satisfies the power constraint (l n) i=1 ^ n X_ i ^ 2 P . If S is unknown to both transmitter and receiver then the capacity is 1 2 (1 + P ( N + Q)) nats per channel use. However, if the state S is known to the encoder, the capacity is shown to be C^ = 1 2 (1 + P N) , independent of Q . This is also the capacity of a standard Gaussian channel with signal-to-noise power ratio P N . Therefore, the state S does not affect the capacity of the channel, even though S is unknown to the receiver. It is shown that the optimal transmitter adapts its signal to the state S rather than attempting to cancel it."
],
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_6"
],
"mid": [
"1986389751",
"70904370",
"1976109068"
]
}
|
Noisy DPC and Application to a Cognitive Channel
|
Consider a channel in which the received signal, Y is corrupted by two independent additive white Gaussian noise (AWGN) sequences, S ∼ N (0, QI n ) and Z 0 ∼ N (0, N 0 I n ), where I n is the identity matrix of size n. The received signal is of the form,
Y = X + S + Z_0 ,   (1)
where X is the transmitted sequence for n uses of the channel. Let the transmitter and receiver each have knowledge of independent noisy observations of S. We quantify the benefit of this additional knowledge by computing the capacity of the channel in (1) and presenting the coding scheme that achieves capacity. Our result indicates that the capacity is of the form C(P/(µQ + N_0)), where C(x) = 0.5 log(1 + x) and 0 ≤ µ ≤ 1 is the residual fraction (explicitly characterized in Sec. II-C) of the interference power Q that cannot be canceled with the noisy observations at the transmitter and receiver.
We then consider the network in Fig. 2 in which the primary transmitter (node A) is sending information to its intended receiver (node B). There is also a secondary transmitter (node C) who wishes to communicate with its receiver (node D) on the same frequency as the primary nodes. We focus on the case when nodes C and D are relatively closer to node A than node B. Such a scenario might occur for instance when node A is a cellular base station and nodes C and D are two nearby nodes, while node B is at the cell-edge.
Let node A communicate with its receiver node B at rate R using transmit power P A . Let the transmit power of node C equal P C . Since we assumed that node B is much farther away from the other nodes, we do not explicitly consider the
interference that P C causes at node B. A simple lower bound, R_CD-lb, on the rate at which nodes C and D can communicate is
R_CD-lb = C(|h_CD|^2 P_C / (N_D + |h_AD|^2 P_A)) ,   (2)
which is achieved by treating the signal from node A as noise at node D. Similarly, a simple upper bound on this rate is obtained (if either node C or node D has perfect, noncausal knowledge of node A's signal) as
R CD−ub = C(|h CD | 2 P C /N D ).(3)
The channel model is depicted in Fig. 1. The transmitter sends an index W ∈ {1, 2, . . . , K} to the receiver in n uses of the channel, at rate R = (1/n) log_2 K bits per transmission. The output of the channel in (1) is contaminated by two independent AWGN sequences, S ∼ N(0, Q I_n) and Z_0 ∼ N(0, N_0 I_n). Side information M_1 = S + Z_1, consisting of noisy observations of the interference, is available at the transmitter. Similarly, noisy side information M_2 = S + Z_2 is available at the receiver. The noise vectors are distributed as Z_1 ∼ N(0, N_1 I_n) and Z_2 ∼ N(0, N_2 I_n).
Based on the index W and M_1, the encoder transmits one codeword X from a (2^{nR}, n) codebook, which satisfies the average power constraint (1/n)‖X‖^2 ≤ P. Let Ŵ be the estimate of W at the receiver; an error occurs if Ŵ ≠ W.
C. Channel Capacity
Theorem 1: Consider a channel of the form (1) with an average transmit power constraint P . Let independent noisy observations M 1 = S + Z 1 and M 2 = S + Z 2 of the interference S be available, respectively, at the transmitter and receiver. The noise vectors have the following distributions:
Z_i ∼ N(0, N_i I_n), i = 0, 1, 2, and S ∼ N(0, Q I_n). The capacity of this channel equals C(P/(µQ + N_0)), where 0 ≤ µ = 1/(1 + Q/N_1 + Q/N_2) ≤ 1.
Remark: Clearly µ = 0 when either N_1 = 0 or N_2 = 0, and the capacity is C(P/N_0), which is consistent with [1] (Costa's result is the special case N_1 = 0, N_2 = ∞). Further, µ = 1 when N_1 → ∞ and N_2 → ∞, and the capacity is C(P/(Q + N_0)), the capacity of a Gaussian channel with noise power Q + N_0. Thus, one can interpret µ as the residual fraction of the interference power that cannot be canceled using the noisy observations at the transmitter and receiver.
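As a quick numerical illustration of this formula (our sketch, not part of the paper; the parameter values below are arbitrary), the residual fraction µ and the resulting capacity can be computed directly:

```python
import math

def C(x):
    """C(x) = 0.5 * log(1 + x), in nats per channel use."""
    return 0.5 * math.log(1.0 + x)

def noisy_dpc_capacity(P, Q, N0, N1, N2):
    """Capacity of Y = X + S + Z0 when the encoder sees S + Z1 and the
    decoder sees S + Z2 (Theorem 1): C(P / (mu*Q + N0))."""
    mu = 1.0 / (1.0 + Q / N1 + Q / N2)   # residual fraction of interference power
    return C(P / (mu * Q + N0)), mu

# Arbitrary example: strong interference, moderately noisy side information.
P, Q, N0, N1, N2 = 10.0, 20.0, 1.0, 2.0, 4.0
cap, mu = noisy_dpc_capacity(P, Q, N0, N1, N2)
print(f"mu = {mu:.3f}, capacity = {cap:.3f} nats/use")
print(f"interference-free benchmark C(P/N0)      = {C(P / N0):.3f}")
print(f"no-side-information capacity C(P/(Q+N0)) = {C(P / (Q + N0)):.3f}")
```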
Proof: We first compute an outer bound on the capacity of this channel. It is clear that the channel capacity cannot exceed max_{p(x|m_1,m_2)} I(X; Y | M_1, M_2), which is the capacity when both M_1 and M_2 are known at the transmitter and the receiver. Thus, a bound on the capacity of the channel can be calculated as
I(X; Y | M_1, M_2) = I(X; Y, M_1, M_2) − I(X; M_1, M_2) ≤ I(X; Y, M_1, M_2)    (4)
= H(X) + H(Y, M_1, M_2) − H(X, Y, M_1, M_2)
= \frac{1}{2}\log\left[(2\pi e)^{4} P \begin{vmatrix} P+Q+N_0 & Q & Q \\ Q & Q+N_1 & Q \\ Q & Q & Q+N_2 \end{vmatrix}\right] − \frac{1}{2}\log\left[(2\pi e)^{4} \begin{vmatrix} P & P & 0 & 0 \\ P & P+Q+N_0 & Q & Q \\ 0 & Q & Q+N_1 & Q \\ 0 & Q & Q & Q+N_2 \end{vmatrix}\right]
= C(P/(µQ + N_0)),    (5)
where µ = 1/(1 + Q/N_1 + Q/N_2). Note that the inequality in (4) is actually an equality since I(X; M_1, M_2) = 0.
D. Achievability of Capacity
We now prove that (5) is achievable. The codebook generation and encoding method we use follows the principles in [2], [3]. The construction of the auxiliary variable is similar to [1].
Random codebook generation:
1) Generate 2^{nI(U;Y,M_2)} i.i.d. length-n codewords U, whose elements are drawn i.i.d. according to U ∼ N(0, P + α^2(Q + N_1)), where α is a coefficient to be optimized.
2) Randomly place the 2^{nI(U;Y,M_2)} codewords U into 2^{nR} cells in such a way that each cell contains the same number of codewords. The codewords and their assignment to the 2^{nR} cells are revealed to both the transmitter and the receiver.
Encoding:
1) Given an index W and an observation M_1 = M_1(i) of the Gaussian noise sequence S, the encoder searches among all the codewords U in the W-th cell for a codeword that is jointly typical with M_1(i). It is easy to show using the joint asymptotic equipartition property (AEP) [8] that if the number of codewords in each cell is at least 2^{nI(U;M_1)}, the probability of finding such a codeword U = U(i) approaches 1 exponentially as n → ∞.
2) Once a jointly typical pair (U(i), M_1(i)) is found, the encoder calculates the codeword to be transmitted as X(i) = U(i) − αM_1(i). With high probability, X(i) is a typical sequence that satisfies (1/n)‖X(i)‖^2 ≤ P.
Decoding:
1) Given that X(i) is transmitted, the received signal is Y(i) = X(i) + S + Z_0. The decoder searches among all 2^{nI(U;Y,M_2)} codewords U for a sequence that is jointly typical with Y(i). By the joint AEP, the decoder finds U(i) as the only jointly typical codeword with probability approaching 1.
2) Based on the knowledge of the codeword assignment to the cells, the decoder estimates Ŵ as the index of the cell to which U(i) belongs.
Proof of achievability:
Let U = X + αM_1 = X + α(S + Z_1), Y = X + S + Z_0, and M_2 = S + Z_2, where X ∼ N(0, P), S ∼ N(0, Q), and Z_i ∼ N(0, N_i), i = 0, 1, 2, are independent Gaussian random variables.
To ensure that, with high probability, at least one jointly typical pair of U and M_1 can be found in each of the 2^{nR} cells, the rate R, which is a function of α, must satisfy
R(α) ≤ I(U; Y, M_2) − I(U; M_1).    (6)
The two mutual informations in (6) can be calculated as
I(U; Y, M_2) = H(U) + H(Y, M_2) − H(U, Y, M_2)
= \frac{1}{2}\log\left[\bigl(P + \alpha^2(Q+N_1)\bigr)\begin{vmatrix} P+Q+N_0 & Q \\ Q & Q+N_2 \end{vmatrix}\right] − \frac{1}{2}\log\begin{vmatrix} P+\alpha^2(Q+N_1) & P+\alpha Q & \alpha Q \\ P+\alpha Q & P+Q+N_0 & Q \\ \alpha Q & Q & Q+N_2 \end{vmatrix}    (7)
and
I(U; M_1) = \frac{1}{2}\log\frac{P + \alpha^2(Q + N_1)}{P}.    (8)
Substituting (7) and (8) into (6), we find
R(α) ≤ \frac{1}{2}\log\Bigl\{ P\bigl[(Q+P+N_0)(Q+N_2) − Q^2\bigr] \Bigr\} − \frac{1}{2}\log\Bigl\{ \alpha^2\bigl[Q(P+N_0)(N_1+N_2) + (Q+P+N_0)N_1N_2\bigr] − 2\alpha Q P N_2 + P(QN_0 + QN_2 + N_0N_2) \Bigr\}.    (9)
After simple algebraic manipulations, the optimal coefficient, α * , that maximizes the right hand side of (9) is found to be
α^* = \frac{QPN_2}{Q(P+N_0)(N_1+N_2) + (Q+P+N_0)N_1N_2}.    (10)
Substituting for α * in (9), the maximal rate equals
R(α^*) = C(P/(µQ + N_0)),    (11)
with 1/µ = 1 + Q/N_1 + Q/N_2, which equals the upper bound (5).
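The optimization over α is easy to check numerically. The sketch below (ours, using numpy; not the authors' code) evaluates R(α) = I(U; Y, M_2) − I(U; M_1) directly from the covariance matrices implied by U = X + α(S + Z_1), and compares a grid maximum with the closed forms (10) and (11):

```python
import numpy as np

P, Q, N0, N1, N2 = 10.0, 20.0, 1.0, 2.0, 4.0

def R(alpha):
    """R(alpha) = I(U; Y, M2) - I(U; M1) for the jointly Gaussian scalars
    U = X + alpha*(S + Z1), Y = X + S + Z0, M1 = S + Z1, M2 = S + Z2."""
    vU = P + alpha**2 * (Q + N1)
    K_YM2 = np.array([[P + Q + N0, Q],
                      [Q,          Q + N2]])
    K_UYM2 = np.array([[vU,           P + alpha*Q, alpha*Q],
                       [P + alpha*Q,  P + Q + N0,  Q      ],
                       [alpha*Q,      Q,           Q + N2 ]])
    I_U_YM2 = 0.5 * np.log(vU * np.linalg.det(K_YM2) / np.linalg.det(K_UYM2))
    I_U_M1 = 0.5 * np.log(vU / P)            # since Var(U | M1) = P
    return I_U_YM2 - I_U_M1

alphas = np.linspace(0.0, 2.0, 20001)
alpha_grid = alphas[np.argmax([R(a) for a in alphas])]

alpha_star = Q*P*N2 / (Q*(P + N0)*(N1 + N2) + (Q + P + N0)*N1*N2)   # eq. (10)
mu = 1.0 / (1.0 + Q/N1 + Q/N2)
print(f"grid optimum ~ {alpha_grid:.4f}, closed-form alpha* = {alpha_star:.4f}")
print(f"R(alpha*) = {R(alpha_star):.6f}  vs  C(P/(mu*Q+N0)) = {0.5*np.log(1 + P/(mu*Q + N0)):.6f}")
```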
E. Special cases
Noisy estimate at transmitter/receiver only: When the observation of S is available only at the transmitter or only at the receiver, the channel is equivalent to our original model with N_2 → ∞ or N_1 → ∞, respectively. The corresponding capacities are
I(X; Y | M_1) = C(P/(Q[N_1/(Q + N_1)] + N_0)),    (12)
I(X; Y, M_2) = C(P/(Q[N_2/(Q + N_2)] + N_0)),    (13)
Note that when N 1 = 0, the channel model further reduces to Costa's DPC channel model [1]. This paper extends that result to the case of noisy interference. Indeed, by setting N 1 = N 2 in (13) and (12), we can see that the capacity with noisy interference known to transmitter only equals the capacity with a statistically similar noisy interference known to receiver only.
From (12), one may intuitively interpret the effect of the knowledge of M_1 at the transmitter. Indeed, a fraction Q/(Q+N_1) of the interfering power can be canceled using the proposed coding scheme. The remaining fraction N_1/(Q+N_1) of the interfering power Q is treated as 'residual' noise. Thus, unlike Costa's result [1], the capacity in this case depends on the power Q of the interfering source: for a fixed N_1, as Q → ∞, the capacity decreases and approaches C(P/(N_1 + N_0)).
Multiple Independent Observations: Let there be n_1 independent observations M_1, M_2, . . . , M_{n_1} of S at the transmitter and n_2 independent observations M_{n_1+1}, M_{n_1+2}, . . . , M_{n_1+n_2} at the receiver. It can easily be shown that the capacity in this case is given by C(P/(μ̄Q + N_0)), where μ̄ = 1/(1 + Q/N_1 + Q/N_2 + ··· + Q/N_{n_1+n_2}) and N_1, N_2, . . . , N_{n_1+n_2} are the variances of the Gaussian noise variables corresponding to the n_1 + n_2 observations. The proof involves calculating maximum likelihood estimates (MLE) of the interference at both the transmit and receive nodes and using these estimates in Theorem 1. To avoid repetitive derivations, the proof is omitted.
It is easy to see that the capacity expression is symmetric in the noise variances at the transmitter and receiver. In other words, having all the n 1 + n 2 observations at the transmitter would result in the same capacity. Thus, the observations of S made at the transmitter and the receiver are equivalent in achievable rate, as long as the corrupting Gaussian noises have the same statistics.
In this section, we assumed non-causal knowledge of the interference at the transmitter and receiver nodes. In the next section, we propose a simple and practical transmission scheme that uses causal knowledge of the interference to increase the achievable rate.

III. APPLYING DPC TO A COGNITIVE CHANNEL

In the following we construct a simple practical scheme in which nodes C and D obtain causal, noisy estimates of the signal being sent from node A. Using these estimates and Theorem 1, the nodes cancel out a part of the interference to achieve a higher transmission rate, as follows:

R_{CD} = \begin{cases}
C\!\left(\dfrac{|h_{CD}|^2 P_C}{N_D}\right) & \text{if } |h_{AD}|^2 \ge \dfrac{P_C|h_{CD}|^2+N_D}{P_A}(e^{2R}-1), \\[4pt]
C\!\left(\dfrac{|h_{CD}|^2 P_C}{N_D + P_A|h_{AD}|^2}\right) & \text{if } |h_{AD}|^2 \le \dfrac{N_D}{P_A}(e^{2R}-1) \text{ and } |h_{AC}|^2 \le \dfrac{N_C}{P_A}(e^{2R}-1), \\[4pt]
C\!\left(\dfrac{|h_{CD}|^2 P_C}{\mu_r|h_{AD}|^2 P_A + N_D}\right) & \text{if } |h_{AC}|^2 \le \dfrac{N_C}{P_A}(e^{2R}-1) \text{ and } \dfrac{P_C|h_{CD}|^2+N_D}{P_A}(e^{2R}-1) \ge |h_{AD}|^2 \ge \dfrac{N_D}{P_A}(e^{2R}-1), \\[4pt]
\left(1-\dfrac{m}{n}\right) C\!\left(\dfrac{|h_{CD}|^2 P_C\,(n/(n-m))}{\mu_t|h_{AD}|^2 P_A + N_D}\right) & \text{if } |h_{AC}|^2 \ge \dfrac{N_C}{P_A}(e^{2R}-1) \text{ and } |h_{AD}|^2 \le \dfrac{N_D}{P_A}(e^{2R}-1), \\[4pt]
\left(1-\dfrac{m}{n}\right) C\!\left(\dfrac{|h_{CD}|^2 P_C\,(n/(n-m))}{\mu_{tr}|h_{AD}|^2 P_A + N_D}\right) & \text{if } |h_{AC}|^2 \ge \dfrac{N_C}{P_A}(e^{2R}-1) \text{ and } \dfrac{P_C|h_{CD}|^2+N_D}{P_A}(e^{2R}-1) \ge |h_{AD}|^2 \ge \dfrac{N_D}{P_A}(e^{2R}-1).
\end{cases}    (14)

Proof: Consider the various cases as follows:
1. Let |h_{AD}|^2 ≥ ((P_C|h_{CD}|^2 + N_D)/P_A)(e^{2R} − 1). Consider the multiple access channel from nodes A, C to node D. Clearly, node D can decode the signal transmitted by node A by treating the signal from node C as noise. Hence, it can easily subtract this signal from the received signal, and node C can achieve its rate upper bound C(P_C|h_{CD}|^2/N_D).
2. Consider the case |h_{AD}|^2 ≤ (N_D/P_A)(e^{2R} − 1) and |h_{AC}|^2 ≤ (N_C/P_A)(e^{2R} − 1). Now, neither node C nor node D can perfectly decode the signal from node A. Thus, an achievable rate of C(|h_{CD}|^2 P_C/(N_D + P_A|h_{AD}|^2)) for node C is obtained simply by treating the signal from node A as noise at node D.
3. Now consider the case |h_{AC}|^2 ≥ (N_C/P_A)(e^{2R} − 1) and ((P_C|h_{CD}|^2 + N_D)/P_A)(e^{2R} − 1) ≥ |h_{AD}|^2 ≥ (N_D/P_A)(e^{2R} − 1).
Let us assume that node A uses a codebook of size (2^{nR}, n) where each element is i.i.d. Gaussian distributed. The transmitted signal is denoted X_A(i), i = 1, 2, . . . , n. Nodes C and D listen to the signal transmitted by node A for m symbols in each block of n symbols. Based on the received signal, nodes C and D decode the codeword transmitted by node A.
Let P e,C and P e,D denote, respectively, the probability of decoding error at nodes C and D: These error probabilities depend on the channel gains as well as m. In the remaining n − m symbols, nodes C and D use their estimate of X A (i), i = m + 1, . . . n to increase their transmission rate. Using Theorem 1, the achievable rate is given by
r = \frac{1}{2}\left(1 − \frac{m}{n}\right)\log\left(1 + \frac{|h_{CD}|^2 P_C\,(n/(n−m))}{\mu_{tr} |h_{AD}|^2 P_A + N_D}\right),    (15)
where
\frac{1}{\mu_{tr}} = 1 + \frac{|h_{AD}|^2 P_A}{N_1} + \frac{|h_{AD}|^2 P_A}{N_2}.    (16)
The transmit power at node C is increased over the n − m symbols that it transmits in order to meet the average power constraint P_C. The variances of the errors in the estimates of X_A at nodes C and D are denoted N_1 and N_2, respectively. Because an i.i.d. Gaussian codebook is used, N_1 = 2P_{e,C}P_A|h_{AD}|^2 and N_2 = 2P_{e,D}P_A|h_{AD}|^2. The values of P_{e,C} and P_{e,D} can be obtained using the theory of error exponents. Specifically, using the random coding bound, we obtain
P_{e,C} ≤ exp(−mE_C(R)) and P_{e,D} ≤ exp(−nE_D(R)),    (17)
where E_C(R) and E_D(R) represent the random coding exponents. E_C(R) is derived in [9] and is reproduced in (18) for easy reference (E_D(R) is defined similarly). In (18), A_1 = |h_{AC}|^2 P_A/N_C, β = exp(2R), γ = 0.5(1 + A_1/2 + \sqrt{1 + A_1^2/4}), and δ = 0.5 log(0.5 + A_1/4 + 0.5\sqrt{1 + A_1^2/4}).

E_C(R) = \begin{cases}
0 & \text{if } R > C(|h_{AC}|^2 P_A/N_C), \\[4pt]
\dfrac{A_1}{4\beta}\left[(\beta + 1) − (\beta − 1)\sqrt{1 + \dfrac{4\beta}{A_1(\beta−1)}}\right] + \dfrac{1}{2}\log\left[\beta − \dfrac{A_1(\beta−1)}{2}\left(\sqrt{1 + \dfrac{4\beta}{A_1(\beta−1)}} − 1\right)\right] & \text{if } \delta \le R \le C(|h_{AC}|^2 P_A/N_C), \\[4pt]
1 − \gamma + \dfrac{A_1}{2} + \dfrac{1}{2}\log\left(\gamma − \dfrac{A_1}{2}\right) + \dfrac{1}{2}\log(\gamma) − R & \text{if } R < \delta.
\end{cases}    (18)

Substituting N_1 and N_2 into (16), one obtains the rate given in (14).
Note that there is no constraint that node C must use codes of length n − m just because node A uses codes of length n. Node C can code over multiple codewords of A to achieve its desired probability of error.
The selection of m critically affects the achievable rate. On the one hand, increasing m leaves a smaller fraction of time available for actual data communication between nodes C and D, which decreases the rate. On the other hand, increasing m improves the decoding of node A's signal at nodes C and D, reducing P_{e,C} and P_{e,D} and increasing the achievable rate. The optimal value of m can be obtained by setting the derivative of (15) to 0. Due to the analytical intractability, we resort to simple numerical optimization to find the optimal value of m: for a given n, we evaluate the rate r_{CD} for all values of m = 1, 2, . . . , n and pick the largest value. We are currently trying to derive analytical expressions for the optimum value of m.
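A sketch of this search (ours, not the authors' code): we treat the error exponents E_C(R) and E_D(R) as given functions — the placeholder values below are hypothetical, standing in for (18) — and plug the resulting error-probability bounds into (15)–(16).

```python
import math

def causal_rate(m, n, R, PA, PC, ND, hAD2, hCD2, E_C, E_D):
    """Achievable rate (15) for a listening length m, with the random-coding
    bounds P_eC <= exp(-m*E_C(R)), P_eD <= exp(-n*E_D(R)) and the residual
    estimation noises N1 = 2*P_eC*PA*|hAD|^2, N2 = 2*P_eD*PA*|hAD|^2."""
    PeC = max(math.exp(-m * E_C(R)), 1e-300)   # floor avoids underflow to 0
    PeD = max(math.exp(-n * E_D(R)), 1e-300)
    N1 = 2.0 * PeC * PA * hAD2
    N2 = 2.0 * PeD * PA * hAD2
    mu_tr = 1.0 / (1.0 + hAD2 * PA / N1 + hAD2 * PA / N2)   # eq. (16)
    snr = hCD2 * PC * (n / (n - m)) / (mu_tr * hAD2 * PA + ND)
    return 0.5 * (1.0 - m / n) * math.log(1.0 + snr)

def best_m(n, **kw):
    """Exhaustive search over m = 1, ..., n-1 (m = n leaves no time to transmit)."""
    return max(range(1, n), key=lambda m: causal_rate(m, n, **kw))

# Hypothetical parameters and error exponents, for illustration only.
params = dict(R=0.5, PA=10.0, PC=2.0, ND=1.0, hAD2=0.25, hCD2=1.0,
              E_C=lambda R: 0.3, E_D=lambda R: 0.2)
for n in (50, 200, 1000):
    m = best_m(n, **params)
    rate = causal_rate(m, n, **params)
    print(f"n = {n:4d}: optimal m = {m:3d}, rate = {rate:.4f} nats/use")
```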
4. Let |h_{AC}|^2 ≤ (N_C/P_A)(e^{2R} − 1) and ((P_C|h_{CD}|^2 + N_D)/P_A)(e^{2R} − 1) ≥ |h_{AD}|^2 ≥ (N_D/P_A)(e^{2R} − 1). In this case, the transmitter node C cannot decode node A's signal. However, node D uses all n received symbols to first decode node A's signal (with a certain error probability) and then cancel its effect from the received signal. Subsequently, node D decodes node C's signal, and the achievable rate is obtained from Theorem 1.
5. Finally, let |h_{AC}|^2 ≥ (N_C/P_A)(e^{2R} − 1) and |h_{AD}|^2 ≤ (N_D/P_A)(e^{2R} − 1). In this case, node D cannot decode node A's signal. However, node C uses the first m received symbols to decode node A's signal (with a certain error probability) and then employs the noisy DPC transmission strategy. Subsequently, the achievable rate is obtained from Theorem 1.
A. Numerical Results
In our numerical results we fix the parameter values P_A = 10, P_C = 2, N_C = N_D = 1. For simplicity we fix |h_{CD}| = 1 and vary h_{AC} and h_{AD}. Fig. 3 shows the variation of the achievable rate with m for different values of n. As n increases, the fractional penalty on the rate for larger m is offset by the gains due to better decoding; thus, the optimum value of m increases. However, it turns out that the optimum ratio m/n decreases as n increases. We are currently trying to analytically compute the limit to which the optimum m converges as n → ∞.

Fig. 4 shows the variation of the achievable rate r_{CD} with h_{AD} for different values of h_{AC}. Notice the nonmonotonic variation of r_{CD} with h_{AD}, which can be explained as follows. First consider the case where h_{AC} is small. In this case, the transmitter cannot reliably decode node A's signal. If, in addition, h_{AD} is also small, then node D cannot decode node A's signal either. Thus, as h_{AD} increases, the interference of node A at node D increases and the achievable rate r_{CD} decreases. Now, as h_{AD} increases beyond a certain value, node D can begin to decode node A's signal, and the probability of error is captured by Gallager's error exponents. In this scenario, as h_{AD} increases, the error probability decreases and node D can cancel out more and more of the interference from node A. Consequently, r_{CD} increases. Similar qualitative behavior occurs for other values of h_{AC}. However, for large h_{AC}, node C can decode (with some errors) the signal from node A and then use the noisy DPC scheme to achieve higher rates r_{CD}. Notice also that, as explained before, for large h_{AD} the outer bound on the rate is achieved for all values of h_{AC}.
The variation of r_{CD} with h_{AC} is given in Fig. 5. First consider the case |h_{AD}| = 0.2. In this case, node D cannot decode the signal of node A reliably. For small values of |h_{AC}|, node C also cannot decode node A's signal; hence, the achievable rate equals the lower bound R_{CD-lb}. As |h_{AC}| increases, node C can begin to decode node A's signal and cancel out a part of the interference using the noisy DPC scheme; hence r_{CD} begins to increase. Similar behavior is observed for |h_{AD}| = 0.6. However, when |h_{AD}| = 0.9, node D can decode node A's signal with some errors and cancel out part of the interference. Hence, in this case, even for small values of |h_{AC}| the achievable rate r_{CD} is greater than the lower bound. As before, r_{CD} increases with |h_{AC}|, since node C can cancel out an increasing portion of the interference using the noisy DPC technique. Note, however, that a larger h_{AD} causes more interference at node D, which is reflected in the decrease of the lower bound. Thus, for a given |h_{AC}| the achievable rate can be lower or higher depending on the value of |h_{AD}|.
| 4,165 |
0901.2934
|
1512489499
|
In this paper, we first consider a channel that is contaminated by two independent Gaussian noises @math and @math . The capacity of this channel is computed when independent noisy versions of @math are known to the transmitter and/or receiver. It is shown that the channel capacity is greater than the capacity when @math is completely unknown, but is less than the capacity when @math is perfectly known at the transmitter or receiver. For example, if there is one noisy version of @math known at the transmitter only, the capacity is @math , where @math is the input power constraint and @math is the power of the noise corrupting @math . We then consider a Gaussian cognitive interference channel (IC) and propose a causal noisy dirty paper coding (DPC) strategy. We compute the achievable region using this noisy DPC strategy and quantify the regions when it achieves the upper bound on the rate.
|
Following Costa's work, several extensions of DPC have been studied, e.g., colored Gaussian noise @cite_2 , arbitrary distributions of @math @cite_8 , and deterministic sequences @cite_5 . The case when @math is perfectly known to the encoder and a noisy version is known to the decoder is considered in @cite_9 , mainly focusing on discrete memoryless channels. The only result in @cite_9 for the Gaussian channel reveals no additional gain due to the presence of the noisy estimate at the decoder, since perfect knowledge is available at the encoder and DPC can be used. In contrast, in this paper we study the case when only noisy knowledge of @math is available at both transmitter and receiver.
|
{
"abstract": [
"We expand Costa's (1983) writing on dirty paper model to consider general distributions on the two sources of additive noise - one known non-causally to the encoder. We show that, under certain conditions, the capacity is unaffected by the known noise if and only if the unknown noise is Gaussian.",
"We consider the generalized dirty-paper channel Y=X+S+N,E X sup 2 spl les P sub X , where N is not necessarily Gaussian, and the interference S is known causally or noncausally to the transmitter. We derive worst case capacity formulas and strategies for \"strong\" or arbitrarily varying interference. In the causal side information (SI) case, we develop a capacity formula based on minimum noise entropy strategies. We then show that strategies associated with entropy-constrained quantizers provide lower and upper bounds on the capacity. At high signal-to-noise ratio (SNR) conditions, i.e., if N is weak relative to the power constraint P sub X , these bounds coincide, the optimum strategies take the form of scalar lattice quantizers, and the capacity loss due to not having S at the receiver is shown to be exactly the \"shaping gain\" 1 2log(2 spl pi e 12) spl ap 0.254 bit. We extend the schemes to obtain achievable rates at any SNR and to noncausal SI, by incorporating minimum mean-squared error (MMSE) scaling, and by using k-dimensional lattices. For Gaussian N, the capacity loss of this scheme is upper-bounded by 1 2log2 spl pi eG( spl Lambda ), where G( spl Lambda ) is the normalized second moment of the lattice. With a proper choice of lattice, the loss goes to zero as the dimension k goes to infinity, in agreement with the results of Costa. These results provide an information-theoretic framework for the study of common communication problems such as precoding for intersymbol interference (ISI) channels and broadcast channels.",
"This paper examines the problems of dirty-paper and dirty-tape coding with some additional side information (SI) at the decoder. In particular we focus on the situation in which the decoder SI is a noisy version of the SI at the encoder. We show that noisy SI at the decoder is advantageous for many channels in terms of capacity and, in general, the optimal coding strategy needs to take it into account. In particular, the capacities of the binary dirty-paper and dirty-tape channels are computed when the decoder SI is modeled as the output of a binary symmetric channel with the encoder SI as input. Moreover, the advantage offered by the decoder SI on the minimum achievable Eb NO in the dirty-tape scenario with Gaussian interference is demonstrated",
"A Gaussian channel when corrupted by an additive Gaussian interfering signal that is not necessarily stationary or ergodic, but whose complete sample sequence is known to the transmitter, has the same capacity as if the interfering signal were not present."
],
"cite_N": [
"@cite_8",
"@cite_5",
"@cite_9",
"@cite_2"
],
"mid": [
"2138078601",
"2143664144",
"2533315451",
"2168015527"
]
}
|
Noisy DPC and Application to a Cognitive Channel
|
| 4,165 |
0901.2962
|
2950849145
|
This paper develops a theory for group Lasso using a concept called strong group sparsity. Our result shows that group Lasso is superior to standard Lasso for strongly group-sparse signals. This provides a convincing theoretical justification for using group sparse regularization when the underlying group structure is consistent with the data. Moreover, the theory predicts some limitations of the group Lasso formulation that are confirmed by simulation studies.
|
In @cite_13 , the authors attempted to derive a bound on the number of samples needed to recover block sparse signals, where the coefficients in each block are either all zero or all nonzero. In our terminology, this corresponds to the case of group sparsity with equal size groups. The algorithm considered there is a special case of ) with @math . However, their result is very loose, and does not demonstrate the advantage of group Lasso over standard Lasso.
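For concreteness, here is a minimal sketch (ours, not taken from either paper) of the block soft-thresholding step that underlies group Lasso estimators for this kind of block-sparse model: a whole group is set to zero unless its l2 norm exceeds the threshold.

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of lam * sum_j ||beta_{G_j}||_2: shrink each group
    toward zero, and zero out any group whose l2 norm is at most lam."""
    out = np.zeros_like(beta)
    for g in groups:                       # g: array of indices forming one group
        norm = np.linalg.norm(beta[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * beta[g]
    return out

# Toy example with two equal-size groups: only the first survives thresholding.
beta = np.array([2.0, -1.5, 0.5, 0.1, -0.05, 0.02])
groups = [np.arange(0, 3), np.arange(3, 6)]
print(group_soft_threshold(beta, groups, lam=0.4))
```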
|
{
"abstract": [
"Let A be an M by N matrix (M 1 - 1 d, and d = Omega(log(1 isin) isin3) . The relaxation given in (*) can be solved in polynomial time using semi-definite programming."
],
"cite_N": [
"@cite_13"
],
"mid": [
"2098996169"
]
}
|
The Benefit of Group Sparsity
| 0 |
|
0901.2962
|
2950849145
|
This paper develops a theory for group Lasso using a concept called strong group sparsity. Our result shows that group Lasso is superior to standard Lasso for strongly group-sparse signals. This provides a convincing theoretical justification for using group sparse regularization when the underlying group structure is consistent with the data. Moreover, the theory predicts some limitations of the group Lasso formulation that are confirmed by simulation studies.
|
Finally, we shall mention that independent of the authors, results similar to those presented in this paper have also been obtained in @cite_1 with a similar technical analysis. However, while our paper studies the general group Lasso formulation, only the special case of multi-task learning is considered in @cite_1 .
|
{
"abstract": [
"We study the problem of estimating multiple linear regression equations for the purpose of both prediction and variable selection. Following recent work on multi-task learning [2008], we assume that the regression vectors share the same sparsity pattern. This means that the set of relevant predictor variables is the same across the different equations. This assumption leads us to consider the Group Lasso as a candidate estimation method. We show that this estimator enjoys nice sparsity oracle inequalities and variable selection properties. The results hold under a certain restricted eigenvalue condition and a coherence condition on the design matrix, which naturally extend recent work in [2007], Lounici [2008]. In particular, in the multi-task learning scenario, in which the number of tasks can grow, we are able to remove completely the effect of the number of predictor variables in the bounds. Finally, we show how our results can be extended to more general noise distributions, of which we only require the variance to be finite."
],
"cite_N": [
"@cite_1"
],
"mid": [
"1800306869"
]
}
|
The Benefit of Group Sparsity
| 0 |
|
0901.1848
|
2953354114
|
We consider solutions to the equation f = h^r for polynomials f and h and integer r > 1. Given a polynomial f in the lacunary (also called sparse or super-sparse) representation, we first show how to determine if f can be written as h^r and, if so, to find such an r. This is a Monte Carlo randomized algorithm whose cost is polynomial in the number of non-zero terms of f and in log(deg f), i.e., polynomial in the size of the lacunary representation, and it works over GF(q)[x] (for large characteristic) as well as Q[x]. We also give two deterministic algorithms to compute the perfect root h given f and r. The first is output-sensitive (based on the sparsity of h) and works only over Q[x]. A sparsity-sensitive Newton iteration forms the basis for the second approach to computing h, which is extremely efficient and works over both GF(q)[x] (for large characteristic) and Q[x], but depends on a number-theoretic conjecture. Work of Erdos, Schinzel, Zannier, and others suggests that both of these algorithms are unconditionally polynomial-time in the lacunary size of the input polynomial f. Finally, we demonstrate the efficiency of the randomized detection algorithm and the latter perfect root computation algorithm with an implementation in the C++ library NTL.
|
Newton iteration has also been applied to finding perfect polynomial roots of lacunary (or other) polynomials given by straight-line programs. @cite_12 shows how to compute a straight-line program for @math , given a straight-line program for @math and the value of @math . This method has complexity polynomial in the size of the straight-line program for @math , and in the degree of @math , and in particular is effective for large @math . We do not address the powerful generality of straight-line programs, but do avoid the dependence on the degree of @math .
|
{
"abstract": [
"Three theorems are presented that establish polynomial straight-line complexity for certain operations on polynomials given by straight-line programs of unbounded input degree. The first theorem shows how to compute a higher order partial derivative in a single variable. The other two theorems impose the degree of the output polynomial as a parameter of the length of the output program. First it is shown that if a straight-line program computes an arbitrary power of a multivariate polynomial, that polynomial also admits a polynomial bounded straight-line computation. Second, any factor of a multivariate polynomial given by a division-free straight-line program with relatively prime co-factor also admits a straight-line computation of length polynomial in the input length and the degree of the factor. This result is based on a new Hensel lifting process, one where only one factor image is lifted back to the original factor. As an application we get that the greatest common divisor of polynomials given by a division-free straight-line program has polynomial straight-line complexity in terms of the input length and its own degree."
],
"cite_N": [
"@cite_12"
],
"mid": [
"2040922227"
]
}
|
Detecting lacunary perfect powers and computing their roots
|
In this paper we consider the problem of determining whether a polynomial f equals h^r for some other polynomial h and positive integer r, and if so, finding h and r. The novel aspect of the current work is that our algorithms are efficient for the lacunary (also called sparse or supersparse) representation of polynomials. Specifically, we assume
f = Σ_{1≤i≤t} c_i x^{e_i} ∈ F[x_1, . . . , x_ℓ],    (1.1)
with ‖e_i‖_1 = Σ_{1≤j≤ℓ} e_{ij}. We say f is t-sparse and write τ(f) = t. We present algorithms which require time polynomial in τ(f) and log deg f. Computational work on lacunary polynomials has proceeded steadily for the past three decades. From the dramatic initial intractability results of Plaisted (1977, 1984), through progress in algorithms (e.g., Ben-Or and Tiwari (1988), Shparlinski (2000), and Kaltofen and Lee (2003)) and complexity (e.g., Karpinski and Shparlinski (1999), Quick (1986), and von zur Gathen et al. (1993)), to recent breakthroughs in root finding and factorization (Cucker et al., 1999; Kaltofen and Koiran, 2006; Lenstra, 1999), these works have important theoretical and practical consequences. The lacunary representation is arguably more intuitive than the standard dense representation, and in fact corresponds to the default linked-list representation of polynomials in modern computer algebra systems such as Maple and Mathematica.
We will always assume that τ (f ) ≥ 2; otherwise f = x n , and determining whether f is a perfect power is equivalent to determining whether n ∈ N is composite, and to factoring n if we wish to produce r dividing n such that f = (x n/r ) r . Surprisingly, the intractability of the latter problem is avoided when τ (f ) ≥ 2.
We first consider detecting perfect powers and computing the power r for the univariate case
f = Σ_{1≤i≤t} c_i x^{e_i} ∈ F[x],    (1.2)
where 0 ≤ e_1 < e_2 < · · · < e_t = deg f. Two cases for the field F are handled: the integers and finite fields of characteristic p greater than the degree of f. When f ∈ Z[x], our algorithms also require time polynomial in log ‖f‖_∞, where ‖f‖_∞ = max_{1≤i≤t} |c_i| (for f ∈ Q[x], we simply work with f̄ = cf ∈ Z[x] for the smallest c ∈ Z\{0}). This reflects the bit-length of coefficients encountered in the computations. Efficient techniques will also be presented for reducing the multivariate case to the univariate one, and for computing a root h such that f = h^r.
Our contributions
Given a lacunary polynomial f ∈ Z[x] with τ(f) ≥ 2 and degree n, we first present an algorithm to compute an integer r > 1 such that f = h^r for some h ∈ Z[x], or determine that no such r exists. The algorithm requires Õ(t log^2 ‖f‖_∞ log^2 n) machine operations, and is probabilistic of the Monte Carlo type. That is, for any input, on any execution the probability of producing an incorrect answer is strictly less than 1/2, assuming the ability to generate random bits at unit cost. This possibility of error can be made arbitrarily small with repeated executions.
A similar algorithm is presented to answer Shparlinski's open question on perfect powers of lacunary polynomials over finite fields, at least for the case of large characteristic. That is, when the characteristic p of a finite field F is greater than deg f, we provide a Monte Carlo algorithm that determines if there exist an h ∈ F[x] and r such that f = h^r, and finds r if it exists, which requires Õ(t log^2 n) operations in F.
An implementation of our algorithm over Z in NTL indicates excellent performance on sparse inputs when compared to a fast implementation based on previous technology (a variable-precision Newton iteration to find a power-series rth root of f , followed by a Monte Carlo correctness check).
Actually computing h such that f = h^r is a somewhat trickier problem, at least insofar as bounds on the sparsity of h have not been completely resolved. Conjectures of Schinzel (1987) and recent work of Zannier (2007) suggest that, provided the characteristic of F is zero or sufficiently large, h is lacunary as well. To avoid this lack of sufficient theoretical understanding, we develop an algorithm which requires time polynomial in both the representation size of the input f (i.e., τ(f), log n and log ‖f‖_∞) and the representation size of the output (i.e., τ(h) and log ‖f‖_∞). This algorithm works by projecting f into a sequence of small cyclotomic fields. Images of the desired h in these fields are discovered by factorization over an algebraic extension. Finally, a form of interpolation of the sparse exponents is used to recover the global h. The algorithm is probabilistic of the Monte Carlo type. While this algorithm is polynomial time, we do not claim it will be efficient in practice. Instead, we also present and analyze a simpler alternative based on a kind of Newton iteration. Subject to what we believe is a reasonable conjecture, this is shown to be very fast.
The remainder of the paper is arranged as follows. In Section 2 we present the main theoretical tool for our algorithm to determine if f = h r , and to find r. We also show how to reduce the multivariate problem to the univariate one. In Section 3 we show how to compute h such that f = h r (given that such h and r exist). Finally, in Section 4, we present an experimental implementation of some of our algorithms in the C++ library NTL.
An earlier version of some of this work was presented in the ISSAC 2008 conference (Giesbrecht and Roche, 2008).
Testing for perfect powers
In this section we describe a method to determine if a lacunary polynomial f ∈ F[x] is a perfect power. That is, do there exist h ∈ F[x] and r > 1 such that f = h r ? The polynomial h need not be lacunary, though some conjectures suggest it may well have to be. We will find r, but not h.
We first describe algorithms to test if an f ∈ F[x] is an rth power of some polynomial h ∈ F[x], where f and r are both given and r is assumed to be prime. We present and analyze variants that work over finite fields F q and over Z. In fact, these algorithms for given r are for black-box polynomials: they only need to evaluate f at a small number of points. That this evaluation can be done quickly is a property of lacunary and other classes of polynomials.
For lacunary f we then show that, in fact, if h exists at all then r must be small unless f = x n . And if f is a perfect power, then there certainly exists a prime r such that f is an rth power. So in fact the restrictions that r is small and prime are sufficient to cover all nontrivial cases, and our method is complete.
Detecting given rth powers
Our main tool in this work is the following theorem which says that, with reasonable probability, a polynomial is an rth power if and only if the modular image of an evaluation in a specially constructed finite field is an rth power.
Theorem 2.1. Let ̺ ∈ Z be a prime power and r ∈ N a prime dividing ̺ − 1. Suppose that f ∈ F_̺[x] has degree n ≤ 1 + √̺/2 and is not a perfect rth power in F_̺[x]. Then
R_f^{(r)} = #{c ∈ F_̺ : f(c) ∈ F_̺ is an rth power} ≤ 3̺/4.
Proof. The rth powers in F_̺ form a subgroup H of F_̺^* of index r and size (̺−1)/r in F_̺^*. Also, a ∈ F_̺^* is an rth power if and only if a^{(̺−1)/r} = 1. We use the method of "completing the sum" from the theory of character sums. We refer to Lidl and Niederreiter (1983), Chapter 5, for an excellent discussion of character sums. By a multiplicative character we mean a homomorphism χ : F_̺^* → C which necessarily maps F_̺^* onto the unit circle. As usual we extend our multiplicative characters χ so that χ(0) = 0, and define the trivial character χ_0(a) to be 0 when a = 0 and 1 otherwise. For any a ∈ F_̺^*,
\frac{1}{r} \sum_{χ^r = χ_0} χ(a) = \begin{cases} 1 & \text{if } a ∈ H, \\ 0 & \text{if } a ∉ H, \end{cases}
where χ ranges over all the multiplicative characters of order dividing r on F_̺^* — that is, all characters that restrict to the trivial character on the subgroup H. Thus
R_f^{(r)} = \sum_{a ∈ F_̺^*} \frac{1}{r} \sum_{χ^r = χ_0} χ(f(a)) = \frac{1}{r} \sum_{χ^r = χ_0} \sum_{a ∈ F_̺^*} χ(f(a)) ≤ \frac{̺}{r} + \frac{1}{r} \sum_{χ^r = χ_0,\; χ ≠ χ_0} \left| \sum_{a ∈ F_̺} χ(f(a)) \right|.
Here we use the obvious fact that
\sum_{a ∈ F_̺^*} χ_0(f(a)) ≤ \sum_{a ∈ F_̺} χ_0(f(a)) = ̺ − d ≤ ̺,
where d is the number of distinct roots of f in F_̺. We next employ the powerful theorem of Weil (1948) on character sums with polynomial arguments (see Theorem 5.41 of Lidl and Niederreiter (1983)), which shows that if f is not a perfect rth power of another polynomial, and χ has order r > 1, then
\left| \sum_{a ∈ F_̺} χ(f(a)) \right| ≤ (n − 1)̺^{1/2} ≤ \frac{̺}{2},
using the fact that we insisted n ≤ 1 + √̺/2. Summing over the r − 1 non-trivial characters of order r, we deduce that
R_f^{(r)} ≤ \frac{̺}{r} + \frac{r−1}{r} · \frac{̺}{2} ≤ \frac{3̺}{4}. □
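A brute-force sanity check of this bound over a small prime field (our sketch; here ̺ = p is prime, so F_̺ = Z/pZ, and the polynomial is chosen so that it is not a 5th power and its degree meets the hypothesis n ≤ 1 + √̺/2):

```python
# Count R_f^{(r)} = #{c in F_p : f(c) is an r-th power} and compare with 3p/4.
p, r = 101, 5                        # r divides p - 1 = 100
assert (p - 1) % r == 0

def f(c):                            # degree 6 <= 1 + sqrt(101)/2, and not a 5th power
    return (c**6 + 3 * c**2 + 7) % p

count = sum(1 for c in range(p)
            if (v := f(c)) == 0 or pow(v, (p - 1) // r, p) == 1)
print(f"R_f^(r) = {count}, bound 3p/4 = {3 * p / 4:.2f}")
```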
Certifying specified powers over F q [x]
Theorem 2.1 allows us to detect when a polynomial f ∈ F_̺[x] is a perfect rth power, for known r dividing ̺ − 1: choose a random α ∈ F_̺ and evaluate ξ = f(α)^{(̺−1)/r} ∈ F_̺. Recall that ξ = 1 if and only if f(α) is an rth power.
• If f is an rth power, then clearly f(α) is an rth power and we always have ξ = 1.
• If f is not an rth power, Theorem 2.1 demonstrates that for at least 1/4 of the elements of F_̺, f(α) is not an rth power. Thus, for α chosen randomly from F_̺ we would expect ξ ≠ 1 with probability at least 1/4.

For a polynomial f ∈ F_q[x] over an arbitrary finite field F_q, where q is a prime power such that q − 1 is not divisible by r, we proceed by constructing an extension field F_{q^{r−1}} over F_q. From Fermat's Little Theorem and the fact that r ∤ q, we know r | (q^{r−1} − 1), and we can proceed as above. We now present and analyze this more formally.
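Before the formal statement, here is a small Python sketch of this randomized test in its simplest setting — a prime field F_p with r | p − 1 (our illustration; the function name, the sparse {exponent: coefficient} input format, and the error target eps are ours). The algorithm below handles a general F_q by moving to the extension F_{q^{r−1}}.

```python
import math
import random

def is_probable_rth_power(coeffs, p, r, eps=1e-3):
    """Monte Carlo test: is the sparse polynomial f (dict exponent -> coefficient,
    over F_p) an r-th power in F_p[x]?  Assumes r is a prime with r | p - 1 and
    deg f <= 1 + sqrt(p)/2 as in Theorem 2.1.  Always returns True when f is an
    r-th power; otherwise returns False with probability >= 1 - eps, since each
    trial detects a non-power with probability >= 1/4."""
    assert (p - 1) % r == 0
    trials = math.ceil(math.log(1.0 / eps) / math.log(4.0 / 3.0))
    for _ in range(trials):
        a = random.randrange(p)
        v = sum(c * pow(a, e, p) for e, c in coeffs.items()) % p   # evaluate f(a)
        if v != 0 and pow(v, (p - 1) // r, p) != 1:                # f(a) is not an r-th power
            return False
    return True

p, r = 1021, 5                                             # 5 divides p - 1 = 1020
f_power = {10: 1, 8: 15, 6: 90, 4: 270, 2: 405, 0: 243}    # (x^2 + 3)^5
f_other = {5: 1, 1: 2, 0: 7}                               # x^5 + 2x + 7, not a 5th power
print(is_probable_rth_power(f_power, p, r))   # True
print(is_probable_rth_power(f_other, p, r))   # False with high probability
```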
Algorithm IsPerfectRthPowerGF
Input: A prime power q, f ∈ F_q[x] of degree n ≤ 1 + √q/2, r ∈ N a prime dividing n, and ǫ ∈ R_{>0}.
Output: True if f is the rth power of a polynomial in F_̺[x]; False otherwise.
1: Find an irreducible Γ ∈ F_q[z] of degree r − 1, failing with probability at most ǫ/2
2: ̺ ← q^{r−1}
3: Define F_̺ = F_q[z]/(Γ)
4: m ← 2.5(1 + ⌈log_2(1/ǫ)⌉)
5: for i from 1 to m do
6:     Choose random α ∈ F_̺
7:     ξ ← f(α)^{(̺−1)/r} ∈ F_̺
8:     if ξ ≠ 1 then
9:         return False
10: return True

Notes on IsPerfectRthPowerGF.
To accomplish Step 1, a number of fast probabilistic methods are available. We employ the algorithm of Shoup (1994). This algorithm requires O((r^2 log r + r log q) log r log log r) operations in F_q. It is probabilistic of the Las Vegas type, and we assume that it always stops within the number of operations specified, and returns the correct answer with probability at least 1/2 and "Fail" otherwise (it never returns an incorrect answer). The algorithm is actually presented in Shoup (1994) as always finding an irreducible polynomial, but requiring expected time as above; by not iterating indefinitely, our restatement allows for a Monte Carlo analysis in what follows. To obtain an irreducible Γ with failure probability at most ǫ/2 we run (our modified) Shoup's algorithm 1 + ⌈log_2(1/ǫ)⌉ times.
The restriction that n ≤ 1 + √q/2 (or alternatively q ≥ 4(n − 1)²) is not problematic. If this condition is not met, simply extend F_q with an extension of degree ν = ⌈log_q(4(n − 1)²)⌉ and perform the algorithm over F_{q^ν}. At worst, each operation in F_{q^ν} requires O(M(log n)) operations in F_q.
Here we define M(r) as a number of operations in F to multiply two polynomials of degree ≤ r over F, for any field F, or the number of bit operations to multiply two integers with at most r bits. Using classical arithmetic M(r) is O(r 2 ), while using the fast algorithm of Cantor and Kaltofen (1991) we may assume M(r) is O(r log r log log r).
Theorem 2.2. Let q be a prime power, f ∈ F q [x], r ∈ N a prime dividing deg f and ǫ > 0. If f is a perfect rth power the algorithm IsPerfectRthPowerGF always reports this. If f is not a perfect rth power then, on any invocation, this is reported correctly with probability at least 1 − ǫ.
Proof. It is clear from the above discussion that the algorithm always works when f is a perfect power. When f is not a perfect power, each iteration of the loop will obtain ξ ≠ 1 (and hence a correct output) with probability at least 1/4. By iterating the loop m times we ensure that the probability of failure is at most ǫ/2. Adding this to the probability that Shoup's algorithm (for Step 1) fails yields a total probability of failure of at most ǫ. □

Theorem 2.3. On inputs as specified, the algorithm IsPerfectRthPowerGF requires O((rM(r) log r log q) · log(1/ǫ)) operations in F_q plus the cost to evaluate α ↦ f(α) at O(log(1/ǫ)) points α ∈ F_{q^{r−1}}.
Proof. As noted above, Shoup's 1994 algorithm requires O((r 2 log r+r log q) log r log log r) field operations per iteration, which is within the time specified. The main cost of the loop in Steps 4-8 is computing f (α) (̺−1)/r , which requires O(log ̺) or O(r log q) operations in F ̺ using repeated squaring, plus one evaluation of f at a point in F ̺ . Each operation in F ̺ requires O(M(r)) operations in F q , and we repeat the loop O(log(1/ǫ)) times. 2
Corollary 2.4. Given f ∈ F q [x] of degree n with τ (f ) = t, and r ∈ N a prime dividing n, we can determine if f is an rth power with O ((rM(r) log r log q + tM(r) log n) · log(1/ǫ))
operations in F q . When f is an rth power, the output is always correct, while if f is not an rth power, the output is correct with probability at least 1 − ǫ.
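To make the evaluation test concrete, the following is a minimal Python sketch of the inner loop of IsPerfectRthPowerGF, simplified to a prime field F_p with r | p − 1 so that no extension field is needed (the general case builds F_{q^{r−1}} as above); the function name, the (coefficient, exponent) pair representation, and the default trial count are our own choices, not part of the algorithm's specification.

```python
import random

def is_rth_power_modp(terms, p, r, trials=20):
    # terms: lacunary representation of f as (coefficient, exponent) pairs over F_p.
    # Assumes p is prime, r | p - 1, and deg f <= 1 + sqrt(p)/2 so Theorem 2.1 applies.
    # One-sided Monte Carlo: if f is an rth power this always returns True; otherwise
    # each trial exposes a non-rth-power value f(alpha) with probability at least 1/4.
    assert (p - 1) % r == 0
    for _ in range(trials):
        alpha = random.randrange(p)
        value = sum(c * pow(alpha, e, p) for c, e in terms) % p
        if value == 0:
            continue  # f(alpha) = 0 is consistent with f being an rth power
        if pow(value, (p - 1) // r, p) != 1:
            return False  # f(alpha) is provably not an rth power in F_p
    return True
```

Each pass of the loop plays the role of Steps 6–9 above, so m passes drive the one-sided error below (3/4)^m.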
Certifying specified powers over Z[x]
For an integer polynomial f ∈ Z[x], we proceed by working in the homomorphic image of Z in F_p (and then in an extension of that field). We must ensure that the homomorphism preserves the perfect power property we are interested in with high probability. For any polynomial g ∈ F[x], let disc(g) = res(g, g′) be the discriminant of g (the resultant of g and its first derivative). It is well known that g is squarefree if and only if disc(g) ≠ 0. Also define lcoeff(f) as the leading coefficient of f, the coefficient of the highest power of x in f.

Lemma 2.5. Let f ∈ Z[x] and f̄ = f / gcd(f, f′) its squarefree part. Let p be a prime such that p ∤ disc(f̄) and p ∤ lcoeff(f). Then f is a perfect power in Z[x] if and only if f mod p is a perfect power in F_p[x].

Proof. Clearly if f is a perfect power in Z[x], then f mod p is a perfect power in F_p[x]. To show the converse, assume that f = f_1^{s_1} · · · f_m^{s_m} for distinct irreducible f_1, . . . , f_m ∈ Z[x], so f̄ = f_1 · · · f_m. Clearly f ≡ f_1^{s_1} · · · f_m^{s_m} mod p as well, and because p ∤ lcoeff(f) we know deg(f_i mod p) = deg f_i for 1 ≤ i ≤ m. Since p ∤ disc(f̄), f̄ mod p is squarefree (see von zur Gathen and Gerhard (2003), Lemma 14.1), and each of the f_i mod p must be pairwise relatively prime and squarefree for 1 ≤ i ≤ m. Now suppose f mod p is a perfect rth power modulo p. Then we must have r | s_i for 1 ≤ i ≤ m. But this immediately implies that f is a perfect power in Z[x] as well. □
Given any polynomial
g = g 0 + g 1 x + · · · + g m x m ∈ Z[x], we define the height or coefficient ∞-norm of g as g ∞ = max i |g i |.
Similarly, we define the coefficient 1-norm of g as ‖g‖_1 = ∑_i |g_i|, and 2-norm as ‖g‖_2 = (∑_i |g_i|²)^{1/2}. Since f̄ divides f, we can employ the factor bound of Mignotte (1974) to obtain
‖f̄‖_∞ ≤ 2^n ‖f‖_2 ≤ 2^n √n · ‖f‖_∞.
Since disc(f̄) = res(f̄, f̄′) is the determinant of a matrix of size at most (2n − 1) × (2n − 1), Hadamard's inequality implies
|disc(f̄)| ≤ (2^n n^{1/2} ‖f‖_∞)^{n−1} (2^n n^{3/2} ‖f‖_∞)^n < 2^{2n²} n^{2n} · ‖f‖_∞^{2n}.
Also observe that |lcoeff(f)| ≤ ‖f‖_∞. Thus, the product disc(f̄) · lcoeff(f) has at most µ = log₂(2^{2n²} n^{2n} ‖f‖_∞^{2n+1}) / ⌊log₂(4(n − 1)²)⌋ prime factors greater than 4(n − 1)² (we require the lower bound 4(n − 1)² to employ Theorem 2.1 without resorting to field extensions). Choose a γ ≥ 4(n − 1)² such that the number of primes π(2γ) − π(γ) between γ and 2γ is at least 4µ + 1. By Rosser and Schoenfeld (1962), π(2γ) − π(γ) ≥ 2γ/(5 ln γ) for γ ≥ 59. Thus if γ ≥ max{14µ ln(14µ), 100}, then a random prime not equal to r in the range γ . . . 2γ divides lcoeff(f) · disc(f̄) with probability at most 1/4. Primes p of this size have only log₂ p ∈ O(log n + log log ‖f‖_∞) bits.
Algorithm IsPerfectRthPowerZ
Input: f ∈ Z[x] of degree n; r ∈ N a prime dividing n; ǫ ∈ R_{>0}
Output: True if f is the rth power of a polynomial in Z[x]; False otherwise
1: µ ← log₂(2^{2n²} n^{2n} ‖f‖_∞^{2n+1}) / ⌊log₂(4(n − 1)²)⌋
2: γ ← max{14µ ln(14µ), 4(n − 1)², 100}
3: for i from 1 to ⌈log₂(1/ǫ)⌉ do
4:   p ← random prime in the range γ . . . 2γ
5:   if NOT IsPerfectRthPowerGF(p, f mod p, r, 1/4) then
6:     return False
7: return True

Theorem 2.6. Let f ∈ Z[x] of degree n, r ∈ N dividing n and ǫ ∈ R_{>0}. If f is a perfect rth power, the algorithm IsPerfectRthPowerZ always reports this. If f is not a perfect rth power, on any invocation of the algorithm, this is reported correctly with probability at least 1 − ǫ.
Proof. If f is an rth power then so is f mod p for any prime p, and so is any f (α) ∈ F p . Thus, the algorithm always reports that f is an rth power. Now suppose f is not an rth power. If p | disc(f ) it may happen that f mod p is an rth power. This happens with probability at most 1/4 and we will assume that the worst happens in this case. When p ∤ disc(f ), the probability that IsPerfectRthPowerGF incorrectly reports that f is an rth power is also at most 1/4, by our choice of parameter ǫ. Thus, on any iteration of steps 4-6, the probability of finding that f is an rth power is at most 1/2. The probability of this happening ⌈log 2 (1/ǫ)⌉ times is clearly at most ǫ. 2 Theorem 2.7. On inputs as specified, the algorithm IsPerfectRthPowerZ requires O rM(r) log r · M(log n + log log f ∞ ) · (log n + log log f ∞ ) · log(1/ǫ) , or O˜(r 2 (log n+log log f ∞ ) 2 ·log(1/ǫ)) bit operations, plus the cost to evaluate (α, p) → f (α) mod p at O(log(1/ǫ)) points α ∈ F p for primes p with log p ∈ O(log n+log log f ∞ ).
Proof. The number of operations required by each iteration is dominated by Step 5, for which O(rM(r) log r log p) operations in F p is sufficient by Theorem 2.3. Since log p ∈ O(log n + log log f ∞ ) we obtain the final complexity as stated. 2
We obtain the following corollary for t-sparse polynomials in Z[x]. This follows since the cost of evaluating a t-sparse polynomial f ∈ Z[x] modulo a prime p is O(t log f ∞ log p+ t log nM(log p)) bit operations.
Corollary 2.8. Given f ∈ Z[x] of degree n, with τ (f ) = t, and r ∈ N a prime dividing n, we can determine if f is an rth power with O˜ (r 2 log 2 n + t log 2 n + t log f ∞ log n) · log(1/ǫ) bit operations. When f is an rth power, the output is always correct, while if f is not an rth power, the output is correct with probability at least 1 − ǫ.
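A sketch of the corresponding reduction from Z[x], reusing the prime-field routine above; for brevity it samples primes with sympy.randprime and, following the implementation shortcut described in Section 4, insists on p ≡ 1 (mod r) instead of building a degree-(r − 1) extension. The bound γ is passed in as a parameter (computed from µ as in IsPerfectRthPowerZ), and the per-call failure-probability bookkeeping is omitted.

```python
from sympy import randprime

def is_rth_power_over_z(terms, r, gamma, iterations=10):
    # terms: (coefficient, exponent) pairs of f in Z[x]; gamma >= 4*(deg f - 1)**2.
    # One-sided Monte Carlo: True is always reported when f is an rth power in Z[x].
    for _ in range(iterations):
        p = randprime(gamma, 2 * gamma)
        while p % r != 1:                      # resample until r | p - 1
            p = randprime(gamma, 2 * gamma)
        if not is_rth_power_modp([(c % p, e) for c, e in terms], p, r):
            return False                       # certainly not an rth power
    return True                                # rth power with high probability
```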
An upper bound on r.
In this subsection we show that if f = h^r and f ≠ x^n then r must be small. Over Z[x] we show that ‖h‖_2 is small as well. A sufficiently strong result over many fields is demonstrated by Schinzel (1987), Theorem 1, where it is shown that if f has sparsity t ≥ 2 then t ≥ r + 1 (in fact a stronger result is shown involving the sparsity of h as well). This holds when either the characteristic of the ground field of f is zero or greater than deg f.
Here we give a (much) simpler result for polynomials in Z[x], which bounds ‖h‖_2 and is stronger at least in its dependency on t, though it also depends upon the coefficients of f.

Theorem 2.9. Let f, h ∈ Z[x] with deg h = s such that f = h^r for some r ∈ N. Then ‖h‖_2 ≤ ‖f‖_1^{1/r}.

Proof. Let p > n be prime and ζ ∈ C a pth primitive root of unity. Then
‖h‖_2² = ∑_{0≤i≤s} |h_i|² = (1/p) ∑_{0≤i<p} |h(ζ^i)|².
(this follows from the fact that the Discrete Fourier Transform (DFT) matrix is orthogonal). In other words, the average value of |h(ζ i )| 2 for i = 0 . . . p − 1 is h 2 2 , and so there exists a k ∈ {0, . . . , p − 1} with |h(ζ k )| 2 ≥ h 2 2 . Let θ = ζ k . Then clearly |h(θ)| ≥ h 2 . We also note that f (θ) = h(θ) r and |f (θ)| ≤ f 1 , since |θ| = 1. Thus,
‖h‖_2 ≤ |h(θ)| = |f(θ)|^{1/r} ≤ ‖f‖_1^{1/r}. □
The following corollary is particularly useful.
Corollary 2.10. If f ∈ Z[x]
is not of the form x n , and f = h r for some h ∈ Z[x], then
(i) r ≤ 2 log₂ ‖f‖_1.
(ii) τ(h) ≤ ‖f‖_1^{2/r}.
Proof. Part (i) follows since ‖h‖_2 ≥ √2. Part (ii) follows because ‖h‖_2² ≥ τ(h). □
These bounds relate to the sparsity of f since ‖f‖_1 ≤ τ(f) ‖f‖_∞.
Perfect Power Detection Algorithm
We can now complete the perfect power detection algorithm, when we are given only the t-sparse polynomial f (and not r).
Algorithm IsPerfectPowerZ
Input: f ∈ Z[x] of degree n and sparsity t ≥ 2, ǫ ∈ R_{>0}
Output: True and r if f = h^r for some h ∈ Z[x]; False otherwise.
1: P ← {primes r : r | n and r ≤ 2 log₂(t ‖f‖_∞)}
2: for r ∈ P do
3:   if IsPerfectRthPowerZ(f, r, ǫ/#P) then
4:     return True and r
5: return False
Theorem 2.11. If f ∈ Z[x] satisfies f = h^r for some h ∈ Z[x]
, the algorithm IsPerfectPowerZ always returns "True" and returns r correctly with probability at least 1 − ǫ. Otherwise, it returns "False" with probability at least 1 − ǫ. The algorithm requires O˜(t log 2 f ∞ · log 2 (n) · log(1/ǫ)) bit operations.
Proof. From the preceding discussions, we can see that if f is a perfect power, then it must be a perfect rth power for some r ∈ P. So the algorithm must return true on some iteration of the loop. However, it may incorrectly return true too early for an r such that f is not actually an rth power; the probability of this occurring is the probability of error when f is not a perfect power, and is less than ǫ/#P at each iteration. So the probability of error on any iteration is at most ǫ, which is what we wanted.
The complexity result follows from the fact that each r ∈ O(log t + log f ∞ ) and using Corollary 2.8. 2
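Putting the pieces together, here is a sketch of the outer loop: the candidate exponents are the primes r dividing n with r ≤ 2 log₂(t‖f‖_∞), per Corollary 2.10 (the bit-length below slightly over-approximates that logarithm), and each candidate is handed to the Z[x] test sketched earlier; γ is again treated as a precomputed parameter, and the function names are our own.

```python
def candidate_prime_exponents(n, t, height):
    # primes r with r | n and r <= 2*log2(t * height), as in Corollary 2.10
    bound = 2 * (t * height).bit_length()      # over-approximates 2*log2(t*height)
    sieve = [True] * (bound + 1)
    primes = []
    for r in range(2, bound + 1):
        if sieve[r]:
            primes.append(r)
            for multiple in range(r * r, bound + 1, r):
                sieve[multiple] = False
    return [r for r in primes if n % r == 0]

def is_perfect_power_over_z(terms, gamma):
    n = max(e for _, e in terms)               # degree of f
    t = len(terms)                             # sparsity tau(f)
    height = max(abs(c) for c, _ in terms)     # ||f||_inf
    for r in candidate_prime_exponents(n, t, height):
        if is_rth_power_over_z(terms, r, gamma):
            return True, r
    return False, None
```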
For polynomials in F q [x] we use Schinzel's bound that r ≤ t − 1 and obtain the following algorithm.
Algorithm IsPerfectPowerGF
Input: f ∈ F_q[x] of degree n and sparsity t, where the characteristic of F_q is greater than n, and ǫ ∈ R_{>0}
Output: True and r if f = h^r for some h ∈ F_q[x]; False otherwise.
1: P ← {primes r : r | n and r ≤ t}
2: for r ∈ P do
3:   if IsPerfectRthPowerGF(f, r, ǫ/#P) then
4:     return True and r
5: return False
Theorem 2.12. If f = h r for h ∈ F q [x], the algorithm IsPerfectPowerGF always returns "True" and returns r correctly with probability at least 1−ǫ. Otherwise, it returns "False" with probability at least 1 − ǫ. The algorithm requires O˜(t 3 (log q + log n)) operations in F q .
Proof. The proof is equivalent to that of Theorem 2.11, using the complexity bounds in Corollary 2.4. 2
Detecting multivariate perfect powers
In this subsection we examine the problem of detecting multivariate perfect powers. That is, given a lacunary f ∈ F[x 1 , . . . , x ℓ ] of total degree n as in (1.1), we want to determine if f = h r for some h ∈ F[x 1 , . . . , x ℓ ] and r ∈ N. This is done simply as a reduction to the univariate case.
First, given f ∈ F[x_1, . . . , x_ℓ], define the squarefree part f̄ ∈ F[x_1, . . . , x_ℓ] as the squarefree polynomial of highest total degree which divides f. Lemma 2.13. Let f ∈ F[x_1, . . . , x_ℓ] be of total degree n > 0 and let f̄ ∈ F[x_1, . . . , x_ℓ] be the squarefree part of f. Define ∆ = disc_x(f̄(y_1x, . . . , y_ℓx)) = res_x(f̄(y_1x, . . . , y_ℓx), f̄′(y_1x, . . . , y_ℓx)) ∈ F[y_1, . . . , y_ℓ] and Λ = lcoeff_x(f(y_1x, . . . , y_ℓx)) ∈ F[y_1, . . . , y_ℓ] for independent indeterminates x, y_1, . . . , y_ℓ. Assume that a_1, . . . , a_ℓ ∈ F with ∆(a_1, . . . , a_ℓ) ≠ 0 and Λ(a_1, . . . , a_ℓ) ≠ 0. Then f(x_1, . . . , x_ℓ) is a perfect power if and only if f(a_1x, . . . , a_ℓx) ∈ F[x] is a perfect power.
Proof. Clearly if f is a perfect power, then f (a 1 x, . . . , a ℓ x) is a perfect power. To prove the converse, assume that
f = f s1 1 f s2 2 · · · f sm m for irreducible f 1 , . . . , f m ∈ F[x 1 , . . . , x ℓ ].
Then f(y_1x, . . . , y_ℓx) = f_1(y_1x, . . . , y_ℓx)^{s_1} · · · f_m(y_1x, . . . , y_ℓx)^{s_m} and each of the f_i(y_1x, . . . , y_ℓx) are irreducible. Now, since Λ(a_1, . . . , a_ℓ) ≠ 0, we know that deg(f(a_1x, . . . , a_ℓx)) = deg f (the total degree of f). Thus, deg f_i(a_1x, . . . , a_ℓx) = deg f_i for 1 ≤ i ≤ m as well. Also, by our assumption, disc(f̄(a_1x, . . . , a_ℓx)) ≠ 0, so all of the f_i(a_1x, . . . , a_ℓx) are squarefree and pairwise relatively prime for 1 ≤ i ≤ m, and f(a_1x, . . . , a_ℓx) = f_1(a_1x, . . . , a_ℓx)^{s_1} · · · f_m(a_1x, . . . , a_ℓx)^{s_m}.
Assume now that f(a_1x, . . . , a_ℓx) is an rth perfect power. Then r divides s_i for 1 ≤ i ≤ m. This immediately implies that f itself is an rth perfect power. □
It is easy to see that the total degree of ∆ is less than 2n² and the total degree of Λ is less than n, and that both ∆ and Λ are non-zero. Thus, for randomly chosen a_1, . . . , a_ℓ from a set S ⊆ F of size at least 8n² + 4n we have ∆(a_1, . . . , a_ℓ) = 0 or Λ(a_1, . . . , a_ℓ) = 0 with probability less than 1/4, by Zippel (1979) or Schwartz (1980). This can be made arbitrarily small by increasing the set size and/or repetition. We then run the appropriate univariate algorithm over F[x] (depending upon the field) to identify whether or not f is a perfect power, and if so, to find r.
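The reduction itself is a one-line substitution; here is a small Python sketch over a prime field, with the multivariate lacunary polynomial given as a dict from exponent tuples to coefficients. The choice of nonzero a_i and the (improbable) vanishing of ∆ or Λ are covered only by the probability argument above, not checked explicitly; the representation and names are ours.

```python
import random

def substitute_line(multi_terms, p):
    # Map f(x_1,...,x_l) in F_p[x_1,...,x_l] to f(a_1*x,...,a_l*x) in F_p[x].
    # multi_terms: dict {exponent tuple: coefficient mod p}. Returns (a, uni_terms).
    num_vars = len(next(iter(multi_terms)))
    a = [random.randrange(1, p) for _ in range(num_vars)]
    uni = {}
    for exps, c in multi_terms.items():
        coeff = c
        for ai, ei in zip(a, exps):
            coeff = coeff * pow(ai, ei, p) % p
        degree = sum(exps)                     # total degree of the monomial
        uni[degree] = (uni.get(degree, 0) + coeff) % p
    return a, {d: c for d, c in uni.items() if c != 0}
```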
Computing perfect roots
Once we have determined that f ∈ F[x] is equal to h r for some h ∈ F[x], the next task is to actually compute h. Unfortunately, as noted in the introduction, there are no known bounds on τ (h) which are polynomial in τ (f ).
The question of how sparse the polynomial root of a sparse polynomial must be (or equivalently, how dense any power of a dense polynomial must be) relates to some questions first raised by Erdös (1949) on the number of terms in the square of a polynomial. Schinzel extended this work to the case of perfect powers and proved that τ (h r ) tends to infinity as τ (h) tends to infinity (Schinzel, 1987). Some conjectures of Schinzel suggest that τ (h) should be O(τ (f )). A recent breakthrough of Zannier (2007) show that τ (h) is bounded by a function which does not depend on deg f , but this bound is unfortunately not polynomial in τ (f ).
However, our own (limited) investigations, along with more extensive ones by Coppersmith and Davenport (1991), and later Abbott (2002), suggest that, for any h ∈ F[x], where the characteristic of F is not too small, τ (h) ∈ O(τ (h r ) + r). We skirt this problem here by simply making our algorithms output sensitive; the time required is polynomial in the lacunary size of the input and the output.
Computing rth roots in polynomial-time (without conditions)
In this subsection we present an algorithm for computing an h such that f = h r given f ∈ Z[x] and r ∈ Z and assuming that such an h exists. The algorithm requires time polynomial in t = τ (f ), log deg f , log f ∞ and a given upper bound µ ≥ m = τ (h). It is not conditional on any conjectures, but is probabilistic of the Monte Carlo type. That is, the computed polynomial h is such that h r = f with high probability. We will only demonstrate that this algorithm requires polynomial time. A more detailed analysis is performed on the (more efficient) algorithm of the next subsection (though that complexity is subject to a modest conjecture).
The basic idea of the algorithm here is that we can recover all the coefficients in Q as well as modular information about the exponents of h from a homomorphism into a small cyclotomic field over Q. Doing this for a relatively small number of cyclotomic fields yields h.
Assume that (the unknown) h ∈ Z[x] has form
h = ∑_{1≤i≤m} b_i x^{d_i} for b_1, . . . , b_m ∈ Z\{0}, and 0 ≤ d_1 < d_2 < · · · < d_m,
and that p > 2 is a prime distinct from r such that
p ∤ ∏_{1≤i<j≤m} (d_j − d_i), and p ∤ ∏_{1≤i≤m} (d_i + 1).
Let ζ p ∈ C be a pth primitive root of unity, and Φ p = 1 + z + · · · + z p−1 ∈ Z[z] its minimal polynomial, the pth cyclotomic polynomial (which is irreducible in Q[z]).
Computationally we represent Q(ζ p ) as Q[z]/(Φ p ), with ζ p ≡ z mod Φ p . Observe that ζ k p = ζ k rem p p for any k ∈ Z, where k rem p is the least non-negative residue of k modulo p. Thus h(ζ p ) = h p (ζ p ) for h p = 1≤i≤m b i x di rem p ∈ Z[x],
and h_p is the unique representation of h(ζ_p) as a polynomial of degree less than p − 1. By our choice of p, none of the exponents of h are equivalent modulo p and all the exponents reduced modulo p are strictly less than p − 1 (since our conditions imply d_i ≢ p − 1 (mod p) for 1 ≤ i ≤ m). This also implies that the coefficients of h_p are exactly the same as those of h, albeit in a different order. Now observe that we can determine h_p quite easily from the roots of
Γ p (y) = y r − f (ζ p ) ∈ Q(ζ p )[y].
These roots can be found by factoring the polynomial Γ p (y) in Q(ζ p )[y], and the roots in C must be ω i h(ζ p ) ∈ C for 0 ≤ i < r, where ω is a primitive rth root of unity. When r > 2, and since we chose p distinct from r, the only rth root of unity in Q(ζ p ) is 1. Thus Γ p (y) has exactly one linear factor, and this must equal to y − h(ζ p ) = y − h p (ζ p ), precisely determining h p . When r = 2, we have
Γ p (y) = (y − h(ζ p ))(y + h(ζ p )) = (y − h p (ζ p ))(y + h p (ζ p ))
and we can only determine h p (ζ p ) (and h p and, for that matter, h) up to a factor of ±1. However, the exponents of h p and −h p are the same, and the ambiguity is only in the coefficients (which we resolve later). Finally, we need to perform the above operations for a sequence of cyclotomic fields Q(ζ p1 ), Q(ζ p2 ), . . . , Q(ζ p k ) such that the primes in P = {p 1 , . . . , p k } allow us to recover all the exponents in h. Each prime p ∈ P gives the set of exponents of h reduced modulo that prime, as well as all the coefficients of h in Z. That is, from each of the computations with p ∈ P we obtain C = {b 1 , . . . , b m } and E p = {d 1 rem p, d 2 rem p, . . . , d rem p} , but with no clear information about the order of these sets. In particular, it is not obvious how to correlate the exponents modulo the different primes directly. To do this we employ the clever sparse interpolation technique of Garg and Schost (2008) (based on a method of Grigoriev and Karpinski (1987) for a different problem), which interpolates the symmetric polynomial in the exponents:
g = (x − d 1 )(x − d 2 ) · · · (x − d m ) ∈ Z[x].
For each p ∈ P we compute the symmetric polynomial modulo p,
g p = (x − (d 1 rem p))(x − (d 2 rem p)) · · · (x − (d m rem p)) ≡ g mod p,
for which we do not need to know the order of the exponent residues. We then determine g ∈ Z[x] by the Chinese remainder theorem and factor g over Z[x] to find the d 1 , . . . , d m ∈ Z. Thus the product of all primes in p ∈ P must be at least 2 g ∞ to recover the coefficients of g uniquely. It is easily seen that 2 g ∞ ≤ 2n m .
As noted above, the computation with each p ∈ P recovers all the exponents of h in Z, so using only one prime p ∈ P, we determine the jth exponent of h as the coefficient of x dj rem p in h p for 1 ≤ j ≤ m. If r = 2 we can choose either of the roots of Γ p (y) (they differ by only a sign) to recover the coefficients of h.
The above discussion is summarized in the following algorithm.
Algorithm ComputeRootAlgebraic
Input: f ∈ Z[x] as in (1.2) and r, µ ∈ N.
Output: h ∈ Z[x] such that f = h^r and τ(h) ≤ µ, provided such an h exists.
1: γ ← smallest integer such that 2γ/(5 log γ) ≥ 10µ²(log₂ n)(1 + µ log₂ n).
2: P ← set of k > m log₂ n primes chosen uniformly at random from {γ, . . . , 2γ}.
3: for p ∈ P do
4:   Represent Q(ζ_p) by Q[z]/(Φ_p), where Φ_p = 1 + z + · · · + z^{p−1} and ζ_p ≡ z mod Φ_p.
5:   Compute f(ζ_p) = ∑_{1≤i≤t} c_i ζ_p^{e_i rem p} ∈ Q(ζ_p).
6:   h_p ← root of Γ_p = y^r − f(ζ_p) in Q(ζ_p), found by factoring Γ_p over Q(ζ_p)[y].
7:   if deg h_p ≥ p − 1 or h_p has non-integer coefficients then
8:     return FAIL
9:   Write h_p ∈ Z[x] as ∑_{1≤i≤m} b_{ip} x^{d_{ip}}.
10:  if m differs from previous values of m then
11:    return FAIL
12:  g_p ← (x − d_{1p})(x − d_{2p}) · · · (x − d_{mp}) ∈ Z_p[x].
13: Reconstruct g ∈ Z[x] from {g_p}_{p∈P} by the Chinese remainder algorithm.
14: {d_1, d_2, . . . , d_m} ← distinct integer roots of g ∈ Z[x].
15: Choose any p ∈ P. For 1 ≤ j ≤ m, let b_j ∈ Z be the coefficient of x^{d_j rem p} in h_p.
16: Return h = ∑_{1≤j≤m} b_j x^{d_j}.
Theorem 3.1. The algorithm ComputeRootAlgebraic works as stated. It is probabilistic of the Monte Carlo type and returns the correct answer with probability at least 9/10 on any execution. It requires a number of bit operations polynomial in t = τ (f ), log deg f , log f , and µ.
Proof. In Step 1 we need to choose a set of primes P which are all good with sufficiently high probability, in the sense that for all p ∈ P
β = r · ∏_{1≤i<j≤m} (d_j − d_i) · ∏_{1≤i≤m} (d_i + 1) ≢ 0 mod p.
It is easily derived that β ≤ n^{µ²}, which has fewer than log₂ β ≤ µ² log₂ n prime factors. We also need to recover g in Step 13, and ‖g‖_∞ ≤ n^µ, so we need at least 1 + log₂ ‖g‖_∞ ≤ 1 + µ log₂ n primes. Thus, if {γ, . . . , 2γ} contains at least 10µ²(log₂ n)(1 + µ log₂ n) primes, the probability of choosing a bad prime from it is at most 1/(10(1 + µ log₂ n)). The probability of choosing a bad prime with (1 + µ log₂ n) choices is at most 1/10, and the probability that all the primes are good is at least 9/10. Numbers are chosen uniformly and randomly from {γ, . . . , 2γ} and tested for primality, say by Agrawal et al. (2004). Correctness of the remainder of the algorithm follows from the previous discussion. Factoring the polynomials Γ_p ∈ Q(ζ_p)[y] can be performed in polynomial time with the algorithm of, for example, Landau (1985), and all other steps clearly require polynomial time as well. □
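The exponent-recovery core of ComputeRootAlgebraic (Steps 12–13) can be illustrated in isolation: from the residue sets {d_i rem p} read off each h_p we build g_p = ∏(x − (d_i rem p)) mod p and combine coefficients by the Chinese remainder theorem; the integer root extraction of Step 14 is left to a factorization routine. The Python helpers below are a sketch in that spirit, with balanced residues so that negative coefficients of g are recovered, and with none of the failure handling of Steps 7–11.

```python
from math import prod

def poly_from_roots_modp(residues, p):
    # g_p = (x - r_1)(x - r_2)...(x - r_m) mod p, as a low-to-high coefficient list
    g = [1]
    for root in residues:
        g = [(lower - root * same) % p for lower, same in zip([0] + g, g + [0])]
    return g

def crt_coefficients(polys, primes):
    # combine the coefficient lists of the g_p into g over Z, coefficient by coefficient
    M = prod(primes)
    coeffs = []
    for column in zip(*polys):
        x = 0
        for c, p in zip(column, primes):
            Mi = M // p
            x = (x + c * Mi * pow(Mi, -1, p)) % M
        coeffs.append(x if x <= M // 2 else x - M)   # balanced representative
    return coeffs
```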
Faster root computation subject to conjecture
Algorithm ComputeRootAlgebraic is probabilistic of the Monte Carlo type and not of the Las Vegas type because we have no way of certifying the output -i.e. that h r = f for given lacunary h, f ∈ Z[x] -in polynomial time. One way to accomplish this would be to simply compute h r by repeated squaring and comparing the result to f , but to do so in polynomial time would require bounds on the sparsity of each intermediate power τ (h i ) for 2 ≤ i < r based on τ (h) and τ (f ).
In fact, with such sparsity bounds we can actually derive a deterministic algorithm based on Newton iteration. This approach does not rely on advanced techniques such as factoring over algebraic extension fields, and hence will be much more efficient in practice. It is also more general as it applies to fields other than Z and to powers r which are not prime.
Unfortunately, this algorithm is not purely output-sensitive, as it relies on the following conjecture regarding the sparsity of powers of h:
Conjecture 3.2. For r, s ∈ N, if the characteristic of F is zero or greater than rs, and
h ∈ F[x] with deg h = s, then τ(h^i mod x^{2s}) < τ(h^r mod x^{2s}) + r, for i = 1, 2, . . . , r − 1.
This corresponds to intuition and experience, as the system is still overly constrained with only s degrees of freedom. A weaker conjecture would suffice to prove polynomial time, but we use the stated bounds as we believe these give more accurate complexity measures.
Our algorithm is essentially a Newton iteration, with special care taken to preserve sparsity. We start with the image of h modulo x, using the fact that f(0) = h(0)^r, and at step i = 1, 2, . . . , ⌈log₂(deg h + 1)⌉, we compute the image of h modulo x^{2^i}.
Here, and for the remainder of this section, we will assume that f, h ∈ F[x] with degrees n and s respectively such that f = h r for r ∈ N at least 2, and that the characteristic of F is either zero or greater than n. As usual, we define t = τ (f ). We require the following simple lemma.
Lemma 3.3. * Let k, ℓ ∈ N such that ℓ ≤ k and k + ℓ ≤ s, and suppose h_1 ∈ F[x] is the unique polynomial with degree less than k satisfying h_1^r ≡ f mod x^k. Then τ(h_1^{r+1} mod x^{k+ℓ}) ≤ 2t(t + r).
Proof. Let h 2 ∈ F[x] be the unique polynomial of degree less than ℓ satisfying h 1 + h 2 x k ≡ h mod x k+ℓ . Since h r = f ,
f ≡ h_1^r + r h_1^{r−1} h_2 x^k mod x^{k+ℓ}.
Multiplying by h_1 and rearranging gives h_1^{r+1} ≡ h_1 f − r f h_2 x^k mod x^{k+ℓ}.
Because h_1 mod x^k and h_2 mod x^ℓ each have at most τ(h) terms, which by Conjecture 3.2 is less than t + r, the total number of terms in h_1^{r+1} mod x^{k+ℓ} is less than 2t(t + r). □
This essentially tells us that the "error" introduced by examining higher-order terms of h r 1 is not too dense. It leads to the following algorithm for computing h.
Algorithm ComputeRootNewton
Input: f ∈ F[x], r ∈ N such that f is a perfect rth power
Output: h ∈ F[x] such that f = h^r
1: u ← highest power of x dividing f
2: f_u ← coefficient of x^u in f
3: g ← f/(f_u x^u)
4: h ← 1, k ← 1
5: while kr ≤ deg g do
6:   ℓ ← min{k, (deg g)/r + 1 − k}
7:   a ← (hg − h^{r+1} mod x^{k+ℓ}) / (r x^k)
8:   h ← h + (a/g mod x^ℓ) x^k
9:   k ← k + ℓ
10: b ← any rth root of f_u in F
11: return b h x^{u/r}

Theorem 3.4. If f ∈ F[x] is a perfect rth power, then ComputeRootNewton returns an h ∈ F[x] such that h^r = f.
Proof.
Let u, f_u, g be as defined in Steps 1–4. Thus f = f_u g x^u. Now let ĥ be some rth root of f, which we assume exists. If we similarly write ĥ = ĥ_v ĝ x^v, with ĥ_v ∈ F and ĝ ∈ F[x] such that ĝ(0) = 1, then ĥ^r = ĥ_v^r ĝ^r x^{vr}. Therefore f_u must be a perfect rth power in F, r | u, and g is a perfect rth power in F[x] of some polynomial with constant coefficient equal to 1.
Denote by h i the value of h at the beginning of the ith iteration of the while loop. So h 1 = 1. We claim that at each iteration through Step 6, h r i ≡ g mod x k . From the discussion above, this holds for i = 1. Assuming the claim holds for all i = 1, 2, . . . , j, we prove it also holds for i = j + 1.
From
Step 8, h j+1 = h j + (a/g mod x l )x k , where a is as defined on the jth iteration of Step 7. We observe that
h_j h_{j+1}^r ≡ h_j^{r+1} + r h_j^r (a/g mod x^ℓ) x^k mod x^{k+ℓ}.
From our assumption, h_j^r ≡ g mod x^k, and ℓ ≤ k, so we have
h_j h_{j+1}^r ≡ h_j^{r+1} + r a x^k ≡ h_j^{r+1} + h_j g − h_j^{r+1} ≡ h_j g mod x^{k+ℓ}.
Therefore h_{j+1}^r ≡ g mod x^{k+ℓ}, and so by induction the claim holds at each step. Since the algorithm terminates when kr > deg g, we can see that the final value of h is an rth root of g. Finally, (b h x^{u/r})^r = f_u g x^u = f, so the theorem holds. □

Theorem 3.5. † If f ∈ F[x] has degree n and t nonzero terms, then ComputeRootNewton uses O((t + r)^4 log r log n) operations in F and an additional O((t + r)^4 log r log² n) bit operations, not counting the cost of root-finding in the base field F on Step 10.
Proof. First consider the cost of computing h r+1 in Step 7. This will be accomplished by repeatedly squaring and multiplying by h, for a total of at most 2⌊log 2 (r + 1)⌋ multiplications. As well, each intermediate product will have at most τ (f ) + r < (t + r) 2 terms, by Conjecture 3.2. The number of field operations required, at each iteration, is O (t + r) 4 log r , for a total cost of O (t + r) 4 log r log n .
Furthermore, since k + ℓ ≤ 2^i at the ith step, for 1 ≤ i < log₂ n, the total cost in bit operations is less than ∑_{1≤i<log₂ n} (t + r)^4 (log₂ r) i ∈ O((t + r)^4 log r log² n).
In fact, this is the most costly step. The initialization in Steps 1-4 uses only O(t) operations in F and on integers at most n. And the cost of computing the quotient on
Step 8 is proportional to the cost of multiplying the quotient and dividend, which is at most O(t(t + r)). 2
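Below is a small Python sketch of this iteration over Q, operating on {exponent: coefficient} dictionaries so that only nonzero terms are ever touched; for simplicity it assumes x ∤ f and f(0) = 1, so Steps 1–3 and 10–11 of ComputeRootNewton are trivial, and it uses plain repeated multiplication for h^{r+1} rather than the repeated squaring assumed in the analysis. The helper names are ours and this is not the NTL implementation discussed in Section 4.

```python
from fractions import Fraction

def _mul(p, q, k):
    # sparse product of {exponent: coefficient} dicts, truncated mod x^k
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            if e1 + e2 < k:
                out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

def _pow(p, r, k):
    # p^r mod x^k by repeated multiplication (repeated squaring would be used in practice)
    out = {0: Fraction(1)}
    for _ in range(r):
        out = _mul(out, p, k)
    return out

def _series_div(a, g, k):
    # power-series quotient a/g mod x^k, assuming g(0) = 1
    a, q = dict(a), {}
    for e in range(k):
        ce = a.get(e, 0)
        if ce:
            q[e] = ce
            for eg, cg in g.items():
                if e + eg < k:
                    a[e + eg] = a.get(e + eg, 0) - ce * cg
    return q

def compute_root_newton(f, r):
    # f: dict {exponent: coefficient} with integer or Fraction coefficients;
    # assumes f is a perfect rth power, x does not divide f, and f[0] == 1.
    g = dict(f)
    n = max(g)
    h, k = {0: Fraction(1)}, 1
    while k * r <= n:
        l = min(k, n // r + 1 - k)
        # Step 7: a <- (h*g - h^{r+1} mod x^{k+l}) / (r*x^k)
        num = _mul(h, g, k + l)
        for e, c in _pow(h, r + 1, k + l).items():
            num[e] = num.get(e, 0) - c
        a = {e - k: c / r for e, c in num.items() if e >= k and c != 0}
        # Step 8: h <- h + (a/g mod x^l) * x^k
        for e, c in _series_div(a, g, l).items():
            h[e + k] = h.get(e + k, 0) + c
        h = {e: c for e, c in h.items() if c != 0}
        k += l
    return h
```

For instance, with f = {0: 1, 1: 2, 2: 1} (that is, (1 + x)²) and r = 2, compute_root_newton returns the dictionary for 1 + x.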
When F = Q, we must account for coefficient growth. We use the normal notion of the size of a rational number: For α ∈ Q, write α = a/b for a, b relatively prime integers. Then define H(α) = max{|a|, |b|}. And for f ∈ Q[x] with coefficients c 1 , . . . , c t ∈ Q, write H(f ) = max H(c i ).
Thus, the size of the lacunary representation of f ∈ Q[x] is proportional to τ (f ), deg f , and log H(f ). Now we prove the bit complexity of our algorithm is polynomial in these values, when F = Q.
Theorem 3.6. † Suppose f ∈ Q[x] has degree n and t nonzero terms, and is a perfect rth power. ComputeRootNewton computes an rth root of f using O˜ t(t + r) 4 · log n · log H(f ) bit operations.
Proof. Let h ∈ Q[x] such that h r = f , and let c ∈ Z >0 be minimal such that ch ∈ Z[x]. Gauß's Lemma tells us that c r must be the least positive integer such that c r f ∈ Z[x] as well. Then, using Theorem 2.9, we have:
H(h) ≤ ‖ch‖_∞ ≤ ‖ch‖_2 ≤ (t ‖c^r f‖_∞)^{1/r} ≤ t^{1/r} H(f)^{(t+1)/r}.
(The last inequality comes from the fact that the lcm of the denominators of f is at most
H(f ) t .)
Hence log H(h) ∈ O ((t log H(f ))/r). Clearly the most costly step in the algorithm will still be the computation of h r+1 i at each iteration through Step 7. For simplicity in our analysis, we can just treat h i (the value of h at the ith iteration of the while loop in our algorithm) as equal to h (the actual root of f ), since we know τ (h i ) ≤ τ (h) and
H(h i ) ≤ H(h).
Lemma 3.3 and Conjecture 3.2 tell us that τ (h i ) ≤ 2(t + r) 2 for i = 1, 2, . . . , r. To compute h r+1 , we will actually compute (ch) r+1 ∈ Z[x] by repeatedly squaring and multiplying by ch, and then divide out c r+1 . This requires at most ⌊log 2 r + 1⌋ squares and products.
Note
that (ch) 2i ∞ ≤ (t + r) 2 (ch) i 2 ∞ and (ch) i+1 ∞ ≤ (t + r) 2 (ch) i ∞ ch ∞ . Therefore (ch) i ∞ ≤ (t + r) 2r ch r ∞ , i = 1, 2, .
. . , r, and thus log (ch) i ∞ ∈ O (r(t + r) + t log H(f )), for each intermediate power (ch) i .
Thus each of the O (t + r) 4 log r field operations at each iteration costs O(M(t log H(f )+ log r(t + r))) bit operations, which then gives the stated result. 2
The method used for Step 10 depends on the field F. For F = Q, we just need to find two integer perfect roots, which can be done in "nearly linear" time by the algorithm of Bernstein (1998). Otherwise, we can use any of the well-known fast root-finding methods over F[x] to compute a root of x r − f u .
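For the F = Q case of Step 10, a simple integer Newton sketch (far less refined than the nearly-linear-time method of Bernstein (1998) cited above) suffices to extract an integer rth root or certify that none exists:

```python
def integer_rth_root(a, r):
    # returns x with x**r == a, or None if a (>= 1) is not a perfect rth power
    x = 1 << ((a.bit_length() + r - 1) // r + 1)    # initial overestimate of a**(1/r)
    while True:
        y = ((r - 1) * x + a // x ** (r - 1)) // r  # integer Newton step
        if y >= x:
            break
        x = y
    return x if x ** r == a else None
```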
Implementation
To investigate the practicality of our algorithms, we implemented IsPerfectPowerZ using Victor Shoup's NTL. This is a high-performance C++ library for fast dense univariate polynomial computations over Z[x] or F_q[x].
NTL does not natively support a lacunary polynomial representation, so we wrote our own using vectors of coefficients and of exponents. In fact, since IsPerfectPowerZ is a black-box algorithm, the only sparse polynomial arithmetic we needed to implement was for evaluation at a given point.
The only significant diversion between our implementation and the algorithm specified in Section 2 is our choice of the ground field. Rather than working in a degree-(r − 1) extension of F_p, we simply find a random p in the same range such that r | (p − 1). It is more difficult to prove that we can find such a p quickly (using e.g. the best known bounds on Linnik's Constant), but in practice this approach is very fast because it avoids computing in field extensions.
As a point of comparison, we also implemented the Newton iteration approach to computing perfect polynomial roots, which appears to be the fastest known method for dense polynomials. This is not too dissimilar from the techniques from the previous section on computing a lacunary rth root, but without paying special attention to sparsity. We work modulo a randomly chosen prime p to compute an rth perfect root h, and then use random evaluations of h and the original input polynomial f to certify correctness. This yields a Monte Carlo algorithm with the same success probability as ours, and so provides a suitable and fair comparison.
We ran two sets of tests comparing these algorithms. The first set, depicted in Figure 1, does not take advantage of sparsity at all; that is, the polynomials are dense and have close to the maximal number of terms. It appears that the worst-case running time of our algorithm is actually a bit better than the Newton iteration method on dense input, but on the average they perform roughly the same. The lower triangular shape comes from the fact that both algorithms can (and often do) terminate early. The visual gap in the timings for the sparse algorithm comes from the fact that exactly half of the input polynomials were perfect powers. It appears our algorithm terminates more quickly when the polynomial is not a perfect power, but usually takes close to the full amount of time otherwise.
The second set of tests, depicted in Figure 2, held the number of terms of the perfect power, τ (f ), roughly fixed, letting the degree n grow linearly. Here we can see that, for sufficiently sparse f , our algorithm performs significantly and consistently better than the Newton iteration. In fact, we can see that, with some notable but rare exceptions, it appears that the running time of our algorithm is largely independent of the degree when the number of terms remains fixed. The outliers we see probably come from inputs that were unluckily dense (it is not trivial to produce examples of h r with a given fixed number of nonzero terms, so the sparsity did vary to some extent).
Perhaps most surprisingly, although the choices of parameters for these two algorithms only guaranteed a probability of success of at least 1/2, in fact over literally millions of tests performed with both algorithms and a wide range of input polynomials, not a single failure was recorded. This is of course due to the loose bounds employed in our analysis, indicating a lack of understanding at some level, but it also hints at the possibility of a deterministic algorithm, or at least one which is probabilistic of the Las Vegas type.
Both implementations are available as C++ code downloadable from the second author's website.
| 10,021 |
0901.1848
|
2953354114
|
We consider solutions to the equation f = h^r for polynomials f and h and integer r > 1. Given a polynomial f in the lacunary (also called sparse or super-sparse) representation, we first show how to determine if f can be written as h^r and, if so, to find such an r. This is a Monte Carlo randomized algorithm whose cost is polynomial in the number of non-zero terms of f and in log(deg f), i.e., polynomial in the size of the lacunary representation, and it works over GF(q)[x] (for large characteristic) as well as Q[x]. We also give two deterministic algorithms to compute the perfect root h given f and r. The first is output-sensitive (based on the sparsity of h) and works only over Q[x]. A sparsity-sensitive Newton iteration forms the basis for the second approach to computing h, which is extremely efficient and works over both GF(q)[x] (for large characteristic) and Q[x], but depends on a number-theoretic conjecture. Work of Erdos, Schinzel, Zannier, and others suggests that both of these algorithms are unconditionally polynomial-time in the lacunary size of the input polynomial f. Finally, we demonstrate the efficiency of the randomized detection algorithm and the latter perfect root computation algorithm with an implementation in the C++ library NTL.
|
Closest to this current work, @cite_14 shows how to recognize whether @math for a lacunary polynomial @math . Shparlinski uses random evaluations and tests for quadratic residues. How to determine whether a lacunary polynomial is a perfect power is posed as an open question.
|
{
"abstract": [
"We describe a polynomial time algorithm to compute Jacobi symbols of exponentially large integers of special form, including so-called sparse integers which are exponentially large integers with only polynomially many nonzero binary digits. In a number of papers sequences of Jacobi symbols have been proposed as generators of cryptographically secure pseudorandom bits. Our algorithm allows us to use much larger moduli in such constructions. We also use our algorithm to design a probabilistic polynomial time test which decides if a given integer of the aforementioned type is a perfect square (assuming the Extended Riemann Hypothesis). We also obtain analogues of these results for polynomials over finite fields. Moreover, in this case the perfect square testing algorithm is unconditional. These results can be compared with many known NP-hardness results for some natural problems on sparse integers and polynomials."
],
"cite_N": [
"@cite_14"
],
"mid": [
"1993714280"
]
}
|
Detecting lacunary perfect powers and computing their roots
|
In this paper we consider the problem of determining whether a polynomial f equals h r for some other polynomial h and positive integer r, and if so, finding h and r. The novel aspect of this current work is that our algorithms are efficient for the lacunary (also called sparse or supersparse) representation of polynomials. Specifically, we assume
f = ∑_{1≤i≤t} c_i x^{e_i} ∈ F[x_1, . . . , x_ℓ],  (1.1)
where ‖e_i‖_1 = ∑_{1≤j≤ℓ} e_{ij}. We say f is t-sparse and write τ(f) = t. We present algorithms which require time polynomial in τ(f) and log deg f. Computational work on lacunary polynomials has proceeded steadily for the past three decades. From the dramatic initial intractability results of Plaisted (1977, 1984), through progress in algorithms (e.g., Ben-Or and Tiwari (1988), Shparlinski (2000), and Kaltofen and Lee (2003)) and complexity (e.g., Karpinski and Shparlinski (1999), Quick (1986), and von zur Gathen et al. (1993)), to recent breakthroughs in root finding and factorization (Cucker et al., 1999; Kaltofen and Koiran, 2006; Lenstra, 1999), these works have important theoretical and practical consequences. The lacunary representation is arguably more intuitive than the standard dense representation, and in fact corresponds to the default linked-list representation of polynomials in modern computer algebra systems such as Maple and Mathematica.
We will always assume that τ (f ) ≥ 2; otherwise f = x n , and determining whether f is a perfect power is equivalent to determining whether n ∈ N is composite, and to factoring n if we wish to produce r dividing n such that f = (x n/r ) r . Surprisingly, the intractability of the latter problem is avoided when τ (f ) ≥ 2.
We first consider detecting perfect powers and computing the power r for the univariate
case f = ∑_{1≤i≤t} c_i x^{e_i} ∈ F[x],  (1.2)
where 0 ≤ e 1 < e 2 < · · · < e t = deg f . Two cases for the field F are handled: the integers and finite fields of characteristic p greater than the degree of f . When f ∈ Z[x], our algorithms also require time polynomial in log f ∞ , where f ∞ = max 1≤i≤t |c i | (for f ∈ Q[x], we simply work with f = cf ∈ Z[x], for the smallest c ∈ Z\{0}). This reflects the bit-length of coefficients encountered in the computations. Efficient techniques will also be presented for reducing the multivariate case to the univariate one, and for computing a root h such that f = h r .
Our contributions
Given a lacunary polynomial f ∈ Z[x] with τ (f ) ≥ 2 and degree n, we first present an algorithm to compute an integer r > 1 such that f = h r for some h ∈ Z[x], or determine that no such r exists. The algorithm requires O˜(t log 2 f ∞ log 2 n) machine operations * , and is probabilistic of the Monte Carlo type. That is, for any input, on any execution the probability of producing an incorrect answer is strictly less than 1/2, assuming the ability to generate random bits at unit cost. This possibility of error can be made arbitrarily small with repeated executions.
A similar algorithm is presented to answer Shparlinski's open question on perfect powers of lacunary polynomials over finite fields, at least for the case of large characteristic. That is, when the characteristic p of a finite field F is greater than deg f , we provide a Monte Carlo algorithm that determines if there exists an h ∈ F[x] and r such that f = h r , and finds r if it exists, which requires O˜(t log 2 n) operations in F.
An implementation of our algorithm over Z in NTL indicates excellent performance on sparse inputs when compared to a fast implementation based on previous technology (a variable-precision Newton iteration to find a power-series rth root of f , followed by a Monte Carlo correctness check).
Actually computing h such that f = h r is a somewhat trickier problem, at least insofar as bounds on the sparsity of h have not been completely resolved. Conjectures of Schinzel (1987) and recent work of Zannier (2007) suggest that, provided the characteristic of F is zero or sufficiently large, h is lacunary as well. To avoid this lack of sufficient theoretical understanding, we develop an algorithm which requires time polynomial in both the representation size of the input f (i.e., τ (f ), log n and log f ∞ ) and the representation size of the output (i.e., τ (h) and log f ∞ ). This algorithm works by projecting f into a sequence of small cyclotomic fields. Images of the desired h in these fields are discovered by factorization over an algebraic extension. Finally, a form of interpolation of the sparse exponents is used to recover the global h. The algorithm is probabilistic of the Monte Carlo type. While this algorithm is polynomial time, we do not claim it will be efficient in practice. Instead, we also present and analyze a simpler alternative based on a kind of Newton iteration. Subject to what we believe is a reasonable conjecture, this is shown to be very fast.
The remainder of the paper is arranged as follows. In Section 2 we present the main theoretical tool for our algorithm to determine if f = h r , and to find r. We also show how to reduce the multivariate problem to the univariate one. In Section 3 we show how to compute h such that f = h r (given that such h and r exist). Finally, in Section 4, we present an experimental implementation of some of our algorithms in the C++ library NTL.
An earlier version of some of this work was presented in the ISSAC 2008 conference (Giesbrecht and Roche, 2008).
Testing for perfect powers
In this section we describe a method to determine if a lacunary polynomial f ∈ F[x] is a perfect power. That is, do there exist h ∈ F[x] and r > 1 such that f = h r ? The polynomial h need not be lacunary, though some conjectures suggest it may well have to be. We will find r, but not h.
We first describe algorithms to test if an f ∈ F[x] is an rth power of some polynomial h ∈ F[x], where f and r are both given and r is assumed to be prime. We present and analyze variants that work over finite fields F q and over Z. In fact, these algorithms for given r are for black-box polynomials: they only need to evaluate f at a small number of points. That this evaluation can be done quickly is a property of lacunary and other classes of polynomials.
For lacunary f we then show that, in fact, if h exists at all then r must be small unless f = x n . And if f is a perfect power, then there certainly exists a prime r such that f is an rth power. So in fact the restrictions that r is small and prime are sufficient to cover all nontrivial cases, and our method is complete.
Detecting given rth powers
Our main tool in this work is the following theorem which says that, with reasonable probability, a polynomial is an rth power if and only if the modular image of an evaluation in a specially constructed finite field is an rth power.
Theorem 2.1. Let ̺ ∈ Z be a prime power and r ∈ N a prime dividing ̺ − 1. Suppose
that f ∈ F_̺[x] has degree n ≤ 1 + √̺/2 and is not a perfect rth power in F_̺[x]. Then R_f^{(r)} = #{c ∈ F_̺ : f(c) ∈ F_̺ is an rth power} ≤ 3̺/4.
Proof. The rth powers in F ̺ form a subgroup H of F * ̺ of index r and size (̺−1)/r in F * ̺ . Also, a ∈ F * ̺ is an rth power if and only if a (̺−1)/r = 1. We use the method of "completing the sum" from the theory of character sums. We refer to Lidl and Niederreiter (1983), Chapter 5, for an excellent discussion of character sums. By a multiplicative character we mean a homomorphism χ : F * ̺ → C which necessarily maps F ̺ onto the unit circle. As usual we extend our multiplicative characters χ so that χ(0) = 0, and define the trivial character χ 0 (a) to be 0 when a = 0 and 1 otherwise. For any a ∈ F * ̺ , 1
r χ r =χ 0 χ(a) = 1 if a ∈ H, 0 if a ∉ H,
where χ ranges over all the multiplicative characters of order r on F * ̺ -that is, all characters that are isomorphic to the trivial character on the subgroup H. Thus
R (r) f = a∈F * ̺ 1 r χ r =χ0 χ(f (a)) = 1 r χ r =χ0 a∈F * ̺ χ(f (a)) ≤ ̺ r + 1 r χ r =χ 0 χ =χ 0 a∈F̺ χ(f (a)) .
Here we use the obvious fact that
a∈F * ̺ χ 0 (f (a)) ≤ a∈F̺ χ 0 (f (a)) = ̺ − d ≤ ̺,
where d is the number of distinct roots of f in F ̺ . We next employ the powerful theorem of Weil (1948) on character sums with polynomial arguments (see Theorem 5.41 of Lidl and Niederreiter (1983)), which shows that if f is not a perfect rth power of another polynomial, and χ has order r > 1, then
a∈F̺ χ(f (a)) ≤ (n − 1)̺ 1/2 ≤ ̺ 2 ,
using the fact that we insisted n ≤ 1+ √ ̺/2. Summing over the r−1 non-trivial characters of order r, we deduce that
R (r) f ≤ ̺ r + r − 1 r · ̺ 2 ≤ 3̺ 4 . 2
Certifying specified powers over F q [x]
Theorem 2.1 allows us to detect when a polynomial f ∈ F ̺ [x] is a perfect rth power, for known r dividing ̺ − 1: choose random α ∈ F ̺ and evaluate ξ = f (α) (̺−1)/r ∈ F ̺ . Recall that ξ = 1 if and only if f (α) is an rth power.
• If f is an rth power, then clearly f (α) is an rth power and we always have ξ = 1.
• If f is not an rth power, Theorem 2.1 demonstrates that for at least 1/4 of the elements of F_̺, f(α) is not an rth power. Thus, for α chosen randomly from F_̺ we would expect ξ ≠ 1 with probability at least 1/4. For a polynomial f ∈ F_q[x] over an arbitrary finite field F_q, where q is a prime power such that q − 1 is not divisible by r, we proceed by constructing an extension field F_{q^{r−1}} over F_q. From Fermat's Little Theorem and the fact that r ∤ q, we know r | (q^{r−1} − 1), and we can proceed as above. We now present and analyze this more formally.
Algorithm IsPerfectRthPowerGF
Input: A prime power q, f ∈ F q [x] of degree n ≤ 1 + √ q/2, r ∈ N a prime dividing n, and ǫ ∈ R >0 Output: True if f is the rth power of a polynomial in F ̺ [x]; False otherwise.
1: Find an irreducible Γ ∈ F q [z] of degree r − 1, successful with probability at least ǫ/2 2: ̺ ← q r−1 3: Define F ̺ = F q [z]/(Γ) 4: m ← 2.5(1 + ⌈log 2 (1/ǫ)⌉) 5: for i from 1 to m do 6:
Choose random α ∈ F ̺ 7:
ξ ← f (α) (̺−1)/r ∈ F ̺ 8:
if ξ ≠ 1 then 9:
return False 10: return True Notes on IsPerfectRthPowerGF.
To accomplish
Step 1, a number of fast probabilistic methods are available to find irreducible polynomials. We employ the algorithm of Shoup (1994). This algorithm requires O((r 2 log r + r log q) log r log log r) operations in F q . It is probabilistic of the Las Vegas type, and we assume that it always stops within the number of operations specified, and returns the correct answer with probability at least 1/2 and "Fail" otherwise (it never returns an incorrect answer). The algorithm is actually presented in Shoup (1994) as always finding an irreducible polynomial, but requiring expected time as above; by not iterating indefinitely our restatement allows for a Monte Carlo analysis in what follows. To obtain an irreducible Γ with failure probability at most ǫ/2 we run (our modified) Shoup's algorithm 1 + ⌈log 2 (1/ǫ)⌉ times.
The restriction that n ≤ 1 + √ 2 (or alternatively q ≥ 4(n − 1) 2 ) is not problematic. If this condition is not met, simply extend F q with an extension of degree ν = ⌈log q (4(n − 1) 2 )⌉ and perform the algorithm over F q ν . At worst, each operation in F q ν requires O(M(log n)) operations in F q .
Here we define M(r) as a number of operations in F to multiply two polynomials of degree ≤ r over F, for any field F, or the number of bit operations to multiply two integers with at most r bits. Using classical arithmetic M(r) is O(r 2 ), while using the fast algorithm of Cantor and Kaltofen (1991) we may assume M(r) is O(r log r log log r).
Theorem 2.2. Let q be a prime power, f ∈ F q [x], r ∈ N a prime dividing deg f and ǫ > 0. If f is a perfect rth power the algorithm IsPerfectRthPowerGF always reports this. If f is not a perfect rth power then, on any invocation, this is reported correctly with probability at least 1 − ǫ.
Proof. It is clear from the above discussion that the algorithm always works when f is a perfect power. When f is not a perfect power, each iteration of the loop will obtain ξ ≠ 1 (and hence a correct output) with probability at least 1/4. By iterating the loop m times we ensure that the probability of failure is at most ǫ/2. Adding this to the probability that Shoup's algorithm (for Step 1) fails yields a total probability of failure of at most ǫ. □

Theorem 2.3. On inputs as specified, the algorithm IsPerfectRthPowerGF requires O((rM(r) log r log q) · log(1/ǫ)) operations in F_q plus the cost to evaluate α ↦ f(α) at O(log(1/ǫ)) points α ∈ F_{q^{r−1}}.
Proof. As noted above, Shoup's 1994 algorithm requires O((r 2 log r+r log q) log r log log r) field operations per iteration, which is within the time specified. The main cost of the loop in Steps 4-8 is computing f (α) (̺−1)/r , which requires O(log ̺) or O(r log q) operations in F ̺ using repeated squaring, plus one evaluation of f at a point in F ̺ . Each operation in F ̺ requires O(M(r)) operations in F q , and we repeat the loop O(log(1/ǫ)) times. 2
Corollary 2.4. Given f ∈ F q [x] of degree n with τ (f ) = t, and r ∈ N a prime dividing n, we can determine if f is an rth power with O ((rM(r) log r log q + tM(r) log n) · log(1/ǫ))
operations in F q . When f is an rth power, the output is always correct, while if f is not an rth power, the output is correct with probability at least 1 − ǫ.
Certifying specified powers over Z[x]
For an integer polynomial f ∈ Z[x], we proceed by working in the homomorphic image of Z in F p (and then in an extension of that field). We must ensure that the homomorphism preserves the perfect power property we are interested in with high probability. For any polynomial g ∈ F[x], let disc(g) = res(f, f ′ ) be the discriminant of g (the resultant of f and its first derivative). It is well known that g is squarefree if and only if disc(g) = 0. Also define lcoeff(f ) as the leading coefficient of f , the coefficient of the highest power of x in f . Gathen and Gerhard (2003), Lemma 14.1), and each of thef i mod p must be pairwise relatively prime and squarefree for 1 ≤ i ≤ m. Now suppose f mod p is a perfect rth power modulo p. Then we must have r | s i for 1 ≤ i ≤ m. But this immediately implies that f is a perfect power in Z[x] as well. 2
Lemma 2.5. Let f ∈ Z[x] andf = f / gcd(f, f ′ ) its squarefree part. Let p be a prime such that p ∤ disc(f ) and p ∤ lcoeff(f ). Then f is a perfect power in Z[x] if and only if f mod p is a perfect power in F p [x]. Proof. Clearly if f is a perfect power, then f mod p is a perfect power in Z[x]. To show the converse, assume that f = f s1 1 · · · f sm m for distinct irreducible f 1 , . . . , f m ∈ Z[x], sof = f 1 · · · f m . Clearly f ≡ f s1 1 · · · f sm m mod p as well, and because p ∤ lcoeff(f ) we know deg(f i mod p) = deg f i for 1 ≤ i ≤ m. Since p ∤ disc(f ),f mod p is squarefree (see von zur
Given any polynomial
g = g 0 + g 1 x + · · · + g m x m ∈ Z[x], we define the height or coefficient ∞-norm of g as g ∞ = max i |g i |.
Similarly, we define the coefficient 1-norm of g as g 1 = i |g i |, and 2-norm as g 2 = i |g i | 2 1/2 . Sincef divides f , we can employ the factor bound of Mignotte (1974) to obtain
f ∞ ≤ 2 n f 2 ≤ 2 n √ n · f ∞ .
Since disc(f ) = res(f ,f ′ ) is the determinant of matrix of size at most (2n − 1) × (2n − 1), Hadamard's inequality implies
|disc(f )| ≤ 2 n n 1/2 f ∞ n−1 2 n n 3/2 f ∞ n < 2 2n 2 n 2n · f 2n ∞ .
Also observe that |lcoeff(f )| ≤ f ∞ . Thus, the product disc(f ) · lcoeff(f ) has at most µ = log 2 2 2n 2 n 2n f 2n+1 ∞ /⌊log 2 (4(n − 1) 2 )⌋ prime factors greater than 4(n−1) 2 (we require the lower bound 4(n−1) 2 to employ Theorem 2.1 without resorting to field extensions). Choose a γ ≥ 4(n−1) 2 such that the number of primes π(2γ) − π(γ) between γ and 2γ is at least 4µ + 1. By Rosser and Schoenfeld (1962), π(2γ) − π(γ) ≥ 2γ/(5 ln γ) for γ ≥ 59. Thus if γ ≥ max{14µ ln(14µ), 100}, then a random prime not equal to r in the range γ . . . 2γ divides lcoeff(f ) · disc(f ) with probability at most 1/4. Primes p of this size have only log 2 p ∈ O(log n + log log f ∞ ) bits.
Algorithm IsPerfectRthPowerZ Input: f ∈ Z[x] of degree n; r ∈ N a prime dividing n; ǫ ∈ R >0 ; Output: True if f is the rth power of a polynomial in Z[x]; False otherwise
1: µ ← log 2 2 2n 2 n 2n f 2n+1 ∞ /⌊log 2 (4(n − 1) 2 )⌋ 2: γ ← max{14µ ln(14µ), 4(n − 1) 2 , 100} 3: for i from 1 to . . . ⌈log 2 (1/ǫ)⌉ do 4:
p ← random prime in the range γ . . . 2γ
5:
if NOT IsPerfectRthPowerGF(p, f mod p, r, 1/4 ) then 6: return False 7: return True Theorem 2.6. Let f ∈ Z[x] of degree n, r ∈ N dividing n and ǫ ∈ R >0 . If f is a perfect rth power, the algorithm IsPerfectRthPowerZ always reports this. If f is not a perfect rth power, on any invocation of the algorithm, this is reported correctly with probability at least 1 − ǫ.
Proof. If f is an rth power then so is f mod p for any prime p, and so is any f (α) ∈ F p . Thus, the algorithm always reports that f is an rth power. Now suppose f is not an rth power. If p | disc(f ) it may happen that f mod p is an rth power. This happens with probability at most 1/4 and we will assume that the worst happens in this case. When p ∤ disc(f ), the probability that IsPerfectRthPowerGF incorrectly reports that f is an rth power is also at most 1/4, by our choice of parameter ǫ. Thus, on any iteration of steps 4-6, the probability of finding that f is an rth power is at most 1/2. The probability of this happening ⌈log 2 (1/ǫ)⌉ times is clearly at most ǫ. 2 Theorem 2.7. On inputs as specified, the algorithm IsPerfectRthPowerZ requires O rM(r) log r · M(log n + log log f ∞ ) · (log n + log log f ∞ ) · log(1/ǫ) , or O˜(r 2 (log n+log log f ∞ ) 2 ·log(1/ǫ)) bit operations, plus the cost to evaluate (α, p) → f (α) mod p at O(log(1/ǫ)) points α ∈ F p for primes p with log p ∈ O(log n+log log f ∞ ).
Proof. The number of operations required by each iteration is dominated by Step 5, for which O(rM(r) log r log p) operations in F p is sufficient by Theorem 2.3. Since log p ∈ O(log n + log log f ∞ ) we obtain the final complexity as stated. 2
We obtain the following corollary for t-sparse polynomials in Z[x]. This follows since the cost of evaluating a t-sparse polynomial f ∈ Z[x] modulo a prime p is O(t log f ∞ log p+ t log nM(log p)) bit operations.
Corollary 2.8. Given f ∈ Z[x] of degree n, with τ (f ) = t, and r ∈ N a prime dividing n, we can determine if f is an rth power with O˜ (r 2 log 2 n + t log 2 n + t log f ∞ log n) · log(1/ǫ) bit operations. When f is an rth power, the output is always correct, while if f is not an rth power, the output is correct with probability at least 1 − ǫ.
An upper bound on r.
In this subsection we show that if f = h^r and f ≠ x^n then r must be small. Over Z[x] we show that ‖h‖_2 is small as well. A sufficiently strong result over many fields is demonstrated by Schinzel (1987), Theorem 1, where it is shown that if f has sparsity t ≥ 2 then t ≥ r + 1 (in fact a stronger result is shown involving the sparsity of h as well). This holds when either the characteristic of the ground field of f is zero or greater than deg f.
Here we give a (much) simpler result for polynomials in Z[x], which bounds h 2 and is stronger at least in its dependency on t though it also depends upon the coefficients of f . Proof. Let p > n be prime and ζ ∈ C a pth primitive root of unity. Then
h 2 2 = 0≤i≤s |h i | 2 = 1 p 0≤i<p |h(ζ i )| 2 .
(this follows from the fact that the Discrete Fourier Transform (DFT) matrix is orthogonal). In other words, the average value of |h(ζ i )| 2 for i = 0 . . . p − 1 is h 2 2 , and so there exists a k ∈ {0, . . . , p − 1} with |h(ζ k )| 2 ≥ h 2 2 . Let θ = ζ k . Then clearly |h(θ)| ≥ h 2 . We also note that f (θ) = h(θ) r and |f (θ)| ≤ f 1 , since |θ| = 1. Thus,
h 2 ≤ |h(θ)| = |f (θ)| 1/r ≤ f 1/r 1 . 2
The following corollary is particularly useful.
Corollary 2.10. If f ∈ Z[x]
is not of the form x n , and f = h r for some h ∈ Z[x], then
(i) r ≤ 2 log 2 f 1 . (ii) τ (h) ≤ f 2/r 1 Proof. Part (i) follows since h 2 ≥ √ 2. Part (ii) follows because h 2 ≥ τ (h). 2
These bounds relate to the sparsity of f since ‖f‖_1 ≤ τ(f)‖f‖_∞.
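As a quick numerical sanity check of these bounds, one can expand an arbitrary power with a computer algebra system and compare the norms directly. A minimal Python/sympy sketch; the polynomial h and the exponent r below are arbitrary illustrative choices, not taken from the text:

```python
# Sanity check of ||h||_2 <= ||f||_1^(1/r) (Theorem 2.9) and of Corollary 2.10
# on an arbitrary example; h and r are illustrative choices only.
from math import log2, sqrt
import sympy as sp

x = sp.symbols('x')
h = 3*x**7 - 2*x**3 + 1          # an arbitrary sparse h in Z[x]
r = 3
f = sp.expand(h**r)              # f = h^r

f_coeffs = [int(c) for c in sp.Poly(f, x).all_coeffs()]
h_coeffs = [int(c) for c in sp.Poly(h, x).all_coeffs()]

norm1_f = sum(abs(c) for c in f_coeffs)              # ||f||_1
norm2_h = sqrt(sum(c*c for c in h_coeffs))           # ||h||_2
tau_h   = sum(1 for c in h_coeffs if c != 0)         # sparsity of h

assert norm2_h <= norm1_f ** (1.0 / r)               # Theorem 2.9
assert r <= 2 * log2(norm1_f)                        # Corollary 2.10 (i)
assert tau_h <= norm1_f ** (2.0 / r)                 # Corollary 2.10 (ii)
print(norm2_h, norm1_f ** (1.0 / r))
```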
Perfect Power Detection Algorithm
We can now complete the perfect power detection algorithm, when we are given only the t-sparse polynomial f (and not r).
Algorithm IsPerfectPowerZ
Input: f ∈ Z[x] of degree n and sparsity t ≥ 2, ǫ ∈ R_{>0}
Output: True and r if f = h^r for some h ∈ Z[x]; False otherwise.
1: P ← {primes r : r | n and r ≤ 2 log_2(t‖f‖_∞)}
2: for r ∈ P do
3:   if IsPerfectRthPowerZ(f, r, ǫ/#P) then
4:     return True and r
5: return False
Theorem 2.11. If f ∈ Z[x] is equal to h^r for some h ∈ Z[x], the algorithm IsPerfectPowerZ always returns "True" and returns r correctly with probability at least 1 − ǫ. Otherwise, it returns "False" with probability at least 1 − ǫ. The algorithm requires O˜(t log^2‖f‖_∞ · log^2(n) · log(1/ǫ)) bit operations.
Proof. From the preceding discussions, we can see that if f is a perfect power, then it must be a perfect rth power for some r ∈ P. So the algorithm must return true on some iteration of the loop. However, it may incorrectly return true too early for an r such that f is not actually an rth power; the probability of this occurring is the probability of error when f is not a perfect power, and is less than ǫ/#P at each iteration. So the probability of error over all iterations is at most ǫ, which is what we wanted.
The complexity result follows from the fact that each r ∈ O(log t + log‖f‖_∞) and using Corollary 2.8. □
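For concreteness, the driver loop can be sketched in a few lines of Python. The inner routine is_perfect_rth_power_Z below is only a stand-in for IsPerfectRthPowerZ: it decides the question deterministically by factoring with sympy, which illustrates the control flow but not the randomized modular-evaluation test that gives the stated complexity.

```python
# Sketch of the IsPerfectPowerZ driver loop.  The inner test is a deterministic
# placeholder based on factoring; the paper's IsPerfectRthPowerZ instead uses
# random evaluations modulo random primes.
from math import log2
import sympy as sp

x = sp.symbols('x')

def is_perfect_rth_power_Z(f, r):
    # Placeholder: f is an rth power iff its content is an rth power (positive
    # content assumed for simplicity) and every irreducible factor occurs with
    # multiplicity divisible by r.
    const, factors = sp.factor_list(sp.Poly(f, x))
    return sp.root(const, r).is_rational and all(e % r == 0 for _, e in factors)

def is_perfect_power_Z(f):
    poly = sp.Poly(f, x)
    n = poly.degree()
    coeffs = [int(c) for c in poly.all_coeffs() if c != 0]
    t, height = len(coeffs), max(abs(c) for c in coeffs)
    bound = 2 * log2(t * height)                 # r <= 2 log2(t * ||f||_inf)
    for r in [p for p in sp.primefactors(n) if p <= bound]:
        if is_perfect_rth_power_Z(f, r):
            return True, r
    return False, None

print(is_perfect_power_Z(sp.expand((x**10 + 3*x**3 - 1)**6)))   # -> (True, 2)
```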
For polynomials in F q [x] we use Schinzel's bound that r ≤ t − 1 and obtain the following algorithm.
Algorithm IsPerfectPowerGF
Input: f ∈ F_q[x] of degree n and sparsity t, where the characteristic of F_q is greater than n, and ǫ ∈ R_{>0}
Output: True and r if f = h^r for some h ∈ F_q[x]; False otherwise.
1: P ← {primes r : r | n and r ≤ t}
2: for r ∈ P do
3:   if IsPerfectRthPowerGF(f, r, ǫ/#P) then
4:     return True and r
5: return False
Theorem 2.12. If f = h^r for h ∈ F_q[x], the algorithm IsPerfectPowerGF always returns "True" and returns r correctly with probability at least 1 − ǫ. Otherwise, it returns "False" with probability at least 1 − ǫ. The algorithm requires O˜(t^3 (log q + log n)) operations in F_q.

Proof. The proof is equivalent to that of Theorem 2.11, using the complexity bounds in Corollary 2.4. □
Detecting multivariate perfect powers
In this subsection we examine the problem of detecting multivariate perfect powers. That is, given a lacunary f ∈ F[x 1 , . . . , x ℓ ] of total degree n as in (1.1), we want to determine if f = h r for some h ∈ F[x 1 , . . . , x ℓ ] and r ∈ N. This is done simply as a reduction to the univariate case.
First, given f ∈ F[x_1, . . . , x_ℓ], define the squarefree part f̃ ∈ F[x_1, . . . , x_ℓ] as the squarefree polynomial of highest total degree which divides f.

Lemma 2.13. Let f ∈ F[x_1, . . . , x_ℓ] be of total degree n > 0 and let f̃ ∈ F[x_1, . . . , x_ℓ] be the squarefree part of f. Define ∆ = disc_x(f̃(y_1x, . . . , y_ℓx)) = res_x(f̃(y_1x, . . . , y_ℓx), f̃′(y_1x, . . . , y_ℓx)) ∈ F[y_1, . . . , y_ℓ] and Λ = lcoeff_x(f(y_1x, . . . , y_ℓx)) ∈ F[y_1, . . . , y_ℓ] for independent indeterminates x, y_1, . . . , y_ℓ. Assume that a_1, . . . , a_ℓ ∈ F with ∆(a_1, . . . , a_ℓ) ≠ 0 and Λ(a_1, . . . , a_ℓ) ≠ 0. Then f(x_1, . . . , x_ℓ) is a perfect power if and only if f(a_1x, . . . , a_ℓx) ∈ F[x] is a perfect power.
Proof. Clearly if f is a perfect power, then f (a 1 x, . . . , a ℓ x) is a perfect power. To prove the converse, assume that
f = f_1^{s_1} f_2^{s_2} · · · f_m^{s_m} for irreducible f_1, . . . , f_m ∈ F[x_1, . . . , x_ℓ].
Then f(y_1x, . . . , y_ℓx) = f_1(y_1x, . . . , y_ℓx)^{s_1} · · · f_m(y_1x, . . . , y_ℓx)^{s_m} and each of the f_i(y_1x, . . . , y_ℓx) is irreducible. Now, since Λ(a_1, . . . , a_ℓ) ≠ 0, we know that deg(f(a_1x, . . . , a_ℓx)) = deg f (the total degree of f). Thus, deg f_i(a_1x, . . . , a_ℓx) = deg f_i for 1 ≤ i ≤ m as well. Also, by our assumption, disc(f̃(a_1x, . . . , a_ℓx)) ≠ 0, so all of the f_i(a_1x, . . . , a_ℓx) are squarefree and pairwise relatively prime for 1 ≤ i ≤ m, and f(a_1x, . . . , a_ℓx) = f_1(a_1x, . . . , a_ℓx)^{s_1} · · · f_m(a_1x, . . . , a_ℓx)^{s_m}.
Assume now that f(a_1x, . . . , a_ℓx) is an rth perfect power. Then r divides s_i for 1 ≤ i ≤ m. This immediately implies that f itself is an rth perfect power. □
It is easy to see that the total degree of ∆ is less than 2n 2 and the total degree of Λ is less than n, and that both ∆ and Λ are non-zero. Thus, for randomly chosen a 1 , . . . , a ℓ from a set S ⊆ F of size at least 8n 2 + 4n we have ∆(a 1 , . . . , a ℓ ) = 0 or Λ(a 1 , . . . , a ℓ ) = 0 with probability less than 1/4, by Zippel (1979) or Schwartz (1980). This can be made arbitrarily small by increasing the set size and/or repetition. We then run the appropriate univariate algorithm over F[x] (depending upon the field) to identify whether or not f is a perfect power, and if so, to find r.
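The substitution step is easy to sketch with sympy. In the sketch below the bivariate polynomial and the size of the evaluation set are arbitrary illustrative choices; the univariate test applied afterwards is whichever of the algorithms above matches the coefficient field.

```python
# Reduction of multivariate perfect-power detection to the univariate case:
# substitute x_i -> a_i*x for random a_i and keep the point only if the total
# degree is preserved and the squarefree part stays squarefree (Lemma 2.13).
# The example polynomial is an arbitrary bivariate perfect square.
import random
import sympy as sp

x, x1, x2 = sp.symbols('x x1 x2')
f = sp.expand((x1**3 + x1*x2 + 2*x2**2)**2)
n = sp.Poly(f, x1, x2).total_degree()

S = range(1, 8*n**2 + 4*n + 1)               # evaluation set of size 8n^2 + 4n
a1, a2 = random.choice(S), random.choice(S)

ftilde = sp.sqf_part(f)                                  # squarefree part of f
fa  = sp.expand(f.subs({x1: a1*x, x2: a2*x}))            # univariate image of f
fta = sp.expand(ftilde.subs({x1: a1*x, x2: a2*x}))       # image of the squarefree part

good = sp.degree(fa, x) == n and sp.discriminant(fta, x) != 0
if good:
    # hand fa to the univariate test (IsPerfectPowerZ here, since f is over Z)
    print("good evaluation point:", (a1, a2))
```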
Computing perfect roots
Once we have determined that f ∈ F[x] is equal to h r for some h ∈ F[x], the next task is to actually compute h. Unfortunately, as noted in the introduction, there are no known bounds on τ (h) which are polynomial in τ (f ).
The question of how sparse the polynomial root of a sparse polynomial must be (or equivalently, how dense any power of a dense polynomial must be) relates to some questions first raised by Erdös (1949) on the number of terms in the square of a polynomial. Schinzel extended this work to the case of perfect powers and proved that τ(h^r) tends to infinity as τ(h) tends to infinity (Schinzel, 1987). Some conjectures of Schinzel suggest that τ(h) should be O(τ(f)). A recent breakthrough of Zannier (2007) shows that τ(h) is bounded by a function which does not depend on deg f, but this bound is unfortunately not polynomial in τ(f).
However, our own (limited) investigations, along with more extensive ones by Coppersmith and Davenport (1991), and later Abbott (2002), suggest that, for any h ∈ F[x], where the characteristic of F is not too small, τ (h) ∈ O(τ (h r ) + r). We skirt this problem here by simply making our algorithms output sensitive; the time required is polynomial in the lacunary size of the input and the output.
Computing rth roots in polynomial-time (without conditions)
In this subsection we present an algorithm for computing an h such that f = h r given f ∈ Z[x] and r ∈ Z and assuming that such an h exists. The algorithm requires time polynomial in t = τ (f ), log deg f , log f ∞ and a given upper bound µ ≥ m = τ (h). It is not conditional on any conjectures, but is probabilistic of the Monte Carlo type. That is, the computed polynomial h is such that h r = f with high probability. We will only demonstrate that this algorithm requires polynomial time. A more detailed analysis is performed on the (more efficient) algorithm of the next subsection (though that complexity is subject to a modest conjecture).
The basic idea of the algorithm here is that we can recover all the coefficients in Q, as well as modular information about the exponents of h, from a homomorphism into a small cyclotomic field over Q. Doing this for a relatively small number of cyclotomic fields yields h.
Assume that (the unknown) h ∈ Z[x] has form
h = Σ_{1≤i≤m} b_i x^{d_i}, for b_1, . . . , b_m ∈ Z\{0} and 0 ≤ d_1 < d_2 < · · · < d_m,
and that p > 2 is a prime distinct from r such that
p ∤ ∏_{1≤i<j≤m} (d_j − d_i), and p ∤ ∏_{1≤i≤m} (d_i + 1).
Let ζ p ∈ C be a pth primitive root of unity, and Φ p = 1 + z + · · · + z p−1 ∈ Z[z] its minimal polynomial, the pth cyclotomic polynomial (which is irreducible in Q[z]).
Computationally we represent Q(ζ_p) as Q[z]/(Φ_p), with ζ_p ≡ z mod Φ_p. Observe that ζ_p^k = ζ_p^{k rem p} for any k ∈ Z, where k rem p is the least non-negative residue of k modulo p. Thus h(ζ_p) = h_p(ζ_p) for
h_p = Σ_{1≤i≤m} b_i x^{d_i rem p} ∈ Z[x],
and h_p is the unique representation of h(ζ_p) as a polynomial of degree less than p − 1. By our choice of p, none of the exponents of h are equivalent modulo p and all the exponents reduced modulo p are strictly less than p − 1 (since our conditions imply d_i ≢ (p − 1) mod p for 1 ≤ i ≤ m). This also implies that the coefficients of h_p are exactly the same as those of h, albeit in a different order. Now observe that we can determine h_p quite easily from the roots of
Γ_p(y) = y^r − f(ζ_p) ∈ Q(ζ_p)[y].
These roots can be found by factoring the polynomial Γ p (y) in Q(ζ p )[y], and the roots in C must be ω i h(ζ p ) ∈ C for 0 ≤ i < r, where ω is a primitive rth root of unity. When r > 2, and since we chose p distinct from r, the only rth root of unity in Q(ζ p ) is 1. Thus Γ p (y) has exactly one linear factor, and this must equal to y − h(ζ p ) = y − h p (ζ p ), precisely determining h p . When r = 2, we have
Γ p (y) = (y − h(ζ p ))(y + h(ζ p )) = (y − h p (ζ p ))(y + h p (ζ p ))
and we can only determine h p (ζ p ) (and h p and, for that matter, h) up to a factor of ±1. However, the exponents of h p and −h p are the same, and the ambiguity is only in the coefficients (which we resolve later). Finally, we need to perform the above operations for a sequence of cyclotomic fields Q(ζ p1 ), Q(ζ p2 ), . . . , Q(ζ p k ) such that the primes in P = {p 1 , . . . , p k } allow us to recover all the exponents in h. Each prime p ∈ P gives the set of exponents of h reduced modulo that prime, as well as all the coefficients of h in Z. That is, from each of the computations with p ∈ P we obtain C = {b 1 , . . . , b m } and E p = {d 1 rem p, d 2 rem p, . . . , d rem p} , but with no clear information about the order of these sets. In particular, it is not obvious how to correlate the exponents modulo the different primes directly. To do this we employ the clever sparse interpolation technique of Garg and Schost (2008) (based on a method of Grigoriev and Karpinski (1987) for a different problem), which interpolates the symmetric polynomial in the exponents:
g = (x − d 1 )(x − d 2 ) · · · (x − d m ) ∈ Z[x].
For each p ∈ P we compute the symmetric polynomial modulo p,
g p = (x − (d 1 rem p))(x − (d 2 rem p)) · · · (x − (d m rem p)) ≡ g mod p,
for which we do not need to know the order of the exponent residues. We then determine g ∈ Z[x] by the Chinese remainder theorem and factor g over Z[x] to find d_1, . . . , d_m ∈ Z. Thus the product of all primes p ∈ P must be at least 2‖g‖_∞ to recover the coefficients of g uniquely. It is easily seen that 2‖g‖_∞ ≤ 2n^m.
As noted above, the computation with each p ∈ P recovers all the coefficients of h in Z, so using only one prime p ∈ P, we determine the coefficient of h corresponding to the exponent d_j as the coefficient of x^{d_j rem p} in h_p, for 1 ≤ j ≤ m. If r = 2 we can choose either of the roots of Γ_p(y) (they differ by only a sign) to recover the coefficients of h.
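The exponent-recovery step (the Garg-Schost technique) is easy to prototype: build each g_p from the unordered residue set, Chinese-remainder the coefficients, and read the d_j off the integer roots of g. A small sympy sketch, with arbitrary illustrative exponents and primes (chosen so that no pairwise difference d_j − d_i is divisible by any of them and the product of the primes exceeds 2‖g‖_∞):

```python
# Recovering the exponents d_1 < ... < d_m of h from their unordered residues
# modulo several primes, via g = prod (x - d_i).  Exponents and primes are
# arbitrary illustrative values satisfying the "good prime" conditions.
import sympy as sp
from sympy.ntheory.modular import crt

x = sp.symbols('x')
true_exponents = [0, 5, 19, 42]                 # unknown to the algorithm
primes = [11, 13, 17, 29]                       # product > 2*||g||_inf

# what the algorithm sees: for each p, only the unordered set {d_i rem p}
residue_sets = [sorted(d % p for d in true_exponents) for p in primes]

m = len(true_exponents)
g_mod = [sp.Poly(sp.prod(x - e for e in res), x) for res in residue_sets]

# Chinese-remainder each coefficient of g (all g_p are monic of degree m)
coeffs = []
for j in range(m + 1):
    vals = [int(gp.all_coeffs()[j]) % p for gp, p in zip(g_mod, primes)]
    c, M = crt(primes, vals)
    c = int(c)
    if c > M // 2:                              # lift to the symmetric range
        c -= int(M)
    coeffs.append(c)

g = sp.Poly(coeffs, x)
recovered = sorted(int(root) for root in sp.roots(g).keys())
assert recovered == sorted(true_exponents)
print("recovered exponents:", recovered)
```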
The above discussion is summarized in the following algorithm.
Algorithm ComputeRootAlgebraic
Input: f ∈ Z[x] as in (1.2) and r, µ ∈ N.
Output: h ∈ Z[x] such that f = h^r and τ(h) ≤ µ, provided such an h exists.
1: γ ← smallest integer such that 2γ/(5 log γ) ≥ 10µ^2 (log_2 n)(1 + µ log_2 n).
2: P ← set of k > m log_2 n primes chosen uniformly at random from {γ, . . . , 2γ}.
3: for p ∈ P do
4:   Represent Q(ζ_p) by Q[z]/(Φ_p), where Φ_p = 1 + z + · · · + z^{p−1} and ζ_p ≡ z mod Φ_p.
5:   Compute f(ζ_p) = Σ_{1≤i≤t} c_i ζ_p^{e_i rem p} ∈ Q(ζ_p).
6:   h_p ← root of Γ_p = y^r − f(ζ_p) in Q(ζ_p), found by factoring Γ_p over Q(ζ_p)[y].
7:   if deg h_p ≥ p − 1 or h_p has non-integer coefficients then
8:     return FAIL
9:   Write h_p ∈ Z[x] as Σ_{1≤i≤m} b_{ip} x^{d_{ip}}.
10:  if m differs from previous values of m then
11:    return FAIL
12:  g_p ← (x − d_{1p})(x − d_{2p}) · · · (x − d_{mp}) ∈ Z_p[x].
13: Reconstruct g ∈ Z[x] from {g_p}_{p∈P} by the Chinese remainder algorithm.
14: {d_1, d_2, . . . , d_m} ← distinct integer roots of g ∈ Z[x].
15: Choose any p ∈ P. For 1 ≤ j ≤ m, let b_j ∈ Z be the coefficient of x^{d_j rem p} in h_p.
16: Return h = Σ_{1≤j≤m} b_j x^{d_j}.
Theorem 3.1. The algorithm ComputeRootAlgebraic works as stated. It is probabilistic of the Monte Carlo type and returns the correct answer with probability at least 9/10 on any execution. It requires a number of bit operations polynomial in t = τ(f), log deg f, log‖f‖_∞, and µ.
Proof. In Step 1 we need to choose a set of primes P which are all good with sufficiently high probability, in the sense that for all p ∈ P,
β = r · ∏_{1≤i<j≤m} (d_j − d_i) · ∏_{1≤i≤m} (d_i + 1) ≢ 0 mod p.
It is easily derived that β ≤ n^{µ^2}, which has fewer than log_2 β ≤ µ^2 log_2 n prime factors. We also need to recover g in Step 13, and ‖g‖_∞ ≤ n^µ, so we need at least 1 + log_2‖g‖_∞ ≤ 1 + µ log_2 n primes. Thus, if {γ, . . . , 2γ} contains at least 10µ^2 (log_2 n)(1 + µ log_2 n) primes, the probability of choosing a bad prime is at most 1/(10(1 + µ log_2 n)). The probability of choosing a bad prime with (1 + µ log_2 n) choices is at most 1/10, and the probability that all the primes are good is at least 9/10. Numbers are chosen uniformly and randomly from {γ, . . . , 2γ} and tested for primality, say by Agrawal et al. (2004). Correctness of the remainder of the algorithm follows from the previous discussion. Factoring the polynomials Γ_p ∈ Q(ζ_p)[y] can be performed in polynomial time with the algorithm of, for example, Landau (1985), and all other steps clearly require polynomial time as well. □
Faster root computation subject to conjecture
Algorithm ComputeRootAlgebraic is probabilistic of the Monte Carlo type and not of the Las Vegas type because we have no way of certifying the output -i.e. that h r = f for given lacunary h, f ∈ Z[x] -in polynomial time. One way to accomplish this would be to simply compute h r by repeated squaring and comparing the result to f , but to do so in polynomial time would require bounds on the sparsity of each intermediate power τ (h i ) for 2 ≤ i < r based on τ (h) and τ (f ).
In fact, with such sparsity bounds we can actually derive a deterministic algorithm based on Newton iteration. This approach does not rely on advanced techniques such as factoring over algebraic extension fields, and hence will be much more efficient in practice. It is also more general as it applies to fields other than Z and to powers r which are not prime.
Unfortunately, this algorithm is not purely output-sensitive, as it relies on the following conjecture regarding the sparsity of powers of h:
Conjecture 3.2. For r, s ∈ N, if the characteristic of F is zero or greater than rs, and
h ∈ F[x] with deg h = s, then τ(h^i mod x^{2s}) < τ(h^r mod x^{2s}) + r for i = 1, 2, . . . , r − 1.
This corresponds to intuition and experience, as the system is still overly constrained with only s degrees of freedom. A weaker conjecture would suffice to prove polynomial time, but we use the stated bounds as we believe these give more accurate complexity measures.
Our algorithm is essentially a Newton iteration, with special care taken to preserve sparsity. We start with the image of h modulo x, using the fact that f(0) = h(0)^r, and at step i = 1, 2, . . . , ⌈log_2(deg h + 1)⌉ we compute the image of h modulo x^{2^i}.
Here, and for the remainder of this section, we will assume that f, h ∈ F[x] with degrees n and s respectively such that f = h r for r ∈ N at least 2, and that the characteristic of F is either zero or greater than n. As usual, we define t = τ (f ). We require the following simple lemma.
Lemma 3.3. * Let k, ℓ ∈ N such that ℓ ≤ k and k + ℓ ≤ s, and suppose h_1 ∈ F[x] is the unique polynomial with degree less than k satisfying h_1^r ≡ f mod x^k. Then τ(h_1^{r+1} mod x^{k+ℓ}) ≤ 2t(t + r).
Proof. Let h_2 ∈ F[x] be the unique polynomial of degree less than ℓ satisfying h_1 + h_2 x^k ≡ h mod x^{k+ℓ}. Since h^r = f,
f ≡ h_1^r + r h_1^{r−1} h_2 x^k mod x^{k+ℓ}.
Multiplying by h_1 and rearranging gives
h_1^{r+1} ≡ h_1 f − r f h_2 x^k mod x^{k+ℓ}.
Because h_1 mod x^k and h_2 mod x^ℓ each have at most τ(h) terms, which by Conjecture 3.2 is less than t + r, the total number of terms in h_1^{r+1} mod x^{k+ℓ} is less than 2t(t + r). □
This essentially tells us that the "error" introduced by examining higher-order terms of h r 1 is not too dense. It leads to the following algorithm for computing h.
Algorithm ComputeRootNewton
Input: f ∈ F[x], r ∈ N such that f is a perfect rth power
Output: h ∈ F[x] such that f = h^r
1: u ← highest power of x dividing f
2: f_u ← coefficient of x^u in f
3: g ← f/(f_u x^u)
4: h ← 1, k ← 1
5: while kr ≤ deg g do
6:   ℓ ← min{k, (deg g)/r + 1 − k}
7:   a ← (hg − h^{r+1} mod x^{k+ℓ}) / (r x^k)
8:   h ← h + (a/g mod x^ℓ) x^k
9:   k ← k + ℓ
10: b ← any rth root of f_u in F
11: return b h x^{u/r}

Theorem 3.4. If f ∈ F[x] is a perfect rth power, then ComputeRootNewton returns an h ∈ F[x] such that h^r = f.
Proof.
Let u, f_u, g be as defined in Steps 1-4. Thus f = f_u g x^u. Now let ĥ be some rth root of f, which we assume exists. If we similarly write ĥ = ĥ_v ĝ x^v, with ĥ_v ∈ F and ĝ ∈ F[x] such that ĝ(0) = 1, then ĥ^r = ĥ_v^r ĝ^r x^{vr}. Therefore f_u must be a perfect rth power in F, r | u, and g is a perfect rth power in F[x] of some polynomial with constant coefficient equal to 1.
Denote by h i the value of h at the beginning of the ith iteration of the while loop. So h 1 = 1. We claim that at each iteration through Step 6, h r i ≡ g mod x k . From the discussion above, this holds for i = 1. Assuming the claim holds for all i = 1, 2, . . . , j, we prove it also holds for i = j + 1.
From Step 8, h_{j+1} = h_j + (a/g mod x^ℓ) x^k, where a is as defined on the jth iteration of Step 7. We observe that
h_j h_{j+1}^r ≡ h_j^{r+1} + r h_j^r (a/g mod x^ℓ) x^k mod x^{k+ℓ}.
From our assumption, h_j^r ≡ g mod x^k, and ℓ ≤ k, so we have
h_j h_{j+1}^r ≡ h_j^{r+1} + r a x^k ≡ h_j^{r+1} + h_j g − h_j^{r+1} ≡ h_j g mod x^{k+ℓ}.
Therefore h_{j+1}^r ≡ g mod x^{k+ℓ}, and so by induction the claim holds at each step. Since the algorithm terminates when kr > deg g, we can see that the final value of h is an rth root of g. Finally, (b h x^{u/r})^r = f_u g x^u = f, so the theorem holds. □

Theorem 3.5. † If f ∈ F[x] has degree n and t nonzero terms, then ComputeRootNewton uses O((t + r)^4 log r log n) operations in F and an additional O((t + r)^4 log r log^2 n) bit operations, not counting the cost of root-finding in the base field F on Step 10.
Proof. First consider the cost of computing h^{r+1} in Step 7. This will be accomplished by repeatedly squaring and multiplying by h, for a total of at most 2⌊log_2(r + 1)⌋ multiplications. As well, each intermediate product will have at most τ(f) + r < (t + r)^2 terms, by Conjecture 3.2. The number of field operations required, at each iteration, is O((t + r)^4 log r), for a total cost of O((t + r)^4 log r log n).
Furthermore, since k + ℓ ≤ 2^i at the ith step, for 1 ≤ i < log_2 n, the total cost in bit operations is less than
Σ_{1≤i<log_2 n} (t + r)^4 (log_2 r) i ∈ O((t + r)^4 log r log^2 n).
In fact, this is the most costly step. The initialization in Steps 1-4 uses only O(t) operations in F and on integers at most n. And the cost of computing the quotient on
Step 8 is proportional to the cost of multiplying the quotient and dividend, which is at most O(t(t + r)). 2
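Before turning to coefficient growth, here is a minimal Python sketch of the Newton loop above, over Q with exact rationals. For clarity it uses dense truncated (power-series) arithmetic, deliberately ignoring the sparsity bookkeeping that the complexity analysis relies on; the float-based rth root of f_u at the end is a convenience assumption, checked by an assert.

```python
# Dense sketch of ComputeRootNewton over Q: given f = h^r (as a list of
# Fractions in ascending order of degree), recover h.  Sparsity is ignored.
from fractions import Fraction

def mul_trunc(a, b, m):
    """Product of polynomials a and b, truncated modulo x^m."""
    out = [Fraction(0)] * m
    for i, ai in enumerate(a[:m]):
        if ai:
            for j, bj in enumerate(b[:m - i]):
                out[i + j] += ai * bj
    return out

def inv_trunc(g, m):
    """Power-series inverse of g (g[0] != 0) modulo x^m."""
    inv = [Fraction(0)] * m
    inv[0] = Fraction(1) / g[0]
    for k in range(1, m):
        s = sum(g[j] * inv[k - j] for j in range(1, min(k, len(g) - 1) + 1))
        inv[k] = -s / g[0]
    return inv

def rth_root(f, r):
    u = next(i for i, c in enumerate(f) if c != 0)   # steps 1-3: normalize
    f_u = f[u]
    g = [c / f_u for c in f[u:]]
    deg_g = len(g) - 1

    h, k = [Fraction(1)], 1
    while k * r <= deg_g:                            # steps 5-9: Newton loop
        l = min(k, deg_g // r + 1 - k)
        hp = list(h)
        for _ in range(r):                           # h^(r+1) mod x^(k+l)
            hp = mul_trunc(hp, h, k + l)
        hg = mul_trunc(h, g, k + l)
        a = [(hg[i] - hp[i]) / r for i in range(k, k + l)]   # (hg - h^(r+1))/(r x^k)
        q = mul_trunc(a, inv_trunc(g, l), l)                 # a/g mod x^l
        h = h + [Fraction(0)] * (k + l - len(h))
        for i in range(l):
            h[k + i] += q[i]
        k += l

    num = round(f_u.numerator ** (1.0 / r))          # step 10: an rth root of f_u
    den = round(f_u.denominator ** (1.0 / r))
    b = Fraction(num, den)
    assert b ** r == f_u, "f_u is assumed to be an rth power in Q"
    return [Fraction(0)] * (u // r) + [b * c for c in h]     # b * h * x^(u/r)

f = [Fraction(c) for c in [0, 0, 9, 36, 36]]   # 9x^2 + 36x^3 + 36x^4 = (3x + 6x^2)^2
print(rth_root(f, 2))                          # coefficients of 3x + 6x^2
```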
When F = Q, we must account for coefficient growth. We use the normal notion of the size of a rational number: For α ∈ Q, write α = a/b for a, b relatively prime integers. Then define H(α) = max{|a|, |b|}. And for f ∈ Q[x] with coefficients c 1 , . . . , c t ∈ Q, write H(f ) = max H(c i ).
Thus, the size of the lacunary representation of f ∈ Q[x] is proportional to τ (f ), deg f , and log H(f ). Now we prove the bit complexity of our algorithm is polynomial in these values, when F = Q.
Theorem 3.6. † Suppose f ∈ Q[x] has degree n and t nonzero terms, and is a perfect rth power. ComputeRootNewton computes an rth root of f using O˜(t(t + r)^4 · log n · log H(f)) bit operations.
Proof. Let h ∈ Q[x] such that h r = f , and let c ∈ Z >0 be minimal such that ch ∈ Z[x]. Gauß's Lemma tells us that c r must be the least positive integer such that c r f ∈ Z[x] as well. Then, using Theorem 2.9, we have:
H(h) ≤ ‖ch‖_∞ ≤ ‖ch‖_2 ≤ (t ‖c^r f‖_∞)^{1/r} ≤ t^{1/r} H(f)^{(t+1)/r}.
(The last inequality comes from the fact that the lcm of the denominators of f is at most H(f)^t.)
Hence log H(h) ∈ O ((t log H(f ))/r). Clearly the most costly step in the algorithm will still be the computation of h r+1 i at each iteration through Step 7. For simplicity in our analysis, we can just treat h i (the value of h at the ith iteration of the while loop in our algorithm) as equal to h (the actual root of f ), since we know τ (h i ) ≤ τ (h) and
H(h i ) ≤ H(h).
Lemma 3.3 and Conjecture 3.2 tell us that τ (h i ) ≤ 2(t + r) 2 for i = 1, 2, . . . , r. To compute h r+1 , we will actually compute (ch) r+1 ∈ Z[x] by repeatedly squaring and multiplying by ch, and then divide out c r+1 . This requires at most ⌊log 2 r + 1⌋ squares and products.
Note that ‖(ch)^{2i}‖_∞ ≤ (t + r)^2 ‖(ch)^i‖_∞^2 and ‖(ch)^{i+1}‖_∞ ≤ (t + r)^2 ‖(ch)^i‖_∞ ‖ch‖_∞. Therefore ‖(ch)^i‖_∞ ≤ (t + r)^{2r} ‖ch‖_∞^r for i = 1, 2, . . . , r, and thus log‖(ch)^i‖_∞ ∈ O(r log(t + r) + t log H(f)) for each intermediate power (ch)^i.
Thus each of the O((t + r)^4 log r) field operations at each iteration costs O(M(t log H(f) + r log(t + r))) bit operations, which then gives the stated result. □
The method used for Step 10 depends on the field F. For F = Q, we just need to find two integer perfect roots, which can be done in "nearly linear" time by the algorithm of Bernstein (1998). Otherwise, we can use any of the well-known fast root-finding methods over F[x] to compute a root of x r − f u .
Implementation
To investigate the practicality of our algorithms, we implemented IsPerfectPowerZ using Victor Shoup's NTL. This is a high-performance C++ library for fast dense univariate polynomial computations over Z[x] or F_q[x].
NTL does not natively support a lacunary polynomial representation, so we wrote our own using vectors of coefficients and of exponents. In fact, since IsPerfectPowerZ is a black-box algorithm, the only sparse polynomial arithmetic we needed to implement was for evaluation at a given point.
The only significant diversion between our implementation and the algorithm specified in Section 2 is our choice of the ground field. Rather than working in a degree-(r − 1) extension of F_p, we simply find a random p in the same range such that r | (p − 1). It is more difficult to prove that we can find such a p quickly (using e.g. the best known bounds on Linnik's Constant), but in practice this approach is very fast because it avoids computing in field extensions.
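Such a prime can be found by simple rejection sampling; a sketch (assuming the intended condition is that F_p contain the rth roots of unity, i.e. r | p − 1; the bit-size range below is an arbitrary illustrative choice, and randprime is sympy's random-prime generator):

```python
# Pick a random prime p with r | (p - 1), so that F_p already contains the
# rth roots of unity and no extension-field arithmetic is needed.
import sympy as sp

def random_prime_with_rth_roots(r, lo=2**59, hi=2**60):
    while True:                      # about r trials on average
        p = sp.randprime(lo, hi)
        if (p - 1) % r == 0:
            return p

print(random_prime_with_rth_roots(5))
```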
As a point of comparison, we also implemented the Newton iteration approach to computing perfect polynomial roots, which appears to be the fastest known method for dense polynomials. This is not too dissimilar from the techniques from the previous section on computing a lacunary rth root, but without paying special attention to sparsity. We work modulo a randomly chosen prime p to compute an rth perfect root h, and then use random evaluations of h and the original input polynomial f to certify correctness. This yields a Monte Carlo algorithm with the same success probability as ours, and so provides a suitable and fair comparison.
We ran two sets of tests comparing these algorithms. The first set, depicted in Figure 1, does not take advantage of sparsity at all; that is, the polynomials are dense and have close to the maximal number of terms. It appears that the worst-case running time of our algorithm is actually a bit better than the Newton iteration method on dense input, but on the average they perform roughly the same. The lower triangular shape comes from the fact that both algorithms can (and often do) terminate early. The visual gap in the timings for the sparse algorithm comes from the fact that exactly half of the input polynomials were perfect powers. It appears our algorithm terminates more quickly when the polynomial is not a perfect power, but usually takes close to the full amount of time otherwise.
The second set of tests, depicted in Figure 2, held the number of terms of the perfect power, τ (f ), roughly fixed, letting the degree n grow linearly. Here we can see that, for sufficiently sparse f , our algorithm performs significantly and consistently better than the Newton iteration. In fact, we can see that, with some notable but rare exceptions, it appears that the running time of our algorithm is largely independent of the degree when the number of terms remains fixed. The outliers we see probably come from inputs that were unluckily dense (it is not trivial to produce examples of h r with a given fixed number of nonzero terms, so the sparsity did vary to some extent).
Perhaps most surprisingly, although the choices of parameters for these two algorithms only guaranteed a probability of success of at least 1/2, in fact over literally millions of tests performed with both algorithms and a wide range of input polynomials, not a single failure was recorded. This is of course due to the loose bounds employed in our analysis, indicating a lack of understanding at some level, but it also hints at the possibility of a deterministic algorithm, or at least one which is probabilistic of the Las Vegas type.
Both implementations are available as C++ code downloadable from the second author's website.
| 10,021 |
0901.1062
|
2950105700
|
Biometrics make human identification possible with a sample of a biometric trait and an associated database. Classical identification techniques lead to privacy concerns. This paper introduces a new method to identify someone using his biometrics in an encrypted way. Our construction combines Bloom Filters with Storage and Locality-Sensitive Hashing. We apply this error-tolerant scheme, in a Hamming space, to achieve biometric identification in an efficient way. This is the first non-trivial identification scheme dealing with fuzziness and encrypted data.
|
Security of biometric systems is widely studied -- cf. @cite_30 @cite_35 @cite_43 -- and although a lot of vulnerabilities are now well understood and controlled, it is still difficult to achieve an end-to-end system which satisfies all constraints. In particular, biometric template privacy is an important issue due to the non-revocability and non-renewability of biometric features.
|
{
"abstract": [
"",
"A biometric system is vulnerable to a variety of attacks aimed at undermining the integrity of the authentication process. These attacks are intended to either circumvent the security afforded by the system or to deter the normal functioning of the system. We describe the various threats that can be encountered by a biometric system. We specifically focus on attacks designed to elicit information about the original biometric data of an individual from the stored template. A few algorithms presented in the literature are discussed in this regard. We also examine techniques that can be used to deter or detect these attacks. Furthermore, we provide experimental results pertaining to a hybrid system combining biometrics with cryptography, that converts traditional fingerprint templates into novel cryptographic structures.",
"Abstract Biometrics authentication offers many advantages over conventional authentication systems that rely on possessions or special knowledge. With conventional technology, often the mere possession of an employee ID card is proof of ID, while a password potentially can be used by large groups of colleagues for long times without change. The fact that biometrics authentication is non-repudiable (hard to refute) and, yet, convenient, is among its most important advantages. Biometrics systems, however, suffer from some inherent biometrics-specific security threats. These threats are mainly related to the use of digital signals and the need for additional input devices, though we also discuss brute-force attacks of biometrics systems. There are also problems common to any pattern recognition system. These include “wolves” and “lambs”, and a new group we call “chameleons”. An additional issue with the use of biometrics is the invasion of privacy because the user has to enroll with an image of a body part. We discuss these issues and suggest some methods for mitigating their impact."
],
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_43"
],
"mid": [
"",
"2096521846",
"2015867063"
]
}
|
Identification with Encrypted Biometric Data *
|
The rise of biometric recognition systems is based on the uniqueness of some natural information every human being carries along. For instance, it is possible to verify that a given individual is the one he claims to be (Verification). It is also possible to find someone's identity among a collection thanks to his biometrics (Identification).
In this paper, we design a biometric identification system that is based on encrypted data, so that privacy is guaranteed, and in a way that does not take too much time and memory to process. For that purpose, we need to find a way to:
• mitigate the effects of biometrics fuzziness,
• and efficiently identify someone over an encrypted database.
It follows the idea of searchable encryption and we here explain how to make efficient queries to the database, that look for a pattern close to a given one in encrypted data, i.e. a search with error-tolerance.
Biometrics and Cryptography
A specific difficulty concerning biometrics is their fuzziness. It is nearly impossible for a sensor to obtain the same image of a biometric trait twice: there will always be significant differences. The classical way to overcome variations between different captures is to use a matching function, which basically tells whether two measures represent the same biometric data or not.
The integration of biometrics into cryptographic protocols is thus difficult as state-of-the-art protocols are not designed for error-tolerance and fuzziness in their inputs. The two main leads for that are achieving a good stable coding of the data or making the matching algorithm part of the protocol.
Both sides of the problem are quite hard. The extraction of a constant-length vector has been studied for the iris [18] and the fingerprint [32,48]; the result is a fixed-length bit string on which the matching is realized with the Hamming distance. Following this, we solely focus in this paper on binary biometric data compared with Hamming distance.
Most protocols involving biometric data and cryptography use Secure Sketches or Fuzzy Extractors [19,34]. These use error correction to reduce variations between the different measures, and to somehow hide the biometric data behind a random codeword, e.g. [46,39,27,9,8,11].
On the other hand, several biometrics verification protocols, e.g. [14,10,12,44,47], have proposed to embed the matching directly. They use the property of homomorphic encryption schemes to compute the Hamming distance between two encrypted templates. Some other interesting solutions based on adaptation of known cryptographic protocols are also investigated in [7,13].
The drawback with all these techniques is that they do not fit well with identification in large databases as the way to run an identification among N data would be to run almost as many authentication algorithms. As far as we know, no non-trivial protocol for biometric identification involving privacy and confidentiality features has been proposed yet.
Identification
Several algorithms have been proposed for the so-called Nearest Neighbour and Approximate Nearest Neighbour (ANN) problems. Indyk wrote a review on these topics in [29]. Recently, Hao et al. [26] demonstrated the efficiency of the ANN approach for iris biometrics where projected values of iris templates are used to speed up identification requests into a large database; indeed [26] derived a specific ANN algorithm from the iris structure and statistical properties.
However, in their construction the iris biometric data are never encrypted, and the way they boost the search for the nearest match reveals a large amount of information about sensitive data.
Our works are also influenced by the problem of finding a match on encrypted data. Boneh et al. defined the notion of Public-key encryption with Keyword Search (PEKS) [5], in which specific trapdoors are created for the lookup of keywords over public-key encrypted messages. Several other papers, e.g. [24,2,15,35,43], have also elaborated solutions in this field. However the main difference between the search for a keyword as understood by Boneh et al. [5,6] and biometric matching is that an exact match for a given bit string in the plaintext suffices for the former, but not for our motivation. For this purpose, we introduce a new model for error-tolerant search in Sec. 3 and specific functions to take into account fuzziness in Sec. 4.1.
The most significant difference here from the primitives introduced previously in [5] is that messages are no longer associated to keywords. Moreover, our primitives enable some imprecision on the message that is looked up. For example, one can imagine a mailing application, where all the mails are encrypted, and where it is possible to make queries on the mail subject. If there is a typo in the query, then looking for the correct word should also give the mail among the results -at least, we would like that to happen. Note that wildcards are not well-adapted to this kind of application, as a wildcard permits to catch errors providing that we know where it is located, whereas error-tolerance does not have this constraint.
Construction Outline
We propose to use recent advances done in the fields of similarity searching and public-key cryptography. Our technique narrows our identification to a few candidates. In a further step, we must complete it by fine-tuning the results in checking the remaining identities so that the identification request gets a definite answer.
The first step is accomplished by combining Bloom filters with localitysensitive hashing functions. Bloom filters enable to speed up the search for a specified keyword using a time-space trade-off. We use locality-sensitive hashing functions to speed the search for the (approximate-)nearest neighbour of an element in a reference set. Combining these primitives enables to efficiently use cryptographic methods on biometric templates, and to achieve error-tolerant searchable encryption.
Organization
In Section 2 we describe the biometric identification architecture that we consider and explain our security objectives to reach. Section 3 introduces the security model for the cryptographic primitives that we use, based on the new concept of Error-Tolerant Searchable Encryption. We introduce the different functions used for our proposition in Section 4. We give in Section 5 a stepby-step construction of an error-tolerant searchable scheme, together with its security analysis. Application to biometric identification is explained in Section 6 and Section 6.2 gives a practical illustration with IrisCodes. Section 7 concludes.
An additional property of symmetric privacy is analyzed in Appendix A.
2 Architecture for Biometric Identification
Introduction to Biometric Identification
For a given biometrics technology, such as the fingerprint or the iris, let B be the set of all possible corresponding biometric features -i.e. data which are captured by biometric sensors. For biometric recognition, a matching algorithm m : B × B → R is used to compute a dissimilarity score between two data. Its goal is to differentiate similar data from different ones:
Definition 1 A biometric template b ∈ B
is the result of a measurement from someone's biometrics thanks to a sensor. For a specific user whose biometrics is β, we note b ← β the fact that b is a measure of β. Two different measures of the same user b, b ′ ← β have with high probability a small score
m(b, b ′ ); measures of different users b 1 ← β 1 , b 2 ← β 2 have a large value m(b 1 , b 2 ).
In practice, some thresholds λ min , λ max are chosen and the score is considered as small (resp. large) if it is less (resp. greater) than the threshold λ min (resp. λ max ). This score is usually enough to determine with some precision if two measures correspond to the same user or not. Errors, called False Reject and False Acceptance, are possible but this problem is outside the scope of our paper.
In the following, we restrict ourselves to B = {0, 1} N equipped with the Hamming distance d. A biometric template b ∈ B is the result of a measurement from someone's biometrics thanks to a sensor. Two different measures b, b ′ of the same user U are with high probability at a Hamming distance d(b, b ′ ) ≤ λ min ; measures b 1 , b 2 of different users U 1 , U 2 are at a Hamming distance d(b 1 , b 2 ) > λ max . In this case, the matching algorithm m simply consists in evaluating the Hamming distance.
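Concretely, when templates are packed into machine words, this matcher is just a popcount of a XOR. A minimal sketch; the thresholds λ_min and λ_max are application parameters:

```python
# Threshold matching of binary templates in {0,1}^N packed as Python integers.
def hamming(b1: int, b2: int) -> int:
    return (b1 ^ b2).bit_count()          # popcount of the XOR (Python >= 3.10)

def match(b1: int, b2: int, lam_min: int, lam_max: int) -> str:
    d = hamming(b1, b2)
    if d <= lam_min:
        return "same user (with high probability)"
    if d > lam_max:
        return "different users (with high probability)"
    return "inconclusive"
```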
Remark 1 For instance, iris biometric features are binary vectors of length 2048 when coded as IrisCodes following [18]. In the case of IrisCodes [18], the matching algorithm m is related to the computation of a Hamming distance between two IrisCodes.

A biometric identification system -- also called a one-to-many biometric system -- recognizes a person among a collection of templates. A system is given by a reference data set D ⊂ B and an identification function id : B → P(D). On input b_new, the system outputs a subset C of D containing biometric templates b_ref ∈ D such that the matching score between b_new and b_ref is small. This means that b_new and b_ref possibly correspond to the same person. C is the empty set ∅ if no such template can be found; the size of C depends on the accuracy of the system. With pseudo-identities (either real identities of persons or pseudonyms) registered together with the reference templates in D, the set C gives a list of candidates for the pseudo-identity of the person associated to b_new.
Architecture
Our general model for biometric identification relies on the following entities.
• Human users U i : a set of N users are registered thanks to a sample of their biometrics β i and pseudo-identities ID i , more human users U j (j > N ) represent possible impostors with biometrics β j .
• Sensor client SC: a device that extracts the biometric template from β i .
• Identity Provider IP: replies to queries sent by SC by providing an identity,
• Database DB: stores the biometric data.
Remark 2
Here the sensor client is a client which captures the raw image of a biometric data and extracts its characteristics to output a so-called biometric template. Consequently, we assume that the sensor client is always honest and trusted by all other components. Indeed, as biometrics are public information, additional credentials are always required to establish security links in order to prevent some well-known attacks (e.g. replay attacks) and to ensure that, with a high probability, the biometric template captured by the sensor and used in the system is from a living human user. In other words, we assume that it is difficult to produce a fake biometric template that can be accepted by the sensor.
In an identification system, we have two main services:
1. Enrolment registers users thanks to their physiological characteristics (for a user U_i, it requires a biometric sample b_i ← β_i and its identity ID_i).
2. Identification answers a request by returning a subset of the data which was registered.
The enrolment service can be run each time a new user has to be registered. Depending on the application, the identification service can output either the identity of the candidates or their reference templates.
As protection against outsiders, such as eavesdroppers, can be achieved with classical cryptographic techniques, our main objective is the protection of the data against insiders. In particular we assume that no attacker is able to interfere with these communications.
Informal Objectives
We here formulate the properties we would like to achieve in order to meet good privacy standards.
Condition 1 When the biometric identification system is dealing with the identification of a template b coming from the registered user U i with identity ID i , it should return a subset containing a reference to (ID i , b i ) except for a negligible probability.
Condition 2 When the system is dealing with the identification of a template b coming from an unregistered user, it should return the empty set ∅ except for a negligible probability.
We do not want a malicious database to be able to link an identity to a biometric template, nor to be able to make relations between different identities.
Condition 3
The database DB should not be able to distinguish two enrolled biometric data.
Another desired property is the fact that the database knows nothing of the identity of the user who goes through the identification process, for example, to avoid unwanted statistics.
Condition 4
The database DB should not be able to guess which identification request is executed.
Security Model for Error-Tolerant Searchable Encryption
In this section, we describe a formal model for an error-tolerant searchable encryption protocol. A specific construction fitting in this model is described in Section 5. This scheme enables to approximately search and retrieve a message stored in a database, i.e. with some error-tolerance on the request. This is in fact a problem quite close to biometric identification and the corresponding cryptographic primitives are thus used in our system, cf. Section 6.
In the sequel, we note {m, . . . , n} the set of all integers between m and n (inclusive).
Entities for the Protocol
Our primitive models the interactions between users that store and retrieve information, and a remote server. We distinguish the user who stores the data from the one who wants to get it. This leads to three entities:
• The server S: a remote storage system. As the server is untrusted, we consider the content to be public. Communications to and from this server are also subject to eavesdropping,
• The sender X incrementally creates the database, by sending data to S,
• The receiver Y makes queries to the server S.
In a latter part (Sec. 6), we integrate our cryptographic protocols into our biometric identification system. This doing, we merge the entities defined in Sec. 2.2 and those just previously introduced.
We emphasize that X and Y are not necessarily the same user, as X has full knowledge of the database he created whereas Y knows only what he receives from S.
Definition of the Primitives
In the sequel, messages are binary strings of a fixed length N , and d(x 1 , x 2 ) the Hamming Distance between
x 1 , x 2 ∈ {0, 1} N is the canonical distance, i.e. the number of positions in {1, . . . , N } in which x 1 and x 2 differ.
Here comes a formal definition of the primitives that enable to perform an error-tolerant searchable encryption; this definition cannot be parted from the definition of Completeness(λ min ) and ǫ-Soundness(λ max ), which follows.
Definition 2 A (ǫ, λ min , λ max )-Public Key Error-Tolerant Searchable Encryption is obtained with the following probabilistic polynomial-time methods:
• KeyGen(1 k ) initializes the system, and outputs public and private keys (pk, sk); k is the security parameter. The public key pk is used to store data on a server, and the secret key sk is used to retrieve information from that server.
• Send X ,S (x, pk) is a protocol in which X sends to S the data x ∈ {0, 1} N to be stored on the storage system. At the end of the protocol, S associated an identifier to x, noted ϕ(x).
• Retrieve Y,S (x ′ , sk) is a protocol in which, given a fresh data x ′ ∈ {0, 1} N , Y asks for the identifiers of all data that are stored on S and are close to x ′ , with Completeness(λ min ) and ǫ-Soundness(λ max ). This outputs a set of identifiers, noted Φ(x ′ ).
These definitions are comforted by the condition 5 of Section 3.3 that defines Completeness and ǫ-Soundness for the parameters already introduced in Section 2.1, λ min , λ max . In a few words, Completeness implies that a registered message x is indeed found if the query word x ′ is at a distance less than λ min from x, while ǫ-Soundness means that with probability greater than 1 − ǫ, no message at a distance greater than λ max from x ′ will be returned.
The Send protocol produces an output ϕ(x) that identifies the data x. This output ϕ(x) is meant to be a unique identifier, which is a binary string of undetermined length -in other words, elements of {0, 1} ⋆ -that enables to retrieve x. It can be a timestamp, a name or nickname, etc. depending on the application.
Security Requirements
First of all, it is important that the scheme actually works, i.e. that the retrieval of a message near a registered one gives the correct result. This can be formalized into the following condition:
Condition 5 (Completeness(λ_min), ǫ-Soundness(λ_max)) Let x_1, . . . , x_p ∈ B = {0, 1}^N be p different binary vectors, and let x′ ∈ B be another binary vector. Suppose that the system was initialized, that all the messages x_i have been sent by user X to the system S with identifiers ϕ(x_i), and that user Y retrieved the set of identifiers Φ(x′) associated to x′.
1. The scheme is said to be complete if the identifiers of all the x_i that are near x′ are almost all in the resulting set Φ(x′), i.e. if
η_c = Pr_{x′}[∃i s.t. d(x′, x_i) ≤ λ_min and ϕ(x_i) ∉ Φ(x′)]
is negligible.
2. The scheme is said to be ǫ-sound if the probability of finding an unwanted result in Φ(x′), i.e.
η_s = Pr_{x′}[∃i ∈ {1, . . . , p} s.t. d(x′, x_i) > λ_max and ϕ(x_i) ∈ Φ(x′)],
is bounded by ǫ.
The first condition simply means that registered data is effectively retrieved if the input is close. η c expresses the probability of failure of this Retrieve operation.
The second condition means that only the close messages are retrieved, thus limiting false alarms. η s measures the reliability of the Retrieve query, i.e. if all the results are identifiers of messages near to x ′ .
These two properties (Completeness and ǫ-Soundness) are sufficient to have a working set of primitives which allows to make approximate queries on a remote storage server. The following conditions, namely Sender Privacy and Receiver Privacy, ensure that the data stored in the server is secure, and that communications can be done on an untrusted network.
Condition 6 (Sender Privacy) The scheme is said to respect Sender Privacy if the advantage of any malicious server is negligible in the Exp Sender Privacy A experiment, described below. Here, A is an "honest-but-curious" opponent taking the place of S, and C is a challenger at the user side.
Exp^{Sender Privacy}_A:
1. (pk, sk) ← KeyGen(1^k)                        (C)
2. {x_2, . . . , x_Ω} ← A                         (A)
3. ϕ(x_i) ← Send_{X,S}(x_i, pk)                   (C)
4. {x_0, x_1} ← A                                 (A)
5. ϕ(x_e) ← Send_{X,S}(x_e, pk), e ∈_R {0, 1}     (C)
6. Repeat steps (2, 3)
7. e′ ∈ {0, 1} ← A                                (A)
The advantage of the adversary is |Pr[e′ = e] − 1/2|.
Informally, in a first step, the adversary receives Send requests that he chose himself; A then looks for a couple (x 0 , x 1 ) of messages on which he should have an advantage. C chooses one of the two messages, and the adversary must guess, by receiving the Send requests, which one of x 0 or x 1 it was.
This condition permits to have privacy on the content stored on the server. The content that the sender transmits is protected, justifying the title "Sender Privacy".
Another important privacy aspect is the secrecy of the data that is retrieved. We do not want the server to have information on the fresh data x ′ that is queried; this is expressed by the following condition.
Condition 7 (Receiver Privacy)
The scheme is said to respect Receiver Privacy if the advantage of any malicious server is negligible in the Exp^{Receiver Privacy}_A experiment described below. As in the previous condition, A denotes the "honest-but-curious" opponent taking the place of S, and C the challenger at the user side. This condition is the mirror image of the previous one. It transposes the idea that the receiver Y can make his queries to S without leaking information on their content. The processing of the experiment is the same as the Sender Privacy experiment, except that A has to distinguish between Retrieve queries instead of Send queries.
Exp^{Receiver Privacy}_A:
1. (pk, sk) ← KeyGen(1^k)                                                  (C)
2. {x_1, . . . , x_Ω} ← A, with d(x_i, x_j) > λ_max for all i ≠ j ∈ {1, . . . , Ω}   (A)
3. ϕ(x_i), i ∈ {1, . . . , Ω} ← Send_{X,S}(x_i, pk)                         (C)
4. {x′_2, . . . , x′_p} ← A                                                 (A)
5. Φ(x′_j), j ∈ {2, . . . , p} ← Retrieve_{Y,S}(x′_j, sk)                   (C)
6. (x′_0, x′_1) ← A                                                         (A)
7. Φ(x′_e) ← Retrieve_{Y,S}(x′_e, sk), e ∈_R {0, 1}                         (C)
8. e′ ∈ {0, 1} ← A                                                          (A)
Remark 3 Conditions 6 and 7 are the transposition of their homonym statement in [6]. They aim for the same goal, i.e. privacy -against the server -of the data that is registered first, then looked for.
Section 5 is dedicated to give a construction that fits these security conditions.
Our Data Structure for Approximate Searching
After the recall of the notions of locality-sensitive hashing and Bloom filters, we introduce a new structure which enables approximate searching by combining both notions. We end this section with the introduction of some classical cryptographic protocols.
In the sequel, we denote [a, b] the interval of all real values between a and b (inclusive).
Locality-Sensitive Hashing
We first consider the following problem:
Problem 1 (Approximate Nearest Neighbour Problem) Given a set P of points in the metric space (B, d) pre-process P to efficiently answer queries. The answer of a query x is a point p x ∈ P such that d(x, p x ) ≤ (1+ǫ) min p∈P d(x, p).
This problem has been widely studied over the last decades; reviews on the subject include [29]. However, most algorithms proposed to solve the matter consider real spaces over the l p distance, which is not relevant in our case. A way to search the approximate nearest neighbour in a Hamming space is to use a generic construction called locality-sensitive hashing. It looks for hash functions (not cryptographic ones) that give the same result for near points, as defined in [30]:
For h ∈ H and x, x′ ∈ B: Pr[h(x) = h(x′)] > p_1 if d_B(x, x′) < r_1, and Pr[h(x) = h(x′)] < p_2 if d_B(x, x′) > r_2.
Such functions reduce the differences occurring between similar data with high probability, whereas distant data should remain significantly remote.
A noticeable example of a LSH family was proposed by Kushilevitz et al. in [37]; see also [36,30,1].
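For intuition, the simplest locality-sensitive family on the Hamming cube is bit sampling (restriction to a few random coordinates); the sketch below is only an illustration of the LSH property and is independent of the specific families cited above.

```python
# Bit-sampling LSH on {0,1}^N: h(x) is the restriction of x to k fixed random
# coordinates.  Vectors at Hamming distance d collide under such an h with
# probability roughly (1 - d/N)^k, so close vectors collide far more often
# than distant ones.
import random

def make_bit_sampler(N: int, k: int, rng: random.Random):
    coords = rng.sample(range(N), k)
    def h(x: str) -> str:                 # x is a length-N bit string
        return "".join(x[i] for i in coords)
    return h

def flip_bits(x: str, positions):
    bits = list(x)
    for i in positions:
        bits[i] = "1" if bits[i] == "0" else "0"
    return "".join(bits)

rng = random.Random(0)
N = 64
h = make_bit_sampler(N, k=8, rng=rng)
x = "".join(rng.choice("01") for _ in range(N))
print(h(x) == h(flip_bits(x, [3, 17])))            # usually True: distance 2 is "close"
print(h(x) == h(flip_bits(x, list(range(32)))))    # usually False: distance 32 is "far"
```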
Bloom Filters
As introduced by Bloom in [3], a Bloom filter is a data structure used for answering set membership queries. A (ν, m)-Bloom filter indexing a set D uses a family H′ = {h′_1, . . . , h′_ν} of hash functions with values in {1, . . . , m} and an array of m bits (t_1, . . . , t_m) defined by
t_α = 1 if there exist i ∈ {1, . . . , ν} and y ∈ D such that h′_i(y) = α, and t_α = 0 otherwise.
With this setting, testing if y is in D is the same as checking if for all i ∈ {1, . . . , ν}, t h ′ i (y) = 1. The best setting for the filter is that the involved hash function be as randomized as possible, in order to fill all the buckets t α .
In this setting, some false positives may happen, i.e. it is possible for all the t_{h′_i(y)} to be set to 1 while y ∉ D. This event is well known, and the probability for a query to be a false positive is
(1 − (1 − ν/m)^{|D|})^ν.
This probability can be made as small as needed. On the other hand, no false negative can occur.
We work here with the Bloom filters with storage (BFS) defined in [6] as an extension of Bloom filters. Their aim is to give not only the result of the set membership test, but also an index associated to the element. The iterative definition below introduces these objects and the notion of tags and buckets which are used in the construction.
Definition 5 A (ν, m)-Bloom filter with storage for a set D, with tags taken from a set V via a tagging function ψ, consists of the hash family H′ = {h′_1, . . . , h′_ν} together with buckets T_1, . . . , T_m, initially empty; for each y ∈ D and each j ∈ {1, . . . , ν}, one updates the bucket T_α with T_α ← T_α ∪ ψ(y), where α = h′_j(y).
In other words, the bucket structure is empty at first, and for each element y ∈ D to be indexed, we add to the bucket T α all the tags associated to y. Construction of such a structure is illustrated in Fig. 1.
Figure 1: Construction of Bloom filters with storage (the tags ψ(y_3) are added to the buckets T_α, T_2 and T_m, since h′_1(y_3) = α, h′_2(y_3) = 2 and h′_3(y_3) = m).
Example 1 In Fig. 1, assume that D = {y 1 , y 2 , y 3 } and ν = 3, the tags associated to y 1 (resp. y 2 ) have already been incorporated into the buckets T 2 , T 3 and T α (resp. T 1 , T 2 and T 3 ) so that T 1 = {ψ(y 2 )}, T 2 = T 3 = {ψ(y 1 ), ψ(y 2 )}, T α = {ψ(y 1 )} and T i = ∅ otherwise. We are now treating the case of y 3 :
• h′_1(y_3) = α, so T_α ← T_α ∪ {ψ(y_3)}, i.e. T_α = {ψ(y_1), ψ(y_3)};
• h′_2(y_3) = 2, so T_2 ← T_2 ∪ {ψ(y_3)}, i.e. T_2 = {ψ(y_1), ψ(y_2), ψ(y_3)};
• h′_3(y_3) = m, so T_m ← T_m ∪ {ψ(y_3)}, i.e. T_m = {ψ(y_3)}.
This construction makes it possible to retrieve the set of tags associated to an element y ∈ D: it is designed to obtain ψ(y), the set of tags associated to y, by computing ∩_{j=1}^{ν} T_{h′_j(y)}. For instance, in the previous example,
∩_{j=1}^{ν} T_{h′_j(y_3)} = T_2 ∩ T_α ∩ T_m = {ψ(y_3)}.
This intersection may capture inappropriate tags, but the choice of relevant hash functions and increasing their number allow to reduce the probability of that event. These properties are summed up in the following lemma.
Lemma 1 ([3])
Let (H′, T_1, . . . , T_m) be a (ν, m)-Bloom filter with storage indexing a set D with tags from a tag set V. Then, for y ∈ D, the following properties hold:
• ψ(y) ⊂ T(y) = ∩_{j=1}^{ν} T_{h′_j(y)}, i.e. each of y's tags is retrieved;
• the probability for a false positive t ∈ V is Pr[t ∈ T(y) and t ∉ ψ(y)] = (1 − (1 − ν/m)^{|D|})^ν.
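A minimal Python sketch of this bucket structure (salted SHA-256 stands in for the pseudo-random family H′, and the parameters are arbitrary):

```python
# (nu, m)-Bloom filter with storage: each key is hashed by nu functions into m
# buckets, and every bucket accumulates the tags of the keys mapped to it.
# Retrieval intersects the nu buckets of the query key (Lemma 1: every true
# tag is returned, plus occasional false positives).
import hashlib

class BloomFilterWithStorage:
    def __init__(self, nu: int, m: int):
        self.nu, self.m = nu, m
        self.buckets = [set() for _ in range(m)]

    def _h(self, j: int, key: bytes) -> int:
        digest = hashlib.sha256(j.to_bytes(2, "big") + key).digest()
        return int.from_bytes(digest, "big") % self.m

    def add(self, key: bytes, tag):
        for j in range(self.nu):
            self.buckets[self._h(j, key)].add(tag)

    def query(self, key: bytes) -> set:
        result = set(self.buckets[self._h(0, key)])
        for j in range(1, self.nu):
            result &= self.buckets[self._h(j, key)]
        return result

bfs = BloomFilterWithStorage(nu=3, m=1 << 12)
bfs.add(b"y1", "tag-of-y1")
bfs.add(b"y2", "tag-of-y2")
print(bfs.query(b"y1"))                 # {'tag-of-y1'} up to rare false positives
```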
Combining BFS and LSH
We want to apply Bloom filters to data that are very likely to vary. To this aim, we first apply LSH-families as input to Bloom filters.
We compose the two families: each element y ∈ B is indexed through the µ values h_i(y)||i for h_i ∈ H, which are then hashed by the ν functions h′_j ∈ H′; we write H^c for the resulting family of composed functions h^c = h′_j(h_i(·)||i). To sum up, we modify the update of the buckets in Def. 5 by taking α = h′_j(h_i(y)||i). Later on, to recover the tags related to an approximate query x′ ∈ B, all we have to consider is ∩_{j=1}^{ν} ∩_{i=1}^{µ} T_{h′_j(h_i(x′)||i)}. Indeed, if x and x′ are close enough, then the LSH functions give the same results on x and x′, effectively providing a Bloom filter with storage that has the LSH property. This property is numerically estimated in the following lemma:
Lemma 2 Let H, H′, H^c be families constructed in this setting. Let x, x′ ∈ B be two binary vectors. Assume that H is (λ_min, λ_max, ǫ_1, ǫ_2)-LSH from B to {0, 1}^t; assume that H′ is a family of ν pseudo-random hash functions. If the tagging function ψ associates only one tag per element, then the following properties hold:
1. If x and x′ are far enough, then except with a small probability, ψ(x′) does not intersect all the buckets that index x, i.e.
Pr_{x′}[ψ(x′) ⊂ ∩_{h^c ∈ H^c} T_{h^c(x)} and d(x, x′) ≥ λ_max] ≤ (ǫ_2 + (1 − ǫ_2) · 1/m)^{|H^c|};
2. If x and x′ are close enough, then except with a small probability, ψ(x′) is in all the buckets that index x, i.e.
Pr_{x′}[ψ(x′) ⊄ ∩_{h^c ∈ H^c} T_{h^c(x)} and d(x, x′) ≤ λ_min] ≤ 1 − (1 − ǫ_1)^{|H^c|}.
Note that this lemma used the simplified hypothesis that ∀x, |ψ(x)| = 1, which means that there is only one tag per vector. This has a direct application in Section 5.2. In practice, ψ(x) can be a unique handle for x.
Sketch of proof. The first part of the lemma expresses the fact that if d(x, x′) ≥ λ_max, then, due to the composition of a LSH function with a pseudo-random function, the collision probability is 1/m. Indeed, if h′_1(y_1) = h′_2(y_2), either y_1 = y_2 and h′_1 = h′_2, or there is a collision of two independent pseudo-random hash functions. In our case, if y_1 = y_2, that means that y_1 = h_{i_1}(x)||i_1 and y_2 = h_{i_2}(x′)||i_2. For these vectors to be the same, i_1 = i_2 and h_{i_1}(x) = h_{i_2}(x′), which happens with probability at most ǫ_2.
The second part of the lemma says that for each h^c ∈ H^c, h^c(x) and h^c(x′) are the same with probability at least 1 − ǫ_1. Combining the incremental construction of the T_i with this property gives the lemma.
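Putting the two together, each element x is indexed under the µ keys h_i(x)||i and a query x′ intersects the buckets of h_i(x′)||i. The sketch below reuses make_bit_sampler and BloomFilterWithStorage from the two previous sketches (both illustrative stand-ins for the families H and H′):

```python
# Error-tolerant index: combine the LSH family with the Bloom filter with
# storage by indexing each x under the keys h_i(x) || i, exactly as in the
# modified bucket update above.  Reuses make_bit_sampler and
# BloomFilterWithStorage from the previous sketches.
import random

class FuzzyIndex:
    def __init__(self, N: int, mu: int, k: int, nu: int, m: int, seed: int = 0):
        rng = random.Random(seed)
        self.lsh = [make_bit_sampler(N, k, rng) for _ in range(mu)]
        self.bfs = BloomFilterWithStorage(nu, m)

    def _keys(self, x: str):
        return [(h(x) + "|" + str(i)).encode() for i, h in enumerate(self.lsh)]

    def add(self, x: str, tag):
        for key in self._keys(x):
            self.bfs.add(key, tag)

    def query(self, x_prime: str) -> set:
        keys = self._keys(x_prime)
        result = self.bfs.query(keys[0])
        for key in keys[1:]:
            result &= self.bfs.query(key)
        return result

# index.add(x, tag); index.query(x_prime) typically returns {tag} when
# d(x, x_prime) <= lambda_min, and the empty set when x_prime is far from
# every indexed element.
```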
Cryptographic Primitives
Public Key Cryptosystem Our construction requires a semantically secure public key cryptosystem -as defined in [25], see for instance [20,42] -to store some encrypted data in the database. Encryption function is noted Enc and decryption function Dec, the use of the keys is implicit. An encryption scheme is said to be semantically secure (against a chosen plaintext attack, also noted IND-CPA [25]) if an adversary without access to the secret key sk, cannot distinguish between the encryptions of a message x 0 and a message x 1 .
Private Information Retrieval Protocols A primitive that enables privacyensuring queries to databases is Private Information Retrieval protocol (PIR, [17]). Its goal is to retrieve a specific information from a remote server in such a way that he does not know which data was sent. This is done through a method Query P IR Y,S (a), that allows Y to recover the element stored at index a in S by running the PIR protocol.
Suppose a database is constituted with M bits X = x 1 , ..., x M . To be secure, the protocol should satisfy the following properties [23]:
• Soundness: When the user and the database follow the protocol, the result of the request is exactly the requested bit.
• User Privacy: For all X ∈ {0, 1}^M and all 1 ≤ i, j ≤ M, no algorithm used by the database can distinguish with non-negligible probability between the requests for index i and for index j.
Among the known constructions of computationally secure PIR, block-based PIR -i.e. PIR working on blocks of bits- allows the cost to be reduced efficiently. The best performances are achieved by Gentry and Ramzan [22] and Lipmaa [38], with a communication complexity polynomial in the logarithm of M. Surveys of the subject are available in [21,40].
Some PIR protocols are called Symmetric Private Information Retrieval protocols when they comply with the Data Privacy requirement [23]. This condition states that the querier cannot distinguish between a database that possesses only the information he requested and a regular one; in other words, the querier does not get more information than what he asked for.
Private Information Storage (PIS) Protocols PIR protocols enable information to be retrieved from a database. A Private Information Storage (PIS) protocol [40] is a protocol that enables information to be written into a database, with properties similar to those of PIR. The goal is to prevent the database from knowing the content of the information that is being stored; for a detailed description of such protocols, see [6,41].
Such a protocol provides a method update(val, index), which takes as input an element and a database index, and puts the value val into the database entry index. To be secure, the protocol must also satisfy the Soundness and User Privacy properties, meaning that 1. update BF does update the database with the appropriate value, and 2. any algorithm run by the database cannot distinguish between the writing requests of (val i , ind i ) and (val j , ind j ).
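The primitives above are only used through their interfaces in the rest of the construction. The stubs below merely fix the method names (enc/dec for the cryptosystem, query for PIR, update for PIS) that the later sketches assume; they are placeholders for illustration, not part of any concrete scheme cited in the text.

```python
from abc import ABC, abstractmethod


class PublicKeyCryptosystem(ABC):
    """Semantically secure (IND-CPA) encryption; key handling stays implicit as in the text."""

    @abstractmethod
    def enc(self, pk, message: bytes) -> bytes: ...

    @abstractmethod
    def dec(self, sk, ciphertext: bytes) -> bytes: ...


class PIRClient(ABC):
    """Query_PIR(a): fetch the bucket stored at index a without revealing a to the server."""

    @abstractmethod
    def query(self, index: int) -> list: ...   # returns the (padded) bucket content


class PISClient(ABC):
    """update(val, index): append val to the bucket at position index, hiding both from the server."""

    @abstractmethod
    def update(self, val: bytes, index: int) -> None: ...
```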
Our Construction for Error-Tolerant Searchable Encryption
Technical Description
Our searching scheme uses all the tools we described in the previous section. As we will see in Section 5.2, this enables us to meet the privacy requirements of Section 3.3. More precisely:
• We use a semantically secure public key cryptosystem (Setup, Enc, Dec) [25],
• We use a Bloom filter with storage whose buckets are indexed by the composed LSH family H^c (see Sec. 4.3),
• We use a PIR protocol with query function Query P IR Y,S .
• We use a PIS function update BF (val, i) that adds val to the i-th bucket of the Bloom filter, see Sec. 4.4.
Here are the details of the implementation. In a few words, storage and indexing of the data are separated, so that it becomes feasible to search over the encrypted documents. Indexing is done with Bloom filters, with the extra precaution of encrypting the content of all the buckets. Finally, using our locality-sensitive hashing functions provides error tolerance.
System setup
The method KeyGen(1 k ) initializes m different buckets to ∅. The public and secret keys of the cryptosystem (pk, sk) are generated by Setup(1 k ), and sk is given to Y.
Sending a message
The protocol Send X ,S (x, pk) goes through the following steps (cf. Fig. 2):
1. Identifier establishment: S attributes to x a unique identifier ϕ(x), and sends it to X.
2. Data storage: X sends Enc(x) to S, who stores it in a memory cell that depends on ϕ(x).
3. Data indexing:
• X computes h c (x) for all h c ∈ H c ,
• and executes update BF (Enc(ϕ(x)), h c (x)) to send Enc(ϕ(x)) to be added to the filter's bucket of index h c (x) on the server side.
Note that for privacy concerns, we complete the buckets with random data in order to get the same bucket size l for the whole data structure. The first phase (identifier establishment) is done to create an identifier that can be used to register and then retrieve x from the database. For example, ϕ(x) can be the time at which S received x, or the first memory address that is free for the storage of Enc(x).
The third phase applies the combination of BFS and LSH functions (see Sec. 4.3) to x so that it is possible to retrieve x with some approximate data. This is done with the procedure described hereafter.
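The three phases of Send can be summarised in a short sketch written against the placeholder interfaces above; the helpers server.new_identifier and server.store are assumptions introduced for illustration and do not come from the paper.

```python
def send(x, pk, server, crypto, pis_client, composed_hashes):
    # 1. Identifier establishment: the server attributes a unique handle phi(x)
    #    (e.g. a timestamp or a free memory address, as suggested in the text).
    phi_x = server.new_identifier(x)          # assumed helper, not from the paper
    # 2. Data storage: only the ciphertext of x is stored, at a cell derived from phi(x).
    server.store(phi_x, crypto.enc(pk, x))    # assumed helper, not from the paper
    # 3. Data indexing: for every composed function h^c in H^c, add Enc(phi(x))
    #    to the Bloom-filter-with-storage bucket of index h^c(x), through the PIS protocol.
    for h_c in composed_hashes:
        pis_client.update(crypto.enc(pk, phi_x), h_c(x))
    return phi_x
```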
Retrieving data
The protocol Retrieve_{Y,S}(x′, sk) goes through the following steps (cf. Fig. 3):
1. Y computes each α_i = h^c_i(x′) for each h^c_i ∈ H^c,
2. Y retrieves the buckets of index α_i through PIR queries, decrypts their content with sk, and intersects the resulting sets of identifiers.
As we can see, the retrieving process follows that of Sec. 4.3, with the notable differences that 1. the identifiers are always encrypted in the database, and 2. the query is made following a PIR protocol. This makes it possible to benefit from the Bloom filter structure, the locality-sensitive hashing, and the privacy-preserving protocols.
The secure protocols involved do not leak information on the requests made, and the next section discusses more precisely the security properties achieved.
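Retrieval can be sketched in the same spirit; the steps below (hashing, PIR queries, decryption, intersection) paraphrase the process summarised above and in Sec. 4.3 rather than reproducing the paper's own enumeration.

```python
def retrieve(x_prime, sk, pir_client, crypto, composed_hashes):
    # 1. Compute the bucket indices alpha_i = h^c_i(x') for every h^c_i in H^c.
    indices = [h_c(x_prime) for h_c in composed_hashes]
    # 2. Fetch each bucket obliviously with a PIR query and decrypt its entries
    #    (random padding entries would decrypt to garbage and be discarded in practice).
    candidate_sets = []
    for alpha in indices:
        bucket = pir_client.query(alpha)
        candidate_sets.append({crypto.dec(sk, c) for c in bucket})
    # 3. Keep the identifiers present in all buckets (cf. Lemma 2).
    return set.intersection(*candidate_sets) if candidate_sets else set()
```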
Security Properties
We now demonstrate that this construction faithfully achieves the security requirements we defined in Sec. 3.3.
Proposition 1 (Completeness)
Provided that H is a (λ min , λ max , ǫ 1 , ǫ 2 )-LSH family, for a negligible ǫ 1 , this scheme is complete.
Proposition 2 (ǫ-Soundness) Provided that H is a (λ min , λ max , ǫ 1 , ǫ 2 )-LSH family from {0, 1} N to {0, 1} t , and provided that the Bloom filter functions H ′ behave like pseudo-random functions from {0, 1} t × {1, . . . , |H|} to {1, . . . , m}, then the scheme is ǫ-sound, with:
ǫ = ( ǫ_2 + (1 − ǫ_2) · 1/m )^{|H^c|}
Propositions 1 and 2 are direct consequences of Lemma 2.
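As a quick numerical sanity check on the soundness bound, the snippet below evaluates ǫ = (ǫ_2 + (1 − ǫ_2)/m)^{|H^c|}; the parameter values are illustrative only and are not taken from the paper.

```python
def soundness_bound(eps2, m, n_composed):
    """epsilon = (eps2 + (1 - eps2) / m) ** |H^c|, as in Proposition 2."""
    return (eps2 + (1.0 - eps2) / m) ** n_composed

# Illustrative values only: eps2 = 0.05, m = 2**16 buckets, |H^c| = 3 composed functions.
print(soundness_bound(0.05, 2 ** 16, 3))   # about 1.25e-4
```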
Remark 4 Proposition 2 assumes that the Bloom filter hash functions are pseudorandom; this hypothesis is pretty standard for Bloom filter analysis. It can be achieved by using cryptographic hash functions with a random oracle-like behaviour.
Proposition 3 (Sender Privacy) Assume that the underlying cryptosystem is semantically secure and that the PIS function update BF achieves User Privacy, then the scheme ensures Sender Privacy.
Proof. If the scheme does not ensure Sender Privacy, that means that there exists an attacker who can distinguish between the output of Send(x 0 , pk) and Send(x 1 , pk), after the execution of Send(x i , pk), i ∈ {2, . . . , Ω}.
Note that the content of the Bloom filter buckets does not reveal information that could permit distinguishing between x_0 and x_1. Indeed, the only information A gets from the filter structure is a set of Enc(ϕ(x_i)) placed at different indexes h^c(x_i), i = e, 2, . . . , Ω. Thanks to the semantic security of Enc, this does not make it possible to distinguish between ϕ(x_0) and ϕ(x_1).
This implies that, with inputs Enc(x i ), update BF (Enc(ϕ(x i )), h c (x i )) ( for i ≥ 2), the attacker can distinguish between Enc(x 0 ), update BF (Enc(ϕ(x 0 )), h c (x 0 )) and Enc(x 1 ), update BF (Enc(ϕ(x 1 )), h c (x 1 )).
As update_BF does not leak information on its inputs, that means that the attacker can distinguish between Enc(x_0) and Enc(x_1) by choosing some other inputs to Enc. That contradicts the semantic security assumption.
Proposition 4 (Receiver Privacy) Assume that the PIR protocol achieves User Privacy, then the scheme ensures Receiver Privacy.
Proof. This property is a direct deduction of the PIR's User Privacy, as the only information S gets from the execution of a Retrieve is a set of Query^{PIR} queries.
These properties show that this protocol for Error-Tolerant Searchable Encryption has the security properties that we looked for. LSH functions are used in such a way that they do not degrade the security properties of the system.
Application to Identification with Encrypted Biometric Data
Our Biometric Identification System
We now apply our construction for error-tolerant searchable encryption to our biometric identification purpose. Thanks to the security properties of the above construction, this enables us to design a biometric identification system which achieves the security objectives stated in Section 2.3. While applying the primitives of error-tolerant searchable encryption, the database DB takes the place of the server S; the role of the Identity Provider IP varies with the step we are involved in. During the Enrolment step, IP behaves as X , and as Y during the Identification step. In this step, IP is in possession of the private key sk used for the Retrieve query.
Enrolment
• To enrol a user U i , the sensor SC acquires a sample b i from his biometrics and sends it to IP,
• The Identity Provider IP then executes Send X ,S (b i , pk).
Identification
• SC captures a fresh biometric template b′ from a user U and sends it to IP,
• The Identity Provider IP then executes Retrieve Y,S (b ′ , sk).
At the end of the identification, IP has the fresh biometric template b ′ along with the address of the candidate reference templates in DB. To reduce the list of identities, we can use a secure matching scheme [12,44] to run a final secure comparison between b ′ and the candidates.
Practical Considerations
Choosing the LSH family: an Example
Let us place ourselves in the practical setting of human identification through iris recognition. A well-known method of doing so is to use Daugman's IrisCode [18]. This extracts a 2048-bit vector, along with a "mask" that defines the relevant information in this vector. Iris recognition is then performed by computing a simple Hamming distance; vectors that are at a Hamming distance less than a given threshold are believed to come from the same individual, while vectors that come from different eyes will be at a significantly larger distance.
There are several ways to design LSH functions adapted to this kind of data. Random projections such as those defined in [37] are a convenient way to create LSH functions for binary vectors. However, for the sake of simplicity, we propose to use the functions of [26], where they are referred to as 'beacon indexes'. These functions are based on the fact that not all IrisCode bits have the same distribution probability.
In a few words, these functions first reorder the bits of the IrisCode by rows, so that in each row the bits that are the most likely to induce an error are the least significant ones. The columns are then reordered to avoid correlations between consecutive bits. The most significant bits of the rows are then taken as 10-bit hashes. The efficiency of this approach is demonstrated in [26], where the authors apply these LSH functions to identify a person through his IrisCode. They interact with the UAE database, which contains N = 632500 records; trivial identification would then require about N/2 classical matching computations, which is far too much for a large database. Instead, they apply µ = 128 of these hashes to the biometric data and look for IrisCodes that get the same LSH results for at least 3 functions. In doing this, they limit the number of necessary matchings to 41 instead of N.
Determining the LSH capacity of these hash functions is not easy with real data; however, if we model b and b′ as binary vectors such that each bit of b is flipped with a fixed probability (i.e. if b′ is obtained from b through a binary symmetric channel), then the induced family is (r_1, r_2, 1 − (1 − r_1/2048)^{10}, (1 − r_2/2048)^{10})-LSH. This estimation is conservative, as IrisCodes are designed for biometric matching.
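Under this binary-symmetric-channel model, the two collision parameters of a 10-bit beacon hash can be computed directly; the distances r_1 = 100 and r_2 = 700 used below are arbitrary illustrations, not values taken from [26].

```python
def iris_beacon_lsh_params(r1, r2, n_bits=2048, k=10):
    """Collision parameters of a k-bit beacon hash under the binary symmetric channel model:
    eps1 = Pr[hash differs | distance r1], eps2 = Pr[hash agrees | distance r2]."""
    eps1 = 1 - (1 - r1 / n_bits) ** k
    eps2 = (1 - r2 / n_bits) ** k
    return eps1, eps2

eps1, eps2 = iris_beacon_lsh_params(r1=100, r2=700)
print(eps1, eps2)   # roughly 0.39 and 0.015 for these illustrative distances
```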
Combining these functions with a Bloom filter with storage in the way described in Sec. 4.3 yields a secure identification scheme.
Overall complexity and efficiency
We now evaluate the computational complexity of an identification request on the client's side. We denote by κ(op) the cost of operation op, and by |S| the size of the set S. Recalling Section 5.1, the overall cost of a request is:
κ(request) = |H^c| (κ(hash) + κ(PIR) + |T| κ(Dec)) + κ(intersection)
≤ |H^c| (κ(h_BF) + κ(h_LSH) + κ(PIR) + |T| κ(Dec)) + O(|T| |H^c|)
Here we used data structures in which the intersection of sets is linear in the set length, hence the term O(|T| |H^c|); |T| is the maximum size of a bucket of the Bloom filter with storage.
To conclude this complexity estimation, let us recall that the cost of a hash function is negligible compared to the cost of a decryption. The PIR query complexity at the sensor level depends on the scheme used (recall that the PIR query is made only over the set of buckets and not over the whole database); in the case of Lipmaa's PIR [38], this cost κ(PIR) is dominated by the cost of a Damgård-Jurik encryption. The overall sensor complexity of an identification request is O(µν(|T|κ(Dec) + κ(PIR))).
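To make the estimate concrete, the snippet below plugs hypothetical unit costs into the bound above; none of the numbers are measurements from the paper, and |H^c| = µν = 384 simply reuses µ = 128 from the iris example with an assumed ν = 3.

```python
def request_cost(n_composed, bucket_size, k_hash_bf, k_hash_lsh, k_pir, k_dec):
    # |H^c| * (k(h_BF) + k(h_LSH) + k(PIR) + |T| * k(Dec)) + O(|T| * |H^c|)
    crypto_part = n_composed * (k_hash_bf + k_hash_lsh + k_pir + bucket_size * k_dec)
    intersection_part = bucket_size * n_composed   # linear-time set intersections
    return crypto_part + intersection_part

# Hypothetical unit costs (arbitrary time units): hashing is negligible next to
# decryption, and one PIR query is taken to cost about one Damgard-Jurik encryption.
print(request_cost(n_composed=384, bucket_size=50,
                   k_hash_bf=1, k_hash_lsh=1, k_pir=2000, k_dec=1000))
```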
This paper details the first non-trivial construction for biometric identification over encrypted binary templates. This construction meets the privacy model one can expect from an identification scheme and the computation costs are sublinear in the size of the database.
We studied an identification scheme using binary data, together with the Hamming distance. We plan to extend our scope to other metrics. A first lead to follow is to use the techniques from [37], which reduce the problem of ANN over Euclidean spaces to ANN over a Hamming space.
| 7,715 |
0812.4893
|
2004854811
|
We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose–accept rounds executed by the Gale–Shapley algorithm for the stable marriage problem. Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. We apply our results to give a distributed (2+e)-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.
|
Abraham et al. @cite_6 study almost stable matchings in the stable roommates problem. The recent work by Biró et al. @cite_28 is particularly close to ours: they, too, consider the stable marriage problem with incomplete preference lists, and aim at finding a matching with few unstable edges. However, in terms of computational complexity, their work goes in the opposite direction. Their task is to find a maximum matching that minimises the number of unstable edges. It turns out that this makes the problem computationally much more difficult: the problem is NP-hard, unlike the classical stable marriage problem. In contrast, we do not require that the matching is a maximum matching, which makes the problem computationally easier: the problem admits a constant-time distributed algorithm, unlike the classical stable marriage problem. The algorithm works even when ties in the preference lists are allowed; this should be contrasted with the fact that if ties are allowed, it is NP-hard to find a stable perfect matching @cite_30 @cite_24 @cite_11 .
|
{
"abstract": [
"We consider instances of the classical stable marriage problem in which persons may include ties in their preference lists. We show that, in such a setting, strong lower bounds hold for the approximability of each of the problems of finding an egalitarian, minimum regret and sex-equal stable matching. We also consider stable marriage instances in which persons may express unacceptable partners in addition to ties. In this setting, we prove that there are constants ?,?? such that each of the problems of approximating a maximum and minimum cardinality stable matching within factors of ?,?? (respectively) is NP-hard, under strong restrictions. We also give an approximation algorithm for both problems that has a performance guarantee expressible in terms of the number of lists with ties. This significantly improves on the best-known previous performance guarantee, for the case that the ties are sparse. Our results have applications to large-scale centralized matching schemes.",
"Given an instance I of the classical Stable Marriage problem with Incomplete preference lists (smi), a maximum cardinality matching can be larger than a stable matching. In many large-scale applications of smi, we seek to match as many agents as possible. This motivates the problem of finding a maximum cardinality matching in I that admits the smallest number of blocking pairs (so is ''as stable as possible''). We show that this problem is NP-hard and not approximable within n^1^-^@e, for any @e>0, unless P=NP, where n is the number of men in I. Further, even if all preference lists are of length at most 3, we show that the problem remains NP-hard and not approximable within @d, for some @d>1. By contrast, we give a polynomial-time algorithm for the case where the preference lists of one sex are of length at most 2. We also extend these results to the cases where (i) preference lists may include ties, and (ii) we seek to minimize the number of agents involved in a blocking pair.",
"An instance of the classical Stable Roommates problem (sr) need not admit a stable matching. This motivates the problem of finding a matching that is “as stable as possible”, i.e. admits the fewest number of blocking pairs. In this paper we prove that, given an sr instance with n agents, in which all preference lists are complete, the problem of finding a matching with the fewest number of blocking pairs is NP-hard and not approximable within @math , for any e>0, unless P=NP. If the preference lists contain ties, we improve this result to n1−e. Also, we show that, given an integer K and an sr instance I in which all preference lists are complete, the problem of deciding whether I admits a matching with exactly K blocking pairs is NP-complete. By contrast, if K is constant, we give a polynomial-time algorithm that finds a matching with at most (or exactly) K blocking pairs, or reports that no such matching exists. Finally, we give upper and lower bounds for the minimum number of blocking pairs over all matchings in terms of some properties of a stable partition, given an sr instance I.",
"We consider variants of the classical stable marriage problem in which preference lists may contain ties, and may be of bounded length. Such restrictions arise naturally in practical applications, such as centralised matching schemes that assign graduating medical students to their first hospital posts. In such a setting, weak stability is the most common solution concept, and it is known that weakly stable matchings can have different sizes. This motivates the problem of finding a maximum cardinality weakly stable matching, which is known to be NP-hard in general. We show that this problem is solvable in polynomial time if each man's list is of length at most 2 (even for women's lists that are of unbounded length). However if each man's list is of length at most 3, we show that the problem becomes NP-hard (even if each women's list is of length at most 3) and not approximable within some @d>1 (even if each woman's list is of length at most 4).",
"The original stable marriage problem requires all men and women to submit a complete and strictly ordered preference list. This is obviously often unrealistic in practice, and several relaxations have been proposed, including the following two common ones: one is to allow an incomplete list, i.e., a man is permitted to accept only a subset of the women and vice versa. The other is to allow a preference list including ties. Fortunately, it is known that both relaxed problems can still be solved in polynomial time. In this paper, we show that the situation changes substantially if we allow both relaxations (incomplete lists and ties) at the same time: the problem not only becomes NP-hard, but also the optimal cost version has no approximation algorithm achieving the approximation ratio of N1-Ɛ, where N is the instance size, unless P=NP."
],
"cite_N": [
"@cite_30",
"@cite_28",
"@cite_6",
"@cite_24",
"@cite_11"
],
"mid": [
"2133312780",
"2100727443",
"1507630703",
"1996180590",
"1480100487"
]
}
| 0 |
||
0812.4893
|
2004854811
|
We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose–accept rounds executed by the Gale–Shapley algorithm for the stable marriage problem. Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. We apply our results to give a distributed (2+e)-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.
|
Note that switching partners along unstable edges can be done in a distributed manner. The Gale--Shapley algorithm is also parallel by its nature: the proposals/rejects can be undertaken by all men/women simultaneously during synchronised rounds (albeit it can happen that only one man is free at a round [Section A.3] gusfield89stable , @cite_22 ). Lower bounds on the running time of the algorithm [Section 1.5] gusfield89stable show that a linear number of rounds is required to attain stability. But can a nearly stable matching be obtained with fewer rounds?
|
{
"abstract": [
"In this paper a parallel algorithm to solve the stable marriage problem is given. The worst case performance of this algorithm is stated. A theoretical analysis shows that the probability of the occurrence of this worst case is extremely small. For instance, if there are sixteen men and sixteen women involved, then the probability that the worst case occurs is only 10−45. Possible future research is also discussed in this paper."
],
"cite_N": [
"@cite_22"
],
"mid": [
"2052785830"
]
}
| 0 |
||
0812.4893
|
2004854811
|
We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose–accept rounds executed by the Gale–Shapley algorithm for the stable marriage problem. Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. We apply our results to give a distributed (2+e)-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.
|
Several works have addressed the last question with experiments. Quinn @cite_34 observes experimentally that a matching with only a fraction of unstable edges emerges long before the Gale--Shapley algorithm converges. Lu and Zheng @cite_13 propose a parallel algorithm that outperforms the Gale--Shapley algorithm in practice. Theorem gives theoretical support to the findings in Quinn @cite_34 . Theorem also addresses the concern expressed in the conclusions of Lu and Zheng @cite_13 where it is claimed that Most of existing parallel stable matching algorithms cannot guarantee a matching with a small number of unstable pairs within a given time interval.'' Theorem suggests that if the number of acceptable partners for each participant is bounded, the Gale--Shapley algorithm guarantees a small relative number of unstable edges.
|
{
"abstract": [
"Evidence is presented showing that the McVitie and Wilson algorithm to solve the stable marriage problem has a sequential component that is quite large on the average. Hence parallel implementations of the algorithm are likely to achieve only mediocre average case speedup. A corollary result is that an approximate solution with a few unstable pairings can be found much faster than an exact solution.",
"In this paper, we propose a new approach, parallel iterative improvement (PII), to solving the stable matching problem. This approach treats the stable matching problem as an optimization problem with all possible matchings forming its solution space. Since a stable matching always exists for any stable matching problem instance, finding a stable matching is equivalent to finding a matching with the minimum number (which is always zero) of unstable pairs. A particular PII algorithm is presented to show the effectiveness of this approach by constructing a new matching from an existing matching and using techniques such as randomization and greedy selection to speedup the convergence process. Simulation results show that the PII algorithm has better average performance compared with the classical stable matching algorithms and converges in n iterations with high probability. The proposed algorithm is also useful for some real-time applications with stringent time constraint."
],
"cite_N": [
"@cite_34",
"@cite_13"
],
"mid": [
"2042477843",
"2104688227"
]
}
| 0 |
||
0812.4893
|
2004854811
|
We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose–accept rounds executed by the Gale–Shapley algorithm for the stable marriage problem. Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. We apply our results to give a distributed (2+e)-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.
|
From a theory perspective, apparently only a few papers address decentralised implementations of the Gale--Shapley algorithm and/or stability after early termination of a stable matching algorithm. In a recent paper @cite_18 it is claimed that ``little theory exists concerning instability.'' @cite_32 give bounds on the performance of a simple online algorithm. Feder et al. @cite_29 show that a stable matching can be found on a polynomial number of processors in sublinear time; their algorithm is not local. Other work on parallel stable matching includes Tseng and Lee @cite_22 , Tseng @cite_20 , and Hull @cite_3 . Eriksson and Häggström @cite_18 prove that a simple heuristic works well for random inputs. Our Theorem shows that the Gale--Shapley algorithm works well for an arbitrary input.
|
{
"abstract": [
"In any two-sided matching market, a stable matching can be found by a central agency using the deferred acceptance procedure of Gale and Shapley. But if the market is decentralized and information is incomplete then stability of the ensuing matching is not to be expected. Despite the prevalence of such matching situations, and the importance of stability, little theory exists concerning instability. We discuss various measures of instability and analyze how they interact with the structure of the underlying preferences. Our main result is that even the outcome of decentralized matching with incomplete information can be expected to be “almost stable” under reasonable assumptions.",
"In this paper a parallel algorithm to solve the stable marriage problem is given. The worst case performance of this algorithm is stated. A theoretical analysis shows that the probability of the occurrence of this worst case is extremely small. For instance, if there are sixteen men and sixteen women involved, then the probability that the worst case occurs is only 10−45. Possible future research is also discussed in this paper.",
"Abstract A parallel algorithm for the stable matching problem is presented. The algorithm is based on the primal-dual interior path-following method for linear programming. The main result is that a stable matching can be found in O ∗ ( m ) time by a polynomial number of processors, where m is the total length of preference lists of individuals.",
"Abstract We give an on-line deterministic algorithm for the weighted bipartite matching problem that achieves a competitive ratio of (2 n −1) in any metric space (where n is the number of vertices). This algorithm is optimal - there is no on-line deterministic algorithm that achieves a competitive ratio better than (2 n −1) in all metric spaces. We also study the stable marriage problem, where we are interested in the number of unstable pairs produced. We show that the simple “first come, first served” deterministic algorithm yields on the average O( n log n ) unstable pairs, but in the worst case no deterministic or randomized on-line algorithm can do better than ω( n 2 ) unstable pairs. This appears to be the first on-line problem for which provably one cannot do better with randomization; for most on-line problems studied in the past, randomization has helped in improving the performance.",
"",
"In this paper, Tseng and Lee's parallel algorithm to solve the stable marriage prolem is analyzed. It is shown that the average number of parallel proposals of the algorithm is of ordern by usingn processors on a CREW PRAM, where each parallel proposal requiresO(loglog(n) time on CREW PRAM by applying the parallel selection algorithms of Valiant or Shiloach and Vishkin. Therefore, our parallel algorithm requiresO(nloglog(n)) time. The speed-up achieved is log(n) loglog(n) since the average number of proposals required by applying McVitie and Wilson's algorithm to solve the stable marriage problem isO(nlog(n))."
],
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_29",
"@cite_32",
"@cite_3",
"@cite_20"
],
"mid": [
"2163984834",
"2052785830",
"2066306418",
"2093153678",
"2054583428",
"2035381225"
]
}
| 0 |
||
0812.4893
|
2004854811
|
We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose–accept rounds executed by the Gale–Shapley algorithm for the stable marriage problem. Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. We apply our results to give a distributed (2+e)-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.
|
As we mentioned in , there is a range of negative results related to local algorithms (constant-time distributed algorithms) for maximal matching @cite_0 and approximate maximum matching @cite_26 @cite_1 @cite_15 @cite_5 . Even if each node is assigned a unique identifier and the network topology is an @math -cycle, it is not possible to break the symmetry in the network and find a constant-factor approximation for maximum matching. Without any auxiliary information beyond unique node identifiers, positive results are known only in rare special cases, most notably for graphs where each node has an odd degree @cite_12 @cite_25 .
|
{
"abstract": [
"We give deterministic distributed algorithms that given i¾?> 0 find in a planar graph G, (1±i¾?)-approximations of a maximum independent set, a maximum matching, and a minimum dominating set. The algorithms run in O(log*|G|) rounds. In addition, we prove that no faster deterministic approximation is possible and show that if randomization is allowed it is possible to beat the lower bound for deterministic algorithms.",
"",
"This paper concerns a number of algorithmic problems on graphs and how they may be solved in a distributed fashion. The computational model is such that each node of the graph is occupied by a processor which has its own ID. Processors are restricted to collecting data from others which are at a distance at most t away from them in t time units, but are otherwise computationally unbounded. This model focuses on the issue of locality in distributed processing, namely, to what extent a global solution to a computational problem can be obtained from locally available data.Three results are proved within this model: • A 3-coloring of an n-cycle requires time @math . This bound is tight, by previous work of Cole and Vishkin. • Any algorithm for coloring the d-regular tree of radius r which runs for time at most @math requires at least @math colors. • In an n-vertex graph of largest degree @math , an @math -coloring may be found in time @math .",
"",
"Achieving a global goal based on local information is challenging, especially in complex and large-scale networks such as the Internet or even the human brain. In this paper, we provide an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems. Specifically, we give a distributed algorithm using only small messages which obtains an (ρΔ)1 k-approximation for general covering and packing problems in time O(k2), where ρ depends on the LP's coefficients. If message size is unbounded, we present a second algorithm that achieves an O(n1 k) approximation in O(k) rounds. Finally, we prove that these algorithms are close to optimal by giving a lower bound on the approximability of packing problems given that each node has to base its decision on information from its k-neighborhood.",
"The purpose of this paper is a study of computation that can be done locally in a distributed network, where \"locally\" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following: There are nontrivial LCL problems that have local algorithms. There is a variant of the dining philosophers problem that can be solved locally. Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm. It is undecidable, in general, whether a given LCL has a local algorithm. However, it is decidable whether a given LCL has an algorithm that operates in a given time @math . Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs).",
"The purpose of this paper is a study of computation that can be done locally in a distributed network. By locally we mean within time (or distance) independent of the size of the network. In particular we are interested in algorithms that ore robust, i.e., perform well even if the underlying graph is not stable and links continuously fail and come-up. We introduce and study the happy coloring and orientation problem and show that it yields a robust local solution to the (d,m)-dining philosophers problem of Naor and Stockmeyer [17]. This problem is similar to the usual dining philosophers problem, except that each philosopher has access to d forks but needs only m of them to eat. We give a robust local solution if m spl les [d 2] (necessity of this inequality for any local solution was known previously). Two other problems we investigate are: (1) the amount of initial symmetry-breaking needed to solve certain problems locally (for example, our algorithms need considerably less symmetry-breaking than having a unique ID on each node), and (2) the single-step color reduction problem: given a coloring with c colors of the nodes of a graph, what is the smallest number of colors c' such that every node can recolor itself with one of c' colors as a function of its immediate neighborhood only. >"
],
"cite_N": [
"@cite_26",
"@cite_1",
"@cite_0",
"@cite_5",
"@cite_15",
"@cite_25",
"@cite_12"
],
"mid": [
"1869515244",
"2470787773",
"2054910423",
"2098327371",
"1998137177",
"2017345786",
"2108918420"
]
}
| 0 |
||
0812.4893
|
2004854811
|
We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose–accept rounds executed by the Gale–Shapley algorithm for the stable marriage problem. Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. We apply our results to give a distributed (2+e)-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.
|
Other work on constant-time distributed algorithms for matching usually assumes either randomness @cite_7 @cite_1 @cite_15 @cite_21 @cite_14 or geometric information @cite_4 @cite_31 . We refer to the survey @cite_17 for further information on local algorithms.
|
{
"abstract": [
"We present 1 ? ?approximation algorithms for the maximum matching problem in location aware unit disc graphs and in growth-bounded graphs. The algorithm for unit disk graph is local in the sense that whether or not an edge is in the matching depends only on other vertices which are at most a constant number of hops away from it. The algorithm for growth-bounded graphs needs at most @math @math communication rounds during its execution. Using these matching algorithms we can compute vertex covers of the respective graph classes whose size are at most twice the optimal.",
"In this paper, we present fast and fully distributed algorithms for matching in weighted trees and general weighted graphs. The time complexity as well as the approximation ratio of the tree algorithm is constant. In particular, the approximation ratio is 4. For the general graph algorithm we prove a constant ratio bound of 5 and a polylogarithmic time complexity of O(log2 n).",
"A local algorithm with local horizon r is a distributed algorithm that runs in r synchronous communication rounds; here r is a constant that does not depend on the size of the network. As a consequence, the output of a node in a local algorithm only depends on the input within r hops from the node. We give tight bounds on the local horizon for a class of local algorithms for combinatorial problems on unit-disk graphs (UDGs). Most of our bounds are due to a refined analysis of existing approaches, while others are obtained by suggesting new algorithms. The algorithms we consider are based on network decompositions guided by a rectangular tiling of the plane. The algorithms are applied to matching, independent set, graph colouring, vertex cover, and dominating set. We also study local algorithms on quasi-UDGs, which are a popular generalisation of UDGs, aimed at more realistic modelling of communication between the network nodes. Analysing the local algorithms on quasi-UDGs allows one to assume that the nodes know their coordinates only approximately, up to an additive error. Despite the localisation error, the quality of the solution to problems on quasi-UDGs remains the same as for the case of UDGs with perfect location awareness. We analyse the increase in the local horizon that comes along with moving from UDGs to quasi-UDGs.",
"In this paper, we study distributed algorithms to compute a weighted matching that have constant (or at least sub-logarithmic) running time and that achieve approximation ratio 2 + e or better. In fact we present two such synchronous algorithms, that work on arbitrary weighted trees The first algorithm is a randomised distributed algorithm that computes a weighted matching of an arbitrary weighted tree, that approximates the maximum weighted matching by a factor 2 + e. The running time is O(1). The second algorithm is deterministic, and approximates the maximum weighted matching by a factor 2 + e, but has running time O(log* |V|). Our algorithms can also be used to compute maximum unweighted matchings on regular and almost regular graphs within a constant approximation",
"We present a technique for transforming classical approximation algorithms into constant-time algorithms that approximate the size of the optimal solution. Our technique is applicable to a certain subclass of algorithms that compute a solution in a constant number of phases. The technique is based on greedily considering local improvements in random order.The problems amenable to our technique include vertex cover, maximum matching, maximum weight matching, set cover, and minimum dominating set. For example, for maximum matching, we give the first constant-time algorithm that for the class of graphs of degree bounded by d, computes the maximum matching size to within epsivn, for any epsivn > 0, where n is the number of nodes in the graph. The running time of the algorithm is independent of n, and only depends on d and epsiv.",
"",
"Achieving a global goal based on local information is challenging, especially in complex and large-scale networks such as the Internet or even the human brain. In this paper, we provide an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems. Specifically, we give a distributed algorithm using only small messages which obtains an (ρΔ)1 k-approximation for general covering and packing problems in time O(k2), where ρ depends on the LP's coefficients. If message size is unbounded, we present a second algorithm that achieves an O(n1 k) approximation in O(k) rounds. Finally, we prove that these algorithms are close to optimal by giving a lower bound on the approximability of packing problems given that each node has to base its decision on information from its k-neighborhood.",
"A local algorithm is a distributed algorithm that runs in constant time, independently of the size of the network. Being highly scalable and fault tolerant, such algorithms are ideal in the operation of large-scale distributed systems. Furthermore, even though the model of local algorithms is very limited, in recent years we have seen many positive results for nontrivial problems. This work surveys the state-of-the-art in the field, covering impossibility results, deterministic local algorithms, randomized local algorithms, and local algorithms for geometric graphs."
],
"cite_N": [
"@cite_31",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_21",
"@cite_1",
"@cite_15",
"@cite_17"
],
"mid": [
"1549878614",
"1523785148",
"1965026568",
"2117761060",
"2109330224",
"2470787773",
"1998137177",
"2138623498"
]
}
| 0 |
||
0812.4893
|
2004854811
|
We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose–accept rounds executed by the Gale–Shapley algorithm for the stable marriage problem. Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. We apply our results to give a distributed (2+e)-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.
|
Our centralised constant-time approximation algorithm in Theorem is based on the ideas of Parnas and Ron @cite_9 and Nguyen and Onak @cite_21 . Their work presents constant-time approximation algorithms for estimating the size of a maximal matching, maximum-cardinality matching, and maximum-weight matching. Our work complements this line of research by presenting an algorithm for estimating the size of a stable matching.
|
{
"abstract": [
"For a given graph G over n vertices, let OPT\"G denote the size of an optimal solution in G of a particular minimization problem (e.g., the size of a minimum vertex cover). A randomized algorithm will be called an @a-approximation algorithm with an additive error for this minimization problem if for any given additive error parameter @e>0 it computes a value [email protected]? such that, with probability at least 2 3, it holds that OPT\"[email protected][email protected][email protected][email protected]@?OPT\"[email protected] Assume that the maximum degree or average degree of G is bounded. In this case, we show a reduction from local distributed approximation algorithms for the vertex cover problem to sublinear approximation algorithms for this problem. This reduction can be modified easily and applied to other optimization problems that have local distributed approximation algorithms, such as the dominating set problem. We also show that for the minimum vertex cover problem, the query complexity of such approximation algorithms must grow at least linearly with the average degree [email protected]? of the graph. This lower bound holds for every multiplicative factor @a and small constant @e as long as [email protected]?=O(n @a). In particular this means that for dense graphs it is not possible to design an algorithm whose complexity is o(n).",
"We present a technique for transforming classical approximation algorithms into constant-time algorithms that approximate the size of the optimal solution. Our technique is applicable to a certain subclass of algorithms that compute a solution in a constant number of phases. The technique is based on greedily considering local improvements in random order.The problems amenable to our technique include vertex cover, maximum matching, maximum weight matching, set cover, and minimum dominating set. For example, for maximum matching, we give the first constant-time algorithm that for the class of graphs of degree bounded by d, computes the maximum matching size to within epsivn, for any epsivn > 0, where n is the number of nodes in the graph. The running time of the algorithm is independent of n, and only depends on d and epsiv."
],
"cite_N": [
"@cite_9",
"@cite_21"
],
"mid": [
"1980155175",
"2109330224"
]
}
| 0 |
||
0811.1878
|
1784807126
|
Like any other logical theory, domain descriptions in reasoning about actions may evolve, and thus need revision methods to adequately accommodate new information about the behavior of actions. The present work is about changing action domain descriptions in propositional dynamic logic. Its contribution is threefold: first we revisit the semantics of action theory contraction that has been done in previous work, giving more robust operators that express minimal change based on a notion of distance between Kripke-models. Second we give algorithms for syntactical action theory contraction and establish their correctness w.r.t. our semantics. Finally we state postulates for action theory contraction and assess the behavior of our operators w.r.t. them. Moreover, we also address the revision counterpart of action theory change, showing that it benefits from our semantics for contraction.
|
Liberatore @cite_37 proposes a framework for reasoning about actions in which it is possible to express a given semantics of belief update, like Winslett's @cite_42 and Katsuno and Mendelzon's @cite_1 . This means it is the formalism, essentially an action description language, that is used to describe updates (the change of propositions from one state of the world to another) by expressing them as laws in the action theory. The main difference between Liberatore's work and Li and Pereira's is that, despite not being concerned, at least a priori, with changing action laws, Liberatore's framework allows for abductively introducing in the action theory new effect propositions (effect laws, in our terms) that consistently explain the occurrence of an event.
|
{
"abstract": [
"In this paper we show how several different semantics for belief update can be expressed in a framework for reasoning about actions. This framework can therefore be considered as a common core of all these update formalisms, thus making it clear what they have in common. This framework also allows expressing scenarios that are problematic for the classical formalization of belief update.",
"Ginsberg and Smith [6, 7] propose a new method for reasoning about action, which they term a possible worlds approach (PWA). The PWA is an elegant, simple, and potentially very powerful domain-independent technique that has proven fruitful in other areas of AI [13, 5]. In the domain of reasoning about action, Ginsberg and Smith offer the PWA as a solution to the frame problem (What facts about the world remain true when an action is performed?) and its dual, the ramification problem [3] (What facts about the world must change when an action is performed?). In addition, Ginsberg and Smith offer the PWA as a solution to the qualification problem (When is it reasonable to assume that an action will succeed?), and claim for the PWA computational advantages over other approaches such as situation calculus. Here and in [16] I show that the PWA fails to solve the frame, ramification, and qualification problems, even with additional simplifying restrictions not imposed by Ginsberg and Smith. The cause of the failure seems to be a lack of distinction in the PWA between the state of the world and the description of the state of the world. I introduce a new approach to reasoning about action, called the possible models approach, and show that the possible models approach works as well as the PWA on the examples of [6, 7] but does not suffer from its deficiencies.",
""
],
"cite_N": [
"@cite_37",
"@cite_42",
"@cite_1"
],
"mid": [
"1574702170",
"175258934",
"1594099509"
]
}
| 0 |
||
0811.1878
|
1784807126
|
Like any other logical theory, domain descriptions in reasoning about actions may evolve, and thus need revision methods to adequately accommodate new information about the behavior of actions. The present work is about changing action domain descriptions in propositional dynamic logic. Its contribution is threefold: first we revisit the semantics of action theory contraction that has been done in previous work, giving more robust operators that express minimal change based on a notion of distance between Kripke-models. Second we give algorithms for syntactical action theory contraction and establish their correctness w.r.t. our semantics. Finally we state postulates for action theory contraction and assess the behavior of our operators w.r.t. them. Moreover, we also address the revision counterpart of action theory change, showing that it benefits from our semantics for contraction.
|
The work by Eiter et al. @cite_5 @cite_52 is similar to ours in that they also propose a framework that is oriented to updating action laws. They mainly investigate the case where a new effect law is added to the description (and then has to be true in all models of the modified theory). This problem is the dual of contraction and is then closer to our definition of revision ( ).
|
{
"abstract": [
"How can an intelligent agent update her knowledge base about an action domain, relative to some conditions (possibly obtained from earlier observations)? We study this question in a formal framework for reasoning about actions and change, in which the meaning of an action domain description can be represented by a directed graph whose nodes correspond to states and whose edges correspond to action occurrences. We define the update of an action domain description in this framework, and show among other results that a solution to this problem can be obtained by a divide-and-conquer approach in some cases. We also introduce methods to compute a solution and an approximate solution to this problem, and analyze the computational complexity of these problems. Finally, we discuss techniques to improve the quality of solutions.",
"We study resolving conflicts between an action description and a set of conditions (possibly obtained from observations), in the context of action languages. In this formal framework, the meaning of an action description can be represented by a transition diagram---a directed graph whose nodes correspond to states and whose edges correspond to transitions describing action occurrences. This allows us to characterize conflicts by means of states and transitions of the given action description that violate some given conditions. We introduce a basic method to resolve such conflicts by modifying the action description, and discuss how the user can be supported in obtaining more preferred solutions. For that, we identify helpful questions the user may ask (e.g., which specific parts of the action description cause a conflict with some given condition), and we provide answers to them using properties of action descriptions and transition diagrams. Finally, we discuss the computational complexity of these questions in terms of related decision problems."
],
"cite_N": [
"@cite_5",
"@cite_52"
],
"mid": [
"2152033849",
"1496850533"
]
}
| 0 |
||
0811.1878
|
1784807126
|
Like any other logical theory, domain descriptions in reasoning about actions may evolve, and thus need revision methods to adequately accommodate new information about the behavior of actions. The present work is about changing action domain descriptions in propositional dynamic logic. Its contribution is threefold: first we revisit the semantics of action theory contraction that has been done in previous work, giving more robust operators that express minimal change based on a notion of distance between Kripke-models. Second we give algorithms for syntactical action theory contraction and establish their correctness w.r.t. our semantics. Finally we state postulates for action theory contraction and assess the behavior of our operators w.r.t. them. Moreover, we also address the revision counterpart of action theory change, showing that it benefits from our semantics for contraction.
|
Herzig et al. @cite_39 define a method for action theory contraction that, despite the similarity with the current work and the common underlying motivations, is more limited than the present constructions. First, with the referred approach we do not get minimal change. For example, in the referred work the operator for contracting executability laws is such that in the resulting theory the modified set of executabilities is given by [ - = ( i ) : i ] which, according to its semantics, gives theories among whose models are those resulting from removing arrows from all @math -worlds. A similar comment can be made for the contraction of effect laws.
|
{
"abstract": [
"In this work we address the problem of elaborating domain descriptions (alias action theories), in particular those that are expressed in dynamic logic. We define a general method based on contraction of formulas in a version of propositional dynamic logic with a solution to the frame problem. We present the semantics of our theory change and define syntactical operators for contracting a domain description. We establish soundness and completeness of the operators w.r.t. the semantics for descriptions that satisfy a principle of modularity that we have defined in previous work."
],
"cite_N": [
"@cite_39"
],
"mid": [
"149626383"
]
}
| 0 |
||
0811.1922
|
1533393351
|
A hypergraph dictatorship test is first introduced by Samorodnitsky and Trevisan and serves as a key component in their unique games based @math construction. Such a test has oracle access to a collection of functions and determines whether all the functions are the same dictatorship, or all their low degree influences are o(1). Their test makes q ≥ 3 queries, has amortized query complexity @math , but has an inherent loss of perfect completeness. In this paper we give an (adaptive) hypergraph dictatorship test that achieves both perfect completeness and amortized query complexity @math .
|
The orthogonal question of designing testers or @math s with as few queries as possible was also considered. In a highly influential paper @cite_8 , Håstad constructed a @math system making only three queries. Many variants also followed. In particular @math systems with perfect completeness making three queries were also achieved in @cite_2 @cite_15 . Similar to our approach, O'Donnell and Wu @cite_10 designed an optimal three-bit dictatorship test with perfect completeness, and later the same authors constructed a conditional @math system @cite_11 .
|
{
"abstract": [
"We prove optimal, up to an arbitrary e > 0, inapproximability results for Max-E k-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously. In particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex cover.",
"It is known that there exists a PCP characterization of NP where the verifier makes 3 queries and has a one-sided error that is bounded away from 1; and also that 2 queries do not suffice for such a characterization. Thus PCPs with 3 queries possess non-trivial verification power and motivate the task of determining the lowest error that can be achieved with a 3-query PCP. Recently, Hastad (1997) has shown a tight characterization of NP by constructing a 3-query PCP verifier with \"error\" arbitrarily close to 1 2. Unfortunately this verifier makes two-sided error and Hastad makes essential use of this feature. One-sided error, on the other hand, is a natural notion to associate with a proof system, since it has the desirable property that every rejected proof has a short counterexample. The question of determining the smallest error for which there exists a 3-query PCP verifier making one-sided error and accepting an NP-complete language, however, remained open. We resolve this question by showing that NP has a 3-query PCP with a one-sided error that is arbitrarily close to 1 2. This characterization is tight, i.e., the error cannot be lower. This result is in seeming contradiction with the results of Trevisan (1997) and Zwick (1998) who show that in order to recognize an NP-complete language, the error probability of a PCP verifier making 3 non-adaptive queries and having one-sided error must be at least 5 8. We get around this bottleneck by designing an adaptive 3-query PCP for NP. Our result yields the first tight analysis of an adaptive PCP; and reveals a previously unsuspected separation between the powers of adaptive and non-adaptive PCPs. Our design and analysis of adaptive PCPs can be extended to higher number of queries as well and we give an example of such a proof system with 5 queries. Our adaptive verifiers yield proof systems whose error probabilities match those of previous constructions, while also achieving one-sidedness in the error. This raises new questions about the power of adaptive PCPs, which deserve further study.",
"",
"In the conclusion of his monumental paper on optimal inapproximability results, Hastad [13] suggested that Fourier analysis of Dictator (Long Code) Tests may not be universally applicable in the study of CSPs. His main open question was to determine if the technique could resolve the approximability of satisfiable 3-bit constraint satisfaction problems. In particular, he asked if the \"Not Two\" (NTW) predicate is non-approximable beyond the random assignment threshold of 5 8 on satisfiable instances. Around the same time, Zwick [30] showed that all satisfiable 3-CSPs are 5 8-approximable and conjectured that the 5 8 is optimal. In this work we show that Fourier analysis techniques can produce a Dictator Test based on NTW with completeness 1 and soundness 5 8. Our test's analysis uses the Bonami-Gross-Beckner hypercontractive inequality. We also show a soundness lower bound of 5 8 for all 3-query Dictator Tests with perfect completeness. This lower bound for Property Testing is proved in part via a semidefinite programming algorithm of Zwick [30]. Our work precisely determines the 3-query \"Dictatorship Testing gap\". Although this represents progress on Zwick's conjecture, current PCP \"outer verifier\" technology is insufficient to convert our Dictator Test into an NP-hardness-of-approximation result.",
"In this paper we study a fundamental open problem in the area of probabilistic checkable proofs: What is the smallest s such that NP ⊆ naPCP1,s[O(log n),3]? In the language of hardness of approximation, this problem is equivalent to determining the smallest s such that getting an s-approximation for satisfiable 3-bit constraint satisfaction problems (\"3-CSPs\") is NP-hard. The previous best upper bound and lower bound for s are 20 27+µ by Khot and Saket [KS06], and 5 8 (assuming NP subseteq BPP) by Zwick [Zwi98]. In this paper we close the gap assuming Khot's d-to-1 Conjecture. Formally, we prove that if Khot's d-to-1 Conjecture holds for any finite constant integer d, then NP naPCP1,5 8+ µ[O(log n),3] for any constant µ > 0. Our conditional result also solves Hastad's open question [Has01] on determining the inapproximability of satisfiable Max-NTW (\"Not Two\") instances and confirms Zwick's conjecture [Zwi98] that the 5 8-approximation algorithm for satisfiable 3-CSPs is optimal."
],
"cite_N": [
"@cite_8",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"1999032440",
"2133344054",
"",
"1556276033",
"2008724760"
]
}
|
A Hypergraph Dictatorship Test with Perfect Completeness
|
Linearity and dictatorship testing have been studied in the past decade both for their combinatorial interest and connection to complexity theory. These tests distinguish functions which are linear/dictator from those which are far from being a linear/dictator function. The tests do so by making queries to a function at certain points and receiving the function's values at these points. The parameters of interest are the number of queries a test makes and the completeness and soundness of a test.
In this paper we shall work with boolean functions of the form f : {0, 1}^n → {−1, 1}. We say a function f is linear if f(x) = (−1)^{Σ_{i∈[n]} α_i x_i} for some α ∈ {0, 1}^n, and a dictator if f(x) = (−1)^{x_i} for some i ∈ [n]. Ideally one would like such tests to make as few queries as possible. One way to measure this tradeoff between the soundness s and the number of queries q is the amortized query complexity, defined as q / log s^{−1}. This investigation, initiated in [25], has since spurred a long sequence of works [22,20,11,6]. All the testers from these works run many iterations of a single dictatorship test by reusing queries from previous iterations. The techniques used are Fourier analytic, and the best amortized query complexity from this sequence of works has the form 1 + O(1/√q).
The next breakthrough occurs when Samorodnitsky [19] introduces the notion of a relaxed linearity test along with new ideas from additive combinatorics. In property testing, the goal is to distinguish objects that are very structured from those that are pseudorandom. In the case of linearity/dictatorship testing, the structured objects are the linear/dictator functions, and functions that are far from being linear/dictator are interpreted as pseudorandom. The recent paradigm in additive combinatorics is to find the right framework of structure and pseudorandomness and analyze combinatorial objects by dividing them into structured and pseudorandom components, see e.g. [24] for a survey. One success is the notion of Gowers norm [7], which has been fruitful in attacking many problems in additive combinatorics and computer science. In [19], the notion of pseudorandomness for linearity testing is relaxed; instead of designating the functions that are far from being linear as pseudorandom, the functions having small low degree Gowers norm are considered to be pseudorandom. By doing so, an optimal tradeoff between soundness and query complexity is obtained for the problem of relaxed linearity testing. (Here the tradeoff is stronger than the tradeoff for the traditional problem of linearity testing.)
In a similar fashion, in the PCP literature since [9], the pseudorandom objects in dictatorship tests are not functions that are far from being a dictator. The pseudorandom functions are typically defined to be either functions that are far from all "juntas" or functions whose "low-degree influences" are o(1). Both considerations of a dictatorship test are sufficient to compose the test in a PCP construction. In [21], building on the analysis of the relaxed linearity test in [19], Samorodnitsky and Trevisan construct a dictatorship test (taking the view that functions with arbitrarily small "low-degree influences" are pseudorandom) with amortized query complexity 1 + O(log q / q). Furthermore, the test is used as the inner verifier in a conditional PCP construction (based on unique games [12]) with the same parameters. However, their dictatorship test suffers from an inherent loss of perfect completeness. Ideally one would like testers with one-sided errors. One, for aesthetic reasons, testers should always accept valid inputs. Two, for some hardness of approximation applications, in particular coloring problems (see e.g. [10] or [5]), it is important to construct PCP systems with one-sided errors.
In this paper, we prove the following theorem: for infinitely many q, there exists an adaptive dictatorship test that makes q queries, has completeness 1, and has amortized query complexity 1 + O(log q / q) (see Theorem 3.1 for the precise statement). Our tester is a variant of the one given in [21]. Our tester is adaptive in the sense that it makes its queries in two stages. It first makes roughly log q nonadaptive queries into the function. Based on the values of these queries, the tester then selects the rest of the query points nonadaptively. Our analysis is based on techniques developed in [11,21,10,8].
Future Direction
Unfortunately, the adaptivity of our test is a drawback. The correspondence between PCP constructions and hardness of approximation needs the test to be fully nonadaptive. However, a more pressing issue is that our hypergraph dictatorship test does not immediately imply a new PCP characterization of NP. The reason is that a dictatorship test without "consistency checks" is most easily composed with the unique label cover defined in [12] as the outer verifier in a PCP reduction. As the conjectured NP-hardness of the unique label cover cannot have perfect completeness, the obvious approach in combining our test with the unique games-based outer verifier does not imply a new PCP result. However, there are variants of the unique label cover (e.g., Khot's d to 1 Conjecture) [12] that do have conjectured perfect completeness, and these variants are used to derive hardness of coloring problems in [5]. We hope that our result combined with similar techniques used in [5] may obtain a new conditional PCP construction and will motivate more progress on constraint satisfaction problems with bounded projection .
Preliminaries
We fix some notation and provide the necessary background in this section. We let [n] denote the set {1, 2, . . . , n}. For a vector v ∈ {0, 1}^n, we write |v| = Σ_{i∈[n]} v_i. We let ∧ denote the boolean AND, where a ∧ b = 1 iff a = b = 1. For vectors v, w ∈ {0, 1}^n, we write v ∧ w to denote the vector obtained by applying AND to v and w component-wise. We abuse notation and sometimes interpret a vector v ∈ {0, 1}^n as a subset v ⊆ [n], where i ∈ v iff v_i = 1.
Fourier Analysis
Definition 2.1. We define the Fourier coefficient of a function f : {0, 1}^n → R at α ∈ {0, 1}^n to be
f̂(α) = E_{x∈{0,1}^n}[ f(x) χ_α(x) ],
where χ_α(x) = (−1)^{Σ_{i∈[n]} α_i x_i}. It is easy to see that for α, β ∈ {0, 1}^n, E[χ_α · χ_β] is 1 if α = β and 0 otherwise. Since there are 2^n characters, they form an orthonormal basis for functions on {0, 1}^n, and we have the Fourier inversion formula
f(x) = Σ_{α∈{0,1}^n} f̂(α) χ_α(x)
and Parseval's Identity
Σ_{α∈{0,1}^n} f̂(α)^2 = E_x[ f(x)^2 ].
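To make these formulas concrete, here is a small brute-force check in Python of the inversion formula and Parseval's Identity. It enumerates all of {0,1}^n, so it is meant only as an illustration for tiny n; the helper names are ours, not part of the paper.

```python
from itertools import product

def fourier_coefficients(f, n):
    """Return {alpha: hat_f(alpha)} for f : {0,1}^n -> R, by direct enumeration."""
    points = list(product([0, 1], repeat=n))
    coeffs = {}
    for alpha in points:
        total = sum(f(x) * (-1) ** sum(a * b for a, b in zip(alpha, x)) for x in points)
        coeffs[alpha] = total / len(points)
    return coeffs

n = 3
f = lambda x: (-1) ** x[1]                 # the 2nd dictator
coeffs = fourier_coefficients(f, n)

# Parseval: sum_alpha hat_f(alpha)^2 = E_x[f(x)^2] (= 1 for a Boolean function)
assert abs(sum(c * c for c in coeffs.values()) - 1.0) < 1e-9

# Inversion: f(x) = sum_alpha hat_f(alpha) * chi_alpha(x) at every point
for x in product([0, 1], repeat=n):
    recon = sum(c * (-1) ** sum(a * b for a, b in zip(alpha, x))
                for alpha, c in coeffs.items())
    assert abs(recon - f(x)) < 1e-9
```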
Influence of Variables
For a boolean function f : {0, 1}^n → {−1, 1}, the influence of the i-th variable, I_i(f), is defined to be Pr_{x∈{0,1}^n}[ f(x) ≠ f(x + e_i) ], where e_i is the vector in {0, 1}^n with 1 in the i-th coordinate and 0 everywhere else. This corresponds to our intuitive notion of influence: how likely the outcome of f changes when the i-th variable of a random input is flipped. For the rest of this paper, it will be convenient to work with the Fourier analytic definition of I_i(f) instead, and we leave it to the readers to verify that the two definitions are equivalent when f is a boolean function.
Definition 2.2. Let f : {0, 1}^n → R. We define the influence of the i-th variable of f to be
I_i(f) = Σ_{α∈{0,1}^n : α_i = 1} f̂(α)^2.
We shall need the following technical lemma, which is Lemma 4 from [21]; it gives an upper bound on the influence of a product of functions.
Lemma 2.1 ([21]). Let f_1, . . . , f_k : {0, 1}^n → [−1, 1] and let f = Π_{j=1}^k f_j. Then for every i ∈ [n],
I_i(f) ≤ k · Σ_{j=1}^k I_i(f_j).
When the {f_j} are boolean functions, it is easy to see that I_i(f) ≤ Σ_{j=1}^k I_i(f_j) by the union bound. We now define the notion of low-degree influence.
Definition 2.3. Let w be an integer between 0 and n. We define the w-th degree influence of the i-th variable of a function f : {0, 1}^n → R to be
I_i^{≤w}(f) = Σ_{α∈{0,1}^n : α_i = 1, |α| ≤ w} f̂(α)^2.
While the definition of low-degree influence is standard in the literature, we shall make a few remarks since this definition does not have a clean combinatorial interpretation or an immediate justification. Dictatorship tests (those based on influences) classify functions in the NO instances to be those whose low-degree influences are o(1) for two reasons. One is that large parity functions, which have many variables with influence 1 but no variables with low-degree influence, must be rejected by the test. The second is that if w is fixed, then a bounded function has only a finite number of variables with large w-th degree influence. This easy fact, though we won't need it here, is often needed to lift a dictatorship test to a PCP construction. Both such considerations fail if we substitute the low-degree influence requirement by just influence, thus the need for a thresholded version of influence.
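The thresholding issue can be seen on a small example: the full parity has influence 1 in every variable but vanishing low-degree influence. The sketch below, with names of our choosing, computes both quantities by brute force from the Fourier side.

```python
from itertools import product

def fourier(f, n):
    pts = list(product([0, 1], repeat=n))
    return {a: sum(f(x) * (-1) ** sum(ai * xi for ai, xi in zip(a, x)) for x in pts) / len(pts)
            for a in pts}

def influence(coeffs, i):
    return sum(c * c for a, c in coeffs.items() if a[i] == 1)

def low_degree_influence(coeffs, i, w):
    return sum(c * c for a, c in coeffs.items() if a[i] == 1 and sum(a) <= w)

n = 4
parity = lambda x: (-1) ** sum(x)          # parity of all n bits
coeffs = fourier(parity, n)
assert abs(influence(coeffs, 0) - 1.0) < 1e-9          # every variable has influence 1,
assert low_degree_influence(coeffs, 0, n - 1) < 1e-9   # but no low-degree influence at all
```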
Gowers norm
In [7], Gowers uses analytic techniques to give a new proof of Szemerédi's Theorem [23] and in particular, initiates the study of a new norm of a function as a measure of pseudorandomness. Subsequently this norm is termed the Gowers uniformity norm and has been intensively studied and applied in additive combinatorics, see e.g. [24] for a survey. The use of the Gowers norm in computer science is initiated in [19,21].
Definition 2.4. Let f : {0, 1}^n → R. We define the d-th dimension Gowers uniformity norm of f to be
||f||_{U^d} = ( E_{x, x^1, . . . , x^d}[ Π_{S⊆[d]} f( x + Σ_{i∈S} x^i ) ] )^{1/2^d}.
For a collection of 2^d functions f_S : {0, 1}^n → R, S ⊆ [d], we define the d-th dimension Gowers inner product of {f_S}_{S⊆[d]} to be
⟨ {f_S}_{S⊆[d]} ⟩_{U^d} = E_{x, x^1, . . . , x^d}[ Π_{S⊆[d]} f_S( x + Σ_{i∈S} x^i ) ].
When f is a boolean function, one can interpret the Gowers norm as simply the expected number of "affine parallelepipeds" of dimension d. While this expression may look cumbersome at first glance, the use of the Gowers norm is in some sense to control expectations over some other expressions. For instance, to count the number of d + 1-term progressions of the form x, x + y, . . . , x + d · y in a subset, one may be interested in approximating expressions of the form
E_{x,y}[ f_1(x) f_2(x + y) · · · f_d(x + d·y) ],
where f_1, . . . , f_d are some bounded functions over some appropriate domain. In fact, as shown by Gowers, these expectations are upper bounded by the Gowers inner product of the f_i, which is in turn upper bounded by min_{i∈[d]} ||f_i||_{U^d}. Thus, in a rough sense, questions regarding progressions are then reduced to questions regarding the Gowers norms, which are more amenable to analytic techniques.
The proof showing that E_{x,y}[ f_1(x) f_2(x + y) · · · f_d(x + d·y) ] is upper bounded by the minimum Gowers norm of all the functions f_i is not difficult; it proceeds by repeated applications of the Cauchy-Schwarz inequality and substitution of variables. Collectively, statements saying that certain expressions are governed by the Gowers norm are coined von-Neumann type theorems in the literature.
For the analysis of hypergraph-based dictatorship test, we shall encounter the following expression.
Definition 2.5. Let {f_S}_{S⊆[d]} be a collection of functions where f_S : {0, 1}^n → R. We define the d-th dimension Gowers linear inner product of {f_S} to be
⟨ {f_S}_{S⊆[d]} ⟩_{LU^d} = E_{x^1, . . . , x^d}[ Π_{S⊆[d]} f_S( Σ_{i∈S} x^i ) ].
This definition is a variant of the Gowers inner product and is in fact upper bounded by the square root of the Gowers inner product as shown in [21]. Furthermore they showed that if a collection of functions has large Gowers inner product, then two functions must share an influential variable. Thus, one can infer the weaker statement that large linear Gowers inner product implies two functions have an influential variable.
For our purposes, we can encapsulate all the prior discussion into the following statement, which is Lemma 16 from [21]. This is the only fact about the Gowers norm that we explicitly need.
Lemma 2.2 ([21]). Let {f_S}_{S⊆[d]} be a collection of functions f_S : {0, 1}^n → [−1, 1] such that ⟨{f_S}⟩_{LU^d} ≥ ε. Then there exist S ≠ T ⊆ [d] and some i ∈ [n] such that I_i(f_S), I_i(f_T) ≥ τ, where τ = ε^4 / 2^{O(d)} depends only on ε and d.
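For intuition, the quantity of Definition 2.5 can be estimated by straightforward sampling. The sketch below does this for 2^d copies of a single dictator, for which the linear inner product is exactly 1; the function names and the Monte Carlo estimation are ours and are not part of the analysis above.

```python
import random
from itertools import chain, combinations

def subsets(d):
    return list(chain.from_iterable(combinations(range(d), r) for r in range(d + 1)))

def gowers_linear_inner_product(f_by_subset, n, d, samples=20000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        xs = [[rng.randint(0, 1) for _ in range(n)] for _ in range(d)]
        prod = 1.0
        for S in subsets(d):
            point = [0] * n
            for i in S:
                point = [(p + xi) % 2 for p, xi in zip(point, xs[i])]
            prod *= f_by_subset[S](tuple(point))
        total += prod
    return total / samples

n, d = 5, 2
dictator = lambda x: (-1) ** x[0]
fs = {S: dictator for S in subsets(d)}
print(gowers_linear_inner_product(fs, n, d))   # all factors agree, so the estimate is 1.0
```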
Dictatorship Test
Definition 3.1 (dictatorship). For i ∈ [n], the i-th dictator is the function f (x) = (−1) x i .
In the PCP literature, the i-th dictator is also known as the long code encoding of i, ((−1)^{x_i})_{x∈{0,1}^n}, which is simply the evaluation of the i-th dictator function at all points. Now let us define a t-function dictatorship test. Suppose we are given oracle access to a collection of boolean functions f_1, . . . , f_t. We want to make as few queries as possible into these functions to decide if all the functions are the same dictatorship, or no two functions have some common structure. More precisely, a t-function dictatorship test with completeness c and soundness s is a test that accepts with probability at least c whenever all t functions are the same dictator, and such that whenever it accepts with probability at least s + δ for some δ > 0, there exist a positive integer w, some τ > 0 (depending only on δ), two functions f_a, f_b with a ≠ b, and some i ∈ [n] such that I_i^{≤w}(f_a), I_i^{≤w}(f_b) ≥ τ.
A q-function dictatorship test making q queries, with soundness (q+1)/2^q, was proved in [21], but the test suffers from imperfect completeness. We obtain a (q − O(log q))-dictatorship test that makes q queries, has completeness 1, soundness O(q^3)/2^q, and in particular has amortized query complexity 1 + O(log q / q), the same as the test in [21]. By a simple change of variable, we can more precisely state the following: Theorem 3.1 (main theorem restated). For infinitely many t, there exists an adaptive t-function dictatorship test that makes t + log(t + 1) queries, has completeness 1, and soundness (t+1)^2 / 2^t.
Our test is adaptive and selects queries in two passes. During the first pass, it picks an arbitrary subset of log(t + 1) functions out of the t functions. For each function selected, our test picks a random entry y and queries the function at entry y. Then based on the values of these log(t + 1) queries, during the second pass, the test selects t positions nonadaptively, one from each function, then queries all t positions at once. The adaptivity is necessary in our analysis, and it is unclear if one can prove an analogous result with only one pass.
Folding
As introduced by Bellare, Goldreich, and Sudan [3], we shall assume that the functions are "folded" as only half of the entries of a function are accessed. We require our dictatorship test to make queries in a special manner. Suppose the test wants to query f at the point x ∈ {0, 1} n . If x 1 = 1, then the test queries f (x) as usual. If x 1 = 0, then the test queries f at the point 1 + x = (1, 1 + x 2 , . . . , 1 + x n ) and negates the value it receives. It is instructive to note that folding ensures f ( 1 + x) = −f (x) and E f = 0.
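A folded oracle can be simulated with a thin wrapper around a truth table, as sketched below; the dictionary representation and helper name are ours. The asserts check the two consequences of folding noted above, namely f(1 + x) = −f(x) and E f = 0.

```python
from itertools import product

def folded_query(table, x):
    """Query a folded version of the function stored in `table` ({0,1}^n -> {-1,+1})."""
    if x[0] == 1:
        return table[x]
    flipped = tuple(1 - b for b in x)   # the point 1 + x: every coordinate complemented
    return -table[flipped]

n = 3
table = {x: 1 for x in product([0, 1], repeat=n)}   # arbitrary stored values
vals = [folded_query(table, x) for x in product([0, 1], repeat=n)]
assert sum(vals) == 0                               # folding forces E[f] = 0
assert all(folded_query(table, x) == -folded_query(table, tuple(1 - b for b in x))
           for x in table)                          # and f(1 + x) = -f(x)
```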
Basic Test
For ease of exposition, we first consider the following simplistic scenario. Suppose we have oracle access to just one boolean function. Furthermore we ignore the tradeoff between soundness and query complexity. We simply want a dictatorship test that has completeness 1 and soundness 1 2 . There are many such tests in the literature; however, we need a suitable one which our hypergraph dictatorship test can base on. Our basic test below is a close variant of the one proposed by Guruswami, Lewin, Sudan, and Trevisan [8].
BASIC TEST T : with oracle access to f , 1. Pick x i , x j , y, z uniformly at random from {0, 1} n .
2. Query f (y).
3. Let v = (1 − f(y))/2. Accept iff f(x_i) f(x_j) = f( x_i + x_j + (v·1 + y) ∧ z ).
Lemma 3.2. The test T is a dictatorship test with completeness 1.
Proof. Suppose f is the ℓ-th dictator, i.e., f(x) = (−1)^{x_ℓ}. First note that
v + y_ℓ = (1 − (−1)^{y_ℓ})/2 + y_ℓ,
which evaluates to 0 (coordinates are added mod 2). Thus by linearity of f,
f( x_i + x_j + (v·1 + y) ∧ z ) = f(x_i) f(x_j) f( (v·1 + y) ∧ z ) = f(x_i) f(x_j) (−1)^{(v + y_ℓ) ∧ z_ℓ} = f(x_i) f(x_j),
and the test always accepts.
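Both the completeness argument and the soundness behavior can be observed empirically. The following Monte Carlo sketch (helper names ours) runs the basic test T on a dictator and on a large parity of odd arity, which is folded; the dictator is always accepted, while the parity is accepted with probability only slightly above 1/2, in line with the analysis that follows.

```python
import random

def add(u, v): return tuple((a + b) % 2 for a, b in zip(u, v))
def AND(u, v): return tuple(a & b for a, b in zip(u, v))

def basic_test(f, n, rng):
    xi = tuple(rng.randint(0, 1) for _ in range(n))
    xj = tuple(rng.randint(0, 1) for _ in range(n))
    y = tuple(rng.randint(0, 1) for _ in range(n))
    z = tuple(rng.randint(0, 1) for _ in range(n))
    v = (1 - f(y)) // 2
    shift = AND(add(tuple([v] * n), y), z)          # (v*1 + y) AND z
    return f(xi) * f(xj) == f(add(add(xi, xj), shift))

rng = random.Random(1)
n = 5
dictator = lambda x: (-1) ** x[2]
assert all(basic_test(dictator, n, rng) for _ in range(2000))   # completeness

parity = lambda x: (-1) ** sum(x)      # chi_alpha with |alpha| = 5 (odd, hence folded)
acc = sum(basic_test(parity, n, rng) for _ in range(20000)) / 20000
print(acc)                             # about 1/2 + 2^{-5} = 0.53125
```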
To analyze the soundness of the test T , we need to derive a Fourier analytic expression for the acceptance probability of T .
Proposition 3.3. Let p be the acceptance probability of T . Then
p = 1/2 + (1/2) Σ_{α∈{0,1}^n} f̂(α)^3 2^{−|α|} ( 1 + Σ_{β⊆α} f̂(β) ).
For a sanity check, let us interpret the expression for p. Suppose f = χ_α for some α ≠ 0 in {0, 1}^n, i.e., f̂(α) = 1 and all other Fourier coefficients of f are 0. Then clearly p = 1/2 + 2^{−|α|}, which equals 1 whenever f is a dictator function, as we have just shown. If |α| is large, then T accepts with probability close to 1/2. We shall first analyze the soundness and then derive this analytic expression for p.
Lemma 3.4. The test T is a dictatorship test with soundness 1/2.
Proof. Suppose the test T passes with probability at least 1/2 + ε, for some ε > 0. By applying Proposition 3.3, Cauchy-Schwarz, and Parseval's Identity, respectively, we obtain
ε ≤ (1/2) Σ_{α∈{0,1}^n} f̂(α)^3 2^{−|α|} ( 1 + Σ_{β⊆α} f̂(β) )
  ≤ (1/2) Σ_{α∈{0,1}^n} |f̂(α)|^3 2^{−|α|} ( 1 + ( Σ_{β⊆α} f̂(β)^2 )^{1/2} · 2^{|α|/2} )
  ≤ Σ_{α∈{0,1}^n} |f̂(α)|^3 2^{−|α|/2}.
Pick the least positive integer w such that 2^{−w/2} ≤ ε/2. Then by Parseval's again,
ε/2 ≤ Σ_{α∈{0,1}^n : |α|≤w} |f̂(α)|^3 ≤ max_{α∈{0,1}^n : |α|≤w} |f̂(α)|.
So there exists some β ∈ {0, 1}^n with |β| ≤ w such that ε/2 ≤ |f̂(β)|. With f being folded, β ≠ 0. Thus, there exists an i ∈ [n] such that β_i = 1 and
ε^2/4 ≤ f̂(β)^2 ≤ Σ_{α∈{0,1}^n : α_i = 1, |α|≤w} f̂(α)^2 = I_i^{≤w}(f).
Now we give the straightforward Fourier analytic calculation for p.
Proof of Proposition 3.3. As usual, we first arithmetize p. We write
p = E_{x_i,x_j,y,z}[ (1 + f(y))/2 · (1 + Acc(x_i, x_j, y, z))/2 ] + E_{x_i,x_j,y,z}[ (1 − f(y))/2 · (1 + Acc(x_i, x_j, 1 + y, z))/2 ],
where
Acc(x_i, x_j, y, z) = f(x_i) f(x_j) f( x_i + x_j + (y ∧ z) ).
Since f is folded, f(1 + y) = −f(y). As y and 1 + y are both identically distributed in {0, 1}^n, we have
p = 2 E_{x_i,x_j,y,z}[ (1 + f(y))/2 · (1 + Acc(x_i, x_j, y, z))/2 ].
Since E f = 0, we can further simplify the above expression to be
p = 1/2 + (1/2) E_{x_i,x_j,y,z}[ (1 + f(y)) Acc(x_i, x_j, y, z) ].
It suffices to expand out the terms E_{x_i,x_j,y,z}[ Acc(x_i, x_j, y, z) ] and E_{x_i,x_j,y,z}[ f(y) Acc(x_i, x_j, y, z) ].
For the first term, it is not hard to show that
E_{x_i,x_j,y,z}[ Acc(x_i, x_j, y, z) ] = Σ_{α∈{0,1}^n} f̂(α)^3 2^{−|α|},
by applying the Fourier inversion formula on f and averaging over x_i and x_j and then averaging over y and z over the AND operator.
Now we compute the second term. Applying the Fourier inversion formula to the last three occurrences of f and averaging over x_i and x_j, we obtain
E_{x_i,x_j,y,z}[ f(y) Acc(x_i, x_j, y, z) ] = Σ_{α∈{0,1}^n} f̂(α)^3 E_{y,z}[ f(y) χ_α(y ∧ z) ].
It suffices to expand out E_{y,z}[ f(y) χ_α(y ∧ z) ]. By grouping the z's according to their intersection with different possible subsets β of α, we have
E_{y,z}[ f(y) χ_α(y ∧ z) ] = Σ_{β⊆α} Pr_{z∈{0,1}^n}[ z ∩ α = β ] E_y[ f(y) Π_{i : α_i = 1} (−1)^{y_i ∧ z_i} ] = Σ_{β⊆α} 2^{−|α|} E_y[ f(y) Π_{i : β_i = 1} (−1)^{y_i} ] = 2^{−|α|} Σ_{β⊆α} f̂(β).
Putting everything together, it is easy to see that we have the Fourier analytic expression for p as stated in the lemma.
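As a numerical cross-check of Proposition 3.3, the sketch below compares the acceptance probability of T, computed by exhaustive enumeration, against the Fourier-analytic expression, for the 3-bit majority (a folded function). The enumeration is exponential and only intended as a sanity check; variable names are ours.

```python
from itertools import product

n = 3
pts = list(product([0, 1], repeat=n))
f = {x: -1 if sum(x) >= 2 else 1 for x in pts}          # (-1)^{maj(x)}, which is folded

def add(u, v): return tuple((a + b) % 2 for a, b in zip(u, v))
def AND(u, v): return tuple(a & b for a, b in zip(u, v))

# Exact acceptance probability of T by enumeration.
acc = 0
for xi, xj, y, z in product(pts, repeat=4):
    v = (1 - f[y]) // 2
    shift = AND(add(tuple([v] * n), y), z)
    acc += f[xi] * f[xj] == f[add(add(xi, xj), shift)]
p_exact = acc / len(pts) ** 4

# The Fourier-analytic expression from Proposition 3.3.
hat = {a: sum(f[x] * (-1) ** sum(ai * xi for ai, xi in zip(a, x)) for x in pts) / len(pts)
       for a in pts}
subsets = lambda a: [b for b in pts if all(bi <= ai for bi, ai in zip(b, a))]
p_formula = 0.5 + 0.5 * sum(hat[a] ** 3 * 2 ** (-sum(a)) * (1 + sum(hat[b] for b in subsets(a)))
                            for a in pts)
assert abs(p_exact - p_formula) < 1e-9
print(p_exact)
```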
Hypergraph Dictatorship Test
We prove the main theorem in this section. The basis of our hypergraph dictatorship test will be very similar to the test in the previous section. We remark that we did not choose to present the exact same basic test for hopefully a clearer exposition.
We now address the tradeoff between query complexity and soundness. If we simply repeat the basic test a number of iterations independently, the error is reduced, but the query complexity increases. In other words, the amortized query complexity does not change if we simply run the basic test for many independent iterations. Following Trevisan [25], all the dictatorship tests that save query complexity do so by reusing queries made in previous iterations of the basic test. To illustrate this idea, suppose test T queries f at the points x 1 + h 1 , x 2 + h 2 , x 1 + x 2 + h 1,2 to make a decision. For the second iteration, we let T query f at the points x 3 + h 3 and x 1 + x 3 + h 1,3 and reuse the value f (x 1 + h 1 ) queried during the first run of T . T then uses the three values to make a second decision. In total T makes five queries to run two iterations.
We may think of the first run of T as parametrized by the points x 1 and x 2 and the second run of T by x 1 and x 3 . In general, we may have k points x 1 , . . . , x k and a graph on [k] vertices, such that each edge e of the graph corresponds to an iteration of T parametrized by the points {x i } i∈e . We shall use a complete hypergraph on k vertices to save on query complexity, and we will argue that the soundness of the algorithm decreases exponentially with respect to the number of iterations.
H-TEST: given a hypergraph H = ([k], E) and oracle access to a collection of folded functions f_a, a ∈ [k] ∪ E,
1. Pick x_1, . . . , x_k, y_1, . . . , y_k, and z_a for each a ∈ [k] ∪ E, all uniformly at random from {0, 1}^n.
2. Query f_i(y_i) for each i ∈ [k], and let v_i = (1 − f_i(y_i))/2.
3. Accept iff for every e ∈ E,
Π_{i∈e} f_i( x_i + (v_i·1 + y_i) ∧ z_i ) = f_e( Σ_{i∈e} x_i + ( Σ_{i∈e} (v_i·1 + y_i) ) ∧ z_e ).
We make a few remarks regarding the design of the H-Test. The hypergraph test by Samorodnitsky and Trevisan [21] accepts iff for every e ∈ E, Π_{i∈e} f_i(x_i + η_i) equals f_e(Σ_{i∈e} x_i + η_e), where the bits in each vector η_a are chosen independently to be 1 with some small constant probability, say 0.01. The noise vectors η_a rule out the possibility that linear functions with large support can be accepted. To obtain a test with perfect completeness, we use ideas from [8,16,10] to simulate the effect of the noise perturbation.
Note that for y, z chosen uniformly at random from {0, 1}^n, the vector y ∧ z is a 1/4-noisy vector. As observed by Parnas, Ron, and Samorodnitsky [16], the test f(y ∧ z) = f(y) ∧ f(z) distinguishes between dictators and linear functions with large support. One can also combine linearity and dictatorship testing into a single test of the form f(x_1 + x_2 + y ∧ z)(f(y) ∧ f(z)) = f(x_1) f(x_2), as Håstad and Khot demonstrated [10]. However, iterating this test is too costly for us. In fact, Håstad and Khot also consider an adaptive variant that reads k^2 + 2k bits to obtain a soundness of 2^{−k^2}, the same parameters as in [20], while achieving perfect completeness as well. Without adaptivity, the test in [10] reads k^2 + 4k bits. While both the nonadaptive and adaptive tests in [10] have the same amortized query complexity, extending the nonadaptive test by Håstad and Khot to the hypergraph setting does not work for us. So to achieve the same amortized query complexity as the hypergraph test in [21], we also exploit adaptivity in our test. Theorem 3.5 (main theorem restated). For infinitely many t, there exists an adaptive t-function dictatorship test with t + log(t + 1) queries, completeness 1, and soundness (t+1)^2 / 2^t.
Proof. Take a complete hypergraph on k vertices, where k = log(t + 1). The statement follows by applying Lemmas 3.6 and 3.7.
Lemma 3.6. The H-Test is a (k + |E|)-function dictatorship test that makes |E| + 2k queries and has completeness 1.
Proof. The test makes k queries f i (y i ) in the first pass, and based on the answers to these k queries, the test then makes one query into each function f a , for each a ∈ [k] ∪ E. So the total number of queries is |E| + 2k.
Now suppose all the functions are the ℓ-th dictator for some ℓ ∈ [n], i.e., for all a ∈ [k] ∪ E, f_a = f, where f(x) = (−1)^{x_ℓ}. Note that for each i ∈ [k],
v_i + y_i(ℓ) = (1 − (−1)^{y_i(ℓ)})/2 + y_i(ℓ),
which evaluates to 0. Thus for each e ∈ E,
Π_{i∈e} f_i( x_i + (v_i·1 + y_i) ∧ z_i ) = f( Σ_{i∈e} x_i ) · Π_{i∈e} f( (v_i·1 + y_i) ∧ z_i ) = f( Σ_{i∈e} x_i ) · Π_{i∈e} (−1)^{(v_i + y_i(ℓ)) ∧ z_i(ℓ)} = f( Σ_{i∈e} x_i ),
and similarly,
f_e( Σ_{i∈e} x_i + ( Σ_{i∈e} (v_i·1 + y_i) ) ∧ z_e ) = f( Σ_{i∈e} x_i ).
Hence the test always accepts.
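The completeness claim of Lemma 3.6 can likewise be checked empirically. The sketch below runs the H-Test, as stated above, on a single hyperedge with all functions equal to one dictator; the toy hypergraph, helper names, and sampling are ours.

```python
import random

def add(u, v): return tuple((a + b) % 2 for a, b in zip(u, v))
def AND(u, v): return tuple(a & b for a, b in zip(u, v))

def h_test(funcs, k, edges, n, rng):
    x = {i: tuple(rng.randint(0, 1) for _ in range(n)) for i in range(1, k + 1)}
    y = {i: tuple(rng.randint(0, 1) for _ in range(n)) for i in range(1, k + 1)}
    z = {a: tuple(rng.randint(0, 1) for _ in range(n)) for a in list(range(1, k + 1)) + edges}
    v = {i: (1 - funcs[i](y[i])) // 2 for i in range(1, k + 1)}
    shift = {i: AND(add(tuple([v[i]] * n), y[i]), z[i]) for i in range(1, k + 1)}
    for e in edges:
        lhs, sx, se = 1, tuple([0] * n), tuple([0] * n)
        for i in e:
            lhs *= funcs[i](add(x[i], shift[i]))
            sx = add(sx, x[i])
            se = add(se, add(tuple([v[i]] * n), y[i]))
        if lhs != funcs[e](add(sx, AND(se, z[e]))):
            return False
    return True

n, k = 5, 2
edges = [frozenset({1, 2})]                     # one hyperedge {1, 2}
dictator = lambda x: (-1) ** x[3]
funcs = {a: dictator for a in [1, 2] + edges}
rng = random.Random(0)
assert all(h_test(funcs, k, edges, n, rng) for _ in range(2000))
```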
Proposition 3.8. Let f : {0, 1}^n → {−1, 1}, and define g : {0, 1}^{2n} → [−1, 1] by
g(x; y) = E_{z∈{0,1}^n}[ f( c′ + x + (c + y) ∧ z ) ],
where c, c′ are some fixed vectors in {0, 1}^n. Then
ĝ(α; β)^2 = f̂(α)^2 · 1_{β⊆α} · 4^{−|α|}.
Proof. This is a straightforward Fourier analytic calculation. By definition,
ĝ(α; β)^2 = ( E_{x,y,z∈{0,1}^n}[ f( c′ + x + (c + y) ∧ z ) χ_α(x) χ_β(y) ] )^2.
By averaging over x it is easy to see that
ĝ(α; β)^2 = f̂(α)^2 ( E_{y,z∈{0,1}^n}[ χ_α( (c + y) ∧ z ) χ_β(y) ] )^2.
Since the bits of y are chosen independently and uniformly at random, if β \ α is nonempty, the above expression is zero. So we can write
ĝ(α; β)^2 = f̂(α)^2 · 1_{β⊆α} · ( Π_{i∈α\β} E_{y_i,z_i}[ (−1)^{(c_i + y_i) ∧ z_i} ] · Π_{i∈β} E_{y_i,z_i}[ (−1)^{(c_i + y_i) ∧ z_i + y_i} ] )^2.
It is easy to see that the term E_{y_i,z_i}[ (−1)^{(c_i + y_i) ∧ z_i} ] evaluates to 1/2 and the term E_{y_i,z_i}[ (−1)^{(c_i + y_i) ∧ z_i + y_i} ] evaluates to (−1)^{c_i} · (1/2). Thus ĝ(α; β)^2 = f̂(α)^2 · 1_{β⊆α} · 4^{−|α|}, as claimed.
Lemma 3.7. Suppose the H-Test accepts a collection of folded functions {f_a}_{a∈[k]∪E} with probability at least 2^k / 2^{|E|} + ε for some ε > 0. Then there exist some a ≠ b ∈ [k] ∪ E, a positive integer w, some ε′ > 0 (with w and ε′ depending only on ε and k), and some i ∈ [n] such that
I_i^{≤w}(f_a), I_i^{≤w}(f_b) ≥ ε′.
Proof. As usual we first arithmetize p, the acceptance probability of the test. We write
p = Σ_{v∈{0,1}^k} E_{{x_i},{y_i},{z_a}}[ Π_{i∈[k]} (1 + (−1)^{v_i} f_i(y_i))/2 · Π_{e∈E} (1 + Acc({x_i, y_i, v_i, z_i}_{i∈e}, z_e))/2 ],
where
Acc({x_i, y_i, v_i, z_i}_{i∈e}, z_e) = Π_{i∈e} f_i( x_i + (v_i·1 + y_i) ∧ z_i ) · f_e( Σ_{i∈e} x_i + ( Σ_{i∈e} (v_i·1 + y_i) ) ∧ z_e ).
For each i ∈ [k], f_i is folded, so (−1)^{v_i} f_i(y_i) = f_i(v_i·1 + y_i). Since y_i and v_i·1 + y_i are identically distributed, every summand above equals the summand at v = 0, and hence
p = 2^k E_{{x_i},{y_i},{z_a}}[ Π_{i∈[k]} (1 + f_i(y_i))/2 · Π_{e∈E} (1 + Acc({x_i, y_i, 0, z_i}_{i∈e}, z_e))/2 ].
Instead of writing Acc({x_i, y_i, 0, z_i}_{i∈e}, z_e), for convenience we shall write Acc(e) as a notational shorthand. Observe that since 1 + f_i(y_i) is either 0 or 2, we may write
p ≤ 2^k E_{{x_i},{y_i},{z_a}}[ Π_{e∈E} (1 + Acc(e))/2 ].
Note that the product Π_{e∈E} (1 + Acc(e))/2 expands into
2^{−|E|} ( 1 + Σ_{∅ ≠ E′ ⊆ E} Π_{e∈E′} Acc(e) ),
so, by the assumption on the acceptance probability, we have
ε/2^k ≤ E_{{x_i},{y_i},{z_a}}[ 2^{−|E|} Σ_{∅ ≠ E′ ⊆ E} Π_{e∈E′} Acc(e) ].
By averaging (there are fewer than 2^{|E|} nonempty subsets of E), there must exist some nonempty subset E′ ⊆ E such that
ε/2^k ≤ E_{{x_i},{y_i},{z_a}}[ Π_{e∈E′} Acc(e) ].
Let Odd consist of the vertices in [k] with odd degree in E′. Expanding out the definition of Acc(e), we can conclude
ε/2^k ≤ E_{{x_i},{y_i},{z_a}}[ Π_{i∈Odd} f_i( x_i + y_i ∧ z_i ) · Π_{e∈E′} f_e( Σ_{i∈e} x_i + ( Σ_{i∈e} y_i ) ∧ z_e ) ].
We now define a family of functions that represent the "noisy versions" of the f_a. For a ∈ [k] ∪ E, define g′_a : {0, 1}^{2n} → [−1, 1] to be
g′_a(x; y) = E_{z∈{0,1}^n}[ f_a( x + y ∧ z ) ].
Thus we have
ε/2^k ≤ E_{{x_i},{y_i}}[ Π_{i∈Odd} g′_i(x_i; y_i) · Π_{e∈E′} g′_e( Σ_{i∈e} x_i ; Σ_{i∈e} y_i ) ].
Following the approach in [11,21], we are going to reduce the analysis of the iterated test to one hyperedge. Let d be the maximum size of an edge in E′, and without loss of generality, let (1, 2, . . . , d) be a maximal edge in E′. Now, fix the values of x_{d+1}, . . . , x_k and y_{d+1}, . . . , y_k so that the following inequality holds:
ε/2^k ≤ E_{x_1,...,x_d, y_1,...,y_d}[ Π_{i∈Odd} g′_i(x_i; y_i) · Π_{e∈E′} g′_e( Σ_{i∈e} x_i ; Σ_{i∈e} y_i ) ],
where the variables x_i, y_i with i > d are now fixed. Absorbing the fixed variables into shifts, each factor becomes a function of the form g_a(x; y) = E_z[ f_a( c′_a + x + (c_a + y) ∧ z ) ] for some fixed vectors c_a, c′_a ∈ {0, 1}^n, evaluated at ( Σ_{i∈e∩[d]} x_i ; Σ_{i∈e∩[d]} y_i ) (and at (x_i; y_i) for a vertex i ∈ Odd ∩ [d]). For each S ⊆ [d], let G_S : {0, 1}^{2n} → [−1, 1] be the product of all the factors whose arguments are ( Σ_{i∈S} x_i ; Σ_{i∈S} y_i ). The right-hand side above is then precisely the d-th dimension Gowers linear inner product of {G_S}_{S⊆[d]} over the pairs (x_i; y_i), so by Lemma 2.2 there exist S ≠ T ⊆ [d], some i, and τ > 0, such that I_i(G_S), I_i(G_T) ≥ τ, where τ = ε^4 / 2^{O(d)}. Note that G_∅ is the product of all the functions g_a that are indexed by vertices or edges outside of [d]. So G_∅ is a constant function, and all of its variables clearly have influence 0. Thus neither S nor T is empty. Since G_S and G_T are products of at most 2^k functions, by Lemma 2.1 there must exist some a ≠ b ∈ [d] ∪ E′ such that I_i(g_a), I_i(g_b) ≥ τ / 2^{2k}. Recall that we have defined g_a(x; y) to be E_z[ f_a( c′_a + x + (c_a + y) ∧ z ) ]. Thus we can apply Proposition 3.8 to obtain
I_i(g_a) = Σ_{(α,β)∈{0,1}^{2n} : i∈(α,β)} ĝ_a(α; β)^2 = Σ_{α∈{0,1}^n : i∈α} Σ_{β⊆α} f̂_a(α)^2 4^{−|α|} = Σ_{α∈{0,1}^n : i∈α} f̂_a(α)^2 2^{−|α|}.
Let w be the least positive integer such that 2^{−w} ≤ τ / 2^{2k+1}. Then it is easy to see that I_i^{≤w}(f_a) ≥ τ / 2^{2k+1}. Similarly, I_i^{≤w}(f_b) ≥ τ / 2^{2k+1} as well. Hence this completes the proof.
Acknowledgments
I am grateful to Alex Samorodnitsky for many useful discussions and his help with the Gowers norm. I also thank Madhu Sudan for his advice and support and Swastik Kopparty for an encouraging discussion during the initial stage of this research.
| 5,736 |
0810.3438
|
1658695903
|
Single node failures represent more than 85% of all node failures in today's large communication networks such as the Internet. Also, these node failures are usually transient. Consequently, having the routing paths globally recomputed does not pay off since the failed nodes recover fairly quickly, and the recomputed routing paths need to be discarded. Instead, we develop algorithms and protocols for dealing with such transient single node failures by suppressing the failure (instead of advertising it across the network), and routing messages to the destination via alternate paths that do not use the failed node. We compare our solution to that of Ref. [11] wherein the authors have presented a "Failure Insensitive Routing" protocol as a proactive recovery scheme for handling transient node failures. We show that our algorithms are faster by an order of magnitude while our paths are equally good. We show via simulation results that our paths are usually within 15% of the optimal for randomly generated graphs with 100-1000 nodes.
|
One popular approach to tackling the issues related to transient failures of network elements is that of using proactive recovery schemes. These schemes typically work by precomputing alternate paths at the network setup time for the failure scenarios, and then using these alternate paths to re-route the traffic when the failure actually occurs. Also, the information about the failure is suppressed in the hope that it is a transient failure. The local rerouting based solutions proposed in @cite_8 @cite_9 @cite_4 @cite_7 @cite_0 fall into this category.
|
{
"abstract": [
"The increasing proportion of data traffic being carried in public networks is necessitating tractable and scalable algorithms in the design of ATM networks. In particular, the design of routing tables for ATM networks operated under the interim inter-switch signalling protocol (IISP) requires a significant amount of manual work in order to design and implement the underlying static routing tables that enable end-to-end connectivity as the network grows. This paper presents a scalable algorithm that generates IISP routing table entries such that no loops are created and so that connectivity is maintained between all origin destination nodes under single-link failures. The algorithm generates shortest (i.e., lowest-cost) primary and alternate paths for any single-link failure scenario, while also demonstrating that at least one such solution can be found for any network graph devoid of bridges. Note that re-routing for single-link failures is considered adequate when sufficient protection is provided at the lower network layers. The algorithm has been fully implemented in a practical software tool, with execution time being a polynomial function of the network complexity.",
"As the Internet becomes the critical information infrastructure for both personal and business applications, survivable routing protocols need to be designed that maintain the performance of those services in the presence of failures. This paper examines the survivability of interdoamin routing protocols in the presence of routing failure events, and provides a backup route aware routing protocol that performs non-stop routing in the presence of failures. We demonstrate through simulation its effectiveness in preventing packet losses during transient routing failures.",
"We investigate the single link failure recovery problem and its application to the alternate path routing problem for ATM networks, and the k-replacement edges for each edge of a minimum cost spanning tree. Specifically, given a 2-connected graph G, a specified node s, and a shortest paths tree Ts = e1, e2, . . . , eni1 of s, where ei = (xi, yi) and xi = parentTs(yi), find a shortest path from yi to s in the graph G for 1 · i · n i 1. We present an O(m + n log n) time algorithm for this problem and a linear time algorithm for the case when all weights are equal. When the edge weights are integers, we present an algorithm that takes O(m + Tsort(n)) time, where Tsort(n) is the time required to sort n integers. We establish a lower bound of (min(m p n, n 2 )) for the directed version of our problem under the path comparison model, where Ts is the shortest paths destination tree of s. We show that any solution to the single link recovery problem can be adapted to solve the alternate path routing problem in ATM networks. Our technique for the single link failure recovery problem is adapted to find the k-replacement edges for the tree edges of a minimum cost spanning tree in O(m + n log n) time.",
"Dealing with network failures effectively is a major operational challenge for Internet service providers. Commonly deployed link state routing protocols such as OSPF react to link failures through global (i.e., network-wide) link state advertisements and routing table recomputations, causing significant forwarding discontinuity after a failure. The drawback with these protocols is that they need to trade off routing stability and forwarding continuity. To improve failure resiliency without jeopardizing routing stability, we propose a proactive local rerouting based approach called failure insensitive routing (FIR). The proposed approach prepares for failures using interface-specific forwarding, and upon a failure, suppresses the link state advertisement and instead triggers local rerouting using a backwarding table. In this paper, we prove that when no more than one link failure notification is suppressed, FIR always finds a loop-free path to a destination if one such path exists. We also formally analyze routing stability and network availability under both proactive and reactive approaches, and show that FIR provides better stability and availability than OSPF.",
"With the emergence of voice over IP and other real-time business applications, there is a growing demand for an IP network with high service availability. Unfortunately, in today's Internet, transient failures occur frequently due to faulty interfaces, router crashes, etc., and current IP networks lack the resiliency needed to provide high availability. To enhance availability, we proposed failure inferencing based fast rerouting (FIFR) approach that exploits the existence of a forwarding table per line-card, for lookup efficiency in current routers, to provide fast rerouting similar to MPLS, while adhering to the destination-based forwarding paradigm. In our previous work, we have shown that the FIFR approach can deal with single link failures. In this paper, we extend the FIFR approach to ensure loop-free packet delivery in case of single router failures also, thus mitigating the impact of many scenarios of failures. We demonstrate that the proposed approach not only provides high service availability but also incurs minimal routing overhead."
],
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_0"
],
"mid": [
"2157578392",
"2104602019",
"2132422341",
"2110916358",
"2107096560"
]
}
|
EFFICIENT ALGORITHMS AND ROUTING PROTOCOLS FOR HANDLING TRANSIENT SINGLE NODE FAILURES
|
Let G = (V, E) be an edge weighted graph that represents a computer network, where the weight of an edge e, a positive real number denoted by cost(e), represents the cost (time) required to transmit a packet through the edge (link). The number of vertices (|V|) is n and the number of edges (|E|) is m. It is well known that a shortest paths tree of a node s, T_s, specifies the fastest way of transmitting a message to node s originating at any given node in the graph under the assumption that messages can be transmitted at the specified costs. Under normal operation the routes are the fastest, but when the system carries heavy traffic on some links these routes might not be the best routes. These trees can be constructed (in polynomial time) by finding a shortest path between every pair of nodes. In this paper we consider the case when the nodes in the network are susceptible to transient faults. These are sporadic faults of at most one node at a time (the nodes are single- or multi-processor computers) that last for a relatively short period of time. This type of situation has been studied in the past [11] because it represents most of the node failures occurring in networks. Single node failures represent more than 85% of all node failures [7]. Also, these node failures are usually transient, with 46% lasting less than a minute, and 86% lasting less than 10 minutes [7]. Because nodes fail for relatively short periods of time, propagating information about the failure throughout the network is not recommended.
In this paper we consider the case where the network is biconnected (2-node-connected), meaning that the deletion of a single node does not disconnect the network. Based on our previous assumptions about failures, a message originating at node x with destination s will be sent along the path specified by T s until it reaches node s or a node (other than s) that failed. In the latter case, we need to use a recovery path to s from that point. Since we assume single node faults and the graph is biconnected, such a path always exists. We call this problem of finding the recovery paths the Single Node Failure Recovery (SNFR) problem. It is important to recognize that the recovery path depends heavily on the protocol being deployed in the system. Later on we discuss our (simple) routing protocol.
Preliminaries
Our communication network is modeled by an edge-weighted biconnected undirected graph G = (V, E), with n = |V| and m = |E|. Each edge e ∈ E has an associated cost (weight), denoted by cost(e), which is a non-negative real number. p_G(s, t) denotes a shortest path between s and t in graph G, and d_G(s, t) denotes its cost (weight).
A shortest path tree T_s for a node s is a collection of n−1 edges {e_1, e_2, . . . , e_{n−1}} of G which form a spanning tree of G such that the path from node v to s in T_s is a shortest path from v to s in G. We say that T_s is rooted at node s. With respect to this root we define the set of nodes that are the children of each node x as follows. In T_s, every node y that is adjacent to x such that x is on the path in T_s from y to s is a child of x. For each node x in the shortest paths tree, k_x denotes the number of children of x in the tree, and C_x = {x_1, x_2, . . . , x_{k_x}} denotes this set of children of the node x. Also, x is said to be the parent of each x_i ∈ C_x in the tree T_s. With respect to s, the parent node, p, of a node c is sometimes referred to as the primary neighbor or primary router of c, while c is referred to as an upstream neighbor or upstream router of p. The children of a particular node are said to be siblings of each other. V_x(T) denotes the set of nodes in the subtree of x in the tree T, and E_x ⊆ E denotes the set of all edges incident on the node x in the graph G. We use nextHop(x, y) to denote the next node after x on the shortest path from x to y. Note that by definition, nextHop(x, y) is the parent of x in T_y.
Finally, we use ρ_x to denote the escape edge in G(E) \ T_s that the node x uses to recover from the failure of its parent. As we discuss later, having the information of a single escape edge ρ_x for each node x ∈ G(V) with x ≠ s is sufficient to construct the entire alternate path for any node to recover from the failure of its parent, even though the path may actually contain multiple non-tree edges.
Problem Definition
The Single Node Failure Recovery (SNFR) problem is defined as follows: given a biconnected undirected edge weighted graph G = (V, E), and the shortest paths tree T_s(G) of a node s in G, where C_x = {x_1, x_2, . . . , x_{k_x}} denotes the set of children of the node x in T_s, for each node x ∈ V with x ≠ s, find a path from each x_i ∈ C_x to s in the graph G = (V \ {x}, E \ E_x), where E_x is the set of edges adjacent to vertex x.
In other words, for each node x in the graph, we are interested in finding alternate paths from each of its children to the source node s when the node x fails. Note that we don't consider the problem to be well defined when the node s fails.
The above definition of alternate paths matches that in [10] for reverse paths: for each node x ∈ G(V ), find a path from x to the node s that does not use the primary neighbor (parent node) y of x in T s .
Main Results
We discuss our efficient algorithm for the SNFR problem that has a running time of O(m log n) (by contrast, the alternate path algorithms of [6,8,11] have a time complexity of Ω(mn log n) per destination). We further develop protocols based on this algorithm for recovering from single node transient failures in communication networks. In the failure free case, our protocol does not use any extra resources.
The recovery paths computed by our algorithm are not necessarily the shortest recovery paths. However, we demonstrate via simulation results that they are very close to the optimal paths.
We compare our results with those of [11] wherein the authors have also studied the same problem and presented protocols based on local rerouting for dealing with transient single node failures. One important difference between the algorithms of [6,8,11] and our's is that unlike our algorithm, these are based primarily on recomputations. Consequently, our algorithm is faster by an order of magnitude than those in [6,8,11], and as shown by our simulation results, our recovery paths are usually comparable, and sometimes better.
Algorithm for Single Node Failure Recovery
A naive algorithm for the SNFR problem is based on recomputation: for each node v ∈ G(V) with v ≠ s, compute the shortest paths tree of s in the graph G(V \ v, E \ E_v). Of interest are the paths from s to each of the nodes v_i ∈ C_v. This naive algorithm invokes a shortest paths algorithm n − 1 times, and thus takes O(mn + n^2 log n) time when it uses the Fibonacci heap [3] implementation of Dijkstra's shortest paths algorithm [2]. While these paths are optimal recovery paths for recovering from the node failure, their structure can be much different from each other, and from the original shortest paths (in absence of any failures) -- to the extent that routing messages along these paths may involve recomputing large parts of the primary routing tables at the nodes through which these paths pass. The recovery paths computed by our algorithm have a well defined structure, and they overlap with the paths in the original shortest paths tree (T_s) to an extent that storing the information of a single edge, ρ_x, at each node x provides sufficient information to infer the entire recovery path.
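For concreteness, the recomputation baseline looks as follows in Python: one Dijkstra run per failed node, keeping the new distances of that node's children. The graph encoding and the toy ring network are ours; this is only the baseline, not the faster algorithm developed below.

```python
import heapq

def dijkstra(adj, s):
    dist, parent = {u: float("inf") for u in adj}, {u: None for u in adj}
    dist[s], heap = 0.0, [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if v in dist and d + w < dist[v]:
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, parent

def naive_snfr(adj, s):
    _, parent = dijkstra(adj, s)
    out = {}
    for x in adj:
        if x == s:
            continue
        children = [v for v, p in parent.items() if p == x]
        pruned = {u: [(v, w) for v, w in nb if v != x] for u, nb in adj.items() if u != x}
        dist_x, _ = dijkstra(pruned, s)
        for c in children:
            out[c] = dist_x[c]          # optimal recovery distance for c when x fails
    return out

ring = {"s": [("a", 1), ("d", 1)], "a": [("s", 1), ("b", 1)],
        "b": [("a", 1), ("c", 1)], "c": [("b", 1), ("d", 1)], "d": [("c", 1), ("s", 1)]}
print(naive_snfr(ring, "s"))            # when a fails, b recovers via c and d (cost 3)
```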
Basic Principles and Observations
We start by describing some basic observations about the characteristics of the recovery paths. We also categorize the graph edges according to their role in providing recovery paths for a node when its parent fails.
[Figure 1. Recovery paths for recovering from the failure of x.]
Figure 1 illustrates a scenario of a single node failure. In this case, the node x has failed, and we need to find recovery paths to s from each x_i ∈ C_x. When a node fails, the shortest paths tree of s, T_s, gets split into k_x + 1 components: one containing the source node s, and each of the remaining ones contains one subtree of a child x_i ∈ C_x.
Notice that the edge {g p , g q } (Figure 1), which has one end point in the subtree of x j , and the other outside the subtree of x provides a candidate recovery path for the node x j . The complete path is of the form p G (x j , g p ) ; {g p , g q } ; p G (g q , s). Since g q is outside the subtree of x, the path p G (g q , s) is not affected by the failure of x. Edges of this type (from a node in the subtree of x j ∈ C x to a node outside the subtree of x) can be used by x j ∈ C x to escape the failure of node x. Such edges are called green edges. For example, edge {g p , g q } is a green edge.
Next, consider the edge {b u , b v } ( Figure 1) between a node in the subtree of x i and a node in the subtree of x j . Although there is no green edge with an end point in the subtree of x i , the edges {b u , b v } and {g p , g q } together offer a candidate recovery path that can be used by x i to recover from the failure of x. Part of this path connects
x i to x j (p G (x i , b u ) ; {b u , b v } ; p G (b v , x j ))
, after which it uses the recovery path of x_j (via x_j's green edge, {g_p, g_q}). Edges of this type (from a node in the subtree of x_i to a node in the subtree of a sibling x_j for some i ≠ j) are called blue edges. Another example of a blue edge is the edge {b_p, b_q}, which can be used by the node x_1 to recover from the failure of x.
Note that edges like {r a , r b } and {b v , g p } (Figure 1) with both end points within the subtree of the same child of x do not help any of the nodes in C x to find a recovery path from the failure of node x. We do not consider such edges in the computation of recovery paths, even though they may provide a shorter recovery path for some nodes (e.g. {b v , g p } may offer a shorter recovery path to x i ). The reason for this is that routing protocols would need to be quite complex in order to use this information. We carefully organize the green and blue edges in a way that allows us to retain only the useful edges and eliminate useless (red) ones efficiently.
We now describe the construction of a new graph R x , the recovery graph for x, which will be used to compute recovery paths for the elements of C x when the node x fails. A single source shortest paths computation on this graph suffices to compute the recovery paths for all x i ∈ C x .
The graph R x has k x + 1 nodes, where k x = |C x |. A special node, s x , represents the source node s in the original graph G = (V, E). Apart from s x , we have one node, denoted by y i , for each x i ∈ C x . We add all the green and blue edges defined earlier to the graph R x as follows. A green edge with an end point in the subtree of x i (by definition, green edges have the other end point outside the subtree of x) translates to an edge between s x and y i . A blue edge with an end point in the subtree of x i and the other in the subtree of x j translates to an edge between nodes y i and y j . However, the weight of each edge added to R x is not the same as the weight of the green or blue edge in G = (V, E) used to define it. The weights are specified below.
Note that the candidate recovery path of x_j that uses the green edge g = {g_p, g_q} has total cost equal to:
greenWeight(g) = d_G(x_j, g_p) + cost(g_p, g_q) + d_G(g_q, s).    (1)
As discussed earlier, a blue edge provides a path connecting two siblings of x, say x_i and x_j. Once the path reaches x_j, the remaining part of the recovery path of x_i coincides with that of x_j. If {b_u, b_v} is the blue edge connecting the subtrees of x_i and x_j (the cheapest one corresponding to the edge {y_i, y_j}), the length of the subpath from x_i to x_j is:
blueWeight(b) = d_G(x_i, b_u) + cost(b_u, b_v) + d_G(b_v, x_j).    (2)
We assign this weight to the edge corresponding to the blue edge {b_u, b_v} that is added in R_x between y_i and y_j.
The construction of our graph R x is now complete. Computing the shortest paths tree of s x in R x provides enough information to compute the recovery paths for all nodes x i ∈ C x when x fails.
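The construction can be prototyped directly, as in the sketch below: classify the non-tree edges seen from x's subtrees as green or blue, add them to R_x with the weights of equations (1) and (2), and run Dijkstra from s_x. This is a plain, non-optimized illustration for a single failing node x (names and the toy graph are ours); it does not reproduce the O(m log n) bookkeeping described in the next subsection.

```python
import heapq

def dijkstra(adj, s):
    dist = {u: float("inf") for u in adj}
    parent = {u: None for u in adj}
    dist[s], heap = 0.0, [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, parent

def owner_of(v, parent, x, children):
    # the child of x whose subtree of T_s contains v, or None if v lies outside
    while v is not None:
        if v in children:
            return v
        if v == x:
            return None
        v = parent[v]
    return None

def recovery_graph(adj, s, x):
    dist_s, parent = dijkstra(adj, s)                      # T_s and d_G(., s)
    children = {v for v, p in parent.items() if p == x}    # C_x
    owner = {v: owner_of(v, parent, x, children) for v in adj}
    d_from = {c: dijkstra(adj, c)[0] for c in children}    # d_G(x_i, .)
    R = {c: [] for c in children}
    R["s_x"] = []
    for u in adj:
        cu = owner[u]
        if cu is None:
            continue
        for v, w in adj[u]:
            if v == x:
                continue                                   # edges incident to x are removed
            cv = owner[v]
            if cv is None:                                 # green edge, eq. (1)
                W = d_from[cu][u] + w + dist_s[v]
                R["s_x"].append((cu, W))
                R[cu].append(("s_x", W))
            elif cv != cu:                                 # blue edge between siblings, eq. (2)
                R[cu].append((cv, d_from[cu][u] + w + d_from[cv][v]))
    return dijkstra(R, "s_x")   # parent pointers in T_{s_x} encode the escape routes

# Toy network: when x fails, child a has a green edge (u, s) and child b reaches s
# through its sibling a via the blue edge (b, u).
G = {"s": [("x", 1), ("u", 10)], "x": [("s", 1), ("a", 1), ("b", 1)],
     "a": [("x", 1), ("u", 1)], "b": [("x", 1), ("u", 1)],
     "u": [("a", 1), ("b", 1), ("s", 10)]}
print(recovery_graph(G, "s", "x"))   # a hangs off s_x (cost 11), b hangs off a (cost 13)
```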
Description of the Algorithm and its Analysis
We now incorporate the basic observations described earlier into a formal algorithm for the SNFR problem. Then we analyze the complexity of our algorithm and show that it has a nearly optimal running time of O(m log n).
Our algorithm is a depth-first recursive algorithm over T s . We maintain the following information at each node x:
• Green Edges: The set of green edges in G = (V, E) that offer a recovery path for x to escape the failure of its parent.
• Blue Edges: A set of edges {p, q} in G = (V, E) such that x is the nearest-common-ancestor of p and q with respect to the tree T s .
The set of green edges for node x is maintained in a min heap (priority queue) data structure, denoted by H_x. The heap elements are tuples of the form <e, greenWeight(e) + d_G(s, x)>, where e is a green edge and greenWeight(e) + d_G(s, x) defines its priority as an element of the heap. The extra term d_G(s, x) is added in order to maintain the invariant that the priority of an edge in any heap H remains constant as the path to s is traversed. Initially H_x contains an entry for each edge of x which serves as a green edge for it (i.e. an edge of x whose other end point does not lie in the subtree of the parent of x). A linked list, B_x, stores the tuples <e, blueWeight(e)>, where e is a blue edge and blueWeight(e) is the weight of e as defined by equation (2).
The heap H_{x_i} is built by merging together the H heaps of the nodes in C_{x_i}, the set of children of x_i. Consequently, not all the elements in H_{x_i} may be green edges for x_i. Using a dfs labeling scheme similar to the one in [1], we can quickly determine whether the edge retrieved by findMin(H_{x_i}) is a valid green edge for x_i or not. If not, we remove the entry corresponding to the edge from H_{x_i} via a deleteMin(H_{x_i}) operation. Note that since the deleted edge cannot serve as a green edge for x_i, it cannot serve as one for any of the ancestors of x_i, and it doesn't need to be added back to the H heap of any node. We continue deleting the minimum weight edges from H_{x_i} till either H_{x_i} becomes empty or we find a green edge valid for x_i to escape x's failure, in which case we add it to R_x.
After adding the green edges to R x , we add the blue edges from B x to R x .
Finally, we compute the shortest paths tree of the node s x in the graph R x using a standard shortest paths algorithm (e.g. Dijkstra's algorithm [2]). The escape edge for the node x i is stored as the parent edge of x i in T sx , the shortest paths tree of s x in R x . Since the communication graph is assumed to be bi-connected, there exists a path from each node x i ∈ C x to s x , provided that the failing node is not s.
For brevity, we omit the detailed analysis of the algorithm. The O(m log n) time complexity of the algorithm follows from the facts that (1) an edge can be a blue edge in the recovery graph of exactly one node, namely the nearest-common-ancestor of its two end points, and (2) an edge can be deleted at most once from any H heap. We state the result as the following theorem.
Theorem. The single node failure recovery problem can be solved in O(m log n) time.
Single Node Failure Recovery Protocol
When routing a message to a node s, if a node x needs to forward the message to another node y, the node y is the parent of x in the shortest paths tree T s of s. The SNFR algorithm computes the recovery path from x to s which does not use the node y. In case a node has failed, the protocol re-routes the messages along these alternate paths that have been computed by the SNFR algorithm.
Embedding the Escape Edge
In our protocol, the node x that discovers the failure of y embeds information about the escape edge to use in the message. The escape edge is the same as the ρ_x edge identified for the node x to use when its parent (y, in this example) has failed. We describe two alternatives for embedding the escape edge information in the message, depending on the particular routing protocol being used.
Protocol Headers
In several routing protocols, including TCP, the message headers are not of fixed size, and other header fields (e.g. Data Offset in TCP) indicate where the actual message data begins. For our purpose, we need an additional header space for two node identifiers (e.g. IP addresses, and the port numbers) which define the two end points of the escape edge. It is important to note that this extra space is required only when the messages are being re-routed as part of a failure recovery. In absence of failures, we do not need to modify the message headers.
Recovery Message
In some cases, it may not be feasible or desirable to add the information about the escape edge to the protocol headers. In such situations, the node x that discovers the failure of its parent node y during the delivery of a message M o , constructs a new message, M r , that contains information for recovering from the failure. In particular, the recovery message, M r contains (a) M o : the original message, and (b) ρ x = (p x , q x ): the escape edge to be used by x to recover from the failure of its parent.
With either of the above two approaches, a lightweight application is used to determine if a message is being routed in a failure free case or as part of a failure recovery, and to take appropriate actions. Depending on whether the escape edge information is present in the message, the application decides which node to forward the message to. This process consumes almost negligible additional resources. As a further optimization, this application can use a special reserved port on the routers, and messages would be sent to it only during the failure recovery mode. This would ensure that no additional resources are consumed in the failure free case.
Protocol Illustration
For brevity we do not formally specify our protocol, but only illustrate how it works. Consider the network in Figure 1. If x i notices that x has failed, it adds information in the message (using one of the two options discussed above) about {b u , b v } as the escape edge to use, and reroutes the message to b u . b u clears the escape edge information, and sends the message to b v , after which it follows the regular path to s. If x has not recovered when the message reaches x j , x j reroutes with message to g p with {g p , g q } as the escape edge to use. This continues till the message reaches a node outside the subtree of x, or till x recovers.
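The forwarding decision sketched in this illustration can be written as a small routine. The message layout (a dict with a "dst" field and an optional "escape" field) and the callbacks next_hop, alive, and escape_edge are our own simplifications of the protocol, and the routine assumes, as the construction guarantees, that the walk toward the escape edge avoids the failed node.

```python
def forward(node, msg, next_hop, alive, escape_edge):
    """Return the neighbor `node` forwards `msg` to (assumes node != msg["dst"])."""
    if "escape" in msg:                        # recovery mode: walk toward the escape edge
        p, q = msg["escape"]
        if node == p:
            del msg["escape"]                  # crossing the escape edge ends this detour
            return q
        return next_hop(node, p)
    nxt = next_hop(node, msg["dst"])           # failure-free case: usual shortest-path hop
    if alive(nxt):
        return nxt
    p, q = escape_edge(node)                   # parent failed: use rho_node from the SNFR algorithm
    if node == p:
        return q
    msg["escape"] = (p, q)
    return next_hop(node, p)
```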
Note that since the alternate paths are used only during failure recovery, and the escape edges dictate the alternate paths, the protocol ensures loop free routing, even though the alternate paths may form loops with the original routing (shortest) paths.
Simulation Results and Comparisons
We present the simulation results for our algorithm, and compare the lengths of the recovery paths generated by our algorithm to the theoretically optimal paths as well as with the ones computed by the algorithm in [11]. In the implementation of our algorithm, we have used standard data structures (e.g. binary heaps instead of Fibonacci heaps [3]: binary heaps suffer from a linear-time merge/meld operation as opposed to constant time for the latter). Consequently, our algorithms have the potential to produce much better running times than what we report.
We ran our simulations on randomly generated graphs, varying the following parameters: (a) the number of nodes, and (b) the average degree of a node. The edge weights are randomly generated numbers between 100 and 1000. In order to guarantee that the graph is 2-node-connected (biconnected), we ensure that the generated graph contains a Hamiltonian cycle. Finally, for each set of these parameters, we simulate our algorithm on multiple random graphs to compute the average value of each metric for that parameter set. The algorithms have been implemented in the Java programming language (1.5.0.12 patch), and were run on an Intel machine (Pentium IV 3.06GHz with 2GB RAM). The stretch factor is defined as the ratio of the lengths of the recovery paths generated by our algorithm to the lengths of the theoretically optimal paths. The optimal recovery path lengths are computed by recomputing the shortest paths tree of s in the graph G(V \x, E\E x ). In Figures [2,3], the Fir labels relate to the performance of the alternate paths algorithm used by the Failure Insensitive Routing protocol of [11], while the Crp labels relate to the performance of our algorithm for the SNFR problem.
Figure 2.
Though [11] does not present a detailed analysis of their algorithm, from our analysis, their algorithm needs at least Ω(mn log n) time per sink node in the system. Figures [2,3] compare the performance of our algorithm (CRP) to that of [11] (FIR). The plots for the running times of our algorithm and that of [11] fall in line with the theoretical analysis that our algorithms are faster by an order of magnitude than those of [11]. Interestingly, the stretch factors of the two algorithms are very close in most of the cases, and stay within 15%. The running times of the algorithms fall in line with our theoretical analysis. Our CRP algorithm runs within 50 seconds for graphs of up to 600-700 nodes, while the FIR algorithm's runtime shoots up to as high as 5 minutes as the number of nodes increases. The metrics are plotted against the variation in (1) the number of nodes (Figure [2]), and (2) the average degree of the nodes (Figure [3]). The average degree of a node is fixed at 15 for the cases where we vary the number of nodes (Figure [2]), and the number of nodes is fixed at 300 for the cases where we plot the impact of varying the average node degree (Figure [3]). As expected, the stretch factors improve as the number of nodes increases. Our algorithm falls behind in finding the optimal paths in cases where the recovery path passes through the subtrees of multiple siblings. Instead of finding the best exit point out of the subtree, in order to keep the protocol simple and the paths well structured, our paths go to the root of the subtree and then follow its alternate path beyond that. These paths are formed using the blue edges. Paths discovered using a node's green edges are optimal such paths. In other words, if most of the edges of a node are green, our algorithm is more likely to find paths close to the optimal ones. Since the average degree of the nodes is kept fixed in these simulations, increasing the number of nodes increases the probability of an edge being green. A similar logic explains the plots in Figure [3]: when the number of nodes is fixed, increasing the average degree of a node results in an increase in the number of green edges for the nodes, which, by the same argument, improves the stretch factors.
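For completeness, the stretch factor measurements described above can be reproduced with a short script along the following lines. It is only a sketch of the methodology: it assumes the networkx library and a caller-supplied recovery_length(x, child) helper returning the length of the recovery path produced by the algorithm under test (CRP or FIR), neither of which is part of the paper's code.

```python
# Sketch of the stretch factor computation; names and helpers are assumptions.
import networkx as nx


def optimal_recovery_length(G, s, x, child):
    """Length of the true shortest path child -> s in G(V \\ x, E \\ E_x)."""
    H = G.copy()
    H.remove_node(x)                 # removing x also removes all edges in E_x
    return nx.shortest_path_length(H, source=child, target=s, weight="weight")


def average_stretch(G, s, T_children, recovery_length):
    """Average of (recovery path length) / (optimal recovery path length)."""
    ratios = []
    for x in G.nodes:
        if x == s:
            continue
        for child in T_children.get(x, []):     # children of x in T_s
            opt = optimal_recovery_length(G, s, x, child)
            ratios.append(recovery_length(x, child) / opt)
    return sum(ratios) / len(ratios)
```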
Concluding Remarks
In this paper we have presented an efficient algorithm for the SNFR problem, and developed protocols for dealing with transient single node failures in communication networks. Via simulation results, we show that our algorithms are much faster than those of [11], while the stretch factors of our paths are usually comparable or better.
Previous algorithms [6,8,11] for computing alternate paths are much slower, and thus impose a much longer network setup time as compared to our approach. The setup time becomes critical in more dynamic networks, where the configuration changes due to events other than transient node or link failures. Note that in several kinds of configuration changes (e.g. permanent node failure, node additions, etc), recomputing the routing paths (or other information) cannot be avoided, and it is desirable to have shorter network setup times.
For the case where we need to solve the SNFR problem for all nodes in the graph, our algorithm would need O(mn log n) time, which is still very close to the time required (O(mn + n 2 log n)) to build the routing tables for the all-pairs setting. The space requirement still stays linear in m and n.
The directed version of the SNFR problem, where one needs to find the optimal (shortest) recovery paths can be shown to have a lower bound of Ω(min(m √ n, n 2 )) using a construction similar to those used for proving the same lower bound on the directed version of SLFR [1] and replacement paths [4] problems. The bound holds under the path comparison model of [5] for shortest paths algorithms.
| 4,908 |
0810.3438
|
1658695903
|
Single node failures represent more than 85% of all node failures in today's large communication networks such as the Internet. Also, these node failures are usually transient. Consequently, having the routing paths globally recomputed does not pay off since the failed nodes recover fairly quickly, and the recomputed routing paths need to be discarded. Instead, we develop algorithms and protocols for dealing with such transient single node failures by suppressing the failure (instead of advertising it across the network), and routing messages to the destination via alternate paths that do not use the failed node. We compare our solution to that of Ref. [11] wherein the authors have presented a "Failure Insensitive Routing" protocol as a proactive recovery scheme for handling transient node failures. We show that our algorithms are faster by an order of magnitude while our paths are equally good. We show via simulation results that our paths are usually within 15% of the optimal for randomly generated graphs with 100-1000 nodes.
|
Refs. @cite_5 @cite_0 present protocols based on local re-routing for dealing with transient single link and single node failures, respectively. They demonstrate via simulations that the recovery paths computed by their algorithms are usually within 15% of the theoretically optimal alternate paths.
|
{
"abstract": [
"With the emergence of voice over IP and other real-time business applications, there is a growing demand for an IP network with high service availability. Unfortunately, in today's Internet, transient failures occur frequently due to faulty interfaces, router crashes, etc., and current IP networks lack the resiliency needed to provide high availability. To enhance availability, we proposed failure inferencing based fast rerouting (FIFR) approach that exploits the existence of a forwarding table per line-card, for lookup efficiency in current routers, to provide fast rerouting similar to MPLS, while adhering to the destination-based forwarding paradigm. In our previous work, we have shown that the FIFR approach can deal with single link failures. In this paper, we extend the FIFR approach to ensure loop-free packet delivery in case of single router failures also, thus mitigating the impact of many scenarios of failures. We demonstrate that the proposed approach not only provides high service availability but also incurs minimal routing overhead.",
"Link failures are part of the day-to-day operation of a network due to many causes such as maintenance, faulty interfaces, and accidental fiber cuts. Commonly deployed link state routing protocols such as OSPF react to link failures through global link state advertisements and routing table recomputations causing significant forwarding discontinuity after a failure. Careful tuning of various parameters to accelerate routing convergence may cause instability when the majority of failures are transient. To enhance failure resiliency without jeopardizing routing stability, we propose a local rerouting based approach called failure insensitive routing. The proposed approach prepares for failures using interface-specific forwarding, and upon a failure, suppresses the link state advertisement and instead triggers local rerouting using a backwarding table. With this approach, when no more than one link failure notification is suppressed, a packet is guaranteed to be forwarded along a loop-free path to its destination if such a path exists. This paper demonstrates the feasibility, reliability, and stability of our approach."
],
"cite_N": [
"@cite_0",
"@cite_5"
],
"mid": [
"2107096560",
"2114234222"
]
}
|
EFFICIENT ALGORITHMS AND ROUTING PROTOCOLS FOR HANDLING TRANSIENT SINGLE NODE FAILURES
|
Let G = (V, E) be an edge-weighted graph that represents a computer network, where the weight (a positive real number), denoted by cost(e), of an edge represents the cost (time) required to transmit a packet through the edge (link). The number of vertices (|V |) is n and the number of edges (|E|) is m. It is well known that a shortest paths tree of a node s, T s , specifies the fastest way of transmitting a message to node s originating at any given node in the graph, under the assumption that messages can be transmitted at the specified costs. Under normal operation these routes are the fastest, but when the system carries heavy traffic on some links they might not be the best routes. These trees can be constructed (in polynomial time) by finding a shortest path between every pair of nodes. In this paper we consider the case when the nodes in the network are susceptible to transient faults. These are sporadic faults of at most one node (a node being a single- or multi-processor computer) at a time that last for a relatively short period of time. This type of situation has been studied in the past [11] because it represents most of the node failures occurring in networks. Single node failures represent more than 85% of all node failures [7]. Also, these node failures are usually transient, with 46% lasting less than a minute, and 86% lasting less than 10 minutes [7]. Because nodes fail for relatively short periods of time, propagating information about the failure throughout the network is not recommended.
In this paper we consider the case where the network is biconnected (2-node-connected), meaning that the deletion of a single node does not disconnect the network. Based on our previous assumptions about failures, a message originating at node x with destination s will be sent along the path specified by T s until it reaches node s or a node (other than s) that failed. In the latter case, we need to use a recovery path to s from that point. Since we assume single node faults and the graph is biconnected, such a path always exists. We call this problem of finding the recovery paths the Single Node Failure Recovery (SNFR) problem. It is important to recognize that the recovery path depends heavily on the protocol being deployed in the system. Later on we discuss our (simple) routing protocol.
Preliminaries
Our communication network is modeled by an edge-weighted biconnected undirected graph G = (V, E), with n = |V | and m = |E|. Each edge e ∈ E has an associated cost (weight), denoted by cost(e), which is a non-negative real number. p G (s, t) denotes a shortest path between s and t in graph G, and d G (s, t) denotes its cost (weight).
A shortest path tree T s for a node s is a collection of n−1 edges {e 1 , e 2 , . . . , e n−1 } of G which form a spanning tree of G such that the path from node v to s in T s is a shortest path from v to s in G. We say that T s is rooted at node s. With respect to this root we define the set of nodes that are the children of each node x as follows. In T s we say that every node y that is adjacent to x, such that x is on the path in T s from y to s, is a child of x. For each node x in the shortest paths tree, k x denotes the number of children of x in the tree, and C x = {x 1 , x 2 , . . . x kx } denotes this set of children of the node x. Also, x is said to be the parent of each x i ∈ C x in the tree T s . With respect to s, the parent node, p, of a node c is sometimes referred to as the primary neighbor or primary router of c, while c is referred to as an upstream neighbor or upstream router of p. The children of a particular node are said to be siblings of each other. V x (T ) denotes the set of nodes in the subtree of x in the tree T , and E x ⊆ E denotes the set of all edges incident on the node x in the graph G. We use nextHop(x, y) to denote the next node after x on the shortest path from x to y. Note that by definition, nextHop(x, y) is the parent of x in T y .
Finally, we use ρ x to denote the escape edge in G(E)\T s that the node x uses to recover from the failure of its parent. As we discuss later, having the information of a single escape edge ρ x for each node x ∈ G(V ), x ≠ s, is sufficient to construct the entire alternate path for any node to recover from the failure of its parent, even though the path may actually contain multiple non-tree edges.
Problem Definition
The Single Node Failure Recovery (SNFR) problem is defined as follows: given a biconnected undirected edge-weighted graph G = (V, E), and the shortest paths tree T s (G) of a node s in G, where C x = {x 1 , x 2 , . . . x kx } denotes the set of children of the node x in T s , for each node x ∈ V with x ≠ s, find a path from each x i ∈ C x to s in the graph G(V \ {x}, E \ E x ), where E x is the set of edges adjacent to the vertex x.
In other words, for each node x in the graph, we are interested in finding alternate paths from each of its children to the source node s when the node x fails. Note that we do not consider the problem to be well defined when the node s fails.
The above definition of alternate paths matches that in [10] for reverse paths: for each node x ∈ G(V ), find a path from x to the node s that does not use the primary neighbor (parent node) y of x in T s .
Main Results
We discuss our efficient algorithm for the SNFR problem that has a running time of O(m log n) (by contrast, the alternate path algorithms of [6,8,11] have a time complexity of Ω(mn log n) per destination). We further develop protocols based on this algorithm for recovering from transient single node failures in communication networks. In the failure-free case, our protocol does not use any extra resources.
The recovery paths computed by our algorithm are not necessarily the shortest recovery paths. However, we demonstrate via simulation results that they are very close to the optimal paths.
We compare our results with those of [11], wherein the authors have also studied the same problem and presented protocols based on local rerouting for dealing with transient single node failures. One important difference between the algorithms of [6,8,11] and ours is that, unlike our algorithm, they are based primarily on recomputation. Consequently, our algorithm is faster by an order of magnitude than those in [6,8,11], and as shown by our simulation results, our recovery paths are usually comparable, and sometimes better.
Algorithm for Single Node Failure Recovery
A naive algorithm for the SNFR problem is based on recomputation: for each node v ∈ G(V ) with v ≠ s, compute the shortest paths tree of s in the graph G(V \v, E\E v ). Of interest are the paths from s to each of the nodes v i ∈ C v . This naive algorithm invokes a shortest paths algorithm n − 1 times, and thus takes O(mn + n 2 log n) time when it uses the Fibonacci heap [3] implementation of Dijkstra's shortest paths algorithm [2]. While these paths are optimal recovery paths for recovering from the node failure, their structure can be very different from each other, and from the original shortest paths (in the absence of any failures), to the extent that routing messages along these paths may involve recomputing large parts of the primary routing tables at the nodes through which these paths pass. The recovery paths computed by our algorithm have a well-defined structure, and they overlap with the paths in the original shortest paths tree (T s ) to such an extent that storing the information of a single edge, ρ x , at each node x provides sufficient information to infer the entire recovery path.
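For reference, the naive baseline just described can be written down directly. The sketch below is illustrative only: it uses the networkx library and assumes a precomputed map T_children from each node to its children in T s .

```python
# Naive SNFR baseline: one Dijkstra run per candidate failed node (O(mn + n^2 log n) total).
import networkx as nx


def naive_snfr(G, s, T_children):
    recovery = {}                                   # (v, v_i) -> optimal recovery distance
    for v in G.nodes:
        if v == s:
            continue
        H = G.copy()
        H.remove_node(v)                            # the graph G(V \ v, E \ E_v)
        dist = nx.single_source_dijkstra_path_length(H, s, weight="weight")
        for child in T_children.get(v, []):         # the nodes v_i in C_v
            recovery[(v, child)] = dist.get(child, float("inf"))
    return recovery
```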
Basic Principles and Observations
We start by describing some basic observations about the characteristics of the recovery paths. We also categorize the graph edges according to their role in providing recovery paths for a node when its parent fails.
Figure 1. Recovery paths for recovering from the failure of x.
Figure 1 illustrates a scenario of a single node failure. In this case, the node x has failed, and we need to find recovery paths to s from each x i ∈ C x . When a node fails, the shortest paths tree of s, T s , gets split into k x + 1 components: one containing the source node s, and each of the remaining ones containing the subtree of one child x i ∈ C x .
Notice that the edge {g p , g q } (Figure 1), which has one end point in the subtree of x j , and the other outside the subtree of x provides a candidate recovery path for the node x j . The complete path is of the form p G (x j , g p ) ; {g p , g q } ; p G (g q , s). Since g q is outside the subtree of x, the path p G (g q , s) is not affected by the failure of x. Edges of this type (from a node in the subtree of x j ∈ C x to a node outside the subtree of x) can be used by x j ∈ C x to escape the failure of node x. Such edges are called green edges. For example, edge {g p , g q } is a green edge.
Next, consider the edge {b u , b v } (Figure 1) between a node in the subtree of x i and a node in the subtree of x j . Although there is no green edge with an end point in the subtree of x i , the edges {b u , b v } and {g p , g q } together offer a candidate recovery path that can be used by x i to recover from the failure of x. Part of this path connects x i to x j (p G (x i , b u ) ; {b u , b v } ; p G (b v , x j )), after which it uses the recovery path of x j (via x j 's green edge, {g p , g q }). Edges of this type (from a node in the subtree of x i to a node in the subtree of a sibling x j , for some i ≠ j) are called blue edges. Another example of a blue edge is the edge {b p , b q }, which can be used by the node x 1 to recover from the failure of x.
Note that edges like {r a , r b } and {b v , g p } (Figure 1) with both end points within the subtree of the same child of x do not help any of the nodes in C x to find a recovery path from the failure of node x. We do not consider such edges in the computation of recovery paths, even though they may provide a shorter recovery path for some nodes (e.g. {b v , g p } may offer a shorter recovery path to x i ). The reason for this is that routing protocols would need to be quite complex in order to use this information. We carefully organize the green and blue edges in a way that allows us to retain only the useful edges and eliminate useless (red) ones efficiently.
We now describe the construction of a new graph R x , the recovery graph for x, which will be used to compute recovery paths for the elements of C x when the node x fails. A single source shortest paths computation on this graph suffices to compute the recovery paths for all x i ∈ C x .
The graph R x has k x + 1 nodes, where k x = |C x |. A special node, s x , represents the source node s in the original graph G = (V, E). Apart from s x , we have one node, denoted by y i , for each x i ∈ C x . We add all the green and blue edges defined earlier to the graph R x as follows. A green edge with an end point in the subtree of x i (by definition, green edges have the other end point outside the subtree of x) translates to an edge between s x and y i . A blue edge with an end point in the subtree of x i and the other in the subtree of x j translates to an edge between nodes y i and y j . However, the weight of each edge added to R x is not the same as the weight of the green or blue edge in G = (V, E) used to define it. The weights are specified below.
Note that the candidate recovery path of x j that uses the green edge g = {g p , g q } has total cost equal to:
greenWeight(g) = d G (x j , g p ) + cost(g p , g q ) + d G (g q , s) .    (1)
As discussed earlier, a blue edge provides a path connecting two siblings of x, say x i and x j . Once the path reaches x j , the remaining part of the recovery path of x i coincides with that of x j . If {b u , b v } is the blue edge connecting the subtrees of x i and x j (the cheapest one corresponding to the edge {y i , y j }), the length of the subpath from x i to x j is:
blueWeight(b) = d G (x i , b u ) + cost(b u , b v ) + d G (b v , x j ) .    (2)
We assign this weight to the edge corresponding to the blue edge {b u , b v } that is added in R x between y i and y j .
The construction of our graph R x is now complete. Computing the shortest paths tree of s x in R x provides enough information to compute the recovery paths for all nodes x i ∈ C x when x fails.
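The following Python sketch shows the recovery-graph step in isolation. It assumes that the cheapest valid green edge of each child and the blue-edge candidates, together with their weights from Eqs. (1) and (2), have already been collected; the containers green and blue, and the use of networkx for the shortest paths computation on R x , are assumptions made for illustration.

```python
# Build R_x from precollected green/blue candidates and read off the escape edges.
import networkx as nx


def escape_edges_for(x, children, green, blue):
    """children: the list C_x = [x_1, ..., x_k] of x's children in T_s.
    green[x_i] = (greenWeight, original_edge): x_i's cheapest valid green edge, if any.
    blue       = iterable of (x_i, x_j, blueWeight, original_edge) candidates.
    Returns {x_i: escape edge rho_{x_i}} for the failure of x."""
    R = nx.Graph()
    R.add_node("s_x")                              # the special node representing s
    R.add_nodes_from(children)                     # one node y_i per child x_i
    for xi, (w, e) in green.items():               # green edge: subtree(x_i) -> outside subtree(x)
        R.add_edge("s_x", xi, weight=w, orig=e)
    for xi, xj, w, e in blue:                      # blue edge: subtree(x_i) -> subtree(x_j)
        if not R.has_edge(xi, xj) or w < R[xi][xj]["weight"]:
            R.add_edge(xi, xj, weight=w, orig=e)   # keep only the cheapest candidate per pair

    # One single-source shortest paths computation on R_x serves all children of x.
    _, paths = nx.single_source_dijkstra(R, "s_x", weight="weight")
    rho = {}
    for xi in children:
        if xi in paths:                            # guaranteed when G is biconnected and x != s
            parent = paths[xi][-2]                 # parent of x_i in the tree T_{s_x}
            rho[xi] = R[xi][parent]["orig"]        # the escape edge stored for x_i
    return rho
```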
Description of the Algorithm and its Analysis
We now incorporate the basic observations described earlier into a formal algorithm for the SNFR problem. Then we analyze the complexity of our algorithm and show that it has a nearly optimal running time of O(m log n).
Our algorithm is a depth-first recursive algorithm over T s . We maintain the following information at each node x:
• Green Edges: The set of green edges in G = (V, E) that offer a recovery path for x to escape the failure of its parent.
• Blue Edges: A set of edges {p, q} in G = (V, E) such that x is the nearest-common-ancestor of p and q with respect to the tree T s .
The set of green edges for node x is maintained in a min heap (priority queue) data structure, which is denoted by H x . The heap elements are tuples of the form < e, greenWeight(e) + d G (s, x) >, where e is a green edge and greenWeight(·) + d G (s, x) defines its priority as an element of the heap. Note that the extra term d G (s, x) is added in order to maintain the invariant that the priority of an edge in any heap H remains constant as the path to s is traversed. Initially H x contains an entry for each edge of x which serves as a green edge for it (i.e. an edge of x whose other end point does not lie in the subtree of the parent of x). A linked list, B x , stores the tuples < e, blueWeight(e) >, where e is a blue edge and blueWeight(e) is the weight of e as defined by equation (2).
The heap H xi is built by merging together the H heaps of the nodes in C xi , the set of children of x i . Consequently, not all the elements in H xi need be green edges for x i . Using a dfs labeling scheme similar to the one in [1], we can quickly determine whether the edge retrieved by findMin(H xi ) is a valid green edge for x i or not. If not, we remove the entry corresponding to the edge from H xi via a deleteMin(H xi ) operation. Note that since the deleted edge cannot serve as a green edge for x i , it cannot serve as one for any of the ancestors of x i , and it does not need to be added back to the H x heap for any x. We continue deleting the minimum-weight edges from H xi until either H xi becomes empty or we find a green edge valid for x i to escape x's failure, in which case we add it to R x .
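A sketch of this filtering step is given below. It assumes precomputed DFS entry/exit times tin and tout on T s for the subtree membership test, and represents each heap entry as (priority, (u, v)) with u the end point inside the contributing subtree; these representation choices are assumptions. Since Python's heapq has no constant-time meld, the merge shown here illustrates the flow only and does not attain the O(m log n) bound, which needs a meldable heap such as a Fibonacci or leftist heap.

```python
# Illustrative green-edge filtering; data layout and helper names are assumptions.
import heapq


def in_subtree(u, w, tin, tout):
    """True iff u lies in the subtree of w in T_s (standard DFS-interval test)."""
    return tin[w] <= tin[u] <= tout[w]


def best_green_edge(xi, parent, child_heaps, own_entries, tin, tout):
    """Meld the H heaps of x_i's children with x_i's own candidate entries, then pop
    until the minimum entry is a valid green edge for x_i (far end point outside the
    subtree of x_i's parent), or the heap is exhausted."""
    H = list(own_entries)
    for h in child_heaps:                 # 'meld' of the children's heaps
        H.extend(h)
    heapq.heapify(H)

    while H:
        priority, (u, v) = H[0]
        if not in_subtree(v, parent, tin, tout):
            return H, (u, v)              # valid green edge for x_i: add it to R_{parent}
        heapq.heappop(H)                  # stale entry: useless for x_i and all its ancestors
    return H, None                        # no green edge left; x_i must rely on blue edges
```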
After adding the green edges to R x , we add the blue edges from B x to R x .
Finally, we compute the shortest paths tree of the node s x in the graph R x using a standard shortest paths algorithm (e.g. Dijkstra's algorithm [2]). The escape edge for the node x i is stored as the parent edge of x i in T sx , the shortest paths tree of s x in R x . Since the communication graph is assumed to be bi-connected, there exists a path from each node x i ∈ C x to s x , provided that the failing node is not s.
For brevity, we omit the detailed analysis of the algorithm. The O(m log n) time complexity of the algorithm follows from the fact that (1) an edge can be a blue edge in the recovery graph of exactly one node: that of the nearest-common-ancestor of its two end points, and (2) an edge can be deleted at most once from any H heap. We state the result as the following theorem.
Theorem 1. Given a biconnected, edge-weighted graph G = (V, E) and the shortest paths tree T s of a node s, the SNFR problem for s can be solved in O(m log n) time.
Single Node Failure Recovery Protocol
When routing a message to a node s, if a node x needs to forward the message to another node y, the node y is the parent of x in the shortest paths tree T s of s. The SNFR algorithm computes the recovery path from x to s which does not use the node y. In case a node has failed, the protocol re-routes the messages along these alternate paths that have been computed by the SNFR algorithm.
Embedding the Escape Edge
In our protocol, the node x that discovers the failure of y embeds, in the message, information about the escape edge to use. This escape edge is the same as the ρ x edge identified for the node x to use when its parent (y, in this example) has failed. We describe two alternatives for embedding the escape edge information in the message, depending on the particular routing protocol being used.
Protocol Headers
In several routing protocols, including TCP, the message headers are not of fixed size, and other header fields (e.g. Data Offset in TCP) indicate where the actual message data begins. For our purpose, we need an additional header space for two node identifiers (e.g. IP addresses, and the port numbers) which define the two end points of the escape edge. It is important to note that this extra space is required only when the messages are being re-routed as part of a failure recovery. In absence of failures, we do not need to modify the message headers.
Recovery Message
In some cases, it may not be feasible or desirable to add the information about the escape edge to the protocol headers. In such situations, the node x that discovers the failure of its parent node y during the delivery of a message M o , constructs a new message, M r , that contains information for recovering from the failure. In particular, the recovery message, M r contains (a) M o : the original message, and (b) ρ x = (p x , q x ): the escape edge to be used by x to recover from the failure of its parent.
With either of the above two approaches, a lightweight application is used to determine whether a message is being routed in the failure-free case or as part of a failure recovery, and to take the appropriate action. Depending on whether the escape edge information is present in the message, the application decides which node to forward the message to. This process consumes almost negligible additional resources. As a further optimization, this application can use a special reserved port on the routers, and messages would be sent to it only during failure recovery. This would ensure that no additional resources are consumed in the failure-free case.
Protocol Illustration
For brevity we do not formally specify our protocol, but only illustrate how it works. Consider the network in Figure 1. If x i notices that x has failed, it adds information to the message (using one of the two options discussed above) naming {b u , b v } as the escape edge to use, and reroutes the message to b u . b u clears the escape edge information and sends the message to b v , after which it follows the regular path to s. If x has not recovered when the message reaches x j , then x j reroutes the message to g p with {g p , g q } as the escape edge to use. This continues until the message reaches a node outside the subtree of x, or until x recovers.
Note that since the alternate paths are used only during failure recovery, and the escape edges dictate the alternate paths, the protocol ensures loop free routing, even though the alternate paths may form loops with the original routing (shortest) paths.
Simulation Results and Comparisons
We present the simulation results for our algorithm, and compare the lengths of the recovery paths generated by our algorithm to the theoretically optimal paths as well as with the ones computed by the algorithm in [11]. In the implementation of our algorithm, we have used standard data structures (e.g. binary heaps instead of Fibonacci heaps [3]: binary heaps suffer from a linear-time merge/meld operation as opposed to constant time for the latter). Consequently, our algorithms have the potential to produce much better running times than what we report.
We ran our simulations on randomly generated graphs, varying the following parameters: (a) the number of nodes, and (b) the average degree of a node. The edge weights are randomly generated numbers between 100 and 1000. In order to guarantee that the graph is 2-node-connected (biconnected), we ensure that the generated graph contains a Hamiltonian cycle. Finally, for each set of these parameters, we simulate our algorithm on multiple random graphs to compute the average value of each metric for that parameter set. The algorithms have been implemented in the Java programming language (1.5.0.12 patch), and were run on an Intel machine (Pentium IV 3.06GHz with 2GB RAM). The stretch factor is defined as the ratio of the lengths of the recovery paths generated by our algorithm to the lengths of the theoretically optimal paths. The optimal recovery path lengths are computed by recomputing the shortest paths tree of s in the graph G(V \x, E\E x ). In Figures [2,3], the Fir labels relate to the performance of the alternate paths algorithm used by the Failure Insensitive Routing protocol of [11], while the Crp labels relate to the performance of our algorithm for the SNFR problem.
Figure 2.
Though [11] does not present a detailed analysis of their algorithm, from our analysis, their algorithm needs at least Ω(mn log n) time per sink node in the system. Figures [2,3] compare the performance of our algorithm (CRP) to that of [11] (FIR). The plots for the running times of our algorithm and that of [11] fall in line with the theoretical analysis that our algorithms are faster by an order of magnitude than those of [11]. Interestingly, the stretch factors of the two algorithms are very close in most of the cases, and stay within 15%. The running times of the algorithms fall in line with our theoretical analysis. Our CRP algorithm runs within 50 seconds for graphs of up to 600-700 nodes, while the FIR algorithm's runtime shoots up to as high as 5 minutes as the number of nodes increases. The metrics are plotted against the variation in (1) the number of nodes (Figure [2]), and (2) the average degree of the nodes (Figure [3]). The average degree of a node is fixed at 15 for the cases where we vary the number of nodes (Figure [2]), and the number of nodes is fixed at 300 for the cases where we plot the impact of varying the average node degree (Figure [3]). As expected, the stretch factors improve as the number of nodes increases. Our algorithm falls behind in finding the optimal paths in cases where the recovery path passes through the subtrees of multiple siblings. Instead of finding the best exit point out of the subtree, in order to keep the protocol simple and the paths well structured, our paths go to the root of the subtree and then follow its alternate path beyond that. These paths are formed using the blue edges. Paths discovered using a node's green edges are optimal such paths. In other words, if most of the edges of a node are green, our algorithm is more likely to find paths close to the optimal ones. Since the average degree of the nodes is kept fixed in these simulations, increasing the number of nodes increases the probability of an edge being green. A similar logic explains the plots in Figure [3]: when the number of nodes is fixed, increasing the average degree of a node results in an increase in the number of green edges for the nodes, which, by the same argument, improves the stretch factors.
Concluding Remarks
In this paper we have presented an efficient algorithm for the SNFR problem, and developed protocols for dealing with transient single node failures in communication networks. Via simulation results, we show that our algorithms are much faster than those of [11], while the stretch factors of our paths are usually comparable or better.
Previous algorithms [6,8,11] for computing alternate paths are much slower, and thus impose a much longer network setup time as compared to our approach. The setup time becomes critical in more dynamic networks, where the configuration changes due to events other than transient node or link failures. Note that in several kinds of configuration changes (e.g. permanent node failure, node additions, etc), recomputing the routing paths (or other information) cannot be avoided, and it is desirable to have shorter network setup times.
For the case where we need to solve the SNFR problem for all nodes in the graph, our algorithm would need O(mn log n) time, which is still very close to the time required (O(mn + n 2 log n)) to build the routing tables for the all-pairs setting. The space requirement still stays linear in m and n.
The directed version of the SNFR problem, where one needs to find the optimal (shortest) recovery paths can be shown to have a lower bound of Ω(min(m √ n, n 2 )) using a construction similar to those used for proving the same lower bound on the directed version of SLFR [1] and replacement paths [4] problems. The bound holds under the path comparison model of [5] for shortest paths algorithms.
| 4,908 |
0810.3438
|
1658695903
|
Single node failures represent more than 85% of all node failures in today's large communication networks such as the Internet. Also, these node failures are usually transient. Consequently, having the routing paths globally recomputed does not pay off since the failed nodes recover fairly quickly, and the recomputed routing paths need to be discarded. Instead, we develop algorithms and protocols for dealing with such transient single node failures by suppressing the failure (instead of advertising it across the network), and routing messages to the destination via alternate paths that do not use the failed node. We compare our solution to that of Ref. [11] wherein the authors have presented a "Failure Insensitive Routing" protocol as a proactive recovery scheme for handling transient node failures. We show that our algorithms are faster by an order of magnitude while our paths are equally good. We show via simulation results that our paths are usually within 15% of the optimal for randomly generated graphs with 100-1000 nodes.
|
Wang and Gao's Backup Route Aware Protocol (BRAP) @cite_7 also uses precomputed backup routes in order to handle transient single link failures. A requirement central to their solution is the availability of reverse paths at each node; however, they do not discuss how these reverse paths are computed. Interestingly, the alternate paths that our algorithm computes qualify as the reverse paths required by the BRAP protocol of @cite_7 .
|
{
"abstract": [
"As the Internet becomes the critical information infrastructure for both personal and business applications, survivable routing protocols need to be designed that maintain the performance of those services in the presence of failures. This paper examines the survivability of interdoamin routing protocols in the presence of routing failure events, and provides a backup route aware routing protocol that performs non-stop routing in the presence of failures. We demonstrate through simulation its effectiveness in preventing packet losses during transient routing failures."
],
"cite_N": [
"@cite_7"
],
"mid": [
"2104602019"
]
}
|
EFFICIENT ALGORITHMS AND ROUTING PROTOCOLS FOR HANDLING TRANSIENT SINGLE NODE FAILURES
|
Let G = (V, E) be an edge-weighted graph that represents a computer network, where the weight (a positive real number), denoted by cost(e), of an edge represents the cost (time) required to transmit a packet through the edge (link). The number of vertices (|V |) is n and the number of edges (|E|) is m. It is well known that a shortest paths tree of a node s, T s , specifies the fastest way of transmitting a message to node s originating at any given node in the graph, under the assumption that messages can be transmitted at the specified costs. Under normal operation these routes are the fastest, but when the system carries heavy traffic on some links they might not be the best routes. These trees can be constructed (in polynomial time) by finding a shortest path between every pair of nodes. In this paper we consider the case when the nodes in the network are susceptible to transient faults. These are sporadic faults of at most one node (a node being a single- or multi-processor computer) at a time that last for a relatively short period of time. This type of situation has been studied in the past [11] because it represents most of the node failures occurring in networks. Single node failures represent more than 85% of all node failures [7]. Also, these node failures are usually transient, with 46% lasting less than a minute, and 86% lasting less than 10 minutes [7]. Because nodes fail for relatively short periods of time, propagating information about the failure throughout the network is not recommended.
In this paper we consider the case where the network is biconnected (2-node-connected), meaning that the deletion of a single node does not disconnect the network. Based on our previous assumptions about failures, a message originating at node x with destination s will be sent along the path specified by T s until it reaches node s or a node (other than s) that failed. In the latter case, we need to use a recovery path to s from that point. Since we assume single node faults and the graph is biconnected, such a path always exists. We call this problem of finding the recovery paths the Single Node Failure Recovery (SNFR) problem. It is important to recognize that the recovery path depends heavily on the protocol being deployed in the system. Later on we discuss our (simple) routing protocol.
Preliminaries
Our communication network is modeled by an edge-weighted biconnected undirected graph G = (V, E), with n = |V | and m = |E|. Each edge e ∈ E has an associated cost (weight), denoted by cost(e), which is a non-negative real number. p G (s, t) denotes a shortest path between s and t in graph G, and d G (s, t) denotes its cost (weight).
A shortest path tree T s for a node s is a collection of n−1 edges {e 1 , e 2 , . . . , e n−1 } of G which form a spanning tree of G such that the path from node v to s in T s is a shortest path from v to s in G. We say that T s is rooted at node s. With respect to this root we define the set of nodes that are the children of each node x as follows. In T s we say that every node y that is adjacent to x, such that x is on the path in T s from y to s, is a child of x. For each node x in the shortest paths tree, k x denotes the number of children of x in the tree, and C x = {x 1 , x 2 , . . . x kx } denotes this set of children of the node x. Also, x is said to be the parent of each x i ∈ C x in the tree T s . With respect to s, the parent node, p, of a node c is sometimes referred to as the primary neighbor or primary router of c, while c is referred to as an upstream neighbor or upstream router of p. The children of a particular node are said to be siblings of each other. V x (T ) denotes the set of nodes in the subtree of x in the tree T , and E x ⊆ E denotes the set of all edges incident on the node x in the graph G. We use nextHop(x, y) to denote the next node after x on the shortest path from x to y. Note that by definition, nextHop(x, y) is the parent of x in T y .
Finally, we use ρ x to denote the escape edge in G(E)\T s that the node x uses to recover from the failure of its parent. As we discuss later, having the information of a single escape edge ρ x for each node x ∈ G(V ), x ≠ s, is sufficient to construct the entire alternate path for any node to recover from the failure of its parent, even though the path may actually contain multiple non-tree edges.
Problem Definition
The Single Node Failure Recovery (SNFR) problem is defined as follows: given a biconnected undirected edge-weighted graph G = (V, E), and the shortest paths tree T s (G) of a node s in G, where C x = {x 1 , x 2 , . . . x kx } denotes the set of children of the node x in T s , for each node x ∈ V with x ≠ s, find a path from each x i ∈ C x to s in the graph G(V \ {x}, E \ E x ), where E x is the set of edges adjacent to the vertex x.
In other words, for each node x in the graph, we are interested in finding alternate paths from each of its children to the source node s when the node x fails. Note that we do not consider the problem to be well defined when the node s fails.
The above definition of alternate paths matches that in [10] for reverse paths: for each node x ∈ G(V ), find a path from x to the node s that does not use the primary neighbor (parent node) y of x in T s .
Main Results
We discuss our efficient algorithm for the SNFR problem that has a running time of O(m log n) (by contrast, the alternate path algorithms of [6,8,11] have a time complexity of Ω(mn log n) per destination). We further develop protocols based on this algorithm for recovering from transient single node failures in communication networks. In the failure-free case, our protocol does not use any extra resources.
The recovery paths computed by our algorithm are not necessarily the shortest recovery paths. However, we demonstrate via simulation results that they are very close to the optimal paths.
We compare our results with those of [11], wherein the authors have also studied the same problem and presented protocols based on local rerouting for dealing with transient single node failures. One important difference between the algorithms of [6,8,11] and ours is that, unlike our algorithm, they are based primarily on recomputation. Consequently, our algorithm is faster by an order of magnitude than those in [6,8,11], and as shown by our simulation results, our recovery paths are usually comparable, and sometimes better.
Algorithm for Single Node Failure Recovery
A naive algorithm for the SNFR problem is based on recomputation: for each node v ∈ G(V ) with v ≠ s, compute the shortest paths tree of s in the graph G(V \v, E\E v ). Of interest are the paths from s to each of the nodes v i ∈ C v . This naive algorithm invokes a shortest paths algorithm n − 1 times, and thus takes O(mn + n 2 log n) time when it uses the Fibonacci heap [3] implementation of Dijkstra's shortest paths algorithm [2]. While these paths are optimal recovery paths for recovering from the node failure, their structure can be very different from each other, and from the original shortest paths (in the absence of any failures), to the extent that routing messages along these paths may involve recomputing large parts of the primary routing tables at the nodes through which these paths pass. The recovery paths computed by our algorithm have a well-defined structure, and they overlap with the paths in the original shortest paths tree (T s ) to such an extent that storing the information of a single edge, ρ x , at each node x provides sufficient information to infer the entire recovery path.
Basic Principles and Observations
We start by describing some basic observations about the characteristics of the recovery paths. We also categorize the graph edges according to their role in providing recovery paths for a node when its parent fails.
Figure 1. Recovery paths for recovering from the failure of x.
Figure 1 illustrates a scenario of a single node failure. In this case, the node x has failed, and we need to find recovery paths to s from each x i ∈ C x . When a node fails, the shortest paths tree of s, T s , gets split into k x + 1 components: one containing the source node s, and each of the remaining ones containing the subtree of one child x i ∈ C x .
Notice that the edge {g p , g q } (Figure 1), which has one end point in the subtree of x j , and the other outside the subtree of x provides a candidate recovery path for the node x j . The complete path is of the form p G (x j , g p ) ; {g p , g q } ; p G (g q , s). Since g q is outside the subtree of x, the path p G (g q , s) is not affected by the failure of x. Edges of this type (from a node in the subtree of x j ∈ C x to a node outside the subtree of x) can be used by x j ∈ C x to escape the failure of node x. Such edges are called green edges. For example, edge {g p , g q } is a green edge.
Next, consider the edge {b u , b v } (Figure 1) between a node in the subtree of x i and a node in the subtree of x j . Although there is no green edge with an end point in the subtree of x i , the edges {b u , b v } and {g p , g q } together offer a candidate recovery path that can be used by x i to recover from the failure of x. Part of this path connects x i to x j (p G (x i , b u ) ; {b u , b v } ; p G (b v , x j )), after which it uses the recovery path of x j (via x j 's green edge, {g p , g q }). Edges of this type (from a node in the subtree of x i to a node in the subtree of a sibling x j , for some i ≠ j) are called blue edges. Another example of a blue edge is the edge {b p , b q }, which can be used by the node x 1 to recover from the failure of x.
Note that edges like {r a , r b } and {b v , g p } (Figure 1) with both end points within the subtree of the same child of x do not help any of the nodes in C x to find a recovery path from the failure of node x. We do not consider such edges in the computation of recovery paths, even though they may provide a shorter recovery path for some nodes (e.g. {b v , g p } may offer a shorter recovery path to x i ). The reason for this is that routing protocols would need to be quite complex in order to use this information. We carefully organize the green and blue edges in a way that allows us to retain only the useful edges and eliminate useless (red) ones efficiently.
We now describe the construction of a new graph R x , the recovery graph for x, which will be used to compute recovery paths for the elements of C x when the node x fails. A single source shortest paths computation on this graph suffices to compute the recovery paths for all x i ∈ C x .
The graph R x has k x + 1 nodes, where k x = |C x |. A special node, s x , represents the source node s in the original graph G = (V, E). Apart from s x , we have one node, denoted by y i , for each x i ∈ C x . We add all the green and blue edges defined earlier to the graph R x as follows. A green edge with an end point in the subtree of x i (by definition, green edges have the other end point outside the subtree of x) translates to an edge between s x and y i . A blue edge with an end point in the subtree of x i and the other in the subtree of x j translates to an edge between nodes y i and y j . However, the weight of each edge added to R x is not the same as the weight of the green or blue edge in G = (V, E) used to define it. The weights are specified below.
Note that the candidate recovery path of x j that uses the green edge g = {g p , g q } has total cost equal to:
greenWeight(g) = d G (x j , g p ) + cost(g p , g q ) + d G (g q , s) .    (1)
As discussed earlier, a blue edge provides a path connecting two siblings of x, say x i and x j . Once the path reaches x j , the remaining part of the recovery path of x i coincides with that of x j . If {b u , b v } is the blue edge connecting the subtrees of x i and x j (the cheapest one corresponding to the edge {y i , y j }), the length of the subpath from x i to x j is:
blueWeight(b) = d G (x i , b u ) + cost(b u , b v ) + d G (b v , x j ) .    (2)
We assign this weight to the edge corresponding to the blue edge {b u , b v } that is added in R x between y i and y j .
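Written as code, the two weights of Eqs. (1) and (2) are simply the following; d G is assumed to be a callable returning shortest path distances in G, and cost the edge cost function.

```python
# Direct transcriptions of Eqs. (1) and (2); d_G and cost are assumed callables.
def green_weight(d_G, cost, x_j, g_p, g_q, s):
    """Cost of the candidate recovery path of x_j through the green edge {g_p, g_q}."""
    return d_G(x_j, g_p) + cost(g_p, g_q) + d_G(g_q, s)


def blue_weight(d_G, cost, x_i, b_u, b_v, x_j):
    """Cost of the subpath from x_i to its sibling x_j through the blue edge {b_u, b_v}."""
    return d_G(x_i, b_u) + cost(b_u, b_v) + d_G(b_v, x_j)
```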
The construction of our graph R x is now complete. Computing the shortest paths tree of s x in R x provides enough information to compute the recovery paths for all nodes x i ∈ C x when x fails.
Description of the Algorithm and its Analysis
We now incorporate the basic observations described earlier into a formal algorithm for the SNFR problem. Then we analyze the complexity of our algorithm and show that it has a nearly optimal running time of O(m log n).
Our algorithm is a depth-first recursive algorithm over T s . We maintain the following information at each node x:
• Green Edges: The set of green edges in G = (V, E) that offer a recovery path for x to escape the failure of its parent.
• Blue Edges: A set of edges {p, q} in G = (V, E) such that x is the nearest-common-ancestor of p and q with respect to the tree T s .
The set of green edges for node x is maintained in a min heap (priority queue) data structure, which is denoted by H x . The heap elements are tuples of the form < e, greenWeight(e) + d G (s, x) >, where e is a green edge and greenWeight(·) + d G (s, x) defines its priority as an element of the heap. Note that the extra term d G (s, x) is added in order to maintain the invariant that the priority of an edge in any heap H remains constant as the path to s is traversed. Initially H x contains an entry for each edge of x which serves as a green edge for it (i.e. an edge of x whose other end point does not lie in the subtree of the parent of x). A linked list, B x , stores the tuples < e, blueWeight(e) >, where e is a blue edge and blueWeight(e) is the weight of e as defined by equation (2).
The heap H xi is built by merging together the H heaps of the nodes in C xi , the set of children of x i . Consequently, not all the elements in H xi need be green edges for x i . Using a dfs labeling scheme similar to the one in [1], we can quickly determine whether the edge retrieved by findMin(H xi ) is a valid green edge for x i or not. If not, we remove the entry corresponding to the edge from H xi via a deleteMin(H xi ) operation. Note that since the deleted edge cannot serve as a green edge for x i , it cannot serve as one for any of the ancestors of x i , and it does not need to be added back to the H x heap for any x. We continue deleting the minimum-weight edges from H xi until either H xi becomes empty or we find a green edge valid for x i to escape x's failure, in which case we add it to R x .
After adding the green edges to R x , we add the blue edges from B x to R x .
Finally, we compute the shortest paths tree of the node s x in the graph R x using a standard shortest paths algorithm (e.g. Dijkstra's algorithm [2]). The escape edge for the node x i is stored as the parent edge of x i in T sx , the shortest paths tree of s x in R x . Since the communication graph is assumed to be bi-connected, there exists a path from each node x i ∈ C x to s x , provided that the failing node is not s.
For brevity, we omit the detailed analysis of the algorithm. The O(m log n) time complexity of the algorithm follows from the fact that (1) an edge can be a blue edge in the recovery graph of exactly one node: that of the nearest-common-ancestor of its two end points, and (2) an edge can be deleted at most once from any H heap. We state the result as the following theorem.
Theorem 1. Given a biconnected, edge-weighted graph G = (V, E) and the shortest paths tree T s of a node s, the SNFR problem for s can be solved in O(m log n) time.
Single Node Failure Recovery Protocol
When routing a message to a node s, if a node x needs to forward the message to another node y, the node y is the parent of x in the shortest paths tree T s of s. The SNFR algorithm computes the recovery path from x to s which does not use the node y. In case a node has failed, the protocol re-routes the messages along these alternate paths that have been computed by the SNFR algorithm.
Embedding the Escape Edge
In our protocol, the node x that discovers the failure of y embeds, in the message, information about the escape edge to use. This escape edge is the same as the ρ x edge identified for the node x to use when its parent (y, in this example) has failed. We describe two alternatives for embedding the escape edge information in the message, depending on the particular routing protocol being used.
Protocol Headers
In several routing protocols, including TCP, the message headers are not of fixed size, and other header fields (e.g. Data Offset in TCP) indicate where the actual message data begins. For our purpose, we need an additional header space for two node identifiers (e.g. IP addresses, and the port numbers) which define the two end points of the escape edge. It is important to note that this extra space is required only when the messages are being re-routed as part of a failure recovery. In absence of failures, we do not need to modify the message headers.
Recovery Message
In some cases, it may not be feasible or desirable to add the information about the escape edge to the protocol headers. In such situations, the node x that discovers the failure of its parent node y during the delivery of a message M o , constructs a new message, M r , that contains information for recovering from the failure. In particular, the recovery message, M r contains (a) M o : the original message, and (b) ρ x = (p x , q x ): the escape edge to be used by x to recover from the failure of its parent.
With either of the above two approaches, a lightweight application is used to determine whether a message is being routed in the failure-free case or as part of a failure recovery, and to take the appropriate action. Depending on whether the escape edge information is present in the message, the application decides which node to forward the message to. This process consumes almost negligible additional resources. As a further optimization, this application can use a special reserved port on the routers, and messages would be sent to it only during failure recovery. This would ensure that no additional resources are consumed in the failure-free case.
Protocol Illustration
For brevity we do not formally specify our protocol, but only illustrate how it works. Consider the network in Figure 1. If x i notices that x has failed, it adds information to the message (using one of the two options discussed above) naming {b u , b v } as the escape edge to use, and reroutes the message to b u . b u clears the escape edge information and sends the message to b v , after which it follows the regular path to s. If x has not recovered when the message reaches x j , then x j reroutes the message to g p with {g p , g q } as the escape edge to use. This continues until the message reaches a node outside the subtree of x, or until x recovers.
Note that since the alternate paths are used only during failure recovery, and the escape edges dictate the alternate paths, the protocol ensures loop free routing, even though the alternate paths may form loops with the original routing (shortest) paths.
Simulation Results and Comparisons
We present the simulation results for our algorithm, and compare the lengths of the recovery paths generated by our algorithm to the theoretically optimal paths as well as with the ones computed by the algorithm in [11]. In the implementation of our algorithm, we have used standard data structures (e.g. binary heaps instead of Fibonacci heaps [3]: binary heaps suffer from a linear-time merge/meld operation as opposed to constant time for the latter). Consequently, our algorithms have the potential to produce much better running times than what we report.
We ran our simulations on randomly generated graphs, varying the following parameters: (a) the number of nodes, and (b) the average degree of a node. The edge weights are randomly generated numbers between 100 and 1000. In order to guarantee that the graph is 2-node-connected (biconnected), we ensure that the generated graph contains a Hamiltonian cycle. Finally, for each set of these parameters, we simulate our algorithm on multiple random graphs to compute the average value of each metric for that parameter set. The algorithms have been implemented in the Java programming language (1.5.0.12 patch), and were run on an Intel machine (Pentium IV 3.06GHz with 2GB RAM). The stretch factor is defined as the ratio of the lengths of the recovery paths generated by our algorithm to the lengths of the theoretically optimal paths. The optimal recovery path lengths are computed by recomputing the shortest paths tree of s in the graph G(V \x, E\E x ). In Figures [2,3], the Fir labels relate to the performance of the alternate paths algorithm used by the Failure Insensitive Routing protocol of [11], while the Crp labels relate to the performance of our algorithm for the SNFR problem.
Figure 2.
Though [11] does not present a detailed analysis of their algorithm, our own analysis shows that it needs at least Ω(mn log n) time per sink node in the system. Figures [2,3] compare the performance of our algorithm (CRP) to that of [11] (FIR). The metrics are plotted against (1) the number of nodes (Figure [2]), and (2) the average degree of the nodes (Figure [3]). The average degree of a node is fixed at 15 for the cases where we vary the number of nodes (Figure [2]), and the number of nodes is fixed at 300 for the cases where we plot the impact of varying the average node degree (Figure [3]).
The running times of the two algorithms fall in line with our theoretical analysis: our algorithm is faster by an order of magnitude than that of [11]. Our CRP algorithm runs within 50 seconds for graphs of up to 600-700 nodes, while the FIR algorithm's runtime shoots up to as high as 5 minutes as the number of nodes increases. Interestingly, the stretch factors of the two algorithms are very close for most of the cases, and stay within 15%.
As expected, the stretch factors improve as the number of nodes increases. Our algorithm falls behind in finding the optimal paths in cases where the recovery path passes through the subtrees of multiple siblings. Instead of finding the best exit point out of the subtree, in order to keep the protocol simple and the paths well structured, our paths go to the root of the subtree and then follow its alternate path beyond that. These paths are formed using the blue edges. Paths discovered using a node's own green edges, on the other hand, are the optimal such paths. In other words, if most of the edges of a node are green, our algorithm is more likely to find paths close to the optimal ones. Since the average degree of the nodes is kept fixed in these simulations, increasing the number of nodes increases the probability of an edge being green. A similar logic explains the plots in Figure [3]: when the number of nodes is fixed, increasing the average degree of a node results in an increase in the number of green edges for the nodes, and a corresponding improvement in the stretch factors.
Concluding Remarks
In this paper we have presented an efficient algorithm for the SNFR problem, and developed protocols for dealing with transient single node failures in communication networks. Via simulation results, we show that our algorithms are much faster than those of [11], while the stretch factors of our paths are usually better or comparable.
Previous algorithms [6,8,11] for computing alternate paths are much slower, and thus impose a much longer network setup time as compared to our approach. The setup time becomes critical in more dynamic networks, where the configuration changes due to events other than transient node or link failures. Note that in several kinds of configuration changes (e.g. permanent node failure, node additions, etc), recomputing the routing paths (or other information) cannot be avoided, and it is desirable to have shorter network setup times.
For the case where we need to solve the SNFR problem for all nodes in the graph, our algorithm would need O(mn log n) time, which is still very close to the time required (O(mn + n 2 log n)) to build the routing tables for the all-pairs setting. The space requirement still stays linear in m and n.
The directed version of the SNFR problem, where one needs to find the optimal (shortest) recovery paths, can be shown to have a lower bound of Ω(min(m √ n, n 2 )) using a construction similar to those used for proving the same lower bound on the directed versions of the SLFR [1] and replacement paths [4] problems. The bound holds under the path comparison model of [5] for shortest paths algorithms.
| 4,908 |
0810.3438
|
1658695903
|
Single node failures represent more than 85% of all node failures in today's large communication networks such as the Internet. Also, these node failures are usually transient. Consequently, having the routing paths globally recomputed does not pay off, since the failed nodes recover fairly quickly and the recomputed routing paths would need to be discarded. Instead, we develop algorithms and protocols for dealing with such transient single node failures by suppressing the failure (instead of advertising it across the network), and routing messages to the destination via alternate paths that do not use the failed node. We compare our solution to that of Ref. [11], wherein the authors have presented a "Failure Insensitive Routing" protocol as a proactive recovery scheme for handling transient node failures. We show that our algorithms are faster by an order of magnitude while our paths are equally good. We show via simulation results that our paths are usually within 15% of the optimal for randomly generated graphs with 100-1000 nodes.
|
Slosiar and Latin @cite_4 studied the single link failure recovery problem and presented an @math -time algorithm for computing the link-avoiding alternate paths. A faster algorithm, with a running time of @math , for this problem was presented in @cite_8 . Our central protocol presented in this paper can be generalized to handle single link failures as well. Unlike the protocol of @cite_5 , this single link failure recovery protocol would use optimal recovery paths.
|
{
"abstract": [
"Link failures are part of the day-to-day operation of a network due to many causes such as maintenance, faulty interfaces, and accidental fiber cuts. Commonly deployed link state routing protocols such as OSPF react to link failures through global link state advertisements and routing table recomputations causing significant forwarding discontinuity after a failure. Careful tuning of various parameters to accelerate routing convergence may cause instability when the majority of failures are transient. To enhance failure resiliency without jeopardizing routing stability, we propose a local rerouting based approach called failure insensitive routing. The proposed approach prepares for failures using interface-specific forwarding, and upon a failure, suppresses the link state advertisement and instead triggers local rerouting using a backwarding table. With this approach, when no more than one link failure notification is suppressed, a packet is guaranteed to be forwarded along a loop-free path to its destination if such a path exists. This paper demonstrates the feasibility, reliability, and stability of our approach.",
"The increasing proportion of data traffic being carried in public networks is necessitating tractable and scalable algorithms in the design of ATM networks. In particular, the design of routing tables for ATM networks operated under the interim inter-switch signalling protocol (IISP) requires a significant amount of manual work in order to design and implement the underlying static routing tables that enable end-to-end connectivity as the network grows. This paper presents a scalable algorithm that generates IISP routing table entries such that no loops are created and so that connectivity is maintained between all origin destination nodes under single-link failures. The algorithm generates shortest (i.e., lowest-cost) primary and alternate paths for any single-link failure scenario, while also demonstrating that at least one such solution can be found for any network graph devoid of bridges. Note that re-routing for single-link failures is considered adequate when sufficient protection is provided at the lower network layers. The algorithm has been fully implemented in a practical software tool, with execution time being a polynomial function of the network complexity.",
"We investigate the single link failure recovery problem and its application to the alternate path routing problem for ATM networks, and the k-replacement edges for each edge of a minimum cost spanning tree. Specifically, given a 2-connected graph G, a specified node s, and a shortest paths tree Ts = e1, e2, . . . , eni1 of s, where ei = (xi, yi) and xi = parentTs(yi), find a shortest path from yi to s in the graph G for 1 · i · n i 1. We present an O(m + n log n) time algorithm for this problem and a linear time algorithm for the case when all weights are equal. When the edge weights are integers, we present an algorithm that takes O(m + Tsort(n)) time, where Tsort(n) is the time required to sort n integers. We establish a lower bound of (min(m p n, n 2 )) for the directed version of our problem under the path comparison model, where Ts is the shortest paths destination tree of s. We show that any solution to the single link recovery problem can be adapted to solve the alternate path routing problem in ATM networks. Our technique for the single link failure recovery problem is adapted to find the k-replacement edges for the tree edges of a minimum cost spanning tree in O(m + n log n) time."
],
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_8"
],
"mid": [
"2114234222",
"2157578392",
"2132422341"
]
}
|
EFFICIENT ALGORITHMS AND ROUTING PROTOCOLS FOR HANDLING TRANSIENT SINGLE NODE FAILURES
|
Let G = (V, E) be an edge-weighted graph that represents a computer network, where the weight (a positive real number), denoted by cost(e), of an edge represents the cost (time) required to transmit a packet through the edge (link). The number of vertices (|V |) is n and the number of edges (|E|) is m. It is well known that a shortest paths tree of a node s, T s , specifies the fastest way of transmitting a message to node s originating at any given node in the graph, under the assumption that messages can be transmitted at the specified costs. Under normal operation the routes are the fastest, but when the system carries heavy traffic on some links these routes might not be the best routes. These trees can be constructed (in polynomial time) by finding a shortest path between every pair of nodes. In this paper we consider the case when the nodes in the network are susceptible to transient faults. These are sporadic faults of at most one node at a time (the nodes being single- or multi-processor computers) that last for a relatively short period of time. This type of situation has been studied in the past [11] because it represents most of the node failures occurring in networks. Single node failures represent more than 85% of all node failures [7]. Also, these node failures are usually transient, with 46% lasting less than a minute, and 86% lasting less than 10 minutes [7]. Because nodes fail for relatively short periods of time, propagating information about the failure throughout the network is not recommended.
In this paper we consider the case where the network is biconnected (2-node-connected), meaning that the deletion of a single node does not disconnect the network. Based on our previous assumptions about failures, a message originating at node x with destination s will be sent along the path specified by T s until it reaches node s or a node (other than s) that failed. In the latter case, we need to use a recovery path to s from that point. Since we assume single node faults and the graph is biconnected, such a path always exists. We call this problem of finding the recovery paths the Single Node Failure Recovery (SNFR) problem. It is important to recognize that the recovery path depends heavily on the protocol being deployed in the system. Later on we discuss our (simple) routing protocol.
Preliminaries
Our communication network is modeled by an edge-weighted, biconnected, undirected graph G = (V, E), with n = |V | and m = |E|. Each edge e ∈ E has an associated cost (weight), denoted by cost(e), which is a non-negative real number. p G (s, t) denotes a shortest path between s and t in the graph G, and d G (s, t) denotes its cost (weight).
A shortest path tree T s for a node s is a collection of n−1 edges {e 1 , e 2 , . . . , e n−1 } of G which form a spanning tree of G such that the path from node v to s in T s is a shortest path from v to s in G. We say that T s is rooted at node s. With respect to this root we define the set of nodes that are the children of each node x as follows. In T s we say that every node y that is adjacent to x, such that x is on the path in T s from y to s, is a child of x. For each node x in the shortest paths tree, k x denotes the number of children of x in the tree, and C x = {x 1 , x 2 , . . . x kx } denotes this set of children of the node x. Also, x is said to be the parent of each x i ∈ C x in the tree T s . With respect to s, the parent node, p, of a node c is sometimes referred to as the primary neighbor or primary router of c, while c is referred to as an upstream neighbor or upstream router of p. The children of a particular node are said to be siblings of each other. V x (T ) denotes the set of nodes in the subtree of x in the tree T and E x ⊆ E denotes the set of all edges incident on the node x in the graph G. We use nextHop(x, y) to denote the next node from x on the shortest path from x to y. Note that by definition, nextHop(x, y) is the parent of x in T y .
Finally, we use ρ x to denote the escape edge in G(E)\T s that the node x uses to recover from the failure of its parent. As we discuss later, having the information of a single escape edge ρ x for each node x ∈ G(V ), x ≠ s, is sufficient to construct the entire alternate path for any node to recover from the failure of its parent, even though the path may actually contain multiple non-tree edges.
Problem Definition
The Single Node Failure Recovery (SNFR) problem is defined as follows: Given a biconnected, undirected, edge-weighted graph G = (V, E), and the shortest paths tree T s (G) of a node s in G, where C x = {x 1 , x 2 , . . . , x kx } denotes the set of children of the node x in T s , for each node x ∈ V , x ≠ s, find a path from each x i ∈ C x to s in the graph (V \ {x}, E \ E x ), where E x is the set of edges incident on the vertex x.
In other words, for each node x in the graph, we are interested in finding alternate paths from each of its children to the source node s when the node x fails. Note that we don't consider the problem to be well defined when the node s fails.
The above definition of alternate paths matches that in [10] for reverse paths: for each node x ∈ G(V ), find a path from x to the node s that does not use the primary neighbor (parent node) y of x in T s .
Main Results
We discuss our efficient algorithm for the SNFR problem that has a running time of O(m log n) (by contrast, the alternate path algorithms of [6,8,11] have a time complexity of Ω(mn log n) per destination). We further develop protocols based on this algorithm for recovering from single node transient failures in communication networks. In the failure free case, our protocol does not use any extra resources.
The recovery paths computed by our algorithm are not necessarily the shortest recovery paths. However, we demonstrate via simulation results that they are very close to the optimal paths.
We compare our results with those of [11], wherein the authors have also studied the same problem and presented protocols based on local rerouting for dealing with transient single node failures. One important difference between the algorithms of [6,8,11] and ours is that, unlike our algorithm, these are based primarily on recomputations. Consequently, our algorithm is faster by an order of magnitude than those in [6,8,11], and as shown by our simulation results, our recovery paths are usually comparable, and sometimes better.
Algorithm for Single Node Failure Recovery
A naive algorithm for the SNFR problem is based on recomputation: for each node v ∈ G(V ), v ≠ s, compute the shortest paths tree of s in the graph G(V \v, E\E v ). Of interest are the paths from s to each of the nodes v i ∈ C v . This naive algorithm invokes a shortest paths algorithm n − 1 times, and thus takes O(mn + n 2 log n) time when it uses the Fibonacci heap [3] implementation of Dijkstra's shortest paths algorithm [2]. While these paths are optimal recovery paths for recovering from the node failure, their structure can be much different from each other, and from the original shortest paths (in the absence of any failures), to the extent that routing messages along these paths may involve recomputing large parts of the primary routing tables at the nodes through which these paths pass. The recovery paths computed by our algorithm have a well defined structure, and they overlap with the paths in the original shortest paths tree (T s ) to an extent that storing the information of a single edge, ρ x , at each node x provides sufficient information to infer the entire recovery path.
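To make the baseline concrete, the following sketch (a hypothetical Python illustration with our own function and variable names; the authors' implementation is in Java) recomputes, for every candidate failed node x, the shortest path distances from s in the graph with x removed, which yields the optimal recovery distances of x's children:
import heapq

def dijkstra(adj, source, banned=frozenset()):
    # adj[u] is a list of (v, weight) pairs for the undirected graph G.
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in adj[u]:
            if v in banned:
                continue  # skip edges incident on the removed node
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def naive_snfr(adj, s, children):
    # children[x] lists C_x, the children of x in the shortest paths tree T_s.
    recovery = {}
    for x in adj:
        if x == s:
            continue
        dist = dijkstra(adj, s, banned={x})  # shortest paths in G(V \ x, E \ E_x)
        recovery[x] = {c: dist.get(c, float('inf')) for c in children.get(x, [])}
    return recovery
With a binary heap this costs O(mn log n) overall; the Fibonacci-heap variant cited above brings it to the stated O(mn + n 2 log n).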
Basic Principles and Observations
We start by describing some basic observations about the characteristics of the recovery paths. We also categorize the graph edges according to their role in providing recovery paths for a node when its parent fails.
Figure 1. Recovery paths for recovering from the failure of x.
Figure 1 illustrates a scenario of a single node failure. In this case, the node x has failed, and we need to find recovery paths to s from each x i ∈ C x . When a node fails, the shortest paths tree of s, T s , gets split into k x + 1 components: one containing the source node s, and each of the remaining ones containing the subtree of one child x i ∈ C x .
Notice that the edge {g p , g q } (Figure 1), which has one end point in the subtree of x j and the other outside the subtree of x, provides a candidate recovery path for the node x j . The complete path is of the form p G (x j , g p ) ; {g p , g q } ; p G (g q , s). Since g q is outside the subtree of x, the path p G (g q , s) is not affected by the failure of x. Edges of this type (from a node in the subtree of x j to a node outside the subtree of x) can be used by x j ∈ C x to escape the failure of node x. Such edges are called green edges. For example, edge {g p , g q } is a green edge.
Next, consider the edge {b u , b v } (Figure 1) between a node in the subtree of x i and a node in the subtree of x j . Although there is no green edge with an end point in the subtree of x i , the edges {b u , b v } and {g p , g q } together offer a candidate recovery path that can be used by x i to recover from the failure of x. Part of this path connects x i to x j (p G (x i , b u ) ; {b u , b v } ; p G (b v , x j )), after which it uses the recovery path of x j (via x j 's green edge, {g p , g q }). Edges of this type (from a node in the subtree of x i to a node in the subtree of a sibling x j for some i ≠ j) are called blue edges. Another example of a blue edge is the edge {b p , b q }, which can be used by the node x 1 to recover from the failure of x.
Note that edges like {r a , r b } and {b v , g p } (Figure 1) with both end points within the subtree of the same child of x do not help any of the nodes in C x to find a recovery path from the failure of node x. We do not consider such edges in the computation of recovery paths, even though they may provide a shorter recovery path for some nodes (e.g. {b v , g p } may offer a shorter recovery path to x i ). The reason for this is that routing protocols would need to be quite complex in order to use this information. We carefully organize the green and blue edges in a way that allows us to retain only the useful edges and eliminate useless (red) ones efficiently.
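The green/blue/red classification with respect to a failed node x reduces to subtree membership tests on T s . The sketch below is a hypothetical illustration of that reading (the helper names and the use of DFS entry/exit times are our own; the paper performs the equivalent tests with a DFS labeling scheme):
def dfs_intervals(children, root):
    # Assign DFS entry/exit times on T_s; u lies in the subtree of v
    # iff tin[v] <= tin[u] <= tout[v].
    tin, tout, timer = {}, {}, 0
    stack = [(root, False)]
    while stack:
        node, processed = stack.pop()
        if processed:
            tout[node] = timer
            timer += 1
            continue
        tin[node] = timer
        timer += 1
        stack.append((node, True))
        for c in children.get(node, []):
            stack.append((c, False))
    return tin, tout

def in_subtree(u, v, tin, tout):
    return tin[v] <= tin[u] <= tout[v]

def classify_edge(u, v, x, children, tin, tout):
    # Classify a non-tree edge {u, v} with respect to a failed node x.
    if u == x or v == x:
        return 'deleted'                 # incident on x: removed along with x
    def owning_child(w):
        # The child of x whose subtree contains w, or None if w is outside x's subtree.
        if not in_subtree(w, x, tin, tout):
            return None
        for c in children.get(x, []):
            if in_subtree(w, c, tin, tout):
                return c
        return None
    cu, cv = owning_child(u), owning_child(v)
    if cu is None and cv is None:
        return 'irrelevant'              # both end points outside the subtree of x
    if cu is None or cv is None:
        return 'green'                   # one end point escapes the subtree of x
    return 'blue' if cu != cv else 'red' # crosses sibling subtrees, or stays inside one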
We now describe the construction of a new graph R x , the recovery graph for x, which will be used to compute recovery paths for the elements of C x when the node x fails. A single source shortest paths computation on this graph suffices to compute the recovery paths for all x i ∈ C x .
The graph R x has k x + 1 nodes, where k x = |C x |. A special node, s x , represents the source node s in the original graph G = (V, E). Apart from s x , we have one node, denoted by y i , for each x i ∈ C x . We add all the green and blue edges defined earlier to the graph R x as follows. A green edge with an end point in the subtree of x i (by definition, green edges have the other end point outside the subtree of x) translates to an edge between s x and y i . A blue edge with an end point in the subtree of x i and the other in the subtree of x j translates to an edge between nodes y i and y j . However, the weight of each edge added to R x is not the same as the weight of the green or blue edge in G = (V, E) used to define it. The weights are specified below.
Note that the candidate recovery path of x j that uses the green edge g = {g p , g q } has total cost equal to:
greenWeight(g) = d G (x j , g p ) + cost(g p , g q ) + d G (g q , s)    (1)
As discussed earlier, a blue edge provides a path connecting two siblings of x, say x i and x j . Once the path reaches x j , the remaining part of the recovery path of x i coincides with that of x j . If {b u , b v } is the blue edge connecting the subtrees of x i and x j (the cheapest one corresponding to the edge {y i , y j }), the length of the subpath from x i to x j is:
blueWeight(b) = d G (x i , b u ) + cost(b u , b v ) + d G (b v , x j )    (2)
We assign this weight to the edge corresponding to the blue edge {b u , b v } that is added in R x between y i and y j .
The construction of our graph R x is now complete. Computing the shortest paths tree of s x in R x provides enough information to compute the recovery paths for all nodes x i ∈ C x when x fails.
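As an illustration of this construction, the following sketch (hypothetical, with our own function names; it assumes the candidate green and blue edges for x and their weights from equations (1) and (2) have already been computed) builds R x and computes the shortest paths tree of s x ; the parent edge of each child node in that tree is its escape edge ρ:
import heapq

def build_recovery_graph(children_of_x, green, blue):
    # Nodes of R_x: the special node 's_x' plus one node per child of x.
    # green: list of (x_i, greenWeight, escape_edge) tuples, cf. equation (1).
    # blue:  list of (x_i, x_j, blueWeight, escape_edge) tuples, cf. equation (2).
    adj = {'s_x': []}
    for c in children_of_x:
        adj[c] = []
    for c, w, e in green:
        adj['s_x'].append((c, w, e))
        adj[c].append(('s_x', w, e))
    for ci, cj, w, e in blue:
        adj[ci].append((cj, w, e))
        adj[cj].append((ci, w, e))
    return adj

def escape_edges(adj):
    # Dijkstra from s_x in R_x; the parent edge of each child node in the
    # resulting shortest paths tree is the escape edge that child stores.
    dist, rho = {'s_x': 0.0}, {}
    heap = [(0.0, 's_x')]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, w, e in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                rho[v] = e
                heapq.heappush(heap, (nd, v))
    return rho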
Description of the Algorithm and its Analysis
We now incorporate the basic observations described earlier into a formal algorithm for the SNFR problem. Then we analyze the complexity of our algorithm and show that it has a nearly optimal running time of O(m log n).
Our algorithm is a depth-first recursive algorithm over T s . We maintain the following information at each node x:
• Green Edges: The set of green edges in G = (V, E) that offer a recovery path for x to escape the failure of its parent.
• Blue Edges: A set of edges {p, q} in G = (V, E) such that x is the nearest-common-ancestor of p and q with respect to the tree T s .
The set of green edges for node x is maintained in a min heap (priority queue) data structure, which is denoted by H x . The heap elements are tuples of the form < e, greenWeight(e) + d G (s, x) >, where e is a green edge, and greenWeight(·) + d G (s, x) defines its priority as an element of the heap. Note that the extra term d G (s, x) is added in order to maintain the invariant that the priority of an edge in any heap H remains constant as the path to s is traversed. Initially H x contains an entry for each edge of x which serves as a green edge for it (i.e. an edge of x whose other end point does not lie in the subtree of the parent of x). A linked list, B x , stores the tuples < e, blueWeight(e) >, where e is a blue edge, and blueWeight(e) is the weight of e as defined by equation (2).
The heap H xi is built by merging together the H heaps of the nodes in C xi , the set of children of x i . Consequently, not all the elements in H xi may be green edges for x i . Using a DFS labeling scheme similar to the one in [1], we can quickly determine whether the edge retrieved by findMin(H xi ) is a valid green edge for x i or not. If not, we remove the entry corresponding to the edge from H xi via a deleteMin(H xi ) operation. Note that since the deleted edge cannot serve as a green edge for x i , it cannot serve as one for any of the ancestors of x i , and it doesn't need to be added back to the H x heap for any x. We continue deleting the minimum weight edges from H xi till either H xi becomes empty or we find a green edge valid for x i to escape x's failure, in which case we add it to R x .
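The lazy deletion step can be sketched as follows (a hypothetical illustration: it assumes the DFS entry/exit times tin/tout of T s introduced earlier and uses a plain heapq list where the paper relies on a mergeable heap; only the validity test and the discard rule are the point here):
import heapq

def best_green_edge(H, x_i, parent_of_x_i, tin, tout):
    # H is a heapified list of (priority, (u, v)) entries obtained by merging
    # the heaps of x_i's children together with x_i's own candidate edges.
    def is_valid(edge):
        u, v = edge
        # One end point lies in the subtree of x_i by construction; the other
        # ("far") end point must leave the subtree of x_i's parent.
        far = v if tin[x_i] <= tin[u] <= tout[x_i] else u
        p = parent_of_x_i
        return not (tin[p] <= tin[far] <= tout[p])
    while H:
        priority, edge = H[0]
        if is_valid(edge):
            return priority, edge   # cheapest valid green edge for x_i; left in H
        heapq.heappop(H)            # invalid for x_i and for all of its ancestors
    return None                     # no green edge: x_i must rely on blue edges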
After adding the green edges to R x , we add the blue edges from B x to R x .
Finally, we compute the shortest paths tree of the node s x in the graph R x using a standard shortest paths algorithm (e.g. Dijkstra's algorithm [2]). The escape edge for the node x i is stored as the parent edge of x i in T sx , the shortest paths tree of s x in R x . Since the communication graph is assumed to be bi-connected, there exists a path from each node x i ∈ C x to s x , provided that the failing node is not s.
For brevity, we omit the detailed analysis of the algorithm. The O(m log n) time complexity of the algorithm follows from the facts that (1) an edge can be a blue edge in the recovery graph of exactly one node, namely the nearest common ancestor of its two end points, and (2) an edge can be deleted at most once from any H heap. We state the result as the following theorem.
Theorem 1 Given a biconnected, edge-weighted, undirected graph G with n nodes and m edges, and the shortest paths tree T s of a node s in G, the SNFR problem can be solved in O(m log n) time.
Single Node Failure Recovery Protocol
When routing a message to a node s, if a node x needs to forward the message to another node y, the node y is the parent of x in the shortest paths tree T s of s. The SNFR algorithm computes the recovery path from x to s which does not use the node y. In case a node has failed, the protocol re-routes the messages along these alternate paths that have been computed by the SNFR algorithm.
Embedding the Escape Edge
In our protocol, the node x that discovers the failure of y embeds information about the escape edge to use in the message. The escape edge is the same as the ρ x edge identified for the node x to use when its parent (y, in this example) has failed. We describe two alternatives for embedding the escape edge information in the message, depending on the particular routing protocol being used.
Protocol Headers
In several routing protocols, including TCP, the message headers are not of fixed size, and other header fields (e.g. Data Offset in TCP) indicate where the actual message data begins. For our purpose, we need an additional header space for two node identifiers (e.g. IP addresses, and the port numbers) which define the two end points of the escape edge. It is important to note that this extra space is required only when the messages are being re-routed as part of a failure recovery. In absence of failures, we do not need to modify the message headers.
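As a purely hypothetical illustration of the space involved (no specific encoding is prescribed by the paper), the two end points of an escape edge could be carried as two IPv4 address/port pairs, i.e. 12 extra bytes appended to the header only while a message is being rerouted:
import socket
import struct

def pack_escape_edge(p, q):
    # Encode the escape edge {p, q} as two (IPv4 address, port) pairs: 12 bytes.
    # A made-up layout; real deployments would negotiate an option format with
    # the routing protocol in use.
    payload = b''
    for addr, port in (p, q):
        payload += socket.inet_aton(addr) + struct.pack('!H', port)
    return payload

def unpack_escape_edge(payload):
    # Inverse of pack_escape_edge.
    out = []
    for off in (0, 6):
        addr = socket.inet_ntoa(payload[off:off + 4])
        (port,) = struct.unpack('!H', payload[off + 4:off + 6])
        out.append((addr, port))
    return tuple(out)

# Example with made-up router addresses.
edge = (('10.0.0.7', 179), ('10.0.1.3', 179))
assert unpack_escape_edge(pack_escape_edge(*edge)) == edge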
Recovery Message
In some cases, it may not be feasible or desirable to add the information about the escape edge to the protocol headers. In such situations, the node x that discovers the failure of its parent node y during the delivery of a message M o constructs a new message, M r , that contains information for recovering from the failure. In particular, the recovery message M r contains (a) M o : the original message, and (b) ρ x = (p x , q x ): the escape edge to be used by x to recover from the failure of its parent.
With either of the above two approaches, a lightweight application is used to determine whether a message is being routed in the failure-free case or as part of a failure recovery, and to take the appropriate action. Depending on whether the escape edge information is present in the message, the application decides which node to forward the message to. This process consumes almost negligible additional resources. As a further optimization, this application can use a special reserved port on the routers, and messages would be sent to it only during the failure recovery mode. This would ensure that no additional resources are consumed in the failure-free case.
Protocol Illustration
For brevity we do not formally specify our protocol, but only illustrate how it works. Consider the network in Figure 1. If x i notices that x has failed, it adds information in the message (using one of the two options discussed above) about {b u , b v } as the escape edge to use, and reroutes the message to b u . b u clears the escape edge information and sends the message to b v , after which it follows the regular path to s. If x has not recovered when the message reaches x j , x j reroutes the message to g p with {g p , g q } as the escape edge to use. This continues until the message reaches a node outside the subtree of x, or until x recovers.
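The per-hop decision in this illustration can be summarized compactly (hedged: a sketch with hypothetical helpers, where next_hop(u, t) returns the next node from u toward t along the shortest paths, rho(u) returns u's stored escape edge, and is_alive reflects local failure detection):
def forward(node, msg, next_hop, rho, is_alive):
    # msg is a dict with the destination 'dst' and, during recovery only,
    # an 'escape' field holding the escape edge (p, q) to cross.
    if msg.get('escape'):
        p, q = msg['escape']
        if node == p:               # reached the near end point of the escape edge
            msg['escape'] = None    # clear it and cross over to q
            return q
        return next_hop(node, p)    # otherwise keep heading toward the escape edge
    nh = next_hop(node, msg['dst']) # failure-free case: follow T_s
    if is_alive(nh):
        return nh
    p, q = rho(node)                # parent failed: use our escape edge
    if node == p:                   # the escape edge is incident on us
        return q                    # cross it directly, nothing to embed
    msg['escape'] = (p, q)
    return next_hop(node, p)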
Note that since the alternate paths are used only during failure recovery, and the escape edges dictate the alternate paths, the protocol ensures loop free routing, even though the alternate paths may form loops with the original routing (shortest) paths.
Simulation Results and Comparisons
We present the simulation results for our algorithm, and compare the lengths of the recovery paths it generates to the theoretically optimal paths as well as to the ones computed by the algorithm in [11]. In the implementation of our algorithm, we have used standard data structures (e.g. binary heaps instead of Fibonacci heaps [3]: binary heaps suffer from a linear-time merge/meld operation, as opposed to constant time for the latter). Consequently, our algorithms have the potential to produce much better running times than what we report.
We ran our simulations on randomly generated graphs, varying the following parameters: (a) the number of nodes, and (b) the average degree of a node. The edge weights are randomly generated numbers between 100 and 1000. In order to guarantee that the graph is 2-node-connected (biconnected), we ensure that the generated graph contains a Hamiltonian cycle. Finally, for each set of these parameters, we simulate our algorithm on multiple random graphs to compute the average value of a metric for that parameter set. The algorithms have been implemented in the Java programming language (1.5.0.12 patch), and were run on an Intel machine (Pentium IV 3.06GHz with 2GB RAM). The stretch factor is defined as the ratio of the lengths of the recovery paths generated by our algorithm to the lengths of the theoretically optimal paths. The optimal recovery path lengths are computed by recomputing the shortest paths tree of s in the graph G(V \x, E\E x ). In the figures [2,3], the Fir labels relate to the performance of the alternate paths algorithm used by the Failure Insensitive Routing protocol of [11], while the Crp labels relate to the performance of our algorithm for the SNFR problem.
Figure 2.
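For reference, the stretch statistics reported here amount to a simple ratio; a hypothetical sketch (names are ours) that aggregates them from the measured recovery path lengths and the recomputed optimal lengths is:
def stretch_factors(recovery_len, optimal_len):
    # recovery_len[x][c]: length of the recovery path from child c to s used by
    # our protocol when x fails; optimal_len[x][c]: length of the optimal path
    # obtained by recomputing the shortest paths tree in G(V \ x, E \ E_x).
    stretches = {}
    for x, per_child in recovery_len.items():
        for c, ours in per_child.items():
            opt = optimal_len[x][c]
            if opt > 0:
                stretches[(x, c)] = ours / opt
    avg = sum(stretches.values()) / len(stretches) if stretches else float('nan')
    return stretches, avg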
Though [11] does not present a detailed analysis of their algorithm, our own analysis shows that it needs at least Ω(mn log n) time per sink node in the system. Figures [2,3] compare the performance of our algorithm (CRP) to that of [11] (FIR). The metrics are plotted against (1) the number of nodes (Figure [2]), and (2) the average degree of the nodes (Figure [3]). The average degree of a node is fixed at 15 for the cases where we vary the number of nodes (Figure [2]), and the number of nodes is fixed at 300 for the cases where we plot the impact of varying the average node degree (Figure [3]).
The running times of the two algorithms fall in line with our theoretical analysis: our algorithm is faster by an order of magnitude than that of [11]. Our CRP algorithm runs within 50 seconds for graphs of up to 600-700 nodes, while the FIR algorithm's runtime shoots up to as high as 5 minutes as the number of nodes increases. Interestingly, the stretch factors of the two algorithms are very close for most of the cases, and stay within 15%.
As expected, the stretch factors improve as the number of nodes increases. Our algorithm falls behind in finding the optimal paths in cases where the recovery path passes through the subtrees of multiple siblings. Instead of finding the best exit point out of the subtree, in order to keep the protocol simple and the paths well structured, our paths go to the root of the subtree and then follow its alternate path beyond that. These paths are formed using the blue edges. Paths discovered using a node's own green edges, on the other hand, are the optimal such paths. In other words, if most of the edges of a node are green, our algorithm is more likely to find paths close to the optimal ones. Since the average degree of the nodes is kept fixed in these simulations, increasing the number of nodes increases the probability of an edge being green. A similar logic explains the plots in Figure [3]: when the number of nodes is fixed, increasing the average degree of a node results in an increase in the number of green edges for the nodes, and a corresponding improvement in the stretch factors.
Concluding Remarks
In this paper we have presented an efficient algorithm for the SNFR problem, and developed protocols for dealing with transient single node failures in communication networks. Via simulation results, we show that our algorithms are much faster than those of [11], while the stretch factors of our paths are usually better or comparable.
Previous algorithms [6,8,11] for computing alternate paths are much slower, and thus impose a much longer network setup time as compared to our approach. The setup time becomes critical in more dynamic networks, where the configuration changes due to events other than transient node or link failures. Note that in several kinds of configuration changes (e.g. permanent node failure, node additions, etc), recomputing the routing paths (or other information) cannot be avoided, and it is desirable to have shorter network setup times.
For the case where we need to solve the SNFR problem for all nodes in the graph, our algorithm would need O(mn log n) time, which is still very close to the time required (O(mn + n 2 log n)) to build the routing tables for the all-pairs setting. The space requirement still stays linear in m and n.
The directed version of the SNFR problem, where one needs to find the optimal (shortest) recovery paths, can be shown to have a lower bound of Ω(min(m √ n, n 2 )) using a construction similar to those used for proving the same lower bound on the directed versions of the SLFR [1] and replacement paths [4] problems. The bound holds under the path comparison model of [5] for shortest paths algorithms.
| 4,908 |
0809.3447
|
1619717240
|
In this paper, we report on findings from an ethnographic study of how people use their calendars for personal information management (PIM). Our participants were faculty, staff and students who were not required to use or contribute to any specific calendaring solution, but chose to do so anyway. The study was conducted in three parts: first, an initial survey provided broad insights into how calendars were used; second, this was followed up with personal interviews of a few participants which were transcribed and content-analyzed; and third, examples of calendar artifacts were collected to inform our analysis. Findings from our study include the use of multiple reminder alarms, the reliance on paper calendars even among regular users of electronic calendars, and wide use of calendars for reporting and life-archival purposes. We conclude the paper with a discussion of what these imply for designers of interactive calendar systems and future work in PIM research.
|
There is a long history of calendar studies in the human-computer interaction literature. Early research on calendar use predates electronic calendars. In 1982, Kelley and Chapanis @cite_29 interviewed 23 professionals to discover how people in the business world kept track of their schedules. They found that for the individuals interviewed, calendars were indispensable and showed a lot of diversity in their use. The use of multiple calendars was prevalent, and a wide variation was seen in the time spans viewed, as well as in other aspects such as archiving, editing and portable access. Many of the problems identified in paper calendars could be solved in electronic calendars, and they concluded with a list of features for emerging electronic calendars to implement. Soon afterwards, Kincaid and Pierre @cite_28 examined the use of paper and electronic calendars in two groups, and concluded that electronic calendars failed to provide several key features that paper calendars did, such as flexibility, power, and convenience. They recommended many useful features to be incorporated into electronic calendar systems as well.
|
{
"abstract": [
"Manufacturers of integrated electronic office systems have included electronic versions of the calendar in almost every system they offer. This paper describes a survey of office workers, carried out to examine their use both of paper calendars and of electronic calendars that are commercially available as part of integrated office systems. It assesses the degree to which electronic calendars meet the needs of users. Our survey shows that the simple paper calendar is a tool whose power and flexibility is matched by few, if any, of the current commercially available electronic calendars. Recommendations for features that should be included in electronic calendars and automatic schedulers are included.",
"Twenty-three professional persons were interviewed extensively to find out how they keep their appointment calendars and to extract from that information suggestions about how calendars could be computerized. For the majority of the persons interviewed calendars are indispensable to the conduct of their business, and, in some cases, their personal lives. At the same time the data show an unexpectedly large amount of diversity in the kinds of calendars people use and in the ways they use them. Substantially more than half of the respondents have more than one calendar, with two persons using as many as six calendars at once. Portability and access from diverse locations are important for many persons. Concerns about privacy vary widely: some persons keep their calendars closely guarded, others allow free access to them. Relevant time spans covered by calendars are enormous. Some few people are concerned only with the current day and the day following, others may plan appointments a year or more in advance. A substantial number of appointments are changed after they have been made and, once again, the range is large, from about 2 per cent for some persons to about 80 per cent for others. Archiving, query patterns, and the insertion of correlated information into calendars also vary greatly among various users."
],
"cite_N": [
"@cite_28",
"@cite_29"
],
"mid": [
"1964994389",
"2041929848"
]
}
|
An Exploratory Study of Personal Calendar Use
|
Personal Information Management (PIM) is receiving attention as an area of research within the CHI community [Barreau et al., 2008, Bergman et al., 2004, Teevan et al., 2006]. PIM research is mostly concerned with studying how people find, keep, organize, and re-find (or reuse) information in and around their personal information space. Calendar management, one of the typical PIM tasks, is done today using a variety of systems and methods, including several popular paper-based methods: At-A-Glance, one of the largest suppliers of paper planners, sold more than 100 million calendars in 2000.
For computer-based systems, calendar management is often integrated into email clients (e.g. Microsoft Outlook); it is one of the most common applications in all personal digital assistants (PDAs, e.g. Blackberries and iPhones), and there are several online calendar systems (e.g. Yahoo! Calendar, Google Calendar, Apple Mobile Me). Date-and time-based information is ubiquitous, and is often available through many means such as postings on office doors, displays with dated announcements, through email conversations, written on wall calendars, etc. The result is that calendar information tends to be pervasive.
In this paper, we set out to explore how people use calendars in the presence of varied technological options. We are interested in understanding how calendar information is managed given the availability of these platforms. After a brief review of related work, we proceed to discuss our findings from the survey, interviews, and artifacts. From these, we suggest several opportunities for designers of future electronic calendar systems, and conclude the paper with a discussion of future research in personal information management.
Study Description
The ethnographic approach we took in this study follows techniques commonly reported in the Personal Information Management literature, notably [Kelley and Chapanis, 1982, Payne, 1993, Jones et al., 2005, Marshall and Bly, 2005]. We did not attempt to test any a priori hypotheses, but were interested in examining how calendar practices have evolved in the years following previous calendar studies by Kelley and Chapanis [Kelley and Chapanis, 1982] and Payne [Payne, 1993].
Our study has three components: a survey (N=98), in-person interviews (N=16), and an examination of calendar artifacts such as screenshots and paper calendars. A large-scale online survey was distributed among members of a university. A total of 98 responses were received (54% male and 45% female), including faculty (56%), administrative staff (20%), and students (19%) (figure 1). While previous studies have examined organizational calendars [Dourish et al., 1993] and groupware calendar systems [Grudin, 1996, Palen and Grudin, 2003], our focus was on the personal use of calendars.
Figure 1: Roles of survey participants (Faculty 56%, Staff 20%, Students 19%, Other 4%)
In part two, we conducted in-depth personal interviews with 16 participants, recruited from among the survey participants. The recruitment criterion for interview candidates was the same as in [Kelley and Chapanis, 1982]: that participants should be regular users of some form of calendar system, either electronic or paper or a combination of both. Participants included graduate students, faculty members, administrative assistants, a department head, and the director of a small business, among others.
Interviews ranged from 20 to 30 minutes each, and were conducted in situ at their workplaces so we could observe their calendaring practices directly (e.g. calendar programs, wall calendars, or paper scraps). Interviews were semistructured and open-ended: a prepared set of questions was asked in each interview. The questions we asked were closely modeled on those asked in similar studies [Kelley and Chapanis, 1982, Payne, 1993]. The complete set of questions is available as an appendix in a technical report [Tungare and Pérez-Quiñones, 2008]. As an extension to past studies, we were able to explore the use of features of modern calendar systems such as alarms, reminders, and mobile use, which were absent in paper calendars. Interviewees were encouraged to talk freely and to expand upon any of the themes they wished to discuss in more detail. Additional topics were addressed as appropriate depending on the interviewee's calendar use. Examining the calendar systems in use at their desks or on their walls prompted specific questions from the interviewers about these practices.
All interviews were transcribed in full. We performed content analysis [Krippendorff, 2004] of the transcripts to extract common patterns of use. The main purpose of content analysis in this study was to summarize the findings into groups of common observations, as in [Marshall and Bly, 2005]. Individual responses were tagged into several categories by two of the authors and any differences reconciled by discussion. Nearly 410 tags resulted from this activity; these were then collapsed into 383 tags (grouping together tags that were near-duplicates) and 11 top-level groups during the clustering procedure.
From each interview participant, we collected copies of artifacts that were used for calendaring purposes: 2 weeks' worth of calendar information and any other idiosyncratic observations that were spotted by the interviewers. These included screenshots of their calendar programs, paper calendars, printouts of electronic calendars (that were already printed for their own use), sticky notes stuck on paper calendars, etc. Some of these reflected a degree of wear and tear that occurred naturally over time; others provided evidence of manipulations such as color highlights, annotations in the margins, or comments made in other ways. Artifacts were not coded on any particular dimension, but pictures of these artifacts are used to supplement our textual descriptions wherever appropriate.
Capturing and Adding Events
Capturing events refers to the act of knowing about an event and entering it into a calendaring system (also referred to as the 'keeping' phase in the PIM literature.) Most survey participants reported adding new events as soon as they were (made) aware of them (93%) while the rest added them before the end of the day. Even when at their desks, those users who owned PDAs reported using them to create new events in their calendar: this was deemed faster than trying to start the calendar program on a computer and then adding an event. When away from their desks, they used proxy artifacts such as printed calendar copies or paper scraps.
Information about new events reached the primary calendar user via one of several means: email, phone, and in-person were commonly reported (figure 2). The fact that email was the most common way reported in our study is an expected evolution from older findings [Kelley and Chapanis, 1982] that phones were the most common stimuli for calendar events. Interviewees mentioned several other methods through which they received events: flyers, posters, campus notices, meeting minutes, public calendars (such as academic schedules or sports events), newspapers, internet forums, (postal) mail, fax, radio, or scheduled directly by other people who had access to the calendar (e.g., shared calendars). The wide variety of sources here is a potential indication of the problem of information overload [Schick et al., 1990] faced by knowledge workers.
Personal Calendar View Preference
We refer to the most common time interval shown in a calendar program or on a paper calendar as the preferred personal calendar view: the week view was preferred by most of our survey participants at 44%, followed by the day view at 35%, and the month view at 21% (figure 3). These are very close to the numbers reported by Kelley et al. [Kelley and Chapanis, 1982] (45%, 33%, 22% respectively). That many interviewees preferred a week view suggests the use of the calendar for opportunistic rehearsal, because they browsed the entire week's appointments each time they viewed the calendar. This preference supports the analysis of [Payne, 1993] in that the printed versions of calendars do provide a valuable aid in opportunistic reading of the week's activities. Users who kept multiple calendars within the same calendaring system indicated that they turned the visibility of each calendar on or off on demand, based on the specifics of what they needed to know during a particular lookup task. On smaller devices such as PDAs, the default view was the daily view.
Figure 3: Preferred calendar views
There seem to be two motivators for browsing calendars: looking for activities to attend in the near future, and looking for activities further out that require preparation. A daily view directly supports the first, while a week view partially supports the second one. Intermediates such as Google Calendar's 4-day view afford browsing for future events without losing local context for the current day. The downside of such a view, however, is that days no longer appear in a fixed column position, but in different locations based on the day. Thus, the preferred calendar view depends on the type of activity the user is doing.
Frequency of Consulting the Calendar
When asked about the frequency at which users consulted their calendars, we received a wide range of responses in the survey: keeping the calendar program always open (66%) and several times a day (21%) were the most common.
In the interviews, several other specific times were reported: just before bedtime or after waking up; only when prompted by an alarm; when scheduling a new event; once weekly; or on weekends only. Two interviewees reported consulting their calendar only to check for conflicts before scheduling new events, and for confirmation of events already scheduled.
Proxy Calendar Artifacts
We use the term 'proxy calendar artifacts' (or 'proxies' in short) to refer to ephemeral scraps or notes (characterized as micronotes in [Lin et al., 2004]) or printed calendars or electronic means such as email to self that are used for calendaring when primary calendar systems are unavailable or inaccessible (e.g. when users were away from their desks or offices).
Despite the prevalent use of electronic calendars, many were not portable and were tied to specific desktop computers. This prompted the users to use other means to view or add events to their calendar; about 27% reported that they used proxy artifacts such as scraps or notes to be entered into the primary calendar at a later time. A wide variety of proxy calendar artifacts was reported in our interviews: paper scraps were by far the most common medium; other techniques included carrying laptops solely for the purpose of calendaring, PDAs, voice recorders, and printouts of electronic calendars. Information captured via these proxies was transferred to the primary calendar after a delay: most often, users entered the events as soon as they could access their primary calendar (63% of survey participants), a few others reported entering them within the same day (25%), while the maximum delay reported was up to one week.
Information Stored in an Event Record
Calendar systems allow users to add several items of information to an event record. Typical information included the date of the event (97%), time (96%), location (93%) and purpose (69%) as indicated in the survey. In interviews, it was clear that common fields such as notes, other attendees and status were used only to a limited extent. Location was entered mostly for non-recurring events. However, many other pieces of information were frequently recorded, even though calendar programs do not have a specific field for these data. For example, information critical for participation at an event was entered inline for easy access: e.g. phone numbers for conference calls, cooking menus and shopping lists, meeting agenda, original email for reference, links to relevant web sites, and filenames of relevant files.
One participant mentioned adding meeting participants' email addresses in case she needed to inform them of a cancellation or rescheduling. For activities such as trips or flights, further details such as booking codes and flight details were included as a way of reducing information fragmentation between the calendar system and the email system.
Types of Events
The events most commonly recorded on calendars by survey participants were timed events such as appointments or meetings (98%), special events requiring advance planning, such as tests (93%), long duration events such as the week of final exams at the end of each semester (66%), and all-day events such as birthdays (81%). Several interviewees also mentioned recording to-do items in a calendar, such as phone calls to be made, or tasks which would remain on the calendar until completed, or which were scheduled in on their deadline. Specifically, we found several instances of the following types of events scheduled:
• Work-related events. Many interviewees used calendar scheduling for work-related events such as meetings, deadlines, classes, public events such as talks and conferences, and work holidays. Users in work environments included vacation details for co-workers and subordinates. Time was routinely blocked off to prepare for other events: e.g. class preparation or ground work to be done before a meeting.
Interviewees who had administrative assistants reported that their assistant maintained or co-maintained their calendar (7 out of 16 interviewees). The dynamics of shared access were vastly different across all these situations. One interviewee mentioned that he would never let an assistant be their primary scheduler; the assistant was able to access only a paper copy and any new events would be reviewed and added by the primary calendar user. Two other users mentioned that they provided paper calendars to subordinates to keep track of their schedule and to be able to answer questions about it to third parties. One participant reported calling in to their secretary when they needed to consult their schedule while away from their desk (similar to previous reports in [Perry et al., 2001]), while another reported sending email to themselves as a way to quickly capture a newly-scheduled meeting.
• Family/personal events. Half of the survey respondents indicated that they coordinate calendars with their spouses, roommates, or family. Even though family activities such as picking up kids from school, or attending church services, were easily remembered without the aid of a calendar, interviewees reported that they chose to record them anyway to provide "a visual idea of the entire day" (figure 4). Public holidays, family birthdays, and guest visits were added to prevent accidental scheduling of conflicting events.
Figure 4: Family events such as attending church are added to calendars, not for remembering, but to be able to get a visual idea of the entire day.
Many participants reported having separate calendars for business use and for home/personal use, as was also seen in a majority of respondents in [Kelley and Chapanis, 1982]. Although events overlapped between them (e.g. work trips on family calendars and family medical appointments on work calendars), the calendars themselves were located at the respective places and maintained separately. Family calendars were most likely to be kept in the kitchen, on the refrigerator. Two contrasts between work calendars and home calendars were prominent: work calendars were more often electronic, while home calendars were more likely to be paper calendars, e.g. as a wall calendar, or on the refrigerator. Work calendars were updated by the primary users or their secretaries or their colleagues, while family calendars were overwhelmingly managed by women. No male participant reported being the only calendar manager at home; women reported either being the only person to edit it, or sharing responsibilities with their husbands. Family-related events and reminders were constrained to the home calendar, as in [Nippert-Eng, 1996], but they were sometimes added to work calendars if such events would impact work time. For example, medical appointments (of self or family members) that occurred during work hours were added to work calendars so that their co-workers were aware of their absence.
• Public events. Public events were added even when the user had no intention of attending that event. They were added to recommend to other people, or for personal planning purposes, or to start conversations related to the public activity. An administrator (from ANONYMIZED, a small university town with a very popular college football team) said that although he had no interest in football, he added home games to his calendar to ensure that visiting dignitaries were not invited during a time when all hotels in town would be booked to capacity. On the other hand, two interviewees considered such public events as contributing to clutter in their personal calendar, and chose not to add them.
Continued Use of Paper Calendars
In his 1993 study [Payne, 1993], Payne reports that the most stable characteristic he observed was the continued reliance of all but two participants on some kind of paper calendar. Our findings are similar: despite most of our users using electronic calendars, every one of them reported using paper calendars even if not regularly; 12 out of 16 interview participants reported using them regularly.
Reasons for the Continued Use of Paper Calendars
We group the several reasons and examples elicited from our participants into the following four categories:
• Paper trail. Cancelled events were scratched off the calendar, leaving a paper trail. Being able to make a distinction between cancelled and never-scheduled events was cited as an important concern for continuing with paper calendars.
• Opportunistic rehearsal. We found support for the idea of opportunistic rehearsal [Payne, 1993]. Users cited that wall calendars needed no more than a glance to read, and provided for quick reference. This also corroborates Dourish's argument [Dourish et al., 1993] that the presence of informational context in paper artifacts such as calendars is an important motivator for people to continue to use them, even though electronic systems support the information retrieval task better.
• Annotation. Paper calendars are more amenable to free-form annotation, as reported earlier [Kelley and Chapanis, 1982], and as the following quotes from our study illustrate:
"That's what I call the graffiti aspect of it, it's probably freer by virtue of being handwritten." "There is a lot of that [code and symbols]. Stars and dashes and circles and headlines, marked and completed." Figure 5 shows a printed calendar with a sticky note pasted on it. The event is about a community potluck dinner. The sticky note complements the scheduled appointment with information about the dish the participant plans to bring to the event. Figure 6 shows a picture of a pumpkin hand-drawn on a printed calendar to mark Halloween on October 31. Figure 5: Sticky notes are pasted on paper calendars to remind oneself of the preparation required for an event. • Prepopulated events. Participants reported that having holidays or other event details already printed in commercially-available paper calendars was an important reason for using them. Calendars distributed by the university contained details not only of academic deadlines, but also of athletic events and games; [Kelley and Chapanis, 1982] point to branding issues as well.
Paper calendars were used alongside electronic calendars in either a supplementary or complementary role, as follows:
Printouts of Electronic Calendars
Printouts of electronic calendars played a supplementary role: they were used as proxies of the master calendar when the master calendar was unavailable. 35% of survey participants reported printing their calendar. Among those who printed, all views were commonly printed: monthly (43%), weekly (33%) and daily (25%) (figure 3). Among those who printed, many printed it monthly, weekly or daily (figure 7).
Figure 7: How often users perform activities related to paper calendars.
Based on our interviews, we found that electronic calendars were printed for three main reasons:
• Portability. Users carried a printed copy of the master calendar to venues where collaboration was anticipated, such as a meetings or trips. Even those who carried laptops and PDAs said that they relied on printed calendars for quick reference.
• Quick capture. Events were often entered into paper calendars first because of their easy accessibility, and were later transferred back to the digital calendar. One-third of all interviewees reported making changes to paper copies of their calendars. Not all these changes were propagated back to the master calendar, however.
• Sharing a read-only view with associates. Taping a printed calendar to the outside of office doors was common practice, as reported by interviewees.
In one instance, a user provided printed calendars to his subordinates so they could schedule him for meetings. These events were then screened by him before being added to the master calendar.
Wall Calendars
Wall calendars typically played a complementary role, and there was little overlap between the events on a wall calendar and those in an electronic calendar. 70% of survey participants had a wall calendar in their home or office; however, only 25% of users actually recorded events on it. Family events such as birthdays, vacations, and days off were most commonly recorded by interviewees. At home, wall calendars were located in the kitchen, on the fridge.
Index Cards
An extreme case of ad hoc paper calendar usage reported by one of our interviewees involved index cards, one for each day, that the participant carried in his shirt pocket when he forgot his PDA. Another interviewee reported exclusively using index cards for calendar management at their previous job because of their portability and trustworthiness. We report this not as a trend, but to illustrate the wide variety in the use of paper calendars.
Reminders and Alarms
Reminders and alarms are one of the major distinguishing features of modern electronic calendar systems. A majority of survey participants (63%) reported using these features. One user reported switching from paper to an online calendar because "a paper calendar cannot have an alarm feature". We use the term reminder to refer to any notification of a calendar event, and alarm to refer to the specific case of an interruption generated by the calendar system. Based on our interviews, we classified reminders into three categories taking into consideration the reasons, time, number, modalities and intervals of alarms. Before presenting the details of such a classification, however, we examine the individual factors in more detail.
Reasons for Using Alarms
Although reminding oneself of upcoming events is the most obvious use case for alarms, there were several other situations where users mentioned using reminders in addition to consulting their calendars regularly. Even when users were cognizant of upcoming events, they preferred to set alarms to interrupt them and grab their attention at the appointed hour. Alarms served as preparation reminders for events that were not necessarily in the immediate future.
When subordinates added events to a primary user's calendar, alarms were deemed an important way of notifying that user of such events. Early morning meeting reminders doubled up as wake-up alarms: one interviewee reported keeping their PDA by their bedside for this purpose. Another interviewee who needed to move his car out of a university parking lot where towing started at 8:00 am sharp had set a recurring alarm (figure 8). In one case, alarms were closely monitored by a user's secretary: if an event were missed by the user by a few minutes, the secretary would check on her boss and remind him to attend the meeting that was now overdue.
Number and Modalities of Reminders
While most survey participants only set a single reminder per event (52%), many others reported using multiple alarms. We conclude from our interviews that different semantic meanings were assigned to each such reminder: an alarm one day before an event was for preparation purposes, while an alarm 15 minutes before an event was a solicited interruption. Multimodal alarms were not used by many: the two most popular modalities used individually were audio (40%) and on-screen dialogs (41%).
Alarm Intervals
Reminders were set for varying intervals of time before the actual events took place, ranging from 5 minutes to several years. The two factors that affected this timing were (1) location of the event, and (2) whether or not (and how much) preparation was required. Users often set multiple alarms to be able to satisfy each of these requirements, because a single alarm could not satisfy them all. Based on these findings, we classify alarms into 3 categories:
• Interruption Reminders. Alarms set 5-15 minutes before an event were extremely short-term interruptions intended to get users up from their desks. Even if they knew in their mind that a particular event was coming up, it was likely that they were involved in their current activity deeply enough to overlook the event at the precise time it occurred. 15 minutes was the most common interval, as reported by 8 out of 16 interview participants. We found that the exact interval for interruption reminders was a function of the location of the event. Events that occurred in the same building as the user's current location had alarms set for between 5 and 15 minutes. Events in a different building had alarms for between 15 minutes and 30 minutes, based on the time it would take to reach there. Two interviewees reported that they set alarms for TV shows and other activities at home for up to 1 hour prior, because that is how long their commute took.
• Preparation Reminders. Users set multiple alarms when preparation was required for an event: the first (or earlier) alarm was to alert them to begin the preparation, while a later alarm was the interruption reminder for that event.
Payne [Payne, 1993] mentions the prevalence of this tendency as well: the reason for the first alarm (out of several) is to aid prospective remembering where the intention to look up an event is not in response to a specific temporal condition, but instead such conditions are checked after the intention is recalled. If certain items were needed to be taken to such meetings, preparation reminders were set for the previous night or early morning on the day of the event. Based on the interviews, preparation reminders were more commonly used for non-recurring events than for recurring events.
• Long-term Reminders. Events several months or years into the future were assigned reminders so that the user would not have to remember to consult the calendar at that time, but instead would have them show up automatically at (or around) the proper time. This is an illustration of using the calendar for prospective remembering tasks. Examples include a department head who put details of faculty coming up for tenure in up to 5 years, and a professor setting reminders for a conference submission deadline several months later.
Calendars as a Memory Aid
Calendars serve a valuable purpose as external memory for events [Payne, 1993]. In addition, in our data we found that the role that calendars play with respect to memory goes beyond this simple use. In particular, the following uses of calendars illustrate the different ways in which calendars serve as memory aids beyond simple lookups: First, users reported recording events in the calendar after the fact, not for the purpose of reminding, but to support reporting needs. Second, a few reported using previous years' calendars as a way to record memorable events to be remembered in future years. For those that used paper calendars, these events were often copied at the end of the year to newer calendars. The function of memory aid goes beyond remembering personal events (appointments and deadlines); it serves as a life journal, capturing events year after year. Kelley and Chapanis [Kelley and Chapanis, 1982] reported that 9 out of 11 respondents in their study kept calendars from two to 15 years.
Reporting Purposes
In our study, 10 out of 16 interviewees reported that they used their calendar to generate annual reports every year. Since it contained an accurate log of all their activities that year, it was the closest to a complete record of all accomplishments for that year. Among these, 5 users reported that they archived their calendars year after year to serve as a reference for years later. This tendency has also been reported in past studies [Kelley and Chapanis, 1982, Payne, 1993]; Kelley referred to it as an 'audit trail', and highlighted the role of calendars in reporting and planning.
One person mentioned that they discovered their father's journal a few years after his death, and now they cultivate their calendar as a memento to be shared with their kids in the future.
"I think I occasionally even think about my kids. Because I do, I save them, I don't throw them away [...] I think that it's common with a little more sense of mortality or something. It's trying to moving things outwards."
Opportunities for Design
In this section, we highlight how some of our findings can be addressed through new electronic calendar designs.
Paper Calendars and Printing
We do not believe that paper calendars will disappear from use; they serve several useful functions that are hard to replace by technology. Electronic calendars in general are more feature-rich than paper calendars. Portable devices have good support for capturing information while mobile. Yet, we found that paper calendars and proxies continue to be prevalent in the use of calendar management. They provide support for easy capture of calendar information, are effective at sharing, and support the display of the calendar in public view with ease.
Therefore, given the many uses of paper calendars, we consider how electronic calendar systems can provide better support for these proxies. Richer printing capabilities might provide easy support for transferring online calendar information to the paper domain. Printing a wall calendar is a novelty relegated to specialized design software. However our findings show that wall calendars play a significant role in supporting calendar management, particularly at home. With affordable printing technology available, it is possible to print a wall calendar or table calendar at home, incorporating not only details of events from a user's personal electronic calendar, but also visual elements such as color coding, digital photos (for birthdays, etc.) and event icons. In a way, printed calendars are used in similar ways as discussed in [Lin et al., 2004].
Digital Paper Trails
Some of the features of paper calendars can be recreated in online systems. For example, current electronic calendar systems remove all traces of an event upon cancellation, without providing an option to retain this historical record. This was one of the shortcomings which led interview participants to rely on paper instead. Instead of deleting events, they could be faded out of view, and made visible upon request. Most calendar software support the notion of different calendars inside of the same program. A possibility is that all deleted events could simply be moved to a separate calendar, where events can be hidden easily. Yet, the events would remain in the calendar as a record of cancelled activity.
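As a rough illustration of this idea, the sketch below (in Python; the Event and Calendar structures and the 'Cancelled' calendar name are our own assumptions, not features of any existing product) moves a cancelled event to a normally hidden trail calendar instead of deleting it:
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Event:
    title: str
    start: datetime
    cancelled: bool = False

@dataclass
class Calendar:
    name: str
    visible: bool = True
    events: List[Event] = field(default_factory=list)

def cancel_event(source: Calendar, trail: Calendar, event: Event) -> None:
    """Move an event to a (normally hidden) trail calendar instead of deleting it outright."""
    source.events.remove(event)
    event.cancelled = True
    trail.events.append(event)

# Usage: the cancelled meeting disappears from the working calendar but stays on record.
work = Calendar("Work")
trail = Calendar("Cancelled", visible=False)
meeting = Event("Budget review", datetime(2024, 3, 4, 10, 0))
work.events.append(meeting)
cancel_event(work, trail, meeting)
assert meeting not in work.events and meeting in trail.events
Keeping the trail as just another calendar means the existing show/hide controls can expose or suppress the cancelled events on demand.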
Tentative Event Scheduling
Several participants indicated that they 'penciled in' appointments in their paper calendars as tentative appointments to be confirmed later (also identified as a problem in [Kelley and Chapanis, 1982]). These tentative appointments served as a way of blocking particular date/time combinations while a meeting was being scheduled with others. Often, there were several of these tentative times for a particular meeting. Once the meeting was confirmed, only one of them was kept and the rest discarded. This type of activity is not well-supported in personal calendars. For corporate calendars, there is adequate support for scheduling group meetings, but it is often missing in personal calendars.
Intelligent Alarms
Calendar alarms and reminders have evolved from past systems and now allow notification in several ways: audible alarms, short text messages, popup reminders, and email are just a few. However, the fundamental concept of an alarm is still tailored only to interruption reminders.
• Preparation reminders. To support preparation reminders, many electronic calendars allow the creation of multiple alarms per event, with different modalities for each (e.g., email, SMS, sounds, dialog box). However, when these reminders are used for preparation, as we found in the study, users often wanted to have more context: they expected to have an optional text note to indicate what preparation was required. E.g., alarms that would remind a user before leaving home to remember to carry material for an upcoming meeting, or a reminder the previous night to review documents.
• Location-related alarms. The location of events was found to be an important influencer of alarm time. If calendars supported the notion of location (besides simply providing a field to type it in), alarms could be automatically set based on how long it would take the user to reach the event.
• Alarms on multiple devices. When an alarm is set on multiple devices, each will go off at the exact same time without any knowledge of all the others. There is a need to establish communication among the devices to present a single alarm to the user on the mutually-determined dominant device at the time.
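A minimal sketch of these alarm ideas (in Python; the location categories, travel-time estimates, and the 'most recently active device' rule are illustrative assumptions, not an existing calendar API) derives the alarm time from estimated travel time plus a short interruption margin, carries an optional preparation note, and picks a single device to notify:
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Tuple

# Hypothetical travel-time estimates (minutes) keyed by a coarse event-location category.
TRAVEL_MINUTES = {"same_building": 5, "other_building": 20, "off_campus": 60}

def alarm_time(event_start: datetime, location: str,
               prep_note: Optional[str] = None,
               margin_minutes: int = 10) -> Tuple[datetime, str]:
    """Fire the alarm early enough to cover estimated travel plus a short interruption margin."""
    lead = TRAVEL_MINUTES.get(location, 15) + margin_minutes
    return event_start - timedelta(minutes=lead), (prep_note or "Time to leave")

def dominant_device(devices: List[Dict]) -> Dict:
    """Pick a single device to present the alarm on, here simply the most recently active one."""
    return max(devices, key=lambda d: d["last_active"])

# Usage: a 2:00 pm meeting in another building, with a preparation note attached to the alarm.
when, note = alarm_time(datetime(2024, 3, 4, 14, 0), "other_building",
                        prep_note="Bring the printed agenda")
print(when, note)  # 2024-03-04 13:30:00 Bring the printed agenda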
Supporting a Rich Variety of Event Types
Users reported that not all events were equal: public events were merely for awareness, recurring events indicated that time was blocked out, and holidays were added to prevent accidental scheduling. From the users' point of view, each has different connotations, different visibility (public events should ideally fade out of sight when not required), and different types, number and intervals of alarms.
• Event templates. A calendar system that supports event types can provide ways and means for users to create event templates and categories with different default settings along each of the dimensions outlined above. By having event templates, quick capture is supported as well. When much of the extra information about an event is pre-filled, data entry can be minimized to just the title of the event. Certain types of events have special metadata fields associated with them, e.g. conference call events contain the dial code, flight events contain airline and arrival/departure info. This could be easily achieved by event templates.
• Showing/hiding public events. While a few users said they added public events for informational purposes, others did not want public events (that they would not necessarily attend) to clutter their calendar. If calendars supported making certain event types visible or invisible on demand, the needs of both user groups could be met. Again, by providing an option to keep all events in the same calendar, such a system would contribute to reducing information fragmentation.
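A minimal sketch of such event templates and per-type visibility defaults (in Python; the template names, fields, and defaults are illustrative assumptions rather than features of any particular product):
from copy import deepcopy

# Hypothetical per-type defaults: reminder lead times (minutes), visibility, and extra metadata fields.
TEMPLATES = {
    "conference_call": {"reminders_min": [15], "visible": True, "fields": {"dial_code": ""}},
    "flight": {"reminders_min": [24 * 60, 120], "visible": True,
               "fields": {"airline": "", "departure": "", "arrival": ""}},
    "public_event": {"reminders_min": [], "visible": False, "fields": {}},  # hidden unless shown on demand
}

def new_event(title: str, event_type: str, **metadata) -> dict:
    """Create an event from a template so that only the title (plus any metadata) must be typed."""
    event = deepcopy(TEMPLATES[event_type])
    event.update({"title": title, "type": event_type})
    event["fields"].update(metadata)
    return event

# Usage: quick capture of a conference call; the dial code goes into a type-specific field.
call = new_event("Project sync", "conference_call", dial_code="123-456#")
print(call["reminders_min"], call["fields"]["dial_code"])  # [15] 123-456#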
Reporting and Archival Support
Report generation is a significant use of electronic calendars. Calendar software should have a way to generate reports and export information so that particular groups of events can be summarized in terms of when the meetings/events occurred, how many hours were devoted to them, and capture any notes entered in the calendar. One participant reported that he uses the search functionality in his calendar to obtain a listing of events related to a theme. This is used to get an idea of the number of hours devoted to particular activities and help to prepare an annual activity report.
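As a rough sketch of such a report generator (in Python; the event fields and categories are assumed for the example), hours can be totalled per category from the event records:
from collections import defaultdict
from datetime import datetime

def hours_by_category(events):
    """Sum event durations (in hours) per category, e.g. for an annual activity report."""
    totals = defaultdict(float)
    for e in events:
        totals[e["category"]] += (e["end"] - e["start"]).total_seconds() / 3600.0
    return dict(totals)

# Usage with two hypothetical events.
events = [
    {"category": "teaching", "start": datetime(2024, 2, 1, 9, 0), "end": datetime(2024, 2, 1, 11, 0)},
    {"category": "advising", "start": datetime(2024, 2, 2, 14, 0), "end": datetime(2024, 2, 2, 15, 0)},
]
print(hours_by_category(events))  # {'teaching': 2.0, 'advising': 1.0}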
Discussion & Future Work
The paradox of encoding and remembering, as described in [Payne, 1993], was clearly evident in our data. Participants seem to over-rely on calendar artifacts to remember appointments, as seen in the setting of multiple alarms, printing of calendars for meetings, carrying a PDA everywhere, and calling their secretary to confirm events. The unfortunate side effect of sharing the management of a calendar with other people is that the primary user no longer goes through the personal encoding episode of entering the information. Some participants relied on administrative assistants to enter events in their calendars. At home, many participants relied on their spouses to maintain the calendar. Some participants even suggested the need to have an alarm for when events were added to their calendars. All of this points to a diminished opportunity for encoding the information that is entered into one's calendar. This makes it very difficult for participants to remember what is in their calendar, given that at times the scheduled events have never been seen before they occur. On the other hand, the opportunity for rehearsal is greater today, if users take advantage of existing information dissemination and syndication techniques. For example, keeping a calendar on a desktop computer and publishing to an online calendar service such as Google Calendar or Apple Mobile Me makes the calendar available in many other locations. Users can view their calendar on the web from any web browser, from mobile phones, or in the background on a desktop computer as part of widgets (tiny applications) such as Apple's Dashboard or Google Gadgets, or access it over a regular phone call [Pérez-Quiñones and Rode, 2004]. So, the possibility of opportunistic rehearsal is afforded by current systems. We did not observe this in our data, as many of our users did not use these services. However, the paradox of encoding, rehearsal, and recall seems to be in need of future work so we can understand the impact of electronic calendar systems on human memory.
• What is your age group?
Calendar Use Basics
• Which devices do you own or use frequently?
• What computing-enabled calendars do you use?
• Do you use your computer to keep your calendar? If so, which program do you use for your main calendar management task on your desktop/laptop computer?
• If you own and/or use a PDA, which calendar program do you use on the PDA?
• Do you use an online calendar?
• What events do you record on your calendar?
• How often do you visit your calendar?
• How far ahead do you regularly look when you view your calendar?
• What would you consider your preferred view?
• If your calendar software includes a To-Do function, do you use it?
• Does your calendar software have a way to classify calendar events by categories? If so, how do you use this feature?
• Who changes and updates your calendar?
• How often do you add new events?
• Do you keep 'proxies' (for example, post-its) or other notes that need to be entered in the calendar at a later time?
• How long does it take for the proxy to make it into your main calendar?
New Events
• How frequently do you get events by phone (someone calls you) that go into your calendar?
• How frequently do you get events by e-mail (someone sends you email) that go into your calendar?
• How frequently do you get events in person (someone tells you of a meeting) that go into your calendar?
• By what other methods do new events arrive?
• Is there any overlap? Is one just a pared-down version of the other one or do they contain completely separate events?
• Do you coordinate calendar events with your spouse, roommate, family?
• If so, how do you go about doing that?
• Please explain any additional ways in which you use your calendar system.
• What are your habits as far as when you look at your calendar, how often, how far ahead do you look, how in-depth you examine events when you look, etc.
• Do you use a method of organization on a paper calendar that you cannot apply to an electronic calendar? (i.e.: specific types of events go into a specific area of the date box, highlighted events, etc)
• Is there anything else about your personal information management we have not covered?
| 6,477 |
0809.3447
|
1619717240
|
In this paper, we report on findings from an ethnographic study of how people use their calendars for personal information management (PIM). Our participants were faculty, staff and students who were not required to use or contribute to any specific calendaring solution, but chose to do so anyway. The study was conducted in three parts: first, an initial survey provided broad insights into how calendars were used; second, this was followed up with personal interviews of a few participants which were transcribed and content-analyzed; and third, examples of calendar artifacts were collected to inform our analysis. Findings from our study include the use of multiple reminder alarms, the reliance on paper calendars even among regular users of electronic calendars, and wide use of calendars for reporting and life-archival purposes. We conclude the paper with a discussion of what these imply for designers of interactive calendar systems and future work in PIM research.
|
Nearly 10 years after Kelley and Chapanis' original study, Payne @cite_19 conducted interviews with 30 knowledge workers about both calendars and to-do lists, followed by a task analysis of his observations. He concluded that the central task supported by calendars was prospective remembering. Prospective remembering is the use of memory for remembering to do things in the future, as different from retrospective memory functions such as recalling past events.
|
{
"abstract": [
"This article is an interview study of calendar use and a cognitive analysis of the interactions between the design of calendars and the task of prospective remembering. The study and analysis are coordinated to present a general critique of current electronic calendar designs and to note opportunities for future designs. The interview data reveal continued use of paper calendars in a highly computerized setting. A key conclusion is that paper calendars support prospective remembering by promoting browsing of existing appointments during subsequent calendar keeping but that this advantage is compromised in current electronic designs. Other aspects of the interviews and the analyses address the representational limitations of both paper and electronic calendars. This research explores a methodology in which design implications of qualitative empirical data are understood and systematized through theoretical analyses of existing artifacts."
],
"cite_N": [
"@cite_19"
],
"mid": [
"2128207755"
]
}
|
An Exploratory Study of Personal Calendar Use
|
Personal Information Management (PIM) is receiving attention as an area of research within the CHI community [Barreau et al., 2008, Bergman et al., 2004, Teevan et al., 2006]. PIM research is mostly concerned with studying how people find, keep, organize, and re-find (or reuse) information in and around their personal information space. Calendar management, one of the typical PIM tasks, is done today using a variety of systems and methods, including several popular paper-based methods: At-A-Glance, one of the largest suppliers of paper planners, sold more than 100 million calendars in 2000.
For computer-based systems, calendar management is often integrated into email clients (e.g. Microsoft Outlook); it is one of the most common applications in all personal digital assistants (PDAs, e.g. Blackberries and iPhones), and there are several online calendar systems (e.g. Yahoo! Calendar, Google Calendar, Apple Mobile Me). Date- and time-based information is ubiquitous, and is often available through many means such as postings on office doors, displays with dated announcements, through email conversations, written on wall calendars, etc. The result is that calendar information tends to be pervasive.
In this paper, we set out to explore how people use calendars in the presence of varied technological options. We are interested in understanding how calendar information is managed given the availability of these platforms. After a brief review of related work, we proceed to discuss our findings from the survey, interviews, and artifacts. From these, we suggest several opportunities for designers of future electronic calendar systems, and conclude the paper with a discussion of future research in personal information management.
Study Description
The ethnographic approach we took in this study follows techniques commonly reported in the Personal Information Management literature, notably [Kelley and Chapanis, 1982, Payne, 1993, Jones et al., 2005, Marshall and Bly, 2005]. We did not attempt to test any a priori hypotheses, but were interested in examining how calendar practices have evolved in the years following previous calendar studies by Kelley and Chapanis [Kelley and Chapanis, 1982] and Payne [Payne, 1993].
Our study has three components to it: a survey (N=98), in-person interviews (N=16), and an examination of calendar artifacts such as screenshots and paper calendars. A large-scale online survey was distributed among members of a university. A total of 98 responses were received (54% male and 45% female), including faculty (56%), administrative staff (20%), and students (19%) (figure 1). While previous studies have examined organizational calendars [Dourish et al., 1993] and groupware calendar systems [Grudin, 1996, Palen and Grudin, 2003], our focus was on the personal use of calendars.
Figure 1: Roles of survey participants (faculty 56%, staff 20%, students 19%, other 4%).
In part two, we conducted in-depth personal interviews with 16 participants, recruited from among the survey participants. The recruitment criterion for interview candidates was the same as in [Kelley and Chapanis, 1982]: that participants should be regular users of some form of calendar system, either electronic or paper or a combination of both. Participants included graduate students, faculty members, administrative assistants, a department head, the director of a small business, etc., among others.
Interviews ranged from 20 to 30 minutes each, and were conducted in situ at their workplaces so we could observe their calendaring practices directly (e.g. calendar programs or wall calendars or paper scraps.) Interviews were semistructured and open-ended: a prepared set of questions was asked in each interview. The questions we asked were closely modeled on those asked in similar studies [Kelley and Chapanis, 1982, Payne, 1993]. The complete set of questions is available as an appendix in a technical report [Tungare and Pérez-Quiñones, 2008]. As an extension to past studies, we were able to explore the use of features of modern calendar systems such as alarms, reminders, and mobile use, which were absent in paper calendars. Interviewees were encouraged to talk freely and to expand upon any of the themes they wished to discuss in more detail. Additional topics were addressed as appropriate depending on the interviewee's calendar use. Examining the calendar systems in use at their desks or on their walls prompted specific questions from the interviewers about these practices.
All interviews were transcribed in full. We performed content analysis [Krippendorff, 2004] of the transcripts to extract common patterns of use. The main purpose of content analysis in this study was to summarize the findings into groups of common observations, as in [Marshall and Bly, 2005]. Individual responses were tagged into several categories by two of the authors and any differences reconciled by discussion. Nearly 410 tags resulted from this activity; these were then collapsed into 383 tags (grouping together tags that were near-duplicates) and 11 top-level groups during the clustering procedure.
From each interview participant, we collected copies of artifacts that were used for calendaring purposes: 2 weeks' worth of calendar information and any other idiosyncratic observations that were spotted by the interviewers. These included screenshots of their calendar programs, paper calendars, printouts of electronic calendars (that were already printed for their own use), sticky notes stuck on paper calendars, etc. Some of these reflected a degree of wear and tear that occurred naturally over time; others provided evidence of manipulations such as color highlights, annotations in the margins, or comments made in other ways. Artifacts were not coded on any particular dimension, but pictures of these artifacts are used to supplant our textual descriptions wherever appropriate.
Capturing and Adding Events
Capturing events refers to the act of knowing about an event and entering it into a calendaring system (also referred to as the 'keeping' phase in the PIM literature.) Most survey participants reported adding new events as soon as they were (made) aware of them (93%) while the rest added them before the end of the day. Even when at their desks, those users who owned PDAs reported using them to create new events in their calendar: this was deemed faster than trying to start the calendar program on a computer and then adding an event. When away from their desks, they used proxy artifacts such as printed calendar copies or paper scraps.
Information about new events reached the primary calendar user via one of several means: email, phone, and in-person were commonly reported (figure 2). The fact that email was the most common way reported in our study is an expected evolution from older findings [Kelley and Chapanis, 1982] that phones were the most common stimuli for calendar events. Interviewees mentioned several other methods through which they received events: flyers, posters, campus notices, meeting minutes, public calendars (such as academic schedules or sports events), newspapers, internet forums, (postal) mail, fax, radio, or scheduled directly by other people who had access to the calendar (e.g., shared calendars). The wide variety of sources here is a potential indication of the problem of information overload [Schick et al., 1990] faced by knowledge workers.
Personal Calendar View Preference
We refer to the most common time interval shown in a calendar program or on a paper calendar as the preferred personal calendar view: the week view was preferred by most of our survey participants at 44%, followed by the day view at 35%, and the month view at 21% (figure 3). These are very close to the numbers reported by Kelley et al. [Kelley and Chapanis, 1982] (45%, 33%, 22% respectively). That many interviewees preferred a week view suggests the use of the calendar for opportunistic rehearsal, because they browsed the entire week's appointments each time they viewed the calendar. This preference supports the analysis of [Payne, 1993] in that printed versions of the calendar do provide a valuable aid in opportunistic reading of the week's activities. Users who kept multiple calendars within the same calendaring system indicated that they turned the visibility of each calendar on or off on demand, based on the specifics of what they needed to know during a particular lookup task. On smaller devices such as PDAs, the default view was the daily view.
Figure 3: Preferred calendar views
There seem to be two motivators for browsing calendars: looking for activities to attend in the near future, and looking for activities further out that require preparation. A daily view directly supports the first, while a week view partially supports the second one. Intermediates such as Google Calendar's 4-day view afford browsing for future events without losing local context for the current day. The downside of such a view, however, is that days no longer appear in a fixed column position, but in different locations based on the day. Thus, the preferred calendar view depends on the type of activity the user is doing.
Frequency of Consulting the Calendar
When asked about the frequency at which users consulted their calendars, we received a wide range of responses in the survey: keeping the calendar program always open (66%) and several times a day (21%) were the most common.
In the interviews, several other specific times were reported: just before bedtime or after waking up; only when prompted by an alarm; when scheduling a new event; once weekly; or on weekends only. Two interviewees reported consulting their calendar only to check for conflicts before scheduling new events, and for confirmation of events already scheduled.
Proxy Calendar Artifacts
We use the term 'proxy calendar artifacts' (or 'proxies' for short) to refer to ephemeral scraps or notes (characterized as micronotes in [Lin et al., 2004]) or printed calendars or electronic means such as email to self that are used for calendaring when primary calendar systems are unavailable or inaccessible (e.g. when users were away from their desks or offices).
Despite the prevalent use of electronic calendars, many were not portable and were tied to specific desktop computers. This prompted the users to use other means to view or add events to their calendar; about 27% reported that they used proxy artifacts such as scraps or notes to be entered into the primary calendar at a later time. A wide variety of proxy calendar artifacts was reported in our interviews: paper scraps were by far the most common medium; other techniques included carrying laptops solely for the purpose of calendaring, PDAs, voice recorders, and printouts of electronic calendars. Information captured via these proxies was transferred to the primary calendar after a delay: most often, users entered the events as soon as they could access their primary calendar (63% of survey participants), a few others reported entering them within the same day (25%), while the maximum delay reported was up to one week.
Information Stored in an Event Record
Calendar systems allow users to add several items of information to an event record. Typical information included the date of the event (97%), time (96%), location (93%) and purpose (69%) as indicated in the survey. In interviews, it was clear that common fields such as notes, other attendees and status were used only to a limited extent. Location was entered mostly for non-recurring events. However, many other pieces of information were frequently recorded, even though calendar programs do not have a specific field for these data. For example, information critical for participation at an event was entered inline for easy access: e.g. phone numbers for conference calls, cooking menus and shopping lists, meeting agenda, original email for reference, links to relevant web sites, and filenames of relevant files.
One participant mentioned adding meeting participants' email addresses in case she needed to inform them of a cancellation or rescheduling. For activities such as trips or flights, further details such as booking codes and flight details were included as a way of reducing information fragmentation between the calendar system and the email system.
Types of Events
The events most commonly recorded on calendars by survey participants were timed events such as appointments or meetings (98%), special events requiring advance planning, such as tests (93%), long duration events such as the week of final exams at the end of each semester (66%), and all-day events such as birthdays (81%). Several interviewees also mentioned recording to-do items in a calendar, such as phone calls to be made, or tasks which would remain on the calendar until completed, or which were scheduled in on their deadline. Specifically, we found several instances of the following types of events scheduled:
• Work-related events. Many interviewees used calendar scheduling for work-related events such as meetings, deadlines, classes, public events such as talks and conferences, and work holidays. Users in work environments included vacation details for co-workers and subordinates. Time was routinely blocked off to prepare for other events: e.g. class preparation or ground work to be done before a meeting.
Interviewees who had administrative assistants reported that their assistant maintained or co-maintained their calendar (7 out of 16 interviewees). The dynamics of shared access were vastly different across all these situations. One interviewee mentioned that he would never let an assistant be their primary scheduler; the assistant was able to access only a paper copy and any new events would be reviewed and added by the primary calendar user. Two other users mentioned that they provided paper calendars to subordinates to keep track of their schedule and to be able to answer questions about it to third parties. One participant reported calling in to their secretary when they needed to consult their schedule while away from their desk (similar to previous reports in [Perry et al., 2001]), while another reported sending email to themselves as a way to quickly capture a newly-scheduled meeting.
• Family/personal events. Half of the survey respondents indicated that they coordinate calendars with their spouses, roommates, or family. Even though family activities such as picking up kids from school, or attending church services, were easily remembered without the aid of a calendar, interviewees reported that they chose to record them anyway to provide "a visual idea of the entire day" (figure 4). Public holidays, family birthdays, and guest visits were added to prevent accidental scheduling of conflicting events.
Figure 4: Family events such as attending church are added to calendars, not for remembering, but to be able to get a visual idea of the entire day.
Many participants reported having separate calendars for business use and for home/personal use, as was also seen in a majority of respondents in [Kelley and Chapanis, 1982]. Although events overlapped between them (e.g. work trips on family calendars and family medical appointments on work calendars), the calendars themselves were located at the respective places and maintained separately. Family calendars were most likely to be kept in the kitchen, on the refrigerator. Two contrasts between work calendars and home calendars were prominent: work calendars were more often electronic, while home calendars were more likely to be paper calendars, e.g. as a wall calendar, or on the refrigerator. Work calendars were updated by the primary users or their secretaries or their colleagues, while family calendars were overwhelmingly managed by women. No male participant reported being the only calendar manager at home; women reported either being the only person to edit it, or sharing responsibilities with their husbands. Family-related events and reminders were constrained to the home calendar, as in [Nippert-Eng, 1996], but they were sometimes added to work calendars if such events would impact work time. For example, medical appointments (of self or family members) that occurred during work hours were added to work calendars so that their co-workers were aware of their absence.
• Public events. Public events were added even when the user had no intention of attending that event. They were added to recommend to other people, or for personal planning purposes, or to start conversations related to the public activity. An administrator (from ANONYMIZED, a small university town with a very popular college football team) said that although he had no interest in football, he added home games to his calendar to ensure that visiting dignitaries were not invited during a time when all hotels in town would be booked to capacity. On the other hand, two interviewees considered such public events as contributing to clutter in their personal calendar, and chose not to add them.
Continued Use of Paper Calendars
In his 1993 study [Payne, 1993], Payne reports that the most stable characteristic he observed was the continued reliance of all but two participants on some kind of paper calendar. Our findings are similar: despite most of our users using electronic calendars, every one of them reported using paper calendars even if not regularly; 12 out of 16 interview participants reported using them regularly.
Reasons for the Continued Use of Paper Calendars
We group the several reasons and examples elicited from our participants into the following four categories:
• Paper trail. Cancelled events were scratched off the calendar, leaving a paper trail. Being able to make a distinction between cancelled and never-scheduled events was cited as an important concern for continuing with paper calendars.
• Opportunistic rehearsal. We found support for the idea of opportunistic rehearsal [Payne, 1993]. Users cited that wall calendars needed no more than a glance to read, and provided for quick reference. This also corroborates Dourish's argument [Dourish et al., 1993] that the presence of informational context in paper artifacts such as calendars is an important motivator for people to continue to use them, even though electronic systems support the information retrieval task better.
• Annotation. Paper calendars are more amenable to free-form annotation, as reported earlier [Kelley and Chapanis, 1982], and as the following quotes from our study illustrate:
"That's what I call the graffiti aspect of it, it's probably freer by virtue of being handwritten." "There is a lot of that [code and symbols]. Stars and dashes and circles and headlines, marked and completed." Figure 5 shows a printed calendar with a sticky note pasted on it. The event is about a community potluck dinner. The sticky note complements the scheduled appointment with information about the dish the participant plans to bring to the event. Figure 6 shows a picture of a pumpkin hand-drawn on a printed calendar to mark Halloween on October 31. Figure 5: Sticky notes are pasted on paper calendars to remind oneself of the preparation required for an event. • Prepopulated events. Participants reported that having holidays or other event details already printed in commercially-available paper calendars was an important reason for using them. Calendars distributed by the university contained details not only of academic deadlines, but also of athletic events and games; [Kelley and Chapanis, 1982] point to branding issues as well.
Paper calendars were used alongside electronic calendars in either a supplementary or complementary role, as follows:
Printouts of Electronic Calendars
Printouts of electronic calendars played a supplementary role: they were used as proxies of the master calendar when the master calendar was unavailable. 35% of survey participants reported printing their calendar. Among those who printed, all views were commonly printed: monthly (43%), weekly (33%) and daily (25%) (figure 3). Among those who printed, many printed it monthly, weekly or daily (figure 7).
Figure 7: How often users perform activities related to paper calendars.
Based on our interviews, we found that electronic calendars were printed for three main reasons:
• Portability. Users carried a printed copy of the master calendar to venues where collaboration was anticipated, such as a meetings or trips. Even those who carried laptops and PDAs said that they relied on printed calendars for quick reference.
• Quick capture. Events were often entered into paper calendars first because of their easy accessibility, and were later transferred back to the digital calendar. One-third of all interviewees reported making changes to paper copies of their calendars. Not all these changes were propagated back to the master calendar, however.
• Sharing a read-only view with associates. Taping a printed calendar to the outside of office doors was common practice, as reported by interviewees.
In one instance, a user provided printed calendars to his subordinates so they could schedule him for meetings. These events were then screened by him before being added to the master calendar.
Wall Calendars
Wall calendars typically played a complementary role, and there was little overlap between the events on a wall calendar and those in an electronic calendar. 70% of survey participants had a wall calendar in their home or office; however, only 25% of users actually recorded events on it. Family events such as birthdays, vacations, and days off were most commonly recorded by interviewees. At home, wall calendars were located in the kitchen, on the fridge.
Index Cards
An extreme case of ad hoc paper calendar usage reported by one of our interviewees involved index cards, one for each day, that the participant carried in his shirt pocket when he forgot his PDA. Another interviewee reported exclusively using index cards for calendar management at their previous job because of their portability and trustworthiness. We report this not as a trend, but to illustrate the wide variety in the use of paper calendars.
Reminders and Alarms
Reminders and alarms are one of the major distinguishing features of modern electronic calendar systems. A majority of survey participants (63%) reported using these features. One user reported switching from paper to an online calendar because "a paper calendar cannot have an alarm feature". We use the term reminder to refer to any notification of a calendar event, and alarm to refer to the specific case of an interruption generated by the calendar system. Based on our interviews, we classified reminders into three categories taking into consideration the reasons, time, number, modalities and intervals of alarms. Before presenting the details of such a classification, however, we examine the individual factors in more detail.
Reasons for Using Alarms
Although reminding oneself of upcoming events is the most obvious use case for alarms, there were several other situations where users mentioned using reminders in addition to consulting their calendars regularly. Even when users were cognizant of upcoming events, they preferred to set alarms to interrupt them and grab their attention at the appointed hour. Alarms served as preparation reminders for events that were not necessarily in the immediate future.
When subordinates added events to a primary user's calendar, alarms were deemed an important way of notifying that user of such events. Early morning meeting reminders doubled up as wake-up alarms: one interviewee reported keeping their PDA by their bedside for this purpose. Another interviewee who needed to move his car out of a university parking lot where towing started at 8:00 am sharp had set a recurring alarm (figure 8). In one case, alarms were closely monitored by a user's secretary: if an event were missed by the user by a few minutes, the secretary would check on her boss and remind him to attend the meeting that was now overdue.
Number and Modalities of Reminders
While most survey participants only set a single reminder per event (52%), many others reported using multiple alarms. We conclude from our interviews that different semantic meanings were assigned to each such reminder: an alarm one day before an event was for preparation purposes, while an alarm 15 minutes before an event was a solicited interruption. Multimodal alarms were not used by many: the two most popular modalities used individually were audio (40%) and on-screen dialogs (41%).
Alarm Intervals
Reminders were set for varying intervals of time before the actual events took place, ranging from 5 minutes to several years. The two factors that affected this timing were (1) location of the event, and (2) whether or not (and how much) preparation was required. Users often set multiple alarms to be able to satisfy each of these requirements, because a single alarm could not satisfy them all. Based on these findings, we classify alarms into 3 categories:
• Interruption Reminders. Alarms set 5-15 minutes before an event were extremely short-term interruptions intended to get users up from their desks. Even if they knew in their mind that a particular event was coming up, it was likely that they were involved in their current activity deeply enough to overlook the event at the precise time it occurred. 15 minutes was the most common interval, as reported by 8 out of 16 interview participants. We found that the exact interval for interruption reminders was a function of the location of the event. Events that occurred in the same building as the user's current location had alarms set for between 5 and 15 minutes. Events in a different building had alarms for between 15 minutes and 30 minutes, based on the time it would take to reach there. Two interviewees reported that they set alarms for TV shows and other activities at home for up to 1 hour prior, because that is how long their commute took.
• Preparation Reminders. Users set multiple alarms when preparation was required for an event: the first (or earlier) alarm was to alert them to begin the preparation, while a later alarm was the interruption reminder for that event.
Payne [Payne, 1993] mentions the prevalence of this tendency as well: the reason for the first alarm (out of several) is to aid prospective remembering where the intention to look up an event is not in response to a specific temporal condition, but instead such conditions are checked after the intention is recalled. If certain items were needed to be taken to such meetings, preparation reminders were set for the previous night or early morning on the day of the event. Based on the interviews, preparation reminders were more commonly used for non-recurring events than for recurring events.
• Long-term Reminders. Events several months or years into the future were assigned reminders so that the user would not have to remember to consult the calendar at that time, but instead would have them show up automatically at (or around) the proper time. This is an illustration of using the calendar for prospective remembering tasks. Examples include a department head who put details of faculty coming up for tenure in up to 5 years, and a professor setting reminders for a conference submission deadline several months later.
Calendars as a Memory Aid
Calendars serve a valuable purpose as external memory for events [Payne, 1993]. In addition, in our data we found that the role that calendars play with respect to memory goes beyond this simple use. In particular, the following uses of calendars illustrate the different ways in which calendars serve as memory aids beyond simple lookups: First, users reported recording events in the calendar after the fact, not for the purpose of reminding, but to support reporting needs. Second, a few reported using previous years' calendars as a way to record memorable events to be remembered in future years. For those that used paper calendars, these events were often copied at the end of the year to newer calendars. The function of memory aid goes beyond remembering personal events (appointments and deadlines); it serves as a life journal, capturing events year after year. Kelley and Chapanis [Kelley and Chapanis, 1982] reported that 9 out of 11 respondents in their study kept calendars from two to 15 years.
Reporting Purposes
In our study, 10 out of 16 interviewees reported that they used their calendar to generate annual reports every year. Since it contained an accurate log of all their activities that year, it was the closest to a complete record of all accomplishments for that year. Among these, 5 users reported that they archived their calendars year after year to serve as a reference for years later. This tendency has also been reported in past studies [Kelley and Chapanis, 1982, Payne, 1993]; Kelley referred to it as an 'audit trail', and highlighted the role of calendars in reporting and planning.
One person mentioned that they discovered their father's journal a few years after his death, and now they cultivate their calendar as a memento to be shared with their kids in the future.
"I think I occasionally even think about my kids. Because I do, I save them, I don't throw them away [...] I think that it's common with a little more sense of mortality or something. It's trying to moving things outwards."
Opportunities for Design
In this section, we highlight how some of our findings can be addressed through new electronic calendar designs.
Paper Calendars and Printing
We do not believe that paper calendars will disappear from use; they serve several useful functions that are hard to replace by technology. Electronic calendars in general are more feature-rich than paper calendars. Portable devices have good support for capturing information while mobile. Yet, we found that paper calendars and proxies continue to be prevalent in the use of calendar management. They provide support for easy capture of calendar information, are effective at sharing, and support the display of the calendar in public view with ease.
Therefore, given the many uses of paper calendars, we consider how electronic calendar systems can provide better support for these proxies. Richer printing capabilities might provide easy support for transferring online calendar information to the paper domain. Printing a wall calendar is a novelty relegated to specialized design software. However our findings show that wall calendars play a significant role in supporting calendar management, particularly at home. With affordable printing technology available, it is possible to print a wall calendar or table calendar at home, incorporating not only details of events from a user's personal electronic calendar, but also visual elements such as color coding, digital photos (for birthdays, etc.) and event icons. In a way, printed calendars are used in similar ways as discussed in [Lin et al., 2004].
Digital Paper Trails
Some of the features of paper calendars can be recreated in online systems. For example, current electronic calendar systems remove all traces of an event upon cancellation, without providing an option to retain this historical record. This was one of the shortcomings which led interview participants to rely on paper instead. Instead of deleting events, they could be faded out of view, and made visible upon request. Most calendar software support the notion of different calendars inside of the same program. A possibility is that all deleted events could simply be moved to a separate calendar, where events can be hidden easily. Yet, the events would remain in the calendar as a record of cancelled activity.
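As a rough illustration of this idea, the sketch below (in Python; the Event and Calendar structures and the 'Cancelled' calendar name are our own assumptions, not features of any existing product) moves a cancelled event to a normally hidden trail calendar instead of deleting it:
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Event:
    title: str
    start: datetime
    cancelled: bool = False

@dataclass
class Calendar:
    name: str
    visible: bool = True
    events: List[Event] = field(default_factory=list)

def cancel_event(source: Calendar, trail: Calendar, event: Event) -> None:
    """Move an event to a (normally hidden) trail calendar instead of deleting it outright."""
    source.events.remove(event)
    event.cancelled = True
    trail.events.append(event)

# Usage: the cancelled meeting disappears from the working calendar but stays on record.
work = Calendar("Work")
trail = Calendar("Cancelled", visible=False)
meeting = Event("Budget review", datetime(2024, 3, 4, 10, 0))
work.events.append(meeting)
cancel_event(work, trail, meeting)
assert meeting not in work.events and meeting in trail.events
Keeping the trail as just another calendar means the existing show/hide controls can expose or suppress the cancelled events on demand.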
Tentative Event Scheduling
Several participants indicated that they 'penciled in' appointments in their paper calendars as tentative appointments to be confirmed later (also identified as a problem in [Kelley and Chapanis, 1982]). These tentative appointments served as a way of blocking particular date/time combinations while a meeting was being scheduled with others. Often, there were several of these tentative times for a particular meeting. Once the meeting was confirmed, only one of them was kept and the rest discarded. This type of activity is not well-supported in personal calendars. For corporate calendars, there is adequate support for scheduling group meetings, but it is often missing in personal calendars.
Intelligent Alarms
Calendar alarms and reminders have evolved from past systems and now allow notification in several ways: audible alarms, short text messages, popup reminders, and email are just a few. However, the fundamental concept of an alarm is still tailored only to interruption reminders.
• Preparation reminders. To support preparation reminders, many electronic calendars allow the creation of multiple alarms per event, with different modalities for each (e.g., email, SMS, sounds, dialog box). However, when these reminders are used for preparation, as we found in the study, users often wanted more context: they expected to have an optional text note indicating what preparation was required. Examples include an alarm that reminds a user before leaving home to carry material for an upcoming meeting, or a reminder the previous night to review documents.
• Location-related alarms. The location of events was found to be an important factor in choosing alarm times. If calendars supported the notion of location (beyond simply providing a field to type it in), alarms could be set automatically based on how long it would take the user to reach the event; a sketch of this idea appears after this list.
• Alarms on multiple devices. When an alarm is set on multiple devices, each goes off at exactly the same time without any knowledge of the others. There is a need to establish communication among the devices so that a single alarm is presented to the user on a mutually determined dominant device.
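A minimal sketch of the location-related idea above, under the assumption that the system can estimate travel time; here a hard-coded table stands in for a mapping service, and all names are illustrative.

from datetime import datetime, timedelta

# Stand-in travel-time table; a real system would query a mapping service.
TRAVEL_MINUTES = {
    ("office", "same building"): 5,
    ("office", "other building"): 20,
    ("home", "campus"): 60,
}

def estimate_travel_minutes(origin: str, destination: str) -> int:
    return TRAVEL_MINUTES.get((origin, destination), 15)  # default to 15 minutes

def alarm_time(event_start: datetime, origin: str, destination: str,
               buffer_minutes: int = 5) -> datetime:
    """Fire the alarm early enough to travel to the event, plus a small buffer."""
    lead = estimate_travel_minutes(origin, destination) + buffer_minutes
    return event_start - timedelta(minutes=lead)

start = datetime(2008, 11, 3, 14, 0)
print(alarm_time(start, "office", "other building"))  # 2008-11-03 13:35:00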
Supporting a Rich Variety of Event Types
Users reported that not all events were equal: public events were merely for awareness, recurring events indicated that time was blocked out, and holidays were added to prevent accidental scheduling. From the users' point of view, each type of event has different connotations, different visibility (public events should ideally fade out of sight when not required), and different types, numbers, and intervals of alarms.
• Event templates. A calendar system that supports event types can let users create event templates and categories with different default settings along each of the dimensions outlined above. Event templates also support quick capture: when much of the extra information about an event is pre-filled, data entry can be reduced to just the title of the event. Certain types of events have special metadata fields associated with them, e.g. conference-call events contain the dial code and flight events contain airline and arrival/departure information; templates can supply these fields as well (a sketch follows this list).
• Showing/hiding public events. While a few users said they added public events for informational purposes, others did not want public events (that they would not necessarily attend) to clutter their calendar. If calendars supported making certain event types visible or invisible on demand, the needs of both user groups could be met. Again, by providing an option to keep all events in the same calendar, such a system would contribute to reducing information fragmentation.
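The sketch below illustrates the event-template idea described in the first bullet above; the template names, fields, and defaults are invented for illustration and do not correspond to any particular calendar product.

# Illustrative event templates: each pre-fills alarms, visibility, and type-specific
# metadata fields, so the user only has to supply a title (and the relevant metadata).
TEMPLATES = {
    "conference call": {
        "alarms_minutes_before": [15],
        "visibility": "private",
        "metadata_fields": ["dial_code"],
    },
    "flight": {
        "alarms_minutes_before": [24 * 60, 120],   # the night before, and 2 hours before
        "visibility": "private",
        "metadata_fields": ["airline", "departure", "arrival"],
    },
    "public event": {
        "alarms_minutes_before": [],
        "visibility": "hideable",                  # can be faded out of view on demand
        "metadata_fields": [],
    },
}

def new_event(title, template_name, **metadata):
    event = {"title": title, **TEMPLATES[template_name]}
    event["metadata"] = {k: metadata.get(k) for k in event["metadata_fields"]}
    return event

print(new_event("Project sync", "conference call", dial_code="1234#"))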
Reporting and Archival Support
Report generation is a significant use of electronic calendars. Calendar software should provide a way to generate reports and export information, so that particular groups of events can be summarized in terms of when the meetings or events occurred and how many hours were devoted to them, and so that any notes entered in the calendar can be captured. One participant reported that he uses the search functionality in his calendar to obtain a listing of events related to a theme; this gives him an idea of the number of hours devoted to particular activities and helps him prepare an annual activity report.
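As a rough sketch of such report generation (assuming each event record carries a category, a duration in hours, and an optional note; these field choices are ours, not the participants'):

from collections import defaultdict

# Assumed event records: (category, hours, note).
events = [
    ("teaching", 2.0, "CS 101 lecture"),
    ("teaching", 1.5, "office hours"),
    ("service", 1.0, "curriculum committee"),
]

def annual_report(events):
    """Summarize hours per category and collect any notes entered in the calendar."""
    hours = defaultdict(float)
    notes = defaultdict(list)
    for category, h, note in events:
        hours[category] += h
        if note:
            notes[category].append(note)
    return {c: {"hours": hours[c], "notes": notes[c]} for c in hours}

print(annual_report(events))
# {'teaching': {'hours': 3.5, 'notes': ['CS 101 lecture', 'office hours']},
#  'service': {'hours': 1.0, 'notes': ['curriculum committee']}}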
Discussion & Future Work
The paradox of encoding and remembering, as described in [Payne, 1993], was clearly evident in our data. Participants seem to over-rely on calendar artifacts to remember appointments, as seen in the setting of multiple alarms, printing of calendars for meetings, carrying a PDA everywhere, and calling their secretary to confirm events. The unfortunate side effect of sharing the management of a calendar with other people is that the primary user no longer goes through the personal encoding episode of entering the information. Some participants relied on administrative assistants to enter events in their calendars. At home, many participants relied on their spouses to maintain the calendar. Some participants even suggested the need to have an alarm for when events were added to their calendars. All of this points to a diminished opportunity for encoding the information that is entered into one's calendar. This makes it very difficult for participants to remember what is in their calendar, given that at times the scheduled events have never been seen before they occur. On the other hand, the opportunity for rehearsal is greater today, if users take advantage of existing information dissemination and syndication techniques. For example, keeping a calendar on a desktop computer and publishing it to an online calendar service such as Google Calendar or Apple Mobile Me makes the calendar available in many other locations. Users can view their calendar on the web from any web browser, from mobile phones, or in the background on a desktop computer as part of widgets (tiny applications) such as Apple's Dashboard or Google Gadgets, or access it over a regular phone call [Pérez-Quiñones and Rode, 2004]. So, the possibility of opportunistic rehearsal is afforded by current systems. We did not observe this in our data, as many of our users did not use these services. However, the paradox of encoding, rehearsal, and recall seems to be in need of future work so we can understand the impact of electronic calendar systems on human memory.
Appendix: Survey Questions
• What is your age group?
Calendar Use Basics
• Which devices do you own or use frequently?
• What computing-enabled calendars do you use?
• Do you use your computer to keep your calendar? If so, which program do you use for your main calendar management task on your desktop/laptop computer?
• If you own and/or use a PDA, which calendar program do you use on the PDA?
• Do you use an online calendar?
• What events do you record on your calendar?
• How often do you visit your calendar?
• How far ahead do you regularly look when you view your calendar?
• What would you consider your preferred view?
• If your calendar software includes a To-Do function, do you use it?
• Does your calendar software have a way to classify calendar events by categories? If so, how do you use this feature?
• Who changes and updates your calendar?
• How often do you add new events?
• Do you keep 'proxies' (for example, post-its) or other notes that need to be entered in the calendar at a later time?
• How long does it take for the proxy to make it into your main calendar?
New Events
• How frequently do you get events by phone (someone calls you) that go into your calendar?
• How frequently do you get events by e-mail (someone sends you email) that go into your calendar?
• How frequently do you get events in person (someone tells you of a meeting) that go into your calendar?
• By what other methods do new events arrive?
• Is there any overlap? Is one just a pared-down version of the other one or do they contain completely separate events?
• Do you coordinate calendar events with your spouse, roommate, family?
• If so, how do you go about doing that?
• Please explain any additional ways in which you use your calendar system.
• What are your habits as far as when you look at your calendar: how often, how far ahead you look, how in-depth you examine events when you look, etc.?
• Do you use a method of organization on a paper calendar that you cannot apply to an electronic calendar? (e.g., specific types of events go into a specific area of the date box, highlighted events, etc.)
• Is there anything else about your personal information management we have not covered?
| 6,477 |
0809.3447
|
1619717240
|
In this paper, we report on findings from an ethnographic study of how people use their calendars for personal information management (PIM). Our participants were faculty, staff and students who were not required to use or contribute to any specific calendaring solution, but chose to do so anyway. The study was conducted in three parts: first, an initial survey provided broad insights into how calendars were used; second, this was followed up with personal interviews of a few participants which were transcribed and content-analyzed; and third, examples of calendar artifacts were collected to inform our analysis. Findings from our study include the use of multiple reminder alarms, the reliance on paper calendars even among regular users of electronic calendars, and wide use of calendars for reporting and life-archival purposes. We conclude the paper with a discussion of what these imply for designers of interactive calendar systems and future work in PIM research.
|
Several interesting design concepts have been suggested to make electronic calendar systems less error-prone and smarter. These cover a wide range, from systems that retrieve tasks from email messages @cite_7 , to systems that learn from users' behavior to recommend intelligent defaults @cite_3 , to calendar systems that predict attendance at events @cite_30 . @cite_36 assessed the effectiveness of a priority-based calendar prototype and concluded that integration with other personal information systems (such as email) would make the system more useful for users. Visualizing calendar information on desktop computers and mobile devices has been explored in several studies @cite_4 @cite_11 .
|
{
"abstract": [
"In this paper, we describe Augur, a groupware calendar system to support personal calendaring practices, informal workplace communication, and the socio-technical evolution of the calendar system within a workgroup. Successful design and deployment of groupware calendar systems have been shown to depend on several converging, interacting perspectives. We describe calendar-based work practices as viewed from these perspectives, and present the Augur system in support of them. Augur allows users to retain the flexibility of personal calendars by anticipating and compensating for inaccurate calendar entries and idiosyncratic event names. We employ predictive user models of event attendance, intelligent processing of calendar text, and discovery of shared events to drive novel calendar visualizations that facilitate interpersonal communication. In addition, we visualize calendar access to support privacy management and long-term evolution of the calendar system.",
"The increasing mass of information confronting a business or an individual have created a demand for information management applications. Time-based information, in particular, is an important part of many information access tasks. This paper explores how to use 3D graphics and interactive animation to design and implement visualizers that improve access to large masses of time-based information. Two new visualizers have been developed for the Information Visualizer: 1) the Spiral Calendar was designed for rapid access to an individual's daily schedule, and 2) the Time Lattice was designed for analyzing the time relationships among the schedules of groups of people. The Spiral Calendar embodies a new 3D graphics technique for integrating detail and context by placing objects in a 3D spiral. It demonstrates that advanced graphics techniques can enhance routine office information tasks. The Time Lattice is formed by aligning a collection of 2D calendars. 2D translucent shadows provide views and interactive access to the resulting complex 3D object. The paper focuses on how these visualizations were developed. The Spiral Calendar, in particular, has gone through an entire cycle of development, including design, implementation, evaluation, revision and reuse. Our experience should prove useful to others developing user interfaces based on advanced graphics.",
"Recent debate has centered on the relative promise of focusinguser-interface research on developing new metaphors and tools thatenhance users abilities to directly manipulate objects versusdirecting effort toward developing interface agents that provideautomation. In this paper, we review principles that show promisefor allowing engineers to enhance human-computer interactionthrough an elegant coupling of automated services with directmanipulation. Key ideas will be highlighted in terms of the Lookoutsystem for scheduling and meeting management.",
"Scheduling group meetings requires access to participants' calendars, typically located in scattered pockets or desks. Placing participants' calendars on-line and using a rule-based scheduler to find a time slot would alleviate the problem to some extent, but it often is difficult to trust the results, because correct scheduling rules are elusive, varying with the participants and the agenda of a particular meeting. What's needed is a comprehensive scheduling system that summarizes the available information for quick, flexible, and reliable scheduling. We have developed a prototype of a priority-based, graphical scheduling system called Visual Scheduler (VS). A controlled experiment comparing automatic scheduling with VS to manual scheduling demonstrated the former to be faster and less error prone. A field study conducted over six weeks at the UNC-CH Computer Science Department showed VS to be a generally useful system and provided valuable feedback on ways to enhance the functionality of the system to increase its value as a groupwork tool. In particular, users found priority-based time-slots and access to scheduling decision reasoning advantageous. VS has been in use by more than 75 faculty, staff, and graduate students since Fall 1987.",
"Digital devices today have little understanding of their real-world context, and as a result they often make stupid mistakes. To improve this situation we are developing a database of world knowledge called ThoughtTreasure at the same time that we develop intelligent applications. In this paper we present one such application, SensiCal, a calendar with a degree of common sense. We discuss the pieces of common sense important in calendar management and present methods for extracting relevant information from calendar items.",
""
],
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_7",
"@cite_36",
"@cite_3",
"@cite_11"
],
"mid": [
"1983748693",
"2060353874",
"2059216172",
"1983818780",
"2057811494",
""
]
}
|
An Exploratory Study of Personal Calendar Use
|
Personal Information Management (PIM) is receiving attention as an area of research within the CHI community [Barreau et al., 2008, Bergman et al., 2004, Teevan et al., 2006]. PIM research is mostly concerned with studying how people find, keep, organize, and re-find (or reuse) information in and around their personal information space. Calendar management, one of the typical PIM tasks, is done today using a variety of systems and methods, including several popular paper-based methods: At-A-Glance, one of the largest suppliers of paper planners, sold more than 100 million calendars in 2000.
For computer-based systems, calendar management is often integrated into email clients (e.g. Microsoft Outlook); it is one of the most common applications on personal digital assistants (PDAs, e.g. Blackberries and iPhones), and there are several online calendar systems (e.g. Yahoo! Calendar, Google Calendar, Apple Mobile Me). Date- and time-based information is ubiquitous, and is often available through many means: postings on office doors, displays with dated announcements, email conversations, wall calendars, etc. The result is that calendar information tends to be pervasive.
In this paper, we set out to explore how people use calendars in the presence of varied technological options. We are interested in understanding how calendar information is managed given the availability of these platforms. After a brief review of related work, we proceed to discuss our findings from the survey, interviews, and artifacts. From these, we suggest several opportunities for designers of future electronic calendar systems, and conclude the paper with a discussion of future research in personal information management.
Study Description
The ethnographic approach we took in this study follows techniques commonly reported in the Personal Information Management literature, notably [Kelley and Chapanis, 1982, Payne, 1993, Jones et al., 2005, Marshall and Bly, 2005]. We did not attempt to test any a priori hypotheses, but were interested in examining how calendar practices have evolved in the years following previous calendar studies by Kelley and Chapanis [Kelley and Chapanis, 1982] and Payne [Payne, 1993].
Our study has three components: a survey (N=98), in-person interviews (N=16), and an examination of calendar artifacts such as screenshots and paper calendars. A large-scale online survey was distributed among members of a university. A total of 98 responses were received (54% male and 45% female), including faculty (56%), administrative staff (20%), and students (19%) (figure 1). While previous studies have examined organizational calendars [Dourish et al., 1993] and groupware calendar systems [Grudin, 1996, Palen and Grudin, 2003], our focus was on the personal use of calendars.
Figure 1: Roles of survey participants (faculty 56%, staff 20%, students 19%, other 4%).
In part two, we conducted in-depth personal interviews with 16 participants, recruited from among the survey participants. The recruitment criterion for interview candidates was the same as in [Kelley and Chapanis, 1982]: that participants should be regular users of some form of calendar system, either electronic or paper or a combination of both. Participants included graduate students, faculty members, administrative assistants, a department head, and the director of a small business, among others.
Interviews ranged from 20 to 30 minutes each, and were conducted in situ at their workplaces so we could observe their calendaring practices directly (e.g. calendar programs, wall calendars, or paper scraps). Interviews were semi-structured and open-ended: a prepared set of questions was asked in each interview. The questions we asked were closely modeled on those asked in similar studies [Kelley and Chapanis, 1982, Payne, 1993]. The complete set of questions is available as an appendix in a technical report [Tungare and Pérez-Quiñones, 2008]. As an extension to past studies, we were able to explore the use of features of modern calendar systems such as alarms, reminders, and mobile use, which were absent in paper calendars. Interviewees were encouraged to talk freely and to expand upon any of the themes they wished to discuss in more detail. Additional topics were addressed as appropriate depending on the interviewee's calendar use. Examining the calendar systems in use at their desks or on their walls prompted specific questions from the interviewers about these practices.
All interviews were transcribed in full. We performed content analysis [Krippendorff, 2004] of the transcripts to extract common patterns of use. The main purpose of content analysis in this study was to summarize the findings into groups of common observations, as in [Marshall and Bly, 2005]. Individual responses were tagged into several categories by two of the authors and any differences reconciled by discussion. Nearly 410 tags resulted from this activity; these were then collapsed into 383 tags (grouping together tags that were near-duplicates) and 11 top-level groups during the clustering procedure.
From each interview participant, we collected copies of artifacts that were used for calendaring purposes: 2 weeks' worth of calendar information and any other idiosyncratic observations that were spotted by the interviewers. These included screenshots of their calendar programs, paper calendars, printouts of electronic calendars (that were already printed for their own use), sticky notes stuck on paper calendars, etc. Some of these reflected a degree of wear and tear that occurred naturally over time; others provided evidence of manipulations such as color highlights, annotations in the margins, or comments made in other ways. Artifacts were not coded on any particular dimension, but pictures of these artifacts are used to supplement our textual descriptions wherever appropriate.
Capturing and Adding Events
Capturing events refers to the act of knowing about an event and entering it into a calendaring system (also referred to as the 'keeping' phase in the PIM literature.) Most survey participants reported adding new events as soon as they were (made) aware of them (93%) while the rest added them before the end of the day. Even when at their desks, those users who owned PDAs reported using them to create new events in their calendar: this was deemed faster than trying to start the calendar program on a computer and then adding an event. When away from their desks, they used proxy artifacts such as printed calendar copies or paper scraps.
Information about new events reached the primary calendar user via one of several means: email, phone, and in-person were commonly reported (figure 2). The fact that email was the most common way reported in our study is an expected evolution from older findings [Kelley and Chapanis, 1982] that phones were the most common stimuli for calendar events. Interviewees mentioned several other methods through which they received events: flyers, posters, campus notices, meeting minutes, public calendars (such as academic schedules or sports events), newspapers, internet forums, (postal) mail, fax, radio, or scheduled directly by other people who had access to the calendar (e.g., shared calendars). The wide variety of sources here is a potential indication of the problem of information overload [Schick et al., 1990] faced by knowledge workers.
Personal Calendar View Preference
We refer to the most common time interval shown in a calendar program or on a paper calendar as the preferred personal calendar view: the week view was preferred by most of our survey participants at 44%, followed by the day view at 35%, and the month view at 21% (figure 3). These are very close to the numbers reported by Kelley et al. [Kelley and Chapanis, 1982] (45%, 33%, 22% respectively). That many interviewees preferred a week view suggests the use of the calendar for opportunistic rehearsal, because they browsed the entire week's appointments each time they viewed the calendar. This preference supports the analysis of [Payne, 1993] in that printed versions of the calendar do provide a valuable aid in opportunistic reading of the week's activities. Users who kept multiple calendars within the same calendaring system indicated that they turned the visibility of each calendar on or off on demand, based on the specifics of what they needed to know during a particular lookup task. On smaller devices such as PDAs, the default view was the daily view.
Figure 3: Preferred calendar views
There seem to be two motivators for browsing calendars: looking for activities to attend in the near future, and looking for activities further out that require preparation. A daily view directly supports the first, while a week view partially supports the second one. Intermediates such as Google Calendar's 4-day view afford browsing for future events without losing local context for the current day. The downside of such a view, however, is that days no longer appear in a fixed column position, but in different locations based on the day. Thus, the preferred calendar view depends on the type of activity the user is doing.
Frequency of Consulting the Calendar
When asked about the frequency at which users consulted their calendars, we received a wide range of responses in the survey: keeping the calendar program always open (66%) and several times a day (21%) were the most common.
In the interviews, several other specific times were reported: just before bedtime or after waking up; only when prompted by an alarm; when scheduling a new event; once weekly; or on weekends only. Two interviewees reported consulting their calendar only to check for conflicts before scheduling new events, and for confirmation of events already scheduled.
Proxy Calendar Artifacts
We use the term 'proxy calendar artifacts' (or 'proxies' in short) to refer to ephemeral scraps or notes (characterized as micronotes in [Lin et al., 2004]) or printed calendars or electronic means such as email to self that are used for calendaring when primary calendar systems are unavailable or inaccessible (e.g. when users were away from their desks or offices).
Despite the prevalent use of electronic calendars, many were not portable and were tied to specific desktop computers. This prompted the users to use other means to view or add events to their calendar; about 27% reported that they used proxy artifacts such as scraps or notes to be entered into the primary calendar at a later time. A wide variety of proxy calendar artifacts was reported in our interviews: paper scraps were by far the most common medium; other techniques included carrying laptops solely for the purpose of calendaring, PDAs, voice recorders, and printouts of electronic calendars. Information captured via these proxies was transferred to the primary calendar after a delay: most often, users entered the events as soon as they could access their primary calendar (63% of survey participants), a few others reported entering them within the same day (25%), while the maximum delay reported was up to one week.
Information Stored in an Event Record
Calendar systems allow users to add several items of information to an event record. Typical information included the date of the event (97%), time (96%), location (93%) and purpose (69%) as indicated in the survey. In interviews, it was clear that common fields such as notes, other attendees and status were used only to a limited extent. Location was entered mostly for non-recurring events. However, many other pieces of information were frequently recorded, even though calendar programs do not have a specific field for these data. For example, information critical for participation at an event was entered inline for easy access: e.g. phone numbers for conference calls, cooking menus and shopping lists, meeting agenda, original email for reference, links to relevant web sites, and filenames of relevant files.
One participant mentioned adding meeting participants' email addresses in case she needed to inform them of a cancellation or rescheduling. For activities such as trips or flights, further details such as booking codes and flight details were included as a way of reducing information fragmentation between the calendar system and the email system.
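One way to accommodate this kind of field-less information, sketched below with invented field names, is an event record that combines the standard fields with a free-form 'extras' mapping for items such as dial codes or booking references:

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class EventRecord:
    # Standard fields reported by most participants.
    date: str
    time: Optional[str] = None
    location: Optional[str] = None
    purpose: Optional[str] = None
    notes: Optional[str] = None
    # Free-form extras for information that has no dedicated field:
    # dial codes, booking codes, agenda links, attendee emails, and so on.
    extras: Dict[str, str] = field(default_factory=dict)

flight = EventRecord(
    date="2008-11-10",
    time="07:45",
    purpose="Trip to planning meeting",
    extras={"booking_code": "XYZ123", "airline": "Example Air"},
)
print(flight.extras["booking_code"])  # XYZ123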
Types of Events
The events most commonly recorded on calendars by survey participants were timed events such as appointments or meetings (98%), special events requiring advance planning, such as tests (93%), long-duration events such as the week of final exams at the end of each semester (66%), and all-day events such as birthdays (81%). Several interviewees also mentioned recording to-do items in the calendar, such as phone calls to be made, or tasks which would remain on the calendar until completed or which were scheduled on their deadline. Specifically, we found several instances of the following types of events scheduled:
• Work-related events. Many interviewees used calendar scheduling for work-related events such as meetings, deadlines, classes, public events such as talks and conferences, and work holidays. Users in work environments included vacation details for co-workers and subordinates. Time was routinely blocked off to prepare for other events, e.g. class preparation or groundwork to be done before a meeting.
Interviewees who had administrative assistants reported that their assistant maintained or co-maintained their calendar (7 out of 16 interviewees). The dynamics of shared access were vastly different across these situations. One interviewee mentioned that he would never let an assistant be his primary scheduler; the assistant was able to access only a paper copy, and any new events would be reviewed and added by the primary calendar user. Two other users mentioned that they provided paper calendars to subordinates to keep track of their schedule and to be able to answer questions about it to third parties. One participant reported calling in to their secretary when they needed to consult their schedule while away from their desk (similar to previous reports in [Perry et al., 2001]), while another reported sending email to themselves as a way to quickly capture a newly-scheduled meeting.
• Family/personal events. Half of the survey respondents indicated that they coordinate calendars with their spouses, roommates, or family. Even though family activities such as picking up kids from school, or attending church services, were easily remembered without the aid of a calendar, interviewees reported that they chose to record them anyway to provide "a visual idea of the entire day" (figure 4). Public holidays, family birthdays, and guest visits were added to prevent accidental scheduling of conflicting events.
Figure 4: Family events such as attending church are added to calendars, not for remembering, but to be able to get a visual idea of the entire day.
Many participants reported having separate calendars for business use and for home/personal use, as was also seen in a majority of respondents in [Kelley and Chapanis, 1982]. Although events overlapped between them (e.g. work trips on family calendars and family medical appointments on work calendars), the calendars themselves were located at the respective places and maintained separately. Family calendars were most likely to be kept in the kitchen, on the refrigerator. Two contrasts between work calendars and home calendars were prominent: work calendars were more often electronic, while home calendars were more likely to be paper calendars, e.g. a wall calendar or one on the refrigerator. Work calendars were updated by the primary users or their secretaries or their colleagues, while family calendars were overwhelmingly managed by women. No male participant reported being the only calendar manager at home; women reported either being the only person to edit it, or sharing responsibilities with their husbands. Family-related events and reminders were constrained to the home calendar, as in [Nippert-Eng, 1996], but they were sometimes added to work calendars if such events would impact work time. For example, medical appointments (of self or family members) that occurred during work hours were added to work calendars so that their co-workers were aware of their absence.
• Public events. Public events were added even when the user had no intention of attending that event. They were added to recommend to other people, or for personal planning purposes, or to start conversations related to the public activity. An administrator (from ANONYMIZED, a small university town with a very popular college football team) said that although he had no interest in football, he added home games to his calendar to ensure that visiting dignitaries were not invited during a time when all hotels in town would be booked to capacity. On the other hand, two interviewees considered such public events as contributing to clutter in their personal calendar, and chose not to add them.
Continued Use of Paper Calendars
In his 1993 study [Payne, 1993], Payne reports that the most stable characteristic he observed was the continued reliance of all but two participants on some kind of paper calendar. Our findings are similar: despite most of our users using electronic calendars, every one of them reported using paper calendars even if not regularly; 12 out of 16 interview participants reported using them regularly.
Reasons for the Continued Use of Paper Calendars
We group the several reasons and examples elicited from our participants into the following four categories:
• Paper trail. Cancelled events were scratched off the calendar, leaving a paper trail. Being able to make a distinction between cancelled and never-scheduled events was cited as an important reason for continuing with paper calendars.
• Opportunistic rehearsal. We found support for the idea of opportunistic rehearsal [Payne, 1993]. Users cited that wall calendars needed no more than a glance to read, and provided for quick reference. This also corroborates Dourish's argument [Dourish et al., 1993] that the presence of informational context in paper artifacts such as calendars is an important motivator for people to continue to use them, even though electronic systems support the information retrieval task better.
• Annotation. Paper calendars are more amenable to free-form annotation, as reported earlier [Kelley and Chapanis, 1982], and as the following quotes from our study illustrate:
"That's what I call the graffiti aspect of it, it's probably freer by virtue of being handwritten." "There is a lot of that [code and symbols]. Stars and dashes and circles and headlines, marked and completed." Figure 5 shows a printed calendar with a sticky note pasted on it. The event is about a community potluck dinner. The sticky note complements the scheduled appointment with information about the dish the participant plans to bring to the event. Figure 6 shows a picture of a pumpkin hand-drawn on a printed calendar to mark Halloween on October 31. Figure 5: Sticky notes are pasted on paper calendars to remind oneself of the preparation required for an event. • Prepopulated events. Participants reported that having holidays or other event details already printed in commercially-available paper calendars was an important reason for using them. Calendars distributed by the university contained details not only of academic deadlines, but also of athletic events and games; [Kelley and Chapanis, 1982] point to branding issues as well.
Paper calendars were used alongside electronic calendars in either a supplementary or complementary role, as follows:
Printouts of Electronic Calendars
Printouts of electronic calendars played a supplementary role: they were used as proxies of the master calendar when the master calendar was unavailable. 35% of survey participants reported printing their calendar. Among those who printed, all views were common: monthly (43%), weekly (33%) and daily (25%) (figure 3). Many printed their calendar monthly, weekly or daily (figure 7).
Figure 7: How often users perform activities related to paper calendars (monthly, weekly, daily, or never).
Based on our interviews, we found that electronic calendars were printed for three main reasons:
• Portability. Users carried a printed copy of the master calendar to venues where collaboration was anticipated, such as a meetings or trips. Even those who carried laptops and PDAs said that they relied on printed calendars for quick reference.
• Quick capture. Events were often entered into paper calendars first because of their easy accessibility, and were later transferred back to the digital calendar. One-third of all interviewees reported making changes to paper copies of their calendars. Not all these changes were propagated back to the master calendar, however.
• Sharing a read-only view with associates. Taping a printed calendar to the outside of office doors was common practice, as reported by interviewees.
In one instance, a user provided printed calendars to his subordinates so they could schedule him for meetings. These events were then screened by him before being added to the master calendar.
Wall Calendars
Wall calendars typically played a complementary role, and there was little overlap between the events on a wall calendar and those in an electronic calendar. 70% of survey participants had a wall calendar in their home or office; however, only 25% actually recorded events on it. Family events such as birthdays, vacations, and days off were most commonly recorded by interviewees. At home, wall calendars were located in the kitchen, on the fridge.
Index Cards
An extreme case of ad hoc paper calendar usage reported by one of our interviewees involved index cards, one for each day, that the participant carried in his shirt pocket when he forgot his PDA. Another interviewee reported exclusively using index cards for calendar management at their previous job because of their portability and trustworthiness. We report this not as a trend, but to illustrate the wide variety in the use of paper calendars.
Reminders and Alarms
Reminders and alarms are one of the major distinguishing features of modern electronic calendar systems. A majority of survey participants (63%) reported using these features. One user reported switching from paper to an online calendar because "a paper calendar cannot have an alarm feature". We use the term reminder to refer to any notification of a calendar event, and alarm to refer to the specific case of an interruption generated by the calendar system. Based on our interviews, we classified reminders into three categories taking into consideration the reasons, time, number, modalities and intervals of alarms. Before presenting the details of such a classification, however, we examine the individual factors in more detail.
Reasons for Using Alarms
Although reminding oneself of upcoming events is the most obvious use case for alarms, there were several other situations where users mentioned using reminders in addition to consulting their calendars regularly. Even when users were cognizant of upcoming events, they preferred to set alarms to interrupt them and grab their attention at the appointed hour. Alarms served as preparation reminders for events that were not necessarily in the immediate future.
When subordinates added events to a primary user's calendar, alarms were deemed an important way of notifying that user of such events. Early morning meeting reminders doubled up as wake-up alarms: one interviewee reported keeping their PDA by their bedside for this purpose. Another interviewee who needed to move his car out of a university parking lot where towing started at 8:00 am sharp had set a recurring alarm (figure 8). In one case, alarms were closely monitored by a user's secretary: if an event were missed by the user by a few minutes, the secretary would check on her boss and remind him to attend the meeting that was now overdue.
Number and Modalities of Reminders
While most survey participants only set a single reminder per event (52%), many others reported using multiple alarms. We conclude from our interviews that different semantic meanings were assigned to each such reminder: an alarm one day before an event was for preparation purposes, while an alarm 15 minutes before an event was a solicited interruption. Multimodal alarms were not used by many: the two most popular modalities used individually were audio (40%) and on-screen dialogs (41%).
Alarm Intervals
Reminders were set for varying intervals of time before the actual events took place, ranging from 5 minutes to several years. The two factors that affected this timing were (1) location of the event, and (2) whether or not (and how much) preparation was required. Users often set multiple alarms to be able to satisfy each of these requirements, because a single alarm could not satisfy them all. Based on these findings, we classify alarms into 3 categories:
• Interruption Reminders. Alarms set 5-15 minutes before an event were extremely short-term interruptions intended to get users up from their desks. Even if they knew in their mind that a particular event was coming up, it was likely that they were involved in their current activity deeply enough to overlook the event at the precise time it occurred. 15 minutes was the most common interval, as reported by 8 out of 16 interview participants. We found that the exact interval for interruption reminders was a function of the location of the event. Events that occurred in the same building as the user's current location had alarms set for between 5 and 15 minutes. Events in a different building had alarms for between 15 minutes and 30 minutes, based on the time it would take to reach there. Two interviewees reported that they set alarms for TV shows and other activities at home for up to 1 hour prior, because that is how long their commute took.
• Preparation Reminders. Users set multiple alarms when preparation was required for an event: the first (or earlier) alarm was to alert them to begin the preparation, while a later alarm was the interruption reminder for that event.
Payne [Payne, 1993] mentions the prevalence of this tendency as well: the reason for the first alarm (out of several) is to aid prospective remembering where the intention to look up an event is not in response to a specific temporal condition, but instead such conditions are checked after the intention is recalled. If certain items were needed to be taken to such meetings, preparation reminders were set for the previous night or early morning on the day of the event. Based on the interviews, preparation reminders were more commonly used for non-recurring events than for recurring events.
• Long-term Reminders. Events several months or years into the future were assigned reminders so that the user would not have to remember to consult the calendar at that time, but instead would have them show up automatically at (or around) the proper time. This is an illustration of using the calendar for prospective remembering tasks. Examples include a department head who put details of faculty coming up for tenure in up to 5 years, and a professor setting reminders for a conference submission deadline several months later.
Calendars as a Memory Aid
Calendars serve a valuable purpose as external memory for events [Payne, 1993]. In addition, in our data we found that the role that calendars play with respect to memory goes beyond this simple use. In particular, the following uses of calendars illustrate the different ways in which calendars serve as memory aids beyond simple lookups: First, users reported recording events in the calendar after the fact, not for the purpose of reminding, but to support reporting needs. Second, a few reported using previous years' calendars as a way to record memorable events to be remembered in future years. For those who used paper calendars, these events were often copied at the end of the year to newer calendars. The function of memory aid goes beyond remembering personal events (appointments and deadlines); the calendar serves as a life journal, capturing events year after year. Kelley and Chapanis [Kelley and Chapanis, 1982] reported that 9 out of 11 respondents in their study kept calendars from two to 15 years.
Reporting Purposes
In our study, 10 out of 16 interviewees reported that they used their calendar to generate annual reports every year. Since it contained an accurate log of all their activities that year, it was the closest to a complete record of all accomplishments for that year. Among these, 5 users reported that they archived their calendars year after year to serve as a reference for years later. This tendency has also been reported in past studies [Kelley andChapanis, 1982, Payne, 1993]; Kelley referred to it as an 'audit trail', and highlighted the role of calendars in reporting and planning.
One person mentioned that they discovered their father's journal a few years after his death, and now they cultivate their calendar as a memento to be shared with their kids in the future.
"I think I occasionally even think about my kids. Because I do, I save them, I don't throw them away [...] I think that it's common with a little more sense of mortality or something. It's trying to moving things outwards."
Opportunities for Design
In this section, we highlight how some of our findings can be addressed through new electronic calendar designs.
Paper Calendars and Printing
We do not believe that paper calendars will disappear from use; they serve several useful functions that are hard to replace with technology. Electronic calendars are, in general, more feature-rich than paper calendars, and portable devices have good support for capturing information while mobile. Yet we found that paper calendars and proxies continue to be prevalent in calendar management: they provide easy capture of calendar information, are effective for sharing, and make it easy to display the calendar in public view.
Therefore, given the many uses of paper calendars, we consider how electronic calendar systems can provide better support for these proxies. Richer printing capabilities might make it easier to transfer online calendar information to the paper domain. Printing a wall calendar is currently a novelty relegated to specialized design software; however, our findings show that wall calendars play a significant role in supporting calendar management, particularly at home. With affordable printing technology available, it is possible to print a wall calendar or table calendar at home, incorporating not only details of events from a user's personal electronic calendar, but also visual elements such as color coding, digital photos (for birthdays, etc.), and event icons. In this way, printed calendars would be used in ways similar to those discussed in [Lin et al., 2004].
Digital Paper Trails
Some of the features of paper calendars can be recreated in online systems. For example, current electronic calendar systems remove all traces of an event upon cancellation, without providing an option to retain this historical record; this was one of the shortcomings that led interview participants to rely on paper instead. Instead of deleting events, they could be faded out of view and made visible upon request. Most calendar software supports the notion of multiple calendars inside the same program, so deleted events could simply be moved to a separate calendar, where they can be hidden easily yet remain available as a record of cancelled activity.
Tentative Event Scheduling
Several participants indicated that they 'penciled in' appointments in their paper calendars as tentative appointments to be confirmed later (also identified as a problem in [Kelley and Chapanis, 1982]). These tentative appointments served as a way of blocking particular date/time combinations while a meeting was being scheduled with others. Often there were several of these tentative times for a particular meeting; once the meeting was confirmed, only one of them was kept and the rest were discarded. Corporate calendar systems offer adequate support for scheduling group meetings, but this type of activity is not well supported in personal calendars.
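A possible way to support 'penciled-in' slots is sketched below; the class and method names are hypothetical. Several tentative holds share one meeting identifier, and confirming one slot releases all the others.

from collections import defaultdict

class TentativeScheduler:
    """Holds several candidate slots per meeting; confirming one discards the rest."""
    def __init__(self):
        self.holds = defaultdict(list)   # meeting id -> list of tentatively held slots
        self.confirmed = {}              # meeting id -> confirmed slot

    def pencil_in(self, meeting_id: str, slot: str) -> None:
        self.holds[meeting_id].append(slot)

    def confirm(self, meeting_id: str, slot: str) -> None:
        if slot not in self.holds[meeting_id]:
            raise ValueError("slot was never penciled in")
        self.confirmed[meeting_id] = slot
        self.holds.pop(meeting_id)       # release all other tentative holds

scheduler = TentativeScheduler()
scheduler.pencil_in("budget-review", "Mon 10:00")
scheduler.pencil_in("budget-review", "Tue 14:00")
scheduler.confirm("budget-review", "Tue 14:00")
print(scheduler.confirmed)    # {'budget-review': 'Tue 14:00'}
print(dict(scheduler.holds))  # {}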
Intelligent Alarms
Calendar alarms and reminders have evolved from past systems and now allow notification in several ways: audible alarms, short text messages, popup reminders, and email are just a few. However, the fundamental concept of an alarm still caters only to interruption reminders.
• Preparation reminders. To support preparation reminders, many electronic calendars allow the creation of multiple alarms per event, with different modalities for each (e.g., email, SMS, sounds, dialog box). However, when these reminders are used for preparation, as we found in the study, users often wanted more context: they expected to have an optional text note indicating what preparation was required. Examples include an alarm that reminds a user before leaving home to carry material for an upcoming meeting, or a reminder the previous night to review documents.
• Location-related alarms. The location of events was found to be an important factor in choosing alarm times. If calendars supported the notion of location (beyond simply providing a field to type it in), alarms could be set automatically based on how long it would take the user to reach the event.
• Alarms on multiple devices. When an alarm is set on multiple devices, each goes off at exactly the same time without any knowledge of the others. There is a need to establish communication among the devices so that a single alarm is presented to the user on a mutually determined dominant device; a sketch of one such coordination scheme appears after this list.
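The coordination scheme referred to in the last bullet could look roughly like the sketch below; the 'most recently active device wins' rule and the device names are assumptions made for illustration.

from datetime import datetime

# Assumed registry of devices and when the user last interacted with each.
last_active = {
    "desktop": datetime(2008, 11, 3, 13, 50),
    "phone":   datetime(2008, 11, 3, 13, 58),
    "laptop":  datetime(2008, 11, 3, 9, 15),
}

def dominant_device(last_active):
    """Pick the device the user touched most recently as the one that should alarm."""
    return max(last_active, key=last_active.get)

def deliver_alarm(message, last_active):
    # Only the dominant device rings; the others stay silent to avoid a chorus of alarms.
    device = dominant_device(last_active)
    return {d: (message if d == device else None) for d in last_active}

print(deliver_alarm("Committee meeting in 15 minutes", last_active))
# {'desktop': None, 'phone': 'Committee meeting in 15 minutes', 'laptop': None}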
Supporting a Rich Variety of Event Types
Users reported that not all events were equal: public events were merely for awareness, recurring events indicated that time was blocked out, and holidays were added to prevent accidental scheduling. From the users' point of view, each type of event has different connotations, different visibility (public events should ideally fade out of sight when not required), and different types, numbers, and intervals of alarms.
• Event templates. A calendar system that supports event types can let users create event templates and categories with different default settings along each of the dimensions outlined above. Event templates also support quick capture: when much of the extra information about an event is pre-filled, data entry can be reduced to just the title of the event. Certain types of events have special metadata fields associated with them, e.g. conference-call events contain the dial code and flight events contain airline and arrival/departure information; templates can supply these fields as well.
• Showing/hiding public events. While a few users said they added public events for informational purposes, others did not want public events (that they would not necessarily attend) to clutter their calendar. If calendars supported making certain event types visible or invisible on demand, the needs of both user groups could be met. Again, by providing an option to keep all events in the same calendar, such a system would contribute to reducing information fragmentation.
Reporting and Archival Support
Report generation is a significant use of electronic calendars. Calendar software should provide a way to generate reports and export information, so that particular groups of events can be summarized in terms of when the meetings or events occurred and how many hours were devoted to them, and so that any notes entered in the calendar can be captured. One participant reported that he uses the search functionality in his calendar to obtain a listing of events related to a theme; this gives him an idea of the number of hours devoted to particular activities and helps him prepare an annual activity report.
Discussion & Future Work
The paradox of encoding and remembering, as described in [Payne, 1993], was clearly evident in our data. Participants seem to over-rely on calendar artifacts to remember appointments, as seen in the setting of multiple alarms, printing of calendars for meetings, carrying a PDA everywhere, and calling their secretary to confirm events. The unfortunate side effect of sharing the management of a calendar with other people is that the primary user no longer goes through the personal encoding episode of entering the information. Some participants relied on administrative assistants to enter events in their calendars. At home, many participants relied on their spouses to maintain the calendar. Some participants even suggested the need to have an alarm for when events were added to their calendars. All of this points to a diminished opportunity for encoding the information that is entered into one's calendar. This makes it very difficult for participants to remember what is in their calendar, given that at times the scheduled events have never been seen before they occur. On the other hand, the opportunity for rehearsal is greater today, if users take advantage of existing information dissemination and syndication techniques. For example, keeping a calendar on a desktop computer and publishing it to an online calendar service such as Google Calendar or Apple Mobile Me makes the calendar available in many other locations. Users can view their calendar on the web from any web browser, from mobile phones, or in the background on a desktop computer as part of widgets (tiny applications) such as Apple's Dashboard or Google Gadgets, or access it over a regular phone call [Pérez-Quiñones and Rode, 2004]. So, the possibility of opportunistic rehearsal is afforded by current systems. We did not observe this in our data, as many of our users did not use these services. However, the paradox of encoding, rehearsal, and recall seems to be in need of future work so we can understand the impact of electronic calendar systems on human memory.
Appendix: Survey Questions
• What is your age group?
Calendar Use Basics
• Which devices do you own or use frequently?
• What computing-enabled calendars do you use?
• Do you use your computer to keep your calendar? If so, which program do you use for your main calendar management task on your desktop/laptop computer?
• If you own and/or use a PDA, which calendar program do you use on the PDA?
• Do you use an online calendar?
• What events do you record on your calendar?
• How often do you visit your calendar?
• How far ahead do you regularly look when you view your calendar?
• What would you consider your preferred view?
• If your calendar software includes a To-Do function, do you use it?
• Does your calendar software have a way to classify calendar events by categories? If so, how do you use this feature?
• Who changes and updates your calendar?
• How often do you add new events?
• Do you keep 'proxies' (for example, post-its) or other notes that need to be entered in the calendar at a later time?
• How long does it take for the proxy to make it into your main calendar?
New Events
• How frequently do you get events by phone (someone calls you) that go into your calendar?
• How frequently do you get events by e-mail (someone sends you email) that go into your calendar?
• How frequently do you get events in person (someone tells you of a meeting) that go into your calendar?
• By what other methods do new events arrive?
• Is there any overlap? Is one just a pared-down version of the other one or do they contain completely separate events?
• Do you coordinate calendar events with your spouse, roommate, family?
• If so, how do you go about doing that?
• Please explain any additional ways in which you use your calendar system.
• What are your habits as far as when you look at your calendar: how often, how far ahead you look, how in-depth you examine events when you look, etc.?
• Do you use a method of organization on a paper calendar that you cannot apply to an electronic calendar? (e.g., specific types of events go into a specific area of the date box, highlighted events, etc.)
• Is there anything else about your personal information management we have not covered?
| 6,477 |
0809.3447
|
1619717240
|
In this paper, we report on findings from an ethnographic study of how people use their calendars for personal information management (PIM). Our participants were faculty, staff and students who were not required to use or contribute to any specific calendaring solution, but chose to do so anyway. The study was conducted in three parts: first, an initial survey provided broad insights into how calendars were used; second, this was followed up with personal interviews of a few participants which were transcribed and content-analyzed; and third, examples of calendar artifacts were collected to inform our analysis. Findings from our study include the use of multiple reminder alarms, the reliance on paper calendars even among regular users of electronic calendars, and wide use of calendars for reporting and life-archival purposes. We conclude the paper with a discussion of what these imply for designers of interactive calendar systems and future work in PIM research.
|
In the field of Personal Information Management, the management of various information collections such as files @cite_20 , folders @cite_12 , email @cite_6 , bookmarks @cite_34 and cross-collection issues @cite_9 @cite_22 has been studied widely. Calendars are an important part of users' personal information, and this domain can benefit from a re-examination in the wake of electronic and ubiquitous calendar systems.
|
{
"abstract": [
"",
"This paper reports a study of Personal Information Management (PIM), which advances research in two ways: (1) rather than focusing on one tool, we collected cross-tool data relating to file, email and web bookmark usage for each participant, and (2) we collected longitudinal data for a subset of the participants. We found that individuals employ a rich variety of strategies both within and across PIM tools, and we present new strategy classifications that reflect this behaviour. We discuss synergies and differences between tools that may be useful in guiding the design of tool integration. Our longitudinal data provides insight into how PIM behaviour evolves over time, and suggests how the supporting nature of PIM discourages reflection by users on their strategies. We discuss how the promotion of some reflection by tools and organizations may benefit users.",
"Email is one oftl most successful computer applicmiom yet devised. Our empin :al ct ta show however, that althongh email was origiraUy designed as a c nmunica ons application, it is now used for tional funaions, that it was not designed for, such as tab management and persona afoOt v rig. We call this ernt l oveHoad We demonstrate that email overload creates problems for personal information manageaa,cnt: users eden have cluttered inboxes cor mining hundreds of n :age ¢, incl rling outstanding tasks, partially read documents and conversational threads. Furthermore,, user attemt:Xs to rationalise their inbox by ing are Ron unsuccessful, with the consequence that important rr ges get overlooked, or \"lost\" in archives. We explain how em l over oad ng arises and propose technical solutions to the problem.",
"Bookmarks are used as \"personal Web information spaces\" to help people remember and retrieve interesting Web pages. A study of personal Web information spaces surveyed 322 Web users and analyzed the bookmark archives of 50 Web users. The results of this study are used to address why people make bookmarks, and how they create, use, and organize them. Recommendations for improving the organization, visualization, representation, and integration of bookmarks are provided. The recommendations include simple mechanisms for filing bookmarks at creation time, the use of time-based visualizations with automated filters, the use of contextual information in representing bookmarks, and the combination of hierarchy formation and Web page authoring to aid in organizing and viewing bookmarks.",
"This paper summarizes and synthesizes two independent studies of the ways users organize and find files on their computers. The first study (Barreau 1995) investigated information organization practices among users of DOS, Windows and OS 2. The second study (Nardi, Anderson and Erickson 1995), examined the finding and filing practices of Macintosh users. There were more similarities in the two studies than differences. Users in both studies (1) preferred location-based finding because of its crucial reminding function; (2) avoided elaborate filing schemes; (3) archived relatively little information; and (4) worked with three types of information: ephemeral, working and archived. A main difference between the study populations was that the Macintosh users used subdirectories to organize information and the DOS users did not.",
"A study explores the way people organize information in support of projects (\"teach a course\", \"plan a wedding\", etc.). The folder structures to organize project information - especially electronic documents and other files - frequently resembled a \"divide and conquer\" problem decomposition with subfolders corresponding to major components (subprojects) of the project. Folders were clearly more than simply a means to one end: Organizing for later retrieval. Folders were information in their own right - representing, for example, a person's evolving understanding of a project and its components. Unfortunately, folders are often \"overloaded\" with information. For example, folders sometimes included leading characters to force an ordering (\"aa\", \"zz\"). And folder hierarchies frequently reflected a tension between organizing information for current use vs. repeated re-use."
],
"cite_N": [
"@cite_22",
"@cite_9",
"@cite_6",
"@cite_34",
"@cite_20",
"@cite_12"
],
"mid": [
"",
"2065132166",
"2137891816",
"2042486495",
"2097127516",
"2165703420"
]
}
|
An Exploratory Study of Personal Calendar Use
|
Personal Information Management (PIM) is receiving attention as an area of research within the CHI community [Barreau et al., 2008, Bergman et al., 2004, Teevan et al., 2006]. PIM research is mostly concerned with studying how people find, keep, organize, and re-find (or reuse) information in and around their personal information space. Calendar management, one of the typical PIM tasks, is done today using a variety of systems and methods, including several popular paper-based methods: At-A-Glance, one of the largest suppliers of paper planners, sold more than 100 million calendars in 2000.
For computer-based systems, calendar management is often integrated into email clients (e.g. Microsoft Outlook); it is one of the most common applications in all personal digital assistants (PDAs, e.g. Blackberries and iPhones), and there are several online calendar systems (e.g. Yahoo! Calendar, Google Calendar, Apple Mobile Me). Date- and time-based information is ubiquitous, and is often available through many means such as postings on office doors, displays with dated announcements, through email conversations, written on wall calendars, etc. The result is that calendar information tends to be pervasive.
In this paper, we set out to explore how people use calendars in the presence of varied technological options. We are interested in understanding how calendar information is managed given the availability of these platforms. After a brief review of related work, we proceed to discuss our findings from the survey, interviews, and artifacts. From these, we suggest several opportunities for designers of future electronic calendar systems, and conclude the paper with a discussion of future research in personal information management.
Study Description
The ethnographic approach we took in this study follows techniques commonly reported in the Personal Information Management literature, notably [Kelley and Chapanis, 1982, Payne, 1993, Jones et al., 2005, Marshall and Bly, 2005]. We did not attempt to test any a priori hypotheses, but were interested in examining how calendar practices have evolved in the years following previous calendar studies by Kelley and Chapanis [Kelley and Chapanis, 1982] and Payne [Payne, 1993].
Our study has three components: a survey (N=98), in-person interviews (N=16), and an examination of calendar artifacts such as screenshots and paper calendars. A large-scale online survey was distributed among members of a university. A total of 98 responses were received (54% male and 45% female), including faculty (56%), administrative staff (20%), and students (19%) (figure 1). While previous studies have examined organizational calendars [Dourish et al., 1993] and groupware calendar systems [Grudin, 1996, Palen and Grudin, 2003], our focus was on the personal use of calendars.
Figure 1: Roles of survey participants
In part two, we conducted in-depth personal interviews with 16 participants, recruited from among the survey participants. The recruitment criterion for interview candidates was the same as in [Kelley and Chapanis, 1982]: that participants should be regular users of some form of calendar system, either electronic or paper or a combination of both. Participants included graduate students, faculty members, administrative assistants, a department head, the director of a small business, etc., among others.
Interviews ranged from 20 to 30 minutes each, and were conducted in situ at their workplaces so we could observe their calendaring practices directly (e.g. calendar programs or wall calendars or paper scraps). Interviews were semistructured and open-ended: a prepared set of questions was asked in each interview. The questions we asked were closely modeled on those asked in similar studies [Kelley and Chapanis, 1982, Payne, 1993]. The complete set of questions is available as an appendix in a technical report [Tungare and Pérez-Quiñones, 2008]. As an extension to past studies, we were able to explore the use of features of modern calendar systems such as alarms, reminders, and mobile use, which were absent in paper calendars. Interviewees were encouraged to talk freely and to expand upon any of the themes they wished to discuss in more detail. Additional topics were addressed as appropriate depending on the interviewee's calendar use. Examining the calendar systems in use at their desks or on their walls prompted specific questions from the interviewers about these practices.
All interviews were transcribed in full. We performed content analysis [Krippendorff, 2004] of the transcripts to extract common patterns of use. The main purpose of content analysis in this study was to summarize the findings into groups of common observations, as in [Marshall and Bly, 2005]. Individual responses were tagged into several categories by two of the authors and any differences reconciled by discussion. Nearly 410 tags resulted from this activity; these were then collapsed into 383 tags (grouping together tags that were near-duplicates) and 11 top-level groups during the clustering procedure.
From each interview participant, we collected copies of artifacts that were used for calendaring purposes: 2 weeks' worth of calendar information and any other idiosyncratic observations that were spotted by the interviewers. These included screenshots of their calendar programs, paper calendars, printouts of electronic calendars (that were already printed for their own use), sticky notes stuck on paper calendars, etc. Some of these reflected a degree of wear and tear that occurred naturally over time; others provided evidence of manipulations such as color highlights, annotations in the margins, or comments made in other ways. Artifacts were not coded on any particular dimension, but pictures of these artifacts are used to supplement our textual descriptions wherever appropriate.
Capturing and Adding Events
Capturing events refers to the act of knowing about an event and entering it into a calendaring system (also referred to as the 'keeping' phase in the PIM literature.) Most survey participants reported adding new events as soon as they were (made) aware of them (93%) while the rest added them before the end of the day. Even when at their desks, those users who owned PDAs reported using them to create new events in their calendar: this was deemed faster than trying to start the calendar program on a computer and then adding an event. When away from their desks, they used proxy artifacts such as printed calendar copies or paper scraps.
Information about new events reached the primary calendar user via one of several means: email, phone, and in-person were commonly reported (figure 2). The fact that email was the most common way reported in our study is an expected evolution from older findings [Kelley and Chapanis, 1982] that phones were the most common stimuli for calendar events. Interviewees mentioned several other methods through which they received events: flyers, posters, campus notices, meeting minutes, public calendars (such as academic schedules or sports events), newspapers, internet forums, (postal) mail, fax, radio, or scheduled directly by other people who had access to the calendar (e.g., shared calendars). The wide variety of sources here is a potential indication of the problem of information overload [Schick et al., 1990] faced by knowledge workers.
Personal Calendar View Preference
We refer to the most common time interval shown in a calendar program or on a paper calendar as the preferred personal calendar view: the week view was preferred by most of our survey participants at 44%, followed by the day view at 35%, and the month view at 21% (figure 3). These are very close to the numbers reported by Kelley et al. [Kelley and Chapanis, 1982] (45%, 33%, 22% respectively). That many interviewees preferred a week view suggests the use of the calendar for opportunistic rehearsal, because they browsed the entire week's appointments each time they viewed the calendar. This preference supports the analysis of [Payne, 1993] in that the printed versions of the calendar do provide a valuable aid in opportunistic reading of the week's activities. Users who kept multiple calendars within the same calendaring system indicated that they turned the visibility of each calendar on or off on demand, based on the specifics of what they needed to know during a particular lookup task. On smaller devices such as PDAs, the default view was the daily view.
Figure 3: Preferred calendar views
There seem to be two motivators for browsing calendars: looking for activities to attend in the near future, and looking for activities further out that require preparation. A daily view directly supports the first, while a week view partially supports the second one. Intermediates such as Google Calendar's 4-day view afford browsing for future events without losing local context for the current day. The downside of such a view, however, is that days no longer appear in a fixed column position, but in different locations based on the day. Thus, the preferred calendar view depends on the type of activity the user is doing.
Frequency of Consulting the Calendar
When asked about the frequency at which users consulted their calendars, we received a wide range of responses in the survey: keeping the calendar program always open (66%) and several times a day (21%) were the most common.
In the interviews, several other specific times were reported: just before bedtime or after waking up; only when prompted by an alarm; when scheduling a new event; once weekly; or on weekends only. Two interviewees reported consulting their calendar only to check for conflicts before scheduling new events, and for confirmation of events already scheduled.
Proxy Calendar Artifacts
We use the term 'proxy calendar artifacts' (or 'proxies' in short) to refer to ephemeral scraps or notes (characterized as micronotes in [Lin et al., 2004]) or printed calendars or electronic means such as email to self that are used for calendaring when primary calendar systems are unavailable or inaccessible (e.g. when users were away from their desks or offices).
Despite the prevalent use of electronic calendars, many were not portable and were tied to specific desktop computers. This prompted the users to use other means to view or add events to their calendar; about 27% reported that they used proxy artifacts such as scraps or notes to be entered into the primary calendar at a later time. A wide variety of proxy calendar artifacts was reported in our interviews: paper scraps were by far the most common medium; other techniques included carrying laptops solely for the purpose of calendaring, PDAs, voice recorders, and printouts of electronic calendars. Information captured via these proxies was transferred to the primary calendar after a delay: most often, users entered the events as soon as they could access their primary calendar (63% of survey participants), a few others reported entering them within the same day (25%), while the maximum delay reported was up to one week.
Information Stored in an Event Record
Calendar systems allow users to add several items of information to an event record. Typical information included the date of the event (97%), time (96%), location (93%) and purpose (69%) as indicated in the survey. In interviews, it was clear that common fields such as notes, other attendees and status were used only to a limited extent. Location was entered mostly for non-recurring events. However, many other pieces of information were frequently recorded, even though calendar programs do not have a specific field for these data. For example, information critical for participation at an event was entered inline for easy access: e.g. phone numbers for conference calls, cooking menus and shopping lists, meeting agenda, original email for reference, links to relevant web sites, and filenames of relevant files.
One participant mentioned adding meeting participants' email addresses in case she needed to inform them of a cancellation or rescheduling. For activities such as trips or flights, further details such as booking codes and flight details were included as a way of reducing information fragmentation between the calendar system and the email system.
Types of Events
The events most commonly recorded on calendars by survey participants were timed events such as appointments or meetings (98%), special events requiring advance planning, such as tests (93%), long duration events such as the week of final exams at the end of each semester (66%), and all-day events such as birthdays (81%). Several interviewees also mentioned recording to-do items in a calendar, such as phone calls to be made, or tasks which would remain on the calendar until completed, or which were scheduled in on their deadline. Specifically, we found several instances of the following types of events scheduled:
• Work-related events. Many interviewees used calendar scheduling for work-related events such as meetings, deadlines, classes, public events such as talks and conferences, and work holidays. Users in work environments included vacation details for co-workers and subordinates. Time was routinely blocked off to prepare for other events: e.g. class preparation or ground work to be done before a meeting.
Interviewees who had administrative assistants reported that their assistant maintained or co-maintained their calendar (7 out of 16 interviewees). The dynamics of shared access were vastly different across all these situations. One interviewee mentioned that he would never let an assistant be their primary scheduler; the assistant was able to access only a paper copy and any new events would be reviewed and added by the primary calendar user. Two other users mentioned that they provided paper calendars to subordinates to keep track of their schedule and to be able to answer questions about it to third parties. One participant reported calling in to their secretary when they needed to consult their schedule while away from their desk (similar to previous reports in [Perry et al., 2001]), while another reported sending email to themselves as a way to quickly capture a newly-scheduled meeting.
• Family/personal events. Half of the survey respondents indicated that they coordinate calendars with their spouses, roommates, or family. Even though family activities such as picking up kids from school, or attending church services, were easily remembered without the aid of a calendar, interviewees reported that they chose to record them anyway to provide "a visual idea of the entire day" (figure 4). Public holidays, family birthdays, and guest visits were added to prevent accidental scheduling of conflicting events.
Figure 4: Family events such as attending church are added to calendars, not for remembering, but to be able to get a visual idea of the entire day.
Many participants reported having separate calendars for business use and for home/personal use, as was also seen in a majority of respondents in [Kelley and Chapanis, 1982]. Although events overlapped between them (e.g. work trips on family calendars and family medical appointments on work calendars), the calendars themselves were located at the respective places and maintained separately. Family calendars were most likely to be kept in the kitchen, on the refrigerator. Two contrasts between work calendars and home calendars were prominent: work calendars were more often electronic, while home calendars were more likely to be paper calendars, e.g. as a wall calendar, or on the refrigerator. Work calendars were updated by the primary users or their secretaries or their colleagues, while family calendars were overwhelmingly managed by women. No male participant reported being the only calendar manager at home; women reported either being the only person to edit it, or sharing responsibilities with their husbands. Family-related events and reminders were constrained to the home calendar, as in [Nippert-Eng, 1996], but they were sometimes added to work calendars if such events would impact work time. For example, medical appointments (of self or family members) that occurred during work hours were added to work calendars so that their co-workers were aware of their absence.
• Public events. Public events were added even when the user had no intention of attending that event. They were added to recommend to other people, or for personal planning purposes, or to start conversations related to the public activity. An administrator (from ANONYMIZED, a small university town with a very popular college football team) said that although he had no interest in football, he added home games to his calendar to ensure that visiting dignitaries were not invited during a time when all hotels in town would be booked to capacity. On the other hand, two interviewees considered such public events as contributing to clutter in their personal calendar, and chose not to add them.
Continued Use of Paper Calendars
In his 1993 study [Payne, 1993], Payne reports that the most stable characteristic he observed was the continued reliance of all but two participants on some kind of paper calendar. Our findings are similar: despite most of our users using electronic calendars, every one of them reported using paper calendars even if not regularly; 12 out of 16 interview participants reported using them regularly.
Reasons for the Continued Use of Paper Calendars
We group the several reasons and examples elicited from our participants into the following four categories:
• Paper trail. Cancelled events were scratched off the calendar, leaving a paper trail. Being able to make a distinction between cancelled and never-scheduled events was cited as an important concern for continuing with paper calendars.
• Opportunistic rehearsal. We found support for the idea of opportunistic rehearsal [Payne, 1993]. Users cited that wall calendars needed no more than a glance to read, and provided for quick reference. This also corroborates Dourish's argument [Dourish et al., 1993] that the presence of informational context in paper artifacts such as calendars is an important motivator for people to continue to use them, even though electronic systems support the information retrieval task better.
• Annotation. Paper calendars are more amenable to free-form annotation, as reported earlier [Kelley and Chapanis, 1982], and as the following quotes from our study illustrate:
"That's what I call the graffiti aspect of it, it's probably freer by virtue of being handwritten."
"There is a lot of that [code and symbols]. Stars and dashes and circles and headlines, marked and completed."
Figure 5 shows a printed calendar with a sticky note pasted on it. The event is about a community potluck dinner. The sticky note complements the scheduled appointment with information about the dish the participant plans to bring to the event. Figure 6 shows a picture of a pumpkin hand-drawn on a printed calendar to mark Halloween on October 31.
Figure 5: Sticky notes are pasted on paper calendars to remind oneself of the preparation required for an event.
• Prepopulated events. Participants reported that having holidays or other event details already printed in commercially-available paper calendars was an important reason for using them. Calendars distributed by the university contained details not only of academic deadlines, but also of athletic events and games; [Kelley and Chapanis, 1982] point to branding issues as well.
Paper calendars were used alongside electronic calendars in either a supplementary or complementary role, as follows:
Printouts of Electronic Calendars
Printouts of electronic calendars played a supplementary role: they were used as proxies of the master calendar when the master calendar was unavailable. 35% of survey participants reported printing their calendar. Among those who printed, all views were common: monthly (43%), weekly (33%) and daily (25%) (figure 3). Many printed their calendar monthly, weekly or daily (figure 7).
Figure 7: How often users perform activities related to paper calendars.
Based on our interviews, we found that electronic calendars were printed for three main reasons:
• Portability. Users carried a printed copy of the master calendar to venues where collaboration was anticipated, such as meetings or trips. Even those who carried laptops and PDAs said that they relied on printed calendars for quick reference.
• Quick capture. Events were often entered into paper calendars first because of their easy accessibility, and were later transferred back to the digital calendar. One-third of all interviewees reported making changes to paper copies of their calendars. Not all these changes were propagated back to the master calendar, however.
• Sharing a read-only view with associates. Taping a printed calendar to the outside of office doors was common practice, as reported by interviewees.
In one instance, a user provided printed calendars to his subordinates so they could schedule him for meetings. These events were then screened by him before being added to the master calendar.
Wall Calendars
Wall calendars typically played a complementary role, and there was little overlap between the events on a wall calendar and those in an electronic calendar. 70% of survey participants had a wall calendar in their home or office; however, only 25% of users actually recorded events on it. Family events such as birthdays, vacations, and days off were most commonly recorded by interviewees. At home, wall calendars were located in the kitchen, on the fridge.
Index Cards
An extreme case of ad hoc paper calendar usage reported by one of our interviewees involved index cards, one for each day, that the participant carried in his shirt pocket when he forgot his PDA. Another interviewee reported exclusively using index cards for calendar management at their previous job because of their portability and trustworthiness. We report this not as a trend, but to illustrate the wide variety in the use of paper calendars.
Reminders and Alarms
Reminders and alarms are one of the major distinguishing features of modern electronic calendar systems. A majority of survey participants (63%) reported using these features. One user reported switching from paper to an online calendar because "a paper calendar cannot have an alarm feature". We use the term reminder to refer to any notification of a calendar event, and alarm to refer to the specific case of an interruption generated by the calendar system. Based on our interviews, we classified reminders into three categories taking into consideration the reasons, time, number, modalities and intervals of alarms. Before presenting the details of such a classification, however, we examine the individual factors in more detail.
Reasons for Using Alarms
Although reminding oneself of upcoming events is the most obvious use case for alarms, there were several other situations where users mentioned using reminders in addition to consulting their calendars regularly. Even when users were cognizant of upcoming events, they preferred to set alarms to interrupt them and grab their attention at the appointed hour. Alarms served as preparation reminders for events that were not necessarily in the immediate future.
When subordinates added events to a primary user's calendar, alarms were deemed an important way of notifying that user of such events. Early morning meeting reminders doubled up as wake-up alarms: one interviewee reported keeping their PDA by their bedside for this purpose. Another interviewee who needed to move his car out of a university parking lot where towing started at 8:00 am sharp had set a recurring alarm (figure 8). In one case, alarms were closely monitored by a user's secretary: if an event were missed by the user by a few minutes, the secretary would check on her boss and remind him to attend the meeting that was now overdue.
Number and Modalities of Reminders
While most survey participants only set a single reminder per event (52%), many others reported using multiple alarms. We conclude from our interviews that different semantic meanings were assigned to each such reminder: an alarm one day before an event was for preparation purposes, while an alarm 15 minutes before an event was a solicited interruption. Multimodal alarms were not used by many: the two most popular modalities used individually were audio (40%) and on-screen dialogs (41%).
Alarm Intervals
Reminders were set for varying intervals of time before the actual events took place, ranging from 5 minutes to several years. The two factors that affected this timing were (1) location of the event, and (2) whether or not (and how much) preparation was required. Users often set multiple alarms to be able to satisfy each of these requirements, because a single alarm could not satisfy them all. Based on these findings, we classify alarms into 3 categories:
• Interruption Reminders. Alarms set 5-15 minutes before an event were extremely short-term interruptions intended to get users up from their desks. Even if they knew in their mind that a particular event was coming up, it was likely that they were involved in their current activity deeply enough to overlook the event at the precise time it occurred. 15 minutes was the most common interval, as reported by 8 out of 16 interview participants. We found that the exact interval for interruption reminders was a function of the location of the event. Events that occurred in the same building as the user's current location had alarms set for between 5 and 15 minutes. Events in a different building had alarms for between 15 minutes and 30 minutes, based on the time it would take to reach there. Two interviewees reported that they set alarms for TV shows and other activities at home for up to 1 hour prior, because that is how long their commute took.
• Preparation Reminders. Users set multiple alarms when preparation was required for an event: the first (or earlier) alarm was to alert them to begin the preparation, while a later alarm was the interruption reminder for that event.
Payne [Payne, 1993] mentions the prevalence of this tendency as well: the reason for the first alarm (out of several) is to aid prospective remembering where the intention to look up an event is not in response to a specific temporal condition, but instead such conditions are checked after the intention is recalled. If certain items were needed to be taken to such meetings, preparation reminders were set for the previous night or early morning on the day of the event. Based on the interviews, preparation reminders were more commonly used for non-recurring events than for recurring events.
• Long-term Reminders. Events several months or years into the future were assigned reminders so that the user would not have to remember to consult the calendar at that time, but instead would have them show up automatically at (or around) the proper time. This is an illustration of using the calendar for prospective remembering tasks. Examples include a department head who put details of faculty coming up for tenure in up to 5 years, and a professor setting reminders for a conference submission deadline several months later.
Calendars as a Memory Aid
Calendars serve a valuable purpose as external memory for events [Payne, 1993]. In addition, in our data we found that the role that calendars play with respect to memory goes beyond this simple use. In particular, the following uses of calendars illustrate the different ways in which calendars serve as memory aids beyond simple lookups: First, users reported recording events in the calendar after the fact, not for the purpose of reminding, but to support reporting needs. Second, a few reported using previous years' calendars as a way to record memorable events to be remembered in future years. For those that used paper calendars, these events were often copied at the end of the year to newer calendars. The function of memory aid goes beyond remembering personal events (appointments and deadlines); it serves as a life journal, capturing events year after year. Kelley and Chapanis [Kelley and Chapanis, 1982] reported that 9 out of 11 respondents in their study kept calendars from two to 15 years.
Reporting Purposes
In our study, 10 out of 16 interviewees reported that they used their calendar to generate annual reports every year. Since it contained an accurate log of all their activities that year, it was the closest to a complete record of all accomplishments for that year. Among these, 5 users reported that they archived their calendars year after year to serve as a reference for years later. This tendency has also been reported in past studies [Kelley and Chapanis, 1982, Payne, 1993]; Kelley referred to it as an 'audit trail', and highlighted the role of calendars in reporting and planning.
One person mentioned that they discovered their father's journal a few years after his death, and now they cultivate their calendar as a memento to be shared with their kids in the future.
"I think I occasionally even think about my kids. Because I do, I save them, I don't throw them away [...] I think that it's common with a little more sense of mortality or something. It's trying to moving things outwards."
Opportunities for Design
In this section, we highlight how some of our findings can be addressed through new electronic calendar designs.
Paper Calendars and Printing
We do not believe that paper calendars will disappear from use; they serve several useful functions that are hard to replace by technology. Electronic calendars in general are more feature-rich than paper calendars. Portable devices have good support for capturing information while mobile. Yet, we found that paper calendars and proxies continue to be prevalent in the use of calendar management. They provide support for easy capture of calendar information, are effective at sharing, and support the display of the calendar in public view with ease.
Therefore, given the many uses of paper calendars, we consider how electronic calendar systems can provide better support for these proxies. Richer printing capabilities might provide easy support for transferring online calendar information to the paper domain. Printing a wall calendar is a novelty relegated to specialized design software. However our findings show that wall calendars play a significant role in supporting calendar management, particularly at home. With affordable printing technology available, it is possible to print a wall calendar or table calendar at home, incorporating not only details of events from a user's personal electronic calendar, but also visual elements such as color coding, digital photos (for birthdays, etc.) and event icons. Printed calendars are thus used in ways similar to those discussed in [Lin et al., 2004].
Digital Paper Trails
Some of the features of paper calendars can be recreated in online systems. For example, current electronic calendar systems remove all traces of an event upon cancellation, without providing an option to retain this historical record. This was one of the shortcomings which led interview participants to rely on paper instead. Instead of deleting events, they could be faded out of view, and made visible upon request. Most calendar software supports the notion of different calendars inside the same program. A possibility is that all deleted events could simply be moved to a separate calendar, where events can be hidden easily. Yet, the events would remain in the calendar as a record of cancelled activity.
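The sketch below illustrates one way such a digital paper trail could work: events are marked as cancelled rather than removed, and remain retrievable on request. This is a minimal, hypothetical example; the Event and CalendarStore names are illustrative and are not drawn from any existing calendar API.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class Event:
    title: str
    start: datetime
    end: datetime
    cancelled: bool = False  # soft-delete flag instead of outright removal


class CalendarStore:
    """Keeps cancelled events around as a digital 'paper trail'."""

    def __init__(self) -> None:
        self.events: List[Event] = []

    def add(self, event: Event) -> None:
        self.events.append(event)

    def cancel(self, event: Event) -> None:
        # Mark the event as cancelled rather than deleting it,
        # mirroring a scratched-out entry on a paper calendar.
        event.cancelled = True

    def visible_events(self, include_cancelled: bool = False) -> List[Event]:
        # Cancelled events are hidden by default (faded out of view)
        # but can be shown on request as a record of cancelled activity.
        return [e for e in self.events if include_cancelled or not e.cancelled]
```

Calling cancel() thus preserves the record, and visible_events(include_cancelled=True) surfaces it on demand, much like flipping back through a marked-up paper calendar.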
Tentative Event Scheduling
Several participants indicated that they 'penciled in' appointments in their paper calendars as tentative appointments to be confirmed later (also identified as a problem in [Kelley and Chapanis, 1982]). These tentative appointments served as a way of blocking particular date/time combinations while a meeting was being scheduled with others. Often, there were several of these tentative times for a particular meeting. Once the meeting was confirmed, only one of them was kept and the rest discarded. This type of activity is not well supported: corporate calendar systems offer adequate support for scheduling group meetings, but personal calendars typically do not.
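A minimal sketch of how penciled-in holds could be supported follows. It assumes an in-memory store of holds, and the pencil_in and confirm names are illustrative rather than part of any existing calendar system.

```python
import uuid
from datetime import datetime
from typing import Dict, List, Tuple

Slot = Tuple[datetime, datetime]

# Hold-group id -> list of tentatively blocked (start, end) slots.
tentative_holds: Dict[str, List[Slot]] = {}


def pencil_in(slots: List[Slot]) -> str:
    """Block several candidate times for one meeting being negotiated."""
    group_id = str(uuid.uuid4())
    tentative_holds[group_id] = list(slots)
    return group_id


def confirm(group_id: str, chosen: Slot) -> Slot:
    """Keep the agreed slot and release the other penciled-in holds."""
    slots = tentative_holds.pop(group_id)
    if chosen not in slots:
        raise ValueError("confirmed slot must be one of the tentative holds")
    return chosen  # the caller adds only this slot to the real calendar
```

Confirming one slot releases the remaining holds in the group, mirroring the way participants erased the unused penciled-in times once a meeting was fixed.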
Intelligent Alarms
Calendar alarms and reminders have evolved from past systems and now allow notification in several ways: audible alarms, short text messages, popup reminders, and email are just a few. However, the fundamental concept of an alarm still caters only to interruption reminders.
• Preparation reminders. To support preparation reminders, many electronic calendars allow the creation of multiple alarms per event, with different modalities for each (e.g., email, SMS, sounds, dialog box). However, when these reminders are used for preparation, as we found in the study, users often wanted to have more context: they expected to have an optional text note to indicate what preparation was required. E.g., alarms that would remind a user before leaving home to remember to carry material for an upcoming meeting, or a reminder the previous night to review documents.
• Location-related alarms. The location of events was found to be an important influence on alarm time. If calendars supported the notion of location (besides simply providing a field to type it in), alarms could be automatically set based on how long it would take the user to reach the event (a sketch of this idea follows this list).
• Alarms on multiple devices. When an alarm is set on multiple devices, each will go off at the exact same time without any knowledge of all the others. There is need to establish communication among the devices to present a single alarm to the user on the mutually-determined dominant device at the time.
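The following sketch shows how a location-aware alarm time might be derived from an estimated travel time plus a small buffer. It is a simplified illustration: estimate_travel_minutes stands in for whatever travel-time source (e.g. a routing service) a real system would use, and the function names are assumptions rather than an existing API.

```python
from datetime import datetime, timedelta


def estimate_travel_minutes(from_location: str, to_location: str) -> int:
    # Stand-in for a real travel-time lookup (e.g. a routing service):
    # same building -> 5 minutes, otherwise a flat 20-minute estimate.
    return 5 if from_location == to_location else 20


def alarm_time(event_start: datetime, event_location: str,
               user_location: str, buffer_minutes: int = 5) -> datetime:
    """Fire the interruption alarm early enough to travel to the event."""
    travel = estimate_travel_minutes(user_location, event_location)
    return event_start - timedelta(minutes=travel + buffer_minutes)
```

For an event in the same building this yields roughly a 10-minute warning, consistent with the 5-15 minute interruption intervals our participants reported.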
Supporting a Rich Variety of Event Types
Users reported that not all events were equal: public events were merely for awareness, recurring events indicated that time was blocked out, and holidays were added to prevent accidental scheduling. From the users' point of view, each has different connotations, different visibility (public events should ideally fade out of sight when not required), and different types, number and intervals of alarms.
• Event templates. A calendar system that supports event types can provide ways and means for users to create event templates and categories with different default settings along each of the dimensions outlined above. By having event templates, quick capture is supported as well. When much of the extra information about an event is pre-filled, data entry can be minimized to just the title of the event. Certain types of events have special metadata fields associated with them, e.g. conference call events contain the dial code, flight events contain airline and arrival/departure info. This could be easily achieved by event templates (a sketch of template-driven capture follows this list).
• Showing/hiding public events. While a few users said they added public events for informational purposes, others did not want public events (that they would not necessarily attend) to clutter their calendar. If calendars supported making certain event types visible or invisible on demand, the needs of both user groups could be met. Again, by providing an option to keep all events in the same calendar, such a system would contribute to reducing information fragmentation.
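As a rough illustration of the event-template idea above, the sketch below pre-fills type-specific fields and default alarms so that only the title needs to be typed at capture time. The template definitions and field names are hypothetical examples, not a proposal for a specific product.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class EventTemplate:
    category: str                     # e.g. "conference call", "flight"
    alarm_minutes_before: List[int]   # preparation and/or interruption alarms
    extra_fields: Dict[str, str] = field(default_factory=dict)
    public: bool = False              # public events can be hidden on demand


TEMPLATES: Dict[str, EventTemplate] = {
    "conference call": EventTemplate("conference call", [15], {"dial_code": ""}),
    # A day-ahead preparation alarm plus a two-hour interruption alarm.
    "flight": EventTemplate("flight", [24 * 60, 120],
                            {"airline": "", "booking_code": ""}),
}


def quick_capture(title: str, template_name: str) -> dict:
    """Create an event record from a template; only the title is typed."""
    t = TEMPLATES[template_name]
    return {
        "title": title,
        "category": t.category,
        "alarms_minutes_before": list(t.alarm_minutes_before),
        "fields": dict(t.extra_fields),
        "visible": not t.public,
    }
```

The hypothetical "flight" template, for instance, carries both a preparation alarm (a day ahead) and an interruption alarm (two hours ahead), matching the two reminder roles described earlier.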
Reporting and Archival Support
Report generation is a significant use of electronic calendars. Calendar software should have a way to generate reports and export information so that particular groups of events can be summarized in terms of when the meetings/events occurred, how many hours were devoted to them, and capture any notes entered in the calendar. One participant reported that he uses the search functionality in his calendar to obtain a listing of events related to a theme. This is used to get an idea of the number of hours devoted to particular activities and help to prepare an annual activity report.
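A report generator of the kind described above could be as simple as summing event durations per category over a date range, as in the sketch below. The event record shape (start, end, and category keys) is an assumption made for illustration only.

```python
from collections import defaultdict
from datetime import datetime
from typing import Dict, Iterable


def hours_by_category(events: Iterable[dict],
                      start: datetime, end: datetime) -> Dict[str, float]:
    """Sum how many hours went to each category of event in a period."""
    totals: Dict[str, float] = defaultdict(float)
    for e in events:
        if start <= e["start"] <= end:
            duration_h = (e["end"] - e["start"]).total_seconds() / 3600.0
            totals[e.get("category", "uncategorized")] += duration_h
    return dict(totals)
```

Run over a year's worth of events, this produces the per-category hour counts that participants currently assemble by hand for annual reports.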
Discussion & Future Work
The paradox of encoding and remembering, as described in [Payne, 1993], was clearly evident in our data. Participants seem to over-rely on calendar artifacts to remember appointments, as seen in the setting of multiple alarms, printing of calendars for meetings, carrying a PDA everywhere, and calling their secretary to confirm events. The unfortunate side effect of sharing the management of a calendar with other people is that the primary user no longer goes through the personal encoding episode of entering the information. Some participants relied on administrative assistants to enter events in their calendars. At home, many participants relied on their spouses to maintain the calendar. Some participants even suggested the need to have an alarm for when events were added to their calendars. All of this points to a diminished opportunity for encoding the information that is entered into one's calendar. This makes it very difficult for participants to remember what is in their calendar, given that at times the scheduled events have never been seen before they occur.
On the other hand, the opportunity for rehearsal is greater today, if users take advantage of existing information dissemination and syndication techniques. For example, keeping a calendar on a desktop computer and publishing to an online calendar service such as Google Calendar or Apple Mobile Me makes the calendar available in many other locations. Users can view their calendar on the web from any web browser, from mobile phones, or in the background on a desktop computer as part of widgets (tiny applications) such as Apple's Dashboard or Google Gadgets, or access it over a regular phone call [Pérez-Quiñones and Rode, 2004]. So, the possibility of opportunistic rehearsal is afforded by current systems. We did not observe this in our data, as many of our users did not use these services. However, the paradox of encoding, rehearsal, and recall seems to be in need of future work so we can understand the impact of electronic calendar systems on human memory.
• What is your age group?
Calendar Use Basics
• Which devices do you own or use frequently?
• What computing-enabled calendars do you use?
• Do you use your computer to keep your calendar? If so, which program do you use for your main calendar management task on your desktop/laptop computer?
• If you own and/or use a PDA, which calendar program do you use on the PDA?
• Do you use an online calendar?
• What events do you record on your calendar?
• How often do you visit your calendar?
• How far ahead do you regularly look when you view your calendar?
• What would you consider your preferred view?
• If your calendar software includes a To-Do function, do you use it?
• Does your calendar software have a way to classify calendar events by categories? If so, how do you use this feature?
• Who changes and updates your calendar?
• How often do you add new events?
• Do you keep 'proxies' (for example, post-its) or other notes that need to be entered in the calendar at a later time?
• How long does it take for the proxy to make it into your main calendar?
New Events
• How frequently do you get events by phone (someone calls you) that go into your calendar?
• How frequently do you get events by e-mail (someone sends you email) that go into your calendar?
• How frequently do you get events in person (someone tells you of a meeting) that go into your calendar?
• By what other methods do new events arrive?
• Is there any overlap? Is one just a pared-down version of the other one or do they contain completely separate events?
• Do you coordinate calendar events with your spouse, roommate, family?
• If so, how do you go about doing that?
• Please explain any additional ways in which you use your calendar system.
• What are your habits as far as when you look at your calendar: how often, how far ahead do you look, how in-depth do you examine events when you look, etc.?
• Do you use a method of organization on a paper calendar that you cannot apply to an electronic calendar? (e.g., specific types of events go into a specific area of the date box, highlighted events, etc.)
• Is there anything else about your personal information management we have not covered?
| 6,477 |
0809.3447
|
1619717240
|
In this paper, we report on findings from an ethnographic study of how people use their calendars for personal information management (PIM). Our participants were faculty, staff and students who were not required to use or contribute to any specific calendaring solution, but chose to do so anyway. The study was conducted in three parts: first, an initial survey provided broad insights into how calendars were used; second, this was followed up with personal interviews of a few participants which were transcribed and content-analyzed; and third, examples of calendar artifacts were collected to inform our analysis. Findings from our study include the use of multiple reminder alarms, the reliance on paper calendars even among regular users of electronic calendars, and wide use of calendars for reporting and life-archival purposes. We conclude the paper with a discussion of what these imply for designers of interactive calendar systems and future work in PIM research.
|
Jones @cite_33 framed the problems in PIM in terms of the canonical tasks of keeping, organizing, and re-finding. Keeping any kind of information involves a tradeoff between the likelihood it will be recalled in the future and the costs of capturing and retaining it. Organizing involves filing it away such that it can be retrieved easily in the future, while keeping the cost of organizing below the cost of finding. Re-finding is a different problem from finding, since there are aspects to encountered information that make it personal.
|
{
"abstract": [
"To keep or not to keep? People continually face variations of this decision as they encounter information. A large percentage of information encountered is clearly useless — junk e–mail, for example. Another portion of encountered information can be \"used up\" and disposed of in a single read — the weather report or a sports score, for example. That leaves a great deal of information in a middle ground. The information might be useful somewhere at sometime in the future. Decisions concerning whether and how to keep this information are an essential part of personal information management. Bad decisions either way can be costly. Information not kept or not kept properly may be unavailable later when it is needed. But keeping too much information can also be costly. The wrong information competes for attention and may obscure information more appropriate to the current task. These are the logical costs of a signal detection task. From this perspective, one approach in tool support is to try to decrease the costs of a false positive (keeping useless information) and a miss (not keeping useful information). But this reduction in the costs of keeping mistakes is likely to be bounded by fundamental limitations in the human ability to remember and to attend. A second approach suggested by the theory of signal detectability is relatively less explored: Develop tools that decrease the likelihood that \"keeping\" mistakes are made in the first place."
],
"cite_N": [
"@cite_33"
],
"mid": [
"1991755171"
]
}
|
An Exploratory Study of Personal Calendar Use
|
Personal Information Management (PIM) is receiving attention as an area of research within the CHI community [Barreau et al., 2008, Bergman et al., 2004, Teevan et al., 2006]. PIM research is mostly concerned with studying how people find, keep, organize, and re-find (or reuse) information in and around their personal information space. Calendar management, one of the typical PIM tasks, is done today using a variety of systems and methods, including several popular paper-based methods: At-A-Glance, one of the largest suppliers of paper planners, sold more than 100 million calendars in 2000.
For computer-based systems, calendar management is often integrated into email clients (e.g. Microsoft Outlook); it is one of the most common applications in all personal digital assistants (PDAs, e.g. Blackberries and iPhones), and there are several online calendar systems (e.g. Yahoo! Calendar, Google Calendar, Apple Mobile Me). Date- and time-based information is ubiquitous, and is often available through many means such as postings on office doors, displays with dated announcements, through email conversations, written on wall calendars, etc. The result is that calendar information tends to be pervasive.
In this paper, we set out to explore how people use calendars in the presence of varied technological options. We are interested in understanding how calendar information is managed given the availability of these platforms. After a brief review of related work, we proceed to discuss our findings from the survey, interviews, and artifacts. From these, we suggest several opportunities for designers of future electronic calendar systems, and conclude the paper with a discussion of future research in personal information management.
Study Description
The ethnographic approach we took in this study follows techniques commonly reported in the Personal Information Management literature, notably [Kelley and Chapanis, 1982, Payne, 1993, Jones et al., 2005, Marshall and Bly, 2005]. We did not attempt to test any a priori hypotheses, but were interested in examining how calendar practices have evolved in the years following previous calendar studies by Kelley and Chapanis [Kelley and Chapanis, 1982] and Payne [Payne, 1993].
Our study has three components: a survey (N=98), in-person interviews (N=16), and an examination of calendar artifacts such as screenshots and paper calendars. A large-scale online survey was distributed among members of a university. A total of 98 responses were received (54% male and 45% female), including faculty (56%), administrative staff (20%), and students (19%) (figure 1). While previous studies have examined organizational calendars [Dourish et al., 1993] and groupware calendar systems [Grudin, 1996, Palen and Grudin, 2003], our focus was on the personal use of calendars.
Figure 1: Roles of survey participants
In part two, we conducted in-depth personal interviews with 16 participants, recruited from among the survey participants. The recruitment criterion for interview candidates was the same as in [Kelley and Chapanis, 1982]: that participants should be regular users of some form of calendar system, either electronic or paper or a combination of both. Participants included graduate students, faculty members, administrative assistants, a department head, the director of a small business, etc., among others.
Interviews ranged from 20 to 30 minutes each, and were conducted in situ at their workplaces so we could observe their calendaring practices directly (e.g. calendar programs or wall calendars or paper scraps). Interviews were semistructured and open-ended: a prepared set of questions was asked in each interview. The questions we asked were closely modeled on those asked in similar studies [Kelley and Chapanis, 1982, Payne, 1993]. The complete set of questions is available as an appendix in a technical report [Tungare and Pérez-Quiñones, 2008]. As an extension to past studies, we were able to explore the use of features of modern calendar systems such as alarms, reminders, and mobile use, which were absent in paper calendars. Interviewees were encouraged to talk freely and to expand upon any of the themes they wished to discuss in more detail. Additional topics were addressed as appropriate depending on the interviewee's calendar use. Examining the calendar systems in use at their desks or on their walls prompted specific questions from the interviewers about these practices.
All interviews were transcribed in full. We performed content analysis [Krippendorff, 2004] of the transcripts to extract common patterns of use. The main purpose of content analysis in this study was to summarize the findings into groups of common observations, as in [Marshall and Bly, 2005]. Individual responses were tagged into several categories by two of the authors and any differences reconciled by discussion. Nearly 410 tags resulted from this activity; these were then collapsed into 383 tags (grouping together tags that were near-duplicates) and 11 top-level groups during the clustering procedure.
From each interview participant, we collected copies of artifacts that were used for calendaring purposes: 2 weeks' worth of calendar information and any other idiosyncratic observations that were spotted by the interviewers. These included screenshots of their calendar programs, paper calendars, printouts of electronic calendars (that were already printed for their own use), sticky notes stuck on paper calendars, etc. Some of these reflected a degree of wear and tear that occurred naturally over time; others provided evidence of manipulations such as color highlights, annotations in the margins, or comments made in other ways. Artifacts were not coded on any particular dimension, but pictures of these artifacts are used to supplement our textual descriptions wherever appropriate.
Capturing and Adding Events
Capturing events refers to the act of knowing about an event and entering it into a calendaring system (also referred to as the 'keeping' phase in the PIM literature.) Most survey participants reported adding new events as soon as they were (made) aware of them (93%) while the rest added them before the end of the day. Even when at their desks, those users who owned PDAs reported using them to create new events in their calendar: this was deemed faster than trying to start the calendar program on a computer and then adding an event. When away from their desks, they used proxy artifacts such as printed calendar copies or paper scraps.
Information about new events reached the primary calendar user via one of several means: email, phone, and in-person were commonly reported (figure 2). The fact that email was the most common way reported in our study is an expected evolution from older findings [Kelley and Chapanis, 1982] that phones were the most common stimuli for calendar events. Interviewees mentioned several other methods through which they received events: flyers, posters, campus notices, meeting minutes, public calendars (such as academic schedules or sports events), newspapers, internet forums, (postal) mail, fax, radio, or scheduled directly by other people who had access to the calendar (e.g., shared calendars). The wide variety of sources here is a potential indication of the problem of information overload [Schick et al., 1990] faced by knowledge workers.
Personal Calendar View Preference
We refer to the most common time interval shown in a calendar program or on a paper calendar as the preferred personal calendar view: the week view was preferred by most of our survey participants at 44%, followed by the day view at 35%, and the month view at 21% (figure 3). These are very close to the numbers reported by Kelley et al. [Kelley and Chapanis, 1982] (45%, 33%, 22% respectively). That many interviewees preferred a week view suggests the use of the calendar for opportunistic rehearsal, because they browsed the entire week's appointments each time they viewed the calendar. This preference supports the analysis of [Payne, 1993] in that the printed versions of the calendar do provide a valuable aid in opportunistic reading of the week's activities. Users who kept multiple calendars within the same calendaring system indicated that they turned the visibility of each calendar on or off on demand, based on the specifics of what they needed to know during a particular lookup task. On smaller devices such as PDAs, the default view was the daily view.
Figure 3: Preferred calendar views
There seem to be two motivators for browsing calendars: looking for activities to attend in the near future, and looking for activities further out that require preparation. A daily view directly supports the first, while a week view partially supports the second one. Intermediates such as Google Calendar's 4-day view afford browsing for future events without losing local context for the current day. The downside of such a view, however, is that days no longer appear in a fixed column position, but in different locations based on the day. Thus, the preferred calendar view depends on the type of activity the user is doing.
Frequency of Consulting the Calendar
When asked about the frequency at which users consulted their calendars, we received a wide range of responses in the survey: keeping the calendar program always open (66%) and several times a day (21%) were the most common.
In the interviews, several other specific times were reported: just before bedtime or after waking up; only when prompted by an alarm; when scheduling a new event; once weekly; or on weekends only. Two interviewees reported consulting their calendar only to check for conflicts before scheduling new events, and for confirmation of events already scheduled.
Proxy Calendar Artifacts
We use the term 'proxy calendar artifacts' (or 'proxies' in short) to refer to ephemeral scraps or notes (characterized as micronotes in [Lin et al., 2004]) or printed calendars or electronic means such as email to self that are used for calendaring when primary calendar systems are unavailable or inaccessible (e.g. when users were away from their desks or offices).
Despite the prevalent use of electronic calendars, many were not portable and were tied to specific desktop computers. This prompted the users to use other means to view or add events to their calendar; about 27% reported that they used proxy artifacts such as scraps or notes to be entered into the primary calendar at a later time. A wide variety of proxy calendar artifacts was reported in our interviews: paper scraps were by far the most common medium; other techniques included carrying laptops solely for the purpose of calendaring, PDAs, voice recorders, and printouts of electronic calendars. Information captured via these proxies was transferred to the primary calendar after a delay: most often, users entered the events as soon as they could access their primary calendar (63% of survey participants), a few others reported entering them within the same day (25%), while the maximum delay reported was up to one week.
Information Stored in an Event Record
Calendar systems allow users to add several items of information to an event record. Typical information included the date of the event (97%), time (96%), location (93%) and purpose (69%) as indicated in the survey. In interviews, it was clear that common fields such as notes, other attendees and status were used only to a limited extent. Location was entered mostly for non-recurring events. However, many other pieces of information were frequently recorded, even though calendar programs do not have a specific field for these data. For example, information critical for participation at an event was entered inline for easy access: e.g. phone numbers for conference calls, cooking menus and shopping lists, meeting agenda, original email for reference, links to relevant web sites, and filenames of relevant files.
One participant mentioned adding meeting participants' email addresses in case she needed to inform them of a cancellation or rescheduling. For activities such as trips or flights, further details such as booking codes and flight details were included as a way of reducing information fragmentation between the calendar system and the email system.
Types of Events
The events most commonly recorded on calendars by survey participants were timed events such as appointments or meetings (98%), special events requiring advance planning, such as tests (93%), long-duration events such as the week of final exams at the end of each semester (66%), and all-day events such as birthdays (81%). Several interviewees also mentioned recording to-do items in a calendar, such as phone calls to be made, or tasks which would remain on the calendar until completed, or which were scheduled on their deadline. Specifically, we found several instances of the following types of events scheduled:
• Work-related events. Many interviewees used calendar scheduling for work-related events such as meetings, deadlines, classes, public events such as talks and conferences, and work holidays. Users in work environments included vacation details for co-workers and subordinates. Time was routinely blocked off to prepare for other events: e.g. class preparation or groundwork to be done before a meeting.
Interviewees who had administrative assistants reported that their assistant maintained or co-maintained their calendar (7 out of 16 interviewees). The dynamics of shared access were vastly different across all these situations. One interviewee mentioned that he would never let an assistant be his primary scheduler; the assistant was able to access only a paper copy, and any new events would be reviewed and added by the primary calendar user. Two other users mentioned that they provided paper calendars to subordinates to keep track of their schedule and to be able to answer questions about it to third parties. One participant reported calling in to their secretary when they needed to consult their schedule while away from their desk (similar to previous reports in [Perry et al., 2001]), while another reported sending email to themselves as a way to quickly capture a newly-scheduled meeting.

• Family/personal events. Half of the survey respondents indicated that they coordinate calendars with their spouses, roommates, or family. Even though family activities such as picking up kids from school, or attending church services, were easily remembered without the aid of a calendar, interviewees reported that they chose to record them anyway to provide "a visual idea of the entire day" (figure 4). Public holidays, family birthdays, and guest visits were added to prevent accidental scheduling of conflicting events.

Figure 4: Family events such as attending church are added to calendars, not for remembering, but to be able to get a visual idea of the entire day.
Many participants reported having separate calendars for business use and for home/personal use, as was also seen in a majority of respondents in [Kelley and Chapanis, 1982]. Although events overlapped between them (e.g. work trips on family calendars and family medical appointments on work calendars), the calendars themselves were located at the respective places and maintained separately. Family calendars were most likely to be kept in the kitchen, on the refrigerator. Two contrasts between work calendars and home calendars were prominent: work calendars were more often electronic, while home calendars were more likely to be paper calendars, e.g. as a wall calendar, or on the refrigerator. Work calendars were updated by the primary users or their secretaries or their colleagues, while family calendars were overwhelmingly managed by women. No male participant reported being the only calendar manager at home; women reported either being the only person to edit it, or sharing responsibilities with their husbands. Family-related events and reminders were constrained to the home calendar, as in [Nippert-Eng, 1996], but they were sometimes added to work calendars if such events would impact work time. For example, medical appointments (of self or family members) that occurred during work hours were added to work calendars so that their co-workers were aware of their absence.
• Public events. Public events were added even when the user had no intention of attending that event. They were added to recommend to other people, or for personal planning purposes, or to start conversations related to the public activity. An administrator (from ANONYMIZED, a small university town with a very popular college football team) said that although he had no interest in football, he added home games to his calendar to ensure that visiting dignitaries were not invited during a time when all hotels in town would be booked to capacity. On the other hand, two interviewees considered such public events as contributing to clutter in their personal calendar, and chose not to add them.
Continued Use of Paper Calendars
In his 1993 study [Payne, 1993], Payne reports that the most stable characteristic he observed was the continued reliance of all but two participants on some kind of paper calendar. Our findings are similar: despite most of our users using electronic calendars, every one of them reported using paper calendars even if not regularly; 12 out of 16 interview participants reported using them regularly.
Reasons for the Continued Use of Paper Calendars
We group the several reasons and examples elicited from our participants into the following four categories:
• Paper trail. Cancelled events were scratched off the calendar, leaving a paper trail. Being able to make a distinction between cancelled and never-scheduled events was cited as an important concern for continuing with paper calendars.
• Opportunistic rehearsal. We found support for the idea of opportunistic rehearsal [Payne, 1993]. Users cited that wall calendars needed no more than a glance to read, and provided for quick reference. This also corroborates Dourish's argument [Dourish et al., 1993] that the presence of informational context in paper artifacts such as calendars is an important motivator for people to continue to use them, even though electronic systems support the information retrieval task better.
• Annotation. Paper calendars are more amenable to free-form annotation, as reported earlier [Kelley and Chapanis, 1982], and as the following quotes from our study illustrate:
"That's what I call the graffiti aspect of it, it's probably freer by virtue of being handwritten." "There is a lot of that [code and symbols]. Stars and dashes and circles and headlines, marked and completed." Figure 5 shows a printed calendar with a sticky note pasted on it. The event is about a community potluck dinner. The sticky note complements the scheduled appointment with information about the dish the participant plans to bring to the event. Figure 6 shows a picture of a pumpkin hand-drawn on a printed calendar to mark Halloween on October 31. Figure 5: Sticky notes are pasted on paper calendars to remind oneself of the preparation required for an event. • Prepopulated events. Participants reported that having holidays or other event details already printed in commercially-available paper calendars was an important reason for using them. Calendars distributed by the university contained details not only of academic deadlines, but also of athletic events and games; [Kelley and Chapanis, 1982] point to branding issues as well.
Paper calendars were used alongside electronic calendars in either a supplementary or complementary role, as follows:
Printouts of Electronic Calendars
Printouts of electronic calendars played a supplementary role: they were used as proxies of the master calendar when the master calendar was unavailable. 35% of survey participants reported printing their calendar. Among those who printed, all views were commonly printed: monthly (43%), weekly (33%) and daily (25%) (figure 3). Printing frequency also varied: many printed their calendar monthly, weekly, or daily (figure 7).

Figure 7: How often users perform activities related to paper calendars.
Based on our interviews, we found that electronic calendars were printed for three main reasons:
• Portability. Users carried a printed copy of the master calendar to venues where collaboration was anticipated, such as meetings or trips. Even those who carried laptops and PDAs said that they relied on printed calendars for quick reference.
• Quick capture. Events were often entered into paper calendars first because of their easy accessibility, and were later transferred back to the digital calendar. One-third of all interviewees reported making changes to paper copies of their calendars. Not all these changes were propagated back to the master calendar, however.
• Sharing a read-only view with associates. Taping a printed calendar to the outside of office doors was common practice, as reported by interviewees.
In one instance, a user provided printed calendars to his subordinates so they could schedule him for meetings. These events were then screened by him before being added to the master calendar.
Wall Calendars
Wall calendars typically played a complementary role, and there was little overlap between the events on a wall calendar and those in an electronic calendar. 70% of survey participants had a wall calendar in their home or office; however, only 25% of users actually recorded events on it. Family events such as birthdays, vacations, and days off were most commonly recorded by interviewees. At home, wall calendars were located in the kitchen, on the fridge.
Index Cards
An extreme case of ad hoc paper calendar usage reported by one of our interviewees involved index cards, one for each day, that the participant carried in his shirt pocket when he forgot his PDA. Another interviewee reported exclusively using index cards for calendar management at their previous job because of their portability and trustworthiness. We report this not as a trend, but to illustrate the wide variety in the use of paper calendars.
Reminders and Alarms
Reminders and alarms are one of the major distinguishing features of modern electronic calendar systems. A majority of survey participants (63%) reported using these features. One user reported switching from paper to an online calendar because "a paper calendar cannot have an alarm feature". We use the term reminder to refer to any notification of a calendar event, and alarm to refer to the specific case of an interruption generated by the calendar system. Based on our interviews, we classified reminders into three categories taking into consideration the reasons, time, number, modalities and intervals of alarms. Before presenting the details of such a classification, however, we examine the individual factors in more detail.
Reasons for Using Alarms
Although reminding oneself of upcoming events is the most obvious use case for alarms, there were several other situations where users mentioned using reminders in addition to consulting their calendars regularly. Even when users were cognizant of upcoming events, they preferred to set alarms to interrupt them and grab their attention at the appointed hour. Alarms served as preparation reminders for events that were not necessarily in the immediate future.
When subordinates added events to a primary user's calendar, alarms were deemed an important way of notifying that user of such events. Early morning meeting reminders doubled up as wake-up alarms: one interviewee reported keeping their PDA by their bedside for this purpose. Another interviewee who needed to move his car out of a university parking lot where towing started at 8:00 am sharp had set a recurring alarm (figure 8). In one case, alarms were closely monitored by a user's secretary: if an event were missed by the user by a few minutes, the secretary would check on her boss and remind him to attend the meeting that was now overdue.
Number and Modalities of Reminders
While most survey participants only set a single reminder per event (52%), many others reported using multiple alarms. We conclude from our interviews that different semantic meanings were assigned to each such reminder: an alarm one day before an event was for preparation purposes, while an alarm 15 minutes before an event was a solicited interruption. Multimodal alarms were not used by many: the two most popular modalities used individually were audio (40%) and on-screen dialogs (41%).
Alarm Intervals
Reminders were set for varying intervals of time before the actual events took place, ranging from 5 minutes to several years. The two factors that affected this timing were (1) location of the event, and (2) whether or not (and how much) preparation was required. Users often set multiple alarms to be able to satisfy each of these requirements, because a single alarm could not satisfy them all. Based on these findings, we classify alarms into 3 categories:
• Interruption Reminders. Alarms set 5-15 minutes before an event were extremely short-term interruptions intended to get users up from their desks. Even if they knew in their mind that a particular event was coming up, it was likely that they were involved in their current activity deeply enough to overlook the event at the precise time it occurred. 15 minutes was the most common interval, as reported by 8 out of 16 interview participants. We found that the exact interval for interruption reminders was a function of the location of the event. Events that occurred in the same building as the user's current location had alarms set for between 5 and 15 minutes. Events in a different building had alarms for between 15 minutes and 30 minutes, based on the time it would take to reach there. Two interviewees reported that they set alarms for TV shows and other activities at home for up to 1 hour prior, because that is how long their commute took.
• Preparation Reminders. Users set multiple alarms when preparation was required for an event: the first (or earlier) alarm was to alert them to begin the preparation, while a later alarm was the interruption reminder for that event.
Payne [Payne, 1993] mentions the prevalence of this tendency as well: the reason for the first alarm (out of several) is to aid prospective remembering where the intention to look up an event is not in response to a specific temporal condition, but instead such conditions are checked after the intention is recalled. If certain items were needed to be taken to such meetings, preparation reminders were set for the previous night or early morning on the day of the event. Based on the interviews, preparation reminders were more commonly used for non-recurring events than for recurring events.
• Long-term Reminders. Events several months or years into the future were assigned reminders so that the user would not have to remember to consult the calendar at that time, but instead would have them show up automatically at (or around) the proper time. This is an illustration of using the calendar for prospective remembering tasks. Examples include a department head who put details of faculty coming up for tenure in up to 5 years, and a professor setting reminders for a conference submission deadline several months later.
Calendars as a Memory Aid
Calendars serve a valuable purpose as external memory for events [Payne, 1993]. In addition, in our data we found that the role that calendars play with respect to memory goes beyond this simple use. In particular, the following uses of calendars illustrate the different ways in which calendars serve as memory aids beyond simple lookups: First, users reported recording events in the calendar after the fact, not for the purpose of reminding, but to support reporting needs. Second, a few reported using previous years' calendars as a way to record memorable events to be remembered in future years. For those that used paper calendars, these events were often copied at the end of the year to newer calendars. The function of memory aid goes beyond remembering personal events (appointments and deadlines); it serves as a life journal, capturing events year after year. Kelley and Chapanis [Kelley and Chapanis, 1982] reported that 9 out of 11 respondents in their study kept calendars from two to 15 years.
Reporting Purposes
In our study, 10 out of 16 interviewees reported that they used their calendar to generate annual reports every year. Since it contained an accurate log of all their activities that year, it was the closest to a complete record of all accomplishments for that year. Among these, 5 users reported that they archived their calendars year after year to serve as a reference for years later. This tendency has also been reported in past studies [Kelley and Chapanis, 1982, Payne, 1993]; Kelley referred to it as an 'audit trail', and highlighted the role of calendars in reporting and planning.
One person mentioned that they discovered their father's journal a few years after his death, and now they cultivate their calendar as a memento to be shared with their kids in the future.
"I think I occasionally even think about my kids. Because I do, I save them, I don't throw them away [...] I think that it's common with a little more sense of mortality or something. It's trying to moving things outwards."
Opportunities for Design
In this section, we highlight how some of our findings can be addressed through new electronic calendar designs.
Paper Calendars and Printing
We do not believe that paper calendars will disappear from use; they serve several useful functions that are hard to replace by technology. Electronic calendars in general are more feature-rich than paper calendars. Portable devices have good support for capturing information while mobile. Yet, we found that paper calendars and proxies continue to be prevalent in the use of calendar management. They provide support for easy capture of calendar information, are effective at sharing, and support the display of the calendar in public view with ease.
Therefore, given the many uses of paper calendars, we consider how electronic calendar systems can provide better support for these proxies. Richer printing capabilities might provide easy support for transferring online calendar information to the paper domain. Printing a wall calendar is a novelty relegated to specialized design software. However, our findings show that wall calendars play a significant role in supporting calendar management, particularly at home. With affordable printing technology available, it is possible to print a wall calendar or table calendar at home, incorporating not only details of events from a user's personal electronic calendar, but also visual elements such as color coding, digital photos (for birthdays, etc.) and event icons. In a way, printed calendars are used in ways similar to those discussed in [Lin et al., 2004].
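As a rough illustration of this idea (not a feature of any existing calendar product), a printable month grid with event titles injected per day can be generated from exported event data using Python's standard-library calendar module; the EVENTS mapping, the WallCalendar class, and the sample dates below are hypothetical.

```python
import calendar
from datetime import date

# Hypothetical export from a user's electronic calendar: date -> list of titles.
EVENTS = {
    date(2008, 10, 31): ["Halloween", "Community potluck"],
    date(2008, 10, 7): ["Dentist appointment"],
}

class WallCalendar(calendar.HTMLCalendar):
    """Render a print-ready month grid with event titles listed in each day cell."""

    def __init__(self, year, month, events):
        super().__init__(firstweekday=calendar.SUNDAY)
        self.year, self.month, self.events = year, month, events

    def formatday(self, day, weekday):
        if day == 0:
            return '<td class="noday"> </td>'  # padding cell outside the month
        titles = self.events.get(date(self.year, self.month, day), [])
        items = "".join(f"<li>{t}</li>" for t in titles)
        return (f'<td class="{self.cssclasses[weekday]}">'
                f'<b>{day}</b><ul>{items}</ul></td>')

    def render(self):
        return self.formatmonth(self.year, self.month)

# Print-ready HTML for October 2008; write to a file and print as a wall calendar.
print(WallCalendar(2008, 10, EVENTS).render())
```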
Digital Paper Trails
Some of the features of paper calendars can be recreated in online systems. For example, current electronic calendar systems remove all traces of an event upon cancellation, without providing an option to retain this historical record. This was one of the shortcomings which led interview participants to rely on paper instead. Instead of deleting events, they could be faded out of view, and made visible upon request. Most calendar software supports the notion of multiple calendars inside the same program. A possibility is that all deleted events could simply be moved to a separate calendar, where events can be hidden easily. Yet, the events would remain in the calendar as a record of cancelled activity.
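As a minimal sketch of this soft-delete idea (an illustration only, not the behavior of any shipping calendar program), cancelled events could be re-filed into a hidden "Cancelled" calendar instead of being removed; the Event and CalendarStore classes below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    title: str
    start: str            # ISO 8601 datetime string, e.g. "2008-10-31T18:00"
    calendar: str = "Personal"

@dataclass
class CalendarStore:
    """Toy in-memory event store whose cancel() preserves a 'paper trail'."""
    events: list = field(default_factory=list)
    hidden_calendars: set = field(default_factory=lambda: {"Cancelled"})

    def cancel(self, event: Event) -> None:
        # Re-file under "Cancelled" rather than deleting, so the record can be
        # faded out of view but recovered on request.
        event.calendar = "Cancelled"

    def visible_events(self) -> list:
        return [e for e in self.events if e.calendar not in self.hidden_calendars]

    def cancelled_events(self) -> list:
        return [e for e in self.events if e.calendar == "Cancelled"]

store = CalendarStore()
meeting = Event("Budget review", "2008-11-03T10:00")
store.events.append(meeting)
store.cancel(meeting)
assert meeting not in store.visible_events()
assert meeting in store.cancelled_events()
```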
Tentative Event Scheduling
Several participants indicated that they 'penciled in' appointments in their paper calendars as tentative appointments to be confirmed later (also identified as a problem in [Kelley and Chapanis, 1982]). These tentative appointments served as a way of blocking particular date/time combinations while a meeting was being scheduled with others. Often, there were several of these tentative times for a particular meeting. Once the meeting was confirmed, only one of them was kept and the rest discarded. This type of activity is not well supported in personal calendars: corporate calendaring systems provide adequate support for scheduling group meetings, but personal calendars typically do not.
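One possible data model for such penciled-in holds is sketched below, under the assumption of a simple in-memory representation; TentativeMeeting and Hold are hypothetical names, not an existing API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hold:
    slot: str             # e.g. "2008-11-05 14:00"
    status: str = "tentative"

@dataclass
class TentativeMeeting:
    """A meeting 'penciled in' across several candidate slots until confirmed."""
    title: str
    holds: List[Hold] = field(default_factory=list)

    def pencil_in(self, slot: str) -> None:
        self.holds.append(Hold(slot))

    def confirm(self, slot: str) -> None:
        # Keep only the confirmed slot; release the other tentative holds.
        self.holds = [h for h in self.holds if h.slot == slot]
        for h in self.holds:
            h.status = "confirmed"

m = TentativeMeeting("Committee meeting")
m.pencil_in("2008-11-05 14:00")
m.pencil_in("2008-11-06 09:00")
m.confirm("2008-11-05 14:00")
assert [h.slot for h in m.holds] == ["2008-11-05 14:00"]
```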
Intelligent Alarms
Calendar alarms and reminders have evolved from past systems and now allow notification in several ways: audible alarms, short text messages, popup reminders, and email are just a few. However, the fundamental concept of an alarm still caters only to interruption reminders.
• Preparation reminders. To support preparation reminders, many electronic calendars allow the creation of multiple alarms per event, with different modalities for each (e.g., email, SMS, sounds, dialog box). However, when these reminders are used for preparation, as we found in the study, users often wanted to have more context: they expected to have an optional text note to indicate what preparation was required, e.g. an alarm before leaving home to remember to carry material for an upcoming meeting, or a reminder the previous night to review documents.
• Location-related alarms. The location of events was found to be an important influencer of alarm time. If calendars supported the notion of location (beyond simply providing a field to type it in), alarms could be automatically set based on how long it would take the user to reach the event (a minimal sketch combining this with preparation notes follows this list).
• Alarms on multiple devices. When an alarm is set on multiple devices, each will go off at the exact same time without any knowledge of the others. There is a need to establish communication among the devices so that a single alarm is presented to the user on a mutually-determined dominant device.
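The sketch below illustrates how location-aware lead times and preparation notes might be combined when deriving alarms for an event; the plan_alarms helper, the 5-minute buffer, and the 8 pm preparation time are illustrative assumptions loosely based on the intervals our participants reported, not features of any existing system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Alarm:
    fire_at: datetime
    note: Optional[str] = None   # optional preparation context, e.g. "bring handouts"

def plan_alarms(start: datetime,
                travel_minutes: int,
                prep_note: Optional[str] = None) -> List[Alarm]:
    """Derive alarms from the event start time, estimated travel time, and prep needs."""
    alarms = []
    # Interruption reminder: travel time plus a small buffer (at least 5 minutes).
    lead = max(5, travel_minutes + 5)
    alarms.append(Alarm(start - timedelta(minutes=lead)))
    # Preparation reminder the evening before, if preparation is required.
    if prep_note:
        prev_evening = (start - timedelta(days=1)).replace(hour=20, minute=0)
        alarms.append(Alarm(prev_evening, note=prep_note))
    return alarms

meeting_start = datetime(2008, 11, 3, 10, 0)
for a in plan_alarms(meeting_start, travel_minutes=20, prep_note="review budget documents"):
    print(a.fire_at, a.note or "")
```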
Supporting a Rich Variety of Event Types
Users reported that not all events were equal: public events were merely for awareness, recurring events indicated that time was blocked out, and holidays were added to prevent accidental scheduling. From the users' point of view, each has different connotations, different visibility (public events should ideally fade out of sight when not required), and different types, number and intervals of alarms.
• Event templates. A calendar system that supports event types can provide ways and means for users to create event templates and categories with different default settings along each of the dimensions outlined above. Event templates also support quick capture: when much of the extra information about an event is pre-filled, data entry can be minimized to just the title of the event. Certain types of events have special metadata fields associated with them, e.g. conference call events contain the dial code, and flight events contain airline and arrival/departure info. This could easily be achieved with event templates (see the sketch after this list).
• Showing/hiding public events. While a few users said they added public events for informational purposes, others did not want public events (that they would not necessarily attend) to clutter their calendar. If calendars supported making certain event types visible or invisible on demand, the needs of both user groups could be met. Again, by providing an option to keep all events in the same calendar, such a system would contribute to reducing information fragmentation.
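For illustration, the sketch referenced above: a hypothetical template table with per-type default alarms, extra metadata fields, and default visibility. None of these names correspond to a real calendar API; they are assumptions used to make the idea concrete.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EventTemplate:
    """Per-type defaults so a new event needs little more than a title."""
    category: str
    default_alarm_minutes: List[int]          # e.g. [15] or [15, 24 * 60]
    extra_fields: Dict[str, str] = field(default_factory=dict)
    visible_by_default: bool = True

TEMPLATES = {
    "conference_call": EventTemplate("work", [15], {"dial_code": ""}),
    "flight": EventTemplate("travel", [24 * 60, 3 * 60], {"airline": "", "booking_code": ""}),
    "public_event": EventTemplate("awareness", [], visible_by_default=False),
}

def new_event(title: str, template_name: str) -> dict:
    t = TEMPLATES[template_name]
    return {
        "title": title,
        "category": t.category,
        "alarms_minutes_before": list(t.default_alarm_minutes),
        "fields": dict(t.extra_fields),   # remaining metadata filled in by the user
        "visible": t.visible_by_default,
    }

print(new_event("Weekly status call", "conference_call"))
```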
Reporting and Archival Support
Report generation is a significant use of electronic calendars. Calendar software should have a way to generate reports and export information so that particular groups of events can be summarized in terms of when the meetings/events occurred, how many hours were devoted to them, and any notes entered in the calendar. One participant reported that he uses the search functionality in his calendar to obtain a listing of events related to a theme. This is used to get an idea of the number of hours devoted to particular activities and helps to prepare an annual activity report.
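A minimal sketch of such report generation, assuming events can be exported as (category, start, end) tuples; the sample data and the hours_by_category helper are illustrative only.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical export: (category, start, end) tuples pulled from a year's calendar.
events = [
    ("teaching", datetime(2008, 9, 2, 10), datetime(2008, 9, 2, 11)),
    ("teaching", datetime(2008, 9, 4, 10), datetime(2008, 9, 4, 11)),
    ("committee", datetime(2008, 9, 5, 14), datetime(2008, 9, 5, 16)),
]

def hours_by_category(evts):
    """Total hours spent per category -- the core of an annual activity report."""
    totals = defaultdict(float)
    for category, start, end in evts:
        totals[category] += (end - start).total_seconds() / 3600.0
    return dict(totals)

for category, hours in sorted(hours_by_category(events).items()):
    print(f"{category}: {hours:.1f} hours")
```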
Discussion & Future Work
The paradox of encoding and remembering, as described in [Payne, 1993], was clearly evident in our data. Participants seem to over-rely on calendar artifacts to remember appointments, as seen in the setting of multiple alarms, printing of calendars for meetings, carrying a PDA everywhere, and calling their secretary to confirm events. The unfortunate side effect of sharing the management of a calendar with other people is that the primary user no longer goes through the personal encoding episode of entering the information. Some participants relied on administrative assistants to enter events in their calendars. At home, many participants relied on their spouses to maintain the calendar. Some participants even suggested the need to have an alarm for when events were added to their calendars. All of this points to a diminished opportunity for encoding the information that is entered into one's calendar. This makes it very difficult for participants to remember what is in their calendar, given that at times the scheduled events have never been seen before they occur.

On the other hand, the opportunity for rehearsal is greater today, if users take advantage of existing information dissemination and syndication techniques. For example, keeping a calendar on a desktop computer and publishing to an online calendar service such as Google Calendar or Apple Mobile Me makes the calendar available in many other locations. Users can view their calendar on the web from any web browser, from mobile phones, or in the background on a desktop computer as part of widgets (tiny applications) such as Apple's Dashboard or Google Gadgets, or access it over a regular phone call [Pérez-Quiñones and Rode, 2004]. So, the possibility of opportunistic rehearsal is afforded by current systems. We did not observe this in our data, as many of our users did not use these services. However, the paradox of encoding, rehearsal, and recall seems to be in need of future work so we can understand the impact of electronic calendar systems on human memory.

Survey Questions

• What is your age group?
Calendar Use Basics
• Which devices do you own or use frequently?
• What computing-enabled calendars do you use?
• Do you use your computer to keep your calendar? If so, which program do you use for your main calendar management task on your desktop/laptop computer?
• If you own and/or use a PDA, which calendar program do you use on the PDA?
• Do you use an online calendar?
• What events do you record on your calendar?
• How often do you visit your calendar?
• How far ahead do you regularly look when you view your calendar?
• What would you consider your preferred view?
• If your calendar software includes a To-Do function, do you use it?
• Does your calendar software have a way to classify calendar events by categories? If so, how do you use this feature?
• Who changes and updates your calendar?
• How often do you add new events?
• Do you keep 'proxies' (for example, post-its) or other notes that need to be entered in the calendar at a later time?
• How long does it take for the proxy to make it into your main calendar?
New Events
• How frequently do you get events by phone (someone calls you) that go into your calendar?
• How frequently do you get events by e-mail (someone sends you email) that go into your calendar?
• How frequently do you get events in person (someone tells you of a meeting) that go into your calendar?
• By what other methods do new events arrive?
• Is there any overlap? Is one just a pared-down version of the other one or do they contain completely separate events?
• Do you coordinate calendar events with your spouse, roommate, family?
• If so, how do you go about doing that?
• Please explain any additional ways in which you use your calendar system.
• What are your habits as far as when you look at your calendar, how often, how far ahead do you look, how in-depth you examine events when you look, etc.
• Do you use a method of organization on a paper calendar that you cannot apply to an electronic calendar? (i.e.: specific types of events go into a specific area of the date box, highlighted events, etc)
• Is there anything else about your personal information management we have not covered?
| 6,477 |
0809.3447
|
1619717240
|
In this paper, we report on findings from an ethnographic study of how people use their calendars for personal information management (PIM). Our participants were faculty, staff and students who were not required to use or contribute to any specific calendaring solution, but chose to do so anyway. The study was conducted in three parts: first, an initial survey provided broad insights into how calendars were used; second, this was followed up with personal interviews of a few participants which were transcribed and content-analyzed; and third, examples of calendar artifacts were collected to inform our analysis. Findings from our study include the use of multiple reminder alarms, the reliance on paper calendars even among regular users of electronic calendars, and wide use of calendars for reporting and life-archival purposes. We conclude the paper with a discussion of what these imply for designers of interactive calendar systems and future work in PIM research.
|
With the increased use of mobile devices, more and more calendaring tasks are performed off the desktop computer. @cite_8 report on issues faced by mobile workers, their need for access to people and information located remotely, and the planful opportunism they engage in when utilizing their dead time for tasks.
|
{
"abstract": [
"The rapid and accelerating move towards use of mobile technologies has increasingly provided people and organizations with the ability to work away from the office and on the move. The new ways of working afforded by these technologies are often characterized in terms of access to information and people anytime, anywhere. This article presents a study of mobile workers that highlights different facets of access to remote people and information, and different facets of anytime, anywhere. Four key factors in mobile work are identified: the role of planning, working in \"dead time,\" accessing remote technological and informational resources, and monitoring the activities of remote colleagues. By reflecting on these issues, we can better understand the role of technology and artifacts in mobile work and identify the opportunities for the development of appropriate technological solutions to support mobile workers."
],
"cite_N": [
"@cite_8"
],
"mid": [
"2107163126"
]
}
|
An Exploratory Study of Personal Calendar Use
|
Personal Information Management (PIM) is receiving attention as an area of research within the CHI community [Barreau et al., 2008, Bergman et al., 2004, Teevan et al., 2006]. PIM research is mostly concerned with studying how people find, keep, organize, and re-find (or reuse) information in and around their personal information space. Calendar management, one of the typical PIM tasks, is done today using a variety of systems and methods, including several popular paper-based methods: At-A-Glance, one of the largest suppliers of paper planners, sold more than 100 million calendars in 2000.
For computer-based systems, calendar management is often integrated into email clients (e.g. Microsoft Outlook); it is one of the most common applications in all personal digital assistants (PDAs, e.g. Blackberries and iPhones), and there are several online calendar systems (e.g. Yahoo! Calendar, Google Calendar, Apple Mobile Me). Date- and time-based information is ubiquitous, and is often available through many means: postings on office doors, displays with dated announcements, email conversations, wall calendars, etc. The result is that calendar information tends to be pervasive.
In this paper, we set out to explore how people use calendars in the presence of varied technological options. We are interested in understanding how calendar information is managed given the availability of these platforms. After a brief review of related work, we proceed to discuss our findings from the survey, interviews, and artifacts. From these, we suggest several opportunities for designers of future electronic calendar systems, and conclude the paper with a discussion of future research in personal information management.
Study Description
The ethnographic approach we took in this study follows techniques commonly reported in the Personal Information Management literature, notably [Kelley and Chapanis, 1982, Payne, 1993, Jones et al., 2005, Marshall and Bly, 2005]. We did not attempt to test any a priori hypotheses, but were interested in examining how calendar practices have evolved in the years following previous calendar studies by Kelley and Chapanis [Kelley and Chapanis, 1982] and Payne [Payne, 1993].
Our study has three components: a survey (N=98), in-person interviews (N=16), and an examination of calendar artifacts such as screenshots and paper calendars. A large-scale online survey was distributed among members of a university. A total of 98 responses were received (54% male and 45% female), including faculty (56%), administrative staff (20%), and students (19%) (figure 1). While previous studies have examined organizational calendars [Dourish et al., 1993] and groupware calendar systems [Grudin, 1996, Palen and Grudin, 2003], our focus was on the personal use of calendars.
Figure 1: Roles of survey participants (faculty 56%, staff 20%, students 19%, other 4%).
In part two, we conducted in-depth personal interviews with 16 participants, recruited from among the survey participants. The recruitment criterion for interview candidates was the same as in [Kelley and Chapanis, 1982]: that participants should be regular users of some form of calendar system, either electronic or paper or a combination of both. Participants included graduate students, faculty members, administrative assistants, a department head, and the director of a small business, among others.
Interviews ranged from 20 to 30 minutes each, and were conducted in situ at their workplaces so we could observe their calendaring practices directly (e.g. calendar programs, wall calendars, or paper scraps). Interviews were semi-structured and open-ended: a prepared set of questions was asked in each interview. The questions we asked were closely modeled on those asked in similar studies [Kelley and Chapanis, 1982, Payne, 1993]. The complete set of questions is available as an appendix in a technical report [Tungare and Pérez-Quiñones, 2008]. As an extension to past studies, we were able to explore the use of features of modern calendar systems such as alarms, reminders, and mobile use, which were absent in paper calendars. Interviewees were encouraged to talk freely and to expand upon any of the themes they wished to discuss in more detail. Additional topics were addressed as appropriate depending on the interviewee's calendar use. Examining the calendar systems in use at their desks or on their walls prompted specific questions from the interviewers about these practices.
All interviews were transcribed in full. We performed content analysis [Krippendorff, 2004] of the transcripts to extract common patterns of use. The main purpose of content analysis in this study was to summarize the findings into groups of common observations, as in [Marshall and Bly, 2005]. Individual responses were tagged into several categories by two of the authors and any differences reconciled by discussion. Nearly 410 tags resulted from this activity; these were then collapsed into 383 tags (grouping together tags that were near-duplicates) and 11 top-level groups during the clustering procedure.
From each interview participant, we collected copies of artifacts that were used for calendaring purposes: 2 weeks' worth of calendar information and any other idiosyncratic observations that were spotted by the interviewers. These included screenshots of their calendar programs, paper calendars, printouts of electronic calendars (that were already printed for their own use), sticky notes stuck on paper calendars, etc. Some of these reflected a degree of wear and tear that occurred naturally over time; others provided evidence of manipulations such as color highlights, annotations in the margins, or comments made in other ways. Artifacts were not coded on any particular dimension, but pictures of these artifacts are used to supplement our textual descriptions wherever appropriate.
Capturing and Adding Events
Capturing events refers to the act of knowing about an event and entering it into a calendaring system (also referred to as the 'keeping' phase in the PIM literature.) Most survey participants reported adding new events as soon as they were (made) aware of them (93%) while the rest added them before the end of the day. Even when at their desks, those users who owned PDAs reported using them to create new events in their calendar: this was deemed faster than trying to start the calendar program on a computer and then adding an event. When away from their desks, they used proxy artifacts such as printed calendar copies or paper scraps.
Information about new events reached the primary calendar user via one of several means: email, phone, and in-person were commonly reported (figure 2). The fact that email was the most common way reported in our study is an expected evolution from older findings [Kelley and Chapanis, 1982] that phones were the most common stimuli for calendar events. Interviewees mentioned several other methods through which they received events: flyers, posters, campus notices, meeting minutes, public calendars (such as academic schedules or sports events), newspapers, internet forums, (postal) mail, fax, radio, or scheduled directly by other people who had access to the calendar (e.g., shared calendars). The wide variety of sources here is a potential indication of the problem of information overload [Schick et al., 1990] faced by knowledge workers.
Personal Calendar View Preference
We refer to the most common time interval shown in a calendar program or on a paper calendar as the preferred personal calendar view: the week view was preferred by most of our survey participants at 44%, followed by the day view at 35%, and the month view at 21% (figure 3). These are very close to the numbers reported by Kelley and Chapanis [Kelley and Chapanis, 1982] (45%, 33%, 22% respectively). That many interviewees preferred a week view suggests the use of the calendar for opportunistic rehearsal, because they browsed the entire week's appointments each time they viewed the calendar. This preference supports the analysis of [Payne, 1993] in that printed calendars do provide a valuable aid for opportunistic reading of the week's activities. Users who kept multiple calendars within the same calendaring system indicated that they turned the visibility of each calendar on or off on demand, based on the specifics of what they needed to know during a particular lookup task. On smaller devices such as PDAs, the default view was the daily view.
Figure 3: Preferred calendar views
There seem to be two motivators for browsing calendars: looking for activities to attend in the near future, and looking for activities further out that require preparation. A daily view directly supports the first, while a week view partially supports the second one. Intermediates such as Google Calendar's 4-day view afford browsing for future events without losing local context for the current day. The downside of such a view, however, is that days no longer appear in a fixed column position, but in different locations based on the day. Thus, the preferred calendar view depends on the type of activity the user is doing.
Frequency of Consulting the Calendar
When asked about the frequency at which users consulted their calendars, we received a wide range of responses in the survey: keeping the calendar program always open (66%) and several times a day (21%) were the most common.
In the interviews, several other specific times were reported: just before bedtime or after waking up; only when prompted by an alarm; when scheduling a new event; once weekly; or on weekends only. Two interviewees reported consulting their calendar only to check for conflicts before scheduling new events, and for confirmation of events already scheduled.
Proxy Calendar Artifacts
We use the term 'proxy calendar artifacts' (or 'proxies' in short) to refer to ephemeral scraps or notes (characterized as micronotes in [Lin et al., 2004]) or printed calendars or electronic means such as email to self that are used for calendaring when primary calendar systems are unavailable or inaccessible (e.g. when users were away from their desks or offices).
Despite the prevalent use of electronic calendars, many were not portable and were tied to specific desktop computers. This prompted the users to use other means to view or add events to their calendar; about 27% reported that they used proxy artifacts such as scraps or notes to be entered into the primary calendar at a later time. A wide variety of proxy calendar artifacts was reported in our interviews: paper scraps were by far the most common medium; other techniques included carrying laptops solely for the purpose of calendaring, PDAs, voice recorders, and printouts of electronic calendars. Information captured via these proxies was transferred to the primary calendar after a delay: most often, users entered the events as soon as they could access their primary calendar (63% of survey participants), a few others reported entering them within the same day (25%), while the maximum delay reported was up to one week.
Information Stored in an Event Record
Calendar systems allow users to add several items of information to an event record. Typical information included the date of the event (97%), time (96%), location (93%) and purpose (69%) as indicated in the survey. In interviews, it was clear that common fields such as notes, other attendees and status were used only to a limited extent. Location was entered mostly for non-recurring events. However, many other pieces of information were frequently recorded, even though calendar programs do not have a specific field for these data. For example, information critical for participation at an event was entered inline for easy access: e.g. phone numbers for conference calls, cooking menus and shopping lists, meeting agenda, original email for reference, links to relevant web sites, and filenames of relevant files.
One participant mentioned adding meeting participants' email addresses in case she needed to inform them of a cancellation or rescheduling. For activities such as trips or flights, further details such as booking codes and flight details were included as a way of reducing information fragmentation between the calendar system and the email system.
Types of Events
The events most commonly recorded on calendars by survey participants were timed events such as appointments or meetings (98%), special events requiring advance planning, such as tests (93%), long-duration events such as the week of final exams at the end of each semester (66%), and all-day events such as birthdays (81%). Several interviewees also mentioned recording to-do items in a calendar, such as phone calls to be made, or tasks which would remain on the calendar until completed, or which were scheduled on their deadline. Specifically, we found several instances of the following types of events scheduled:
• Work-related events. Many interviewees used calendar scheduling for work-related events such as meetings, deadlines, classes, public events such as talks and conferences, and work holidays. Users in work environments included vacation details for co-workers and subordinates. Time was routinely blocked off to prepare for other events: e.g. class preparation or groundwork to be done before a meeting.
Interviewees who had administrative assistants reported that their assistant maintained or co-maintained their calendar (7 out of 16 interviewees). The dynamics of shared access were vastly different across all these situations. One interviewee mentioned that he would never let an assistant be his primary scheduler; the assistant was able to access only a paper copy, and any new events would be reviewed and added by the primary calendar user. Two other users mentioned that they provided paper calendars to subordinates to keep track of their schedule and to be able to answer questions about it to third parties. One participant reported calling in to their secretary when they needed to consult their schedule while away from their desk (similar to previous reports in [Perry et al., 2001]), while another reported sending email to themselves as a way to quickly capture a newly-scheduled meeting.

• Family/personal events. Half of the survey respondents indicated that they coordinate calendars with their spouses, roommates, or family. Even though family activities such as picking up kids from school, or attending church services, were easily remembered without the aid of a calendar, interviewees reported that they chose to record them anyway to provide "a visual idea of the entire day" (figure 4). Public holidays, family birthdays, and guest visits were added to prevent accidental scheduling of conflicting events.

Figure 4: Family events such as attending church are added to calendars, not for remembering, but to be able to get a visual idea of the entire day.
Many participants reported having separate calendars for business use and for home/personal use, as was also seen in a majority of respondents in [Kelley and Chapanis, 1982]. Although events overlapped between them (e.g. work trips on family calendars and family medical appointments on work calendars), the calendars themselves were located at the respective places and maintained separately. Family calendars were most likely to be kept in the kitchen, on the refrigerator. Two contrasts between work calendars and home calendars were prominent: work calendars were more often electronic, while home calendars were more likely to be paper calendars, e.g. as a wall calendar, or on the refrigerator. Work calendars were updated by the primary users or their secretaries or their colleagues, while family calendars were overwhelmingly managed by women. No male participant reported being the only calendar manager at home; women reported either being the only person to edit it, or sharing responsibilities with their husbands. Family-related events and reminders were constrained to the home calendar, as in [Nippert-Eng, 1996], but they were sometimes added to work calendars if such events would impact work time. For example, medical appointments (of self or family members) that occurred during work hours were added to work calendars so that their co-workers were aware of their absence.
• Public events. Public events were added even when the user had no intention of attending that event. They were added to recommend to other people, or for personal planning purposes, or to start conversations related to the public activity. An administrator (from ANONYMIZED, a small university town with a very popular college football team) said that although he had no interest in football, he added home games to his calendar to ensure that visiting dignitaries were not invited during a time when all hotels in town would be booked to capacity. On the other hand, two interviewees considered such public events as contributing to clutter in their personal calendar, and chose not to add them.
Continued Use of Paper Calendars
In his 1993 study [Payne, 1993], Payne reports that the most stable characteristic he observed was the continued reliance of all but two participants on some kind of paper calendar. Our findings are similar: despite most of our users using electronic calendars, every one of them reported using paper calendars even if not regularly; 12 out of 16 interview participants reported using them regularly.
Reasons for the Continued Use of Paper Calendars
We group the several reasons and examples elicited from our participants into the following four categories:
• Paper trail. Cancelled events were scratched off the calendar, leaving a paper trail. Being able to make a distinction between cancelled and never-scheduled events was cited as an important concern for continuing with paper calendars.
• Opportunistic rehearsal. We found support for the idea of opportunistic rehearsal [Payne, 1993]. Users cited that wall calendars needed no more than a glance to read, and provided for quick reference. This also corroborates Dourish's argument [Dourish et al., 1993] that the presence of informational context in paper artifacts such as calendars is an important motivator for people to continue to use them, even though electronic systems support the information retrieval task better.
• Annotation. Paper calendars are more amenable to free-form annotation, as reported earlier [Kelley and Chapanis, 1982], and as the following quotes from our study illustrate:
"That's what I call the graffiti aspect of it, it's probably freer by virtue of being handwritten." "There is a lot of that [code and symbols]. Stars and dashes and circles and headlines, marked and completed." Figure 5 shows a printed calendar with a sticky note pasted on it. The event is about a community potluck dinner. The sticky note complements the scheduled appointment with information about the dish the participant plans to bring to the event. Figure 6 shows a picture of a pumpkin hand-drawn on a printed calendar to mark Halloween on October 31. Figure 5: Sticky notes are pasted on paper calendars to remind oneself of the preparation required for an event. • Prepopulated events. Participants reported that having holidays or other event details already printed in commercially-available paper calendars was an important reason for using them. Calendars distributed by the university contained details not only of academic deadlines, but also of athletic events and games; [Kelley and Chapanis, 1982] point to branding issues as well.
Paper calendars were used alongside electronic calendars in either a supplementary or complementary role, as follows:
Printouts of Electronic Calendars
Printouts of electronic calendars played a supplementary role: they were used as proxies of the master calendar when the master calendar was unavailable. 35% of survey participants reported printing their calendar. Among those who printed, all views were commonly printed: monthly (43%), weekly (33%) and daily (25%) (figure 3). Printing frequency also varied: many printed their calendar monthly, weekly, or daily (figure 7).

Figure 7: How often users perform activities related to paper calendars.
Based on our interviews, we found that electronic calendars were printed for three main reasons:
• Portability. Users carried a printed copy of the master calendar to venues where collaboration was anticipated, such as meetings or trips. Even those who carried laptops and PDAs said that they relied on printed calendars for quick reference.
• Quick capture. Events were often entered into paper calendars first because of their easy accessibility, and were later transferred back to the digital calendar. One-third of all interviewees reported making changes to paper copies of their calendars. Not all these changes were propagated back to the master calendar, however.
• Sharing a read-only view with associates. Taping a printed calendar to the outside of office doors was common practice, as reported by interviewees.
In one instance, a user provided printed calendars to his subordinates so they could schedule him for meetings. These events were then screened by him before being added to the master calendar.
Wall Calendars
Wall calendars typically played a complementary role, and there was little overlap between the events on a wall calendar and those in an electronic calendar. 70% of survey participants had a wall calendar in their home or office; however, only 25% of users actually recorded events on it. Family events such as birthdays, vacations, and days off were most commonly recorded by interviewees. At home, wall calendars were located in the kitchen, on the fridge.
Index Cards
An extreme case of ad hoc paper calendar usage reported by one of our interviewees involved index cards, one for each day, that the participant carried in his shirt pocket when he forgot his PDA. Another interviewee reported exclusively using index cards for calendar management at their previous job because of their portability and trustworthiness. We report this not as a trend, but to illustrate the wide variety in the use of paper calendars.
Reminders and Alarms
Reminders and alarms are one of the major distinguishing features of modern electronic calendar systems. A majority of survey participants (63%) reported using these features. One user reported switching from paper to an online calendar because "a paper calendar cannot have an alarm feature". We use the term reminder to refer to any notification of a calendar event, and alarm to refer to the specific case of an interruption generated by the calendar system. Based on our interviews, we classified reminders into three categories taking into consideration the reasons, time, number, modalities and intervals of alarms. Before presenting the details of such a classification, however, we examine the individual factors in more detail.
Reasons for Using Alarms
Although reminding oneself of upcoming events is the most obvious use case for alarms, there were several other situations where users mentioned using reminders in addition to consulting their calendars regularly. Even when users were cognizant of upcoming events, they preferred to set alarms to interrupt them and grab their attention at the appointed hour. Alarms served as preparation reminders for events that were not necessarily in the immediate future.
When subordinates added events to a primary user's calendar, alarms were deemed an important way of notifying that user of such events. Early morning meeting reminders doubled up as wake-up alarms: one interviewee reported keeping their PDA by their bedside for this purpose. Another interviewee who needed to move his car out of a university parking lot where towing started at 8:00 am sharp had set a recurring alarm (figure 8). In one case, alarms were closely monitored by a user's secretary: if an event were missed by the user by a few minutes, the secretary would check on her boss and remind him to attend the meeting that was now overdue.
Number and Modalities of Reminders
While most survey participants only set a single reminder per event (52%), many others reported using multiple alarms. We conclude from our interviews that different semantic meanings were assigned to each such reminder: an alarm one day before an event was for preparation purposes, while an alarm 15 minutes before an event was a solicited interruption. Multimodal alarms were not used by many: the two most popular modalities used individually were audio (40%) and on-screen dialogs (41%).
Alarm Intervals
Reminders were set for varying intervals of time before the actual events took place, ranging from 5 minutes to several years. The two factors that affected this timing were (1) location of the event, and (2) whether or not (and how much) preparation was required. Users often set multiple alarms to be able to satisfy each of these requirements, because a single alarm could not satisfy them all. Based on these findings, we classify alarms into 3 categories:
• Interruption Reminders. Alarms set 5-15 minutes before an event were extremely short-term interruptions intended to get users up from their desks. Even if they knew in their mind that a particular event was coming up, it was likely that they were involved in their current activity deeply enough to overlook the event at the precise time it occurred. 15 minutes was the most common interval, as reported by 8 out of 16 interview participants. We found that the exact interval for interruption reminders was a function of the location of the event. Events that occurred in the same building as the user's current location had alarms set for between 5 and 15 minutes. Events in a different building had alarms for between 15 minutes and 30 minutes, based on the time it would take to reach there. Two interviewees reported that they set alarms for TV shows and other activities at home for up to 1 hour prior, because that is how long their commute took.
• Preparation Reminders. Users set multiple alarms when preparation was required for an event: the first (or earlier) alarm was to alert them to begin the preparation, while a later alarm was the interruption reminder for that event.
Payne [Payne, 1993] mentions the prevalence of this tendency as well: the reason for the first alarm (out of several) is to aid prospective remembering where the intention to look up an event is not in response to a specific temporal condition, but instead such conditions are checked after the intention is recalled. If certain items were needed to be taken to such meetings, preparation reminders were set for the previous night or early morning on the day of the event. Based on the interviews, preparation reminders were more commonly used for non-recurring events than for recurring events.
• Long-term Reminders. Events several months or years into the future were assigned reminders so that the user would not have to remember to consult the calendar at that time, but instead would have them show up automatically at (or around) the proper time. This is an illustration of using the calendar for prospective remembering tasks. Examples include a department head who put details of faculty coming up for tenure in up to 5 years, and a professor setting reminders for a conference submission deadline several months later.
Calendars as a Memory Aid
Calendars serve a valuable purpose as external memory for events [Payne, 1993]. In addition, in our data we found that the role that calendars play with respect to memory goes beyond this simple use. In particular, the following uses of calendars illustrate the different ways in which calendars serve as memory aids beyond simple lookups: First, users reported recording events in the calendar after the fact, not for the purpose of reminding, but to support reporting needs. Second, a few reported using previous years' calendars as a way to record memorable events to be remembered in future years. For those that used paper calendars, these events were often copied at the end of the year to newer calendars. The function of memory aid goes beyond remembering personal events (appointments and deadlines); it serves as a life journal, capturing events year after year. Kelley and Chapanis [Kelley and Chapanis, 1982] reported that 9 out of 11 respondents in their study kept calendars from two to 15 years.
Reporting Purposes
In our study, 10 out of 16 interviewees reported that they used their calendar to generate annual reports every year. Since it contained an accurate log of all their activities that year, it was the closest to a complete record of all accomplishments for that year. Among these, 5 users reported that they archived their calendars year after year to serve as a reference for years later. This tendency has also been reported in past studies [Kelley and Chapanis, 1982, Payne, 1993]; Kelley referred to it as an 'audit trail', and highlighted the role of calendars in reporting and planning.
One person mentioned that they discovered their father's journal a few years after his death, and now they cultivate their calendar as a memento to be shared with their kids in the future.
"I think I occasionally even think about my kids. Because I do, I save them, I don't throw them away [...] I think that it's common with a little more sense of mortality or something. It's trying to moving things outwards."
Opportunities for Design
In this section, we highlight how some of our findings can be addressed through new electronic calendar designs.
Paper Calendars and Printing
We do not believe that paper calendars will disappear from use; they serve several useful functions that are hard to replace by technology. Electronic calendars in general are more feature-rich than paper calendars. Portable devices have good support for capturing information while mobile. Yet, we found that paper calendars and proxies continue to be prevalent in the use of calendar management. They provide support for easy capture of calendar information, are effective at sharing, and support the display of the calendar in public view with ease.
Therefore, given the many uses of paper calendars, we consider how electronic calendar systems can provide better support for these proxies. Richer printing capabilities might provide easy support for transferring online calendar information to the paper domain. Printing a wall calendar is a novelty relegated to specialized design software. However, our findings show that wall calendars play a significant role in supporting calendar management, particularly at home. With affordable printing technology available, it is possible to print a wall calendar or table calendar at home, incorporating not only details of events from a user's personal electronic calendar, but also visual elements such as color coding, digital photos (for birthdays, etc.) and event icons. In a way, printed calendars are used in ways similar to those discussed in [Lin et al., 2004].
Digital Paper Trails
Some of the features of paper calendars can be recreated in online systems. For example, current electronic calendar systems remove all traces of an event upon cancellation, without providing an option to retain this historical record. This was one of the shortcomings which led interview participants to rely on paper instead. Instead of deleting events, they could be faded out of view, and made visible upon request. Most calendar software supports the notion of different calendars inside the same program. A possibility is that all deleted events could simply be moved to a separate calendar, where events can be hidden easily. Yet, the events would remain in the calendar as a record of cancelled activity.
Tentative Event Scheduling
Several participants indicated that they 'penciled in' appointments in their paper calendars as tentative appointments to be confirmed later (also identified as a problem in [Kelley and Chapanis, 1982]). These tentative appointments served as a way of blocking particular date/time combinations while a meeting was being scheduled with others. Often, there were several of these tentative times for a particular meeting. Once the meeting was confirmed, only one of them was kept and the rest discarded. This type of activity is not well-supported in personal calendars. For corporate calendars, there is adequate support for scheduling group meetings, but it is often missing in personal calendars.
Intelligent Alarms
Calendar alarms and reminders have evolved from past systems and now allow notification in several ways: audible alarms, short text messages, popup reminders, and email are just a few. However, the fundamental concept of an alarm is still tailored only to interruption reminders.
• Preparation reminders. To support preparation reminders, many electronic calendars allow the creation of multiple alarms per event, with different modalities for each (e.g., email, SMS, sounds, dialog box). However, when these reminders are used for preparation, as we found in the study, users often wanted to have more context: they expected to have an optional text note to indicate what preparation was required. E.g., alarms that would remind a user before leaving home to remember to carry material for an upcoming meeting, or a reminder the previous night to review documents.
• Location-related alarms. The location of events was found to be an important influencer of alarm time. If calendars supported the notion of location (besides simply providing a field to type it in), alarms could be automatically set based on how long it would take the user to reach the event.
• Alarms on multiple devices. When an alarm is set on multiple devices, each will go off at the exact same time without any knowledge of the others. There is a need to establish communication among the devices to present a single alarm to the user on the mutually-determined dominant device at the time (a small illustrative sketch follows this list).
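As a purely hypothetical sketch of the two ideas above (travel-aware alarm timing and a single dominant device), using invented names such as travel_minutes and a device "kind" field rather than any real calendar API:

```python
# Hypothetical sketch only: pick an alarm time from an estimated travel time plus
# a small buffer, and let one "dominant" device own the notification.
from datetime import datetime, timedelta


def alarm_time(event_start: datetime, travel_minutes: int, buffer_minutes: int = 5) -> datetime:
    """Fire the interruption reminder early enough to cover the commute."""
    return event_start - timedelta(minutes=travel_minutes + buffer_minutes)


def dominant_device(devices, priority=("phone", "desktop", "tablet")):
    """Choose a single active device to show the alarm, instead of all at once."""
    active = [d for d in devices if d.get("active")]
    ranked = sorted(
        active,
        key=lambda d: priority.index(d["kind"]) if d["kind"] in priority else len(priority),
    )
    return ranked[0] if ranked else None
```

The sketch assumes a travel-time estimate is available from the event's location field; how that estimate is obtained is left open.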
Supporting a Rich Variety of Event Types
Users reported that not all events were equal: public events were merely for awareness, recurring events indicated that time was blocked out, and holidays were added to prevent accidental scheduling. From the users' point of view, each has different connotations, different visibility (public events should ideally fade out of sight when not required), and different types, number and intervals of alarms.
• Event templates. A calendar system that supports event types can provide ways and means for users to create event templates and categories with different default settings along each of the dimensions outlined above. By having event templates, quick capture is supported as well. When much of the extra information about an event is pre-filled, data entry can be minimized to just the title of the event. Certain types of events have special metadata fields associated with them, e.g. conference call events contain the dial code, flight events contain airline and arrival/departure info. This could be easily achieved by event templates.
• Showing/hiding public events. While a few users said they added public events for informational purposes, others did not want public events (that they would not necessarily attend) to clutter their calendar. If calendars supported making certain event types visible or invisible on demand, the needs of both user groups could be met. Again, by providing an option to keep all events in the same calendar, such a system would contribute to reducing information fragmentation.
Reporting and Archival Support
Report generation is a significant use of electronic calendars. Calendar software should have a way to generate reports and export information so that particular groups of events can be summarized in terms of when the meetings/events occurred, how many hours were devoted to them, and capture any notes entered in the calendar. One participant reported that he uses the search functionality in his calendar to obtain a listing of events related to a theme. This is used to get an idea of the number of hours devoted to particular activities and help to prepare an annual activity report.
Discussion & Future Work
The paradox of encoding and remembering, as described in [Payne, 1993], was clearly evident in our data. Participants seem to over-rely on calendar artifacts to remember appointments, as seen in the setting of multiple alarms, printing of calendars for meetings, carrying a PDA everywhere, and calling their secretary to confirm events. The unfortunate side effect of sharing the management of a calendar with other people is that the primary user no longer goes through the personal encoding episode of entering the information. Some participants relied on administrative assistants to enter events in their calendars. At home, many participants relied on their spouses to maintain the calendar. Some participants even suggested the need to have an alarm for when events were added to their calendars. All of this points to a diminished opportunity for encoding the information that is entered into one's calendar. This makes it very difficult for participants to remember what is in their calendar, given that at times the scheduled events have never been seen before they occur. On the other hand, the opportunity for rehearsal is greater today, if users take advantage of existing information dissemination and syndication techniques. For example, keeping a calendar on a desktop computer and publishing to an online calendar service such as Google Calendar or Apple Mobile Me makes the calendar available in many other locations. Users can view their calendar on the web from any web browser, from mobile phones, or in the background on a desktop computer as part of widgets (tiny applications) such as Apple's Dashboard or Google Gadgets, or access it over a regular phone call [Pérez-Quiñones and Rode, 2004]. So, the possibility of opportunistic rehearsal is afforded by current systems. We did not observe this in our data, as many of our users did not use these services. However, the paradox of encoding, rehearsal, and recall seems to be in need of future work so we can understand the impact of electronic calendar systems on human memory.
• What is your age group?
Calendar Use Basics
• Which devices do you own or use frequently?
• What computing-enabled calendars do you use?
• Do you use your computer to keep your calendar? If so, which program do you use for your main calendar management task on your desktop/laptop computer?
• If you own and/or use a PDA, which calendar program do you use on the PDA?
• Do you use an online calendar?
• What events do you record on your calendar?
• How often do you visit your calendar?
• How far ahead do you regularly look when you view your calendar?
• What would you consider your preferred view?
• If your calendar software includes a To-Do function, do you use it?
• Does your calendar software have a way to classify calendar events by categories? If so, how do you use this feature?
• Who changes and updates your calendar?
• How often do you add new events?
• Do you keep 'proxies' (for example, post-its) or other notes that need to be entered in the calendar at a later time?
• How long does it take for the proxy to make it into your main calendar?
New Events
• How frequently do you get events by phone (someone calls you) that go into your calendar?
• How frequently do you get events by e-mail (someone sends you email) that go into your calendar?
• How frequently do you get events in person (someone tells you of a meeting) that go into your calendar?
• By what other methods do new events arrive?
• Is there any overlap? Is one just a pared-down version of the other one or do they contain completely separate events?
• Do you coordinate calendar events with your spouse, roommate, family?
• If so, how do you go about doing that?
• Please explain any additional ways in which you use your calendar system.
• What are your habits as far as when you look at your calendar, how often, how far ahead do you look, how in-depth you examine events when you look, etc.
• Do you use a method of organization on a paper calendar that you cannot apply to an electronic calendar? (i.e.: specific types of events go into a specific area of the date box, highlighted events, etc)
• Is there anything else about your personal information management we have not covered?
| 6,477 |
0809.0460
|
1566903869
|
Abstract: In this paper, we present approximation algorithms for combinatorial optimization problems under probabilistic constraints. Specifically, we focus on stochastic variants of two important combinatorial optimization problems: the k-center problem and the set cover problem, with uncertainty characterized by a probability distribution over the set of points or elements to be covered. We consider these problems under adaptive and non-adaptive settings, and present efficient approximation algorithms for the case when the underlying distribution is a product distribution. In contrast to the expected cost model prevalent in the stochastic optimization literature, our problem definitions support restrictions on the probability distributions of the total costs, via incorporating constraints that bound the probability with which the incurred costs may exceed a given threshold. 1 Introduction A prevalent model to deal with uncertain data in optimization problems is to minimize expected cost over an input probability distribution. However, the expected cost model does not adequately capture the following two aspects of the problem. Firstly, in many applications, constraint violations cannot be modeled by costs or penalties in any reasonable way (e.g., safety relevant restrictions like levels of a water reservoir). Thus, if the problem constraints involve an uncertain parameter, one would rather insist on bounding the probability that a decision is infeasible. This leads to
|
A recent unpublished work by Swamy @cite_8 considers two stage risk-averse models for stochastic set cover and related combinatorial optimization problems. In the two stage recourse model, some sets can be chosen in the first stage at a low cost, and then if a scenario is not covered, more sets can be bought in the second stage as a recourse action. The risk averse problem is to minimize the sum of the first stage cost and the value-at-risk for the second stage. It was observed that if the value-at-risk for the second stage is fixed to be @math , the problem reduces to chance-constrained set cover without recourse, the same as our non-adaptive set cover problem. Although the algorithms in @cite_8 can be used under more general assumptions of "black box distributions", we present faster algorithms that achieve better approximation factors for the special case of product distributions. Specifically, in contrast to the results in @cite_8 , we do not incur any approximation in the probabilistic constraint, and the running time of our algorithms is independent of the input threshold @math .
|
{
"abstract": [
"We consider various stochastic models that incorporate the notion of risk-averseness into the standard 2-stage recourse model, and develop novel techniques for solving the algorithmic problems arising in these models. A key notable feature of our work that distinguishes it from work in some other related models, such as the (standard) budget model and the (demand-) robust model, is that we obtain results in the black-box setting, that is, where one is given only sampling access to the underlying distribution. Our first model, which we call the risk-averse budget model, incorporates the notion of risk-averseness via a probabilistic constraint that restricts the probability (according to the underlying distribution) with which the second-stage cost may exceed a given budget B to at most a given input threshold . We also a consider a closely-related model that we call the risk-averse robust model, where we seek to minimize the first-stage cost and the (1- )-quantile of the second-stage cost. We obtain approximation algorithms for a variety of combinatorial optimization problems including the set cover, vertex cover, multicut on trees, min cut, and facility location problems, in the risk-averse budget and robust models with black-box distributions. We obtain near-optimal solutions that preserve the budget approximately and incur a small blow-up of the probability threshold (both of which are unavoidable). To the best of our knowledge, these are the first approximation results for problems involving probabilistic constraints and black-box distributions. A major component of our results is a fully polynomial approximation scheme for solving the LP-relaxation of the risk-averse problem."
],
"cite_N": [
"@cite_8"
],
"mid": [
"1578746725"
]
}
|
Stochastic Combinatorial Optimization under Probabilistic Constraints
|
A prevalent model to deal with uncertain data in optimization problems is to minimize expected cost over an input probability distribution. However, the expected cost model does not adequately capture the following two aspects of the problem. Firstly, in many applications, constraint violations cannot be modeled by costs or penalties in any reasonable way (e.g., safety relevant restrictions like levels of a water reservoir). Thus, if the problem constraints involve an uncertain parameter, one would rather insist on bounding the probability that a decision is infeasible. This leads to probabilistic or chance constraints of the type P(decision is infeasible) ≤ ρ for a given threshold ρ. Secondly, minimizing expected cost does not reflect a user's attitude toward risk: a decision with moderate cost in every scenario may be preferable to one with lower expected cost but a very high cost for a disaster scenario with non-negligible probability. A risk averse user will naturally prefer the former decision. Various measures have been proposed in the finance and stochastic optimization literature to capture this notion of risk averseness. A popular measure is the 'value-at-risk (VaR)' measure, which is widely used in financial models, and has even been written into some industry regulations [7,12]. For a given risk aversion level ρ, the value-at-risk is given by the smallest value γ such that the probability that the objective cost exceeds γ is less than ρ. This leads to the probabilistic constraint:
P (f (x, ξ) ≥ γ) ≤ ρ
where f (x, ξ) is the objective value for decision x in scenario ξ.
In this paper, we develop approximation algorithms for such probabilistically constrained optimization problems. Specifically, we look at stochastic variants of two important combinatorial optimization problems: the k-center problem and the set cover problem, with uncertainty characterized by a probability distribution over the subset of points or elements to be covered. We study the problems under "non-adaptive" and "adaptive" settings. In the non-adaptive setting, the entire set cover (k-center) must be chosen before the random element set is known. The goal is to minimize the covering cost (clustering distance) while satisfying a constraint that the probability of covering a random subset of elements is higher than a given input threshold. In the adaptive setting, the set cover (k-center) can be chosen adaptively for each scenario after observing the random element set. The goal is to determine the quality of the optimal adaptive solution using the value-at-risk (VaR) measure, that is, determine the minimum value γ such that the probability that the covering cost (clustering distance) exceeds γ is less than ρ. Note that these two settings capture the two problem aspects mentioned in the previous paragraph.
Below we give formal definitions of our optimization problems and assumptions made on the statistical information available; followed by a summary of results and related previous work.
Non-adaptive stochastic k-center: Consider a set V of n vertices. Assume that distance d(u, v) between two vertices u and v in V is given by a graph metric G = (V, E). The deterministic k-center problem is to find a subset C ⊆ V , |C| ≤ k, which minimizes the distance r such that
max v∈V d(v, C) ≤ r
In the stochastic k-center problem, the subset of V that actually needs to be served is given by a random variableṼ , where each vertex v i appears inṼ independently with probability p i . The problem is to choose a set C ⊆ V , |C| ≤ k, which minimizes the distance r such that
P (max v∈Ṽ d(v, C) ≤ r) ≥ 1 − ρ for a small input constant 0 < ρ ≤ 1.
Adaptive stochastic k-center In the adaptive setting, the k centers will be chosen after the random subset Ṽ becomes known. Thus, the k-center solution C̃ is itself a random variable, and depends on the random subset Ṽ. The problem is to compute the value-at-risk, that is, the distance r such that
P (max v∈Ṽ d(v,C) > r) ≤ ρ
Here C̃ denotes the optimal k-center solution for the subset Ṽ.
Non-adaptive stochastic set cover Given a universe of n elements E = {e_1, e_2, . . . , e_n}, and a family S of m subsets of E. The deterministic set cover problem is to find the minimum cost subcollection C ⊆ S such that every element in E is covered by some set in C. In the stochastic set cover problem, the elements to be covered are a random subset Ẽ of E, where each element e_j appears independently in Ẽ with probability p_j. The problem is to find a minimum cost subcollection C ⊆ S such that the probability that every element in Ẽ is covered by some set in C is higher than an input threshold 1 − ρ.
Adaptive stochastic set cover In the adaptive setting, the set cover will be chosen after the random subset of elements Ẽ becomes known. The problem is to compute the value-at-risk B, that is, the minimum value B such that
P(Σ_{i∈C̃} c_i > B) ≤ ρ
Here C̃ denotes the optimal set cover for the random subset Ẽ.
Summary of our results
For the k-center problems (non-adaptive and adaptive), we present polynomial-time dynamic programming algorithms that give optimal solutions for tree metrics. Moreover, we show that the algorithms for tree metrics can be extended to give an efficient PTAS for planar graph metrics, and more generally a class of graphs called 'bounded genus' graphs. Here, the approximation is only in the number of centers; the probabilistic constraint holds exactly. For the set cover problem, we give an O(log n)-approximation algorithm for the non-adaptive case. We also show that for the adaptive case of this problem, verifying the probability threshold is at least as hard as the problem of counting maximum independent sets of a graph, and hence is likely to be very hard to approximate. We use combinatorial optimization techniques like dynamic programming to obtain fast and accurate algorithms for stochastic optimization problems. A common limitation of previous work [6,13] on approximation algorithms for probabilistically constrained optimization problems is that the probabilistic constraint cannot be satisfied accurately. That is, an approximation of the type P(f(x, ξ) ≥ (1 + ε_1)B) ≤ (1 + ε_2)ρ is obtained. We overcome this limitation by taking advantage of the special structure of the problems in the case of product distributions, and obtain approximation algorithms where the probabilistic constraints hold exactly.
Non-adaptive stochastic k-center problem
In this section, we look at the non-adaptive k-center problem. We present a dynamic programming algorithm for choosing a set C ⊆ V of k centers that maximizes the 'success probability' P(max_{v∈Ṽ} d(v, C) ≤ r) for a given distance r. The final solution can then be found by doing a binary search for the optimal r over a sorted list of n² distances. Below, we first describe an exact algorithm for tree metrics. The algorithm is similar in spirit to the dynamic programming algorithm given in [8] for the (deterministic) k-median problem under tree metrics. In the sequel, we extend this algorithm to obtain approximation algorithms for more general graph metrics.
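To make the objective concrete, here is a minimal brute-force sketch (not the paper's algorithm) that evaluates the success probability of a fixed center set under the independence assumption and exhaustively searches over center sets of size k; the names dist and p are illustrative placeholders for the metric and vertex probabilities.

```python
# Brute-force illustration (exponential; tiny instances only).
# dist: precomputed pairwise distances, p: vertex appearance probabilities.
from itertools import combinations


def success_probability(C, dist, p, r):
    """P(max_{v in V~} d(v, C) <= r): every vertex farther than r from C must not appear."""
    prob = 1.0
    for v, pv in enumerate(p):
        if min(dist[v][c] for c in C) > r:
            prob *= 1.0 - pv  # vertex v must be absent from the random subset
    return prob


def best_centers_bruteforce(dist, p, k, r):
    """Maximize the success probability over all center sets of size k."""
    n = len(p)
    return max(combinations(range(n), k),
               key=lambda C: success_probability(C, dist, p, r))
```

A binary search over the O(n²) candidate radii would wrap such an evaluation; the dynamic program described next replaces the exponential enumeration with a polynomial-time computation on trees.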
Exact algorithm for tree metrics
Our algorithm for tree metrics is based on a key property of our model, that is, "for any subtree in a tree, once the number of centers in the subtree and the center closest to its root are fixed, the probability of success for the subtree is independent of the rest of the tree". The reason this property holds lies in the structural properties of the problem on a tree graph and in our independence assumption on the vertex probabilities. The hierarchical structure of the tree ensures that the closest center to any vertex in a subtree either lies inside the subtree or is the center closest to the root of the subtree. The independence assumption on vertices implies that inter-dependencies between disjoint subtrees are caused only by the common centers used to cover them. Once the closest center to the root and the number of centers in the subtrees are fixed, the joint probability of success for a tree can be expressed as the product of the success probabilities of its subtrees. This observation will give us the optimal substructure property required for a dynamic programming approach.
We make these ideas more precise in the following.
Dynamic programming algorithm Given a rooted tree T = (V, E) with root v_0, T_v denotes the subtree of T under vertex v (including v), e(v, t) denotes the t-th child edge of vertex v, and T_{e(v,t)} denotes the subtree of T_v on the left of the edge e(v, t) (including v and edge e(v, t)). Also, t_s denotes the total number of child edges of a vertex v_s. Now, for any subtree T̄ = {S, Ē} of T, define the function H(T̄, j) as the maximum probability (i.e., the probability under an optimal choice of centers) that a random subset S̃ of S can be covered by j centers. Given the clustering distance r, we say that a set of vertices is covered by a set of centers iff for every vertex there is some center within distance r.
H(T̄, j) = max_{C_j ⊆ S, |C_j| = j} Pr(C_j covers S̃)
Note that H(T, k) gives the desired optimal value. We now define a function R(T̄, j, v) which will prove to be an essential tool for computing values of H(·). Suppose it is given that v is a closest center to the root of the subtree T̄; then R(T̄, j, v) is defined as the maximum probability that a set of j − 1 centers in S, along with the center v, can cover a random subset of S. That is,
R(T̄, j, v) = max_{C_{j−1} ⊆ S, |C_{j−1}| = j−1} Pr(C_{j−1} ∪ v covers S̃)
We employ a dynamic programming type procedure that proceeds bottom up in the tree and computes all values of R(T e(v 1 ,l) , j, v 2 ) and H(T v , j) (and finally H(T, k) for the whole tree T ).
The initial values: For any leaf v, T_v = v, and
H(T_v, j) = 1 if j ≥ 1; = 1 − p_v otherwise.
Also, for any vertex v 1 , T e(v 1 ,0) = v 1 . So, for any pair of vertices v 1 , v 2 :
R(T_{e(v_1,0)}, j, v_2) = 1 if j > 1; = 1 if j = 1 and d(v_1, v_2) ≤ r; = 1 − p_1 otherwise.
Figure 1: Tree T_{e(v_1,l)} and its subtrees T_{e(v_1,l−1)} and T_{v_3} (with v_3 the child of v_1 across edge e(v_1, l)).
Computation of H(T v 1 , j): Let C be the optimal set of j-centers for tree T v 1 , and v r be the closest center to v 1 in C. Then, by the definition:
H(T v 1 , j) = R(T e(v 1 ,t 1 ) , j, v r )
Therefore we can compute H(T v 1 , j) using the following relation:
H(T v 1 , j) = max v 2 ∈Tv 1 R(T e(v 1 ,t 1 ) , j, v 2 )
Computation of R(T_{e(v_1,l)}, j, v_2): By definition, v_2 is the closest center to the root v_1 of the subtree, and l ≤ t_1, the number of child edges of v_1. If v_1 is a leaf, then t_1 = 0, and R(T_{e(v_1,0)}, j, v_2) is given by the initial values. Assume that v_1 is not a leaf and l ≥ 1. Let v_3 be the vertex on the other end of edge e(v_1, l) (see Figure 1). The value of R(T_{e(v_1,l)}, j, v_2) is given by the following recursion:
R(T e(v 1 ,l) , j, v 2 ) = max j 1 ,j 2 ∈[0,j] {R(T e(v 1 ,l−1) , j 1 , v 2 ) · R(T e(v 3 ,t 3 ) , j − j 1 + 1, v 2 ), R(T e(v 1 ,l−1) , j 2 , v 2 ) · H(T v 3 , j − j 2 )}
The reason this equation holds is as follows. Since v 2 was the closest center to the root of subtree T e(v 1 ,l) , it remains closest center to the root of subtree T e(v 1 ,l−1) . However for subtree T v 3 (same as T e(v 3 ,t 3 ) ), there are two possible choices: either v 2 remains the closest center, or a center in T v 3 is the closest center. The two terms on the right represent these two choices. The product expression follows from the independence property discussed in the beginning of this section.
We order the vertices of the tree from bottom to top and left to right. At stage i, we compute values R(T e(v 1 ,l) , j, v 2 ) for i th vertex v 1 picked in this order. For a given vertex v 1 , R(T e(v 1 ,l) , j, v 2 ) is computed for increasing values of l and j, and all choices of v 2 in T . Then, we compute values H(T v 1 , j), and go on to the next stage. Thus, at any stage, all the terms in above expression are already known from computations in the previous stages.
Computing the optimal solution Assume that we have calculated (and recorded) all values of H(·) and R(·). H(T, k) gives the optimal probability. The corresponding optimal set of k centers can be generated by carrying out another pass over this table of values. This is a standard component of any dynamic programming procedure; we omit the details here.
Running time complexity For each edge e(v_1, l) and each vertex v_2, we compute R(T_{e(v_1,l)}, j, v_2) for all k values of j. Also, each computation of R(·) requires taking a maximum over at most 2k terms. Therefore, the total complexity of computing the terms R(·) is O(n²k²). For each vertex v, there are at most k values of j for which H(·) needs to be computed, and each of these computations takes O(n) steps. Hence, the total complexity of computing the terms H(·) is O(n²k).
Also, as a preprocessing step for the algorithm we compute the distance matrix of the tree (this requires O(n²) steps), and the algorithm needs to be repeated for the log n² values of r probed by the binary search. Thus, the total complexity of the procedure is O(n²k² log n).
Extensions
Extensions to more general graph metrics In this section, we extend our algorithm to obtain an efficient PTAS for planar graphs and a more general class of graphs called "bounded genus graphs". The heart of this approach lies in the adaptability of the structure of c-outerplanar graphs to dynamic programming. A c-outerplanar graph has the property that it can easily be decomposed into two subgraphs with just 2c common boundary nodes [1]. Now, a dynamic programming algorithm similar to our algorithm for the tree case can be used. For a given c, let G be a c-outerplanar graph. Then, using techniques in [1], G can be recursively decomposed into c-outerplanar subgraphs G_1 and G_2 with at most 2c common boundary nodes. The dynamic programming recursion is now defined as:
H(G, j) = max_{{v_i}_{i=1}^{2c} ⊆ V} R(G, j, {v_i})
R(G, j, {v_i}) = max_{0 ≤ j_1, j_2 ≤ j} max_{U ⊆ {v_i}, U ≠ ∅} { R(G_1, j_1, {v_i}) · R(G_2, j − j_1 + |U|, U), R(G_1, j_2, {v_i}) · H(G_2, j − j_2) }
Since there are n vertices, there are at most nk values of G and j for which H has to be computed, and each computation requires taking a maximum over n^{2c} values. So the complexity of computing the terms H(·) is O(n^{2c+1} k). Similarly, there are n^{2c+1} k values for which R(·) has to be computed, and each computation requires taking a maximum over 2^{2c+1} k terms. Hence, the total running time of the above procedure is O(n^{2c+1} k²). To extend this approach to general planar graphs, we can use graph decomposition concepts from [1]. Here, we give an outline of the method. The idea is to decompose the planar graph into disjoint (c + 1)-outerplanar components by copying the nodes in every c-th 'level' [1], and then use the above algorithm for the resulting (c + 1)-outerplanar graph. Note that we are potentially duplicating the centers in the copied levels. However, by the pigeonhole principle, there exists i ∈ {1, . . . , c} such that if we copy levels congruent to jc + i, j > 0, then the number of centers increases by a factor of at most 1 + 1/c. This gives a (1 + 1/c)-approximation in the number of centers, with running time Õ(n^c). A result by Eppstein [4] shows that similar decompositions can be achieved in polynomial time for a more general class of graphs called "bounded genus" graphs. Thus, our approximation algorithms extend in a natural way to this class of graphs.
Extensions to other covering problems Our algorithm can be directly applied to other stochastic covering problems on planar graphs, like vertex cover, edge cover and dominating set. The basic idea remains the same: once we fix the number of centers (covering nodes or edges) in a subgraph and the closest center(s) to its boundary node(s), the probability of covering the subgraph is independent of the rest of the graph. Note, however, that for problems with non-uniform cost of centers, our dynamic programming algorithm will be pseudo-polynomial (polynomial in the 'total cost').
Adaptive stochastic k-center problem
In the adaptive setting, the goal is to find the minimum distance r such that the failure probability P(max_{v∈Ṽ} d(v, C̃) > r) is less than ρ. Again, the desired value r could be found by doing a binary search over the n² values of r, and testing for each r whether the failure probability is less than ρ. However, evaluating this probability term is not straightforward. Here, a key difference from the non-adaptive setting is that a different set of centers C̃ is chosen for each random scenario Ṽ, optimized for the subset of vertices in that scenario. A brute force approach to find the failed scenarios would require solving a deterministic k-center problem for each of the 2^n subsets of V.
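For intuition only, the following sketch spells out that exponential brute force (feasible only for tiny instances); dist and p are again illustrative placeholders, and the dynamic program below avoids this enumeration.

```python
# Brute-force sketch of the adaptive failure probability (exponential time).
from itertools import combinations


def min_centers_needed(S, dist, r, n):
    """Smallest j such that some j centers (chosen from all n vertices) cover S within r."""
    if not S:
        return 0
    for j in range(1, n + 1):
        for C in combinations(range(n), j):
            if all(min(dist[v][c] for c in C) <= r for v in S):
                return j
    return n  # never reached: placing a center at each vertex of S always works


def adaptive_failure_probability(dist, p, k, r):
    n = len(p)
    total = 0.0
    for mask in range(1 << n):                       # all 2^n scenarios
        S = [v for v in range(n) if mask >> v & 1]
        if min_centers_needed(S, dist, r, n) > k:    # scenario fails even adaptively
            prob = 1.0
            for v in range(n):
                prob *= p[v] if mask >> v & 1 else 1.0 - p[v]
            total += prob
    return total
```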
In this section, we propose a dynamic programming algorithm to compute this failure probability in polynomial-time for a given value of r. First, we present an exact algorithm for tree metrics, and then extend it to more general graph metrics.
Exact algorithm for tree metrics
The basic idea in our algorithm is to characterize each random subset of a subtree via a profile (j, d, d′) that completely captures its covering properties. Specifically, given a subtree T̄ = {S, Ē}, a random subset S̃ ⊆ S belongs to a profile (j, d, d′) if and only if
• the minimum number of centers sufficient to cover S̃ within distance r is j,
• among the covers of size j, the minimum distance of a center to the root of T̄ is d, and
• d′ is the maximum distance such that if a vertex v′ outside the subtree T̄ and at distance d′ from its root is a center, then the subtree can be covered using only j − 1 centers. If no such vertex v′ exists, then d′ = −d.
Note that each subset of vertices belongs to exactly one profile (j, d, d′). This is because there is a unique minimum number of centers j required for any subset, and that corresponds to a unique minimum distance d of the closest center to the root. Also note that, using help from any vertex v′ outside the tree, at most one center can be removed from the j centers; otherwise we could place a center at the root and reduce the minimum number of centers to j − 1. Taking the maximum of the distances of all such v′ from the root, we get our unique d′.
The above argument shows that the profiles (j, d, d′) define a disjoint partition over the subsets of any subtree T̄. Now, define the function DP(T̄, j, d, d′) as the probability of the random subsets of T̄ with profile (j, d, d′). Then, by definition, the probability of failure is given by:
Failure probability = Σ_{k<j≤n, d} DP(T, j, d, −d)    (1)
Here, d can take at most n possible values, corresponding to the possible distances of vertices from the root. Now, we are ready to present our dynamic programming algorithm. We use the same notation as in the previous section. The algorithm will compute all values DP(T_{e(v_1,l)}, j, d, d′) in a bottom-to-top, left-to-right order, finally computing the values DP(T, j, d, −d) that appear in the above expression for the failure probability.
Initial values:
For l = 0, T_{e(v,0)} = v, and
DP(v, j, d, d′) = p_v if j = 1, d = 0, d′ = max_{v′ ≠ v, d(v,v′) ≤ r} d(v, v′); = 1 − p_v if j = 0, d = 0, d′ = 0; = 0 otherwise.
Computation of DP(T_{e(v_1,l)}, j, d, d′): Now, assume that v_1 is not a leaf, and l ≥ 1. To compute DP(T_{e(v_1,l)}, j, d, d′) for some l, we reduce it to an expression involving the function DP(·) on the subtrees T_1 = T_{e(v_1,l−1)} and T_2 = T_{e(v_2,t_2)}, where v_2 is the vertex on the other end of edge e(v_1, l). We use the observation that a random subset Ṽ of this tree has profile (j, d, d′) if and only if Ṽ_1 = T_1 ∩ Ṽ and Ṽ_2 = T_2 ∩ Ṽ have profiles (j_1, d_1, d′_1) and (j_2, d_2, d′_2), respectively, satisfying one of the following conditions:
• j 1 + j 2 = j: In this case, we must ensure that the centers in V 1 do not help V 2 and vice-versa so that total minimum number of centers is j. Let w denote the distance d(v 1 , v 2 ), then we require
d 2 + w > d ′ 1 , d 1 + w > d ′ 2 .
To get d, the least of d 1 and d 2 + w must be equal to d, and to get d ′ , the max of d ′ 1 and d ′ 2 − w must be equal to d ′ .
• j 1 + j 2 = j + 1: In this case we must ensure that the centers in V 1 help V 2 or vice-versa, so that total minimum number of centers is j, that is
d 2 + w ≤ d ′ 1 , d 1 + w > d ′ 2 or d 2 + w > d ′ 1 , d 1 + w ≤ d ′ 2 .
To get d, the least of d 1 and d 2 + w must be equal to d.
To get d ′ , d ′ 1 must be equal to d ′ if V 2 is helped by V 1 , and d ′ 2 − w must be equal to d ′ if V 1 is helped by V 2 .
• j 1 + j 2 = j + 2: In this case we must ensure that the centers in V 1 help V 2 and vice-versa, so that total minimum number of centers is j, that is
d 2 ≤ d ′ 1 − w, d 1 ≤ d ′ 2 − w.
To get d, the least of d 1 and d 2 + w must be equal to d. Only negative values of d ′ (= −d) have this case.
It is easy to see that in each of the above cases, the conditions on d_1, d_2 and d′_1, d′_2 are necessary and sufficient to get the joint profile (j, d, d′). Let P denote the collection of pairs of profiles {(j_1, d_1, d′_1), (j_2, d_2, d′_2)} satisfying one of the above conditions. Then, using the fact that the profiles are disjoint, and the independence assumptions of the probability model, DP(T_{e(v_1,l)}, j, d, d′) can be expressed as DP(T_{e(v_1,l)}, j, d, d′) = Σ_P DP(T_{e(v_1,l−1)}, j_1, d_1, d′_1) · DP(T_{e(v_2,t_2)}, j_2, d_2, d′_2). Observe that due to the specific order in which we compute the values of DP(·), all terms in the above expression were already computed in a previous stage.
Running time complexity For each edge, we compute at most kn² values of DP(·) (possible values of j and d, d′). For each of these terms we sum over at most 3kn⁴ terms. Therefore, the total complexity is O(k²n⁶). The preprocessing time is O(n³) for computing distance pairs, and O(n²) for assigning initial values. Including the log n² iterations for the binary search on r, the effective complexity is O(k²n⁶ log n).
Extensions
The algorithm can be extended to more general graph classes and other covering problems on graphs, using ideas similar to those discussed at the end of previous section. We omit the details here.
Non-adaptive stochastic set cover problem
We give an approximation method for the non-adaptive stochastic set cover problem by reformulating it as a partial set cover problem. The problem (see Section 1) can be restated as:
min_x Σ_{i=1}^{m} c_i x_i   s.t.   P(Ẽ is not covered by x) ≤ ρ,   x_i ∈ {0, 1} ∀i ∈ [m]
Here, [m] denotes the set {1, . . . , m}. The value of the 0-1 variable x_i indicates whether set i is chosen or not. For any element j, let ∂j denote the collection of sets that cover the element j. Then, the indicator function I_j(x) = (1 − Σ_{i∈∂j} x_i)_+ takes value 1 if j is NOT covered by solution x and 0 otherwise. Using the assumptions on our probability model:
P(Ẽ is not covered by x) = 1 − Π_{j: I_j(x)=1} (1 − p_j)
Let l_j = log(1/(1 − p_j)), and l = log(1/(1 − ρ)). Then, the probabilistic constraint is equivalent to:
1 − Π_{j: I_j(x)=1} (1 − p_j) ≤ ρ  ⇔  Π_{j: I_j(x)=1} 1/(1 − p_j) ≤ 1/(1 − ρ)  ⇔  Σ_{j=1}^{n} I_j(x) l_j ≤ l
Therefore, we can reformulate our problem as:
min_x Σ_{i=1}^{m} c_i x_i   s.t.   Σ_{j=1}^{n} (1 − Σ_{i∈∂j} x_i)_+ · l_j ≤ l,   x_i ∈ {0, 1}
which is equivalent to the following integer program:
min_x Σ_{i=1}^{m} c_i x_i
s.t. Σ_{i∈∂j} x_i ≥ 1 − z_j   ∀j = 1, . . . , n
Σ_{j=1}^{n} z_j l_j ≤ l
x_i ∈ {0, 1}   ∀i = 1, . . . , m
z_j ∈ {0, 1}   ∀j = 1, . . . , n
The above problem can be interpreted as a 'partial set cover problem', where the penalty for not covering an element j is given by l_j. The partial set cover problem is to minimize the cost of the chosen sets (c^T x) such that the total penalty (Σ_j z_j l_j) for uncovered elements is less than a given limit (l). A (4/3 + ε) log n-approximation algorithm for the partial set cover problem appears in [9]. The algorithm can be directly used for the above problem.
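A small sketch of the reduction, with illustrative names only (sets as lists of element indices, p, rho): it converts the element probabilities into penalties and checks that, for a candidate cover x, the probabilistic constraint and the penalty budget agree. The approximation algorithm of [9] would then be run on the resulting partial-cover instance; it is not reproduced here.

```python
# Illustration of the reformulation: probabilities -> penalties, and the
# equivalence of the two constraints for a given 0/1 solution x.
import math


def to_partial_cover_penalties(p, rho):
    """l_j = log(1/(1 - p_j)) per element, and the budget l = log(1/(1 - rho))."""
    penalties = [math.log(1.0 / (1.0 - pj)) for pj in p]
    budget = math.log(1.0 / (1.0 - rho))
    return penalties, budget


def _covered_elements(x, sets):
    covered = set()
    for i, xi in enumerate(x):
        if xi:
            covered.update(sets[i])
    return covered


def constraint_holds(x, sets, p, rho):
    """Directly: P(some realized element is uncovered) <= rho, using independence."""
    covered = _covered_elements(x, sets)
    prob_all_covered = 1.0
    for j, pj in enumerate(p):
        if j not in covered:
            prob_all_covered *= 1.0 - pj
    return 1.0 - prob_all_covered <= rho


def budget_holds(x, sets, p, rho):
    """Equivalently: the total penalty of uncovered elements is at most the budget."""
    penalties, budget = to_partial_cover_penalties(p, rho)
    covered = _covered_elements(x, sets)
    uncovered_penalty = sum(penalties[j] for j in range(len(p)) if j not in covered)
    return uncovered_penalty <= budget + 1e-12  # tolerance for floating point
```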
Adaptive stochastic set-cover problem
In the adaptive setting, our goal is to compute the minimum value B such that the probability that the cost of the optimal set cover for a random subset of elements in E exceeds B is less than ρ. Given a fixed value B, we call the subsets of E with adaptive set cover cost > B failed subsets, and the probability of these subsets the failure probability. We show that even for the uniform cost edge cover case, the problem of approximating this failure probability is harder than the problem of approximately counting maximum independent sets in a graph. An inapproximability result for the latter problem appears in [10], which states that this problem cannot be approximated within a polynomial factor unless RP=NP (see Theorem 4 in [10]). Thus, a reduction from this problem will suggest that our problem is hard to approximate as well.
Given a graph G = (V, E) and a parameter k, we denote the edge cover failure probability by f(k). It is the probability of the random subsets Ṽ of V such that the number of edges in the edge cover of Ṽ is greater than k. We call such subsets of V the "failed subsets". Let each vertex appear independently in the random subset Ṽ with probability p (that is, p_i = p for all i). Denote by N_i(G, k) the number of failed subsets containing i vertices. Then,
f(k) = Σ_{i=k}^{n} N_i(G, k) · p^i (1 − p)^{n−i}
Denote the count of maximum independent sets of graph G by I(G)(≥ 1). Let m be the size of a maximum independent set in G. We show that computing f (m − 1) with a good approximation factor is harder than approximating the number of independent sets I(G). Note that N m (G, m) denotes the number of subsets of V that have m vertices and need m or more edges to cover them.
From inequalities (2) and (3), we can conclude that
(1/3) · f(m) / (p^m (1 − p)^{n−m}) ≤ I(G) ≤ f(m) / (p^m (1 − p)^{n−m})
Thus, if we have a (1 ± ε) approximation of f(m), then we could get a (1 ± (ε + 2/3)) approximation for I(G). This completes the reduction.
| 5,163 |
0808.0148
|
2949285221
|
We present a new method for upper bounding the second eigenvalue of the Laplacian of graphs. Our approach uses multi-commodity flows to deform the geometry of the graph; we embed the resulting metric into Euclidean space to recover a bound on the Rayleigh quotient. Using this, we show that every @math -vertex graph of genus @math and maximum degree @math satisfies @math . This recovers the @math bound of Spielman and Teng for planar graphs, and compares to Kelner's bound of @math , but our proof does not make use of conformal mappings or circle packings. We are thus able to extend this to resolve positively a conjecture of Spielman and Teng, by proving that @math whenever @math is @math -minor free. This shows, in particular, that spectral partitioning can be used to recover @math -sized separators in bounded degree graphs that exclude a fixed minor. We extend this further by obtaining nearly optimal bounds on @math for graphs which exclude small-depth minors in the sense of Plotkin, Rao, and Smith. Consequently, we show that spectral algorithms find small separators in a general class of geometric graphs. Moreover, while the standard "sweep" algorithm applied to the second eigenvector may fail to find good quotient cuts in graphs of unbounded degree, our approach produces a vector that works for arbitrary graphs. This yields an alternate proof of the result of Alon, Seymour, and Thomas that every excluded-minor family of graphs has @math -node balanced separators.
|
Connections with discrete conformal mappings. One can view the minimizer of (or, more appropriately, the maximizer of the vertex version ) as a sort of "global uniformizing" metric for general graphs. In the setting of discrete conformal mappings, a number of variationally defined objects appear, and duality is often an important component in their analysis. We mention, for instance, the extremal length @cite_23 as a prominent example. It also often happens that one chooses a weight function @math as the minimizer of some convex functional, and this weight function plays the role of a discrete Riemannian metric (much as is the case in Section ); see, e.g., the work of Schramm @cite_27 and He and Schramm @cite_20 .
|
{
"abstract": [
"LetT be a triangulation of a quadrilateralQ, and letV be the set of vertices ofT. Then there is an essentially unique tilingZ=(Zv: v ∈ V) of a rectangleR by squares such that for every edge ofT the corresponding two squaresZ u, Zvare in contact and such that the vertices corresponding to squares at corners ofR are at the corners ofQ.",
"The contacts graph, or nerve, of a packing, is a combinatorial graph that describes the combinatorics of the packing. LetG be the 1-skeleton of a triangulation of an open disk.G is said to be CP parabolic (resp. CP hyperbolic) if there is a locally finite disk packingP in the plane (resp. the unit disk) with contacts graphG. Several criteria for deciding whetherG is CP parabolic or CP hyperbolic are given, including a necessary and sufficient combinatorial criterion. A criterion in terms of the random walk says that if the random walk onG is recurrent, theG is CP parabolic. Conversely, ifG has bounded valence and the random walk onG is transient, thenG is CP hyperbolic. We also give a new proof thatG is either CP parabolic or CP hyperbolic, but not both. The new proof has the advantage of being applicable to packings of more general shapes. Another new result is that ifG is CP hyperbolic andD is any simply connected proper subdomain of the plane, then there is a disk packingP with contacts graphG such thatP is contained and locally finite inD.",
""
],
"cite_N": [
"@cite_27",
"@cite_20",
"@cite_23"
],
"mid": [
"2021586503",
"2005704408",
"2008264271"
]
}
|
Eigenvalue bounds, spectral partitioning, and metrical deformations via flows
|
Spectral methods are some of the most successful heuristics for graph partitioning and its variants. They have seen a great deal of success in application domains such as mapping finite element calculations onto parallel machines [46,50], solving sparse linear systems [40], partitioning for domain decomposition [11,12], VLSI circuit design and simulation [10,22,4], and image segmentation [45]. We refer to [47] for a discussion of their history and experimental success.
Recent papers [47,21,25] have begun a theoretical analysis of spectral partitioning for families of graphs on which it seems to work well in practice. Such analyses proceed by showing that the second eigenvalue of the Laplacian of the associated graph is small; from this, one derives a guarantee on the performance of simple spectral algorithms. The previous approaches of Spielman and Teng [47] and Kelner [25] either work for graphs which already possess a natural geometric representation (e.g. simplicial graphs or k-nearest-neighbor graphs), or use conformal mappings (or their discrete analog, circle packings) to impart a natural geometric representation to the graph.
Unfortunately, the use of these powerful tools makes it difficult to extend their analysis to more general families of graphs. We present a new method for upper bounding the second eigenvalue of the Laplacian of graphs. As evidence of its efficacy, we resolve a conjecture of Spielman and Teng about the second eigenvalue for excluded-minor families of graphs. Furthermore, we show that the "spectral approach" can be useful for understanding the cut structure of graphs, even when spectral partitioning itself may fail to find those cuts; this occurs mainly in the setting of graphs with arbitrary degrees, and yields a new proof of the separator theorem of Alon, Seymour, and Thomas [3].
Previous results and our work
Let G = (V, E) be an n-vertex graph with maximum degree d. Spielman and Teng [47] show that if G is a planar graph, then λ 2 = O(d/n), where λ 2 is the second eigenvalue of the Laplacian of G (see Section 1.2.1 for background on eigenvalues and spectral partitioning). It follows that a very simple spectral "sweep" algorithm finds a quotient cut of ratio O(d/ √ n) in such graphs. For d = O(1), this shows that spectral methods can recover the cuts guaranteed by the planar separator theorem of Lipton and Tarjan [31]; in particular, recursive bisection yields a balanced separator which cuts only O( √ n) edges. The proof of Spielman and Teng is based on the Koebe-Andreev-Thurston circle packing theorem for planar graphs, which provides an initial geometric representation of the graph. Indeed, in his survey [33], Lovász notes that there is no known method for proving the eigenvalue bound without circle packings.
In [25], Kelner proves that if G is a graph of genus g, then λ_2 = O((g + 1)/n) · poly(d). Again for d = O(1), this shows that spectral algorithms yield balanced separators of size O(√((g + 1)n)), matching the bound of Gilbert, Hutchinson, and Tarjan [19]. Kelner's proof is not based on circle packings for genus g graphs, but instead on the uniformization theorem: the fact that every genus g surface admits a certain kind of conformal mapping onto the unit sphere. (It turns out that the discrete theory is not as strong in the case of genus g circle packings.) Kelner must embed his graph on a surface, and then recursively subdivide the graph (keeping careful track of λ_2), until it approximates the surface well enough.
Excluded-minor graphs. The preceding techniques are highly specialized to graphs that can be endowed with some conformal structure, and thus Spielman and Teng asked [47] whether there is a more combinatorial approach to bounding λ_2. In particular, they conjectured a significant generalization of the preceding results: If G excludes K_h (the complete graph on h vertices) as a minor, then one should have λ_2 = O(poly(h) · d / n). See Section 1.2.2 for a brief discussion of graph minors.
Our new methods for bounding λ 2 are able to resolve this conjecture; in particular, we prove that
λ_2 = O(h^6 (log h) d / n). As a special case, this provides eigenvalue bounds in the planar and bounded genus cases which bypass the need for circle packings or conformal mappings. As stated previously, these bounds show that for d, h = O(1), spectral algorithms are able to recover the O(√n)-sized balanced separators of Alon, Seymour, and Thomas [3] in K_h-minor-free graphs.
Geometric graphs. Spielman and Teng also bound λ 2 for geometric graphs, e.g. well-shaped meshes and k-nearest-neighbor graphs in a fixed number of dimensions. Although these graphs do not exclude a K h -minor for any h (indeed, even the n × n × 2 grid contains arbitrarily large K h minors as n → ∞), these graphs do exclude minors at small depth, in the sense of Plotkin, Rao, and Smith [38]. (Essentially, the connected components witnessing the minor must be of bounded diameter; see Section 1.2.2.) Spielman and Teng [47] ask whether one can prove spectral bounds for such graphs. In Section 5.3, we prove nearly-optimal bounds on λ 2 for graphs which exclude small-depth minors. This shows that spectral algorithms can find small balanced separators for a large family of low-dimensional geometric graphs.
Graphs with unbounded degrees. Finally, we consider separators in arbitrary graphs, i.e. without imposing a bound on the maximum degree. Very small separators can still exist in such graphs, if we consider node separators instead of the edge variety. For example, Alon, Seymour, and Thomas [3] (following [31,19]) show that every K_h-minor-free graph has a subset of nodes of size O(h^{3/2} √n) whose removal breaks the graph into pieces of size at most n/3.
The Laplacian of a graph is very sensitive to the maximum degree, and thus one does not expect spectral partitioning to do as well in this setting. Nevertheless, we show that the "spectral ideology" can still be used to obtain separators in general. We show that if one runs the "sweep" algorithm, not on the second eigenvector of the Laplacian, but on the vector we produce to bound the Rayleigh quotient, then one recovers small separators regardless of the degree. In particular, our approach is able to locate balanced node separators of size O(h 3 √ log h √ n) in K h -minor-free graphs; this gives a new proof of the Alon-Seymour-Thomas result (with a slightly worse dependence on h).
Overview of our approach. At a high level (discussed in more detail in Section 1.3), our approach to bounding λ_2 proceeds as follows. Given a graph G, we compute an all-pairs multicommodity flow in G which minimizes the ℓ_2-norm of the congestion at the vertices. This flow at optimality is used to deform the geometry of G by weighting the vertices according to their congestion. We then embed the resulting vertex-weighted shortest path metric into the line to recover a bound on the Rayleigh quotient, and hence on λ_2. The remaining technical step is to get control on the structure of an optimal flow in the various graph families that we care about. We remark that our bounds are optimal, except for the slack that comes from the embedding step. E.g., for genus g graphs we actually achieve the bound λ_2 = O(dg/n) · (min{log n, g})², where we expect that the latter factor can be removed. For instance, our approach might give a path toward improving the Alon-Seymour-Thomas separator result to its optimal dependency on h.
Preliminaries
Given two expressions E and E ′ (possibly depending on a number of parameters), we write E = O(E ′ ) to mean that E ≤ CE ′ for some constant C > 0 which is independent of the parameters. Similarly, E = Ω(E ′ ) implies that E ≥ CE ′ for some C > 0. We also write E E ′ as a synonym for E = O(E ′ ). Finally, we write E ≈ E ′ to denote the conjunction of E E ′ and E E ′ .
All graphs in the paper are assumed to be undirected. K n denotes the complete graph on n vertices, and K m,n denotes the complete m × n bipartite graph. For a graph G, we use V (G) and E(G) to denote the vertex and edge sets of G, respectively.
Eigenvalues and spectral partitioning
Let G = (V, E) be a connected graph with n = |V |. The adjacency matrix A_G of G is an n × n matrix with (A_G)_{i,j} = 1 if (i, j) ∈ E and (A_G)_{i,j} = 0 otherwise. The degree matrix of G is defined by (D_G)_{i,i} = deg(i) for all i ∈ V, and (D_G)_{i,j} = 0 for i ≠ j. Finally, we define the Laplacian of G by
L G = D G − A G .
It is easy to see that L_G is real, symmetric, and positive semi-definite; if we order the eigenvalues of L_G as λ_1 ≤ λ_2 ≤ · · · ≤ λ_n, and let v_1, v_2, . . . , v_n be a corresponding orthonormal basis of eigenvectors, one checks that λ_1 = 0 and v_1 = (1/√n)(1, 1, . . . , 1). A vast array of work in spectral graph theory relates the eigenvalues of L_G to the combinatorial properties of G (see, e.g. [14]). In the present work, we will be most interested in the connections between the second eigenvalue λ_2 and the existence of small quotient cuts in G, following [13,2]. We will write λ_2(G) for λ_2 when G is not clear from context. Given a subset S ⊆ V, we define the ratio of the cut (S, S̄) by
Φ G (S) = |E(S,S)| min(|S|, |S|) ,(1)
where E(S,S) is the set of edges with exactly one endpoint in S. We also define Φ * (G) = min S⊆V Φ G (S). Finally, we say that S ⊆ V is a δ-separator if min(|S|, |S|) ≥ δn. Spectral partitioning uses the second eigenvector of G to attempt to find a cut with small ratio. The most basic spectral partitioning algorithm uses the following simple "sweep."
1. Compute the second eigenvector z ∈ R^n of L_G.
2. Sort the vertices so that z_{π(1)} ≤ z_{π(2)} ≤ · · · ≤ z_{π(n)}.
3. Output the prefix cut S_k = {π(1), . . . , π(k)}, 1 ≤ k < n, of smallest ratio Φ_G(S_k).
The next result is well-known and follows from the proof of the Alon-Milman-Cheeger inequality for graphs [13,2]; see, e.g. [47,34].
Theorem 1.1. For any v ∈ R^n with ∑_{i=1}^n v_i = 0, the sweep algorithm (applied to v in place of z) returns a cut S ⊆ V with
Φ_G(S) ≤ √( 2 d_max ⟨v, L_G v⟩ / ⟨v, v⟩ ),
where d_max is the maximum degree in G.
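As an illustration of the sweep, the following Python sketch (ours, not from the paper) computes the second Laplacian eigenvector of a graph given by a dense adjacency matrix and returns the best prefix cut; it assumes only numpy, and the helper names sweep_cut and cut_ratio are our own.

import numpy as np

def cut_ratio(A, S):
    """Ratio |E(S, S-bar)| / min(|S|, |S-bar|) of the cut induced by the vertex set S."""
    S = np.asarray(sorted(S))
    T = np.setdiff1d(np.arange(A.shape[0]), S)
    crossing = A[np.ix_(S, T)].sum()
    return crossing / min(len(S), len(T))

def sweep_cut(A):
    """Sweep over the second eigenvector of the Laplacian; return the best prefix cut."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A          # Laplacian D - A
    _, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    z = vecs[:, 1]                          # second eigenvector
    order = np.argsort(z)                   # sort vertices by eigenvector value
    best, best_ratio = None, np.inf
    for k in range(1, n):                   # prefix cuts S_k = first k vertices
        S = set(order[:k].tolist())
        r = cut_ratio(A, S)
        if r < best_ratio:
            best, best_ratio = S, r
    return best, best_ratio

# Example: a path on 6 vertices; the sweep finds the middle cut of ratio 1/3.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1
print(sweep_cut(A))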
Furthermore, one can use recursive quotient cuts to find small δ-separators in G [31].
Lemma 1.2. Let G = (V, E), and suppose that for any subgraph H of G, we can find a cut of ratio at most φ. Then a simple recursive quotient cut algorithm returns a 1/3-separator S ⊆ V with |E(S, S̄)| ≤ O(φn).
Graph minors
If H and G are two graphs, one says that H is a minor of G if H can be obtained from G by a sequence of zero or more of the three operations: edge deletion, vertex deletion, and edge contraction. G is said to be H-minor-free if H is not a minor of G. We refer to [32,15] for a more extensive discussion of the vast graph minor theory.
Equivalently, H is a minor of G if there exists a collection of disjoint sets {A_v}_{v∈V(H)} with A_v ⊆ V(G) for each v ∈ V(H), such that each A_v is connected in G, and there is an edge between A_u and A_v whenever (u, v) ∈ E(H).
Following [38], we say that H is a minor of G at depth L if, additionally, there exists such a collection of sets with diam(A_v) ≤ L for each v ∈ V(H), where diam(A_v) = max_{i,j∈A_v} dist(i, j) and dist is the shortest-path distance in G.
Outline
We now explain an outline of the paper, as well as a sketch of our approach. Let G = (V, E) be a connected, undirected graph with n = |V |. Using the variational characterization of the eigenvalues of L G (see (8)), we can write
λ_2(G)/2n = min_{f:V→R} ( ∑_{uv∈E} |f(u) − f(v)|^2 ) / ( ∑_{u,v∈V} |f(u) − f(v)|^2 ) ≥ min_{d:V×V→R_+} ( ∑_{uv∈E} d(u, v)^2 ) / ( ∑_{u,v∈V} d(u, v)^2 ),
where the latter minimum is over all semi-metrics on V , i.e. all symmetric distance functions that satisfy the triangle inequality and d(u, u) = 0 for u ∈ V . Of course we are trying to prove upper bounds on λ 2 (G), but it is not difficult to see that by Bourgain's theorem [7] on the embeddability of finite metric spaces in Hilbert space, the second minimization is within an O(log n) 2 factor of the first. In Section 4, we discuss more refined notions of "average distortion" embeddings which are able to avoid the O(log n) 2 loss for many families of graphs; in particular, we use the structure theorem of [26] to achieve an O(1) loss for excluded-minor families.
Thus we now focus on finding a semi-metric d for which
R_G(d) = ( ∑_{uv∈E} d(u, v)^2 ) / ( ∑_{u,v∈V} d(u, v)^2 ) (2)
is small. It is easy to see that for any graph G, the minimum will be achieved by a shortest-path metric, and thus finding such a d corresponds to deforming the geometry of G by shrinking and expanding its edges. In actuality, it is far more convenient to work with deformations that involve vertex weights, but we use edge weights here to keep the presentation simple. Thus in the body of the paper, all the edge notions expressed below are replaced by their vertex counterparts. Unfortunately, min d R G (d) is not a convex optimization problem, so we replace it by the convexified objective function
C_G(d) = ( ∑_{uv∈E} d(u, v)^2 ) / ( ∑_{u,v∈V} d(u, v) ).
In the proof of Theorem 5.1, we connect R G (d) and C G (d) via Cauchy-Schwarz; the structure of the extremal metrics ensure that we do not lose too much in this step.
In Section 2, we show that minimizing C G (d) is a convex optimization problem, and thus we are able to pass to a dual formulation, which is to send an all-pairs multicommodity flow in G, while minimizing the ℓ 2 norm of the congestion of the edges. In fact, examination of the Lagrangian multipliers in the proof of Theorem 2.2 reveals that the optimal metric d is obtained by weighting an edge proportional to its congestion in an optimal flow. Thus, by strong duality, in order to prove an upper bound on (2) for some graph G, it suffices to show that every all-pairs multicommodity flow in G incurs a lot of congestion in the ℓ 2 sense.
We address this in Section 3. First, we randomly round a fractional flow to an integral flow, with only a mild blowup in the ℓ 2 -congestion. In the case of planar (and bounded genus) graphs, we observe that an all-pairs integral flow in G induces a drawing of the complete graph in the plane. By relating the ℓ 2 -congestion of the flow to the number of crossings in this drawing, and using known results on graph drawings, we are able to conclude that ℓ 2 -congestion must be large, finishing our quest for upper bounds on the eigenvalues in such graphs (the entire argument is brought together in Section 5).
Extending this to H-minor-free graphs is more difficult, since there is no natural notion of "drawing" to work with. Instead, we introduce a generalized "intersection number" for flows with arbitrary demand graphs, and use this in place of the crossing number in the planar case. The intersection number is more delicate topologically, but after establishing its properties, we are able to adapt the crossing number proofs to establishing lower bounds on the intersection number, and hence on the ℓ 2 -congestion of any all-pairs flow in an excluded-minor graph. We end Section 3 by extending our congestion lower bounds to graphs which exclude small-depth minors. This is important for the applications to geometric graphs in Section 5.3.
Balanced vertex separators with no d max dependence. In the argument described above for bounding λ 2 (G), we lose a factor of d max . It turns out that if we simply want to find a small vertex separator in G, then we can use the vertex variant of the minimizer of (2) to obtain a metric on G, along with an appropriate embedding of the metric into R from Section 4. By passing these two components to the vertex-quotient cut rounding algorithm of [17], we are able to recover vertex separators in arbitrary graphs, with no degree constraints. This is carried out in Section 5.2.
Metrics, flows, and congestion
Let G = (V, E) be an undirected graph, and for every pair u, v ∈ V , let P uv be the set of all paths between u and v in G.
Let P = ∪_{u,v∈V} P_uv. A flow in G is a mapping F : P → R_+. We define, for every vertex v ∈ V, the value C_F(v) = ∑_{p∈P : v∈p} F(p) as the vertex congestion of F at v. For p ≥ 1, we define the vertex p-congestion of F by
con_p(F) = ( ∑_{v∈V} C_F(v)^p )^{1/p}.
We say that F is an integral flow if, for every u, v ∈ V, |{p ∈ P_uv : F(p) > 0}| ≤ 1. Given a demand graph H = (U, D), we say that F is a unit H-flow if there exists an injective mapping g : U → V such that for all (i, j) ∈ D, we have ∑_{p∈P_{g(i)g(j)}} F(p) = 1, and furthermore F(p) = 0 if p ∉ ∪_{(i,j)∈D} P_{g(i)g(j)}. An integral H-flow is a unit H-flow which is also integral.
Lemma 2.1. For every unit H-flow F in G, there exists an integral H-flow F* with con_2(F*) ≤ √(con_1(F)) + con_2(F).
Proof. For a flow F : P → R_+ and vertices x, u, v, let F_{uv}(x) = ∑_{x∈p∈P_uv} F(p). Define the random flow F* as follows: For each demand pair uv, independently pick one path p ∈ P_uv with probability F(p). Set F*(p) = 1 for each of the selected paths, and zero for all other paths. Then
E[con_2(F*)^2] = E[ ∑_{x∈V} ( ∑_{u,v∈V} F*_{uv}(x) )^2 ] = ∑_{x∈V} ( ∑_{u,v∈V} E[F*_{uv}(x)^2] + 2 ∑_{{u,v}≠{u′,v′}⊆V} E[F*_{uv}(x)] E[F*_{u′v′}(x)] ).
Observing that F*_{uv}(x) ∈ {0, 1},
E[con_2(F*)^2] ≤ ∑_{x∈V} ∑_{u,v∈V} E[F*_{uv}(x)] + ∑_{x∈V} ( ∑_{u,v∈V} E[F*_{uv}(x)] )^2 ≤ con_1(F) + con_2(F)^2.
By concavity of the square root, we conclude that E[con_2(F*)] ≤ √(con_1(F) + con_2(F)^2) ≤ √(con_1(F)) + con_2(F); in particular, there exists some fixed flow F* that achieves this bound.
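A minimal sketch of the rounding step in the proof (our own illustration, not the paper's code): each demand pair independently keeps a single path, chosen with probability equal to its fractional value, and we compare the congestion norms before and after.

import random
from collections import defaultdict

def congestion(flow):
    """flow: dict mapping (u, v) -> list of (path, value); returns vertex congestions C_F."""
    C = defaultdict(float)
    for paths in flow.values():
        for path, value in paths:
            for x in path:
                C[x] += value
    return C

def con(C, p):
    return sum(c ** p for c in C.values()) ** (1.0 / p)

def round_flow(flow):
    """Pick one path per demand pair with probability equal to its fractional value."""
    integral = {}
    for pair, paths in flow.items():
        r, acc = random.random(), 0.0
        for path, value in paths:          # values for a unit flow sum to 1
            acc += value
            if r <= acc:
                integral[pair] = [(path, 1.0)]
                break
    return integral

# Toy unit flow between (0, 3): half along 0-1-3, half along 0-2-3.
F = {(0, 3): [((0, 1, 3), 0.5), ((0, 2, 3), 0.5)]}
Fstar = round_flow(F)
C, Cstar = congestion(F), congestion(Fstar)
print(con(C, 1), con(C, 2), con(Cstar, 2))   # con_2(F*) <= sqrt(con_1(F)) + con_2(F) in expectation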
A non-negative vertex weighting s : V → R_+ induces a semi-metric d_s : V × V → R_+ defined by d_s(u, v) = min_{p∈P_uv} ∑_{x∈p} s(x). We define
Λ_s(G) = ( ∑_{u,v∈V} d_s(u, v) ) / ( ∑_{v∈V} s(v)^2 )^{1/2}. (3)
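The following sketch (ours; it assumes the reconstruction of (3) above, with the ℓ2-norm of s in the denominator) computes the vertex-weighted shortest-path semi-metric d_s by Dijkstra and evaluates Λ_s(G) on a small example.

import heapq, math

def d_s(adj, s, src):
    """Node-weighted shortest paths: d_s(src, v) = min over paths of the sum of s over path vertices."""
    dist = {src: s[src]}
    heap = [(s[src], src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue
        for v in adj[u]:
            nd = d + s[v]                  # entering v costs s(v)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def Lambda(adj, s):
    total = 0.0
    for u in adj:
        du = d_s(adj, s, u)
        total += sum(du[v] for v in adj if v != u)
    return total / math.sqrt(sum(x * x for x in s.values()))

# 4-cycle with unit vertex weights.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
s = {v: 1.0 for v in adj}
print(Lambda(adj, s))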
The main theorem of this section follows.
Theorem 2.2. For every graph G,
min_F con_2(F) = max_{s:V→R_+} Λ_s(G),
where the minimum is over all unit K_n-flows in G, and the maximum is over all non-negative weight functions on V.
Proof. Let P ∈ {0,1}^{P×V} be the path incidence matrix and Q ∈ {0,1}^{P×(V choose 2)} be the path endpoint matrix, respectively, which are defined by
P_{p,v} = 1 if v ∈ p and 0 otherwise;  Q_{p,uv} = 1 if p ∈ P_uv and 0 otherwise.
Then we write max_{s:V→R_+} Λ_s(G) as a convex program (P) in standard form, with variables (d, s) ∈ Ω = R_+^{(V choose 2)} × R_+^V:
minimize −1^⊤ d subject to Qd ⪯ Ps, ‖s‖_2^2 ≤ 1, s ⪰ 0, d ⪰ 0. (P)
Next, we introduce the Lagrange multipliers f ∈ R_+^P and µ ∈ R_+ and write the Lagrangian function
L(d, s, f, µ) = −1^⊤ d + f^⊤(Qd − Ps) + µ(s^⊤ s − 1) = d^⊤(Q^⊤ f − 1) + (µ s^⊤ s − f^⊤ P s) − µ.
Therefore, the Lagrange dual g(f, µ) = inf_{(d,s)∈Ω} L(d, s, f, µ) is given by
g(f, µ) = inf_{d⪰0} d^⊤(Q^⊤ f − 1) + inf_{s⪰0} (µ s^⊤ s − f^⊤ P s) − µ.
The dual program is then sup_{f,µ} g(f, µ). In order to write it in a more tractable form, first observe that g(f, µ) = −∞ unless Q^⊤ f ⪰ 1. But if we require that Q^⊤ f ⪰ 1, it is easy to see that the optimum must be attained when equality holds. To minimize the quadratic part, set ∇(µ s^⊤ s − f^⊤ P s) = 0 to get s = P^⊤ f / (2µ). With these substitutions, the dual objective simplifies to
g(f, µ) = − ‖P^⊤ f‖_2^2 / (4µ) − µ.
To maximize this quantity, set µ* = ‖P^⊤ f‖_2 / 2, and get g(f, µ*) = −‖P^⊤ f‖_2. Therefore, the final dual program is
minimize ‖P^⊤ f‖_2 subject to f ⪰ 0, Q^⊤ f = 1. (P*)
When P and Q correspond to a K n demand graph for G, the dual optimum is precisely min f con 2 (f ), where the minimum is over unit K n -flows.
The theorem now follows from Slater's condition in convex optimization; see [8,Ch. 5].
Fact 2.3 (Slater's condition for strong duality). When the feasible region for (P) has non-empty interior, the values of (P) and (P*) are equal.
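To make the primal/dual pair concrete, here is a small sketch (ours) that solves the dual (P*), i.e. an all-pairs unit flow minimizing the ℓ2-norm of vertex congestion, on a tiny graph by explicitly enumerating simple paths. It assumes the cvxpy and networkx packages, and the brute-force path enumeration is only feasible for very small graphs.

import itertools
import networkx as nx
import numpy as np
import cvxpy as cp

G = nx.cycle_graph(4)                      # tiny example graph
V = list(G.nodes())
pairs = list(itertools.combinations(V, 2))
paths = []                                  # global list of simple paths, all pairs
path_pair = []                              # index of the demand pair each path serves
for k, (u, v) in enumerate(pairs):
    for p in nx.all_simple_paths(G, u, v):
        paths.append(p)
        path_pair.append(k)

P = np.zeros((len(paths), len(V)))          # path-vertex incidence matrix
Q = np.zeros((len(paths), len(pairs)))      # path-endpoint (demand) incidence matrix
for i, p in enumerate(paths):
    for x in p:
        P[i, x] = 1
    Q[i, path_pair[i]] = 1

f = cp.Variable(len(paths), nonneg=True)
prob = cp.Problem(cp.Minimize(cp.norm(P.T @ f, 2)), [Q.T @ f == np.ones(len(pairs))])
prob.solve()
print("min con_2 over unit K_n-flows:", prob.value)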
2-congestion lower bounds
In the present section, we prove lower bounds on the 2-congestion needed to route all-pairs multicommodity flows in various families of graphs.
Theorem 3.1 (Bounded genus). There exists a universal constant c > 0 such that if G = (V, E) is a genus g graph with n = |V|, and F is any unit K_n-flow in G, then con_2(F) ≥ c n^2/√g for n ≥ 3√g.
Proof. By Lemma 2.1 it suffices to prove the theorem when F is an integral flow. Suppose, for the sake of contradiction, there exists an integral K_n-flow F with con_2(F) < n^2/(8√g). The drawing of G in a genus g surface S induces (via F) a drawing of K_n in S where edges of K_n only cross at (the images of) vertices of G. Clearly the number of crossings is upper bounded by ∑_{v∈V} C_F(v)^2 = con_2(F)^2 < n^4/(64g). On the other hand, it is known that as long as n ≥ 3√g, any drawing of K_n in a surface of genus g requires at least n^4/(64g) edge crossings [1,30], yielding a contradiction.
Now we prove a similar theorem for K_h-minor-free graphs. To this end, suppose we have a graph G = (V, E), a demand graph H, and an integral H-flow ϕ in G. For every (i, j) ∈ E(H), let ϕ_ij be the corresponding flow path in G. Define
inter(ϕ) = #{ {(i, j), (i′, j′)} ⊆ E(H) : |{i, j, i′, j′}| = 4 and ϕ_ij ∩ ϕ_i′j′ ≠ ∅ }.
Lemma 3.2. Let H = (L ∪ R, E(H)) be a bipartite graph in which every vertex has degree at least 2, and let ϕ be an integral H-flow in G with inter(ϕ) = 0. Then H is a minor of G.
Proof. We may assume that the terminals of ϕ satisfy V(H) ⊆ V. For a vertex i ∈ V(H), let N_H(i) ⊆ V(H) denote its neighborhood in H (so, for i ∈ L, N_H(i) is the set of j ∈ R with (i, j) ∈ E(H)). For each i ∈ V(H), let V_i = ∪_{j∈N_H(i)} ϕ_ij.
Now, for each i ∈ L and j ∈ N_H(i), consider the path ϕ_ij = ⟨v_1, v_2, . . . , v_k⟩, and let v_t be the first vertex in this path for which v_t ∈ ∪_{r∈L\{i}} V_r. If no such t exists, define φ_ij = ϕ_ij, and otherwise define the prefix φ_ij = {v_1, v_2, . . . , v_{t−1}}. Set C_i = ∪_{j∈N_H(i)} φ_ij to be the union of all such prefixes. Then, for each i ∈ R, define C_i = V_i \ ∪_{j∈L} C_j.
We claim that the sets {C_i}_{i∈L∪R} are all connected, and pairwise disjoint, and that for (i, j) ∈ E(H), we have E(C_i, C_j) ≠ ∅. This will imply that G has an H-minor. We start with the following straightforward fact.
Fact 3.3. If v ∈ V_r ∩ V_{r′} for some r ≠ r′ ∈ L, then v ∉ ∪_{j∈L} C_j.
Lemma 3.4. For every i ∈ L ∪ R, we have i ∈ C i .
Proof. First, we consider i ∈ L. If i ∉ C_i, then i occurs as an intermediate vertex of some ϕ_rs path for r ∈ L, s ∈ R with r ≠ i. Since i has degree at least 2 in H, there must exist some s′ ∈ R with s ≠ s′ and (i, s′) ∈ E(H). But now i ∈ ϕ_rs ∩ ϕ_is′, which contradicts the fact that inter(ϕ) = 0. Thus we must have i ∈ C_i. To see that i ∈ C_i for i ∈ R, note that by assumption deg_H(i) ≥ 2, so there must exist r ≠ r′ ∈ L for which (r, i), (r′, i) ∈ E(H). Thus i ∈ V_r ∩ V_{r′}, and by Fact 3.3, it must be that i ∉ ∪_{j∈L} C_j. We conclude that i ∈ C_i.
Connected and disjoint components. Lemma 3.4 implies that i ∈ C_i for i ∈ L, so it is clear by construction that the sets {C_i}_{i∈L} are each connected and that for any i ∈ L and j ∈ L ∪ R \ {i}, we have C_i ∩ C_j = ∅. Thus we need only verify that each set C_i is connected for i ∈ R, and also that for i, j ∈ R with i ≠ j, we have C_i ∩ C_j = ∅.
Lemma 3.5. If i ∈ R and j ∈ N H (i), then ϕ ji \φ ji ⊆ C i .
Proof. Any node v ∈ ϕ_ji \ φ_ji must be contained either in C_i or in C_r for some r ∈ L with r ≠ j. But the latter case cannot occur because any node which is contained in V_j ∩ V_r for r ≠ j cannot be contained in C_r by Fact 3.3.
Using the fact that i ∈ C_i (Lemma 3.4) and the preceding lemma, we see that C_i is connected for every i ∈ R. It remains to show that for i, j ∈ R with i ≠ j, we have C_i ∩ C_j = ∅.
Suppose, to the contrary, that C_i ∩ C_j ≠ ∅. Since inter(ϕ) = 0, there must exist a k ∈ L such that (ϕ_ki ∩ C_i) ∩ (ϕ_kj ∩ C_j) ≠ ∅. The following lemma shows this to be impossible.
Lemma 3.6. For k ∈ L and i, j ∈ N H (k), we must have ϕ ki ∩ ϕ kj ⊆ C k .
Proof. Suppose, to the contrary, that there is a v ∈ ϕ_ki ∩ ϕ_kj for which v ∉ C_k. In this case, it must be that v ∈ V_r for some r ∈ L with r ≠ k. In other words, for some s ∈ R, ϕ_rs intersects both ϕ_ki and ϕ_kj, but this is impossible since i ≠ j and inter(ϕ) = 0.
Edges of E(H). Consider i ∈ L and j ∈ R with (i, j) ∈ E(H). It is straightforward to see that φ_ij ⊆ C_i by construction, and on the other hand, ϕ_ij \ φ_ij ⊆ C_j, by Lemma 3.5. Since i ∈ C_i and j ∈ C_j by Lemma 3.4, it follows that E(C_i, C_j) ≠ ∅. This completes the proof.
Corollary 3.7. For every h ≥ 2, if G is K_h-minor-free, and ϕ is an integral K_2h-flow in G, then inter(ϕ) > 0.
Proof. If ϕ is an integral K_2h-flow with inter(ϕ) = 0, then it obviously induces an integral K_{h,h}-flow with the same property. By Lemma 3.2, G has K_{h,h} as a minor, and hence also has K_h as a minor, yielding a contradiction.
Lemma 3.8. For every integral H-flow ϕ in G, inter(ϕ) ≤ con_2(ϕ)^2.
Proof. Clearly,
inter(ϕ) ≤ ∑_{v∈V} ∑_{(i,j),(i′,j′)∈E(H)} 1_{v∈ϕ_ij} · 1_{v∈ϕ_i′j′} = ∑_{v∈V} C_ϕ(v)^2.
We begin with an elementary proof that yields a suboptimal dependence on h.
Theorem 3.9. If G = (V, E) is K_h-minor-free and n = |V|, then any unit K_n-flow F in G has con_2(F) ≥ n^2/(12 h^{3/2}) for n ≥ 4h.
Proof. Using Lemma 2.1, it suffices to prove the theorem when F is an integral K_n-flow in G. By Lemma 3.8, it suffices to show that inter(F) = Ω(n^4/h^3); we will show inter(F) ≥ n^4/(128 h^3). If ϕ is any integral flow with inter(ϕ) > 0, then one can always remove a terminal of ϕ to obtain an integral flow ϕ′ for which inter(ϕ′) ≤ inter(ϕ) − 1. From Lemma 3.2, we know that for an integral K_2h-flow ϕ, we have inter(ϕ) > 0. It follows that if ϕ is an integral K_r-flow in G, then inter(ϕ) ≥ r − 2h + 1.
Now let p ∈ [0, 1], and consider choosing a random subset S_p ⊆ V by including every vertex independently with probability p. Let n_p = |S_p|, and let F_p be the integral K_{n_p}-flow formed by restricting the terminals of F to lie in S_p. It is obvious that E[n_p] = pn and E[inter(F_p)] = p^4 · inter(F), since all intersections counted by inter(F) involve four distinct vertices. Hence,
p^4 · inter(F) = E[inter(F_p)] ≥ E[n_p] − 2h + 1 ≥ pn − 2h. (4)
We may assume that n ≥ 4h, and in this case choosing p = 4h/n in (4) yields
inter(F) ≥ 2h (n/4h)^4 = n^4/(128 h^3),
finishing the proof.
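The substitution at the end of the proof is a one-line check (our own restatement of the arithmetic above):
\[
p^4\,\mathrm{inter}(F) \;\ge\; pn - 2h
\quad\text{and}\quad p = \frac{4h}{n}
\;\;\Longrightarrow\;\;
\mathrm{inter}(F) \;\ge\; \frac{4h-2h}{(4h/n)^4}
\;=\; 2h\left(\frac{n}{4h}\right)^{4}
\;=\; \frac{n^4}{128\,h^3}.
\]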
To do better, we first require the following theorem, proved independently by Kostochka [28] and Thomason [49].
Theorem 3.10 ([28, 49]). There is a universal constant c_KT > 0 such that every K_h-minor-free graph on N vertices has at most c_KT · N h √(log h) edges.
We remark that the preceding theorem is tight [28,18,6]. We now proceed to an improved bound.
Theorem 3.11 (Excluded minors). There exists a universal constant c > 0 such that if G = (V, E) is K_h-minor-free and n = |V|, then any unit K_n-flow F in G has
con_2(F) ≥ c n^2 / (h √(log h)) for n ≥ 4 c_KT h √(log h) + 1,
where c_KT is the constant from Theorem 3.10.
Proof. As in the proof of Theorem 3.9, it suffices to prove that inter(F) = Ω( n^4 / (h^2 log h) ) whenever F is an integral K_n-flow in G.
If ϕ is an integral H-flow with inter(ϕ) > 0, then obviously there exists an edge e ∈ E(H) and an integral (H \ e)-flow ϕ ′ for which inter(ϕ ′ ) ≤ inter(ϕ) − 1. Combining this with Theorem 3.10 and Lemma 3.2 shows that for any H-flow ϕ in G, we have
inter(ϕ) ≥ |E(H)| − 2 c_KT |V(H)| h √(log h),
where c KT is the constant from Theorem 3.10.
We now apply this to the K n -flow F . As in the proof of Theorem 3.9, let p ∈ [0, 1], and consider choosing a random subset S p ⊆ V by including every vertex independently with probability p. Let n p = |S p |, and let F p be the integral K np -flow formed by restricting the terminals of F to lie in S p . We have,
p^4 · inter(F) = E[inter(F_p)] ≥ p^2 n(n − 1)/2 − 2 c_KT p n h √(log h). (5)
We may assume that n ≥ 4 c_KT h √(log h) + 1, and in this case choosing p = 4 c_KT h √(log h) / (n − 1) in (5) yields
inter(F) ≥ n(n − 1)^3 / (64 c_KT^2 h^2 log h),
finishing the proof.
Bounds for shallow excluded minors. Finally, we prove congestion lower bounds for graphs which exclude minors at small depth. This is useful for applications to geometric graphs in Section 5.3.
Theorem 3.12. There exists a constant c > 0 such that if G = (V, E) excludes a K_h-minor at depth L and n = |V|, then any unit K_n-flow F in G has
con_2(F) ≥ c · min{ n^2 / (h √(log h)), n^{3/2} L } for n ≥ c h √(log h).
Proof. Suppose that G excludes a K_h-minor at depth L, and let F be an integral K_n-flow in G. First, we state the following straightforward strengthening of Lemma 3.2.
Lemma 3.13. Lemma 3.2 remains true if, in addition, every flow path of ϕ has length at most L/2, with the stronger conclusion that H is a minor of G at depth L.
Now, if at least half of the flow paths in F have length greater than L/2, then the total length of the flow paths is at least Ω(n^2 L), which shows that
con_2(F) = ( ∑_{v∈V} C_F(v)^2 )^{1/2} ≥ n^{−1/2} ∑_{v∈V} C_F(v) = Ω(n^{3/2} L).
If, on the other hand, at least half of the flow paths in F have length less than L/2, let F′ be the flow restricted to such paths. Clearly F′ is an integral H-flow for some dense graph H on n nodes, hence the proof of Theorem 3.11 (with Lemma 3.13 substituted for Lemma 3.2) shows that for n large enough, we have
con_2(F) ≥ con_2(F′) = Ω( n^2 / (h √(log h)) ).
Corollary 3.14. There exists a constant c > 0 such that the following holds. Suppose that for some d ≥ 1 and every L ≥ 1, G = (V, E) excludes a K_{L^d}-minor at depth L. If n = |V| and F is any unit K_n-flow in G, then con_2(F) ≥ c n^{3/2} (n/d)^{1/(2d+2)}.
Average distortion embeddings and random partitions
In this section, we use a construction of Rabinovich to embed certain metrics into the line with small "average distortion." This allows us to pass from a good metric on a graph to a good bound on the Rayleigh quotient. Our main technique is the use of random padded partitions, a now standard tool in the construction of metric embeddings (see, e.g. [5,42,41,29]).
Random partitions
Let (X, d) be a finite metric space. We recall the standard definitions for padded decompositions (see, e.g. [29]). If P is a partition of X, we will also consider it as a function P : X → 2 X such that for x ∈ X, P (x) is the unique C ∈ P for which x ∈ C.
Let µ be a distribution over partitions of X, and let P be a random partition distributed according to µ. We say that P is ∆-bounded if it always holds that for S ∈ P , diam(S) ≤ ∆.
Given a ∆-bounded random partition P , we say that P is α-padded if for every x ∈ X, we have
Pr[ B(x, ∆/α) ⊆ P(x) ] ≥ 1/2,
where B(x, r) = {y : d(x, y) ≤ r} denotes the closed ball of radius r about x.
We recall that the modulus of padded decomposability is the value
α(X, d) = sup_{∆>0} inf{ α ≥ 1 : X admits a ∆-bounded α-padded random partition }.
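One standard way to produce ∆-bounded random partitions is ball carving with a random radius and a random ordering of centers (in the spirit of Calinescu, Karloff, and Rabani); the sketch below is our own minimal implementation over an explicit finite metric given as a distance matrix, and is meant only to make the definitions concrete.

import random

def ckr_partition(dist, Delta):
    """Random Delta-bounded partition of points 0..n-1 given a symmetric distance matrix.

    Each point joins the first center (in a random order) whose ball of a random
    radius R in [Delta/4, Delta/2] contains it; cluster diameters are at most Delta.
    """
    n = len(dist)
    R = random.uniform(Delta / 4.0, Delta / 2.0)
    order = list(range(n))
    random.shuffle(order)
    cluster = [None] * n
    for c in order:                       # centers considered in random order
        for x in range(n):
            if cluster[x] is None and dist[c][x] <= R:
                cluster[x] = c
    parts = {}
    for x in range(n):
        parts.setdefault(cluster[x], set()).add(x)
    return list(parts.values())

# Example: 6 points on a line with unit spacing.
pts = list(range(6))
dist = [[abs(a - b) for b in pts] for a in pts]
print(ckr_partition(dist, Delta=3.0))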
Now we can state a consequence [42] of the main theorem of Klein, Plotkin, and Rao [26].
Theorem 4.1 ([26, 42]). If G = (V, E) excludes K_h as a minor, then for every assignment of non-negative edge lengths len : E → R_+, the induced shortest-path metric d_len satisfies α(V, d_len) = O(h^2).
Corollary 4.2. If G = (V, E) excludes K_h as a minor, then for every vertex weighting s : V → R_+, we have α(V, d_s) = O(h^2). In particular, if G has genus g, then α(V, d_s) = O(g).
Proof. Simply define len(u, v) = s(u) + s(v) for (u, v) ∈ E. Clearly the shortest-path distances induced by len and s are within a factor of 2, so the result follows from Theorem 4.1.
We also have the following theorem of Bartal for general metrics [5] (see [29] for a proof of the precise statement, based on [9]).
Theorem 4.3 ([5]). Every n-point metric space (X, d) satisfies α(X, d) = O(log n).
Average distortion embeddings
Rabinovich [41] essentially proved the following theorem in the case p = 1. For applications to eigenvalues, the case p = 2 is of particular interest (see Section 5.1). For direct application to vertex separators (Section 5.2), we will employ the p = 1 case. For a metric space (X, d), a mapping f : X → R is said to be non-expansive if |f(x) − f(y)| ≤ d(x, y) for all x, y ∈ X.
Theorem 4.4. There is a constant C > 0 such that for every metric space (X, d) and every p ≥ 1, there exists a non-expansive mapping f : X → R with
∑_{u,v∈X} d(u, v)^p ≤ C^p [α(X, d)]^p ∑_{u,v∈X} |f(u) − f(v)|^p.
Proof. We prove the theorem for p = 2. The other cases are similar. Let ∆_2 be defined by ∆_2^2 = (1/n^2) ∑_{u,v∈X} d(u, v)^2. First, we handle the case when many points are clustered about a single node x_0 ∈ X. In what follows, we use B(x, R) = {y ∈ X : d(x, y) ≤ R} to denote the closed ball of radius R about x ∈ X.
Case I: There exists x_0 ∈ X for which |B(x_0, ∆_2/4)| ≥ n/10. In this case, let S = B(x_0, ∆_2/4), and define f(u) = d(u, S). First, we have
n^2 ∆_2^2 = ∑_{u,v∈X} d(u, v)^2 ≤ 2 ∑_{u,v} ( d(u, x_0)^2 + d(v, x_0)^2 ) = 4n ∑_{u∈X} d(u, x_0)^2 ≤ 4n ∑_{u∈X} ( d(u, S) + ∆_2/4 )^2 ≤ n^2 ∆_2^2 / 2 + 8n ∑_{u∈X} d(u, S)^2.
Therefore, ∑_{u∈X} d(u, S)^2 ≥ n ∆_2^2 / 16. We conclude that
∑_{u,v∈X} (f(u) − f(v))^2 = ∑_{u,v∈X} [d(u, S) − d(v, S)]^2 ≥ ∑_{u∉S, v∈S} [d(u, S) − d(v, S)]^2 = ∑_{u∉S, v∈S} d(u, S)^2 = |S| ∑_{u∈X} d(u, S)^2 ≳ n^2 ∆_2^2 ≈ ∑_{u,v∈X} d(u, v)^2.
This finishes the clustered case.
Case II: For every u ∈ X, |B(u, 1 4 ∆ 2 )| < n 10 . In particular, we know that for any subset T ⊆ X with diam(T ) ≤ 1 4 ∆ 2 , we have |T | < n/10. Now, let P be a random partition of X which is 1 4 ∆ 2 bounded and α-padded, where α = α(X, d). We know that for every x ∈ X, we have
Pr[ B(x, ∆_2/(4α)) ⊆ P(x) ] ≥ 1/2.
So by Markov's inequality, it must be that there exists a partition P 0 such that the set
H 0 = {x ∈ X : B(x, ∆ 2 /(4α)) ⊆ P (x)}
has |H 0 | ≥ n/2. Fix this choice of P 0 and H 0 . Let {σ C } C∈P 0 be a collection of i.i.d. uniform 0/1 random variables, one for each cluster C ∈ P 0 and define S = C∈P 0 :σ C =0 C. Finally, define f : X → R by f (u) = d(u, S).
Note that f is a random function. We will now argue that
E[ ∑_{u,v∈X} (f(u) − f(v))^2 ] ≳ (n ∆_2 / α)^2 ≈ α^{−2} ∑_{u,v∈X} d(u, v)^2, (6)
which will imply (by averaging) that there exists a choice of f : X → R for which the sum is at least Ω(α^{−2}) ∑_{u,v∈X} d(u, v)^2.
So it remains to prove (6). Note that for every C ∈ P 0 , we have diam(C) ≤ ∆ 2 /4, so since we are in case (II), we have |C| ≤ n/10. Write
∑_{u,v∈X} (f(u) − f(v))^2 = ∑_{u,v∈X} (d(u, S) − d(v, S))^2 ≥ ∑_{C∈P_0} ∑_{u∈C∩H_0} ∑_{v∉C} (d(u, S) − d(v, S))^2. (7)
So let's estimate E[(d(u, S) − d(v, S))^2] for u ∈ C ∩ H_0 and v ∉ C. Since u, v lie in different clusters, conditioned on what happens for v, we have d(u, S) oscillating randomly between 0 when σ_C = 0 and some value greater than ∆_2/(4α) when σ_C = 1 (since u ∈ H_0). It follows that
E[(d(u, S) − d(v, S))^2] ≥ ∆_2^2 / (64 α^2).
Plugging this into (7) and using |C| < n/10 for every C ∈ P_0 yields
E[ ∑_{u,v∈X} (f(u) − f(v))^2 ] ≥ ∑_{C∈P_0} ∑_{u∈C∩H_0} |X \ C| · ∆_2^2/(64 α^2) ≥ ∑_{C∈P_0} ∑_{u∈C∩H_0} (9n/10) · ∆_2^2/(64 α^2) = |H_0| · (9n/10) · ∆_2^2/(64 α^2) ≳ (n ∆_2 / α)^2,
finishing our proof of (6).
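The Case II construction can be exercised directly: start from a ∆-bounded partition, keep each cluster in S with probability 1/2, and embed via f(u) = d(u, S). The snippet below (ours; the partition is hard-coded for self-containment) estimates the ratio appearing in (6).

import random

def line_embedding(dist, parts):
    """f(u) = d(u, S), where S is the union of clusters kept with probability 1/2."""
    n = len(dist)
    kept = [C for C in parts if random.random() < 0.5]
    S = set().union(*kept) if kept else set(range(n))   # fall back to X if nothing is kept
    return [min(dist[u][x] for x in S) for u in range(n)]

def ratio(dist, f):
    n = len(dist)
    num = sum((f[u] - f[v]) ** 2 for u in range(n) for v in range(n))
    den = sum(dist[u][v] ** 2 for u in range(n) for v in range(n))
    return num / den

pts = list(range(6))
dist = [[abs(a - b) for b in pts] for a in pts]
parts = [{0, 1, 2}, {3, 4, 5}]               # a Delta-bounded partition, Delta = 3
f = line_embedding(dist, parts)
print(ratio(dist, f))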
Spectral bounds and balanced separators
We now combine the tools of the previous sections to prove bounds on the Rayleigh quotients of various graphs.
Eigenvalues in bounded degree graphs
Let G = (V, E) be any graph, and set n = |V|. Letting λ_2(G) be the second eigenvalue of the Laplacian of G, by the variational characterization of eigenvalues, we have
λ_2(G) = min_{f:V→R} ( ∑_{uv∈E} |f(u) − f(v)|^2 ) / ( ∑_{u∈V} |f(u) − f̄|^2 ) = 2n · min_{f:V→R} ( ∑_{uv∈E} |f(u) − f(v)|^2 ) / ( ∑_{u,v∈V} |f(u) − f(v)|^2 ), (8)
where f̄ = (1/n) ∑_{x∈V} f(x).
Theorem 5.1. For any graph G = (V, E) with n = |V| and maximum degree d_max, and any vertex weighting s : V → R_+,
λ_2(G) ≲ d_max n^3 [α(V, d_s)]^2 / Λ_s(G)^2.
Proof. If d is any metric on V, then using (8) and Theorem 4.4, we have
λ_2(G) ≲ n · [α(V, d)]^2 ( ∑_{uv∈E} d(u, v)^2 ) / ( ∑_{u,v∈V} d(u, v)^2 ).
Therefore,
λ_2(G) ≲ n [α(V, d_s)]^2 ( ∑_{uv∈E} d_s(u, v)^2 ) / ( ∑_{u,v∈V} d_s(u, v)^2 ) ≤ n [α(V, d_s)]^2 · 4 d_max ( ∑_{v∈V} s(v)^2 ) / ( ∑_{u,v∈V} d_s(u, v)^2 ) ≤ 4 d_max n^3 [α(V, d_s)]^2 ( ∑_{v∈V} s(v)^2 ) / ( ∑_{u,v∈V} d_s(u, v) )^2 = 4 d_max n^3 [α(V, d_s)]^2 / Λ_s(G)^2, (9)
where the penultimate inequality follows from Cauchy-Schwarz.
Theorem 5.2. If G = (V, E) is a genus g graph with n = |V|, then λ_2(G) = O(d_max g^3 / n).
Proof. For any weight function s : V → R_+, we have α(V, d_s) = O(g) by Corollary 4.2, hence (9) yields
λ_2(G) ≤ O(d_max g^2 n^3) / max_{s:V→R_+} Λ_s(G)^2.
But by Theorems 2.2 and 3.1, we have max_{s:V→R_+} Λ_s(G)^2 = min_F con_2(F)^2 ≥ Ω(n^4/g), where the minimum is over all unit K_n-flows in G, and the theorem follows.
Theorem 5.3. If G = (V, E) is K_h-minor-free with n = |V|, then λ_2(G) = O(d_max h^6 log h / n).
Proof. For any weight function s : V → R_+, we have α(V, d_s) = O(h^2) by Corollary 4.2, hence (9) yields
λ_2(G) ≤ O(d_max h^4 n^3) / max_{s:V→R_+} Λ_s(G)^2.
But by Theorems 2.2 and 3.11, we have max_{s:V→R_+} Λ_s(G)^2 = min_F con_2(F)^2 ≥ Ω( n^4 / (h^2 log h) ), where the minimum is over all unit K_n-flows F in G. Using the bound α(V, d_s) = O(log n) of Theorem 4.3 in place of Corollary 4.2 in the two proofs above yields λ_2(G) = O(d_max g (log n)^2 / n) and λ_2(G) = O(d_max h^2 (log h)(log n)^2 / n) in the settings of Theorems 5.2 and 5.3, respectively. Clearly these bounds are better when g or h grow moderately fast with n. We suspect that the O(·) part of each bound can be replaced by a universal constant. The resulting bounds would be tight in the case of genus, and almost tight in the case of K_h-minor-free graphs. It is not clear whether the log h factor is necessary in general.
Balanced vertex separators
Given a partition V = A ∪ B ∪ S of the vertices in which S separates A from B, write α(A, B, S) for the vertex-quotient ratio of the separator, and let α(G) be its minimum over all such partitions. The rounding algorithm of Feige, Hajiaghayi, and Lee [17], applied to a vertex weighting s : V → R_+ and a map f : V → R that is non-expansive with respect to d_s, produces a vertex separator with
α(A, B, S) ≤ 2 ∑_{v∈V} s(v) / ∑_{u,v∈V} |f(u) − f(v)|.
Theorem 5.5. If G = (V, E) is a genus g graph, then α(G) = O(n^{−3/2} g^{3/2}). If G = (V, E) is K_h-minor-free, then α(G) = O(n^{−3/2} h^3 √(log h)). Therefore such graphs have O(g^{3/2} √n)- and O(h^3 √(log h) √n)-sized (1/3, 2/3)-balanced separators, respectively.
Proof. We prove the theorem only for a K_h-minor-free graph G; the genus case is identical. By Theorem 4.4 (applied with p = 1) and Corollary 4.2, for every vertex weighting s there is a non-expansive map f : V → R with ∑_{u,v∈V} |f(u) − f(v)| ≳ h^{−2} ∑_{u,v∈V} d_s(u, v), so by the rounding bound above it suffices to control
min_{s:V→R_+} ( ∑_{v∈V} s(v) ) / ( ∑_{u,v∈V} d_s(u, v) ) ≤ √n · min_{s:V→R_+} ( ∑_{v∈V} s(v)^2 )^{1/2} / ( ∑_{u,v∈V} d_s(u, v) ) = √n / max_{s:V→R_+} Λ_s(G) = O( h √(log h) / n^{3/2} ),
where the first inequality is Cauchy-Schwarz and the last step uses Theorems 2.2 and 3.11. Multiplying by the O(h^2) factor gives α(G) = O(n^{−3/2} h^3 √(log h)).
Geometric graphs
In practice, spectral methods are applied to graphs arising from a variety of geometric settings, not limited to surfaces of fixed genus. Miller et al. considered k-ply neighborhood systems, k-nearest neighbor graphs, and well-shaped finite element meshes in any fixed dimension [35,36]. They used geometric techniques to efficiently find small ratio cuts for these classes of graphs. Spielman and Teng give bounds for the second eigenvalue of these graphs, thus showing that these cuts can be recovered by spectral partitioning [47]. While none of these graph families exclude a fixed set of minors, some of them have been shown to lack small minors at small depth. We can adapt the proofs from the preceding subsections to this setting using the following lemma.
Lemma 5.6. Let G = (V, E) have constant maximum degree and exclude a K_h-minor at depth L, where h = L^p for some constant p and |V| = n. We have:
• λ_2(G) = Õ(n^{−1/(1+p)}).
• If G is a simplicial graph in d dimensions with constant aspect ratio (see [37] for a detailed definition), then λ_2(G) = Õ(n^{−1/d}) and G has balanced separators of size Õ(n^{1−1/(2d)}).
• If G is an arbitrary k-nearest neighbor graph in d dimensions, then λ_2(G) = Õ(n^{−1/(1+d)}) and G has balanced separators of size Õ(n^{1−1/(2+2d)}).
• If G is a d-dimensional grid, then λ_2(G) = Õ(n^{−2/(2+d)}) and G has balanced separators of size Õ(n^{1−1/(2+d)}). This is also true with high probability when G is the relative neighborhood graph, the Delaunay diagram, or the k-nearest neighbor graph of a random point set in d dimensions.
Proof. The results follow from the corresponding bounds on excluded shallow minors. From Plotkin et al. [39], we have h = Ω d (L d−1 ) for simplicial graphs and h = Ω d (L d/2 ) for grids. From Teng [48], we have h = Ω(L d ) for arbitrary k-nearest neighbor graphs and, with high probability, h = Ω(L d/2 ) for the relative neighborhood graph, the Delaunay diagram, and the k-nearest neighbor graph of a random point set.
Remark 5.2. Spielman and Teng [47] prove that k-nearest-neighbor graphs and well-shaped meshes have λ 2 of value O(n −2/d ) and balanced separators of ratio O(n −1/d ). Their results are better than ours by a square; we suspect this is due to the non-tightness of the bounds on shallow excluded minors for these graph families.
| 8,273 |
0808.1128
|
2953283045
|
Dynamic connectivity is a well-studied problem, but so far the most compelling progress has been confined to the edge-update model: maintain an understanding of connectivity in an undirected graph, subject to edge insertions and deletions. In this paper, we study two more challenging, yet equally fundamental problems. Subgraph connectivity asks to maintain an understanding of connectivity under vertex updates: updates can turn vertices on and off, and queries refer to the subgraph induced by "on" vertices. (For instance, this is closer to applications in networks of routers, where node faults may occur.) We describe a data structure supporting vertex updates in O (m^ 2 3 ) amortized time, where m denotes the number of edges in the graph. This greatly improves over the previous result [Chan, STOC'02], which required fast matrix multiplication and had an update time of O(m^0.94). The new data structure is also simpler. Geometric connectivity asks to maintain a dynamic set of n geometric objects, and query connectivity in their intersection graph. (For instance, the intersection graph of balls describes connectivity in a network of sensors with bounded transmission radius.) Previously, nontrivial fully dynamic results were known only for special cases like axis-parallel line segments and rectangles. We provide similarly improved update times, O (n^ 2 3 ), for these special cases. Moreover, we show how to obtain sublinear update bounds for virtually all families of geometric objects which allow sublinear-time range queries, such as arbitrary 2D line segments, d-dimensional simplices, and d-dimensional balls.
|
Most previous work on dynamic subgraph connectivity concerns special cases only. Frigioni and Italiano @cite_7 considered vertex updates in planar graphs, and described a polylogarithmic solution.
|
{
"abstract": [
"We consider graphs whose vertices may be in one of two different states: either on or off . We wish to maintain dynamically such graphs under an intermixed sequence of updates and queries. An update may reverse the status of a vertex, by switching it either on or off , and may insert a new edge or delete an existing edge. A query tests whether any two given vertices are connected in the subgraph induced by the vertices that are on . We give efficient algorithms that maintain information about connectivity on planar graphs in O( log 3 n) amortized time per query, insert, delete, switch-on, and switch-off operation over sequences of at least Ω(n) operations, where n is the number of vertices of the graph."
],
"cite_N": [
"@cite_7"
],
"mid": [
"2177508451"
]
}
|
Dynamic Connectivity: Connecting to Networks and Geometry
|
Dynamic graphs inspire a natural, challenging, and well-studied class of algorithmic problems. A rich body of the STOC/FOCS literature has considered problems ranging from the basic question of understanding connectivity in a dynamic graph [13,17,34,6,31], to maintaining the minimum spanning tree [20], the min-cut [36], shortest paths [9,35], reachability in directed graphs [10,25,26,32,33], etc.
But what exactly makes a graph "dynamic"? Computer networks have long provided the common motivation. The dynamic nature of such networks is captured by two basic types of updates to the graph:
• edge updates: adding or removing an edge. These correspond to setting up a new cable connection, accidental cable cuts, etc.
• vertex updates: turning a vertex on and off. Vertices (routers) can temporarily become "off" after events such as a misconfiguration, a software crash and reboot, etc. Problems involving only vertex updates have been called dynamic subgraph problems, since queries refer to the subgraph induced by vertices which are on.
Loosely speaking, dynamic graph problems fall into two categories. For "hard" problems, such as shortest paths and directed reachability, the best known running times are at least linear in the number of vertices. These high running times obscure the difference between vertex and edge updates, and identical bounds are often stated [9,32,33] for both operations. For the remainder of the problems, sublinear running times are known for edge updates, but sublinear bounds for vertex updates seem much harder to get. For instance, even iterating through all edges incident to a vertex may take linear time in the worst case. That vertex updates are slow is unfortunate. Referring to the computer-network metaphor, vertex updates are cheap "soft" events (misconfiguration or reboot), which occur more frequently than the costly physical events (cable cut) that cause an edge update.
Subgraph connectivity. As mentioned, most previous sublinear dynamic graph algorithms address edge updates but not the equally fundamental vertex updates. One notable exception, however, was a result of Chan [6] from STOC'02 on the basic connectivity problem for general sparse (undirected) graphs. This algorithm can support vertex updates in time O(m^0.94) and decide whether two query vertices are connected in time O(m^{1/3}). (We use m and n to denote the number of edges and vertices of the graph respectively; Õ(·) ignores polylogarithmic factors and O*(·) hides n^ε factors for an arbitrarily small constant ε > 0. Update bounds in this paper are, by default, amortized.)
Though an encouraging start, the nature of this result makes it appear more like a half breakthrough. For one, the update time is only slightly sublinear. Worse yet, Chan's algorithm requires fast matrix multiplication (FMM). The O(m^0.94) update time follows from the theoretical FMM algorithm of Coppersmith and Winograd [8]. If Strassen's algorithm is used instead, the update time becomes O(m^0.984). Even if optimistically FMM could be done in quadratic time, the update time would only improve to O(m^0.89). FMM has been used before in various dynamic graph algorithms (e.g., [10,26]), and the paper [6] noted specific connections to some matrix-multiplication-related problems (see Section 2). All this naturally led one to suspect, as conjectured in the paper, that FMM might be essential to our problem. Thus, the result we are about to describe may come as a bit of a surprise: we obtain a data structure for dynamic subgraph connectivity with Õ(m^{2/3}) amortized vertex-update time and O(m^{1/3}) query time (Theorem 1). First of all, this is a significant quantitative improvement (to anyone who regards an m^{0.27} factor as substantial), and it represents the first convincingly sublinear running time. More importantly, it is a significant qualitative improvement, as our bound does not require FMM. Our algorithm involves a number of ideas, some of which can be traced back to earlier algorithms, but we use known edge-updatable connectivity structures to maintain a more cleverly designed intermediate graph. The end product is not straightforward at all, but still turns out to be simpler than the previous method [6] and has a compact, two-page description (we regard this as another plus, not a drawback).
Dynamic Geometry
We next turn to another important class of dynamic connectivity problems-those arising from geometry.
Geometric connectivity. Consider the following question, illustrated in Figure 1(a). Maintain a set of line segments in the plane, under insertions and deletions, to answer queries of the form: "given two points a and b, is there a path between a and b along the segments?" This simple-sounding problem turns out to be a challenge. On one hand, understanding any local geometry does not seem to help, because the connecting path can be long and windy. On the other hand, the graph-theoretic understanding is based on the intersection graph, which is too expensive to maintain. A newly inserted (or deleted) segment can intersect a large number of objects in the set, changing the intersection graph dramatically.
Abstracting away, we can consider a broad class of problems of the form: maintain a set of n geometric objects, and answer connectivity queries in their intersection graph. Such graphs arise, for instance, in VLSI applications in the case of orthogonal segments, or gear transmission systems, in the case of touching disks; see Figure 1(b). A more compelling application can be found in sensor networks: if r is the radius within which two sensors can communicate, the communication network is the intersection graph of balls of radius r/2 centered at the sensors. While our focus is on theoretical understanding rather than the practicality of specific applications, these examples still indicate the natural appeal of geometric connectivity problems.
All these problems have a trivial O(n) solution, by maintaining the intersection graph through edge updates. A systematic approach to beating the linear time bound was proposed in Chan's paper as well [6], by drawing a connection to subgraph connectivity. Assume that a particular object type allows data struc-tures for intersection range searching with space S(n) and query time T (n). It was shown that geometric connectivity can essentially be solved by maintaining a graph of size m = O(S(n) + nT (n)) and running O(S(n)/n + T (n)) vertex updates for every object insertion or deletion. Using the previous subgraph connectivity result [6], an update in the geometric connectivity problem took time O([S(n)/n + T (n)] · [S(n) + nT (n)] 0.94 ). Using our improved result, the bound becomes O([S(n)/n + T (n)] · [S(n) + nT (n)] 2/3 ).
The prime implication in the previous paper is that connectivity of axis-parallel boxes in any constant dimension (in particular, orthogonal line segments in the plane) reduces to subgraph connectivity, with a polylogarithmic cost. Indeed, for such boxes range trees yield S(n) = n · lg O(d) n and T (n) = lg O(d) n. Unfortunately, while nontrivial range searching results are known for many types of objects, very efficient range searching is hard to come by. Consider our main motivating examples:
• for arbitrary (non-orthogonal) line segments in IR 2 , one can achieve
T (n) = O * ( √ n) and S(n) = O * (n), or T (n) = O * (n 1/3 ) and S(n) = O * (n 4/3 ) [28].
• for disks in IR 2 , one can achieve T (n) = O * (n 2/3 ) and S(n) = O * (n), or T (n) = O * (n 1/2 ) and
S(n) = O * (n 3/2 ) [3].
Even with our improved vertex-update time, the [S(n)/n + T (n)] · [S(n) + nT (n)] 2/3 bound is too weak to beat the trivial linear update time. For arbitrary line segments in IR 2 , one would need to improve the vertex-update time to m 1/2−ε , which appears unlikely without FMM (see Section 2). The line segment case was in fact mentioned as a major open problem, implicitly in [6] and explicitly in [1]. The situation gets worse for objects of higher complexity or in higher dimensions.
Our results. In this paper, we are finally able to break the above barrier for dynamic geometric connectivity. At a high level, we show that range searching with any sublinear query time is enough to obtain sublinear update time in geometric connectivity. In particular, we get the first nontrivial update times for arbitrary line segments in the plane, disks of arbitrary radii, and simplices and balls in any fixed dimension. Roughly speaking, if the objects admit intersection range searching with near-linear space and query time O(n^{1−b}) for some constant b > 0 (see Property 3), then we obtain dynamic geometric connectivity with Õ(n^{1−b^2/(2+b)}) amortized update time and O(n^{b/(2+b)}) query time (Theorem 5). While the previous reduction [6] involves merely a straightforward usage of "biclique covers", our result here requires much more work. For starters, we need to devise a "degree-sensitive" version of our improved subgraph connectivity algorithm (which is of interest in itself); we then use this and known connectivity structures to maintain not one but two carefully designed intermediate graphs. Known range searching techniques [2] from computational geometry almost always provide sublinear query time. For instance, Matoušek [28] showed that b ≈ 1/2 is attainable for line segments, triangles, and any constant-size polygons in IR^2; more generally, b ≈ 1/d for simplices or constant-size polyhedra in IR^d. Further results by Agarwal and Matoušek [3] yield b ≈ 1/(d + 1) for balls in IR^d. Most generally, b > 0 is possible for any class of objects defined by semialgebraic sets of constant description complexity.
More results. Our general sublinear results undoubtedly invite further research into finding better bounds for specific classes of objects. In general, the complexity of range queries provides a natural barrier for the update time, since upon inserting an object we at least need to determine if it intersects any object already in the set. Essentially, our result has a quadratic loss compared to range queries: if T (n) = n 1−b , the update time is n 1−Θ(b 2 ) .
In Section 5, we make a positive step towards closing this quadratic gap: we show that if the updates are given offline (i.e. are known in advance), the amortized update time can be made n^{1−Θ(b)}. We need FMM this time, but the usage of FMM here is more intricate (and interesting) than typical. For one, it is crucial to use fast rectangular matrix multiplication. Along the way, we even find ourselves rederiving Yuster and Zwick's sparse matrix multiplication result [38] in a more general form. The juggling of parameters is also more unusual, as one can suspect from looking at our actual update bound, which is O(n^{(1+α−bα)/(1+α−bα/2)}), where α = 0.294 is an exponent associated with rectangular FMM.
Dynamic Subgraph Connectivity with Õ(m^{2/3}) Update Time
In this section, we present our new method for the dynamic subgraph connectivity problem: maintaining a subset S of vertices in a graph G, under vertex insertions and deletions in S, so that we can decide whether any two query vertices are connected in the subgraph induced by S. We will call the vertices in S the active vertices. For now, we assume that the graph G itself is static.
The complete description of the new method is given in the proof of the following theorem. It is "short and sweet", especially if the reader compares with Chan's paper [6]. The previous method requires several stages of development, addressing the offline and semi-online special cases, along with the use of FMM-we completely bypass these intermediate stages, and FMM, here. Embedded below, one can find a number of different ideas (some also used in [6]): rebuilding periodically after a certain number of updates, distinguishing "high-degree" features from "low-degree" features (e.g., see [5,37]), amortizing by splitting smaller subsets from larger ones, etc. The key lies in the definition of a new, yet deceptively simple, intermediate graph G*, which is maintained by known polylogarithmic data structures for dynamic connectivity under edge updates [17,20,34]. Except for these known connectivity structures, the description is entirely self-contained.
Theorem 1. Given a graph with m edges, we can maintain a data structure for dynamic subgraph connectivity that supports vertex updates in Õ(m^{2/3}) amortized time and connectivity queries in O(m^{1/3}) time.
Proof. We divide the update sequence into phases, each consisting of q := m/∆ updates. The active vertices are partitioned into two sets P and Q, where P undergoes only deletions and Q undergoes both insertions and deletions. Each vertex insertion is done to Q. At the end of each phase, we move the elements of Q to P and reset Q to the empty set. This way, |Q| is kept at most q at all times.
Call a connected component in (the subgraph induced by) P high if the sum of the degrees of its vertices exceeds ∆, and low otherwise. Clearly, there are at most O(m/∆) high components.
The data structure.
• We store the components of P in a data structure for decremental (deletion-only) connectivity that supports edge deletions in polylogarithmic amortized time.
• We maintain a bipartite multigraph Γ between V and the components γ in P : for each uv ∈ E where v lies in component γ, we create a copy of an edge uγ ∈ Γ.
• For each vertex pair u,v, we maintain the value C[u, v] defined as the number of low components in P that are adjacent to both u and v in Γ. (Actually, only O(m∆) entries of C[·, ·] are nonzero and need to be stored.)
• We define a graph G * whose vertices are the vertices of Q and components of P :
(a) For each u, v ∈ Q, if C[u, v] > 0, then create an edge uv ∈ G*.
(b) For each vertex u ∈ Q and high component γ in P, if uγ ∈ Γ, then create an edge uγ ∈ G*.
(c) For each u, v ∈ Q, if uv ∈ E, then create an edge uv ∈ G*.
We maintain G * in another data structure for dynamic connectivity supporting polylogarithmic-time edge updates.
Justification. We claim that two vertices of Q are connected in the subgraph induced by the active vertices in G iff they are connected in G * . The "if" direction is obvious. For the "only if" direction, suppose two vertices u, v ∈ Q are "directly" connected in G by being adjacent to a common component γ in P . If γ is high, then edges of type (b) ensure that u and v are connected in G * . If instead γ is low, then edges of type (a) ensure that u and v are connected in G * . By concatenation, the argument extends to show that any two vertices u, v ∈ Q connected by a path in G are connected in G * .
Queries. Given two vertices v 1 and v 2 , if both are in Q, we can simply test whether they are connected in G * . If instead v j (j ∈ {1, 2}) is in a high component γ j , then we can replace v j with any vertex of Q adjacent to γ j in G * . If no such vertex exists, then because of type-(b) edges, γ j is an isolated component and we can simply test whether v 1 and v 2 are both in the same component of P .
If on the other hand v_j is in a low component γ_j, then we can exhaustively search for a vertex in Q adjacent to γ_j in Γ, in O(∆) time, and replace v_j with such a vertex. Again if no such vertex exists, then γ_j is an isolated component and the test is easy. The query cost is O(∆).
Preprocessing per phase. At the beginning of each phase, we compute the components of P, the multigraph Γ, the table C[·, ·], and the graph G* from scratch, in Õ(m∆) time. Charging this cost to the q updates of the phase gives an amortized cost of O(m∆/q) = O(∆^2) per update.
Insertion or deletion of a vertex u in Q. We add or remove the edges of G* incident to u: at most |Q| ≤ q edges of types (a) and (c), and O(m/∆) edges of type (b), i.e. O(q + m/∆) = O(m/∆) edge updates in G* in total.
Deletion of a vertex from a high component γ in P. The component γ is split into a number of subcomponents γ_1, . . . , γ_ℓ with, say, γ_1 being the largest. We can update the multigraph Γ in time O(deg(γ_2) + · · · + deg(γ_ℓ)) by splitting the smaller subcomponents from the largest subcomponent. Consequently, we need to update O(deg(γ_2) + · · · + deg(γ_ℓ)) edges of type (b) in G*. Since P undergoes deletions only, a vertex can belong to the smaller subcomponents in at most O(lg n) splits over the entire phase, and so the total cost per phase is O(m), which is absorbed in the preprocessing cost of the phase.
For each low subcomponent γ j , we update the matrix C[·, ·] in O(deg(γ j )∆) time, by examining each edge γ j v ∈ Γ and each of the O(∆) vertices u adjacent to γ j and testing whether γ j u ∈ Γ. Consequently, we need to update O(deg(γ j )∆) edges of type (a) in G * . Since a vertex can change from being in a high component to a low component at most once over the entire phase, the total cost per phase is O(m∆), which is absorbed by the preprocessing cost.
Finale. The overall amortized cost per update operation is
O(∆^2 + m/∆). Setting ∆ = m^{1/3} yields the claimed Õ(m^{2/3}) update time and O(m^{1/3}) query time.
Note that edge insertions and deletions in G can be accommodated easily (e.g., see Lemma 2 of the next section).
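A from-scratch sketch of the intermediate graph G* and the query logic for two vertices of Q (ours, not the paper's code): it rebuilds everything on each call and uses plain BFS in place of the polylogarithmic edge-update connectivity structures, so it only illustrates the reduction, not the claimed update time.

from collections import defaultdict, deque

def components(adj, P):
    """Connected components of the subgraph induced by the vertex set P."""
    seen, comps = set(), []
    for s in P:
        if s in seen:
            continue
        comp, queue = set(), deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            comp.add(u)
            for v in adj[u]:
                if v in P and v not in seen:
                    seen.add(v)
                    queue.append(v)
        comps.append(comp)
    return comps

def build_gstar(adj, P, Q, Delta):
    comps = components(adj, P)
    comp_of = {v: i for i, c in enumerate(comps) for v in c}
    degsum = [sum(len(adj[v]) for v in c) for c in comps]
    high = [i for i in range(len(comps)) if degsum[i] > Delta]
    gamma = defaultdict(set)                              # components adjacent to each vertex
    for u in adj:
        for v in adj[u]:
            if v in comp_of:
                gamma[u].add(comp_of[v])
    gstar = defaultdict(set)
    def link(a, b):
        gstar[a].add(b); gstar[b].add(a)
    for u in Q:
        for v in Q:                                       # type (a): common low component
            if u < v and any(i not in high and i in gamma[v] for i in gamma[u]):
                link(('q', u), ('q', v))
        for i in gamma[u]:                                # type (b): edges to high components
            if i in high:
                link(('q', u), ('c', i))
        for v in adj[u]:                                  # type (c): edges inside Q
            if v in Q and u < v:
                link(('q', u), ('q', v))
    return gstar

def connected(gstar, a, b):
    seen, queue = {a}, deque([a])
    while queue:
        x = queue.popleft()
        if x == b:
            return True
        for y in gstar[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return a == b

# Example: a 6-cycle; P = {0, 1, 2, 3}, Q = {4, 5}.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
gstar = build_gstar(adj, {0, 1, 2, 3}, {4, 5}, Delta=3)
print(connected(gstar, ('q', 4), ('q', 5)))   # True: 4 and 5 are adjacent in G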
Dynamic Geometric Connectivity with Sublinear Update Time
In this section, we investigate geometric connectivity problems: maintaining a set S of n objects, under insertions and deletions of objects, so that we can decide whether two query objects are connected in the intersection graph of S. (In particular, we can decide whether two query points are connected in the union of S by finding two objects containing the two points, via range searching, and testing connectedness for these two objects.)
By the biclique-cover technique from [6], the result from the previous section immediately implies a dynamic connectivity method for axis-parallel boxes with O(n 2/3 ) update time and O(n 1/3 ) query time in any fixed dimension.
Unfortunately, this technique is not strong enough to lead to sublinear results for other objects, as we have explained in the introduction. This is because (i) the size of the maintained graph, m = O(S(n) + nT (n)), may be too large and (ii) the number of vertex updates triggered by an object update, O(S(n)/n + T (n)), may be too large.
We can overcome the first obstacle by using a different strategy that rebuilds the graph more often to keep it sparse; this is not obvious and will be described precisely later during the proof of Theorem 5. The second obstacle is even more critical: here, the key is to observe that although each geometric update requires multiple vertex updates, many of these vertex updates involve vertices of low degree.
A degree-sensitive version of subgraph connectivity
The first ingredient we need is a dynamic subgraph connectivity method that works faster when the degree of the updated vertex is small. Fortunately, we can prove the following lemma, which extends Theorem 1 (if we set ∆ = n 1/3 ). The method follows that of Theorem 1, but with an extra twist: not only do we classify components of P as high or low, but we also classify vertices of Q as high or low. Proof. The data structure is the same as in the proof of Theorem 1, except for one difference: the definition of the graph G * .
Call a vertex high if its degree exceeds m/∆, and low otherwise. Clearly, there are at most O(∆) high vertices.
• We define a graph G* whose vertices are the vertices of Q and components of P:
(a′) For each pair u, v ∈ Q at least one of which is high, if C[u, v] > 0, then create an edge uv ∈ G*.
(b) For each vertex u ∈ Q and high component γ in P, if uγ ∈ Γ, then create an edge uγ ∈ G*.
(b′) For each low vertex u ∈ Q and low component γ in P, if uγ ∈ Γ, then create an edge uγ ∈ G*.
(c) For each u, v ∈ Q, if uv ∈ E, then create an edge uv ∈ G*.
We maintain G * in a data structure for dynamic connectivity with polylogarithmic-time edge updates.
Justification. We claim that two vertices of Q are connected in the subgraph induced by the active vertices in G iff they are connected in G * . The "if" direction is obvious. For the "only if" direction, suppose two vertices u, v ∈ Q are "directly" connected in G by being adjacent to a common component γ in P . If γ is high, then edges of type (b) ensure that u and v are connected in G * . If u and v are both low, then edges of type (b ′ ) ensure that u and v are connected in G * . In the remaining case, at least one of the two vertices, say, u is high, and γ is low; here, edges of type (a ′ ) ensure that u and v are again connected in G * . The claim follows by concatenation.
Queries. Given two vertices v 1 and v 2 , if both are in Q, we can simply test whether they are connected in G * . If instead v j (j ∈ {1, 2}) is in a component γ j , then we can replace v j with any vertex of Q adjacent to γ j in G * . If no such vertex exists, then because of type-(b ′ ) edges, γ j can only be adjacent to high vertices of Q. We can exhaustively search for a high vertex in Q adjacent to γ j in Γ, in O(∆) time, and replace v j with such a vertex. If no such vertex exists, then γ j is an isolated component and we can simply test whether v 1 and v 2 are both in γ j . The cost is O(∆).
Preprocessing per phase. At the beginning of each phase, the cost to preprocess the data structure is O(m∆) as before. We can charge every update operation with an amortized cost of O(m∆/q) = O(∆ 2 ). Edge updates. We can simulate the insertion of an edge uv by inserting a new low vertex z adjacent to only u and v to Q. Since the degree is 2, the cost is O(1). We can later simulate the deletion of this edge by deleting the vertex z from Q.
Update of a high vertex
Range searching tools from geometry
Next, we need known range searching techniques. These techniques give linear-space data structures (S(n) = O(n)) that can retrieve all objects intersecting a query object in sublinear time (T(n) = O(n^{1−b})) for many types of geometric objects. We assume that our class of geometric objects satisfies the following property for some constant b > 0-this property neatly summarizes all we need to know from geometry.
Property 3. Given any n objects of the class, we can construct a collection C of canonical subsets of total size ∑_{C∈C} |C| = Õ(n) such that, for every object z, the set of given objects intersecting z is the disjoint union of the subsets in a subcollection C_z ⊆ C, where (i) |C_z| = O(n^{1−b}), and (ii) for every ∆ ≥ 1, the subsets in C_z of size at most n/∆ have total size O(n/∆^b).
The property is typically proved by applying a suitable "partition theorem" in a recursive manner, thereby forming a so-called "partition tree"; for example, see the work by Matoušek [28] or the survey by Agarwal and Erickson [2]. Each canonical subset corresponds to a node of the partition tree (more precisely, the subset of all objects stored at the leaves underneath the node). Matoušek's results imply that b = 1/d − ε is attainable for simplices or constant-size polyhedra in IR^d. (To go from simplex range searching to intersection searching, one uses multi-level partition trees; e.g., see [29].) Further results by Agarwal and Matoušek [3] yield b = 1/(d + 1) − ε for balls in IR^d and nontrivial values of b for other families of curved objects (semialgebraic sets of constant degree). The special case of axis-parallel boxes corresponds to b = 1.
The specific bounds in (i) and (ii) may not be too well known, but they follow from the hierarchical way in which canonical subsets are constructed. For example, (ii) follows since the subsets in C z of size at most n/∆ are contained in O(∆ 1−b ) subsets of size O(n/∆). In fact, (multi-level) partition trees guarantee a stronger inequality,
∑_{C∈C_z} |C|^{1−b} = O(n^{1−b})
, from which both (i) and (ii) can be obtained after a moment's thought.
As an illustration, we can use the above property to develop a data structure for a special case of dynamic geometric connectivity where insertions are done in "blocks" but arbitrary deletions are to be supported. Although the insertion time is at least linear, the result is good if the block size s is sufficiently large. This subroutine will make up a part of the final solution.
Lemma 4. We can maintain the connected components among a set S of objects in a data structure that supports insertion of a block of s objects in O(n + sn 1−b ) amortized time (s < n), and deletion of a single object in O(1) amortized time.
Proof. We maintain a multigraph H in a data structure for dynamic connectivity with polylogarithmic edge update time (which explicitly maintains the connected components), where the vertices are the objects of S. This multigraph will obey the invariant that two objects are geometrically connected iff they are connected in S. We do not insist that H has linear size.
Insertion of a block B to S. We first form a collection C of canonical subsets for S ∪ B by Property 3. For each z ∈ B and each C ∈ C z , we assign z to C. For each canonical subset C ∈ C, if C is assigned at least one object of B, then we create new edges in H linking all objects of C and all objects assigned to C in a path. (If this path overlaps with previous paths, we create multiple copies of edges.) The number of edges inserted is thus O(n + |B|n 1−b ).
Justification. The invariant is satisfied since all objects in a canonical subset C intersect all objects assigned to C, and are thus all connected if there is at least one object assigned to C.
Deletion of an object z from S. For each canonical subset C containing or assigned the object z, we need to delete at most 2 edges and insert 1 edge to maintain the path. As soon as the path contains no object assigned to C, we delete all the edges in the path. Since the length of the path can only decrease over the entire update sequence, the total number of such edge updates is proportional to the initial length of the path. We can charge the cost to edge insertions.
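A minimal sketch of the block-insertion step (ours): given canonical subsets for S ∪ B, we link all objects of each subset that received at least one new object, together with its assigned objects, into a path. Connectivity is tracked here with a union-find structure, which suffices to illustrate insertions; supporting the O(1)-amortized deletions of the lemma requires the dynamic-connectivity machinery cited above instead.

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def insert_block(uf, canonical, assigned):
    """canonical: dict C_id -> objects in C; assigned: dict C_id -> new objects assigned to C."""
    edges = 0
    for cid, members in canonical.items():
        if not assigned.get(cid):
            continue
        chain = list(members) + list(assigned[cid])   # link C and its assigned objects in a path
        for a, b in zip(chain, chain[1:]):
            uf.union(a, b)
            edges += 1
    return edges                                       # O(n + |B| n^{1-b}) edges in total

uf = UnionFind()
canonical = {0: ['s1', 's2'], 1: ['s3']}
assigned = {0: ['b1'], 1: []}
print(insert_block(uf, canonical, assigned), uf.find('s1') == uf.find('b1'))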
Putting it together
We are finally ready to present our sublinear result for dynamic geometric connectivity. We again need the idea of rebuilding periodically, and splitting smaller sets from larger ones. In addition to the graph H (of superlinear size) from Lemma 4, which undergoes insertions only in blocks, the key lies in the definition of another subtly crafted intermediate graph G (of linear size), maintained this time by the subgraph connectivity structure of Lemma 2. The definition of this graph involves multiple types of vertices and edges. The details of the analysis and the setting of parameters get more interesting.
Theorem 5. Assume 0 < b ≤ 1/2. We can maintain a collection of objects in amortized update time O(n 1−b 2 /(2+b) ) and answer connectivity queries in time O(n b/(2+b) ).
Proof. We divide the update sequence into phases, each consisting of y := n b updates. The current objects are partitioned into two sets X and Y , where X undergoes only deletions and Y undergoes both insertions and deletions. Each insertion is done to Y . At the end of each phase, we move the elements of Y to X and reset Y to the empty set. This way, |Y | is kept at most y at all times.
At the beginning of each phase, we form a collection C of canonical subsets for X by Property 3.
The data structure.
• We maintain the components of X in the data structure from Lemma 4.
• We maintain the following graph G for dynamic subgraph connectivity, where the vertices are objects of X ∪ Y , components of X, and the canonical subsets of the current phase:
(a) Create an edge in G between each component of X and each of its objects.
(b) Create an edge in G between each canonical subset and each of its objects in X.
(c) Create an edge in G between each object z ∈ Y and each canonical subset C ∈ C z . Here, we assign z to C.
(d) Create an edge in G between every two intersecting objects in Y .
(e) We make a canonical subset active in G iff it is assigned at least one object in Y . Vertices that are objects or components are always active. Justification. We claim that two objects are geometrically connected in X ∪ Y iff they are connected in the subgraph induced by the active vertices in the graph G. The "only if" direction is obvious. For the "if" direction, we note that all objects in an active canonical subset C intersect all objects assigned to C and are thus all connected.
Queries. We answer a query by querying in the graph G. The cost is O(∆).
Preprocessing per phase. Before a new phase begins, we need to update the components in X as we move all elements of Y to X (a block insertion). By Lemma 4, the cost is O(n + yn^{1−b}). Accounting also for the vertex updates performed in G, the amortized cost per update works out to O(n^{1−b}∆^2 + ∆^{1−b} · n/∆ + n/∆^b) = O(n^{1−b}∆^2 + n/∆^b).
Deletion of an object z in X. We first update the components of X. By Lemma 4, the amortized cost is O(1). We can now update the edges of type (a) in G. The total number of such edge updates per phase is O(n lg n), by always splitting smaller components from larger ones. The amortized number of edge updates is thus O(n/y). The amortized cost is O((n/y)∆ 2 ) = O(n 1−b ∆ 2 ).
Finale. The overall amortized cost per update operation is
O(n^{1−b}∆^2 + n/∆^b). Set ∆ = n^{b/(2+b)}.
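As a quick sanity check (added here, not part of the original argument), the following snippet verifies symbolically that ∆ = n^{b/(2+b)} balances the two terms and yields the exponent claimed in Theorem 5.

```python
from fractions import Fraction

for b in (Fraction(1, 4), Fraction(1, 3), Fraction(1, 2)):
    d = b / (2 + b)                  # exponent of Delta as a power of n
    term1 = 1 - b + 2 * d            # exponent of n^{1-b} * Delta^2
    term2 = 1 - b * d                # exponent of n / Delta^b
    target = 1 - b * b / (2 + b)     # exponent claimed in Theorem 5
    assert term1 == term2 == target
    print(f"b={b}: update exponent {target}, query exponent {d}")
```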
Note that we can still prove the theorem for b > 1/2, by handling the O(y 2 ) intersections among Y (the type (d) edges) in a less naive way. However, we are not aware of any specific applications with b ∈ (1/2, 1).
Offline Dynamic Geometric Connectivity
For the special case of offline updates, we can improve the result of Section 4 for small values of b by a different method using rectangular matrix multiplication.
Let M [n 1 , n 2 , n 3 ] represent the cost of multiplying a Boolean n 1 × n 2 matrix A with a Boolean n 2 × n 3 matrix B. Let M [n 1 , n 2 , n 3 | m 1 , m 2 ] represent the same cost under the knowledge that the number of 1's in A is m 1 and the number of 1's in B is m 2 . We can reinterpret this task in graph terms: Suppose we are given a tripartite graph with vertex classes V 1 , V 2 , V 3 of sizes n 1 , n 2 , n 3 respectively where there are m 1 edges between V 1 and V 2 and m 2 edges between V 2 and V 3 . Then M [n 1 , n 2 , n 3 | m 1 , m 2 ] represent the cost of deciding, for each u ∈ V 1 and v ∈ V 3 , whether u and v are adjacent to a common vertex in V 2 .
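The following small numpy illustration (added for concreteness, with arbitrary random matrices) shows this graph view: the Boolean product has a 1 in position (u, v) exactly when u ∈ V1 and v ∈ V3 share a neighbor in V2.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3 = 4, 6, 5
A = rng.random((n1, n2)) < 0.3      # adjacency between V1 and V2 (m1 ones)
B = rng.random((n2, n3)) < 0.3      # adjacency between V2 and V3 (m2 ones)

common = (A.astype(int) @ B.astype(int)) > 0       # Boolean product
brute = np.array([[any(A[u, w] and B[w, v] for w in range(n2))
                   for v in range(n3)] for u in range(n1)])
assert (common == brute).all()
print(common.astype(int))
```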
An offline degree-sensitive version of subgraph connectivity
We begin with an offline variant of Lemma 2.
Lemma 6. Let 1 ≤ ∆ ≤ q ≤ m.
Proof. We divide the update sequence into phases, each consisting of q low-vertex updates. The active vertices are partitioned into two sets P and Q, with Q ⊆ Q 0, where P and Q 0 are static and Q undergoes both insertions and deletions. Each vertex insertion/deletion is done to Q. At the end of each phase, we reset Q 0 to hold all O(∆) high vertices plus the low vertices involved in the updates of the next phase, reset P to hold all active vertices not in Q 0, and reset Q to hold all active vertices in Q 0. Clearly, |Q| ≤ |Q 0| = O(q).
The data structure is the same as the one in the proof of Lemma 2, with one key difference: we only maintain the value C[u, v] when u is a high vertex in Q 0 and v is a (high or low) vertex in Q 0 . Moreover, we do not need to distinguish between high and low components, i.e., all components are considered low.
During preprocessing of each phase, we can now compute C[u, v] for all such pairs. Deletions in P do not occur now.
Sparse and dense rectangular matrix multiplication
Sparse matrix multiplication can be reduced to multiplying smaller dense matrices, by using a "high-low" trick [5]. Fact 7(i) below can be viewed as a variant of [6, Lemma 3.1] and a result of Yuster and Zwick [38]; incidentally, this fact is sufficiently powerful to yield a simple(r) proof of Yuster and Zwick's sparse matrix multiplication result, when combined with known bounds on dense rectangular matrix multiplication. Fact 7(ii) below states one known bound on dense rectangular matrix multiplication which we will use.
Putting it together
We now present our offline result for dynamic geometric connectivity using Lemma 6. Although we also use Property 3, the design of the key graph G is quite different from the one in the proof of Theorem 5. For instance, the size of the graph is larger (and no longer O(n)), but the number of edges incident to high vertices remains linear; furthermore, each object update triggers only a constant number of vertex updates in the graph. All the details come together in the analysis to lead to some intriguing choices of parameters. Proof. We divide the update sequence into phases, each consisting of q updates, where q is a parameter satisfying ∆ ≤ q ≤ n/∆^{1−b}. The current objects are partitioned into two sets X and Y, with Y ⊆ Y 0 where X and Y 0 are static and Y undergoes both insertions and deletions. Each insertion/deletion is done to Y. At the end of each phase, we reset Y 0 to hold all objects involved in the updates of the next phase, X to hold all current objects not in Y 0, and Y to hold all current objects in Y 0. Clearly, |Y| ≤ |Y 0| = O(q). At the beginning of each phase, we form a collection C of canonical subsets for X ∪ Y 0 by Property 3.
The data structure.
• We maintain the components of X in the data structure from Lemma 4.
• We maintain the following graph G for offline dynamic subgraph connectivity, where the vertices are objects of X ∪ Y 0 , components of X, and canonical subsets of size exceeding n/∆:
(a) Create an edge in G between each component of X and each of its objects.
(b) Create an edge in G between each canonical subset C of size exceeding n/∆ and each of its objects in X ∪ Y .
(c) Create an edge in G between each object z ∈ Y 0 and each canonical subset C ∈ C z of size exceeding n/∆. Here, we assign z to C.
(d) Create an edge in G between each object z ∈ Y 0 and each object in the union of the canonical subsets in C z of size at most n/∆.
(e) We make a canonical subset active in G iff it is assigned at least one object in Y . We make the vertices in X ∪Y active, and all components active. The high vertices are precisely the canonical subsets of size exceeding n/∆; there are O(∆) such vertices. Update of an object z in Y . We need to make a single vertex update z in G, which has degree O(n/∆ b ) by Property 3(ii). Furthermore, we may have to change the status of as many as O(∆ 1−b ) high vertices by Property 3(i). According to Lemma 8, the cost of these vertex updates is O(M [∆, n, q | n, m]/q + n/∆ b + ∆ 1−b q).
Finale. By Fact 7, assuming that ∆ ≤ q^α and q ≤ n/t, we have M[∆, n, q | n, m] = O(M[∆, n/t, q] + mt) = O(nq/t + nqt/∆^b). Choosing t = ∆^{b/2} gives O(nq/∆^{b/2}). The overall amortized cost per update operation is thus O(n/∆^{b/2} + ∆^{1−b}q + n/q + n^{1−b}). Set ∆ = q^α and q = n^{1/(1+α−bα/2)} and the result follows. (Note that indeed ∆ ≤ q ≤ n/∆^{1−b} and q ≤ n/t for these choices of parameters.) Compared to Theorem 5, the dependence on b of the exponent in the update bound is only 1 − Θ(b) rather than 1 − Θ(b^2). The bound is better, for example, for b ≤ 1/4.
Open Problems
Our work opens up many interesting directions for further research. For subgraph connectivity, an obvious question is whether the O(m 2/3 ) vertex-update bound can be improved (without or with FMM); as we have mentioned, improvements beyond √ m without FMM are not possible without a breakthrough on the triangle-finding problem. An intriguing question is whether for dense graphs we can achieve update time sublinear in n, i.e., O(n 1−ε ) (or possibly even sublinear in the degree). For geometric connectivity, it would be desirable to determine the best update bounds for specific shapes such as line segments and disks in two dimensions. Also, directed settings of geometric connectivity arise in applications and are worth studying; for example, when sensors' transmission ranges are balls of different radii or wedges, a sensor may lie in another sensor's range without the reverse being true.
For both subgraph and geometric connectivity, we can reduce the query time at the expense of increasing the update time, but we do not know whether constant or polylogarithmic query time is possible with sublinear update time in general (see [1] for a result on the 2-dimensional orthogonal special case). Currently, we do not know how to obtain our update bounds with linear space (e.g., Theorem 1 requires O(m 4/3 ) space), nor do we know how to get good worst-case update bounds (since the known polylogarithmic results for connectivity under edge updates are all amortized). Also, the queries we have considered are about connectivity between two vertices/objects. Can nontrivial results be obtained for richer queries such as counting the number of connected components (see [1] on the 2-dimensional orthogonal case), or perhaps shortest paths or minimum cut?
| 6,473 |
0808.1128
|
2953283045
|
Dynamic connectivity is a well-studied problem, but so far the most compelling progress has been confined to the edge-update model: maintain an understanding of connectivity in an undirected graph, subject to edge insertions and deletions. In this paper, we study two more challenging, yet equally fundamental problems. Subgraph connectivity asks to maintain an understanding of connectivity under vertex updates: updates can turn vertices on and off, and queries refer to the subgraph induced by "on" vertices. (For instance, this is closer to applications in networks of routers, where node faults may occur.) We describe a data structure supporting vertex updates in O (m^ 2 3 ) amortized time, where m denotes the number of edges in the graph. This greatly improves over the previous result [Chan, STOC'02], which required fast matrix multiplication and had an update time of O(m^0.94). The new data structure is also simpler. Geometric connectivity asks to maintain a dynamic set of n geometric objects, and query connectivity in their intersection graph. (For instance, the intersection graph of balls describes connectivity in a network of sensors with bounded transmission radius.) Previously, nontrivial fully dynamic results were known only for special cases like axis-parallel line segments and rectangles. We provide similarly improved update times, O (n^ 2 3 ), for these special cases. Moreover, we show how to obtain sublinear update bounds for virtually all families of geometric objects which allow sublinear-time range queries, such as arbitrary 2D line segments, d-dimensional simplices, and d-dimensional balls.
|
If vertices have constant degree, vertex updates are equivalent to edge updates. For edge updates, Henzinger and King @cite_9 were first to obtain polylogarithmic update times (randomized). This was improved by @cite_4 to a deterministic solution with @math time per update, and by Thorup @cite_40 to a randomized solution with @math update time. The randomized bound almost matches the @math lower bound from @cite_24 . All these data structures maintain a spanning forest as a certificate for connectivity. This idea fails for vertex updates in the general case, since the certificate can change substantially after just one update.
|
{
"abstract": [
"We develop a new technique for proving cell-probe lower bounds on dynamic data structures. This technique enables us to prove an amortized randomized @math lower bound per operation for several data structural problems on @math elements, including partial sums, dynamic connectivity among disjoint paths (or a forest or a graph), and several other dynamic graph problems (by simple reductions). Such a lower bound breaks a long-standing barrier of @math for any dynamic language membership problem. It also establishes the optimality of several existing data structures, such as Sleator and Tarjan's dynamic trees. We also prove the first @math lower bound in the external-memory model without assumptions on the data structure (such as the comparison model). Our lower bounds also give a query-update trade-off curve matched, e.g., by several data structures for dynamic connectivity in graphs. We also prove matching upper and lower bounds for partial sums when parameterized by the word size and the maximum additive change in an update.",
"In this paper we present near-optimal bounds for fullydynamic graph connectivity which is the most basic nontrivial fully-dynamic graph problem. Connectivity queries are supported in O(log n log log log n) time while the updates are supported in O(log n(log log n) 3) expected amortized time. The previous best update time was O((log n)2). Our new bound is only doubly-logarithmic factors from a general cell probe lower bound of f2(log n log log n). Our algorithm runs on a pointer machine, and uses only standard AC ° instructions. In our developments we make some comparatively trivial observations improving some deterministic bounds. The space bound of the previous O((log n) ) connectivity algorithm is improved from O(m + n log n) to O(m). The previous time complexity of fully-dynamic 2-edge and biconnectivity is improved from O((log n) 4) to O((log n) 3 log log n).",
"This paper solves a longstanding open problem in fully dynamic algorithms: We present the first fully dynamic algorithms that maintain connectivity, bipartiteness, and approximate minimum spanning trees in polylogarithmic time per edge insertion or deletion. The algorithms are designed using a new dynamic technique that combines a novel graph decomposition with randomization. They are Las-Vegas type randomized algorithms which use simple data structures and have a small constant factor. Let n denote the number of nodes in the graph. For a sequence of O( m 0 ) operations, where m 0 is the number of edges in the initial graph, the expected time for p updates is O ( p log 3 n ) (througout the paper the logarithms are based 2) for connectivity and bipartiteness. The worst-case time for one query is O (log n log log n ). For the k -edge witness problem (“Does the removal of k given edges disconnect the graph?”) the expected time for p updates is O ( p log 3 n ) and the expected time for q queries is O ( qk log 3 n ). Given a graph with k different weights, the minimum spanning tree can be maintained during a sequence of p updates in expected time O ( pk log 3 n ). This implies an algorithm to maintain a 1 + e-approximation of the minimum spanning tree in expected time O (( p log 3 n log U ) e) for p updates, where the weights of the edges are between 1 and U .",
"Deterministic fully dynamic graph algorithms are presented for connectivity, minimum spanning tree, 2-edge connectivity, and biconnectivity. Assuming that we start with no edges in a graph with n vertices, the amortized operation costs are O(log2 n) for connectivity, O(log4 n) for minimum spanning forest, 2-edge connectivity, and O(log5 n) biconnectivity."
],
"cite_N": [
"@cite_24",
"@cite_40",
"@cite_9",
"@cite_4"
],
"mid": [
"1963524245",
"2010151376",
"1992869351",
"2045430818"
]
}
|
Dynamic Connectivity: Connecting to Networks and Geometry
|
Dynamic graphs inspire a natural, challenging, and well-studied class of algorithmic problems. A rich body of the STOC/FOCS literature has considered problems ranging from the basic question of understanding connectivity in a dynamic graph [13,17,34,6,31], to maintaining the minimum spanning tree [20], the min-cut [36], shortest paths [9,35], reachability in directed graphs [10,25,26,32,33], etc.
But what exactly makes a graph "dynamic"? Computer networks have long provided the common motivation. The dynamic nature of such networks is captured by two basic types of updates to the graph:
• edge updates: adding or removing an edge. These correspond to setting up a new cable connection, accidental cable cuts, etc.
• vertex updates: turning a vertex on and off. Vertices (routers) can temporarily become "off" after events such as a misconfiguration, a software crash and reboot, etc. Problems involving only vertex updates have been called dynamic subgraph problems, since queries refer to the subgraph induced by vertices which are on.
Loosely speaking, dynamic graph problems fall into two categories. For "hard" problems, such as shortest paths and directed reachability, the best known running times are at least linear in the number of vertices. These high running times obscure the difference between vertex and edge updates, and identical bounds are often stated [9,32,33] for both operations. For the remainder of the problems, sublinear running times are known for edge updates, but sublinear bounds for vertex updates seems much harder to get. For instance, even iterating through all edges incident to a vertex may take linear time in the worst case. That vertex updates are slow is unfortunate. Referring to the computer-network metaphor, vertex updates are cheap "soft" events (misconfiguration or reboot), which occur more frequently than the costly physical events (cable cut) that cause an edge update.
Subgraph connectivity. As mentioned, most previous sublinear dynamic graph algorithms address edge updates but not the equally fundamental vertex updates. One notable exception, however, was a result of Chan [6] from STOC'02 on the basic connectivity problem for general sparse (undirected) graphs. This algorithm can support vertex updates in time 1 O(m 0.94 ) and decide whether two query vertices are connected in time O(m 1/3 ).
Though an encouraging start, the nature of this result makes it appear more like a half breakthrough. For one, the update time is only slightly sublinear. Worse yet, Chan's algorithm requires fast matrix multiplication (FMM). The O(m 0.94 ) update time follows from the theoretical FMM algorithm of Coppersmith and Winograd [8]. If Strassen's algorithm is used instead, the update time becomes O(m 0.984 ). Even if optimistically FMM could be done in quadratic time, the update time would only improve to O(m 0.89 ). FMM has been used before in various dynamic graph algorithms (e.g., [10,26]), and the paper [6] noted specific connections to some matrix-multiplication-related problems (see Section 2). All this naturally led one to suspect, as conjectured in the paper, that FMM might be essential to our problem. Thus, the result we are about to describe may come as a bit of a surprise. . . 1 We use m and n to denote the number of edges and vertices of the graph respectively; e O(·) ignores polylogarithmic factors and O * (·) hides n ε factors for an arbitrarily small constant ε > 0. Update bounds in this paper are, by default, amortized. First of all, this is a significant quantitative improvement (to anyone who regards an m 0.27 factor as substantial), and it represents the first convincingly sublinear running time. More importantly, it is a significant qualitative improvement, as our bound does not require FMM. Our algorithm involves a number of ideas, some of which can be traced back to earlier algorithms, but we use known edge-updatable connectivity structures to maintain a more cleverly designed intermediate graph. The end product is not straightforward at all, but still turns out to be simpler than the previous method [6] and has a compact, two-page description (we regard this as another plus, not a drawback).
Dynamic Geometry
We next turn to another important class of dynamic connectivity problems-those arising from geometry.
Geometric connectivity. Consider the following question, illustrated in Figure 1(a). Maintain a set of line segments in the plane, under insertions and deletions, to answer queries of the form: "given two points a and b, is there a path between a and b along the segments?" This simple-sounding problem turns out to be a challenge. On one hand, understanding any local geometry does not seem to help, because the connecting path can be long and windy. On the other hand, the graph-theoretic understanding is based on the intersection graph, which is too expensive to maintain. A newly inserted (or deleted) segment can intersect a large number of objects in the set, changing the intersection graph dramatically.
Abstracting away, we can consider a broad class of problems of the form: maintain a set of n geometric objects, and answer connectivity queries in their intersection graph. Such graphs arise, for instance, in VLSI applications in the case of orthogonal segments, or gear transmission systems, in the case of touching disks; see Figure 1(b). A more compelling application can be found in sensor networks: if r is the radius within which two sensors can communicate, the communication network is the intersection graph of balls of radius r/2 centered at the sensors. While our focus is on theoretical understanding rather than the practicality of specific applications, these examples still indicate the natural appeal of geometric connectivity problems.
All these problems have a trivial O(n) solution, by maintaining the intersection graph through edge updates. A systematic approach to beating the linear time bound was proposed in Chan's paper as well [6], by drawing a connection to subgraph connectivity. Assume that a particular object type allows data struc-tures for intersection range searching with space S(n) and query time T (n). It was shown that geometric connectivity can essentially be solved by maintaining a graph of size m = O(S(n) + nT (n)) and running O(S(n)/n + T (n)) vertex updates for every object insertion or deletion. Using the previous subgraph connectivity result [6], an update in the geometric connectivity problem took time O([S(n)/n + T (n)] · [S(n) + nT (n)] 0.94 ). Using our improved result, the bound becomes O([S(n)/n + T (n)] · [S(n) + nT (n)] 2/3 ).
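The following throwaway calculator (an added illustration; polylogarithmic and n^ε factors are ignored) evaluates the exponent of this update bound when S(n) = n^s and T(n) = n^t, which makes it easy to see why the reduction is good for orthogonal boxes but useless for, say, line segments.

```python
from fractions import Fraction as F

def reduction_exponent(s, t):
    """Exponent of [S/n + T] * [S + n*T]^(2/3) when S(n) = n^s, T(n) = n^t."""
    return max(s - 1, t) + F(2, 3) * max(s, 1 + t)

# Orthogonal boxes: near-linear S and polylogarithmic T behave like s = 1, t = 0.
print(reduction_exponent(F(1), F(0)))        # 2/3  -> roughly n^{2/3} per update
# Line segments in the plane with S(n) = n, T(n) = n^{1/2}.
print(reduction_exponent(F(1), F(1, 2)))     # 3/2  -> worse than the trivial O(n)
```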
The prime implication in the previous paper is that connectivity of axis-parallel boxes in any constant dimension (in particular, orthogonal line segments in the plane) reduces to subgraph connectivity, with a polylogarithmic cost. Indeed, for such boxes range trees yield S(n) = n · lg O(d) n and T (n) = lg O(d) n. Unfortunately, while nontrivial range searching results are known for many types of objects, very efficient range searching is hard to come by. Consider our main motivating examples:
• for arbitrary (non-orthogonal) line segments in IR 2 , one can achieve
T (n) = O * ( √ n) and S(n) = O * (n), or T (n) = O * (n 1/3 ) and S(n) = O * (n 4/3 ) [28].
• for disks in IR 2 , one can achieve T (n) = O * (n 2/3 ) and S(n) = O * (n), or T (n) = O * (n 1/2 ) and
S(n) = O * (n 3/2 ) [3].
Even with our improved vertex-update time, the [S(n)/n + T (n)] · [S(n) + nT (n)] 2/3 bound is too weak to beat the trivial linear update time. For arbitrary line segments in IR 2 , one would need to improve the vertex-update time to m 1/2−ε , which appears unlikely without FMM (see Section 2). The line segment case was in fact mentioned as a major open problem, implicitly in [6] and explicitly in [1]. The situation gets worse for objects of higher complexity or in higher dimensions.
Our results. In this paper, we are finally able to break the above barrier for dynamic geometric connectivity. At a high level, we show that range searching with any sublinear query time is enough to obtain sublinear update time in geometric connectivity. In particular, we get the first nontrivial update times for arbitrary line segments in the plane, disks of arbitrary radii, and simplices and balls in any fixed dimension. While the previous reduction [6] involves merely a straightforward usage of "biclique covers", our result here requires much more work. For starters, we need to devise a "degree-sensitive" version of our improved subgraph connectivity algorithm (which is of interest in itself); we then use this and known connectivity structures to maintain not one but two carefully designed intermediate graphs. Known range searching techniques [2] from computational geometry almost always provide sublinear query time. For instance, Matoušek [28] showed that b ≈ 1/2 is attainable for line segments, triangles, and any constant-size polygons in IR 2 ; more generally, b ≈ 1/d for simplices or constant-size polyhedra in IR d . Further results by Agarwal and Matoušek [3] yield b ≈ 1/(d + 1) for balls in IR d . Most generally, b > 0 is possible for any class of objects defined by semialgebraic sets of constant description complexity.
More results. Our general sublinear results undoubtedly invite further research into finding better bounds for specific classes of objects. In general, the complexity of range queries provides a natural barrier for the update time, since upon inserting an object we at least need to determine if it intersects any object already in the set. Essentially, our result has a quadratic loss compared to range queries: if T(n) = n^{1−b}, the update time is n^{1−Θ(b^2)}.
In Section 5, we make a positive step towards closing this quadratic gap: we show that if the updates are given offline (i.e. are known in advance), the amortized update time can be made n^{1−Θ(b)}. We need FMM this time, but the usage of FMM here is more intricate (and interesting) than typical. For one, it is crucial to use fast rectangular matrix multiplication. Along the way, we even find ourselves rederiving Yuster and Zwick's sparse matrix multiplication result [38] in a more general form. The juggling of parameters is also more unusual, as one can suspect from looking at our actual update bound, which is O(n^{(1+α−bα)/(1+α−bα/2)}), where α = 0.294 is an exponent associated with rectangular FMM.
Dynamic Subgraph Connectivity with O(m^{2/3}) Update Time
In this section, we present our new method for the dynamic subgraph connectivity problem: maintaining a subset S of vertices in a graph G, under vertex insertions and deletions in S, so that we can decide whether any two query vertices are connected in the subgraph induced by S. We will call the vertices in S the active vertices. For now, we assume that the graph G itself is static.
The complete description of the new method is given in the proof of the following theorem. It is "short and sweet", especially if the reader compares with Chan's paper [6]. The previous method requires several stages of development, addressing the offline and semi-online special cases, along with the use of FMMwe completely bypass these intermediate stages, and FMM, here. Embedded below, one can find a number of different ideas (some also used in [6]): rebuilding periodically after a certain number of updates, distinguishing "high-degree" features from "low-degree" features (e.g., see [5,37]), amortizing by splitting smaller subsets from larger ones, etc. The key lies in the definition of a new, yet deceptively simple, intermediate graph G * , which is maintained by known polylogarithmic data structures for dynamic connectivity under edge updates [17,20,34]. Except for these known connectivity structures, the description is entirely self-contained. Proof. We divide the update sequence into phases, each consisting of q := m/∆ updates. The active vertices are partitioned into two sets P and Q, where P undergoes only deletions and Q undergoes both insertions and deletions. Each vertex insertion is done to Q. At the end of each phase, we move the elements of Q to P and reset Q to the empty set. This way, |Q| is kept at most q at all times.
Call a connected component in (the subgraph induced by) P high if the sum of the degrees of its vertices exceeds ∆, and low otherwise. Clearly, there are at most O(m/∆) high components.
The data structure.
• We store the components of P in a data structure for decremental (deletion-only) connectivity that supports edge deletions in polylogarithmic amortized time.
• We maintain a bipartite multigraph Γ between V and the components γ in P : for each uv ∈ E where v lies in component γ, we create a copy of an edge uγ ∈ Γ.
• For each vertex pair u,v, we maintain the value C[u, v] defined as the number of low components in P that are adjacent to both u and v in Γ. (Actually, only O(m∆) entries of C[·, ·] are nonzero and need to be stored.)
• We define a graph G * whose vertices are the vertices of Q and components of P :
(a) For each u, v ∈ Q, if C[u, v] > 0, then create an edge uv ∈ G * .
(b) For each vertex u ∈ Q and high component γ in P, if uγ ∈ Γ, then create an edge uγ ∈ G * .
(c) For each u, v ∈ Q, if uv ∈ E, then create an edge uv ∈ G * .
We maintain G * in another data structure for dynamic connectivity supporting polylogarithmic-time edge updates.
Justification. We claim that two vertices of Q are connected in the subgraph induced by the active vertices in G iff they are connected in G * . The "if" direction is obvious. For the "only if" direction, suppose two vertices u, v ∈ Q are "directly" connected in G by being adjacent to a common component γ in P . If γ is high, then edges of type (b) ensure that u and v are connected in G * . If instead γ is low, then edges of type (a) ensure that u and v are connected in G * . By concatenation, the argument extends to show that any two vertices u, v ∈ Q connected by a path in G are connected in G * .
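To make the construction and the claim above concrete, here is a small self-contained check (added code, not the paper's implementation; everything is rebuilt statically and queried by BFS, whereas the real structure maintains G * dynamically) that builds G * for a random graph and verifies that it preserves connectivity among Q.

```python
from collections import defaultdict, deque
import itertools, random

def components(adj, verts):
    seen, comps = set(), []
    for s in verts:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in verts and v not in seen:
                    seen.add(v); comp.add(v); queue.append(v)
        comps.append(frozenset(comp))
    return comps

def build_gstar(adj, P, Q, delta):
    comps = components(adj, P)
    high = {c for c in comps if sum(len(adj[v]) for v in c) > delta}
    gamma = defaultdict(int)                       # multiplicity of vertex-component edges
    for u in list(adj):
        for v in adj[u]:
            for c in comps:
                if v in c:
                    gamma[(u, c)] += 1
    gstar = defaultdict(set)
    def link(x, y):
        gstar[x].add(y); gstar[y].add(x)
    for u, v in itertools.combinations(Q, 2):
        if any(gamma[(u, c)] and gamma[(v, c)] for c in comps if c not in high):
            link(u, v)                             # type (a): common low component
        if v in adj[u]:
            link(u, v)                             # type (c): direct edge
    for u in Q:
        for c in high:
            if gamma[(u, c)]:
                link(u, c)                         # type (b): adjacent high component
    return gstar

def connected(adj, verts, a, b):
    seen, queue = {a}, deque([a])
    while queue:
        u = queue.popleft()
        if u == b:
            return True
        for v in adj[u]:
            if v in verts and v not in seen:
                seen.add(v); queue.append(v)
    return a == b

random.seed(1)
n = 12
adj = defaultdict(set)
for u, v in itertools.combinations(range(n), 2):
    if random.random() < 0.2:
        adj[u].add(v); adj[v].add(u)
Q, P = set(range(4)), set(range(4, n))
gstar = build_gstar(adj, P, Q, delta=3)
for a, b in itertools.combinations(Q, 2):
    assert connected(adj, P | Q, a, b) == connected(gstar, set(gstar) | Q, a, b)
print("G* preserves connectivity among Q")
```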
Queries. Given two vertices v 1 and v 2 , if both are in Q, we can simply test whether they are connected in G * . If instead v j (j ∈ {1, 2}) is in a high component γ j , then we can replace v j with any vertex of Q adjacent to γ j in G * . If no such vertex exists, then because of type-(b) edges, γ j is an isolated component and we can simply test whether v 1 and v 2 are both in the same component of P .
If on the other hand v j is in a low component γ j , then we can exhaustively search for a vertex in Q adjacent to γ j in Γ, in O(∆) time, and replace v j with such a vertex. Again if no such vertex exists, then γ j is an isolated component and the test is easy. The query cost is O(∆). Deletion of a vertex from a high component γ in P . The component γ is split into a number of subcomponents γ 1 , . . . , γ ℓ with, say, γ 1 being the largest. We can update the multigraph Γ in time O(deg(γ 2 ) + · · · + deg(γ ℓ )) by splitting the smaller subcomponents from the largest subcomponent. Consequently, we need to update O(deg(γ 2 ) + · · · + deg(γ ℓ )) edges of type (b) in G * . Since P undergoes deletions only, a vertex can belong to the smaller subcomponents in at most O(lg n) splits over the entire phase, and so the total cost per phase is O(m), which is absorbed in the preprocessing cost of the phase.
For each low subcomponent γ j , we update the matrix C[·, ·] in O(deg(γ j )∆) time, by examining each edge γ j v ∈ Γ and each of the O(∆) vertices u adjacent to γ j and testing whether γ j u ∈ Γ. Consequently, we need to update O(deg(γ j )∆) edges of type (a) in G * . Since a vertex can change from being in a high component to a low component at most once over the entire phase, the total cost per phase is O(m∆), which is absorbed by the preprocessing cost.
Finale. The overall amortized cost per update operation is
O(∆^2 + m/∆). Set ∆ = m^{1/3}.
Note that edge insertions and deletions in G can be accommodated easily (e.g., see Lemma 2 of the next section).
Dynamic Geometric Connectivity with Sublinear Update Time
In this section, we investigate geometric connectivity problems: maintaining a set S of n objects, under insertions and deletions of objects, so that we can decide whether two query objects are connected in the intersection graph of S. (In particular, we can decide whether two query points are connected in the union of S by finding two objects containing the two points, via range searching, and testing connectedness for these two objects.)
By the biclique-cover technique from [6], the result from the previous section immediately implies a dynamic connectivity method for axis-parallel boxes with O(n 2/3 ) update time and O(n 1/3 ) query time in any fixed dimension.
Unfortunately, this technique is not strong enough to lead to sublinear results for other objects, as we have explained in the introduction. This is because (i) the size of the maintained graph, m = O(S(n) + nT (n)), may be too large and (ii) the number of vertex updates triggered by an object update, O(S(n)/n + T (n)), may be too large.
We can overcome the first obstacle by using a different strategy that rebuilds the graph more often to keep it sparse; this is not obvious and will be described precisely later during the proof of Theorem 5. The second obstacle is even more critical: here, the key is to observe that although each geometric update requires multiple vertex updates, many of these vertex updates involves vertices of low degrees.
A degree-sensitive version of subgraph connectivity
The first ingredient we need is a dynamic subgraph connectivity method that works faster when the degree of the updated vertex is small. Fortunately, we can prove the following lemma, which extends Theorem 1 (if we set ∆ = n 1/3 ). The method follows that of Theorem 1, but with an extra twist: not only do we classify components of P as high or low, but we also classify vertices of Q as high or low. Proof. The data structure is the same as in the proof of Theorem 1, except for one difference: the definition of the graph G * .
Call a vertex high if its degree exceeds m/∆, and low otherwise. Clearly, there are at most O(∆) high vertices.
• We define a graph G * whose vertices are the vertices of Q and components of P : (c) For each u, v ∈ Q, if uv ∈ E, then create an edge uv ∈ G * .
We maintain G * in a data structure for dynamic connectivity with polylogarithmic-time edge updates.
Justification. We claim that two vertices of Q are connected in the subgraph induced by the active vertices in G iff they are connected in G * . The "if" direction is obvious. For the "only if" direction, suppose two vertices u, v ∈ Q are "directly" connected in G by being adjacent to a common component γ in P . If γ is high, then edges of type (b) ensure that u and v are connected in G * . If u and v are both low, then edges of type (b ′ ) ensure that u and v are connected in G * . In the remaining case, at least one of the two vertices, say, u is high, and γ is low; here, edges of type (a ′ ) ensure that u and v are again connected in G * . The claim follows by concatenation.
Queries. Given two vertices v 1 and v 2 , if both are in Q, we can simply test whether they are connected in G * . If instead v j (j ∈ {1, 2}) is in a component γ j , then we can replace v j with any vertex of Q adjacent to γ j in G * . If no such vertex exists, then because of type-(b ′ ) edges, γ j can only be adjacent to high vertices of Q. We can exhaustively search for a high vertex in Q adjacent to γ j in Γ, in O(∆) time, and replace v j with such a vertex. If no such vertex exists, then γ j is an isolated component and we can simply test whether v 1 and v 2 are both in γ j . The cost is O(∆).
Preprocessing per phase. At the beginning of each phase, the cost to preprocess the data structure is O(m∆) as before. We can charge every update operation with an amortized cost of O(m∆/q) = O(∆^2).
Edge updates. We can simulate the insertion of an edge uv by inserting a new low vertex z adjacent to only u and v to Q. Since the degree is 2, the cost is O(1). We can later simulate the deletion of this edge by deleting the vertex z from Q.
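A minimal wrapper illustrating this simulation is sketched below; the method names `insert_vertex`/`delete_vertex` are assumed placeholders for whatever vertex-update interface the Lemma 2 structure exposes, not an API defined in the paper.

```python
import itertools

class EdgeViaVertexUpdates:
    """Wraps a vertex-update-only connectivity structure to support edge updates."""

    def __init__(self, subgraph_conn):
        # subgraph_conn.insert_vertex(v, neighbors) / .delete_vertex(v) are assumed,
        # hypothetical method names for a Lemma 2 style structure.
        self.ds = subgraph_conn
        self._fresh = itertools.count()
        self._helper = {}                         # frozenset({u, v}) -> helper vertex

    def insert_edge(self, u, v):
        z = ("edge-helper", next(self._fresh))
        self._helper[frozenset((u, v))] = z
        self.ds.insert_vertex(z, neighbors=(u, v))   # degree 2, so a cheap low vertex

    def delete_edge(self, u, v):
        self.ds.delete_vertex(self._helper.pop(frozenset((u, v))))
```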
Update of a high vertex
Range searching tools from geometry
Next, we need known range searching techniques. These techniques give linear-space data structures (S(n) = O(n)) that can retrieve all objects intersecting a query object in sublinear time (T (n) = O(n 1−b )) for many types of geometric objects. We assume that our class of geometric objects satisfies the following property for some constant b > 0-this property neatly summarizes all we need to know from geometry. The property is typically proved by applying a suitable "partition theorem" in a recursive manner, thereby forming a so-called "partition tree"; for example, see the work by Matoušek [28] or the survey by Agarwal and Erickson [2]. Each canonical subset corresponds to a node of the partition tree (more precisely, the subset of all objects stored at the leaves underneath the node). Matoušek's results imply that b = 1/d − ε is attainable for simplices or constant-size polyhedra in IR d . (To go from simplex range searching to intersection searching, one uses multi-level partition trees; e.g., see [29].) Further results by Agarwal and Matoušek [3] yield b = 1/(d + 1) − ε for balls in IR d and nontrivial values of b for other families of curved objects (semialgebraic sets of constant degree). The special case of axis-parallel boxes corresponds to b = 1.
The specific bounds in (i) and (ii) may not be too well known, but they follow from the hierarchical way in which canonical subsets are constructed. For example, (ii) follows since the subsets in C_z of size at most n/∆ are contained in O(∆^{1−b}) subsets of size O(n/∆). In fact, (multi-level) partition trees guarantee a stronger inequality, ∑_{C∈C_z} |C|^{1−b} = O(n^{1−b}), from which both (i) and (ii) can be obtained after a moment's thought.
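Since Property 3 itself is only used abstractly here, the following one-dimensional toy (added for illustration; intervals on a line, a two-level range decomposition, and brute-force verification) shows the flavor of such canonical subsets: for every interval z, O(log^2 n) canonical subsets together contain exactly the intervals intersecting z.

```python
import bisect, random

class SegTree:
    """Canonical decomposition of an index range [lo, hi) into O(log n) tree nodes."""
    def __init__(self, n):
        self.n = n
    def cover(self, lo, hi, node_lo=0, node_hi=None):
        if node_hi is None:
            node_hi = self.n
        if node_lo >= node_hi or hi <= node_lo or node_hi <= lo:
            return []
        if lo <= node_lo and node_hi <= hi:
            return [(node_lo, node_hi)]
        mid = (node_lo + node_hi) // 2
        return self.cover(lo, hi, node_lo, mid) + self.cover(lo, hi, mid, node_hi)

class IntervalCanonicalSubsets:
    """Intervals i, z intersect iff left_i <= right_z and right_i >= left_z."""
    def __init__(self, intervals):
        self.intervals = intervals
        self.by_left = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
        self.lefts = [intervals[i][0] for i in self.by_left]
        self.primary = SegTree(len(intervals))
        self.secondary = {}        # primary node -> its members re-sorted by right endpoint
        for lo, hi in self._nodes(0, len(intervals)):
            members = sorted(self.by_left[lo:hi], key=lambda i: intervals[i][1])
            self.secondary[(lo, hi)] = (members, [intervals[i][1] for i in members])

    def _nodes(self, lo, hi):
        yield (lo, hi)
        if hi - lo > 1:
            mid = (lo + hi) // 2
            yield from self._nodes(lo, mid)
            yield from self._nodes(mid, hi)

    def subsets_for(self, z):
        """Canonical subsets whose union is { i : intervals[i] intersects z }."""
        a, b = z
        k = bisect.bisect_right(self.lefts, b)             # intervals with left <= b
        out = []
        for lo, hi in self.primary.cover(0, k):
            members, rights = self.secondary[(lo, hi)]
            j = bisect.bisect_left(rights, a)              # members with right >= a
            for slo, shi in SegTree(len(members)).cover(j, len(members)):
                out.append(tuple(members[slo:shi]))
        return out

random.seed(2)
ivs = [tuple(sorted(random.sample(range(100), 2))) for _ in range(64)]
ds = IntervalCanonicalSubsets(ivs)
for z in ivs:
    subs = ds.subsets_for(z)
    got = set().union(*subs) if subs else set()
    want = {i for i, (a, b) in enumerate(ivs) if a <= z[1] and b >= z[0]}
    assert got == want and len(subs) <= (len(ivs).bit_length() + 1) ** 2
print("canonical decomposition verified for all", len(ivs), "intervals")
```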
As an illustration, we can use the above property to develop a data structure for a special case of dynamic geometric connectivity where insertions are done in "blocks" but arbitrary deletions are to be supported. Although the insertion time is at least linear, the result is good if the block size s is sufficiently large. This subroutine will make up a part of the final solution.
Lemma 4. We can maintain the connected components among a set S of objects in a data structure that supports insertion of a block of s objects in O(n + sn^{1−b}) amortized time (s < n), and deletion of a single object in O(1) amortized time.
Proof. We maintain a multigraph H in a data structure for dynamic connectivity with polylogarithmic edge update time (which explicitly maintains the connected components), where the vertices are the objects of S. This multigraph will obey the invariant that two objects are geometrically connected iff they are connected in S. We do not insist that H has linear size.
Insertion of a block B to S. We first form a collection C of canonical subsets for S ∪ B by Property 3. For each z ∈ B and each C ∈ C z , we assign z to C. For each canonical subset C ∈ C, if C is assigned at least one object of B, then we create new edges in H linking all objects of C and all objects assigned to C in a path. (If this path overlaps with previous paths, we create multiple copies of edges.) The number of edges inserted is thus O(n + |B|n 1−b ).
Justification. The invariant is satisfied since all objects in a canonical subset C intersect all objects assigned to C, and are thus all connected if there is at least one object assigned to C.
Deletion of an object z from S. For each canonical subset C containing or assigned the object z, we need to delete at most 2 edges and insert 1 edge to maintain the path. As soon as the path contains no object assigned to C, we delete all the edges in the path. Since the length of the path can only decrease over the entire update sequence, the total number of such edge updates is proportional to the initial length of the path. We can charge the cost to edge insertions.
Putting it together
We are finally ready to present our sublinear result for dynamic geometric connectivity. We again need the idea of rebuilding periodically, and splitting smaller sets from larger ones. In addition to the graph H (of superlinear size) from Lemma 4, which undergoes insertions only in blocks, the key lies in the definition of another subtly crafted intermediate graph G (of linear size), maintained this time by the subgraph connectivity structure of Lemma 2. The definition of this graph involves multiple types of vertices and edges. The details of the analysis and the setting of parameters get more interesting.
Theorem 5. Assume 0 < b ≤ 1/2. We can maintain a collection of objects in amortized update time O(n^{1−b^2/(2+b)}) and answer connectivity queries in time O(n^{b/(2+b)}).
Proof. We divide the update sequence into phases, each consisting of y := n b updates. The current objects are partitioned into two sets X and Y , where X undergoes only deletions and Y undergoes both insertions and deletions. Each insertion is done to Y . At the end of each phase, we move the elements of Y to X and reset Y to the empty set. This way, |Y | is kept at most y at all times.
At the beginning of each phase, we form a collection C of canonical subsets for X by Property 3.
The data structure.
• We maintain the components of X in the data structure from Lemma 4.
• We maintain the following graph G for dynamic subgraph connectivity, where the vertices are objects of X ∪ Y , components of X, and the canonical subsets of the current phase:
(a) Create an edge in G between each component of X and each of its objects.
(b) Create an edge in G between each canonical subset and each of its objects in X.
(c) Create an edge in G between each object z ∈ Y and each canonical subset C ∈ C z . Here, we assign z to C.
(d) Create an edge in G between every two intersecting objects in Y .
(e) We make a canonical subset active in G iff it is assigned at least one object in Y . Vertices that are objects or components are always active. Justification. We claim that two objects are geometrically connected in X ∪ Y iff they are connected in the subgraph induced by the active vertices in the graph G. The "only if" direction is obvious. For the "if" direction, we note that all objects in an active canonical subset C intersect all objects assigned to C and are thus all connected.
Queries. We answer a query by querying in the graph G. The cost is O(∆).
Preprocessing per phase. Before a new phase begins, we need to update the components in X as we move all elements of Y to X (a block insertion). By Lemma 4, the cost is O(n + yn^{1−b}). Accounting also for the vertex updates performed in G, the amortized cost per update works out to O(n^{1−b}∆^2 + ∆^{1−b} · n/∆ + n/∆^b) = O(n^{1−b}∆^2 + n/∆^b).
Deletion of an object z in X. We first update the components of X. By Lemma 4, the amortized cost is O(1). We can now update the edges of type (a) in G. The total number of such edge updates per phase is O(n lg n), by always splitting smaller components from larger ones. The amortized number of edge updates is thus O(n/y). The amortized cost is O((n/y)∆ 2 ) = O(n 1−b ∆ 2 ).
Finale. The overall amortized cost per update operation is
O(n^{1−b}∆^2 + n/∆^b). Set ∆ = n^{b/(2+b)}.
Note that we can still prove the theorem for b > 1/2, by handling the O(y 2 ) intersections among Y (the type (d) edges) in a less naive way. However, we are not aware of any specific applications with b ∈ (1/2, 1).
Offline Dynamic Geometric Connectivity
For the special case of offline updates, we can improve the result of Section 4 for small values of b by a different method using rectangular matrix multiplication.
Let M [n 1 , n 2 , n 3 ] represent the cost of multiplying a Boolean n 1 × n 2 matrix A with a Boolean n 2 × n 3 matrix B. Let M [n 1 , n 2 , n 3 | m 1 , m 2 ] represent the same cost under the knowledge that the number of 1's in A is m 1 and the number of 1's in B is m 2 . We can reinterpret this task in graph terms: Suppose we are given a tripartite graph with vertex classes V 1 , V 2 , V 3 of sizes n 1 , n 2 , n 3 respectively where there are m 1 edges between V 1 and V 2 and m 2 edges between V 2 and V 3 . Then M [n 1 , n 2 , n 3 | m 1 , m 2 ] represent the cost of deciding, for each u ∈ V 1 and v ∈ V 3 , whether u and v are adjacent to a common vertex in V 2 .
An offline degree-sensitive version of subgraph connectivity
We begin with an offline variant of Lemma 2.
Lemma 6. Let 1 ≤ ∆ ≤ q ≤ m.
Proof. We divide the update sequence into phases, each consisting of q low-vertex updates. The active vertices are partitioned into two sets P and Q, with Q ⊆ Q 0, where P and Q 0 are static and Q undergoes both insertions and deletions. Each vertex insertion/deletion is done to Q. At the end of each phase, we reset Q 0 to hold all O(∆) high vertices plus the low vertices involved in the updates of the next phase, reset P to hold all active vertices not in Q 0, and reset Q to hold all active vertices in Q 0. Clearly, |Q| ≤ |Q 0| = O(q).
The data structure is the same as the one in the proof of Lemma 2, with one key difference: we only maintain the value C[u, v] when u is a high vertex in Q 0 and v is a (high or low) vertex in Q 0 . Moreover, we do not need to distinguish between high and low components, i.e., all components are considered low.
During preprocessing of each phase, we can now compute C[u, v] for all such pairs. Deletions in P do not occur now.
Sparse and dense rectangular matrix multiplication
Sparse matrix multiplication can be reduced to multiplying smaller dense matrices, by using a "high-low" trick [5]. Fact 7(i) below can be viewed as a variant of [6, Lemma 3.1] and a result of Yuster and Zwick [38]; incidentally, this fact is sufficiently powerful to yield a simple(r) proof of Yuster and Zwick's sparse matrix multiplication result, when combined with known bounds on dense rectangular matrix multiplication. Fact 7(ii) below states one known bound on dense rectangular matrix multiplication which we will use.
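A concrete (and deliberately simple) rendition of this high-low split is sketched below, with numpy's integer matrix product standing in for a fast dense rectangular multiplication: middle indices whose column of A has more than t ones go into one dense product, and each remaining 1 of B is expanded directly in O(t) time.

```python
import numpy as np

def boolean_product_highlow(A, B, t):
    A, B = A.astype(bool), B.astype(bool)
    col_ones = A.sum(axis=0)
    high = np.flatnonzero(col_ones > t)      # at most (#ones of A)/t middle indices
    low = np.flatnonzero(col_ones <= t)
    C = np.zeros((A.shape[0], B.shape[1]), dtype=bool)
    if high.size:                            # dense rectangular product on the high part
        C |= (A[:, high].astype(np.int64) @ B[high, :].astype(np.int64)) > 0
    for w in low:                            # O(t) work per 1 of B in a low row
        rows = np.flatnonzero(A[:, w])       # at most t rows
        cols = np.flatnonzero(B[w, :])
        if rows.size and cols.size:
            C[np.ix_(rows, cols)] = True
    return C

rng = np.random.default_rng(3)
A = rng.random((8, 200)) < 0.05
B = rng.random((200, 10)) < 0.05
assert (boolean_product_highlow(A, B, t=3) ==
        ((A.astype(int) @ B.astype(int)) > 0)).all()
print("high-low product matches the direct Boolean product")
```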
Putting it together
We now present our offline result for dynamic geometric connectivity using Lemma 6. Although we also use Property 3, the design of the key graph G is quite different from the one in the proof of Theorem 5. For instance, the size of the graph is larger (and no longer O(n)), but the number of edges incident to high vertices remains linear; furthermore, each object update triggers only a constant number of vertex updates in the graph. All the details come together in the analysis to lead to some intriguing choices of parameters. Proof. We divide the update sequence into phases, each consisting of q updates, where q is a parameter satisfying ∆ ≤ q ≤ n/∆^{1−b}. The current objects are partitioned into two sets X and Y, with Y ⊆ Y 0 where X and Y 0 are static and Y undergoes both insertions and deletions. Each insertion/deletion is done to Y. At the end of each phase, we reset Y 0 to hold all objects involved in the updates of the next phase, X to hold all current objects not in Y 0, and Y to hold all current objects in Y 0. Clearly, |Y| ≤ |Y 0| = O(q). At the beginning of each phase, we form a collection C of canonical subsets for X ∪ Y 0 by Property 3.
The data structure.
• We maintain the components of X in the data structure from Lemma 4.
• We maintain the following graph G for offline dynamic subgraph connectivity, where the vertices are objects of X ∪ Y 0 , components of X, and canonical subsets of size exceeding n/∆:
(a) Create an edge in G between each component of X and each of its objects.
(b) Create an edge in G between each canonical subset C of size exceeding n/∆ and each of its objects in X ∪ Y .
(c) Create an edge in G between each object z ∈ Y 0 and each canonical subset C ∈ C z of size exceeding n/∆. Here, we assign z to C.
(d) Create an edge in G between each object z ∈ Y 0 and each object in the union of the canonical subsets in C z of size at most n/∆.
(e) We make a canonical subset active in G iff it is assigned at least one object in Y . We make the vertices in X ∪Y active, and all components active. The high vertices are precisely the canonical subsets of size exceeding n/∆; there are O(∆) such vertices. Update of an object z in Y . We need to make a single vertex update z in G, which has degree O(n/∆ b ) by Property 3(ii). Furthermore, we may have to change the status of as many as O(∆ 1−b ) high vertices by Property 3(i). According to Lemma 8, the cost of these vertex updates is O(M [∆, n, q | n, m]/q + n/∆ b + ∆ 1−b q).
Finale. By Fact 7, assuming that ∆ ≤ q^α and q ≤ n/t, we have M[∆, n, q | n, m] = O(M[∆, n/t, q] + mt) = O(nq/t + nqt/∆^b). Choosing t = ∆^{b/2} gives O(nq/∆^{b/2}). The overall amortized cost per update operation is thus O(n/∆^{b/2} + ∆^{1−b}q + n/q + n^{1−b}). Set ∆ = q^α and q = n^{1/(1+α−bα/2)} and the result follows. (Note that indeed ∆ ≤ q ≤ n/∆^{1−b} and q ≤ n/t for these choices of parameters.) Compared to Theorem 5, the dependence on b of the exponent in the update bound is only 1 − Θ(b) rather than 1 − Θ(b^2). The bound is better, for example, for b ≤ 1/4.
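The parameter juggling above is easy to mis-transcribe, so the following numeric check (added; it only manipulates exponents, with α = 0.294) confirms that the stated choices satisfy the constraints and that the dominant terms balance at the claimed bound.

```python
a = 0.294                                    # rectangular-FMM exponent alpha

def offline_update_exponent(b):
    q = 1.0 / (1 + a - b * a / 2)            # exponent of q as a power of n
    d = a * q                                # exponent of Delta = q^alpha
    t = b * d / 2                            # exponent of t = Delta^{b/2}
    # constraints: Delta <= q <= n / Delta^{1-b} and q <= n / t
    assert d <= q <= 1 - (1 - b) * d + 1e-12 and q <= 1 - t + 1e-12
    terms = [1 - b * d / 2,                  # n / Delta^{b/2}
             (1 - b) * d + q,                # Delta^{1-b} * q
             1 - q,                          # n / q
             1 - b]                          # n^{1-b}
    return max(terms)

for b in (0.1, 0.25, 0.5):
    print(f"b={b}: offline n^{offline_update_exponent(b):.4f}"
          f" vs online n^{1 - b * b / (2 + b):.4f}")
```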
Open Problems
Our work opens up many interesting directions for further research. For subgraph connectivity, an obvious question is whether the O(m 2/3 ) vertex-update bound can be improved (without or with FMM); as we have mentioned, improvements beyond √ m without FMM are not possible without a breakthrough on the triangle-finding problem. An intriguing question is whether for dense graphs we can achieve update time sublinear in n, i.e., O(n 1−ε ) (or possibly even sublinear in the degree). For geometric connectivity, it would be desirable to determine the best update bounds for specific shapes such as line segments and disks in two dimensions. Also, directed settings of geometric connectivity arise in applications and are worth studying; for example, when sensors' transmission ranges are balls of different radii or wedges, a sensor may lie in another sensor's range without the reverse being true.
For both subgraph and geometric connectivity, we can reduce the query time at the expense of increasing the update time, but we do not know whether constant or polylogarithmic query time is possible with sublinear update time in general (see [1] for a result on the 2-dimensional orthogonal special case). Currently, we do not know how to obtain our update bounds with linear space (e.g., Theorem 1 requires O(m 4/3 ) space), nor do we know how to get good worst-case update bounds (since the known polylogarithmic results for connectivity under edge updates are all amortized). Also, the queries we have considered are about connectivity between two vertices/objects. Can nontrivial results be obtained for richer queries such as counting the number of connected components (see [1] on the 2-dimensional orthogonal case), or perhaps shortest paths or minimum cut?
| 6,473 |
0808.1128
|
2953283045
|
Dynamic connectivity is a well-studied problem, but so far the most compelling progress has been confined to the edge-update model: maintain an understanding of connectivity in an undirected graph, subject to edge insertions and deletions. In this paper, we study two more challenging, yet equally fundamental problems. Subgraph connectivity asks to maintain an understanding of connectivity under vertex updates: updates can turn vertices on and off, and queries refer to the subgraph induced by "on" vertices. (For instance, this is closer to applications in networks of routers, where node faults may occur.) We describe a data structure supporting vertex updates in O (m^ 2 3 ) amortized time, where m denotes the number of edges in the graph. This greatly improves over the previous result [Chan, STOC'02], which required fast matrix multiplication and had an update time of O(m^0.94). The new data structure is also simpler. Geometric connectivity asks to maintain a dynamic set of n geometric objects, and query connectivity in their intersection graph. (For instance, the intersection graph of balls describes connectivity in a network of sensors with bounded transmission radius.) Previously, nontrivial fully dynamic results were known only for special cases like axis-parallel line segments and rectangles. We provide similarly improved update times, O (n^ 2 3 ), for these special cases. Moreover, we show how to obtain sublinear update bounds for virtually all families of geometric objects which allow sublinear-time range queries, such as arbitrary 2D line segments, d-dimensional simplices, and d-dimensional balls.
|
For more difficult dynamic graph problems, the goal is typically changed from getting polylogarithmic bounds to finding better exponents in polynomial bounds; for example, see all the papers on directed reachability @cite_1 @cite_26 @cite_35 @cite_22 . Evidence suggests that dynamic subgraph connectivity fits this category. It was observed @cite_32 that finding triangles (3-cycles) or quadrilaterals (4-cycles) in directed graphs can be reduced to @math vertex updates. Thus, an update bound better than @math appears unlikely without FMM, since the best running time for finding triangles without FMM is @math , dating back to STOC'77 @cite_28 . Even with FMM, known results are only slightly better: finding triangles and quadrilaterals takes time @math @cite_3 and @math @cite_6 respectively. Thus, current knowledge prevents an update bound better than @math .
|
{
"abstract": [
"",
"",
"",
"Finding minimum circuits in graphs and digraphs is discussed. An almost minimum circuit is a circuit which may have only one edge more than the minimum. To find an almost minimum circuit an @math algorithm is presented. A direct algorithm for finding a minimum circuit has an @math behavior. It is refined to yield an @math average time algorithm. An alternative method is to reduce the problem of finding a minimum circuit to that of finding a triangle in an auxiliary graph. Three methods for finding a triangle in a graph are given. The first has an @math worst case bound ( @math for planar graphs); the second takes @math time on the average; the third has an @math worst case behavior. For digraphs, results of Bloniarz, Fisher and Meyer are used to obtain an algorithm with @math average behavior.",
"This paper presents an efficient fully dynamic graph algorithm for maintaining the transitive closure of a directed graph. The algorithm updates the adjacency matrix of the transitive closure with each update to the graph; hence, each reachability query of the form \"Is there a directed path from i to j?\" can be answered in O(1) time. The algorithm is randomized and has a one-sided error; it is correct when answering yes, but has O(1 nc) probability of error when answering no, for any constant c. In acyclic graphs, worst case update time is O(n2). In general graphs, the update time is O(n2.26). The space complexity of the algorithm is O(n2).",
"(MATH) Inspired by dynamic connectivity applications in computational geometry, we consider a problem we call dynamic subgraph connectivity: design a data structure for an undirected graph @math and a subset of vertices @math , to support insertions and deletions in @math and connectivity queries (are two vertices connected @?) in the subgraph induced by @math . We develop the first sublinear, fully dynamic method for this problem for general sparse graphs, using an elegant combination of several simple ideas. Our method requires linear space, @math amortized update time, and @math query time, where @math is the matrix multiplication exponent and @math hides polylogarithmic factors.",
"We present an assortment of methods for finding and counting simple cycles of a given length in directed and undirected graphs. Most of the bounds obtained depend solely on the number of edges in the graph in question, and not on the number of vertices. The bounds obtained improve upon various previously known results.",
"We present several new algorithms for detecting short fixed length cycles in digraphs. The new algorithms utilize fast rectangular matrix multiplication algorithms together with a dynamic programming approach similar to the one used in the solution of the classical chain matrix product problem. The new algorithms are instantiations of a generic algorithm that we present for finding a directed C k , i.e., a directed cycle of length k, in a digraph, for any fixed k ≥ 3. This algorithm partitions the prospective C k 's in the input digraph G = (V,E) into O(logk V) classes, according to the degrees of their vertices. For each cycle class we determine, in O(Eck log V) time, whether G contains a C k from that class, where c k = c k (ω) is a constant that depends only on !, the exponent of square matrix multiplication. The search for cycles from a given class is guided by the solution of a small dynamic programming problem. The total running time of the obtained deterministic algorithm is therefore O(Eck logk+1 V).For C 3 , we get c 3 = 2ω (ω + 1) 4 we get c 4 = (4ω - 1) (2ω + 1) 5 we get c 5 = 3ω (ω + 2) k for k ≥ 6 is a difficult task. We conjecture that c k = (k + 1)ω (2ω + k - 1), for every odd k. The values of c k for even k ≥ 6 seem to exhibit a much more complicated dependence on ω."
],
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_22",
"@cite_28",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_6"
],
"mid": [
"",
"",
"",
"1991858502",
"2044040096",
"2011182146",
"2015960500",
"2093825995"
]
}
|
Dynamic Connectivity: Connecting to Networks and Geometry
|
Dynamic graphs inspire a natural, challenging, and well-studied class of algorithmic problems. A rich body of the STOC/FOCS literature has considered problems ranging from the basic question of understanding connectivity in a dynamic graph [13,17,34,6,31], to maintaining the minimum spanning tree [20], the min-cut [36], shortest paths [9,35], reachability in directed graphs [10,25,26,32,33], etc.
But what exactly makes a graph "dynamic"? Computer networks have long provided the common motivation. The dynamic nature of such networks is captured by two basic types of updates to the graph:
• edge updates: adding or removing an edge. These correspond to setting up a new cable connection, accidental cable cuts, etc.
• vertex updates: turning a vertex on and off. Vertices (routers) can temporarily become "off" after events such as a misconfiguration, a software crash and reboot, etc. Problems involving only vertex updates have been called dynamic subgraph problems, since queries refer to the subgraph induced by vertices which are on.
Loosely speaking, dynamic graph problems fall into two categories. For "hard" problems, such as shortest paths and directed reachability, the best known running times are at least linear in the number of vertices. These high running times obscure the difference between vertex and edge updates, and identical bounds are often stated [9,32,33] for both operations. For the remainder of the problems, sublinear running times are known for edge updates, but sublinear bounds for vertex updates seem much harder to get. For instance, even iterating through all edges incident to a vertex may take linear time in the worst case. That vertex updates are slow is unfortunate. Referring to the computer-network metaphor, vertex updates are cheap "soft" events (misconfiguration or reboot), which occur more frequently than the costly physical events (cable cut) that cause an edge update.
Subgraph connectivity. As mentioned, most previous sublinear dynamic graph algorithms address edge updates but not the equally fundamental vertex updates. One notable exception, however, was a result of Chan [6] from STOC'02 on the basic connectivity problem for general sparse (undirected) graphs. This algorithm can support vertex updates in time O(m^{0.94}) and decide whether two query vertices are connected in time O(m^{1/3}).
Though an encouraging start, the nature of this result makes it appear more like a half breakthrough. For one, the update time is only slightly sublinear. Worse yet, Chan's algorithm requires fast matrix multiplication (FMM). The O(m^{0.94}) update time follows from the theoretical FMM algorithm of Coppersmith and Winograd [8]. If Strassen's algorithm is used instead, the update time becomes O(m^{0.984}). Even if optimistically FMM could be done in quadratic time, the update time would only improve to O(m^{0.89}). FMM has been used before in various dynamic graph algorithms (e.g., [10,26]), and the paper [6] noted specific connections to some matrix-multiplication-related problems (see Section 2). All this naturally led one to suspect, as conjectured in the paper, that FMM might be essential to our problem. Thus, the result we are about to describe may come as a bit of a surprise. (Throughout, we use m and n to denote the number of edges and vertices of the graph respectively; Õ(·) ignores polylogarithmic factors and O*(·) hides n^ε factors for an arbitrarily small constant ε > 0. Update bounds in this paper are, by default, amortized.) Our new data structure supports vertex updates in Õ(m^{2/3}) amortized time and connectivity queries in Õ(m^{1/3}) time. First of all, this is a significant quantitative improvement (to anyone who regards an m^{0.27} factor as substantial), and it represents the first convincingly sublinear running time. More importantly, it is a significant qualitative improvement, as our bound does not require FMM. Our algorithm involves a number of ideas, some of which can be traced back to earlier algorithms, but we use known edge-updatable connectivity structures to maintain a more cleverly designed intermediate graph. The end product is not straightforward at all, but still turns out to be simpler than the previous method [6] and has a compact, two-page description (we regard this as another plus, not a drawback).
Dynamic Geometry
We next turn to another important class of dynamic connectivity problems-those arising from geometry.
Geometric connectivity. Consider the following question, illustrated in Figure 1(a). Maintain a set of line segments in the plane, under insertions and deletions, to answer queries of the form: "given two points a and b, is there a path between a and b along the segments?" This simple-sounding problem turns out to be a challenge. On one hand, understanding any local geometry does not seem to help, because the connecting path can be long and windy. On the other hand, the graph-theoretic understanding is based on the intersection graph, which is too expensive to maintain. A newly inserted (or deleted) segment can intersect a large number of objects in the set, changing the intersection graph dramatically.
Abstracting away, we can consider a broad class of problems of the form: maintain a set of n geometric objects, and answer connectivity queries in their intersection graph. Such graphs arise, for instance, in VLSI applications in the case of orthogonal segments, or gear transmission systems, in the case of touching disks; see Figure 1(b). A more compelling application can be found in sensor networks: if r is the radius within which two sensors can communicate, the communication network is the intersection graph of balls of radius r/2 centered at the sensors. While our focus is on theoretical understanding rather than the practicality of specific applications, these examples still indicate the natural appeal of geometric connectivity problems.
All these problems have a trivial O(n) solution, by maintaining the intersection graph through edge updates. A systematic approach to beating the linear time bound was proposed in Chan's paper as well [6], by drawing a connection to subgraph connectivity. Assume that a particular object type allows data structures for intersection range searching with space S(n) and query time T(n). It was shown that geometric connectivity can essentially be solved by maintaining a graph of size m = O(S(n) + nT(n)) and running O(S(n)/n + T(n)) vertex updates for every object insertion or deletion. Using the previous subgraph connectivity result [6], an update in the geometric connectivity problem took time O([S(n)/n + T(n)] · [S(n) + nT(n)]^{0.94}). Using our improved result, the bound becomes O([S(n)/n + T(n)] · [S(n) + nT(n)]^{2/3}).
The prime implication in the previous paper is that connectivity of axis-parallel boxes in any constant dimension (in particular, orthogonal line segments in the plane) reduces to subgraph connectivity, with a polylogarithmic cost. Indeed, for such boxes range trees yield S(n) = n · lg O(d) n and T (n) = lg O(d) n. Unfortunately, while nontrivial range searching results are known for many types of objects, very efficient range searching is hard to come by. Consider our main motivating examples:
• for arbitrary (non-orthogonal) line segments in IR^2, one can achieve T(n) = O*(√n) and S(n) = O*(n), or T(n) = O*(n^{1/3}) and S(n) = O*(n^{4/3}) [28].
• for disks in IR^2, one can achieve T(n) = O*(n^{2/3}) and S(n) = O*(n), or T(n) = O*(n^{1/2}) and S(n) = O*(n^{3/2}) [3].
Even with our improved vertex-update time, the [S(n)/n + T (n)] · [S(n) + nT (n)] 2/3 bound is too weak to beat the trivial linear update time. For arbitrary line segments in IR 2 , one would need to improve the vertex-update time to m 1/2−ε , which appears unlikely without FMM (see Section 2). The line segment case was in fact mentioned as a major open problem, implicitly in [6] and explicitly in [1]. The situation gets worse for objects of higher complexity or in higher dimensions.
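To make the gap concrete, here is a quick arithmetic check (an illustration, not taken from the paper): writing S(n) = n^s and T(n) = n^t and ignoring polylogarithmic and n^ε factors, we plug the exponents into the reduction's update cost [S(n)/n + T(n)] · [S(n) + nT(n)]^e. In every case the resulting exponent of n exceeds 1, i.e., the bound is worse than the trivial linear one.

```python
# A quick arithmetic check (illustrative, not from the paper): write
# S(n) = n^s and T(n) = n^t and plug them into the reduction's update cost
#   [S(n)/n + T(n)] * [S(n) + n*T(n)]^e,
# for the old vertex-update exponent e = 0.94 and the improved e = 2/3.

def update_exponent(s, t, e):
    per_object_vertex_updates = max(s - 1, t)   # exponent of S(n)/n + T(n)
    graph_size = max(s, 1 + t)                  # exponent of S(n) + n*T(n)
    return per_object_vertex_updates + e * graph_size

cases = {
    "segments, linear space   (s=1,   t=1/2)": (1.0, 1/2),
    "segments, n^{4/3} space  (s=4/3, t=1/3)": (4/3, 1/3),
    "disks, linear space      (s=1,   t=2/3)": (1.0, 2/3),
}
for name, (s, t) in cases.items():
    exps = ", ".join(f"e={e:.2f}: n^{update_exponent(s, t, e):.2f}" for e in (0.94, 2/3))
    print(f"{name}: {exps}")   # every exponent exceeds 1, i.e. worse than trivial
```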
Our results. In this paper, we are finally able to break the above barrier for dynamic geometric connectivity. At a high level, we show that range searching with any sublinear query time is enough to obtain sublinear update time in geometric connectivity. In particular, we get the first nontrivial update times for arbitrary line segments in the plane, disks of arbitrary radii, and simplices and balls in any fixed dimension. While the previous reduction [6] involves merely a straightforward usage of "biclique covers", our result here requires much more work. For starters, we need to devise a "degree-sensitive" version of our improved subgraph connectivity algorithm (which is of interest in itself); we then use this and known connectivity structures to maintain not one but two carefully designed intermediate graphs. Known range searching techniques [2] from computational geometry almost always provide sublinear query time. For instance, Matoušek [28] showed that b ≈ 1/2 is attainable for line segments, triangles, and any constant-size polygons in IR 2 ; more generally, b ≈ 1/d for simplices or constant-size polyhedra in IR d . Further results by Agarwal and Matoušek [3] yield b ≈ 1/(d + 1) for balls in IR d . Most generally, b > 0 is possible for any class of objects defined by semialgebraic sets of constant description complexity.
More results. Our general sublinear results undoubtedly invite further research into finding better bounds for specific classes of objects. In general, the complexity of range queries provides a natural barrier for the update time, since upon inserting an object we at least need to determine if it intersects any object already in the set. Essentially, our result has a quadratic loss compared to range queries: if T(n) = n^{1−b}, the update time is n^{1−Θ(b^2)}.
In Section 5, we make a positive step towards closing this quadratic gap: we show that if the updates are given offline (i.e. are known in advance), the amortized update time can be made n^{1−Θ(b)}. We need FMM this time, but the usage of FMM here is more intricate (and interesting) than typical. For one, it is crucial to use fast rectangular matrix multiplication. Along the way, we even find ourselves rederiving Yuster and Zwick's sparse matrix multiplication result [38] in a more general form. The juggling of parameters is also more unusual, as one can suspect from looking at our actual update bound, which is O(n^{(1+α−bα)/(1+α−bα/2)}), where α = 0.294 is an exponent associated with rectangular FMM.
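For a rough sense of how the two regimes compare, the following snippet (illustrative only) evaluates the online exponent 1 − b^2/(2+b) from Theorem 5 below and the offline exponent (1+α−bα)/(1+α−bα/2), with α = 0.294, for a few values of b.

```python
# Illustrative comparison of the two update exponents discussed here: the
# online bound n^{1 - b^2/(2+b)} of Theorem 5 below, and the offline bound
# n^{(1+a-b*a)/(1+a-b*a/2)} with a = 0.294 the rectangular-FMM exponent.

ALPHA = 0.294

def online_exponent(b):
    return 1 - b * b / (2 + b)

def offline_exponent(b, a=ALPHA):
    return (1 + a - b * a) / (1 + a - b * a / 2)

for b in (1/4, 1/3, 1/2):
    print(f"b = {b:.3f}:  online n^{online_exponent(b):.3f},"
          f"  offline n^{offline_exponent(b):.3f}")
# The offline bound wins for small b (roughly b <= 1/4), the online one for larger b.
```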
Dynamic Subgraph Connectivity with O(m^{2/3}) Update Time
In this section, we present our new method for the dynamic subgraph connectivity problem: maintaining a subset S of vertices in a graph G, under vertex insertions and deletions in S, so that we can decide whether any two query vertices are connected in the subgraph induced by S. We will call the vertices in S the active vertices. For now, we assume that the graph G itself is static.
The complete description of the new method is given in the proof of the following theorem. It is "short and sweet", especially if the reader compares with Chan's paper [6]. The previous method requires several stages of development, addressing the offline and semi-online special cases, along with the use of FMM; we completely bypass these intermediate stages, and FMM, here. Embedded below, one can find a number of different ideas (some also used in [6]): rebuilding periodically after a certain number of updates, distinguishing "high-degree" features from "low-degree" features (e.g., see [5,37]), amortizing by splitting smaller subsets from larger ones, etc. The key lies in the definition of a new, yet deceptively simple, intermediate graph G * , which is maintained by known polylogarithmic data structures for dynamic connectivity under edge updates [17,20,34]. Except for these known connectivity structures, the description is entirely self-contained. Theorem 1. Given a graph with m edges, there is a data structure for dynamic subgraph connectivity that supports vertex updates in Õ(m^{2/3}) amortized time and connectivity queries in Õ(m^{1/3}) time. Proof. We divide the update sequence into phases, each consisting of q := m/∆ updates. The active vertices are partitioned into two sets P and Q, where P undergoes only deletions and Q undergoes both insertions and deletions. Each vertex insertion is done to Q. At the end of each phase, we move the elements of Q to P and reset Q to the empty set. This way, |Q| is kept at most q at all times.
Call a connected component in (the subgraph induced by) P high if the sum of the degrees of its vertices exceeds ∆, and low otherwise. Clearly, there are at most O(m/∆) high components.
The data structure.
• We store the components of P in a data structure for decremental (deletion-only) connectivity that supports edge deletions in polylogarithmic amortized time.
• We maintain a bipartite multigraph Γ between V and the components γ in P : for each uv ∈ E where v lies in component γ, we create a copy of an edge uγ ∈ Γ.
• For each vertex pair u,v, we maintain the value C[u, v] defined as the number of low components in P that are adjacent to both u and v in Γ. (Actually, only O(m∆) entries of C[·, ·] are nonzero and need to be stored.)
• We define a graph G * whose vertices are the vertices of Q and components of P :
(a) For each u, v ∈ Q, if C[u, v] > 0, then create an edge uv ∈ G * .
(b) For each vertex u ∈ Q and high component γ in P , if uγ ∈ Γ, then create an edge uγ ∈ G * .
(c) For each u, v ∈ Q, if uv ∈ E, then create an edge uv ∈ G * .
We maintain G * in another data structure for dynamic connectivity supporting polylogarithmic-time edge updates.
Justification. We claim that two vertices of Q are connected in the subgraph induced by the active vertices in G iff they are connected in G * . The "if" direction is obvious. For the "only if" direction, suppose two vertices u, v ∈ Q are "directly" connected in G by being adjacent to a common component γ in P . If γ is high, then edges of type (b) ensure that u and v are connected in G * . If instead γ is low, then edges of type (a) ensure that u and v are connected in G * . By concatenation, the argument extends to show that any two vertices u, v ∈ Q connected by a path in G are connected in G * .
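The construction can be illustrated with a minimal static sketch in Python. This is not the paper's amortized data structure: a plain union-find stands in for the polylogarithmic dynamic-connectivity structures, everything is built from scratch, and only queries between vertices of Q are handled. It only shows how Γ, the counts C[u, v], and the three edge types of G * fit together.

```python
# A static sketch of the G* construction (illustrative; not the paper's
# amortized data structure).  A plain union-find stands in for the
# polylogarithmic dynamic-connectivity structures, and only queries
# between vertices of Q are handled.

from collections import defaultdict
from itertools import combinations

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)
    def connected(self, x, y):
        return self.find(x) == self.find(y)

def build_gstar(edges, Q, p_components, delta):
    """p_components: list of vertex sets partitioning P; delta: threshold."""
    comp_of = {v: i for i, comp in enumerate(p_components) for v in comp}
    gamma = defaultdict(set)        # vertex -> adjacent P-components
    comp_degree = defaultdict(int)  # sum of degrees of a component's vertices
    for u, v in edges:
        if v in comp_of:
            gamma[u].add(comp_of[v]); comp_degree[comp_of[v]] += 1
        if u in comp_of:
            gamma[v].add(comp_of[u]); comp_degree[comp_of[u]] += 1
    high = {c for c, d in comp_degree.items() if d > delta}
    # C[u, v]: number of low components adjacent (in Gamma) to both u and v.
    C = defaultdict(int)
    adjacent_Q = defaultdict(set)
    for u in Q:
        for c in gamma[u]:
            adjacent_Q[c].add(u)
    for c, us in adjacent_Q.items():
        if c not in high:
            for u, v in combinations(sorted(us), 2):
                C[u, v] += 1
    uf = UnionFind()
    for (u, v), cnt in C.items():           # type (a) edges
        if cnt > 0:
            uf.union(('Q', u), ('Q', v))
    for u in Q:                             # type (b) edges
        for c in gamma[u]:
            if c in high:
                uf.union(('Q', u), ('P-comp', c))
    for u, v in edges:                      # type (c) edges
        if u in Q and v in Q:
            uf.union(('Q', u), ('Q', v))
    return uf

# Tiny example: P-vertices {1, 2, 3} form one (low) component, Q = {4, 5}.
uf = build_gstar(edges=[(1, 2), (2, 3), (4, 1), (5, 3)],
                 Q={4, 5}, p_components=[{1, 2, 3}], delta=10)
print(uf.connected(('Q', 4), ('Q', 5)))     # True, via a type (a) edge
```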
Queries. Given two vertices v 1 and v 2 , if both are in Q, we can simply test whether they are connected in G * . If instead v j (j ∈ {1, 2}) is in a high component γ j , then we can replace v j with any vertex of Q adjacent to γ j in G * . If no such vertex exists, then because of type-(b) edges, γ j is an isolated component and we can simply test whether v 1 and v 2 are both in the same component of P .
If on the other hand v j is in a low component γ j , then we can exhaustively search for a vertex in Q adjacent to γ j in Γ, in O(∆) time, and replace v j with such a vertex. Again if no such vertex exists, then γ j is an isolated component and the test is easy. The query cost is O(∆). Deletion of a vertex from a high component γ in P . The component γ is split into a number of subcomponents γ 1 , . . . , γ ℓ with, say, γ 1 being the largest. We can update the multigraph Γ in time O(deg(γ 2 ) + · · · + deg(γ ℓ )) by splitting the smaller subcomponents from the largest subcomponent. Consequently, we need to update O(deg(γ 2 ) + · · · + deg(γ ℓ )) edges of type (b) in G * . Since P undergoes deletions only, a vertex can belong to the smaller subcomponents in at most O(lg n) splits over the entire phase, and so the total cost per phase is O(m), which is absorbed in the preprocessing cost of the phase.
For each low subcomponent γ j , we update the matrix C[·, ·] in O(deg(γ j )∆) time, by examining each edge γ j v ∈ Γ and each of the O(∆) vertices u adjacent to γ j and testing whether γ j u ∈ Γ. Consequently, we need to update O(deg(γ j )∆) edges of type (a) in G * . Since a vertex can change from being in a high component to a low component at most once over the entire phase, the total cost per phase is O(m∆), which is absorbed by the preprocessing cost.
Finale. The overall amortized cost per update operation is O(∆^2 + m/∆). Set ∆ = m^{1/3}.
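Spelling out the balancing behind this choice:

$$ \Delta^2 = m/\Delta \iff \Delta = m^{1/3}, \qquad \text{so the update time is } O(\Delta^2 + m/\Delta) = O(m^{2/3}) \text{ and the query time is } O(\Delta) = O(m^{1/3}). $$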
Note that edge insertions and deletions in G can be accommodated easily (e.g., see Lemma 2 of the next section).
Dynamic Geometric Connectivity with Sublinear Update Time
In this section, we investigate geometric connectivity problems: maintaining a set S of n objects, under insertions and deletions of objects, so that we can decide whether two query objects are connected in the intersection graph of S. (In particular, we can decide whether two query points are connected in the union of S by finding two objects containing the two points, via range searching, and testing connectedness for these two objects.)
By the biclique-cover technique from [6], the result from the previous section immediately implies a dynamic connectivity method for axis-parallel boxes with O(n 2/3 ) update time and O(n 1/3 ) query time in any fixed dimension.
Unfortunately, this technique is not strong enough to lead to sublinear results for other objects, as we have explained in the introduction. This is because (i) the size of the maintained graph, m = O(S(n) + nT (n)), may be too large and (ii) the number of vertex updates triggered by an object update, O(S(n)/n + T (n)), may be too large.
We can overcome the first obstacle by using a different strategy that rebuilds the graph more often to keep it sparse; this is not obvious and will be described precisely later during the proof of Theorem 5. The second obstacle is even more critical: here, the key is to observe that although each geometric update requires multiple vertex updates, many of these vertex updates involve vertices of low degree.
A degree-sensitive version of subgraph connectivity
The first ingredient we need is a dynamic subgraph connectivity method that works faster when the degree of the updated vertex is small. Fortunately, we can prove the following lemma, which extends Theorem 1 (if we set ∆ = n 1/3 ). The method follows that of Theorem 1, but with an extra twist: not only do we classify components of P as high or low, but we also classify vertices of Q as high or low. Proof. The data structure is the same as in the proof of Theorem 1, except for one difference: the definition of the graph G * .
Call a vertex high if its degree exceeds m/∆, and low otherwise. Clearly, there are at most O(∆) high vertices.
• We define a graph G * whose vertices are the vertices of Q and components of P :
(a′) For each high vertex u ∈ Q and each v ∈ Q, if C[u, v] > 0, then create an edge uv ∈ G * .
(b) For each vertex u ∈ Q and high component γ in P , if uγ ∈ Γ, then create an edge uγ ∈ G * .
(b′) For each low vertex u ∈ Q and low component γ in P , if uγ ∈ Γ, then create an edge uγ ∈ G * .
(c) For each u, v ∈ Q, if uv ∈ E, then create an edge uv ∈ G * .
We maintain G * in a data structure for dynamic connectivity with polylogarithmic-time edge updates.
Justification. We claim that two vertices of Q are connected in the subgraph induced by the active vertices in G iff they are connected in G * . The "if" direction is obvious. For the "only if" direction, suppose two vertices u, v ∈ Q are "directly" connected in G by being adjacent to a common component γ in P . If γ is high, then edges of type (b) ensure that u and v are connected in G * . If u and v are both low, then edges of type (b ′ ) ensure that u and v are connected in G * . In the remaining case, at least one of the two vertices, say, u is high, and γ is low; here, edges of type (a ′ ) ensure that u and v are again connected in G * . The claim follows by concatenation.
Queries. Given two vertices v 1 and v 2 , if both are in Q, we can simply test whether they are connected in G * . If instead v j (j ∈ {1, 2}) is in a component γ j , then we can replace v j with any vertex of Q adjacent to γ j in G * . If no such vertex exists, then because of type-(b ′ ) edges, γ j can only be adjacent to high vertices of Q. We can exhaustively search for a high vertex in Q adjacent to γ j in Γ, in O(∆) time, and replace v j with such a vertex. If no such vertex exists, then γ j is an isolated component and we can simply test whether v 1 and v 2 are both in γ j . The cost is O(∆).
Preprocessing per phase. At the beginning of each phase, the cost to preprocess the data structure is O(m∆) as before. We can charge every update operation with an amortized cost of O(m∆/q) = O(∆ 2 ). Edge updates. We can simulate the insertion of an edge uv by inserting a new low vertex z adjacent to only u and v to Q. Since the degree is 2, the cost is O(1). We can later simulate the deletion of this edge by deleting the vertex z from Q.
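A tiny sketch of this edge-simulation trick follows; the structure interface (insert_vertex / delete_vertex) is a hypothetical one assumed for the illustration, not an API from the paper.

```python
# Illustrative sketch of simulating edge updates through vertex updates.
# `structure` is a hypothetical object exposing insert_vertex / delete_vertex;
# these method names are assumptions for the sketch, not an API from the paper.

import itertools

_aux_ids = itertools.count()

def insert_edge(structure, u, v):
    z = ('aux-edge', next(_aux_ids))
    structure.insert_vertex(z, neighbors=[u, v])   # a degree-2, hence low, vertex
    return z                                       # keep the handle for deletion

def delete_edge(structure, z):
    structure.delete_vertex(z)
```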
Update of a high vertex
Range searching tools from geometry
Next, we need known range searching techniques. These techniques give linear-space data structures (S(n) = O(n)) that can retrieve all objects intersecting a query object in sublinear time (T (n) = O(n 1−b )) for many types of geometric objects. We assume that our class of geometric objects satisfies the following property for some constant b > 0-this property neatly summarizes all we need to know from geometry. The property is typically proved by applying a suitable "partition theorem" in a recursive manner, thereby forming a so-called "partition tree"; for example, see the work by Matoušek [28] or the survey by Agarwal and Erickson [2]. Each canonical subset corresponds to a node of the partition tree (more precisely, the subset of all objects stored at the leaves underneath the node). Matoušek's results imply that b = 1/d − ε is attainable for simplices or constant-size polyhedra in IR d . (To go from simplex range searching to intersection searching, one uses multi-level partition trees; e.g., see [29].) Further results by Agarwal and Matoušek [3] yield b = 1/(d + 1) − ε for balls in IR d and nontrivial values of b for other families of curved objects (semialgebraic sets of constant degree). The special case of axis-parallel boxes corresponds to b = 1.
The specific bounds in (i) and (ii) may not be too well known, but they follow from the hierarchical way in which canonical subsets are constructed. For example, (ii) follows since the subsets in C z of size at most n/∆ are contained in O(∆ 1−b ) subsets of size O(n/∆). In fact, (multi-level) partition trees guarantee a stronger inequality,
∑_{C∈C_z} |C|^{1−b} = O(n^{1−b}), from which both (i) and (ii) can be obtained after a moment's thought.
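The "moment's thought" can be spelled out as follows, reading (i) as a bound of O(∆^{1−b}) on the number of subsets in C_z of size exceeding n/∆ and (ii) as a bound of O(n/∆^b) on the total size of the remaining subsets in C_z (this reading is inferred from how (i) and (ii) are used later, since the statement of Property 3 is abbreviated here):

$$ |\{C \in C_z : |C| > n/\Delta\}| \cdot (n/\Delta)^{1-b} \;\le\; \sum_{C\in C_z} |C|^{1-b} = O(n^{1-b}) \;\Longrightarrow\; |\{C \in C_z : |C| > n/\Delta\}| = O(\Delta^{1-b}), $$

$$ \sum_{C\in C_z,\,|C|\le n/\Delta} |C| \;=\; \sum_{C\in C_z,\,|C|\le n/\Delta} |C|^{b}\,|C|^{1-b} \;\le\; (n/\Delta)^{b} \sum_{C\in C_z} |C|^{1-b} \;=\; O(n/\Delta^{b}). $$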
As an illustration, we can use the above property to develop a data structure for a special case of dynamic geometric connectivity where insertions are done in "blocks" but arbitrary deletions are to be supported. Although the insertion time is at least linear, the result is good if the block size s is sufficiently large. This subroutine will make up a part of the final solution.
Lemma 4. We can maintain the connected components among a set S of objects in a data structure that supports insertion of a block of s objects in O(n + sn 1−b ) amortized time (s < n), and deletion of a single object in O(1) amortized time.
Proof. We maintain a multigraph H in a data structure for dynamic connectivity with polylogarithmic edge update time (which explicitly maintains the connected components), where the vertices are the objects of S. This multigraph will obey the invariant that two objects are geometrically connected iff they are connected in H. We do not insist that H has linear size.
Insertion of a block B to S. We first form a collection C of canonical subsets for S ∪ B by Property 3. For each z ∈ B and each C ∈ C z , we assign z to C. For each canonical subset C ∈ C, if C is assigned at least one object of B, then we create new edges in H linking all objects of C and all objects assigned to C in a path. (If this path overlaps with previous paths, we create multiple copies of edges.) The number of edges inserted is thus O(n + |B|n 1−b ).
Justification. The invariant is satisfied since all objects in a canonical subset C intersect all objects assigned to C, and are thus all connected if there is at least one object assigned to C.
Deletion of an object z from S. For each canonical subset C containing or assigned the object z, we need to delete at most 2 edges and insert 1 edge to maintain the path. As soon as the path contains no object assigned to C, we delete all the edges in the path. Since the length of the path can only decrease over the entire update sequence, the total number of such edge updates is proportional to the initial length of the path. We can charge the cost to edge insertions.
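The bookkeeping of this proof can be sketched as follows. This is an illustration rather than the paper's implementation: the canonical subsets and the objects assigned to them are supplied explicitly by the caller (the range searching of Property 3 is outside the sketch), and connectivity queries use breadth-first search in place of a polylogarithmic edge-update structure.

```python
# Illustrative bookkeeping for Lemma 4 (not the paper's implementation).
# Canonical subsets and their assigned objects are supplied by the caller;
# connectivity queries use BFS in place of a polylogarithmic structure.

from collections import defaultdict, deque

class CanonicalPaths:
    def __init__(self):
        self.adj = defaultdict(lambda: defaultdict(int))   # multigraph H
        self.path = {}       # canonical-subset id -> objects linked in a path
        self.assigned = {}   # canonical-subset id -> assigned objects still present

    def _add(self, u, v):
        self.adj[u][v] += 1; self.adj[v][u] += 1

    def _remove(self, u, v):
        self.adj[u][v] -= 1; self.adj[v][u] -= 1
        if self.adj[u][v] == 0:
            del self.adj[u][v]; del self.adj[v][u]

    def insert_block(self, canonical):
        """canonical: id -> (members, assigned objects of the new block)."""
        for cid, (members, assigned) in canonical.items():
            if not assigned:
                continue
            nodes = list(members) + list(assigned)
            self.path[cid], self.assigned[cid] = nodes, set(assigned)
            for a, b in zip(nodes, nodes[1:]):
                self._add(a, b)

    def delete_object(self, z):
        for cid in list(self.path):
            nodes = self.path[cid]
            if z not in nodes:
                continue
            i = nodes.index(z)
            if i > 0: self._remove(nodes[i - 1], z)
            if i + 1 < len(nodes): self._remove(z, nodes[i + 1])
            if 0 < i < len(nodes) - 1:           # repair the path
                self._add(nodes[i - 1], nodes[i + 1])
            nodes.pop(i)
            self.assigned[cid].discard(z)
            if not self.assigned[cid]:           # no assigned object left: drop path
                for a, b in zip(nodes, nodes[1:]):
                    self._remove(a, b)
                del self.path[cid], self.assigned[cid]

    def connected(self, a, b):
        seen, queue = {a}, deque([a])
        while queue:
            u = queue.popleft()
            if u == b:
                return True
            for v in self.adj[u]:
                if v not in seen:
                    seen.add(v); queue.append(v)
        return False

H = CanonicalPaths()
H.insert_block({'C1': ({'o1', 'o2'}, {'o3'})})   # block {o3} meets subset {o1, o2}
print(H.connected('o1', 'o3'))                   # True
H.delete_object('o2')
print(H.connected('o1', 'o3'))                   # still True: the path was repaired
```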
Putting it together
We are finally ready to present our sublinear result for dynamic geometric connectivity. We again need the idea of rebuilding periodically, and splitting smaller sets from larger ones. In addition to the graph H (of superlinear size) from Lemma 4, which undergoes insertions only in blocks, the key lies in the definition of another subtly crafted intermediate graph G (of linear size), maintained this time by the subgraph connectivity structure of Lemma 2. The definition of this graph involves multiple types of vertices and edges. The details of the analysis and the setting of parameters get more interesting.
Theorem 5. Assume 0 < b ≤ 1/2. We can maintain a collection of objects in amortized update time O(n^{1−b^2/(2+b)}) and answer connectivity queries in time O(n^{b/(2+b)}).
Proof. We divide the update sequence into phases, each consisting of y := n b updates. The current objects are partitioned into two sets X and Y , where X undergoes only deletions and Y undergoes both insertions and deletions. Each insertion is done to Y . At the end of each phase, we move the elements of Y to X and reset Y to the empty set. This way, |Y | is kept at most y at all times.
At the beginning of each phase, we form a collection C of canonical subsets for X by Property 3.
The data structure.
• We maintain the components of X in the data structure from Lemma 4.
• We maintain the following graph G for dynamic subgraph connectivity, where the vertices are objects of X ∪ Y , components of X, and the canonical subsets of the current phase:
(a) Create an edge in G between each component of X and each of its objects.
(b) Create an edge in G between each canonical subset and each of its objects in X.
(c) Create an edge in G between each object z ∈ Y and each canonical subset C ∈ C z . Here, we assign z to C.
(d) Create an edge in G between every two intersecting objects in Y .
(e) We make a canonical subset active in G iff it is assigned at least one object in Y . Vertices that are objects or components are always active. Justification. We claim that two objects are geometrically connected in X ∪ Y iff they are connected in the subgraph induced by the active vertices in the graph G. The "only if" direction is obvious. For the "if" direction, we note that all objects in an active canonical subset C intersect all objects assigned to C and are thus all connected.
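A static sketch of this graph G may help; it is again illustrative only: union-find plays the role of the dynamic subgraph-connectivity structure of Lemma 2, and inactive canonical subsets are simply left unlinked, which is equivalent to switching their vertices off.

```python
# A static sketch of the graph G of this section (illustrative; union-find
# replaces the dynamic subgraph-connectivity structure, and only active
# canonical subsets are linked, mimicking the on/off status of their vertices).

class UnionFind:
    def __init__(self): self.p = {}
    def find(self, x):
        self.p.setdefault(x, x)
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]; x = self.p[x]
        return x
    def union(self, a, b): self.p[self.find(a)] = self.find(b)
    def connected(self, a, b): return self.find(a) == self.find(b)

def build_G(x_components, subset_members, y_assignments, y_intersections):
    """x_components: list of sets of X-objects; subset_members: canonical
    subset id -> set of X-objects; y_assignments: object z in Y -> ids of the
    canonical subsets in C_z; y_intersections: intersecting pairs within Y."""
    uf = UnionFind()
    for i, comp in enumerate(x_components):                        # (a)
        for obj in comp: uf.union(('comp', i), ('obj', obj))
    active = {cid for cids in y_assignments.values() for cid in cids}  # (e)
    for cid in active:                                              # (b), active only
        for obj in subset_members.get(cid, ()): uf.union(('set', cid), ('obj', obj))
    for z, cids in y_assignments.items():                           # (c)
        for cid in cids: uf.union(('obj', z), ('set', cid))
    for z1, z2 in y_intersections:                                  # (d)
        uf.union(('obj', z1), ('obj', z2))
    return uf

uf = build_G(x_components=[{'x1', 'x2'}], subset_members={'C1': {'x2'}},
             y_assignments={'y1': ['C1']}, y_intersections=[])
print(uf.connected(('obj', 'x1'), ('obj', 'y1')))   # True, via C1 and the X-component
```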
Queries. We answer a query by querying in the graph G. The cost is O(∆).
Preprocessing per phase. Before a new phase begins, we need to update the components in X as we move all elements of Y to X (a block insertion). By Lemma 4, the cost is O(n + y n^{1−b}). Update of an object z in Y. The cost is O(∆^2 + ∆^{1−b} · n/∆ + n/∆^b) = O(n^{1−b}∆^2 + n/∆^b).
Deletion of an object z in X. We first update the components of X. By Lemma 4, the amortized cost is O(1). We can now update the edges of type (a) in G. The total number of such edge updates per phase is O(n lg n), by always splitting smaller components from larger ones. The amortized number of edge updates is thus O(n/y). The amortized cost is O((n/y)∆ 2 ) = O(n 1−b ∆ 2 ).
Finale. The overall amortized cost per update operation is O(n^{1−b}∆^2 + n/∆^b). Set ∆ = n^{b/(2+b)}.
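The choice of ∆ balances the two terms:

$$ n^{1-b}\Delta^2 = n/\Delta^{b} \iff \Delta^{2+b} = n^{b} \iff \Delta = n^{b/(2+b)}, \qquad \text{so the update time is } n^{1-b}\Delta^2 = n^{\,1-\frac{b^2}{2+b}} \text{ and the query time is } O(\Delta) = O(n^{b/(2+b)}). $$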
Note that we can still prove the theorem for b > 1/2, by handling the O(y 2 ) intersections among Y (the type (d) edges) in a less naive way. However, we are not aware of any specific applications with b ∈ (1/2, 1).
Offline Dynamic Geometric Connectivity
For the special case of offline updates, we can improve the result of Section 4 for small values of b by a different method using rectangular matrix multiplication.
Let M [n 1 , n 2 , n 3 ] represent the cost of multiplying a Boolean n 1 × n 2 matrix A with a Boolean n 2 × n 3 matrix B. Let M [n 1 , n 2 , n 3 | m 1 , m 2 ] represent the same cost under the knowledge that the number of 1's in A is m 1 and the number of 1's in B is m 2 . We can reinterpret this task in graph terms: Suppose we are given a tripartite graph with vertex classes V 1 , V 2 , V 3 of sizes n 1 , n 2 , n 3 respectively where there are m 1 edges between V 1 and V 2 and m 2 edges between V 2 and V 3 . Then M [n 1 , n 2 , n 3 | m 1 , m 2 ] represent the cost of deciding, for each u ∈ V 1 and v ∈ V 3 , whether u and v are adjacent to a common vertex in V 2 .
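In code, the quantity behind M[n_1, n_2, n_3] is a single Boolean matrix product; the following illustrative snippet computes, for a small random tripartite graph, which pairs (u, v) ∈ V_1 × V_3 have a common neighbor in V_2.

```python
# Illustrative only: the quantity behind M[n1, n2, n3] is one Boolean matrix
# product.  For a random tripartite graph with classes V1, V2, V3, the product
# of the two adjacency matrices tells which pairs (u, v) share a V2-neighbor.

import numpy as np

n1, n2, n3 = 3, 4, 2
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(n1, n2))    # edges between V1 and V2
B = rng.integers(0, 2, size=(n2, n3))    # edges between V2 and V3

common_neighbor = (A @ B) > 0            # common_neighbor[u, v] == True iff
print(common_neighbor)                   # u in V1 and v in V3 share a vertex of V2
```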
An offline degree-sensitive version of subgraph connectivity
We begin with an offline variant of Lemma 2 (Lemma 6, with parameters 1 ≤ ∆ ≤ q ≤ m). Proof. We divide the update sequence into phases, each consisting of q low-vertex updates. The active vertices are partitioned into two sets P and Q, with Q ⊆ Q 0 , where P and Q 0 are static and Q undergoes both insertions and deletions. Each vertex insertion/deletion is done to Q. At the end of each phase, we reset Q 0 to hold all O(∆) high vertices plus the low vertices involved in the updates of the next phase, reset P to hold all active vertices not in Q 0 , and reset Q to hold all active vertices in Q 0 . Clearly, |Q| ≤ |Q 0 | = O(q).
The data structure is the same as the one in the proof of Lemma 2, with one key difference: we only maintain the value C[u, v] when u is a high vertex in Q 0 and v is a (high or low) vertex in Q 0 . Moreover, we do not need to distinguish between high and low components, i.e., all components are considered low.
During preprocessing of each phase, we can now compute these C[·, ·] values by rectangular matrix multiplication. Deletions in P do not occur now.
Sparse and dense rectangular matrix multiplication
Sparse matrix multiplication can be reduced to multiplying smaller dense matrices, by using a "highlow" trick [5]. Fact 7(i) below can be viewed as a variant of [6, Lemma 3.1] and a result of Yuster and Zwick [38]-incidentally, this fact is sufficiently powerful to yield a simple(r) proof of Yuster and Zwick's sparse matrix multiplication result, when combined with known bounds on dense rectangular matrix multiplication. Fact 7(ii) below states one known bound on dense rectangular matrix multiplication which we will use.
Putting it together
We now present our offline result for dynamic geometric connectivity using Lemma 6. Although we also use Property 3, the design of the key graph G is quite different from the one in the proof of Theorem 5. For instance, the size of the graph is larger (and no longer O(n)), but the number of edges incident to high vertices remains linear; furthermore, each object update triggers only a constant number of vertex updates in the graph. All the details come together in the analysis to lead to some intriguing choices of parameters. Proof. We divide the update sequence into phases, each consisting of q updates, where q is a parameter satisfying ∆ ≤ q ≤ n/∆^{1−b}. The current objects are partitioned into two sets X and Y , with Y ⊆ Y 0 where X and Y 0 are static and Y undergoes both insertions and deletions. Each insertion/deletion is done to Y . At the end of each phase, we reset Y 0 to hold all objects involved in the updates of the next phase, X to hold all current objects not in Y 0 , and Y to hold all current objects in Y 0 . Clearly, |Y | ≤ |Y 0 | = O(q). At the beginning of each phase, we form a collection C of canonical subsets for X ∪ Y 0 by Property 3.
The data structure.
• We maintain the components of X in the data structure from Lemma 4.
• We maintain the following graph G for offline dynamic subgraph connectivity, where the vertices are objects of X ∪ Y 0 , components of X, and canonical subsets of size exceeding n/∆:
(a) Create an edge in G between each component of X and each of its objects.
(b) Create an edge in G between each canonical subset C of size exceeding n/∆ and each of its objects in X ∪ Y .
(c) Create an edge in G between each object z ∈ Y 0 and each canonical subset C ∈ C z of size exceeding n/∆. Here, we assign z to C.
(d) Create an edge in G between each object z ∈ Y 0 and each object in the union of the canonical subsets in C z of size at most n/∆.
(e) We make a canonical subset active in G iff it is assigned at least one object in Y . We make the vertices in X ∪Y active, and all components active. The high vertices are precisely the canonical subsets of size exceeding n/∆; there are O(∆) such vertices. Update of an object z in Y . We need to make a single vertex update z in G, which has degree O(n/∆ b ) by Property 3(ii). Furthermore, we may have to change the status of as many as O(∆ 1−b ) high vertices by Property 3(i). According to Lemma 8, the cost of these vertex updates is O(M [∆, n, q | n, m]/q + n/∆ b + ∆ 1−b q).
Finale. By Fact 7, assuming that ∆ ≤ q^α and q ≤ n/t, we have M[∆, n, q | n, m] = O(M[∆, n/t, q] + mt) = O(nq/t + nqt/∆^b). Choosing t = ∆^{b/2} gives O(nq/∆^{b/2}). The overall amortized cost per update operation is thus O(n/∆^{b/2} + ∆^{1−b}q + n/q + n^{1−b}). Set ∆ = q^α and q = n^{1/(1+α−bα/2)} and the result follows. (Note that indeed ∆ ≤ q ≤ n/∆^{1−b} and q ≤ n/t for these choices of parameters.) Compared to Theorem 5, the dependence on b of the exponent in the update bound is only 1 − Θ(b) rather than 1 − Θ(b^2). The bound is better, for example, for b ≤ 1/4.
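With these choices the dominant term can be verified directly:

$$ \Delta = q^{\alpha},\quad q = n^{\frac{1}{1+\alpha-b\alpha/2}} \;\Longrightarrow\; \frac{n}{\Delta^{b/2}} = n^{\,1-\frac{\alpha b/2}{1+\alpha-b\alpha/2}} = n^{\frac{1+\alpha-b\alpha}{1+\alpha-b\alpha/2}}, $$

and $\Delta^{1-b} q = q^{\,1+\alpha-b\alpha} = n^{\frac{1+\alpha-b\alpha}{1+\alpha-b\alpha/2}}$ matches it, while $n/q$ and $n^{1-b}$ are no larger.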
Open Problems
Our work opens up many interesting directions for further research. For subgraph connectivity, an obvious question is whether the O(m 2/3 ) vertex-update bound can be improved (without or with FMM); as we have mentioned, improvements beyond √ m without FMM are not possible without a breakthrough on the triangle-finding problem. An intriguing question is whether for dense graphs we can achieve update time sublinear in n, i.e., O(n 1−ε ) (or possibly even sublinear in the degree). For geometric connectivity, it would be desirable to determine the best update bounds for specific shapes such as line segments and disks in two dimensions. Also, directed settings of geometric connectivity arise in applications and are worth studying; for example, when sensors' transmission ranges are balls of different radii or wedges, a sensor may lie in another sensor's range without the reverse being true.
For both subgraph and geometric connectivity, we can reduce the query time at the expense of increasing the update time, but we do not know whether constant or polylogarithmic query time is possible with sublinear update time in general (see [1] for a result on the 2-dimensional orthogonal special case). Currently, we do not know how to obtain our update bounds with linear space (e.g., Theorem 1 requires O(m 4/3 ) space), nor do we know how to get good worst-case update bounds (since the known polylogarithmic results for connectivity under edge updates are all amortized). Also, the queries we have considered are about connectivity between two vertices/objects. Can nontrivial results be obtained for richer queries such as counting the number of connected components (see [1] on the 2-dimensional orthogonal case), or perhaps shortest paths or minimum cut?
| 6,473 |
0808.1128
|
2953283045
|
Dynamic connectivity is a well-studied problem, but so far the most compelling progress has been confined to the edge-update model: maintain an understanding of connectivity in an undirected graph, subject to edge insertions and deletions. In this paper, we study two more challenging, yet equally fundamental problems. Subgraph connectivity asks to maintain an understanding of connectivity under vertex updates: updates can turn vertices on and off, and queries refer to the subgraph induced by "on" vertices. (For instance, this is closer to applications in networks of routers, where node faults may occur.) We describe a data structure supporting vertex updates in O(m^{2/3}) amortized time, where m denotes the number of edges in the graph. This greatly improves over the previous result [Chan, STOC'02], which required fast matrix multiplication and had an update time of O(m^{0.94}). The new data structure is also simpler. Geometric connectivity asks to maintain a dynamic set of n geometric objects, and query connectivity in their intersection graph. (For instance, the intersection graph of balls describes connectivity in a network of sensors with bounded transmission radius.) Previously, nontrivial fully dynamic results were known only for special cases like axis-parallel line segments and rectangles. We provide similarly improved update times, O(n^{2/3}), for these special cases. Moreover, we show how to obtain sublinear update bounds for virtually all families of geometric objects which allow sublinear-time range queries, such as arbitrary 2D line segments, d-dimensional simplices, and d-dimensional balls.
|
It was shown @cite_32 that subgraph connectivity can be reduced to dynamic connectivity of axis-parallel line segments in 3 dimensions. Thus, as soon as one gets enough combinatorial richness in the host geometric space, subgraph connectivity becomes a possible way to solve geometric connectivity.
|
{
"abstract": [
"(MATH) Inspired by dynamic connectivity applications in computational geometry, we consider a problem we call dynamic subgraph connectivity: design a data structure for an undirected graph @math and a subset of vertices @math , to support insertions and deletions in @math and connectivity queries (are two vertices connected @?) in the subgraph induced by @math . We develop the first sublinear, fully dynamic method for this problem for general sparse graphs, using an elegant combination of several simple ideas. Our method requires linear space, @math amortized update time, and @math query time, where @math is the matrix multiplication exponent and @math hides polylogarithmic factors."
],
"cite_N": [
"@cite_32"
],
"mid": [
"2011182146"
]
}
|
Dynamic Connectivity: Connecting to Networks and Geometry
|
Dynamic graphs inspire a natural, challenging, and well-studied class of algorithmic problems. A rich body of the STOC/FOCS literature has considered problems ranging from the basic question of understanding connectivity in a dynamic graph [13,17,34,6,31], to maintaining the minimum spanning tree [20], the min-cut [36], shortest paths [9,35], reachability in directed graphs [10,25,26,32,33], etc.
But what exactly makes a graph "dynamic"? Computer networks have long provided the common motivation. The dynamic nature of such networks is captured by two basic types of updates to the graph:
• edge updates: adding or removing an edge. These correspond to setting up a new cable connection, accidental cable cuts, etc.
• vertex updates: turning a vertex on and off. Vertices (routers) can temporarily become "off" after events such as a misconfiguration, a software crash and reboot, etc. Problems involving only vertex updates have been called dynamic subgraph problems, since queries refer to the subgraph induced by vertices which are on.
Loosely speaking, dynamic graph problems fall into two categories. For "hard" problems, such as shortest paths and directed reachability, the best known running times are at least linear in the number of vertices. These high running times obscure the difference between vertex and edge updates, and identical bounds are often stated [9,32,33] for both operations. For the remainder of the problems, sublinear running times are known for edge updates, but sublinear bounds for vertex updates seems much harder to get. For instance, even iterating through all edges incident to a vertex may take linear time in the worst case. That vertex updates are slow is unfortunate. Referring to the computer-network metaphor, vertex updates are cheap "soft" events (misconfiguration or reboot), which occur more frequently than the costly physical events (cable cut) that cause an edge update.
Subgraph connectivity. As mentioned, most previous sublinear dynamic graph algorithms address edge updates but not the equally fundamental vertex updates. One notable exception, however, was a result of Chan [6] from STOC'02 on the basic connectivity problem for general sparse (undirected) graphs. This algorithm can support vertex updates in time 1 O(m 0.94 ) and decide whether two query vertices are connected in time O(m 1/3 ).
Though an encouraging start, the nature of this result makes it appear more like a half breakthrough. For one, the update time is only slightly sublinear. Worse yet, Chan's algorithm requires fast matrix multiplication (FMM). The O(m 0.94 ) update time follows from the theoretical FMM algorithm of Coppersmith and Winograd [8]. If Strassen's algorithm is used instead, the update time becomes O(m 0.984 ). Even if optimistically FMM could be done in quadratic time, the update time would only improve to O(m 0.89 ). FMM has been used before in various dynamic graph algorithms (e.g., [10,26]), and the paper [6] noted specific connections to some matrix-multiplication-related problems (see Section 2). All this naturally led one to suspect, as conjectured in the paper, that FMM might be essential to our problem. Thus, the result we are about to describe may come as a bit of a surprise. . . 1 We use m and n to denote the number of edges and vertices of the graph respectively; e O(·) ignores polylogarithmic factors and O * (·) hides n ε factors for an arbitrarily small constant ε > 0. Update bounds in this paper are, by default, amortized. First of all, this is a significant quantitative improvement (to anyone who regards an m 0.27 factor as substantial), and it represents the first convincingly sublinear running time. More importantly, it is a significant qualitative improvement, as our bound does not require FMM. Our algorithm involves a number of ideas, some of which can be traced back to earlier algorithms, but we use known edge-updatable connectivity structures to maintain a more cleverly designed intermediate graph. The end product is not straightforward at all, but still turns out to be simpler than the previous method [6] and has a compact, two-page description (we regard this as another plus, not a drawback).
Dynamic Geometry
We next turn to another important class of dynamic connectivity problems-those arising from geometry.
Geometric connectivity. Consider the following question, illustrated in Figure 1(a). Maintain a set of line segments in the plane, under insertions and deletions, to answer queries of the form: "given two points a and b, is there a path between a and b along the segments?" This simple-sounding problem turns out to be a challenge. On one hand, understanding any local geometry does not seem to help, because the connecting path can be long and windy. On the other hand, the graph-theoretic understanding is based on the intersection graph, which is too expensive to maintain. A newly inserted (or deleted) segment can intersect a large number of objects in the set, changing the intersection graph dramatically.
Abstracting away, we can consider a broad class of problems of the form: maintain a set of n geometric objects, and answer connectivity queries in their intersection graph. Such graphs arise, for instance, in VLSI applications in the case of orthogonal segments, or gear transmission systems, in the case of touching disks; see Figure 1(b). A more compelling application can be found in sensor networks: if r is the radius within which two sensors can communicate, the communication network is the intersection graph of balls of radius r/2 centered at the sensors. While our focus is on theoretical understanding rather than the practicality of specific applications, these examples still indicate the natural appeal of geometric connectivity problems.
All these problems have a trivial O(n) solution, by maintaining the intersection graph through edge updates. A systematic approach to beating the linear time bound was proposed in Chan's paper as well [6], by drawing a connection to subgraph connectivity. Assume that a particular object type allows data structures for intersection range searching with space S(n) and query time T(n). It was shown that geometric connectivity can essentially be solved by maintaining a graph of size m = O(S(n) + nT(n)) and running O(S(n)/n + T(n)) vertex updates for every object insertion or deletion. Using the previous subgraph connectivity result [6], an update in the geometric connectivity problem took time O([S(n)/n + T(n)] · [S(n) + nT(n)]^{0.94}). Using our improved result, the bound becomes O([S(n)/n + T(n)] · [S(n) + nT(n)]^{2/3}).
The prime implication in the previous paper is that connectivity of axis-parallel boxes in any constant dimension (in particular, orthogonal line segments in the plane) reduces to subgraph connectivity, with a polylogarithmic cost. Indeed, for such boxes range trees yield S(n) = n · lg O(d) n and T (n) = lg O(d) n. Unfortunately, while nontrivial range searching results are known for many types of objects, very efficient range searching is hard to come by. Consider our main motivating examples:
• for arbitrary (non-orthogonal) line segments in IR^2, one can achieve T(n) = O*(√n) and S(n) = O*(n), or T(n) = O*(n^{1/3}) and S(n) = O*(n^{4/3}) [28].
• for disks in IR^2, one can achieve T(n) = O*(n^{2/3}) and S(n) = O*(n), or T(n) = O*(n^{1/2}) and S(n) = O*(n^{3/2}) [3].
Even with our improved vertex-update time, the [S(n)/n + T (n)] · [S(n) + nT (n)] 2/3 bound is too weak to beat the trivial linear update time. For arbitrary line segments in IR 2 , one would need to improve the vertex-update time to m 1/2−ε , which appears unlikely without FMM (see Section 2). The line segment case was in fact mentioned as a major open problem, implicitly in [6] and explicitly in [1]. The situation gets worse for objects of higher complexity or in higher dimensions.
Our results. In this paper, we are finally able to break the above barrier for dynamic geometric connectivity. At a high level, we show that range searching with any sublinear query time is enough to obtain sublinear update time in geometric connectivity. In particular, we get the first nontrivial update times for arbitrary line segments in the plane, disks of arbitrary radii, and simplices and balls in any fixed dimension. While the previous reduction [6] involves merely a straightforward usage of "biclique covers", our result here requires much more work. For starters, we need to devise a "degree-sensitive" version of our improved subgraph connectivity algorithm (which is of interest in itself); we then use this and known connectivity structures to maintain not one but two carefully designed intermediate graphs. Known range searching techniques [2] from computational geometry almost always provide sublinear query time. For instance, Matoušek [28] showed that b ≈ 1/2 is attainable for line segments, triangles, and any constant-size polygons in IR 2 ; more generally, b ≈ 1/d for simplices or constant-size polyhedra in IR d . Further results by Agarwal and Matoušek [3] yield b ≈ 1/(d + 1) for balls in IR d . Most generally, b > 0 is possible for any class of objects defined by semialgebraic sets of constant description complexity.
More results. Our general sublinear results undoubtedly invite further research into finding better bounds for specific classes of objects. In general, the complexity of range queries provides a natural barrier for the update time, since upon inserting an object we at least need to determine if it intersects any object already in the set. Essentially, our result has a quadratic loss compared to range queries: if T(n) = n^{1−b}, the update time is n^{1−Θ(b^2)}.
In Section 5, we make a positive step towards closing this quadratic gap: we show that if the updates are given offline (i.e. are known in advance), the amortized update time can be made n^{1−Θ(b)}. We need FMM this time, but the usage of FMM here is more intricate (and interesting) than typical. For one, it is crucial to use fast rectangular matrix multiplication. Along the way, we even find ourselves rederiving Yuster and Zwick's sparse matrix multiplication result [38] in a more general form. The juggling of parameters is also more unusual, as one can suspect from looking at our actual update bound, which is O(n^{(1+α−bα)/(1+α−bα/2)}), where α = 0.294 is an exponent associated with rectangular FMM.
Dynamic Subgraph Connectivity with O(m^{2/3}) Update Time
In this section, we present our new method for the dynamic subgraph connectivity problem: maintaining a subset S of vertices in a graph G, under vertex insertions and deletions in S, so that we can decide whether any two query vertices are connected in the subgraph induced by S. We will call the vertices in S the active vertices. For now, we assume that the graph G itself is static.
The complete description of the new method is given in the proof of the following theorem. It is "short and sweet", especially if the reader compares with Chan's paper [6]. The previous method requires several stages of development, addressing the offline and semi-online special cases, along with the use of FMMwe completely bypass these intermediate stages, and FMM, here. Embedded below, one can find a number of different ideas (some also used in [6]): rebuilding periodically after a certain number of updates, distinguishing "high-degree" features from "low-degree" features (e.g., see [5,37]), amortizing by splitting smaller subsets from larger ones, etc. The key lies in the definition of a new, yet deceptively simple, intermediate graph G * , which is maintained by known polylogarithmic data structures for dynamic connectivity under edge updates [17,20,34]. Except for these known connectivity structures, the description is entirely self-contained. Proof. We divide the update sequence into phases, each consisting of q := m/∆ updates. The active vertices are partitioned into two sets P and Q, where P undergoes only deletions and Q undergoes both insertions and deletions. Each vertex insertion is done to Q. At the end of each phase, we move the elements of Q to P and reset Q to the empty set. This way, |Q| is kept at most q at all times.
Call a connected component in (the subgraph induced by) P high if the sum of the degrees of its vertices exceeds ∆, and low otherwise. Clearly, there are at most O(m/∆) high components.
The data structure.
• We store the components of P in a data structure for decremental (deletion-only) connectivity that supports edge deletions in polylogarithmic amortized time.
• We maintain a bipartite multigraph Γ between V and the components γ in P : for each uv ∈ E where v lies in component γ, we create a copy of an edge uγ ∈ Γ.
• For each vertex pair u,v, we maintain the value C[u, v] defined as the number of low components in P that are adjacent to both u and v in Γ. (Actually, only O(m∆) entries of C[·, ·] are nonzero and need to be stored.)
• We define a graph G * whose vertices are the vertices of Q and components of P :
(a) For each u, v ∈ Q, if C[u, v] > 0, then create an edge uv ∈ G * .
(b) For each vertex u ∈ Q and high component γ in P , if uγ ∈ Γ, then create an edge uγ ∈ G * .
(c) For each u, v ∈ Q, if uv ∈ E, then create an edge uv ∈ G * .
We maintain G * in another data structure for dynamic connectivity supporting polylogarithmic-time edge updates.
Justification. We claim that two vertices of Q are connected in the subgraph induced by the active vertices in G iff they are connected in G * . The "if" direction is obvious. For the "only if" direction, suppose two vertices u, v ∈ Q are "directly" connected in G by being adjacent to a common component γ in P . If γ is high, then edges of type (b) ensure that u and v are connected in G * . If instead γ is low, then edges of type (a) ensure that u and v are connected in G * . By concatenation, the argument extends to show that any two vertices u, v ∈ Q connected by a path in G are connected in G * .
Queries. Given two vertices v 1 and v 2 , if both are in Q, we can simply test whether they are connected in G * . If instead v j (j ∈ {1, 2}) is in a high component γ j , then we can replace v j with any vertex of Q adjacent to γ j in G * . If no such vertex exists, then because of type-(b) edges, γ j is an isolated component and we can simply test whether v 1 and v 2 are both in the same component of P .
If on the other hand v j is in a low component γ j , then we can exhaustively search for a vertex in Q adjacent to γ j in Γ, in O(∆) time, and replace v j with such a vertex. Again if no such vertex exists, then γ j is an isolated component and the test is easy. The query cost is O(∆). Deletion of a vertex from a high component γ in P . The component γ is split into a number of subcomponents γ 1 , . . . , γ ℓ with, say, γ 1 being the largest. We can update the multigraph Γ in time O(deg(γ 2 ) + · · · + deg(γ ℓ )) by splitting the smaller subcomponents from the largest subcomponent. Consequently, we need to update O(deg(γ 2 ) + · · · + deg(γ ℓ )) edges of type (b) in G * . Since P undergoes deletions only, a vertex can belong to the smaller subcomponents in at most O(lg n) splits over the entire phase, and so the total cost per phase is O(m), which is absorbed in the preprocessing cost of the phase.
For each low subcomponent γ j , we update the matrix C[·, ·] in O(deg(γ j )∆) time, by examining each edge γ j v ∈ Γ and each of the O(∆) vertices u adjacent to γ j and testing whether γ j u ∈ Γ. Consequently, we need to update O(deg(γ j )∆) edges of type (a) in G * . Since a vertex can change from being in a high component to a low component at most once over the entire phase, the total cost per phase is O(m∆), which is absorbed by the preprocessing cost.
Finale. The overall amortized cost per update operation is O(∆^2 + m/∆). Set ∆ = m^{1/3}.
Note that edge insertions and deletions in G can be accommodated easily (e.g., see Lemma 2 of the next section).
Dynamic Geometric Connectivity with Sublinear Update Time
In this section, we investigate geometric connectivity problems: maintaining a set S of n objects, under insertions and deletions of objects, so that we can decide whether two query objects are connected in the intersection graph of S. (In particular, we can decide whether two query points are connected in the union of S by finding two objects containing the two points, via range searching, and testing connectedness for these two objects.)
By the biclique-cover technique from [6], the result from the previous section immediately implies a dynamic connectivity method for axis-parallel boxes with O(n 2/3 ) update time and O(n 1/3 ) query time in any fixed dimension.
Unfortunately, this technique is not strong enough to lead to sublinear results for other objects, as we have explained in the introduction. This is because (i) the size of the maintained graph, m = O(S(n) + nT (n)), may be too large and (ii) the number of vertex updates triggered by an object update, O(S(n)/n + T (n)), may be too large.
We can overcome the first obstacle by using a different strategy that rebuilds the graph more often to keep it sparse; this is not obvious and will be described precisely later during the proof of Theorem 5. The second obstacle is even more critical: here, the key is to observe that although each geometric update requires multiple vertex updates, many of these vertex updates involve vertices of low degree.
A degree-sensitive version of subgraph connectivity
The first ingredient we need is a dynamic subgraph connectivity method that works faster when the degree of the updated vertex is small. Fortunately, we can prove the following lemma, which extends Theorem 1 (if we set ∆ = n 1/3 ). The method follows that of Theorem 1, but with an extra twist: not only do we classify components of P as high or low, but we also classify vertices of Q as high or low. Proof. The data structure is the same as in the proof of Theorem 1, except for one difference: the definition of the graph G * .
Call a vertex high if its degree exceeds m/∆, and low otherwise. Clearly, there are at most O(∆) high vertices.
• We define a graph G * whose vertices are the vertices of Q and components of P :
(a′) For each high vertex u ∈ Q and each v ∈ Q, if C[u, v] > 0, then create an edge uv ∈ G * .
(b) For each vertex u ∈ Q and high component γ in P , if uγ ∈ Γ, then create an edge uγ ∈ G * .
(b′) For each low vertex u ∈ Q and low component γ in P , if uγ ∈ Γ, then create an edge uγ ∈ G * .
(c) For each u, v ∈ Q, if uv ∈ E, then create an edge uv ∈ G * .
We maintain G * in a data structure for dynamic connectivity with polylogarithmic-time edge updates.
Justification. We claim that two vertices of Q are connected in the subgraph induced by the active vertices in G iff they are connected in G * . The "if" direction is obvious. For the "only if" direction, suppose two vertices u, v ∈ Q are "directly" connected in G by being adjacent to a common component γ in P . If γ is high, then edges of type (b) ensure that u and v are connected in G * . If u and v are both low, then edges of type (b ′ ) ensure that u and v are connected in G * . In the remaining case, at least one of the two vertices, say, u is high, and γ is low; here, edges of type (a ′ ) ensure that u and v are again connected in G * . The claim follows by concatenation.
Queries. Given two vertices v 1 and v 2 , if both are in Q, we can simply test whether they are connected in G * . If instead v j (j ∈ {1, 2}) is in a component γ j , then we can replace v j with any vertex of Q adjacent to γ j in G * . If no such vertex exists, then because of type-(b ′ ) edges, γ j can only be adjacent to high vertices of Q. We can exhaustively search for a high vertex in Q adjacent to γ j in Γ, in O(∆) time, and replace v j with such a vertex. If no such vertex exists, then γ j is an isolated component and we can simply test whether v 1 and v 2 are both in γ j . The cost is O(∆).
Preprocessing per phase. At the beginning of each phase, the cost to preprocess the data structure is O(m∆) as before. We can charge every update operation with an amortized cost of O(m∆/q) = O(∆^2).
Edge updates. We can simulate the insertion of an edge uv by inserting a new low vertex z adjacent to only u and v to Q. Since the degree is 2, the cost is O(1). We can later simulate the deletion of this edge by deleting the vertex z from Q.
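The edge-update simulation is mechanical; a minimal sketch, assuming a vertex-update interface insert_vertex/delete_vertex on the subgraph-connectivity structure (names chosen here for illustration), looks like this:

```python
# Sketch of the edge-update simulation just described.  Only the degree-2 proxy-vertex idea
# comes from the text; the vertex-update interface names are assumptions.

class EdgeViaVertexUpdates:
    def __init__(self, ds):
        self.ds = ds            # assumed structure with insert_vertex(v, neighbors), delete_vertex(v)
        self.proxy = {}         # (u, v) -> proxy vertex z
        self.counter = 0

    def insert_edge(self, u, v):
        z = ("proxy", self.counter)
        self.counter += 1
        self.proxy[(u, v)] = z
        self.ds.insert_vertex(z, [u, v])    # a new low vertex of degree 2: O(1)-degree insertion

    def delete_edge(self, u, v):
        z = self.proxy.pop((u, v))
        self.ds.delete_vertex(z)            # deleting the degree-2 proxy removes the edge
```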
Update of a high vertex
Range searching tools from geometry
Next, we need known range searching techniques. These techniques give linear-space data structures (S(n) = O(n)) that can retrieve all objects intersecting a query object in sublinear time (T(n) = O(n^{1−b})) for many types of geometric objects. We assume that our class of geometric objects satisfies the following property (Property 3) for some constant b > 0; this property neatly summarizes all we need to know from geometry. The property is typically proved by applying a suitable "partition theorem" in a recursive manner, thereby forming a so-called "partition tree"; for example, see the work by Matoušek [28] or the survey by Agarwal and Erickson [2]. Each canonical subset corresponds to a node of the partition tree (more precisely, the subset of all objects stored at the leaves underneath the node). Matoušek's results imply that b = 1/d − ε is attainable for simplices or constant-size polyhedra in R^d. (To go from simplex range searching to intersection searching, one uses multi-level partition trees; e.g., see [29].) Further results by Agarwal and Matoušek [3] yield b = 1/(d + 1) − ε for balls in R^d and nontrivial values of b for other families of curved objects (semialgebraic sets of constant degree). The special case of axis-parallel boxes corresponds to b = 1.
The specific bounds in (i) and (ii) may not be too well known, but they follow from the hierarchical way in which canonical subsets are constructed. For example, (ii) follows since the subsets in C_z of size at most n/∆ are contained in O(∆^{1−b}) subsets of size O(n/∆). In fact, (multi-level) partition trees guarantee a stronger inequality, Σ_{C ∈ C_z} |C|^{1−b} = O(n^{1−b}), from which both (i) and (ii) can be obtained after a moment's thought.
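Before turning to applications of the property, here is a toy one-dimensional illustration of canonical subsets (a simplification for intuition only; the actual Property 3 relies on the multi-level partition trees cited above):

```python
# Toy 1-D analogue of canonical subsets: build the implicit balanced recursion tree over
# points sorted by coordinate; every node is a canonical subset, and any query interval
# [a, b] decomposes into O(log n) maximal canonical subsets fully contained in it.

def decompose_query(points, a, b):
    """Return the maximal canonical subsets (as lists of points) whose points all lie in [a, b]."""
    pts = sorted(points)
    out = []

    def rec(lo, hi):                      # the node covering pts[lo:hi]
        if lo >= hi or pts[hi - 1] < a or pts[lo] > b:
            return                        # node disjoint from the query
        if a <= pts[lo] and pts[hi - 1] <= b:
            out.append(pts[lo:hi])        # node fully inside: report one canonical subset
            return
        mid = (lo + hi) // 2
        rec(lo, mid)
        rec(mid, hi)

    rec(0, len(pts))
    return out

# Example: the points of [3, 8] among 0..15 are covered by three canonical subsets.
print(decompose_query(range(16), 3, 8))
```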
As an illustration, we can use the above property to develop a data structure for a special case of dynamic geometric connectivity where insertions are done in "blocks" but arbitrary deletions are to be supported. Although the insertion time is at least linear, the result is good if the block size s is sufficiently large. This subroutine will make up a part of the final solution.
Lemma 4. We can maintain the connected components among a set S of objects in a data structure that supports insertion of a block of s objects in O(n + s · n^{1−b}) amortized time (s < n), and deletion of a single object in O(1) amortized time.
Proof. We maintain a multigraph H in a data structure for dynamic connectivity with polylogarithmic edge update time (which explicitly maintains the connected components), where the vertices are the objects of S. This multigraph will obey the invariant that two objects are geometrically connected iff they are connected in S. We do not insist that H has linear size.
Insertion of a block B to S. We first form a collection C of canonical subsets for S ∪ B by Property 3. For each z ∈ B and each C ∈ C_z, we assign z to C. For each canonical subset C ∈ C, if C is assigned at least one object of B, then we create new edges in H linking all objects of C and all objects assigned to C in a path. (If this path overlaps with previous paths, we create multiple copies of edges.) The number of edges inserted is thus O(n + |B| · n^{1−b}).
Justification. The invariant is satisfied since all objects in a canonical subset C intersect all objects assigned to C, and are thus all connected if there is at least one object assigned to C.
Deletion of an object z from S. For each canonical subset C containing or assigned the object z, we need to delete at most 2 edges and insert 1 edge to maintain the path. As soon as the path contains no object assigned to C, we delete all the edges in the path. Since the length of the path can only decrease over the entire update sequence, the total number of such edge updates is proportional to the initial length of the path. We can charge the cost to edge insertions.
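A schematic rendering of Lemma 4's block insertion and deletion follows, with the dynamic edge-connectivity structure conn and the canonical-subset oracle canonical_subsets_for left as assumed interfaces, and with the tear-down of a path once it carries no assigned object omitted for brevity:

```python
# Schematic of the multigraph H of Lemma 4.  Block insertion: every canonical subset C that
# receives at least one new object has C's objects and the objects assigned to C linked into
# a path of edges.  Deletion: the path is repaired locally with at most 2 edge deletions and
# 1 edge insertion.  `conn` (insert_edge/delete_edge) and `canonical_subsets_for(objects, z)`
# (returning C_z as (id, subset) pairs) are assumed interfaces.

class Lemma4Sketch:
    def __init__(self, conn, canonical_subsets_for):
        self.conn = conn
        self.canonical_subsets_for = canonical_subsets_for
        self.paths = []                                   # every path created so far

    def _link(self, path):
        for u, v in zip(path, path[1:]):
            self.conn.insert_edge(u, v)

    def insert_block(self, S, B):                         # S, B: lists of objects
        assigned, subset_objs = {}, {}
        for z in B:
            for C_id, C_objs in self.canonical_subsets_for(S + B, z):
                assigned.setdefault(C_id, []).append(z)   # assign z to C
                subset_objs[C_id] = list(C_objs)
        for C_id, zs in assigned.items():
            path = subset_objs[C_id] + zs                 # C's objects, then the assigned objects
            self._link(path)
            self.paths.append(path)

    def delete_object(self, z):
        for path in self.paths:
            if z in path:
                i = path.index(z)
                if i > 0:
                    self.conn.delete_edge(path[i - 1], path[i])
                if i + 1 < len(path):
                    self.conn.delete_edge(path[i], path[i + 1])
                if 0 < i < len(path) - 1:
                    self.conn.insert_edge(path[i - 1], path[i + 1])   # splice the path back together
                path.pop(i)
```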
Putting it together
We are finally ready to present our sublinear result for dynamic geometric connectivity. We again need the idea of rebuilding periodically, and splitting smaller sets from larger ones. In addition to the graph H (of superlinear size) from Lemma 4, which undergoes insertions only in blocks, the key lies in the definition of another subtly crafted intermediate graph G (of linear size), maintained this time by the subgraph connectivity structure of Lemma 2. The definition of this graph involves multiple types of vertices and edges. The details of the analysis and the setting of parameters get more interesting.
Theorem 5. Assume 0 < b ≤ 1/2. We can maintain a collection of objects in amortized update time O(n^{1−b^2/(2+b)}) and answer connectivity queries in time O(n^{b/(2+b)}).
Proof. We divide the update sequence into phases, each consisting of y := n^b updates. The current objects are partitioned into two sets X and Y, where X undergoes only deletions and Y undergoes both insertions and deletions. Each insertion is done to Y. At the end of each phase, we move the elements of Y to X and reset Y to the empty set. This way, |Y| is kept at most y at all times.
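The phase discipline itself is simple bookkeeping; the sketch below (with rebuild standing in for the per-phase preprocessing described next, an assumed callback) records the X/Y split:

```python
import math

# Skeleton of the phase discipline: Y receives all insertions, and every y = ceil(n^b)
# updates the elements of Y are moved to X and the structures are rebuilt.

class PhaseScheduler:
    def __init__(self, initial_objects, b, rebuild):
        self.X, self.Y = set(initial_objects), set()
        self.b, self.rebuild = b, rebuild
        self._start_phase()

    def _start_phase(self):
        self.X |= self.Y                                   # move the elements of Y to X
        self.Y = set()
        n = max(len(self.X), 1)
        self.budget = max(int(math.ceil(n ** self.b)), 1)  # y := n^b updates in this phase
        self.rebuild(self.X, self.Y)

    def update(self, obj, is_insertion):
        if is_insertion:
            self.Y.add(obj)                                # each insertion is done to Y
        else:
            self.X.discard(obj)                            # a deletion may hit X or Y
            self.Y.discard(obj)
        self.budget -= 1
        if self.budget == 0:
            self._start_phase()
```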
At the beginning of each phase, we form a collection C of canonical subsets for X by Property 3.
The data structure.
• We maintain the components of X in the data structure from Lemma 4.
• We maintain the following graph G for dynamic subgraph connectivity, where the vertices are objects of X ∪ Y , components of X, and the canonical subsets of the current phase:
(a) Create an edge in G between each component of X and each of its objects.
(b) Create an edge in G between each canonical subset and each of its objects in X.
(c) Create an edge in G between each object z ∈ Y and each canonical subset C ∈ C_z. Here, we assign z to C.
(d) Create an edge in G between every two intersecting objects in Y .
(e) We make a canonical subset active in G iff it is assigned at least one object in Y . Vertices that are objects or components are always active. Justification. We claim that two objects are geometrically connected in X ∪ Y iff they are connected in the subgraph induced by the active vertices in the graph G. The "only if" direction is obvious. For the "if" direction, we note that all objects in an active canonical subset C intersect all objects assigned to C and are thus all connected.
Queries. We answer a query by querying in the graph G. The cost is O(∆).
Preprocessing per phase. Before a new phase begins, we need to update the components in X as we move all elements of Y to X (a block insertion). By Lemma 4, the cost is O(n + y · n^{1−b}). Together with the cost of rebuilding and updating the graph G, this amortizes per update to O(∆^2 + ∆^{1−b} · n/∆ + n/∆^b) = O(n^{1−b} ∆^2 + n/∆^b).
Deletion of an object z in X. We first update the components of X. By Lemma 4, the amortized cost is O(1). We can now update the edges of type (a) in G. The total number of such edge updates per phase is O(n lg n), by always splitting smaller components from larger ones. The amortized number of edge updates is thus O(n/y). The amortized cost is O((n/y) ∆^2) = O(n^{1−b} ∆^2).
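The "splitting smaller components from larger ones" step is the standard smaller-half trick; a sketch follows, with relabel an assumed callback performing the corresponding edge deletion and insertion in G:

```python
# Smaller-half trick: when a component of X splits, only the objects on the smaller side
# are re-attached to a fresh component vertex, so each object is relabeled O(log n) times
# and the type-(a) edge updates per phase total O(n log n).

def split_component(old_comp_id, side1, side2, new_comp_id, relabel):
    smaller = side1 if len(side1) <= len(side2) else side2
    for obj in smaller:
        relabel(obj, old_comp_id, new_comp_id)   # one edge deletion + one edge insertion in G
    return smaller                               # the side now attached to new_comp_id
```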
Finale. The overall amortized cost per update operation is O(n^{1−b} ∆^2 + n/∆^b). Set ∆ = n^{b/(2+b)}; with this choice both terms become O(n^{1−b^2/(2+b)}), and the query cost O(∆) is O(n^{b/(2+b)}), as claimed.
Note that we can still prove the theorem for b > 1/2, by handling the O(y^2) intersections among Y (the type (d) edges) in a less naive way. However, we are not aware of any specific applications with b ∈ (1/2, 1).
Offline Dynamic Geometric Connectivity
For the special case of offline updates, we can improve the result of Section 4 for small values of b by a different method using rectangular matrix multiplication.
Let M[n_1, n_2, n_3] represent the cost of multiplying a Boolean n_1 × n_2 matrix A with a Boolean n_2 × n_3 matrix B. Let M[n_1, n_2, n_3 | m_1, m_2] represent the same cost under the knowledge that the number of 1's in A is m_1 and the number of 1's in B is m_2. We can reinterpret this task in graph terms: suppose we are given a tripartite graph with vertex classes V_1, V_2, V_3 of sizes n_1, n_2, n_3, respectively, where there are m_1 edges between V_1 and V_2 and m_2 edges between V_2 and V_3. Then M[n_1, n_2, n_3 | m_1, m_2] represents the cost of deciding, for each u ∈ V_1 and v ∈ V_3, whether u and v are adjacent to a common vertex in V_2.
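The reinterpretation can be checked directly with a naive Boolean product (this only illustrates the definition, not the fast rectangular multiplication discussed below):

```python
import numpy as np

# With A the V1-V2 and B the V2-V3 Boolean adjacency matrices, u in V1 and v in V3 have a
# common neighbor in V2 exactly when the Boolean product (AB)[u, v] is nonzero.

def common_neighbor_matrix(A, B):
    A = np.asarray(A, dtype=np.int64)
    B = np.asarray(B, dtype=np.int64)
    return (A @ B) > 0

A = [[1, 0]]       # the single vertex of V1 is adjacent to vertex 0 of V2
B = [[0], [1]]     # vertex 1 of V2 is adjacent to the single vertex of V3
print(common_neighbor_matrix(A, B))   # [[False]]: no common neighbor in V2
```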
An offline degree-sensitive version of subgraph connectivity
We begin with an offline variant of Lemma 2:
Lemma 6. Let 1 ≤ ∆ ≤ q ≤ m. We [...]
Proof. We divide the update sequence into phases, each consisting of q low-vertex updates. The active vertices are partitioned into two sets P and Q, with Q ⊆ Q_0, where P and Q_0 are static and Q undergoes both insertions and deletions. Each vertex insertion/deletion is done to Q. At the end of each phase, we reset Q_0 to hold all O(∆) high vertices plus the low vertices involved in the updates of the next phase, reset P to hold all active vertices not in Q_0, and reset Q to hold all active vertices in Q_0. Clearly, |Q| ≤ |Q_0| = O(q).
The data structure is the same as the one in the proof of Lemma 2, with one key difference: we only maintain the value C[u, v] when u is a high vertex in Q_0 and v is a (high or low) vertex in Q_0. Moreover, we do not need to distinguish between high and low components, i.e., all components are considered low.
During preprocessing of each phase, we can now compute the values C[u, v] [...]. Deletions in P do not occur now.
Sparse and dense rectangular matrix multiplication
Sparse matrix multiplication can be reduced to multiplying smaller dense matrices by using a "high-low" trick [5]. Fact 7(i) below can be viewed as a variant of [6, Lemma 3.1] and of a result of Yuster and Zwick [38]; incidentally, this fact is sufficiently powerful to yield a simple(r) proof of Yuster and Zwick's sparse matrix multiplication result when combined with known bounds on dense rectangular matrix multiplication. Fact 7(ii) below states one known bound on dense rectangular matrix multiplication which we will use.
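The high-low trick itself can be sketched as follows. This is a rough sketch under assumed adjacency-dict inputs, not the precise statement of Fact 7; in the real reduction the high part is handled by fast rectangular matrix multiplication, which is replaced here by a naive placeholder.

```python
# High-low trick for a sparse Boolean product, with A and B given as adjacency dicts over
# the middle vertex class V2 (A_adj[k] = rows of A with a 1 in column k, B_adj[k] = columns
# of B with a 1 in row k).  Middle vertices of total degree > t are "high"; there are at
# most O(m/t) of them and they would go through a dense rectangular product.  Low middle
# vertices contribute at most deg_A(k) * deg_B(k) <= t^2 pairs each, O(m t) overall, and
# are enumerated directly.

def high_low_product(A_adj, B_adj, n1, n3, t):
    C = [[False] * n3 for _ in range(n1)]
    middle = set(A_adj) | set(B_adj)
    high = {k for k in middle
            if len(A_adj.get(k, ())) + len(B_adj.get(k, ())) > t}

    # High part: placeholder for a dense rectangular product over the compressed middle class.
    for k in high:
        for u in A_adj.get(k, ()):
            for v in B_adj.get(k, ()):
                C[u][v] = True

    # Low part: direct enumeration of incident pairs.
    for k in middle - high:
        for u in A_adj.get(k, ()):
            for v in B_adj.get(k, ()):
                C[u][v] = True
    return C
```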
Putting it together
We now present our offline result for dynamic geometric connectivity using Lemma 6. Although we also use Property 3, the design of the key graph G is quite different from the one in the proof of Theorem 5. For instance, the size of the graph is larger (and no longer O(n)), but the number of edges incident to high vertices remains linear; furthermore, each object update triggers only a constant number of vertex updates in the graph. All the details come together in the analysis to lead to some intriguing choices of parameters.
Proof. We divide the update sequence into phases, each consisting of q updates, where q is a parameter satisfying ∆ ≤ q ≤ n/∆^{1−b}. The current objects are partitioned into two sets X and Y, with Y ⊆ Y_0, where X and Y_0 are static and Y undergoes both insertions and deletions. Each insertion/deletion is done to Y. At the end of each phase, we reset Y_0 to hold all objects involved in the updates of the next phase, X to hold all current objects not in Y_0, and Y to hold all current objects in Y_0. Clearly, |Y| ≤ |Y_0| = O(q). At the beginning of each phase, we form a collection C of canonical subsets for X ∪ Y_0 by Property 3.
The data structure.
• We maintain the components of X in the data structure from Lemma 4.
• We maintain the following graph G for offline dynamic subgraph connectivity, where the vertices are objects of X ∪ Y_0, components of X, and canonical subsets of size exceeding n/∆:
(a) Create an edge in G between each component of X and each of its objects.
(b) Create an edge in G between each canonical subset C of size exceeding n/∆ and each of its objects in X ∪ Y .
(c) Create an edge in G between each object z ∈ Y_0 and each canonical subset C ∈ C_z of size exceeding n/∆. Here, we assign z to C.
(d) Create an edge in G between each object z ∈ Y_0 and each object in the union of the canonical subsets in C_z of size at most n/∆.
(e) We make a canonical subset active in G iff it is assigned at least one object in Y . We make the vertices in X ∪Y active, and all components active. The high vertices are precisely the canonical subsets of size exceeding n/∆; there are O(∆) such vertices. Update of an object z in Y . We need to make a single vertex update z in G, which has degree O(n/∆ b ) by Property 3(ii). Furthermore, we may have to change the status of as many as O(∆ 1−b ) high vertices by Property 3(i). According to Lemma 8, the cost of these vertex updates is O(M [∆, n, q | n, m]/q + n/∆ b + ∆ 1−b q).
Finale. By Fact 7, assuming that ∆ ≤ q^α and q ≤ n/t, we have M[∆, n, q | n, m] = O(M[∆, n/t, q] + mt) = O(nq/t + nqt/∆^b). Choosing t = ∆^{b/2} gives O(nq/∆^{b/2}). The overall amortized cost per update operation is thus O(n/∆^{b/2} + ∆^{1−b} q + n/q + n^{1−b}). Set ∆ = q^α and q = n^{1/(1+α−bα/2)} and the result follows. (Note that indeed ∆ ≤ q ≤ n/∆^{1−b} and q ≤ n/t for these choices of parameters.) Compared to Theorem 5, the dependence on b of the exponent in the update bound is only 1 − Θ(b) rather than 1 − Θ(b^2). The bound is better, for example, for b ≤ 1/4.
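The exponent bookkeeping in this finale can be sanity-checked numerically; the value of α comes from Fact 7(ii), which is not reproduced above, so it is treated here as an input with a purely illustrative value.

```python
# Numerical sanity check of the exponent bookkeeping above (alpha is an assumed input).

def update_exponent(b, alpha):
    q_exp = 1.0 / (1.0 + alpha - b * alpha / 2.0)   # q = n^{q_exp}
    d_exp = alpha * q_exp                           # Delta = q^alpha = n^{d_exp}
    terms = {
        "n / Delta^{b/2}": 1.0 - b * d_exp / 2.0,
        "Delta^{1-b} q":   (1.0 - b) * d_exp + q_exp,
        "n / q":           1.0 - q_exp,
        "n^{1-b}":         1.0 - b,
    }
    return max(terms.values()), terms

best, terms = update_exponent(b=0.25, alpha=0.3)        # illustrative values only
theorem5 = 1.0 - 0.25 ** 2 / (2 + 0.25)                 # exponent from Theorem 5 for comparison
print(round(best, 4), round(theorem5, 4))               # the offline exponent is smaller here
```

For these illustrative values the offline update exponent indeed comes out below the Theorem 5 exponent, consistent with the remark that the bound is better for b ≤ 1/4.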
Open Problems
Our work opens up many interesting directions for further research. For subgraph connectivity, an obvious question is whether the O(m^{2/3}) vertex-update bound can be improved (without or with FMM); as we have mentioned, improvements beyond √m without FMM are not possible without a breakthrough on the triangle-finding problem. An intriguing question is whether for dense graphs we can achieve update time sublinear in n, i.e., O(n^{1−ε}) (or possibly even sublinear in the degree). For geometric connectivity, it would be desirable to determine the best update bounds for specific shapes such as line segments and disks in two dimensions. Also, directed settings of geometric connectivity arise in applications and are worth studying; for example, when sensors' transmission ranges are balls of different radii or wedges, a sensor may lie in another sensor's range without the reverse being true.
For both subgraph and geometric connectivity, we can reduce the query time at the expense of increasing the update time, but we do not know whether constant or polylogarithmic query time is possible with sublinear update time in general (see [1] for a result on the 2-dimensional orthogonal special case). Currently, we do not know how to obtain our update bounds with linear space (e.g., Theorem 1 requires O(m^{4/3}) space), nor do we know how to get good worst-case update bounds (since the known polylogarithmic results for connectivity under edge updates are all amortized). Also, the queries we have considered are about connectivity between two vertices/objects. Can nontrivial results be obtained for richer queries such as counting the number of connected components (see [1] on the 2-dimensional orthogonal case), or perhaps shortest paths or minimum cut?
| 6,473 |
0807.4326
|
2951552853
|
In this work we suggest a new model for generating random satisfiable k-CNF formulas. To generate such formulas -- randomly permute all 2^k (n choose k) possible clauses over the variables x_1, ..., x_n, and starting from the empty formula, go over the clauses one by one, including each new clause as you go along if after its addition the formula remains satisfiable. We study the evolution of this process, namely the distribution over formulas obtained after scanning through the first m clauses (in the random permutation's order). Random processes with conditioning on a certain property being respected are widely studied in the context of graph properties. This study was pioneered by Ruciński and Wormald in 1992 for graphs with a fixed degree sequence, and also by Erdős, Suen, and Winkler in 1995 for triangle-free and bipartite graphs. Since then many other graph properties were studied such as planarity and H-freeness. Thus our model is a natural extension of this approach to the satisfiability setting. Our main contribution is as follows. For m ≥ cn, c=c(k) a sufficiently large constant, we are able to characterize the structure of the solution space of a typical formula in this distribution. Specifically, we show that typically all satisfying assignments are essentially clustered in one cluster, and all but e^{-Ω(m/n)} n of the variables take the same value in all satisfying assignments. We also describe a polynomial time algorithm that finds with high probability a satisfying assignment for such formulas.
|
Almost all polynomial-time heuristics suggested so far for random instances (either SAT or graph optimization problems) were analyzed when the input is sampled according to a planted-solution distribution, or various semi-random variants thereof. Alon and Kahale @cite_21 suggest a polynomial time algorithm based on spectral techniques that @math properly @math -colors a random graph from the planted @math -coloring distribution (the distribution of graphs generated by partitioning the @math vertices into @math equally-sized color classes, and including every edge connecting two different color classes with probability @math ), for graphs with average degree greater than some constant. In the SAT context, Flaxman's algorithm, drawing on ideas from @cite_21 , solves @math planted 3SAT instances where the clause-variable ratio is greater than some constant. Also @cite_15 @cite_6 @cite_12 address the planted 3SAT distribution.
|
{
"abstract": [
"",
"Let G3n,p,3 be a random 3-colorable graph on a set of 3n vertices generated as follows. First, split the vertices arbitrarily into three equal color classes, and then choose every pair of vertices of distinct color classes, randomly and independently, to be edges with probability p. We describe a polynomial-time algorithm that finds a proper 3-coloring of G3n,p,3 with high probability, whenever p @math c n, where c is a sufficiently large absolute constant. This settles a problem of Blum and Spencer, who asked if an algorithm can be designed that works almost surely for p @math polylog(n) n [J. Algorithms, 19 (1995), pp. 204--234]. The algorithm can be extended to produce optimal k-colorings of random k-colorable graphs in a similar model as well as in various related models. Implementation results show that the algorithm performs very well in practice even for moderate values of c.",
"We present an algorithm for solving 3SAT instances. Several algorithms have been proved to work whp (with high probability) for various SAT distributions. However, an algorithm that works whp has a drawback. Indeed for typical instances it works well, however for some rare inputs it does not provide a solution at all. Alternatively, one could require that the algorithm always produce a correct answer but perform well on average. Expected polynomial time formalizes this notion. We prove that for some natural distribution on 3CNF formulas, called planted 3SAT, our algorithm has expected polynomial (in fact, almost linear) running time. The planted 3SAT distribution is the set of satisfiable 3CNF formulas generated in the following manner. First, a truth assignment is picked uniformly at random. Then, each clause satisfied by it is included in the formula with probability p. Extending previous work for the planted 3SAT distribution, we present, for the first time for a satisfiable SAT distribution, an expected polynomial time algorithm. Namely, it solves all 3SAT instances, and over the planted distribution (with p = d n2, d > 0 a sufficiently large constant) it runs in expected polynomial time. Our results extend to k-SAT for any constant k.",
"Experimental results show that certain message passing algorithms, namely, survey propagation, are very effective in finding satisfying assignments in random satisfiable 3CNF formulas. In this paper we make a modest step towards providing rigorous analysis that proves the effectiveness of message passing algorithms for random 3SAT. We analyze the performance of Warning Propagation, a popular message passing algorithm that is simpler than survey propagation. We show that for 3CNF formulas generated under the planted assignment distribution, running warning propagation in the standard way works when the clause-to-variable ratio is a sufficiently large constant. We are not aware of previous rigorous analysis of message passing algorithms for satisfiability instances, though such analysis was performed for decoding of Low Density Parity Check (LDPC) Codes. We discuss some of the differences between results for the LDPC setting and our results."
],
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_12",
"@cite_6"
],
"mid": [
"60182242",
"2079035346",
"1983171306",
"2139919528"
]
}
| 0 |