Convex Optimization Reading Notes (7)

Chapter 8: Geometric problems

8.1 Projection on a set

The distance of a point $x_0 \in \mathbf{R}^n$ to a closed set $C \subseteq \mathbf{R}^n$, in the norm $\|\cdot\|$, is defined as

$$\mathbf{dist}(x_0, C) = \inf\{\, \|x_0 - x\| \mid x \in C \,\}.$$

We refer to any point $z \in C$ that is closest to $x_0$, i.e., that satisfies $\|z - x_0\| = \mathbf{dist}(x_0, C)$, as a projection of $x_0$ on $C$.

8.1.1 Projecting a point on a convex set

If $C$ is convex, described by convex inequalities and linear equalities, then we can compute the projection $P_C(x_0)$ and the distance $\mathbf{dist}(x_0, C)$ by solving the convex optimization problem

$$\begin{aligned} \text{minimize} \quad & \|x - x_0\| \\ \text{subject to} \quad & f_i(x) \le 0, \quad i = 1, \dots, m \\ & Ax = b. \end{aligned}$$
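
As a concrete sketch (added to these notes; the data and the use of CVXPY are my own assumptions, not from the book), here is how one might project a point onto a polyhedron $\{x \mid Ax \preceq b\}$:

```python
# A minimal sketch, assuming CVXPY is installed; A, b, x0 are made-up data.
# Project x0 onto the polyhedron C = {x | Ax <= b} in the Euclidean norm.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # triangle: x+y<=1, x,y>=0
b = np.array([1.0, 0.0, 0.0])
x0 = np.array([1.5, 1.5])                             # point to project

x = cp.Variable(2)
prob = cp.Problem(cp.Minimize(cp.norm(x - x0, 2)), [A @ x <= b])
prob.solve()

print("dist(x0, C) =", prob.value)  # optimal value is the distance
print("P_C(x0)     =", x.value)     # optimal x is the projection
```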

8.1.2 Separating a point and a convex set

If $P_C(x_0)$ denotes the Euclidean projection of $x_0$ on $C$, where $x_0 \notin C$, then the hyperplane

$$(P_C(x_0) - x_0)^T \left( x - \tfrac{1}{2}(x_0 + P_C(x_0)) \right) = 0$$

(strictly) separates $x_0$ from $C$.
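
Continuing the CVXPY sketch above (same made-up data), the separating hyperplane can be read off from the computed projection:

```python
# Continues the projection sketch above; valid only when x0 is not in C.
p = x.value                    # P_C(x0), computed above
a = p - x0                     # hyperplane normal: P_C(x0) - x0
gamma = a @ ((x0 + p) / 2)     # passes through the midpoint of x0 and P_C(x0)
# Then a^T v > gamma for all v in C, while a^T x0 < gamma.
print("a =", a, " gamma =", gamma)
```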


In other norms, the clearest link between the projection problem and the separating hyperplane problem is via Lagrange duality.

8.1.3 Projection and separation via indicator and support functions

Define the indicator function $I_C$ and the support function $S_C$ of the set $C$:

$$S_C(x) = \sup_{y \in C} x^T y, \qquad I_C(x) = \begin{cases} 0, & x \in C \\ \infty, & x \notin C. \end{cases}$$

The problem of projecting $x_0$ on a closed convex set $C$ can be expressed compactly as

$$\begin{aligned} \text{minimize} \quad & \|x - x_0\| \\ \text{subject to} \quad & I_C(x) \le 0. \end{aligned}$$

The dual function of this problem is

$$g(z, \lambda) = \begin{cases} z^T x_0 - S_C(z), & \|z\|_* \le 1, \ \lambda \ge 0 \\ -\infty, & \text{otherwise}, \end{cases}$$

so we obtain the dual problem

$$\begin{aligned} \text{maximize} \quad & z^T x_0 - S_C(z) \\ \text{subject to} \quad & \|z\|_* \le 1. \end{aligned}$$
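
As a quick sanity check (a standard example added here, not part of the original summary): take $C$ to be the Euclidean unit ball and $\|\cdot\| = \|\cdot\|_2$, so that $\|\cdot\|_* = \|\cdot\|_2$ and $S_C(z) = \|z\|_2$. The dual becomes

$$\begin{aligned} \text{maximize} \quad & z^T x_0 - \|z\|_2 \\ \text{subject to} \quad & \|z\|_2 \le 1, \end{aligned}$$

whose optimal value, attained at $z = x_0 / \|x_0\|_2$ when $\|x_0\|_2 > 1$, is $\|x_0\|_2 - 1 = \mathbf{dist}(x_0, C)$, as expected.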

8.2 Distance between sets

The distance between two sets $C$ and $D$, in a norm $\|\cdot\|$, is defined as

$$\mathbf{dist}(C, D) = \inf\{\, \|x - y\| \mid x \in C, \ y \in D \,\}.$$

8.2.1 Computing the distance between convex sets

If $C$ and $D$ are described by convex inequalities, we can find $\mathbf{dist}(C, D)$ by solving the convex optimization problem

$$\begin{aligned} \text{minimize} \quad & \|x - y\| \\ \text{subject to} \quad & f_i(x) \le 0, \quad i = 1, \dots, m \\ & g_i(y) \le 0, \quad i = 1, \dots, p. \end{aligned}$$
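
A minimal CVXPY sketch of this problem, along the same lines as the projection example above (the two polyhedra are made-up data):

```python
# A minimal sketch, assuming CVXPY; the two polyhedra below are made up.
import cvxpy as cp
import numpy as np

A1 = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b1 = np.array([1.0, 0.0, 1.0, 0.0])    # C: the square [0,1] x [0,1]
A2 = A1.copy()
b2 = np.array([4.0, -3.0, 1.0, 0.0])   # D: the square [3,4] x [0,1]

x, y = cp.Variable(2), cp.Variable(2)
prob = cp.Problem(cp.Minimize(cp.norm(x - y, 2)),
                  [A1 @ x <= b1, A2 @ y <= b2])
prob.solve()
print("dist(C, D) =", prob.value)      # expect 2.0 for these squares
```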

8.2.2 Separating convex sets

First express the distance between the convex sets in the following equivalent form:

$$\begin{aligned} \text{minimize} \quad & \|w\| \\ \text{subject to} \quad & f_i(x) \le 0, \quad i = 1, \dots, m \\ & g_i(y) \le 0, \quad i = 1, \dots, p \\ & x - y = w. \end{aligned}$$

The dual function is

$$\begin{aligned} g(\lambda, z, \mu) &= \inf_{x, y, w}\Big( \|w\| + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \mu_i g_i(y) + z^T (x - y - w) \Big) \\ &= \begin{cases} \inf_x \big( \sum_{i=1}^m \lambda_i f_i(x) + z^T x \big) + \inf_y \big( \sum_{i=1}^p \mu_i g_i(y) - z^T y \big), & \|z\|_* \le 1 \\ -\infty, & \text{otherwise}, \end{cases} \end{aligned}$$

where we use $\inf_w (\|w\| - z^T w) = 0$ if $\|z\|_* \le 1$ and $-\infty$ otherwise. This results in the dual problem

$$\begin{aligned} \text{maximize} \quad & \inf_x \big( \textstyle\sum_{i=1}^m \lambda_i f_i(x) + z^T x \big) + \inf_y \big( \textstyle\sum_{i=1}^p \mu_i g_i(y) - z^T y \big) \\ \text{subject to} \quad & \|z\|_* \le 1 \\ & \lambda \succeq 0, \quad \mu \succeq 0. \end{aligned}$$

8.2.3 Distance and separation via indicator and support functions

The problem of finding the distance between two convex sets can be posed compactly as the convex problem

$$\begin{aligned} \text{minimize} \quad & \|x - y\| \\ \text{subject to} \quad & I_C(x) \le 0 \\ & I_D(y) \le 0. \end{aligned}$$

The dual of this problem is

$$\begin{aligned} \text{maximize} \quad & -S_C(-z) - S_D(z) \\ \text{subject to} \quad & \|z\|_* \le 1. \end{aligned}$$

8.3 Euclidean distance and angle problems

Suppose $a_1, \dots, a_n$ is a set of vectors in $\mathbf{R}^n$, which we assume (for now) have known Euclidean lengths

$$l_1 = \|a_1\|_2, \quad \dots, \quad l_n = \|a_n\|_2.$$

We will refer to the set of vectors as a configuration, or, when they are independent, a basis.

8.3.1 Gram matrix and realizability

The Gram matrix of the vectors $\{a_1, \dots, a_n\}$ is

$$G = A^T A, \qquad A = [\, a_1 \ \cdots \ a_n \,],$$

so that $G_{ij} = a_i^T a_j$ and $G_{ii} = l_i^2$. The distance between $a_i$ and $a_j$ is

$$d_{ij} = \|a_i - a_j\|_2 = (G_{ii} + G_{jj} - 2G_{ij})^{1/2}.$$

The correlation coefficient $\rho_{ij}$ between (nonzero) $a_i$ and $a_j$ is given by

$$\rho_{ij} = \frac{a_i^T a_j}{\|a_i\|_2 \|a_j\|_2} = \frac{G_{ij}}{\sqrt{G_{ii}}\sqrt{G_{jj}}},$$

and the angle $\theta_{ij}$ between them is

$$\theta_{ij} = \cos^{-1} \rho_{ij}.$$

A set of lengths, distances, and angles (or correlation coefficients) is realizable if and only if the associated Gram matrix $G$ is positive semidefinite and has diagonal elements $l_1^2, \dots, l_n^2$.
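
A small numerical sketch of this realizability test (NumPy; the lengths and correlations below are made up): build $G$, check that it is positive semidefinite, and if so recover a configuration with $A^T A = G$.

```python
# A minimal sketch, assuming NumPy; lengths l and correlations C are made up.
import numpy as np

l = np.array([1.0, 2.0, 1.5])           # prescribed lengths
C = np.array([[1.0, 0.5, 0.0],          # prescribed correlation coefficients
              [0.5, 1.0, 0.25],
              [0.0, 0.25, 1.0]])
G = np.outer(l, l) * C                  # Gram matrix: G_ij = l_i l_j rho_ij

w, V = np.linalg.eigh(G)
print("realizable:", w.min() >= -1e-9)  # realizable iff G is PSD

# Recover one configuration: A^T A = G, unique up to an orthogonal transform.
Acfg = (V * np.sqrt(np.clip(w, 0, None))).T
print("G reproduced:", np.allclose(Acfg.T @ Acfg, G))
```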

8.3.2 Problems involving angles only

Suppose we only care about the angles (or correlation coefficients) between the vectors, and do not specify the lengths or distances between them. Then the Gram matrix has the form

$$G = \mathbf{diag}(l)\, C\, \mathbf{diag}(l),$$

where $C_{ij} = \cos\theta_{ij}$ is the correlation matrix and $l$ is the vector of lengths.

8.3.3 Euclidean distance problems

In a Euclidean distance problem, we are concerned only with the distances between the vectors, $d_{ij}$, and do not care about the lengths of the vectors or about the angles between them.
A Euclidean distance matrix is a matrix $D \in \mathbf{S}^n$ with entries $D_{ij} = d_{ij}^2$ that has nonnegative elements, zero diagonal, and satisfies

$$G = (z\mathbf{1}^T + \mathbf{1}z^T - D)/2 \succeq 0 \ \text{ for some } z \succeq 0.$$

In summary, a matrix $D \in \mathbf{S}^n$ is a Euclidean distance matrix if and only if

$$D_{ii} = 0, \ i = 1, \dots, n, \qquad D_{ij} \ge 0, \ i, j = 1, \dots, n, \qquad \Big(I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^T\Big) D \Big(I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^T\Big) \preceq 0.$$
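
The last condition is easy to test numerically; a minimal NumPy sketch (the matrix of squared distances below comes from the points 0, 1, 3 on the real line):

```python
# A minimal sketch, assuming NumPy. D holds squared distances d_ij^2.
import numpy as np

D = np.array([[0.0, 1.0, 9.0],          # squared distances of the points
              [1.0, 0.0, 4.0],          # 0, 1, 3 on the real line
              [9.0, 4.0, 0.0]])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n     # I - (1/n) 1 1^T
is_edm = (np.allclose(np.diag(D), 0)
          and (D >= 0).all()
          and np.linalg.eigvalsh(J @ D @ J).max() <= 1e-9)  # J D J <= 0
print("Euclidean distance matrix:", is_edm)                 # expect True
```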

8.4 Extremal volume ellipsoids

8.4.1 The Löwner-John ellipsoid

The minimum volume ellipsoid that contains a set $C$ is called the Löwner-John ellipsoid of the set $C$, and is denoted $\mathcal{E}_{\rm lj}$. First parametrize an ellipsoid as the preimage of the unit ball under an affine mapping:

$$\mathcal{E} = \{\, v \mid \|Av + b\|_2 \le 1 \,\}.$$

Computing the minimum volume ellipsoid containing $C$ can then be expressed as

$$\begin{aligned} \text{minimize} \quad & \log\det A^{-1} \\ \text{subject to} \quad & \sup_{v \in C} \|Av + b\|_2 \le 1. \end{aligned}$$
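
When $C$ is a finite point set $\{v_1, \dots, v_K\}$ (whose Löwner-John ellipsoid is also that of its convex hull), the sup becomes one norm constraint per point and the problem is directly solvable. A minimal CVXPY sketch with made-up points:

```python
# A minimal sketch, assuming CVXPY; the points are made up.
import cvxpy as cp
import numpy as np

pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5], [0.5, -0.5]])
n = pts.shape[1]

A = cp.Variable((n, n), PSD=True)
b = cp.Variable(n)
constraints = [cp.norm(A @ p + b, 2) <= 1 for p in pts]
# Minimizing log det A^{-1} is the same as maximizing log det A.
prob = cp.Problem(cp.Maximize(cp.log_det(A)), constraints)
prob.solve()
print("A =\n", A.value)
print("b =", b.value)
```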

8.4.2 Maximum volume inscribed ellipsoid

Now parametrize an ellipsoid as the image of the unit ball under an affine mapping:

$$\mathcal{E} = \{\, Bu + d \mid \|u\|_2 \le 1 \,\}.$$

We consider the problem of finding the ellipsoid of maximum volume that lies inside a convex set $C$:

$$\begin{aligned} \text{maximize} \quad & \log\det B \\ \text{subject to} \quad & \sup_{\|u\|_2 \le 1} I_C(Bu + d) \le 0. \end{aligned}$$
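
For a polyhedron $C = \{x \mid a_i^T x \le b_i\}$ the semi-infinite constraint reduces to $\|B a_i\|_2 + a_i^T d \le b_i$, giving a tractable problem. A minimal CVXPY sketch (the polyhedron is made up):

```python
# A minimal sketch, assuming CVXPY; the polyhedron below is made up.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # x+y<=1, x>=0, y>=0
b = np.array([1.0, 0.0, 0.0])
n = 2

B = cp.Variable((n, n), PSD=True)
d = cp.Variable(n)
cons = [cp.norm(B @ A[i], 2) + A[i] @ d <= b[i] for i in range(len(b))]
prob = cp.Problem(cp.Maximize(cp.log_det(B)), cons)
prob.solve()
print("center d =", d.value)   # this d is also the MVE center of section 8.5.2
```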

8.4.3 Affine invariance of extremal volume ellipsoids

If $\mathcal{E}_{\rm lj}$ is the Löwner-John ellipsoid of $C$, and $T \in \mathbf{R}^{n \times n}$ is nonsingular, then the Löwner-John ellipsoid of $TC$ is $T\mathcal{E}_{\rm lj}$. A similar result holds for the maximum volume inscribed ellipsoid.

8.5 Centering

8.5.1 Chebyshev center

The depth of a point $x \in C$ is defined as

$$\mathbf{depth}(x, C) = \mathbf{dist}(x, \mathbf{R}^n \setminus C).$$

A Chebyshev center of the set $C$ is defined as any point of maximum depth in $C$:

$$x_{\rm cheb}(C) = \operatorname*{argmax}_x \mathbf{depth}(x, C) = \operatorname*{argmax}_x \mathbf{dist}(x, \mathbf{R}^n \setminus C).$$
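
For a polyhedron $C = \{x \mid a_i^T x \le b_i\}$ and the Euclidean norm, the Chebyshev center is found by a linear program, since the depth constraint becomes $a_i^T x + r\|a_i\|_2 \le b_i$. A minimal CVXPY sketch (same made-up triangle as above):

```python
# A minimal sketch, assuming CVXPY; the polyhedron is made up.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])

xc = cp.Variable(2)
r = cp.Variable()   # depth: radius of the largest inscribed Euclidean ball
cons = [A[i] @ xc + r * np.linalg.norm(A[i]) <= b[i] for i in range(len(b))]
prob = cp.Problem(cp.Maximize(r), cons)
prob.solve()
print("x_cheb =", xc.value, " depth =", r.value)
```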

8.5.2 Maximum volume ellipsoid center

As an extension of this idea, we define the maximum volume ellipsoid center of $C$, denoted $x_{\rm mve}$, as the center of the maximum volume ellipsoid that lies in $C$.


8.5.3 Analytic center of a set of inequalities

The analytic center $x_{\rm ac}$ of a set of convex inequalities and linear equalities

$$f_i(x) \le 0, \quad i = 1, \dots, m, \qquad Fx = g,$$

is defined as an optimal point of

$$\begin{aligned} \text{minimize} \quad & -\sum_{i=1}^m \log(-f_i(x)) \\ \text{subject to} \quad & Fx = g. \end{aligned}$$
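
For linear inequalities $a_i^T x \le b_i$ (so $f_i(x) = a_i^T x - b_i$) and no equality constraints, this is an unconstrained smooth convex problem. A minimal CVXPY sketch with the same made-up triangle:

```python
# A minimal sketch, assuming CVXPY; the inequalities are made up.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])

x = cp.Variable(2)
prob = cp.Problem(cp.Minimize(-cp.sum(cp.log(b - A @ x))))
prob.solve()
print("x_ac =", x.value)
```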

8.6 Classification

In pattern recognition and classification problems we are given two sets of points in $\mathbf{R}^n$, $\{x_1, \dots, x_N\}$ and $\{y_1, \dots, y_M\}$, and we seek a function $f: \mathbf{R}^n \to \mathbf{R}$ with

$$f(x_i) > 0, \ i = 1, \dots, N, \qquad f(y_i) < 0, \ i = 1, \dots, M.$$

If these inequalities hold, we say that $f$, or its 0-level set $\{x \mid f(x) = 0\}$, separates, classifies, or discriminates the two sets of points.

8.6.1 Linear discrimination

In linear discrimination, we seek an affine function $f(x) = a^T x - b$ that classifies the points:

$$a^T x_i - b > 0, \ i = 1, \dots, N, \qquad a^T y_i - b < 0, \ i = 1, \dots, M.$$

Since the strict inequalities are homogeneous in $a$ and $b$, they are feasible if and only if the set of nonstrict linear inequalities

$$a^T x_i - b \ge 1, \ i = 1, \dots, N, \qquad a^T y_i - b \le -1, \ i = 1, \dots, M$$

is feasible.
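
So linear discrimination reduces to an LP feasibility problem. A minimal CVXPY sketch (the two point sets are made up):

```python
# A minimal sketch, assuming CVXPY; the two point sets are made up.
import cvxpy as cp
import numpy as np

X = np.array([[2.0, 2.0], [3.0, 1.5], [2.5, 3.0]])    # points x_1..x_N
Y = np.array([[0.0, 0.0], [0.5, 1.0], [-1.0, 0.5]])   # points y_1..y_M

a = cp.Variable(2)
b = cp.Variable()
prob = cp.Problem(cp.Minimize(0),                     # pure feasibility
                  [X @ a - b >= 1, Y @ a - b <= -1])
prob.solve()
if prob.status == cp.OPTIMAL:
    print("separator: a =", a.value, " b =", b.value)
else:
    print("not linearly separable")
```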

8.6.2 Nonlinear discrimination

We can just as well seek a nonlinear function $f$, from a given subspace of functions, that is positive on one set and negative on another:

$$f(x_i) > 0, \ i = 1, \dots, N, \qquad f(y_i) < 0, \ i = 1, \dots, M.$$
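
One concrete instance is quadratic discrimination, $f(x) = x^T P x + q^T x + r$: the inequalities are linear in the coefficients $(P, q, r)$, so searching for them is again a convex feasibility problem. A minimal CVXPY sketch (the points, the margin of 1, and the normalization $P \preceq -I$, which forces the 0-level set to be an ellipsoid, are my own choices):

```python
# A minimal sketch, assuming CVXPY; points and the margin of 1 are made up.
import cvxpy as cp
import numpy as np

X = np.array([[0.1, 0.0], [-0.2, 0.1], [0.0, -0.15]])             # inner set
Y = np.array([[2.0, 0.0], [0.0, 2.0], [-2.0, 0.1], [0.3, -2.0]])  # outer set

P = cp.Variable((2, 2), symmetric=True)
q = cp.Variable(2)
r = cp.Variable()
f = lambda v: cp.quad_form(v, P) + q @ v + r   # affine in (P, q, r)
cons = ([f(v) >= 1 for v in X] + [f(v) <= -1 for v in Y]
        + [P << -np.eye(2)])                   # 0-level set is an ellipsoid
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
print(prob.status, "\nP =\n", P.value)
```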

8.7 Placement and location

We have $N$ points in $\mathbf{R}^2$ or $\mathbf{R}^3$, and a list of pairs of points that must be connected by links. The positions of some of the $N$ points are fixed; our task is to determine the positions of the remaining points, i.e., to place the remaining points. The problem is to minimize

$$\sum_{(i,j) \in \mathcal{A}} f_{ij}(x_i, x_j),$$

where $\mathcal{A}$ is the set of all links in the graph, and $f_{ij}: \mathbf{R}^k \times \mathbf{R}^k \to \mathbf{R}$ is a cost function associated with arc $(i, j)$.

8.7.1 Linear facility location problems

In the simplest version of the problem, the cost associated with arc $(i, j)$ is the distance between nodes $i$ and $j$: $f_{ij}(x_i, x_j) = \|x_i - x_j\|$, i.e., we minimize

$$\sum_{(i,j) \in \mathcal{A}} \|x_i - x_j\|.$$
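
A minimal CVXPY sketch of such a placement (the graph, the fixed positions, and the choice of the Euclidean norm are all made up):

```python
# A minimal sketch, assuming CVXPY; graph and coordinates are made up.
import cvxpy as cp
import numpy as np

fixed = {0: np.array([0.0, 0.0]),      # nodes 0, 1, 2 are fixed
         1: np.array([4.0, 0.0]),
         2: np.array([2.0, 3.0])}
free = cp.Variable((2, 2))             # rows: positions of free nodes 3 and 4
pos = lambda i: fixed[i] if i in fixed else free[i - 3]

links = [(0, 3), (1, 3), (2, 4), (3, 4)]
obj = sum(cp.norm(pos(i) - pos(j), 2) for (i, j) in links)
prob = cp.Problem(cp.Minimize(obj))
prob.solve()
print("placed points:\n", free.value)
```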

8.7.2 Placement constraints

We can impose a constraint that limits the points $x_1, \dots, x_p$ (say) to lie in a bounding box with perimeter not exceeding $P_{\max}$, by adding the constraints

$$u \preceq x_i \preceq v, \ i = 1, \dots, p, \qquad 2\,\mathbf{1}^T (v - u) \le P_{\max},$$

where $u, v$ are additional variables.

8.7.3 Nonlinear facility location problems

More generally, we can associate with each arc a cost that is a nonlinear increasing function of the length:

$$\text{minimize} \quad \sum_{i<j} w_{ij}\, h(\|x_i - x_j\|),$$

where $h$ is increasing (on $\mathbf{R}_+$) and convex, and $w_{ij} \ge 0$.
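
For instance, with $h(z) = z^2$ and unit weights the objective becomes a sum of squares (the quadratic placement problem); continuing the placement sketch above:

```python
# Continues the placement sketch above with the quadratic cost h(z) = z^2.
obj2 = sum(cp.sum_squares(pos(i) - pos(j)) for (i, j) in links)
cp.Problem(cp.Minimize(obj2)).solve()
print("quadratic placement:\n", free.value)
```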

8.7.4 Location problems with path constraints

8.8 Floor planning

A floor planning problem can be considered an extension of a placement problem in two ways:

  • The objects to be placed are rectangles or boxes aligned with the axes (as opposed to points), and must not overlap.
  • Each rectangle or box to be placed can be reconfigured, within some limits. For example, we might fix the area of each rectangle, but not the width and height separately.

In all floor planning problems, we require that the cells lie inside the bounding rectangle:

$$x_i \ge 0, \qquad y_i \ge 0, \qquad x_i + w_i \le W, \qquad y_i + h_i \le H, \quad i = 1, \dots, N.$$

We also require that the cells do not overlap, except possibly on their boundaries:

$$\mathbf{int}(C_i \cap C_j) = \emptyset \ \text{ for } i \ne j.$$

8.8.1 Relative positioning constraints

The idea of relative positioning constraints is to specify, for each pair of cells, one of the four possible relative positioning conditions: left, right, above, or below. One simple method to specify these constraints is to give two relations on $\{1, \dots, N\}$: $\mathcal{L}$ (meaning 'left of') and $\mathcal{B}$ (meaning 'below'). We then impose the constraint that $C_i$ is to the left of $C_j$ if $(i, j) \in \mathcal{L}$, and $C_i$ is below $C_j$ if $(i, j) \in \mathcal{B}$. This yields the constraints

$$x_i + w_i \le x_j \ \text{ for } (i, j) \in \mathcal{L}, \qquad y_i + h_i \le y_j \ \text{ for } (i, j) \in \mathcal{B}.$$

These inequalities can equivalently be collected using relations $\mathcal{H}$ (horizontal) and $\mathcal{V}$ (vertical):

$$x_i + w_i \le x_j \ \text{ for } (i, j) \in \mathcal{H}, \qquad y_i + h_i \le y_j \ \text{ for } (i, j) \in \mathcal{V}.$$

8.8.2 Floor planning via convex optimization

We impose the bounding box constraints and the relative positioning constraints, which are linear inequalities.
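
A minimal CVXPY sketch of the resulting problem (three cells with made-up fixed sizes and made-up relations $\mathcal{L}$, $\mathcal{B}$; with widths and heights held fixed it is a linear program):

```python
# A minimal sketch, assuming CVXPY; cells, sizes, and relations are made up.
import cvxpy as cp
import numpy as np

N = 3
x, y = cp.Variable(N), cp.Variable(N)     # lower-left corners of the cells
w = np.array([1.0, 2.0, 1.5])             # fixed widths
h = np.array([1.0, 1.0, 2.0])             # fixed heights
W, H = cp.Variable(), cp.Variable()       # bounding rectangle

L_rel = [(0, 1), (2, 1)]                  # 'left of' pairs
B_rel = [(0, 2)]                          # 'below' pairs (covers every pair)
cons = [x >= 0, y >= 0, x + w <= W, y + h <= H]
cons += [x[i] + w[i] <= x[j] for (i, j) in L_rel]
cons += [y[i] + h[i] <= y[j] for (i, j) in B_rel]

prob = cp.Problem(cp.Minimize(W + H), cons)  # W + H is half the perimeter
prob.solve()
print("W =", W.value, " H =", H.value)
```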

8.8.3 Floor planning via geometric programming

The floor planning problem can also be formulated as a geometric program in the variables $x_i$, $y_i$, $w_i$, $h_i$, $W$, $H$.