{"id":8165,"date":"2014-12-12T12:12:00","date_gmt":"2014-12-12T11:12:00","guid":{"rendered":"http:\/\/djalil.chafai.net\/blog\/?p=8165"},"modified":"2015-01-02T15:20:00","modified_gmt":"2015-01-02T14:20:00","slug":"on-certain-0-1-matrices","status":"publish","type":"post","link":"https:\/\/djalil.chafai.net\/blog\/2014\/12\/12\/on-certain-0-1-matrices\/","title":{"rendered":"On certain 0-1 matrices"},"content":{"rendered":"<figure id=\"attachment_8048\" aria-describedby=\"caption-attachment-8048\" style=\"width: 220px\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/djalil.chafai.net\/blog\/wp-content\/uploads\/2014\/11\/220px-Petersen1_tiny.svg_.png\"><img loading=\"lazy\" src=\"http:\/\/djalil.chafai.net\/blog\/wp-content\/uploads\/2014\/11\/220px-Petersen1_tiny.svg_.png\" alt=\"Cubic graph\" width=\"220\" height=\"220\" class=\"size-full wp-image-8048\" srcset=\"https:\/\/djalil.chafai.net\/blog\/wp-content\/uploads\/2014\/11\/220px-Petersen1_tiny.svg_.png 220w, https:\/\/djalil.chafai.net\/blog\/wp-content\/uploads\/2014\/11\/220px-Petersen1_tiny.svg_-150x150.png 150w\" sizes=\"(max-width: 220px) 100vw, 220px\" \/><\/a><figcaption id=\"caption-attachment-8048\" class=\"wp-caption-text\">Cubic graph (source: Wikipedia)<\/figcaption><\/figure>\n<p style=\"text-align: justify;\">Recently, during students projects defense in \u00c9cole Polytechnique, a colleague asked if one knows nice elementary facts on the smallest eigenvalue of the adjacency matrix of a finite graph, for instance in relation with positivity. This reminded me some pretty old personal notes written in 2006 and never published. They form the matter of this post.<\/p>\n<p style=\"text-align: justify;\">This post is devoted to the following simple fact: up to a permutation of the coordinates, the symmetric semi-definite positive matrices with entries in \\( {\\{0,1\\}} \\) are block diagonal with diagonal blocks full of ones. 
We provide a simple proof of this result, and we connect it with the spectrum of finite simple graphs, and with Gaussian graphical models in Statistics.<\/p>\n<p style=\"text-align: justify;\"><b>Binary matrices.<\/b> Let us denote by \\( {\\mathcal{S}_n^+} \\) the set of \\( {n\\times n} \\) real symmetric matrices with spectrum in \\( {[0,+\\infty)} \\), by \\( {\\mathcal{S}_n^{+*}} \\) the set of \\( {n\\times n} \\) real symmetric matrices with spectrum in \\( {(0,+\\infty)} \\), and by \\( {\\mathcal{P}_n} \\) the set of \\( {n\\times n} \\) permutation matrices: \\( {P\\in\\mathcal{P}_n} \\) iff there exists a permutation \\( {\\sigma} \\) of \\( {\\{1,\\ldots,n\\}} \\) such that \\( {P_{j,k}=\\mathbf{1}_{j=\\sigma(k)}} \\) for all \\( {1\\leq j,k\\leq n} \\).<\/p>\n<blockquote style=\"background: white; border: solid thick #e4e5e7; text-align: justify; padding-left: 1em;\"><p><b>Theorem 1 (Positivity of Boolean matrices)<\/b> <em><a id=\"thbin-mat\"><\/a> If \\( {C} \\) is a symmetric \\( {n\\times n} \\) matrix with entries in \\( {\\{0,1\\}} \\) and unit diagonal, then \\( {C\\in\\mathcal{S}_n^+} \\) if and only if there exists \\( {P\\in\\mathcal{P}_n} \\) such that \\( {PCP^\\top} \\) is block diagonal with diagonal blocks full of \\( {1} \\).<\/em><\/p><\/blockquote>\n<p style=\"text-align: justify;\"><em>Proof:<\/em> If \\( {PCP^\\top} \\) is block diagonal with diagonal blocks full of \\( {1} \\) for some \\( {P\\in\\mathcal{P}_n} \\), then \\( {C\\in\\mathcal{S}_n^+} \\), since each diagonal block full of \\( {1} \\) is of the form \\( {vv^\\top} \\) with \\( {v} \\) a vector of ones, hence positive semi-definite. Conversely, let us show that if \\( {C\\in\\mathcal{S}_n^+} \\) then \\( {P C P^\\top} \\) is block diagonal with diagonal blocks full of \\( {1} \\), for some \\( {P\\in\\mathcal{P}_n} \\). The statement is true for \\( {n=1} \\) and \\( {n=2} \\). We proceed next by induction on \\( {n} \\). Assume that \\( {C\\in\\mathcal{S}_{n+1}^+} \\). 
Since \\( {C_{1:n,1:n}\\in\\mathcal{S}_n^+} \\), it follows by the induction hypothesis that<\/p>\n<p style=\"text-align: center;\">\\[ Q C_{1:n,1:n}Q^\\top \\]<\/p>\n<p style=\"text-align: justify;\">is block diagonal with diagonal blocks full of \\( {1} \\), for some \\( {Q\\in\\mathcal{P}_n} \\). It is then enough to show that if \\( {i,j,k\\in\\{1,\\ldots,n+1\\}} \\) are distinct indices with \\( {n+1\\in\\{i,j,k\\}} \\) and \\( {C_{i,j}=C_{j,k}=1} \\), then \\( {C_{i,k}=1} \\). Now, since \\( {C\\in\\mathcal{S}_{n+1}^+} \\), we know that the \\( {3\\times 3} \\) matrix \\( {C_{J,J}} \\) obtained from \\( {C} \\) by its restriction to the block of three indices \\( {J:=\\{i,j,k\\}} \\) belongs to \\( {\\mathcal{S}_3^+} \\). But if \\( {C} \\) is a \\( {3\\times 3} \\) symmetric matrix with entries in \\( {\\{0,1\\}} \\) and unit diagonal, then \\( {C_{1:2,1:2}=(1,1)^\\top(1,1)} \\) and \\( {C\\in\\mathcal{S}_3^+} \\) imply either that \\( {C_{3,1:2}=(0,0)} \\) or that \\( {C_{3,1:2}=(1,1)} \\). Indeed, the quadratic form of \\( {C} \\) vanishes at \\( {(1,-1,0)^\\top} \\), hence this vector lies in the kernel of \\( {C} \\), and the third line of the identity \\( {C(1,-1,0)^\\top=0} \\) gives \\( {C_{3,1}=C_{3,2}} \\). &#9744;<\/p>\n<p style=\"text-align: justify;\">There is also an alternative probabilistic proof, based on the fact that \\( {\\mathcal{S}_n^+} \\) coincides with the set of covariance matrices of Gaussian random vectors of \\( {\\mathbb{R}^n} \\). If \\( {C\\in\\mathcal{S}_n^+} \\), then \\( {C} \\) is the covariance matrix of a centered Gaussian random vector \\( {Z} \\) of \\( {\\mathbb{R}^n} \\) with unit variances. Moreover,<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{E}(Z_iZ_j)=C_{i,j}\\in\\{0,1\\} \\]<\/p>\n<p style=\"text-align: justify;\">for any \\( {1\\leq i,j\\leq n} \\). Now, for any \\( {1\\leq i,j\\leq n} \\), we have that \\( {\\mathbb{E}(Z_iZ_j)=1} \\) if and only if \\( {Z_i=Z_j} \\) a.-s. whereas \\( {\\mathbb{E}(Z_iZ_j)=0} \\) if and only if \\( {Z_i} \\) and \\( {Z_j} \\) are independent. As a consequence, one can construct inductively a partition \\( {I_1,\\ldots,I_r} \\) of \\( {\\{1,\\ldots,n\\}} \\) such that \\( {Z_i=Z_j} \\) a.-s. 
for any \\( {i,j\\in I_k} \\) and any \\( {1\\leq k\\leq r} \\), and such that the Gaussian random vectors \\( {Z_{I_1},\\ldots,Z_{I_r}} \\) are mutually independent. It follows that the covariance matrix \\( {C} \\) of \\( {Z} \\) is block diagonal with diagonal blocks full of \\( {1} \\) up to a permutation of the coordinates \\( {\\{1,\\ldots,n\\}} \\).<\/p>\n<p style=\"text-align: justify;\"><b>Links with finite simple graphs.<\/b> For a positive integer \\( {n} \\), let us recall that a finite simple graph \\( {G} \\) with \\( {n} \\) vertices is a pair \\( {(V,E)} \\) where \\( {V:=\\{1,\\ldots,n\\}} \\) and where \\( {E} \\) is a symmetric off-diagonal subset of \\( {V\\times V} \\). The elements of \\( {V} \\) and \\( {E} \\) are respectively the vertices and the edges of \\( {G} \\). The graph \\( {G} \\) is uniquely determined by its <b>adjacency matrix<\/b> \\( {A} \\) defined by \\( {A_{i,j}:=1} \\) if \\( {\\{i,j\\}\\in E} \\) and \\( {A_{i,j}:=0} \\) otherwise. There is a one-to-one correspondence between the set of finite simple graphs with \\( {n} \\) vertices and the set of \\( {n\\times n} \\) symmetric matrices with entries in \\( {\\{0,1\\}} \\) and zero diagonal. As a real symmetric matrix, the adjacency matrix \\( {A} \\) of a finite simple graph with \\( {n} \\) vertices has real eigenvalues. If \\( {G} \\) has at least one edge and \\( {\\mu} \\) is the smallest eigenvalue of \\( {A} \\), then \\( {\\mu\\leq-1} \\) with equality if and only if the connected components of \\( {G} \\) are complete graphs, see for instance <a href= \"http:\/\/www.ams.org\/mathscinet-getitem?mr=0263674\">Doob<\/a> and <a href=\"http:\/\/www.ams.org\/mathscinet-getitem?mr=1324340\">Cvetkovic, Doob, and Sachs (th. 0.13 and 6.4)<\/a>. 
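<\/p>
<p style=\"text-align: justify;\">This dichotomy is easy to observe numerically. The following sketch (plain numpy; the helper name is mine) computes the smallest adjacency eigenvalue of a disjoint union of complete graphs, which equals \\( {-1} \\), and of a path with three vertices, which equals \\( {-\\sqrt{2}&lt;-1} \\).<\/p>

```python
import numpy as np

def complete_union_adjacency(block_sizes):
    # adjacency matrix of a disjoint union of complete graphs
    n = sum(block_sizes)
    A = np.zeros((n, n))
    start = 0
    for b in block_sizes:
        A[start:start + b, start:start + b] = 1.0
        start += b
    np.fill_diagonal(A, 0.0)  # simple graph: zero diagonal
    return A

# disjoint union of K_3, K_2, K_4: smallest eigenvalue is exactly -1
mu_complete = np.linalg.eigvalsh(complete_union_adjacency([3, 2, 4])).min()

# path on 3 vertices: smallest eigenvalue is -sqrt(2) < -1
P3 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
mu_path = np.linalg.eigvalsh(P3).min()
```

<p style=\"text-align: justify;\">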
We give below an elementary proof of this result.<\/p>\n<blockquote style=\"background: white; border: solid thick #e4e5e7; text-align: justify; padding-left: 1em;\"><p><b>Corollary 2<\/b> <em><a id=\"cographs\"><\/a> Let \\( {G=(V,E)} \\) be a finite simple graph with \\( {n} \\) vertices and adjacency matrix \\( {A} \\) with smallest eigenvalue \\( {\\mu} \\). If \\( {\\mu\\geq-1} \\), then either \\( {\\mu=0} \\) and \\( {E} \\) is empty, or \\( {\\mu=-1} \\) and the connected components of \\( {G} \\) are complete graphs.<\/em><\/p><\/blockquote>\n<p style=\"text-align: justify;\"><em>Proof:<\/em> We first notice that \\( {\\mu\\geq-1} \\) if and only if \\( {C:=A+I_n\\in\\mathcal{S}_n^+} \\). According to Theorem <a href=\"#thbin-mat\">1<\/a>, \\( {C\\in\\mathcal{S}_n^+} \\) if and only if there exists \\( {P\\in\\mathcal{P}_n} \\) such that \\( {PCP^\\top} \\) is block diagonal with diagonal blocks full of \\( {1} \\). Hence, either \\( {PCP^\\top=I_n} \\) (so that \\( {\\mu=0} \\) and \\( {E} \\) is empty), or \\( {PCP^\\top\\in\\mathcal{S}_n^+\\setminus\\mathcal{S}_n^{+*}} \\) (so that \\( {\\mu=-1} \\)). &#9744;<\/p>\n<p style=\"text-align: justify;\">Several aspects of finite simple graphs can be related to the spectrum of their adjacency matrix, as presented in the reference books <a href= \"http:\/\/www.ams.org\/mathscinet-getitem?mr=1633290\">Bollobas (VIII.2)<\/a> and <a href= \"http:\/\/www.ams.org\/mathscinet-getitem?mr=1324340\">Cvetkovic, Doob, and Sachs<\/a>, and for instance in the articles <a href= \"http:\/\/www.ams.org\/mathscinet-getitem?mr=1455672\">Yong<\/a>, <a href=\"http:\/\/www.ams.org\/mathscinet-getitem?mr=1675930\">Kelmans and Yong<\/a>, <a href= \"http:\/\/www.ams.org\/mathscinet-getitem?mr=2047763\">Das and Kumar<\/a>. 
The case of finite oriented graphs is in a way simpler, see for instance <a href= \"http:\/\/www.ams.org\/mathscinet-getitem?mr=2085343\">McKay, Oggier, Royle, Sloane, Wanless, and Wilf<\/a>.<\/p>\n<p style=\"text-align: justify;\"><b>Links with Gaussian graphical models in Statistics.<\/b> Let \\( {G} \\) be a finite simple graph with \\( {n} \\) vertices and adjacency matrix \\( {A} \\). Let \\( {\\mathcal{S}_n^{+*}(A)} \\) be the set of matrices \\( {K\\in\\mathcal{S}_n^{+*}} \\) such that \\( {K_{i,j}=0} \\) if \\( {A_{i,j}=0} \\) for any \\( {1\\leq i\\neq j\\leq n} \\).<\/p>\n<blockquote style=\"background: white; border: solid thick #e4e5e7; text-align: justify; padding-left: 1em;\"><p><b>Problem 3<\/b> <em>Let \\( {X_1,\\ldots,X_N} \\) be a random sample of the centered Gaussian distribution \\( {\\mathcal{N}(0,K)} \\) where \\( {K\\in\\mathcal{S}_n^{+*}(A)} \\). The empirical covariance matrix of the sample is<\/em> <\/p>\n<p style=\"text-align: center;\"><em>\\[ \\mathbb{X}:=\\frac{1}{N}\\sum_{k=1}^N X_k X_k^\\top\\in\\mathcal{S}_n^+. \\]<\/em><\/p>\n<p> <em>Assume that \\( {A} \\) is known. How can one provide a ``good'' numerical estimate of \\( {K} \\), belonging to \\( {\\mathcal{S}_n^{+*}(A)} \\) and computed from the sample?<\/em><\/p><\/blockquote>\n<p style=\"text-align: justify;\">Such estimation problems appear for instance in Gaussian covariance graph models, and in longitudinal data analysis, see for instance <a href=\"http:\/\/arxiv.org\/math.ST\/0508268\">Chaudhuri and Drton<\/a> and references therein. Recall that if \\( {N&gt;n} \\), the symmetric matrix \\( {\\mathbb{X}} \\) belongs almost-surely (a.-s.) to \\( {\\mathcal{S}_n^{+*}} \\), see for instance <a href= \"http:\/\/www.ams.org\/mathscinet-getitem?mr=0341745\">Eaton and Perlman<\/a>. 
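<\/p>
<p style=\"text-align: justify;\">For illustration, here is a small simulation (plain numpy; the graph pattern, the matrix \\( {K} \\), and the sample size are arbitrary choices of mine) of the naive estimator studied below: draw a sample of \\( {\\mathcal{N}(0,K)} \\) with \\( {K\\in\\mathcal{S}_n^{+*}(A)} \\), form the empirical covariance matrix, and zero out the off-pattern entries.<\/p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 5000

# pattern A of a graph with two connected components {0, 1} and {2, 3}
A = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])
C = A + np.eye(n)

# a covariance matrix K in S_n^{+*}(A): zeros exactly off the pattern
K = np.array([[2.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.3],
              [0.0, 0.0, 0.3, 2.0]])

X = rng.multivariate_normal(np.zeros(n), K, size=N)
emp = X.T @ X / N        # empirical covariance matrix
naive = emp * C          # Schur-Hadamard projection onto S_n(A)

max_error = np.abs(naive - K).max()
min_eigenvalue = np.linalg.eigvalsh(naive).min()
```

<p style=\"text-align: justify;\">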
Since we deal with a Gaussian sample, the random matrix \\( {\\mathbb{X}} \\) actually follows a Wishart distribution, see for instance <a href= \"http:\/\/www.ams.org\/mathscinet-getitem?mr=771294\">Anderson (p. 534)<\/a>.<\/p>\n<p style=\"text-align: justify;\">Let us denote by \\( {\\mathcal{S}_n} \\) the Hilbert space of symmetric \\( {n\\times n} \\) real matrices, equipped with the dot product \\( {U\\cdot V:=\\mathrm{Tr}(UV)} \\). Let us also denote by \\( {\\mathcal{S}_n(A)} \\) the set of symmetric \\( {n\\times n} \\) matrices \\( {M} \\) such that \\( {M_{i,j}=0} \\) if \\( {A_{i,j}=0} \\), for any \\( {1\\leq i\\neq j\\leq n} \\). We then have \\( {\\mathcal{S}_n^+(A)=\\mathcal{S}_n(A)\\cap\\mathcal{S}_n^+} \\) and \\( {\\mathcal{S}_n^{+*}(A)=\\mathcal{S}_n(A)\\cap\\mathcal{S}_n^{+*}} \\).<\/p>\n<p style=\"text-align: justify;\">If \\( {\\circ} \\) stands for the Schur-Hadamard product, the matrix \\( {\\mathbb{X}\\circ (A+I_n)} \\) is the orthogonal projection in \\( {\\mathcal{S}_n} \\) of \\( {\\mathbb{X}} \\) onto the linear subspace \\( {\\mathcal{S}_n(A)} \\). The ``naive estimator''<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{X}\\circ (A+I_n) \\]<\/p>\n<p style=\"text-align: justify;\">of \\( {K} \\) is obtained from \\( {\\mathbb{X}} \\) by putting zeros at the off-diagonal entries for which \\( {A_{i,j}=0} \\). It does not necessarily belong to \\( {\\mathcal{S}_n^{+*}(A)} \\). We have however the following simple result.<\/p>\n<blockquote style=\"background: white; border: solid thick #e4e5e7; text-align: justify; padding-left: 1em;\"><p><b>Theorem 4<\/b> <em><a id=\"thnaive\"><\/a> A.-s. the orthogonal projection \\( {p_{\\mathcal{S}_n(A)}(\\mathbb{X})=\\mathbb{X}\\circ(A+I_n) } \\) of \\( {\\mathbb{X}} \\) onto the subspace \\( {\\mathcal{S}_n(A)} \\) belongs to the convex cone \\( {\\mathcal{S}_n^{+*}(A)} \\) for large enough sample size \\( {N} \\). 
Moreover, \\( {p_{\\mathcal{S}_n(A)}(\\mathbb{X})} \\) is a strongly consistent and unbiased estimator of \\( {K} \\).<\/em><\/p><\/blockquote>\n<p style=\"text-align: justify;\"><em>Proof:<\/em> Let \\( {C:=A+I_n} \\). We have \\( {\\mathbb{E}(\\mathbb{X}\\circ C)=\\mathbb{E}(\\mathbb{X})\\circ C=K\\circ C=K} \\), which gives the zero bias. The law of large numbers ensures that \\( {\\lim_{N\\rightarrow\\infty}\\mathbb{X}=K} \\) a.-s. and thus that \\( {\\lim_{N\\rightarrow\\infty}\\mathbb{X}\\circ C=K\\circ C=K} \\) a.-s., which gives the strong consistency. Notice that \\( {\\mathbb{X}\\circ C\\in\\mathcal{S}_n(A)} \\). Now, since \\( {K\\in\\mathcal{S}_n^{+*}(A)} \\) and since \\( {\\mathcal{S}_n^{+*}(A)} \\) is an open subset of the Hilbert space \\( {\\mathcal{S}_n(A)} \\), we get \\( {\\mathbb{X}\\circ C\\in\\mathcal{S}_n^{+*}(A)} \\) for large enough \\( {N} \\), a.-s. &#9744;<\/p>\n<p style=\"text-align: justify;\">In the result above, the threshold on the sample size \\( {N} \\) is random, and may vary from one sample to another. It also depends on \\( {G} \\) via \\( {A} \\). Recall that \\( {\\mathbb{X}\\in\\mathcal{S}_n^+} \\), and moreover \\( {\\mathbb{X}\\in\\mathcal{S}_n^{+*}} \\) a.-s. when \\( {N&gt;n} \\). Unfortunately, one can have \\( {p_{\\mathcal{S}_n(A)}(M)\\not\\in\\mathcal{S}_n^+} \\) even if \\( {M\\in\\mathcal{S}_n^{+*}} \\). For instance, here is a counterexample in dimension \\( {n=3} \\).<\/p>\n<p style=\"text-align: center;\">\\[ M:=\\left( \\begin{array}{rrr} 4 & -3 & 3 \\\\ -3 & 4 & -3 \\\\ 3 &-3 & 4 \\\\ \\end{array} \\right) \\in\\mathcal{S}_n^{+*} \\text{\\quad but\\quad} p_{\\mathcal{S}_n(A)}(M)= \\left( \\begin{array}{rrr} 4 & -3 & 0 \\\\ -3 & 4 & -3 \\\\ 0 &-3 & 4 \\\\ \\end{array} \\right) \\not\\in\\mathcal{S}_n^+. 
\\]<\/p>\n<p style=\"text-align: justify;\">However, this annoying phenomenon disappears if \\( {A+I_n\\in\\mathcal{S}_n^+} \\), as stated below.<\/p>\n<blockquote style=\"background: white; border: solid thick #e4e5e7; text-align: justify; padding-left: 1em;\"><p><b>Theorem 5 (Stability by projections)<\/b> <em><a id=\"thadmis\"><\/a> We have \\( {p_{\\mathcal{S}_n(A)}(\\mathcal{S}_n^+)\\subset\\mathcal{S}_n^+} \\) if and only if \\( {A+I_n\\in\\mathcal{S}_n^+} \\). Moreover, in that case, \\( {p_{\\mathcal{S}_n^+(A)}\\equiv p_{\\mathcal{S}_n(A)}} \\) on \\( {\\mathcal{S}_n^+} \\), and \\( {p_{\\mathcal{S}_n(A)}(\\mathcal{S}_n^{+*})\\subset\\mathcal{S}_n^{+*}} \\).<\/em><\/p><\/blockquote>\n<p style=\"text-align: justify;\">Thus, and according to Theorem <a href=\"#thbin-mat\">1<\/a>, the naive estimator \\( {\\mathbb{X}\\circ (A+I_n)} \\) of \\( {K} \\) is good if and only if \\( {K} \\) is block diagonal up to a permutation of the coordinates. It is known in that case that the naive estimator coincides with the maximum likelihood estimator when \\( {N&gt;n} \\).<\/p>\n<p style=\"text-align: justify;\"><em>Proof:<\/em> Set \\( {C:=A+I_n} \\) and let \\( {1:=(1,\\ldots,1)^\\top(1,\\ldots,1)} \\) be the \\( {n\\times n} \\) matrix full of ones. Let us establish the first part. If \\( {C\\in\\mathcal{S}_n^+} \\), then by the stability of \\( {\\mathcal{S}_n^+} \\) by the Schur-Hadamard product (Schur's Theorem, see <a href= \"http:\/\/www.ams.org\/mathscinet-getitem?mr=1084815\">Horn and Johnson (th. 7.5.3 p. 458)<\/a>), we have \\( {p_{\\mathcal{S}_n(A)}(M)= M\\circ C\\in\\mathcal{S}_n^+} \\) for any \\( {M\\in\\mathcal{S}_n^+} \\). Conversely, assume that \\( {p_{\\mathcal{S}_n(A)}(M)\\in\\mathcal{S}_n^+} \\) for any \\( {M\\in\\mathcal{S}_n^+} \\). Then, \\( {p_{\\mathcal{S}_n(A)}(1)=1\\circ C=C} \\) since \\( {1} \\) is the neutral element of the Schur-Hadamard product. Therefore, \\( {C\\in\\mathcal{S}_n^+} \\). 
For the second part, since \\( {C\\in\\mathcal{S}_n^+} \\), we have \\( {p_{\\mathcal{S}_n(A)}(M)\\in\\mathcal{S}_n^+} \\) for any \\( {M\\in\\mathcal{S}_n^+} \\) by virtue of the first part. Then, \\( {p_{\\mathcal{S}_n^+(A)}=p_{\\mathcal{S}_n(A)}} \\) on \\( {\\mathcal{S}_n^+} \\), since if \\( {\\mathcal{A}} \\) and \\( {\\mathcal{B}} \\) are two non-empty closed convex subsets of a Hilbert space \\( {\\mathcal{H}} \\) and if \\( {x\\in\\mathcal{H}} \\) is such that \\( {p_\\mathcal{A}(x)\\in\\mathcal{B}} \\), then \\( {p_{\\mathcal{A}\\cap\\mathcal{B}}(x)=p_\\mathcal{A}(x)} \\). The third part is a consequence for instance of Oppenheim's inequality:<\/p>\n<p style=\"text-align: center;\">\\[ \\det(U\\circ V)\\geq\\det(U)\\prod_{i=1}^n V_{i,i} \\]<\/p>\n<p style=\"text-align: justify;\">as soon as \\( {U,V\\in\\mathcal{S}_n^+} \\), cf. <a href= \"http:\/\/www.ams.org\/mathscinet-getitem?mr=1084815\">Horn and Johnson (th. 7.8.6 p. 480)<\/a>, used with \\( {V=C} \\) (notice that \\( {C_{i,i}=1} \\) for any \\( {1\\leq i\\leq n} \\)). See also <a href= \"http:\/\/www.ams.org\/mathscinet-getitem?mr=MR1288752\">Horn and Johnson (th. 5.2.1 p. 309)<\/a>. &#9744;<\/p>\n<p style=\"text-align: justify;\">The matrix \\( {A} \\) associated with the counterexample mentioned above is given by<\/p>\n<p style=\"text-align: center;\">\\[ A= \\left( \\begin{array}{rrr} 0 & 1 & 0 \\\\ 1 & 0 & 1 \\\\ 0 & 1 & 0 \\\\ \\end{array} \\right), \\qquad A+I_n\\not\\in\\mathcal{S}_n^+. \\]<\/p>\n<p style=\"text-align: justify;\">More generally, when \\( {A} \\) is band diagonal with \\( {A_{i,i+1}=1} \\) for any \\( {1\\leq i&lt;n} \\) and \\( {A_{i,j}=0} \\) if \\( {|i-j|&gt;1} \\), then \\( {A+I_n} \\) does not belong to \\( {\\mathcal{S}_n^+} \\) as soon as \\( {n\\geq3} \\). These patterns correspond to chain graphs. 
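<\/p>
<p style=\"text-align: justify;\">The counterexample above is immediate to check numerically with a few lines of numpy: \\( {M} \\) is positive definite (its eigenvalues are \\( {1,1,10} \\)), while its projection onto \\( {\\mathcal{S}_3(A)} \\) has the negative eigenvalue \\( {4-3\\sqrt{2}} \\).<\/p>

```python
import numpy as np

M = np.array([[ 4., -3.,  3.],
              [-3.,  4., -3.],
              [ 3., -3.,  4.]])
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])   # chain graph 1 - 2 - 3
C = A + np.eye(3)

proj = M * C  # orthogonal projection of M onto S_3(A)

m_min = np.linalg.eigvalsh(M).min()     # smallest eigenvalue of M
p_min = np.linalg.eigvalsh(proj).min()  # smallest eigenvalue of the projection
```

<p style=\"text-align: justify;\">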
In the same spirit, it is known that the spectrum of \\( {A} \\) ranges from \\( {-r} \\) to \\( {r} \\) when \\( {G} \\) is a bipartite \\( {r} \\)-regular graph, see for instance <a href= \"http:\/\/www.ams.org\/mathscinet-getitem?mr=1633290\">Bollobas (VIII.2)<\/a>, so that \\( {A+I_n\\not\\in\\mathcal{S}_n^+} \\) as soon as \\( {r\\geq2} \\). However, if \\( {A+I_n} \\) is block diagonal with diagonal blocks full of \\( {1} \\), then \\( {A+I_n\\in\\mathcal{S}_n^+} \\). According to Corollary <a href= \"#cographs\">2<\/a>, \\( {A+I_n\\in\\mathcal{S}_n^+} \\) means that \\( {G} \\) contains any chord of any path, in other words that the connected components of \\( {G} \\) are complete graphs. In particular, when \\( {A=(1,\\ldots,1)^\\top(1,\\ldots,1)-I_n} \\) (\\( {G} \\) is the complete graph), then \\( {A+I_n\\in\\mathcal{S}_n^+} \\) and \\( {\\mathcal{S}_n(A)=\\mathcal{S}_n} \\). On the other hand, when \\( {A=0} \\) (\\( {G} \\) is the empty graph), then \\( {A+I_n=I_n\\in\\mathcal{S}_n^+} \\) and \\( {\\mathcal{S}_n(A)} \\) is the set of diagonal matrices.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Recently, during student project defenses at &Eacute;cole Polytechnique, a colleague asked if one knows nice elementary facts about the smallest eigenvalue of the adjacency matrix&#8230;<\/p>\n<div class=\"more-link-wrapper\"><a class=\"more-link\" href=\"https:\/\/djalil.chafai.net\/blog\/2014\/12\/12\/on-certain-0-1-matrices\/\">Continue reading<span class=\"screen-reader-text\">On certain 0-1 
matrices<\/span><\/a><\/div>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":75},"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/posts\/8165"}],"collection":[{"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/comments?post=8165"}],"version-history":[{"count":4,"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/posts\/8165\/revisions"}],"predecessor-version":[{"id":8169,"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/posts\/8165\/revisions\/8169"}],"wp:attachment":[{"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/media?parent=8165"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/categories?post=8165"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/tags?post=8165"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}