We describe the first worst-case efficient algorithm for simultaneously matching multiple rectangular patterns of varying sizes and aspect ratios in a rectangular text. Efficient means significantly more efficient asymptotically than applying known algorithms that handle one height (or width, or aspect ratio) at a time. Our algorithm features an interesting use of multidimensional range searching, as well as new adaptations of several known techniques for two-dimensional string matching. We also extend our algorithm to a dynamic setting where the set of patterns can change over time.
{"title":"Multiple matching of rectangular patterns","authors":"R. M. Idury, A. Schäffer","doi":"10.1145/167088.167116","DOIUrl":"https://doi.org/10.1145/167088.167116","url":null,"abstract":"We describe the first worst-case efficient algorithm for simultaneously matching multiple rectangular patterns of varying sizes and aspect ratios in a rectangular text. Efficient means significantly more efficient asymptotically than applying known algorithms that handle one height (or width or aspect ratio) at a time for each height. Our algorithm features an interesting use of multidimensional range searching, as well as new adaptations of several known techniques for two-dimensional string matching. We also extend our algorithm to a dynamic setting where the set of patterns can change over time.","PeriodicalId":280602,"journal":{"name":"Proceedings of the twenty-fifth annual ACM symposium on Theory of Computing","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125231862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
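As a baseline for the problem this abstract addresses, the sketch below is a naive multi-pattern 2D matcher (all names are ours, purely illustrative; the paper's algorithm, with its multidimensional range searching, is asymptotically far faster than this brute force):

```python
def match_all(text, patterns):
    """Return {pattern_index: [(row, col), ...]} listing the top-left
    corner of every occurrence of each rectangular pattern in the
    rectangular text.  text and each pattern are lists of equal-length
    strings.  Brute force: cost is (text area) x (total pattern area)."""
    T, W = len(text), len(text[0])
    hits = {i: [] for i in range(len(patterns))}
    for i, pat in enumerate(patterns):
        h, w = len(pat), len(pat[0])
        for r in range(T - h + 1):          # candidate top row
            for c in range(W - w + 1):      # candidate left column
                if all(text[r + dr][c:c + w] == pat[dr] for dr in range(h)):
                    hits[i].append((r, c))
    return hits
```

Note how patterns of different heights and widths are simply handled one at a time here; avoiding exactly that per-shape repetition is the paper's contribution.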
Most complexity measures for concurrent algorithms for asynchronous shared-memory architectures focus on process steps and memory consumption. In practice, however, performance of multiprocessor algorithms is heavily influenced by contention, the extent to which processes access the same location at the same time. Nevertheless, even though contention is one of the principal considerations affecting the performance of real algorithms on real multiprocessors, there are no formal tools for analyzing the contention of asynchronous shared-memory algorithms. This paper introduces the first formal complexity model for contention in shared-memory multiprocessors. We focus on the standard multiprocessor architecture in which n asynchronous processes communicate by applying read, write, and read-modify-write operations to a shared memory. To illustrate the utility of our model, we use it to derive two kinds of results: (1) lower bounds on contention for well-known basic problems such as agreement and mutual exclusion, and (2) trade-offs between the length of the critical path (maximal number of accesses to shared variables performed by a single process in executing the algorithm) and contention for these algorithms. Furthermore, we give the first formal contention analysis of a variety of counting networks, a class of concurrent data structures implementing shared counters. Experiments indicate that certain counting networks outperform conventional single-variable counters at high levels of contention. Our analysis provides the first formal model explaining this phenomenon.
{"title":"Contention in shared memory algorithms","authors":"C. Dwork, M. Herlihy, Orli Waarts","doi":"10.1145/167088.167145","DOIUrl":"https://doi.org/10.1145/167088.167145","url":null,"abstract":"Most complexity measures for concurrent algorithms for asynchronous shared-memory architectures focus on process steps and memory consumption. In practice, however, performance of multiprocessor algorithms is heavily influenced by contention, the extent to which processes access the same location at the same time. Nevertheless, even though contention is one of the principal considerations affecting the performance of real algorithms on real multiprocessors, there are no formal tools for analyzing the contention of asynchronous shared-memory algorithms. This paper introduces the first formal complexity model for contention in shared-memory multiprocessors. We focus on the standard multiprocessor architecture in which n asynchronous processes communicate by applying read, write, and read-modify-write operations to a shared memory. To illustrate the utility of our model, we use it to derive two kinds of results: (1) lower bounds on contention for well-known basic problems such as agreement and mutual exclusion, and (2) trade-offs between the length of the critical path (maximal number of accesses to shared variables performed by a single process in executing the algorithm) and contention for these algorithms. Furthermore, we give the first formal contention analysis of a variety of counting networks, a class of concurrent data structures implementing shared counters. Experiments indicate that certain counting networks outperform conventional single-variable counters at high levels of contention. Our analysis provides the first formal model explaining this phenomenon.","PeriodicalId":280602,"journal":{"name":"Proceedings of the twenty-fifth annual ACM symposium on Theory of Computing","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121246978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
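To make the notion of contention concrete, here is a toy metric over a trace of memory accesses. This is a deliberate simplification of our own devising (the paper's formal model charges processes for concurrent accesses within asynchronous executions), shown only to fix intuition:

```python
from collections import Counter

def max_contention(accesses):
    """accesses: iterable of (step, process_id, location) triples.
    Returns the largest number of distinct processes that touch the
    same location in the same step.  A loose illustration of
    contention, not the paper's formal definition."""
    # deduplicate so a process counts once per (step, location)
    seen = {(step, loc, pid) for (step, pid, loc) in accesses}
    counts = Counter((step, loc) for (step, loc, _pid) in seen)
    return max(counts.values(), default=0)
```

Under such a measure, a single shared counter hammered by n processes shows contention n, while a counting network spreads the same traffic across many locations, which is the phenomenon the paper's model formalizes.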
The following size-depth trade-off for threshold circuits is obtained: any threshold circuit of depth d that computes the parity function on n variables must have at least n^{1+cθ^{-d}} edges, where c > 0 and θ ≤ 3 are constants independent of n and d. Previously known constructions show that, up to the choice of c and θ, this bound is best possible. In particular, the lower bound implies an affirmative answer to the conjecture of Paturi and Saks that a bounded-depth threshold circuit that computes parity requires a superlinear number of edges. This is the first superlinear lower bound for an explicit function that holds for any fixed depth and the first that applies to threshold circuits with unrestricted weights. The trade-off is obtained as a consequence of a general restriction theorem for threshold circuits with a small number of edges: for any threshold circuit with n inputs, depth d, and at most kn edges, there exists a partial assignment to the inputs that fixes the output of the circuit to a constant while leaving ⌊n/(c1 k)^{c2 θ^d}⌋ variables unfixed, where c1, c2 > 0 and θ ≤ 3 are constants independent of n, k, and d. A trade-off between the number of gates and depth is also proved: any threshold circuit of depth d that computes the parity of n variables has at least (n/2)^{1/(2(d-1))} gates. This trade-off, which is essentially the best possible, was proved previously (with a better constant in the exponent) for the case of threshold circuits with polynomially bounded weights in (K.-Y. Siu, V. Roychowdhury, and T. Kailath, IEEE Trans. Inform. Theory, 40 (1994), pp. 455–466); the result in the present paper holds for unrestricted weights.
{"title":"Size-depth trade-offs for threshold circuits","authors":"R. Impagliazzo, R. Paturi, M. Saks","doi":"10.1145/167088.167233","DOIUrl":"https://doi.org/10.1145/167088.167233","url":null,"abstract":"The following size-depth trade-off for threshold circuits is obtained: any threshold circuit of depth d that computes the parity function on n variables must have at least n^{1+cθ^{-d}} edges, where c > 0 and θ ≤ 3 are constants independent of n and d. Previously known constructions show that, up to the choice of c and θ, this bound is best possible. In particular, the lower bound implies an affirmative answer to the conjecture of Paturi and Saks that a bounded-depth threshold circuit that computes parity requires a superlinear number of edges. This is the first superlinear lower bound for an explicit function that holds for any fixed depth and the first that applies to threshold circuits with unrestricted weights. The trade-off is obtained as a consequence of a general restriction theorem for threshold circuits with a small number of edges: for any threshold circuit with n inputs, depth d, and at most kn edges, there exists a partial assignment to the inputs that fixes the output of the circuit to a constant while leaving ⌊n/(c1 k)^{c2 θ^d}⌋ variables unfixed, where c1, c2 > 0 and θ ≤ 3 are constants independent of n, k, and d. A trade-off between the number of gates and depth is also proved: any threshold circuit of depth d that computes the parity of n variables has at least (n/2)^{1/(2(d-1))} gates. This trade-off, which is essentially the best possible, was proved previously (with a better constant in the exponent) for the case of threshold circuits with polynomially bounded weights in (K.-Y. Siu, V. Roychowdhury, and T. Kailath, IEEE Trans. Inform. Theory, 40 (1994), pp. 455–466); the result in the present paper holds for unrestricted weights.","PeriodicalId":280602,"journal":{"name":"Proceedings of the twenty-fifth annual ACM symposium on Theory of Computing","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116534384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The subject of this paper is finding small sample spaces for joint distributions of n discrete random variables. Such distributions are often only required to obey a certain limited set of constraints of the form Pr(Event) = π. It is shown that the problem of deciding whether there exists any distribution satisfying a given set of constraints is NP-hard. However, if the constraints are consistent, then there exists a distribution satisfying them, which is supported by a "small" sample space (one whose cardinality is equal to the number of constraints). For the important case of independence constraints, where the constraints have a certain form and are consistent with a joint distribution of independent random variables, a small sample space can be constructed in polynomial time. This last result can be used to derandomize algorithms; this is demonstrated by an application to the problem of finding large independent sets in sparse hypergraphs.
{"title":"Constructing small sample spaces satisfying given constraints","authors":"D. Koller, N. Megiddo","doi":"10.1145/167088.167168","DOIUrl":"https://doi.org/10.1145/167088.167168","url":null,"abstract":"The subject of this paper is finding small sample spaces for joint distributions of n discrete random variables. Such distributions are often only required to obey a certain limited set of constraints of the form Pr(Event) = π. It is shown that the problem of deciding whether there exists any distribution satisfying a given set of constraints is NP-hard. However, if the constraints are consistent, then there exists a distribution satisfying them, which is supported by a \"small\" sample space (one whose cardinality is equal to the number of constraints). For the important case of independence constraints, where the constraints have a certain form and are consistent with a joint distribution of independent random variables, a small sample space can be constructed in polynomial time. This last result can be used to derandomize algorithms; this is demonstrated by an application to the problem of finding large independent sets in sparse hypergraphs.","PeriodicalId":280602,"journal":{"name":"Proceedings of the twenty-fifth annual ACM symposium on Theory of Computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115852482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
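The flavor of "small sample spaces for independence constraints" can be seen in the classic XOR construction for pairwise independent bits, sketched below. This is a standard textbook construction, not the paper's general algorithm:

```python
from collections import Counter

def pairwise_independent_bits(n):
    """A sample space of size 2^b, b = ceil(log2(n+1)), supporting n
    pairwise independent uniform bits: bit i of a sample point is the
    parity of the seed bits selected by the binary expansion of i+1.
    Distinct nonzero masks are linearly independent over GF(2), so any
    two coordinates are jointly uniform."""
    b = n.bit_length()                       # seed bits; masks 1..n fit in b bits
    return [tuple(bin(seed & (i + 1)).count('1') % 2 for i in range(n))
            for seed in range(1 << b)]       # one point per seed, all equally likely
```

Seven pairwise independent bits thus need only 8 sample points instead of the 128 points of the full product space, which is exactly the kind of saving that makes derandomization possible.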
Charles H. Bennett, P. Gács, Ming Li, P. Vitányi, W. H. Zurek
Applying the tools of algorithmic information theory, we compare several candidates for an asymptotically machine-independent, absolute measure of the informational or “cognitive” distance between discrete objects x and y. The maximum of the conditional Kolmogorov complexities, max{K(y|x), K(x|y)}, is shown to be optimal, in the sense of being minimal within an additive constant among semicomputable, symmetric, positive semidefinite functions of x and y satisfying a reasonable normalization condition and obeying the triangle inequality. The optimal metric, in turn, differs by at most an additive logarithmic term from the size of the smallest program for a universal reversible computer to transform x into y. This program functions in a “catalytic” capacity, being retained in the computer before, during, and after the computation. Similarly, the sum of the conditional complexities, K(y|x) + K(x|y), is shown to be equal within a logarithmic term to the minimal amount of information flowing out and in during a reversible computation in which the program is not retained. Finally, using the physical theory of reversible computation, it is shown that the simple difference K(x) − K(y) is an appropriate (i.e., universal, antisymmetric, and transitive) measure of the amount of thermodynamic work required to transform string x into string y by the most efficient process.
{"title":"Thermodynamics of computation and information distance","authors":"Charles H. Bennett, P. Gács, Ming Li, P. Vitányi, W. H. Zurek","doi":"10.1145/167088.167098","DOIUrl":"https://doi.org/10.1145/167088.167098","url":null,"abstract":"Applying the tools of algorithmic information theory, we compare several candidates for an asymptotically machine-independent, absolute measure of the informational or “cognitive” distance between discrete objects x and y. The maximum of the conditional Kolmogorov complexities, max{K(y|x), K(x|y)}, is shown to be optimal, in the sense of being minimal within an additive constant among semicomputable, symmetric, positive semidefinite functions of x and y satisfying a reasonable normalization condition and obeying the triangle inequality. The optimal metric, in turn, differs by at most an additive logarithmic term from the size of the smallest program for a universal reversible computer to transform x into y. This program functions in a “catalytic” capacity, being retained in the computer before, during, and after the computation. Similarly, the sum of the conditional complexities, K(y|x) + K(x|y), is shown to be equal within a logarithmic term to the minimal amount of information flowing out and in during a reversible computation in which the program is not retained. Finally, using the physical theory of reversible computation, it is shown that the simple difference K(x) − K(y) is an appropriate (i.e., universal, antisymmetric, and transitive) measure of the amount of thermodynamic work required to transform string x into string y by the most efficient process.","PeriodicalId":280602,"journal":{"name":"Proceedings of the twenty-fifth annual ACM symposium on Theory of Computing","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133597156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
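The triangle-inequality requirement in the optimality claim can be sketched for the max distance as follows; this is the standard argument (up to the additive logarithmic terms the abstract mentions), not a verbatim excerpt from the paper:

```latex
\begin{align*}
E(x,y) &= \max\{\,K(x \mid y),\; K(y \mid x)\,\} \\
% conditional complexities chain through an intermediate string y:
K(x \mid z) &\le K(x \mid y) + K(y \mid z) + O(\log) \\
% applying the chain bound in both directions and taking maxima:
E(x,z) &\le E(x,y) + E(y,z) + O(\log)
\end{align*}
```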
Given a graph G = (V, E) where each vertex v ∈ V is assigned a weight w(v) and each edge e ∈ E is assigned a cost c(e), the quotient of a cut partitioning the vertices of V into sets S and S̄ is c(S, S̄)/min{w(S), w(S̄)}, where c(S, S̄) is the sum of the costs of the edges crossing the cut and w(S) and w(S̄) are the sums of the weights of the vertices in S and S̄, respectively. The problem of finding a cut whose quotient is minimum for a graph has in recent years attracted considerable attention, due in large part to the work of Rao and of Leighton and Rao. They have shown that an algorithm (exact or approximation) for the minimum-quotient-cut problem can be used to obtain an approximation algorithm for the more famous minimum-b-balanced-cut problem, which requires finding a cut (S, S̄) minimizing c(S, S̄) subject to the constraint bW ≤ w(S) ≤ (1 − b)W, where W is the total vertex weight and b is some fixed balance in the range 0 < b ≤ 1/2. Unfortunately, the minimum-quotient-cut problem is strongly NP-hard for general graphs, and the best polynomial-time approximation algorithm known for the general problem guarantees only a cut whose quotient is at most O(lg n) times optimal, where n is the size of the graph. However, for planar graphs, the minimum-quotient-cut problem appears more tractable, as Rao has developed several efficient approximation algorithms for the planar version of the problem capable of finding a cut whose quotient is at most some constant times optimal. In this paper, we improve Rao's algorithms, both in terms of accuracy and speed. As our first result, we present two pseudopolynomial-time exact algorithms for the planar minimum-quotient-cut problem. As Rao's most accurate approximation algorithm for the problem, also a pseudopolynomial-time algorithm, guarantees only a 1.5-times-optimal cut, our algorithms represent a significant advance.
{"title":"Finding minimum-quotient cuts in planar graphs","authors":"James K. Park, C. Phillips","doi":"10.1145/167088.167284","DOIUrl":"https://doi.org/10.1145/167088.167284","url":null,"abstract":"Given a graph G = (V, E) where each vertex v ∈ V is assigned a weight w(v) and each edge e ∈ E is assigned a cost c(e), the quotient of a cut partitioning the vertices of V into sets S and S̄ is c(S, S̄)/min{w(S), w(S̄)}, where c(S, S̄) is the sum of the costs of the edges crossing the cut and w(S) and w(S̄) are the sums of the weights of the vertices in S and S̄, respectively. The problem of finding a cut whose quotient is minimum for a graph has in recent years attracted considerable attention, due in large part to the work of Rao and of Leighton and Rao. They have shown that an algorithm (exact or approximation) for the minimum-quotient-cut problem can be used to obtain an approximation algorithm for the more famous minimum-b-balanced-cut problem, which requires finding a cut (S, S̄) minimizing c(S, S̄) subject to the constraint bW ≤ w(S) ≤ (1 − b)W, where W is the total vertex weight and b is some fixed balance in the range 0 < b ≤ 1/2. Unfortunately, the minimum-quotient-cut problem is strongly NP-hard for general graphs, and the best polynomial-time approximation algorithm known for the general problem guarantees only a cut whose quotient is at most O(lg n) times optimal, where n is the size of the graph. However, for planar graphs, the minimum-quotient-cut problem appears more tractable, as Rao has developed several efficient approximation algorithms for the planar version of the problem capable of finding a cut whose quotient is at most some constant times optimal. In this paper, we improve Rao's algorithms, both in terms of accuracy and speed. As our first result, we present two pseudopolynomial-time exact algorithms for the planar minimum-quotient-cut problem. As Rao's most accurate approximation algorithm for the problem, also a pseudopolynomial-time algorithm, guarantees only a 1.5-times-optimal cut, our algorithms represent a significant advance.","PeriodicalId":280602,"journal":{"name":"Proceedings of the twenty-fifth annual ACM symposium on Theory of Computing","volume":"285 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124544888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
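To make the quotient objective concrete, here is an exhaustive search over all cuts (our own illustrative sketch, exponential in |V| and in no way competitive with the paper's or Rao's algorithms):

```python
from itertools import combinations

def min_quotient_cut(vertices, w, edges):
    """Return (q, S) minimizing q = c(S, S-bar) / min(w(S), w(S-bar)).
    vertices: iterable of vertex names; w: dict vertex -> weight;
    edges: list of (u, v, cost) triples.  Brute force over all 2^|V|
    bipartitions, only to make the objective concrete."""
    best_q, best_S = float('inf'), None
    vs = sorted(vertices)
    total = sum(w[v] for v in vs)
    for r in range(1, len(vs)):                 # skip the empty and full sides
        for S in map(set, combinations(vs, r)):
            cut_cost = sum(c for (u, v, c) in edges if (u in S) != (v in S))
            side = sum(w[v] for v in S)
            q = cut_cost / min(side, total - side)
            if q < best_q:
                best_q, best_S = q, S
    return best_q, best_S
```

On a unit-weight, unit-cost 4-cycle the optimum splits the cycle into two opposite halves (two crossing edges over weight two, quotient 1), while any singleton gives quotient 2.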
Two factors are prominent among those contributing to the increases in speed and storage capacity in current generations of computers. The first is increasing parallelism, whether in actual parallel and distributed computers, or among the steadily more numerous components of a sequential machine. The second is the dramatic miniaturization of logical devices and wires. The first of these factors greatly magnifies the number of interprocessor communications performed during any computation, while the second increases the noise level affecting transmissions. For these reasons, and on the basis that the role of noise should be understood in a model of a physical process, the following concern was recently identified as basic [10]. Consider a problem whose input is split between two processors connected by a communication link, and for which an interactive protocol exists which solves the problem in T transmissions on any input, provided the channel is noiseless. If in fact there is some noise on the channel, what is the effect upon the number of transmissions needed in order to solve the communication problem reliably? We describe a deterministic method for simulating noiseless-channel protocols on noisy channels, with only a constant slow-down. This is an analog for general interactive protocols of Shannon's coding theorem, which dealt only with data transmission, i.e. one-way protocols [11]. This result improves on recent work which provided a randomized simulation method for interactive protocols. The Shannon theorem is thus reproduced for the general interactive case, in all but the constant factor. The randomized method was fundamentally unsuited to further derandomization, and the deterministic solution is entirely different. A key role in the present work is played by tree codes, originally considered by Wozencraft [13] for the sake of computationally efficient decoding of noisy data transmissions. In their new setting, tree codes are reinterpreted as a way of transforming a highly interactive protocol into one that behaves like a pair of one-way protocols, and which therefore can be implemented at both high rate and reliability.
{"title":"Deterministic coding for interactive communication","authors":"L. Schulman","doi":"10.1145/167088.167279","DOIUrl":"https://doi.org/10.1145/167088.167279","url":null,"abstract":"Two factors are prominent among those contributing to the increases in speed and storage capacity in current generations of computers. The first is increasing parallelism, whether in actual parallel and distributed computers, or among the steadily more numerous components of a sequential machine. The second is the dramatic miniaturization of logical devices and wires. The first of these factors greatly magnifies the number of interprocessor communications performed during any computation, while the second increases the noise level affecting transmissions. For these reasons, and on the basis that the role of noise should be understood in a model of a physical process, the following concern was recently identified as basic [10]. Consider a problem whose input is split between two processors connected by a communication link, and for which an interactive protocol exists which solves the problem in T transmissions on any input, provided the channel is noiseless. If in fact there is some noise on the channel, what is the effect upon the number of transmissions needed in order to solve the communication problem reliably? We describe a deterministic method for simulating noiseless-channel protocols on noisy channels, with only a constant slow-down. This is an analog for general interactive protocols of Shannon's coding theorem, which dealt only with data transmission, i.e. one-way protocols [11]. This result improves on recent work which provided a randomized simulation method for interactive protocols. The Shannon theorem is thus reproduced for the general interactive case, in all but the constant factor. The randomized method was fundamentally unsuited to further derandomization, and the deterministic solution is entirely different. A key role in the present work is played by tree codes, originally considered by Wozencraft [13] for the sake of computationally efficient decoding of noisy data transmissions. In their new setting, tree codes are reinterpreted as a way of transforming a highly interactive protocol into one that behaves like a pair of one-way protocols, and which therefore can be implemented at both high rate and reliability.","PeriodicalId":280602,"journal":{"name":"Proceedings of the twenty-fifth annual ACM symposium on Theory of Computing","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124855393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proc. 25th Annual Symp. Theory of Computing, San Diego, May 1993. This paper deals with analog circuits. It establishes the finiteness of VC dimension, teaching dimension, and several other measures of sample complexity which arise in learning theory. It also shows that the equivalence of behaviors, and the loading problem, are effectively decidable, modulo a widely believed conjecture in number theory. The results, the first ones that are independent of weight size, apply when the gate function is the “standard sigmoid” commonly used in neural networks research. The proofs rely on very recent developments in the elementary theory of real numbers with exponentiation. (Some weaker conclusions are also given for more general analytic gate functions.) Applications to learnability of sparse polynomials are also mentioned.
{"title":"Finiteness results for sigmoidal “neural” networks","authors":"A. Macintyre, Eduardo Sontag","doi":"10.1145/167088.167192","DOIUrl":"https://doi.org/10.1145/167088.167192","url":null,"abstract":"Proc. 25th Annual Symp. Theory of Computing, San Diego, May 1993. This paper deals with analog circuits. It establishes the finiteness of VC dimension, teaching dimension, and several other measures of sample complexity which arise in learning theory. It also shows that the equivalence of behaviors, and the loading problem, are effectively decidable, modulo a widely believed conjecture in number theory. The results, the first ones that are independent of weight size, apply when the gate function is the “standard sigmoid” commonly used in neural networks research. The proofs rely on very recent developments in the elementary theory of real numbers with exponentiation. (Some weaker conclusions are also given for more general analytic gate functions.) Applications to learnability of sparse polynomials are also mentioned.","PeriodicalId":280602,"journal":{"name":"Proceedings of the twenty-fifth annual ACM symposium on Theory of Computing","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126604766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We settle a conjecture raised by Z. Galil and J. Seiferas 14 years ago (J. Comput. System Sci. 26 (1983), 280–294): k-head one-way deterministic finite automata cannot perform string matching (i.e., accept the language {x#y | ∃u, v: y = uxv}) for any k nonsensing heads.
{"title":"k one-way heads cannot do string-matching","authors":"Tao Jiang, Ming Li","doi":"10.1145/167088.167111","DOIUrl":"https://doi.org/10.1145/167088.167111","url":null,"abstract":"We settle a conjecture raised by Z. Galil and J. Seiferas 14 years ago (J. Comput. System Sci. 26 (1983), 280–294): k-head one-way deterministic finite automata cannot perform string matching (i.e., accept the language {x#y | ∃u, v: y = uxv}) for any k nonsensing heads.","PeriodicalId":280602,"journal":{"name":"Proceedings of the twenty-fifth annual ACM symposium on Theory of Computing","volume":"33 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113980993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
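For reference, the string-matching language itself is trivial to decide with random access; the theorem is precisely that k one-way heads cannot. A membership check (our own illustration, using the abstract's language definition):

```python
def in_string_matching_language(s):
    """Membership test for {x#y | there exist u, v with y = uxv},
    i.e. the part before the first '#' occurs as a contiguous
    substring of the part after it."""
    x, sep, y = s.partition('#')
    return sep == '#' and x in y
```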
We consider a concept of the rank of a tensor which is related to the contact rank introduced by Razborov [15]. We prove some upper and lower bounds for the tensor of multiplication of polynomials mod (z^n − 1) and derive some corollaries for contact rank, for the size of depth-2 algebraic and Boolean circuits, and for a multiparty communication complexity problem. In particular, we give answers to a conjecture of A. A. Razborov and a problem of M. Saks and A. Wigderson.
{"title":"Modified ranks of tensors and the size of circuits","authors":"P. Pudlák, V. Rödl","doi":"10.1145/167088.167228","DOIUrl":"https://doi.org/10.1145/167088.167228","url":null,"abstract":"We consider a concept of the rank of a tensor which is related to the contact rank introduced by Razborov [15]. We prove some upper and lower bounds for the tensor of multiplication of polynomials mod (z^n − 1) and derive some corollaries for contact rank, for the size of depth-2 algebraic and Boolean circuits, and for a multiparty communication complexity problem. In particular, we give answers to a conjecture of A. A. Razborov and a problem of M. Saks and A. Wigderson.","PeriodicalId":280602,"journal":{"name":"Proceedings of the twenty-fifth annual ACM symposium on Theory of Computing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122528404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
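Multiplication of polynomials mod (z^n − 1) is cyclic convolution: exponents wrap around modulo n. A direct computation of the underlying bilinear map whose tensor the abstract studies (our own illustrative sketch):

```python
def mul_mod_cyclic(a, b):
    """Multiply polynomials a and b, given as length-n coefficient
    lists (a[i] is the coefficient of z^i), modulo z^n - 1.  Since
    z^n = 1, the exponent i + j wraps to (i + j) mod n: this is the
    cyclic convolution of the two coefficient vectors."""
    n = len(a)
    assert len(b) == n, "both operands must have the same length n"
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % n] += ai * bj
    return out
```

For example, with n = 3, z^2 * z^2 = z^4 reduces to z, since z^3 = 1.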