Data Structures And Algorithms
Almost Tight Lower Bounds for Hard Cutting Problems in Embedded Graphs (1903.08603v1)
Vincent Cohen-Addad, Éric Colin de Verdière, Daniel Marx, Arnaud de Mesmay
2019-03-20
We prove essentially tight lower bounds, conditionally to the Exponential Time Hypothesis, for two fundamental but seemingly very different cutting problems on surface-embedded graphs: the Shortest Cut Graph problem and the Multiway Cut problem. A cut graph of a graph $G$ embedded on a surface $S$ is a subgraph of $G$ whose removal from $S$ leaves a disk. We consider the problem of deciding whether an unweighted graph embedded on a surface of genus $g$ has a cut graph of length at most a given value. We prove a time lower bound for this problem of $n^{\Omega(g/\log g)}$ conditionally to ETH. In other words, the first $n^{O(g)}$-time algorithm by Erickson and Har-Peled [SoCG 2002, Discr. Comput. Geom. 2004] is essentially optimal. We also prove that the problem is W[1]-hard when parameterized by the genus, answering a 17-year-old question of these authors. A multiway cut of an undirected graph $G$ with $t$ distinguished vertices, called terminals, is a set of edges whose removal disconnects all pairs of terminals. We consider the problem of deciding whether an unweighted graph $G$ has a multiway cut of weight at most a given value. We prove a time lower bound for this problem of $n^{\Omega(\sqrt{g^2+gt}/\log(g+t))}$, conditionally to ETH, for any choice of the genus $g$ of the graph and the number of terminals $t$. In other words, the algorithm by the second author [Algorithmica 2017] (for the more general multicut problem) is essentially optimal; this extends the lower bound by the third author [ICALP 2012] (for the planar case). Reductions to planar problems usually involve a grid-like structure. The main novel idea for our results is to understand what structures instead of grids are needed if we want to exploit optimally a certain value $g$ of the genus.
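To make the Multiway Cut definition concrete, here is a tiny brute-force sketch (Python, invented helper names; exponential time, and only an illustration of the problem statement, not of the paper's reductions or algorithms):

```python
# Brute-force Multiway Cut for tiny graphs: find a minimum set of edges
# whose removal pairwise disconnects all terminals. Illustration of the
# problem definition only; runtime is exponential in the number of edges.
from itertools import combinations

def disconnects(n, edges, removed, terminals):
    """Check that no two terminals are connected after removing `removed`."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        if (u, v) not in removed:
            parent[find(u)] = find(v)
    roots = [find(t) for t in terminals]
    return len(set(roots)) == len(roots)  # all terminals in distinct components

def multiway_cut(n, edges, terminals):
    for k in range(len(edges) + 1):
        for removed in combinations(edges, k):
            if disconnects(n, edges, set(removed), terminals):
                return list(removed)

# 4-cycle with terminals at opposite corners: an optimal cut has 2 edges.
print(multiway_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)], [0, 2]))
```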
Rapid Convergence of the Unadjusted Langevin Algorithm: Log-Sobolev Suffices (1903.08568v1)
Santosh S. Vempala, Andre Wibisono
2019-03-20
We prove a convergence guarantee on the unadjusted Langevin algorithm for sampling assuming only that the target distribution $\nu = e^{-f}$ satisfies a log-Sobolev inequality and the Hessian of $f$ is bounded. In particular, $f$ is not required to be convex or to have higher derivatives bounded.
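For context, the unadjusted Langevin algorithm itself is a two-line iteration. A minimal sketch with an illustrative non-convex double-well potential; the step size and iteration count are arbitrary choices, not values from the paper:

```python
# Minimal sketch of the unadjusted Langevin algorithm (ULA):
#   x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2 * eta) * N(0, I)
# which, for small eta, approximately samples from p(x) proportional to e^{-f(x)}.
import numpy as np

def grad_f(x):
    # f(x) = (x^2 - 1)^2 / 4 is a non-convex double well; f'(x) = x^3 - x.
    return x**3 - x

rng = np.random.default_rng(0)
eta, n_steps = 0.01, 50_000        # illustrative parameters
x = 0.0
samples = []
for _ in range(n_steps):
    x = x - eta * grad_f(x) + np.sqrt(2 * eta) * rng.standard_normal()
    samples.append(x)

# The empirical distribution should concentrate near the two wells x = -1, +1.
print(np.mean(samples), np.std(samples))
```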
Inner approximation algorithm for solving linear multiobjective optimization problems (1808.01786v2)
Laszlo Csirmaz
2018-08-06
Benson's outer approximation algorithm and its variants are the most frequently used methods for solving linear multiobjective optimization problems. These algorithms have two intertwined components: one-dimensional linear optimization on the one hand, and a combinatorial part closely related to vertex enumeration on the other. Their separation provides a deeper insight into Benson's algorithm and points toward a dual approach. Two skeletal algorithms are defined which focus on the combinatorial part. Using different single-objective optimization problems - called oracle calls - yields different algorithms, such as a sequential convex hull algorithm, another version of Benson's algorithm with the theoretically best possible iteration count, the dual algorithm of Ehrgott, Löhne and Shao, and a new algorithm. The new algorithm has several advantages. First, the corresponding one-dimensional optimization problem uses the original constraints without adding any extra variables or constraints. Second, its iteration count meets the theoretically best possible one. As a dual algorithm, it is sequential: in each iteration it produces an extremal solution, and thus can be aborted when a satisfactory solution is found. The Pareto front can be "probed" or "scanned" from several directions at any moment without adversely affecting the efficiency. Finally, it is well suited to handling highly degenerate problems where there are many linear dependencies among the constraints. On problems with ten or more objectives, the implementation shows a significant increase in efficiency compared to Bensolve - due to the reduced number of iterations and the improved combinatorial handling.
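The "oracle call" viewpoint is easy to make concrete: each call is a single-objective LP over the original feasible set, with the objective a weighted combination of the original objectives. A minimal sketch using scipy.optimize.linprog on invented toy data; it illustrates the general idea of probing the Pareto front from a direction, not the paper's specific oracle:

```python
# Sketch of a single-objective "oracle call": optimize a weighted combination
# of the objectives over the original feasible region, without extra
# variables or constraints. Toy data: minimize (x1, x2) subject to
# x1 + x2 >= 1, x >= 0; the Pareto front is the segment from (1,0) to (0,1).
import numpy as np
from scipy.optimize import linprog

C = np.eye(2)                      # rows are the two objectives
A_ub = np.array([[-1.0, -1.0]])    # x1 + x2 >= 1, rewritten as <=
b_ub = np.array([-1.0])

def oracle(w):
    """Minimize w^T C x over the feasible set; return x and its objective image."""
    res = linprog(C.T @ w, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    return res.x, C @ res.x

# Probing the Pareto front from two different directions:
print(oracle(np.array([0.9, 0.1])))   # near (0, 1)
print(oracle(np.array([0.1, 0.9])))   # near (1, 0)
```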
A Novel Dynamic Programming Approach to the Train Marshalling Problem (1903.08364v1)
Hossein Falsafain, Mohammad Tamannaei
2019-03-20
Train marshalling is the process of reordering the railcars of a train in such a way that the railcars with the same destination appear consecutively in the final, reassembled train. The process takes place in the shunting yard by means of a number of classification tracks. In the Train Marshalling Problem (TMP), the objective is to perform this rearrangement of the railcars with the use of as few classification tracks as possible. The problem has been shown to be NP-hard, and several exact and approximation algorithms have been developed for it. In this paper, we propose a novel exact dynamic programming (DP) algorithm for the TMP. The worst-case time complexity of this algorithm (which is exponential in the number of destinations and linear in the number of railcars) is lower than that of the best presently available algorithm for the problem, which is an inclusion-exclusion-based DP algorithm. In practice, the proposed algorithm can provide a substantially improved performance compared to its inclusion-exclusion-based counterpart, as demonstrated by the experimental results.
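To illustrate the problem statement (not the paper's dynamic program), here is an exponential brute-force search over track assignments, feasible only for tiny instances; all names are invented:

```python
# Brute-force Train Marshalling: assign each railcar to one of k
# classification tracks; cars keep their relative order on each track,
# tracks are pulled in order 1..k, and in the reassembled train all cars
# with the same destination must be consecutive. Minimize k.
from itertools import product

def consecutive(seq):
    """True if equal destinations form contiguous blocks."""
    seen = set()
    for i, d in enumerate(seq):
        if d in seen and seq[i - 1] != d:
            return False
        seen.add(d)
    return True

def marshal(destinations):
    n = len(destinations)
    for k in range(1, n + 1):
        for assign in product(range(k), repeat=n):
            tracks = [[] for _ in range(k)]
            for car, t in zip(destinations, assign):
                tracks[t].append(car)
            train = [d for track in tracks for d in track]
            if consecutive(train):
                return k, assign
    # the loop always terminates: one track per distinct destination suffices

print(marshal([1, 2, 1, 2]))  # 2 tracks suffice: {1,1} and {2,2}
```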
Faster Algorithms for the Geometric Transportation Problem (1903.08263v1)
Pankaj K. Agarwal, Kyle Fox, Debmalya Panigrahi, Kasturi R. Varadarajan, Allen Xiao
2019-03-19
Let $R$ and $B$ be two point sets in $\mathbb{R}^d$, with $|R| + |B| = n$ and where $d$ is a constant. Next, let $\lambda : R \cup B \to \mathbb{N}$ such that $\sum_{r \in R} \lambda(r) = \sum_{b \in B} \lambda(b)$ be demand functions over $R$ and $B$. Let $d(\cdot, \cdot)$ be a suitable distance function such as the $L_p$ distance. The transportation problem asks to find a map $\tau : R \times B \to \mathbb{N}$ such that $\sum_{b \in B} \tau(r, b) = \lambda(r)$, $\sum_{r \in R} \tau(r, b) = \lambda(b)$, and $\sum_{r, b} \tau(r, b) \cdot d(r, b)$ is minimized. We present three new results for the transportation problem when $d(\cdot, \cdot)$ is any $L_p$ metric: - For any constant $\epsilon > 0$, an $O(n^{1+\epsilon})$ expected time randomized algorithm that returns a transportation map with expected cost $O(\log^2(1/\epsilon))$ times the optimal cost. - For any $\epsilon > 0$, a $(1+\epsilon)$-approximation in $O(n^{3/2} \epsilon^{-d} \operatorname{polylog}(U) \operatorname{polylog}(n))$ time, where $U$ is the maximum supply or demand of any point. - An exact strongly polynomial $O(n^2 \operatorname{polylog} n)$ time algorithm, for $d = 2$.
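For reference, the problem is a classical transportation LP and can be written down directly for tiny instances. A sketch using scipy.optimize.linprog with invented data; the paper's algorithms avoid this dense formulation entirely:

```python
# The transportation LP in direct form: variables tau(r, b) >= 0 with
# row sums lambda(r), column sums lambda(b), minimizing total cost
# sum tau(r, b) * d(r, b). Dense LP for a 2x2 toy instance.
import numpy as np
from scipy.optimize import linprog

R = np.array([[0.0, 0.0], [1.0, 0.0]])   # red points
B = np.array([[0.0, 1.0], [2.0, 2.0]])   # blue points
lam_R = np.array([3, 1])                 # supplies
lam_B = np.array([2, 2])                 # demands (totals match)

# Euclidean costs d(r, b), flattened row-major into 4 variables.
cost = np.linalg.norm(R[:, None, :] - B[None, :, :], axis=2).ravel()

# Equality constraints: row sums and column sums of the 2x2 plan.
A_eq, b_eq = [], []
for i in range(2):                        # sum_b tau(r_i, b) = lam_R[i]
    row = np.zeros(4); row[i * 2:(i + 1) * 2] = 1
    A_eq.append(row); b_eq.append(lam_R[i])
for j in range(2):                        # sum_r tau(r, b_j) = lam_B[j]
    col = np.zeros(4); col[j::2] = 1
    A_eq.append(col); b_eq.append(lam_B[j])

res = linprog(cost, A_eq=np.array(A_eq), b_eq=b_eq, bounds=[(0, None)] * 4)
print(res.x.reshape(2, 2), res.fun)      # optimal transportation map and cost
```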
Derandomized concentration bounds for polynomials, and hypergraph maximal independent set (1609.06156v7)
David G. Harris
2016-09-20
A parallel algorithm for maximal independent set (MIS) in hypergraphs has been a long-standing algorithmic challenge, dating back nearly 30 years to a survey of Karp & Ramachandran (1990). The best randomized parallel algorithm for hypergraphs of fixed rank $r$ was developed by Beame & Luby (1990) and Kelsen (1992), running in time roughly $(\log n)^{r!}$. We improve the randomized algorithm of Kelsen, getting a simpler analysis using more modern concentration inequalities, and improving the runtime to $(\log n)^{2^r}$. We also give a method for derandomizing concentration bounds for low-degree polynomials, which are the key technical tool used to analyze that algorithm. This leads to a deterministic PRAM algorithm also running in $(\log n)^{2^{O(r)}}$ time and $\operatorname{poly}(m, n)$ processors. This is the first deterministic algorithm with sub-polynomial runtime for hypergraphs of rank $r \geq 4$. Our analysis can also apply when $r$ is slowly growing; using this in conjunction with a strategy of Bercea et al. (2015) gives a deterministic MIS algorithm running in $n^{o(1)}$ time.
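As a reminder of the object being computed (not the paper's parallel algorithm), a sequential greedy MIS for hypergraphs:

```python
# Maximal independent set in a hypergraph: a vertex set I is independent if
# no hyperedge lies entirely inside I, and maximal if adding any vertex
# violates independence. This sequential greedy sketch only illustrates the
# definition; the paper's contribution is a fast deterministic *parallel*
# (PRAM) algorithm, which is much more involved.
def hypergraph_mis(n, edges):
    independent = set()
    for v in range(n):
        candidate = independent | {v}
        if not any(set(e) <= candidate for e in edges):
            independent = candidate
    return independent

# Rank-3 hypergraph on 5 vertices.
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
print(hypergraph_mis(5, edges))  # e.g., {0, 1, 3, 4}
```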
The Average-Case Complexity of Counting Cliques in Erdős-Rényi Hypergraphs (1903.08247v1)
Enric Boix Adserà, Matthew Brennan, Guy Bresler
2019-03-19
The complexity of clique problems on Erdős-Rényi random graphs has become a central topic in average-case complexity. Algorithmic phase transitions in these problems have been shown to have broad connections ranging from mixing of Markov chains to information-computation gaps in high-dimensional statistics. We consider the problem of counting $k$-cliques in $s$-uniform Erdős-Rényi hypergraphs $G(n, c, s)$ with edge density $c$, and show that its fine-grained average-case complexity can be based on its worst-case complexity. We prove the following: 1. Dense Erdős-Rényi hypergraphs: Counting $k$-cliques on $G(n, c, s)$ with $k$ and $c$ constant matches its worst-case complexity up to a $\operatorname{polylog}(n)$ factor. Assuming ETH, it takes $n^{\Omega(k)}$ time to count $k$-cliques in $G(n, c, s)$ if $k$ and $c$ are constant. 2. Sparse Erdős-Rényi hypergraphs: When $c = \Theta(n^{-\alpha})$, our reduction yields different average-case phase diagrams depicting a tradeoff between runtime and $k$ for each fixed $\alpha$. Assuming the best-known worst-case algorithms are optimal, in the graph case of $s = 2$, we establish that the exponent in $n$ of the optimal running time for $k$-clique counting in $G(n, c, s)$ is $\frac{\omega k}{3} - C \alpha \binom{k}{2} + O_{k,\alpha}(1)$, where $\frac{\omega}{9} \le C \le 1$ and $\omega$ is the matrix multiplication constant. In the hypergraph case of $s \ge 3$, we show a lower bound at the exponent of $k - \alpha \binom{k}{s}$, which surprisingly is tight exactly for the set of $\alpha$ above the Erdős-Rényi $k$-clique percolation threshold. Our reduction yields the first known average-case hardness result on Erdős-Rényi hypergraphs based on worst-case hardness conjectures. We also analyze several natural algorithms for counting $k$-cliques in $G(n, c, s)$ that establish our upper bounds in the sparse case $c = \Theta(n^{-\alpha})$.
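The naive counting baseline against which these fine-grained bounds are measured is easy to state. A sketch with arbitrary parameters (exhaustive enumeration, the trivial roughly n^k algorithm, shown here for the graph case s = 2):

```python
# Counting k-cliques in an Erdos-Renyi graph G(n, c) by exhaustive
# enumeration over all k-subsets: the naive baseline. Illustration only.
from itertools import combinations
import random

def erdos_renyi(n, c, seed=0):
    rng = random.Random(seed)
    return {frozenset(e) for e in combinations(range(n), 2) if rng.random() < c}

def count_k_cliques(n, edges, k):
    return sum(
        all(frozenset(p) in edges for p in combinations(S, 2))
        for S in combinations(range(n), k)
    )

G = erdos_renyi(30, 0.5)
print(count_k_cliques(30, G, 4))
```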
k-Means Clustering of Lines for Big Data (1903.06904v2)
Yair Marom, Dan Feldman
2019-03-16
The k-means for lines is a set of k centers (points) that minimizes the sum of squared distances to a given set of n lines in R^d. This is a straightforward generalization of the k-means problem, where the input is a set of n points. Related problems minimize the sum of (non-squared) distances, other norms, M-estimators, or ignore the t farthest points (outliers) from the k centers. We suggest the first provable PTAS algorithms for these problems, computing a (1+epsilon)-approximation in time O(n\log (n)/epsilon^2) for any given epsilon \in (0, 1) and constant integers k, d, t \geq 1, including support for streaming and distributed input. Experimental results on the Amazon EC2 cloud, as well as open-source code, are also provided.
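The basic primitive is the squared distance from a center point to a line. Below, that primitive plus a Lloyd-style alternating heuristic as a plausible baseline; this is not the paper's provable PTAS or coreset construction, and all names are invented:

```python
# Squared Euclidean distance from a center p to a line (point a, unit
# direction u) is ||(p - a) - ((p - a) . u) u||^2. A Lloyd-style heuristic:
# assign each line to its nearest center, then move each center toward the
# closest points on its assigned lines.
import numpy as np

def dist2_point_line(p, a, u):
    w = p - a
    return float(w @ w - (w @ u) ** 2)      # u must be a unit vector

def closest_point_on_line(p, a, u):
    return a + ((p - a) @ u) * u

def kmeans_lines(lines, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.standard_normal((k, lines[0][0].shape[0]))
    for _ in range(iters):
        assign = [min(range(k), key=lambda j: dist2_point_line(centers[j], a, u))
                  for a, u in lines]
        for j in range(k):
            pts = [closest_point_on_line(centers[j], a, u)
                   for (a, u), g in zip(lines, assign) if g == j]
            if pts:
                centers[j] = np.mean(pts, axis=0)
    return centers

lines = [(np.array([0.0, 0.0]), np.array([1.0, 0.0])),   # line y = 0
         (np.array([0.0, 5.0]), np.array([1.0, 0.0]))]   # line y = 5
print(kmeans_lines(lines, 2))
```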
Independent Range Sampling, Revisited Again (1903.08014v1)
Peyman Afshani, Jeff M. Phillips
2019-03-19
We revisit the range sampling problem: the input is a set of points where each point is associated with a real-valued weight. The goal is to store them in a structure such that, given a query range and an integer $t$, we can extract $t$ independent random samples from the points inside the query range, where the probability of sampling a point is proportional to its weight. This line of work was initiated in 2014 by Hu, Qiao, and Tao and was later followed up by Afshani and Wei. The first line of work mostly studied the unweighted but dynamic version of the problem in one dimension, whereas the second result considered the static weighted problem in one dimension as well as the unweighted problem in 3D for halfspace queries. We offer three main results and some interesting insights that were missed by the previous work: We show that it is possible to build efficient data structures for range sampling queries if we allow the query time to hold in expectation (the first result), or obtain efficient worst-case query bounds by allowing the sampling probability to be approximately proportional to the weight (the second result). The third result is a conditional lower bound that shows essentially one of the previous two concessions is needed. For instance, for 3D range sampling queries, the first two results give efficient data structures with near-linear space and polylogarithmic query time, whereas the lower bound shows that with near-linear space the worst-case query time must be close to $n^{2/3}$, ignoring polylogarithmic factors. To the best of our knowledge, this is the first such major gap between the expected and worst-case query time of a range searching problem.
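The query being studied, implemented naively by a linear scan (the trivial baseline the paper's data structures improve on; names are invented):

```python
# Weighted independent range sampling, naive version: given points with
# weights, a 1D query interval [lo, hi], and an integer t, draw t
# independent samples from the points inside the range, each with
# probability proportional to its weight.
import random

def range_sample(points, lo, hi, t, seed=0):
    rng = random.Random(seed)
    inside = [(x, w) for x, w in points if lo <= x <= hi]
    xs, ws = zip(*inside)
    return rng.choices(xs, weights=ws, k=t)  # t independent weighted draws

points = [(1, 1.0), (2, 5.0), (3, 1.0), (8, 2.0)]
print(range_sample(points, 1, 3, t=4))
```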
A New Lower Bound for Semigroup Orthogonal Range Searching (1903.07967v1)
Peyman Afshani
2019-03-19
We report the first improvement in the space-time trade-off of lower bounds for the orthogonal range searching problem in the semigroup model since Chazelle's result from 1990. This is one of the very fundamental problems in range searching, with a long history. Previously, Andrew Yao's influential result had shown that the problem is already non-trivial in one dimension: using $m$ units of space, the query time must be $\Omega(\alpha(m, n) + \frac{n}{m - n + 1})$, where $\alpha(m, n)$ is the inverse Ackermann function, a very slowly growing function. In $d$ dimensions, Bernard Chazelle proved that the query time must be $\Omega((\log_\beta n)^{d-1})$, where $\beta = 2m/n$. Chazelle's lower bound is known to be tight when space consumption is "high", i.e., $m = \Omega(n \log^{d+\varepsilon} n)$. We have two main results. The first is a lower bound that shows Chazelle's lower bound was not tight for "low space": we prove that we must have $m \cdot Q(n) = \Omega(n (\log n \log\log n)^{d-1})$, where $Q(n)$ is the query time. Our lower bound does not close the gap to the existing data structures; however, our second result is that our analysis is tight. Thus, we believe the gap is in fact natural, since lower bounds are proven for idempotent semigroups while the data structures are built for general semigroups and thus cannot assume (and use) the properties of an idempotent semigroup. As a result, we believe that to close the gap one must either study lower bounds for non-idempotent semigroups or build data structures for idempotent semigroups. We develop significantly new ideas for both of our results that could be useful in pursuing either of these directions.
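A small aside that makes the idempotent/general distinction tangible: the classic sparse-table structure achieves O(1) query time for an idempotent semigroup such as max precisely because overlapping covers are harmless, while for a non-idempotent semigroup such as + the same trick is unsound. A minimal sketch, unrelated to the paper's proof techniques:

```python
# Sparse table: 1D range queries over an *idempotent* semigroup (here: max)
# in O(1) time with O(n log n) space, by covering the query with two
# overlapping dyadic intervals. Overlap is harmless exactly because
# x op x = x; for a general semigroup (e.g., +) this double-counts.
def build_sparse_table(a, op=max):
    table = [a[:]]
    j = 1
    while (1 << j) <= len(a):
        prev = table[-1]
        table.append([op(prev[i], prev[i + (1 << (j - 1))])
                      for i in range(len(a) - (1 << j) + 1)])
        j += 1
    return table

def query(table, l, r, op=max):   # max over a[l..r] inclusive; needs idempotence
    j = (r - l + 1).bit_length() - 1
    return op(table[j][l], table[j][r - (1 << j) + 1])

a = [3, 1, 4, 1, 5, 9, 2, 6]
t = build_sparse_table(a)
print(query(t, 2, 6))  # max of a[2..6] = 9
```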