Data Structures And Algorithms
An algorithmic approach to the existence of ideal objects in commutative algebra (1903.03070v1)
Thomas Powell, Peter M Schuster, Franziskus Wiesnet
2019-03-07
The existence of ideal objects, such as maximal ideals in nonzero rings, plays a crucial role in commutative algebra. These are typically justified using Zorn's lemma, and thus pose a challenge from a computational point of view. Giving a constructive meaning to ideal objects is a problem which dates back to Hilbert's program, and today is still a central theme in the area of dynamical algebra, which focuses on the elimination of ideal objects via syntactic methods. In this paper, we take an alternative approach based on Kreisel's no-counterexample interpretation and sequential algorithms. We first give a computational interpretation to an abstract maximality principle in the countable setting via an intuitive, state-based algorithm. We then carry out a concrete case study, in which we give an algorithmic account of the result that in any commutative ring, the intersection of all prime ideals is contained in its nilradical.
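To give a flavor of the countable maximality principle being interpreted, here is a minimal, hypothetical sketch (not the paper's algorithm): enumerate the ring elements and grow a finite approximation of a maximal ideal, adding an element whenever the generated ideal stays proper. The oracle `stays_proper` and all names are assumptions.

```python
# Hypothetical sketch of a state-based approximation of a maximal ideal in a
# countable ring. `elements` enumerates the ring, and `stays_proper` is an
# assumed oracle deciding whether the ideal generated by a finite set of
# elements is still proper (does not contain 1).
from math import gcd
from functools import reduce

def approximate_maximal_ideal(elements, stays_proper, steps):
    """Grow a finite set of generators of an ideal, stage by stage."""
    state = []                      # current finite approximation (generators)
    for n, a in enumerate(elements):
        if n >= steps:
            break
        if stays_proper(state + [a]):
            state.append(a)         # a can be added without reaching the unit ideal
    return state

# Toy usage: integers modulo 12, where an ideal is proper iff the gcd of its
# generators with 12 is not 1.
def stays_proper_mod12(gens):
    return reduce(gcd, gens + [12]) != 1

print(approximate_maximal_ideal(range(12), stays_proper_mod12, steps=12))
# -> [0, 2, 4, 6, 8, 10], i.e. the maximal ideal (2) in Z/12
```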
On the Complexity of Approximating Wasserstein Barycenter (1901.08686v2)
Alexey Kroshnin, Darina Dvinskikh, Pavel Dvurechensky, Alexander Gasnikov, Nazarii Tupitsa, Cesar Uribe
2019-01-24
We study the complexity of approximating the Wasserstein barycenter of $m$ discrete measures, or histograms of size $n$, by contrasting two alternative approaches, both using entropic regularization. The first approach is based on the Iterative Bregman Projections (IBP) algorithm, for which our novel analysis gives a complexity bound proportional to $\frac{mn^2}{\varepsilon^2}$ to approximate the original non-regularized barycenter. Using an alternative accelerated-gradient-descent-based approach, we obtain a complexity proportional to $\frac{mn^{2.5}}{\varepsilon}$. As a byproduct, we show that the regularization parameter in both approaches has to be proportional to $\varepsilon$, which causes instability of both algorithms when the desired accuracy is high. To overcome this issue, we propose a novel proximal-IBP algorithm, which can be seen as a proximal gradient method that uses IBP on each iteration to make a proximal step. We also consider the question of scalability of these algorithms using approaches from distributed optimization and show that the first algorithm can be implemented in a centralized distributed setting (master/slave), while the second one is amenable to a more general decentralized distributed setting with an arbitrary network topology.
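As an illustration of the entropic-regularization approach, here is a minimal sketch of the standard Iterative Bregman Projections fixed-point iterations for a regularized barycenter of histograms on a common grid (Sinkhorn-style formulation; the variable names and the uniform-weights choice are assumptions, and this is not the paper's proximal-IBP variant).

```python
import numpy as np

def ibp_barycenter(hists, cost, gamma=0.01, iters=500):
    """Entropically regularized Wasserstein barycenter of m histograms
    (columns of `hists`, each of size n) via Iterative Bregman Projections."""
    n, m = hists.shape
    K = np.exp(-cost / gamma)           # Gibbs kernel for the ground cost
    u = np.ones((n, m))
    v = np.ones((n, m))
    weights = np.full(m, 1.0 / m)       # assumed: uniform barycenter weights
    for _ in range(iters):
        u = hists / (K @ v)             # match the marginal of each input measure
        # geometric mean across measures gives the current barycenter estimate
        q = np.prod((K.T @ u) ** weights, axis=1)
        v = q[:, None] / (K.T @ u)      # match the barycenter marginal
    return q

# Toy usage: two histograms on a 1D grid of 50 points.
grid = np.linspace(0, 1, 50)
cost = (grid[:, None] - grid[None, :]) ** 2
h1 = np.exp(-((grid - 0.25) ** 2) / 0.005); h1 /= h1.sum()
h2 = np.exp(-((grid - 0.75) ** 2) / 0.005); h2 /= h2.sum()
bary = ibp_barycenter(np.stack([h1, h2], axis=1), cost)
print(bary.sum())   # should be close to 1
```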
Fast Exact Dynamic Time Warping on Run-Length Encoded Time Series (1903.03003v1)
Vincent Froese, Brijnesh Jain
2019-03-07
Dynamic Time Warping (DTW) is a well-known similarity measure for time series. The standard dynamic programming approach to compute the DTW-distance of two length-$n$ time series, however, requires $O(n^2)$ time, which is often too slow in applications. Therefore, many heuristics have been proposed to speed up the DTW computation. These are often based on approximating or bounding the true DTW-distance, or on considering special inputs (e.g., binary or piecewise constant time series). In this paper, we present a fast and exact algorithm to compute the DTW-distance of two run-length encoded time series. This might be used for fast and accurate indexing and classification of time series in combination with preprocessing techniques such as piecewise aggregate approximation (PAA).
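For reference, here is a minimal sketch of the standard $O(n^2)$ dynamic program for the DTW-distance (with a squared-difference local cost, an assumption); the paper's contribution is an exact algorithm that works directly on run-length encoded inputs, which is not reproduced here.

```python
import math

def dtw_distance(x, y):
    """Standard O(len(x) * len(y)) dynamic program for the DTW distance."""
    n, m = len(x), len(y)
    # dp[i][j] = cost of the best warping path aligning x[:i] with y[:j]
    dp = [[math.inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2          # assumed local cost
            dp[i][j] = cost + min(dp[i - 1][j],        # insertion
                                  dp[i][j - 1],        # deletion
                                  dp[i - 1][j - 1])    # match
    return dp[n][m]

# Toy usage on two short, piecewise constant series (distance 0: perfect warp).
print(dtw_distance([1, 1, 2, 3, 3, 3], [1, 2, 2, 3]))
```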
Halin graphs are 3-vertex-colorable except even wheels (1903.02904v1)
A. Kapanowski, A. Krawczyk
2019-03-07
A Halin graph is a graph obtained by embedding a tree having no nodes of degree two in the plane, and then adding a cycle to join the leaves of the tree in such a way that the resulting graph is planar. According to the four color theorem, Halin graphs are 4-vertex-colorable. On the other hand, they are not 2-vertex-colorable because they have triangles. We show that all Halin graphs are 3-vertex-colorable except even wheels. We also show how to find the perfect elimination ordering of a chordal completion for a given Halin graph. The algorithms are implemented in Python using the graphtheory package. Generators of random Halin graphs (general or cubic) are included. The source code is available from the public GitHub repository.
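The role of wheels in this result can be checked directly on small instances with a brute-force 3-colorability test (plain Python below, not the graphtheory package mentioned in the abstract): wheels whose outer cycle is odd, i.e. the even wheels once the hub is counted, are exactly the small wheels that fail to be 3-colorable.

```python
from itertools import product

def wheel_edges(k):
    """Wheel graph: a hub (vertex 0) joined to a cycle on vertices 1..k."""
    edges = [(0, i) for i in range(1, k + 1)]               # spokes
    edges += [(i, i % k + 1) for i in range(1, k + 1)]      # outer cycle
    return edges

def is_3_colorable(num_vertices, edges):
    """Brute-force 3-colorability check (fine for small graphs only)."""
    for coloring in product(range(3), repeat=num_vertices):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False

for k in range(3, 9):
    print(k, is_3_colorable(k + 1, wheel_edges(k)))
# Wheels with an even outer cycle are 3-colorable; those with an odd
# outer cycle (even total vertex count, hub included) are not.
```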
A New Approach to Laplacian Solvers and Flow Problems (1611.07138v2)
Patrick Rebeschini, Sekhar Tatikonda
2016-11-22
This paper investigates the behavior of the Min-Sum message passing scheme to solve systems of linear equations in the Laplacian matrices of graphs and to compute electric flows. Voltage and flow problems involve the minimization of quadratic functions and are fundamental primitives that arise in several domains. Algorithms that have been proposed are typically centralized and involve multiple graph-theoretic constructions or sampling mechanisms that make them difficult to implement and analyze. On the other hand, message passing routines are distributed, simple, and easy to implement. In this paper we establish a framework to analyze Min-Sum to solve voltage and flow problems. We characterize the error committed by the algorithm on general weighted graphs in terms of hitting times of random walks defined on the computation trees that support the operations of the algorithms over time. For $d$-regular graphs with equal weights, we show that the convergence of the algorithms is controlled by the total variation distance between the distributions of non-backtracking random walks defined on the original graph that start from neighboring nodes. The framework that we introduce extends the analysis of Min-Sum to settings where the contraction arguments previously considered in the literature (based on the assumption of walk summability or scaled diagonal dominance) cannot be used, possibly in the presence of constraints.
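As a minimal stand-in for the distributed, message-passing flavor (not the paper's Min-Sum scheme), here is a damped Jacobi-style local update for the voltage problem $Lx = b$ on a weighted graph, in which each node repeatedly averages its neighbors' current values; names and the stopping rule are assumptions.

```python
import numpy as np

def jacobi_voltages(weights, b, iters=2000):
    """Approximately solve L x = b, where L is the graph Laplacian of the
    symmetric nonnegative weight matrix `weights`, using damped Jacobi updates.
    Each node only needs its neighbors' current values, mimicking the local
    nature of message-passing schemes. b must sum to zero."""
    degree = weights.sum(axis=1)
    x = np.zeros(weights.shape[0])
    for _ in range(iters):
        # blend the current value with the neighbor average (one exchange per edge)
        x = 0.5 * x + 0.5 * (b + weights @ x) / degree
        x -= x.mean()            # fix the additive constant (L has a null space)
    return x

# Toy usage: a path graph on 4 nodes, inject one unit of current at node 0
# and extract it at node 3.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0, -1.0])
x = jacobi_voltages(W, b)
L = np.diag(W.sum(axis=1)) - W
print(np.allclose(L @ x, b, atol=1e-6))
```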
Variational Graph Methods for Efficient Point Cloud Sparsification (1903.02858v1)
Daniel Tenbrinck, Fjedor Gaede, Martin Burger
2019-03-07
In recent years new application areas have emerged in which one aims to capture the geometry of objects by means of three-dimensional point clouds. Often the obtained data consist of a dense sampling of the object's surface, containing many redundant 3D points. These unnecessary data samples lead to high computational effort in subsequent processing steps. Thus, point cloud sparsification or compression is often applied as a preprocessing step. The two standard methods to compress dense 3D point clouds are random subsampling and approximation schemes based on hierarchical tree structures, e.g., octree representations. However, both approaches offer little flexibility for adjusting point cloud compression based on a priori knowledge of the geometry of the scanned object. Furthermore, these methods lead to suboptimal approximations if the 3D point cloud data are noisy. In this paper we propose a variational method defined on finite weighted graphs, which allows us to sparsify a given 3D point cloud while giving the flexibility to control the appearance of the resulting approximation based on the chosen regularization functional. The main contribution of this paper is a novel coarse-to-fine optimization scheme for point cloud sparsification, inspired by the efficiency of the recently proposed Cut Pursuit algorithm for total variation denoising. This strategy gives a substantial speed-up in computing sparse point clouds compared to a direct application to all points, as done in previous works, and renders variational methods applicable to this task. We compare different settings of our point cloud sparsification method on both unperturbed and noisy 3D point cloud data.
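For context, here is a minimal sketch of one of the standard baselines the abstract mentions, uniform grid (octree-leaf style) subsampling: points falling into the same cell are replaced by their centroid. This is not the proposed variational method; the cell size and names are assumptions.

```python
import numpy as np

def grid_subsample(points, cell_size):
    """Replace all points in each cubic cell of side `cell_size`
    by their centroid (a simple octree-leaf style compression)."""
    keys = np.floor(points / cell_size).astype(np.int64)     # cell index per point
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)                             # robust across numpy versions
    centroids = np.zeros((counts.size, points.shape[1]))
    np.add.at(centroids, inverse, points)                     # sum of points per cell
    return centroids / counts[:, None]                        # centroid per cell

# Toy usage: a dense noisy sampling of the unit sphere, compressed coarsely.
rng = np.random.default_rng(0)
pts = rng.normal(size=(20000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
sparse = grid_subsample(pts, cell_size=0.2)
print(pts.shape[0], "->", sparse.shape[0])
```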
The robust bilevel continuous knapsack problem (1903.02810v1)
Christoph Buchheim, Dorothee Henke
2019-03-07
We consider a bilevel continuous knapsack problem where the leader controls the capacity of the knapsack and the follower's profits are uncertain. Adopting the robust optimization approach and assuming that the follower's profits belong to a given uncertainty set, our aim is to compute a worst case optimal solution for the leader. We show that this problem can be solved in polynomial time for both discrete and interval uncertainty. In the latter case, we make use of an algorithm by Woeginger for a class of precedence constraint knapsack problems.
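For a fixed capacity chosen by the leader, the follower's subproblem is an ordinary continuous knapsack, which is solved optimally by a greedy pass over items sorted by profit-to-weight ratio; a minimal sketch of that subroutine follows (names are assumptions; the paper's robust bilevel machinery is not reproduced).

```python
def continuous_knapsack(profits, weights, capacity):
    """Follower's problem: maximize total profit under the capacity,
    allowing fractional items. Greedy by profit/weight ratio is optimal."""
    items = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    remaining = capacity
    solution = [0.0] * len(profits)
    for i in items:
        take = min(weights[i], remaining)       # take as much of item i as fits
        solution[i] = take / weights[i]         # fraction of item i taken
        remaining -= take
        if remaining == 0:
            break
    return solution

# Toy usage: the leader has fixed the capacity at 5.
print(continuous_knapsack(profits=[10, 7, 4], weights=[4, 3, 2], capacity=5))
# -> [1.0, 0.333..., 0.0]
```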
Sketching for Principal Component Regression (1803.02661v2)
Liron Mor-Yosef, Haim Avron
2018-03-07
Principal component regression (PCR) is a useful method for regularizing linear regression. Although conceptually simple, straightforward implementations of PCR have high computational costs and so are inappropriate when learning with large scale data. In this paper, we propose efficient algorithms for computing approximate PCR solutions that are, on one hand, high quality approximations to the true PCR solutions (when viewed as minimizers of a constrained optimization problem), and on the other hand admit rigorous risk bounds (when viewed as statistical estimators). In particular, we propose input sparsity time algorithms for approximate PCR. We also consider computing an approximate PCR in the streaming model, and kernel PCR. Empirical results demonstrate the excellent performance of our proposed methods.
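A minimal sketch of the generic approximate-PCR recipe (not the paper's specific input-sparsity-time algorithm): project the data onto an approximate top-$k$ principal subspace obtained via a randomized range finder, then solve least squares in that subspace. The function names, the Gaussian sketch, and the parameters are assumptions.

```python
import numpy as np

def approximate_pcr(X, y, k, oversample=10, seed=0):
    """Approximate principal component regression: regress y on the
    projection of X onto an approximate top-k principal subspace."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    G = rng.standard_normal((d, k + oversample))
    Q, _ = np.linalg.qr(X @ G)                  # orthonormal basis for range(X G)
    B = Q.T @ X                                 # small (k + oversample) x d matrix
    _, _, Vt = np.linalg.svd(B, full_matrices=False)
    V_k = Vt[:k].T                              # approximate top-k right singular vectors
    Z = X @ V_k                                 # scores in the principal subspace
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)   # regression in the reduced space
    return V_k @ w                              # coefficients in the original space

# Toy usage: low-rank-plus-noise design, PCR with k = 3 components.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 50))
X += 0.01 * rng.standard_normal(X.shape)
beta_true = rng.standard_normal(50)
y = X @ beta_true + 0.1 * rng.standard_normal(500)
beta_hat = approximate_pcr(X, y, k=3)
print(np.linalg.norm(X @ beta_hat - y) / np.linalg.norm(y))   # small relative residual
```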
A face cover perspective to embeddings of planar graphs (1903.02758v1)
Arnold Filtser
2019-03-07
It was conjectured by Gupta et al. [Combinatorica04] that every planar graph can be embedded into $\ell_1$ with constant distortion. However, given an $n$-vertex weighted planar graph, the best upper bound on the distortion is only $O(\sqrt{\log n})$, by Rao [SoCG99]. In this paper we study the terminated case, where there is a set $K$ of terminals, and the goal is to embed only the terminals into $\ell_1$ with low distortion. In a seminal paper, Okamura and Seymour [J.Comb.Theory81] showed that if all the terminals lie on a single face, they can be embedded isometrically into $\ell_1$. More generally, suppose that the terminals can be covered by $\gamma$ faces. In a recent paper, Krauthgamer, Lee and Rika [SODA19] showed an upper bound of $O(\log \gamma)$ on the distortion, improving previous results by Lee and Sidiropoulos [STOC09] and Chekuri et al. [J.Comb.Theory13]. Our contribution is a further improvement of the upper bound to $O(\sqrt{\log \gamma})$. Note that since every planar graph has at most $O(n)$ faces, any further improvement of this result will imply an improvement upon Rao's long-standing upper bound. It is well known that the flow-cut gap equals the distortion of the best embedding into $\ell_1$. In particular, our result provides a polynomial time $O(\sqrt{\log \gamma})$-approximation to the sparsest cut problem on planar graphs, for the case where all the demand pairs can be covered by $\gamma$ faces.
Stronger L2/L2 Compressed Sensing; Without Iterating (1903.02742v1)
Vasileios Nakos, Zhao Song
2019-03-07
We consider the extensively studied problem of $\ell_2/\ell_2$ compressed sensing. The main contribution of our work is an improvement over [Gilbert, Li, Porat and Strauss, STOC 2010] with faster decoding time and significantly smaller column sparsity, answering two open questions of the aforementioned work. Previous work on sublinear-time compressed sensing employed an iterative procedure, recovering the heavy coordinates in phases. We completely depart from that framework, and give the first sublinear-time scheme which achieves the optimal number of measurements without iterating; this new approach is the key step to our progress. Towards that, we satisfy the $\ell_2/\ell_2$ guarantee by exploiting the heaviness of coordinates in a way that was not exploited in previous work. Via our techniques we obtain improved results for various sparse recovery tasks, and indicate possible further applications to problems in the field, to which the aforementioned iterative procedure creates significant obstructions.
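To make the $\ell_2/\ell_2$ guarantee concrete, here is a small Count-Sketch-style recovery sketch (a standard stand-in, not the scheme of this paper): hashed linear measurements are taken, each coordinate is estimated by a median over repetitions, and the $k$ largest estimates are kept. The output $x'$ should then satisfy roughly $\|x - x'\|_2 \le (1+\varepsilon)\|x_{-k}\|_2$, where $x_{-k}$ is $x$ with its $k$ largest-magnitude entries removed. Parameters and names are assumptions.

```python
import numpy as np

def count_sketch_recover(x, k, rows=7, buckets=None, seed=0):
    """Measure x with a Count-Sketch (rows x buckets hashed measurements),
    estimate every coordinate by a median of signed bucket values, and keep
    the k largest estimates. A stand-in for an l2/l2 sparse recovery scheme."""
    n = x.size
    if buckets is None:
        buckets = 8 * k                                 # assumed bucket count
    rng = np.random.default_rng(seed)
    h = rng.integers(0, buckets, size=(rows, n))        # bucket hash per row
    s = rng.choice([-1.0, 1.0], size=(rows, n))         # sign hash per row
    table = np.zeros((rows, buckets))                   # the linear measurements
    for r in range(rows):
        np.add.at(table[r], h[r], s[r] * x)
    # estimate each coordinate from its buckets and keep the k heaviest
    estimates = np.median(s * table[np.arange(rows)[:, None], h], axis=0)
    out = np.zeros(n)
    top = np.argsort(-np.abs(estimates))[:k]
    out[top] = estimates[top]
    return out

# Toy usage: a vector with k large coordinates plus small noise.
rng = np.random.default_rng(1)
n, k = 10000, 10
x = 0.01 * rng.standard_normal(n)
x[rng.choice(n, k, replace=False)] += rng.choice([-1, 1], k) * 5.0
tail = np.sort(np.abs(x))[:-k]                          # x with the k heaviest removed
err = np.linalg.norm(x - count_sketch_recover(x, k))
print(err, "<= about", np.linalg.norm(tail))
```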