Data Structures And Algorithms | 2019-05-05

Log Diameter Rounds Algorithms for 2-Vertex and 2-Edge Connectivity (1905.00850v1)

Alexandr Andoni, Clifford Stein, Peilin Zhong

2019-05-02

Many modern parallel systems, such as MapReduce, Hadoop and Spark, can be modeled well by the MPC model. The MPC model captures well coarse-grained computation on large data: data is distributed to processors, each of which has a sublinear (in the input data) amount of memory, and we alternate between rounds of computation and rounds of communication, where each machine can communicate an amount of data as large as the size of its memory. This model is stronger than the classical PRAM model, and it is an intriguing question to design algorithms whose running time is smaller than in the PRAM model. In this paper, we study two fundamental problems, 2-edge connectivity and 2-vertex connectivity (biconnectivity). PRAM algorithms which run in O(log n) time have been known for many years. We give algorithms using roughly log-diameter rounds in the MPC model. Our main results are, for an n-vertex, m-edge graph of diameter D and bi-diameter D': 1) an O(log D log log_{m/n} n) parallel-time 2-edge connectivity algorithm, and 2) an O(log D' log^2 log_{m/n} n + log D log log_{m/n} n) parallel-time biconnectivity algorithm, where the bi-diameter D' is the largest cycle length over all the vertex pairs in the same biconnected component. Our results are fully scalable, meaning that the memory per processor can be O(n^δ) for arbitrary constant δ > 0, and the total memory used is linear in the problem size. Our 2-edge connectivity algorithm achieves the same parallel time as the connectivity algorithm of Andoni et al. (FOCS 2018). We also show a conditional lower bound of Ω(log D') for the biconnectivity problem.
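The MPC machinery itself does not fit in a few lines, but the classical primitive these algorithms build on is easy to state. Below is a minimal sequential sketch (function and variable names are mine, not the paper's): each sweep, every vertex adopts the smallest label in its closed neighborhood, so a component's minimum label spreads one hop per sweep and converges within diameter-many sweeps. MPC connectivity algorithms such as Andoni et al. (FOCS 2018) use graph exponentiation to compress this to roughly log D rounds.

```python
# Illustrative sketch of min-label propagation for connectivity
# (the O(D)-round baseline, NOT the paper's O(log D)-round algorithm).

def connected_components(n, edges):
    """Return (labels, sweeps): a component label per vertex, and the
    number of full sweeps until the labels stabilize."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    label = list(range(n))
    sweeps, changed = 0, True
    while changed:                       # converges in O(D) sweeps
        changed = False
        new = label[:]
        for u in range(n):
            m = min([label[u]] + [label[v] for v in adj[u]])
            if m < new[u]:
                new[u] = m
                changed = True
        label = new
        sweeps += 1
    return label, sweeps

labels, sweeps = connected_components(6, [(0, 1), (1, 2), (3, 4)])
```

On this toy graph, vertices 0-2 end with label 0, vertices 3-4 with label 3, and the isolated vertex 5 keeps label 5.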

Budget-Feasible Mechanism Design for Non-Monotone Submodular Objectives: Offline and Online (1905.00848v1)

Georgios Amanatidis, Pieter Kleer, Guido Schäfer

2019-05-02

The framework of budget-feasible mechanism design studies procurement auctions where the auctioneer (buyer) aims to maximize his valuation function subject to a hard budget constraint. We study the problem of designing truthful mechanisms that have good approximation guarantees and never pay the participating agents (sellers) more than the budget. We focus on the case of general (non-monotone) submodular valuation functions and derive the first truthful, budget-feasible and O(1)-approximate mechanisms that run in polynomial time in the value-query model, for both offline and online auctions. Prior to our work, the only O(1)-approximation mechanism known for non-monotone submodular objectives required an exponential number of value queries. At the heart of our approach lies a novel greedy algorithm for non-monotone submodular maximization under a knapsack constraint. Our algorithm builds two candidate solutions simultaneously (to achieve a good approximation), yet ensures that agents cannot jump from one solution to the other (to implicitly enforce truthfulness). Ours is the first mechanism for the problem where, crucially, the agents are not ordered with respect to their marginal value per cost. This allows us to appropriately adapt these ideas to the online setting as well. To further illustrate the applicability of our approach, we also consider the case where additional feasibility constraints are present. We obtain O(p)-approximation mechanisms for both monotone and non-monotone submodular objectives, when the feasible solutions are independent sets of a p-system. With the exception of additive valuation functions, no mechanisms were known for this setting prior to our work. Finally, we provide lower bounds suggesting that, when one cares about non-trivial approximation guarantees in polynomial time, our results are asymptotically best possible.
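For contrast with the paper's approach, here is a minimal sketch of the textbook cost-scaled greedy for submodular maximization under a knapsack (budget) constraint, shown on a monotone coverage objective; all names and the toy instance are mine. Note that the paper's mechanism deliberately avoids this marginal-value-per-cost ordering, which is exactly what makes its truthfulness argument work.

```python
# Baseline greedy (NOT the paper's mechanism): repeatedly add the affordable
# item with the highest marginal value per unit cost.

def greedy_knapsack(items, budget, value):
    """items: dict name -> cost; value: set function over sets of names."""
    chosen, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, cost in items.items():
            if name in chosen or spent + cost > budget:
                continue
            gain = value(chosen | {name}) - value(chosen)
            if cost > 0 and gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:
            return chosen
        chosen.add(best)
        spent += items[best]

# Toy coverage valuation: each agent covers a set of elements.
covers = {"a": {1, 2}, "b": {2, 3, 4}, "c": {5}}
val = lambda S: len(set().union(*(covers[x] for x in S))) if S else 0
winners = greedy_knapsack({"a": 1.0, "b": 2.0, "c": 2.5}, 3.0, val)
```

With budget 3.0 the greedy buys "a" (ratio 2.0) and then "b" (ratio 1.0), exhausting the budget.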

Near Optimal Jointly Private Packing Algorithms via Dual Multiplicative Weight Update (1905.00812v1)

Zhiyi Huang, Xue Zhu

2019-05-02

We present an improved (ε, δ)-jointly differentially private algorithm for packing problems. Our algorithm gives a feasible output that is approximately optimal up to a small additive factor as long as the supply of each resource is at least Õ(√m), where m is the number of resources and the Õ notation suppresses the dependence on the accuracy and privacy parameters. This improves the previous result by Hsu et al. (SODA '16), which requires the total supply to be at least Õ(m), and only guarantees approximate feasibility in terms of total violation. Further, we complement our algorithm with an almost matching hardness result, showing that Ω̃(√m) supply is necessary for any jointly differentially private algorithm to compute an approximately optimal packing solution. Finally, we introduce an alternative approach that runs in linear time, is exactly truthful, can be implemented online, and can be made ε-jointly differentially private, but requires a larger supply of each resource.
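The dual multiplicative-weight-update idea can be sketched without the privacy machinery. Below is a simplified, non-private sketch (function names, parameters, and the toy instance are mine): resource prices act as dual variables that rise multiplicatively as supply is consumed, and each agent is allocated if and only if its value exceeds the current price of its demand. The paper's algorithm additionally perturbs the dual updates to achieve joint differential privacy; that part is omitted here.

```python
# Non-private sketch of dual multiplicative weight update for packing.

def mwu_packing(agents, supply, eps=0.5):
    """agents: list of (value, demand dict resource -> amount);
    supply: dict resource -> capacity. Returns (allocated agents, usage)."""
    price = {r: 1.0 / cap for r, cap in supply.items()}  # initial dual weights
    used = {r: 0.0 for r in supply}
    allocation = []
    for value, demand in agents:
        cost = sum(price[r] * q for r, q in demand.items())
        feasible = all(used[r] + q <= supply[r] for r, q in demand.items())
        if feasible and value >= cost:
            allocation.append((value, demand))
            for r, q in demand.items():
                used[r] += q
                # multiplicative price increase, scaled by fraction consumed
                price[r] *= (1.0 + eps) ** (q / supply[r])
    return allocation, used

alloc, used = mwu_packing([(5.0, {"cpu": 1}), (1e-7, {"cpu": 1})], {"cpu": 2})
```

In the toy run, the high-value agent is allocated and drives the price up, after which the near-zero-value agent is priced out.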

Scalable and Jointly Differentially Private Packing (1905.00767v1)

Zhiyi Huang, Xue Zhu

2019-05-02

We introduce an (ε, δ)-jointly differentially private algorithm for packing problems. Our algorithm not only achieves the optimal trade-off between the privacy parameter and the minimum supply requirement (up to logarithmic factors), but is also scalable in the sense that the running time is linear in the number of agents n. Previous algorithms either run in time cubic in n, or require a minimum supply per resource that is polynomially larger than the best possible.

Static Pricing: Universal Guarantees for Reusable Resources (1905.00731v1)

Omar Besbes, Adam N. Elmachtoub, Yunjie Sun

2019-05-02

We consider a fundamental pricing model in which a fixed number of units of a reusable resource are used to serve customers. Customers arrive to the system according to a stochastic process and, upon arrival, decide whether or not to purchase the service depending on their willingness to pay and the current price. The service time during which the resource is used by the customer is stochastic, and the firm may incur a service cost. This model represents various markets for reusable resources such as cloud computing, shared vehicles, rotable parts, and hotel rooms. In the present paper, we analyze this pricing problem when the firm attempts to maximize a weighted combination of three central metrics: profit, market share, and service level. Under Poisson arrivals, exponential service times, and standard assumptions on the willingness-to-pay distribution, we establish a series of results that characterize the performance of static pricing in such environments. In particular, while an optimal policy is fully dynamic in such a context, we prove that a static pricing policy simultaneously guarantees 78.9% of the profit, market share, and service level from the optimal policy. Notably, this result holds for any service rate and number of units the firm operates. In the special case where there are two units and the induced demand is linear, we also prove that the static policy guarantees 95.5% of the profit from the optimal policy. Our numerical findings on a large testbed of instances suggest that the latter result is quite indicative of the profit obtained by the static pricing policy across all parameters.
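Under these assumptions a static price is easy to evaluate numerically: Poisson arrivals thinned by willingness to pay, exponential service, and c interchangeable units make the system an M/M/c/c loss queue, so the three metrics follow from the Erlang-B blocking probability. The sketch below assumes a linear demand curve and illustrative parameters of my choosing, not values from the paper.

```python
# Evaluate profit, market share, and service level of one static price
# in an M/M/c/c loss system (assumed linear demand; toy parameters).
from math import factorial

def erlang_b(c, rho):
    """Blocking probability of an M/M/c/c queue with offered load rho."""
    terms = [rho ** k / factorial(k) for k in range(c + 1)]
    return terms[-1] / sum(terms)

def static_price_metrics(price, c, Lambda, mu, a=1.0):
    lam = Lambda * max(0.0, 1.0 - a * price)  # assumed linear demand rate
    block = erlang_b(c, lam / mu)
    served_rate = lam * (1 - block)           # rate of admitted customers
    return {"profit": price * served_rate,
            "market_share": served_rate / Lambda,
            "service_level": 1 - block}

m = static_price_metrics(price=0.5, c=2, Lambda=4.0, mu=1.0)
```

Here the offered load is 2 Erlangs on 2 units, so 40% of interested customers are blocked and the remaining 60% are served.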

Fast hashing with Strong Concentration Bounds (1905.00369v2)

Anders Aamand, Jakob B. T. Knudsen, Mathias B. T. Knudsen, Peter M. R. Rasmussen, Mikkel Thorup

2019-05-01

Previous work on tabulation hashing of Pătraşcu and Thorup, from STOC'11 on simple tabulation and from SODA'13 on twisted tabulation, offered Chernoff-style concentration bounds on hash-based sums, but under some quite severe restrictions on the expected values of these sums. More precisely, the basic idea in tabulation hashing is to view a key as consisting of c characters, e.g., a 64-bit key as c = 8 characters of 8 bits. The character domain Σ should be small enough that character tables of size |Σ| fit in fast cache. The schemes then use O(1) tables of this size, so the space of tabulation hashing is O(|Σ|). However, the above concentration bounds only apply if the expected sums are much smaller than |Σ|. To see the problem, consider the very simple case where we use tabulation hashing to throw n balls into m bins and apply Chernoff bounds to the number of balls that land in a given bin. We are fine if m = n, for then the expected value is 1. However, with m = 2 bins, as when tossing n unbiased coins, the expectation n/2 far exceeds |Σ| for large data sets, e.g., data sets that don't fit in fast cache. To handle expectations that go beyond the limits of our small space, we need a much more advanced analysis of simple tabulation, plus a new tabulation technique that we call tabulation-permutation hashing, which is at most twice as slow as simple tabulation. No other hashing scheme of comparable speed offers similar Chernoff-style concentration bounds.
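Simple tabulation itself is only a few lines. The sketch below shows the basic scheme the abstract describes: a 64-bit key is split into c = 8 characters of 8 bits, each character indexes its own table of random 64-bit entries, and the lookups are XORed. Tabulation-permutation hashing (this paper's scheme) composes such a stage with tabulated random permutations; only the simple-tabulation stage is shown here.

```python
# Simple tabulation hashing: c cache-resident tables of |Sigma| = 256
# random 64-bit entries, one XORed lookup per 8-bit character.
import random

random.seed(42)                        # fixed seed for reproducibility
C, SIGMA = 8, 256                      # 8 characters over an 8-bit alphabet
TABLES = [[random.getrandbits(64) for _ in range(SIGMA)] for _ in range(C)]

def simple_tabulation(key):
    """Hash a 64-bit key by XORing one table lookup per character."""
    h = 0
    for i in range(C):
        h ^= TABLES[i][(key >> (8 * i)) & 0xFF]
    return h

h = simple_tabulation(2019)
```

Each evaluation is 8 table lookups and XORs, which is why the tables must fit in fast cache.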

Efficient approximation schemes for uniform-cost clustering problems in planar graphs (1905.00656v1)

Vincent Cohen-Addad, Marcin Pilipczuk, Michał Pilipczuk

2019-05-02

We consider the k-Median problem on planar graphs: given an edge-weighted planar graph G, a set of clients C ⊆ V(G), a set of facilities F ⊆ V(G), and an integer parameter k, the task is to find a set of at most k facilities whose opening minimizes the total connection cost of clients, where each client contributes to the cost with the distance to the closest open facility. We give two new approximation schemes for this problem: -- FPT Approximation Scheme: for any ε > 0, in time f(k, ε) · poly(n) we can compute a solution that has connection cost at most (1 + ε) times the optimum, with high probability. -- Efficient Bicriteria Approximation Scheme: for any ε > 0, in time f(ε) · poly(n) we can compute a set of at most (1 + ε)k facilities whose opening yields connection cost at most (1 + ε) times the optimum connection cost for opening at most k facilities, with high probability. As a direct corollary of the second result we obtain an EPTAS for Uniform Facility Location on planar graphs, with the same running time. Our main technical tool is a new construction of a "coreset for facilities" for k-Median in planar graphs: we show that in polynomial time one can compute a subset of facilities F₀ ⊆ F of bounded size (polynomial in k and log n for any fixed ε) with a guarantee that there is a (1 + ε)-approximate solution contained in F₀.
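To make the objective concrete, here is a brute-force sketch of k-Median on a tiny graph (illustration only; the paper's contribution is approximation schemes, not exhaustive search, and all names here are mine): compute all-pairs shortest paths, then pick the k-subset of facilities minimizing the sum over clients of the distance to the nearest open facility.

```python
# Brute-force k-Median on a small edge-weighted graph.
from itertools import combinations

def all_pairs(n, wedges):
    """Floyd-Warshall over n vertices and weighted edges (u, v, w)."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in wedges:
        d[u][v] = d[v][u] = min(d[u][v], w)
    for m in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][m] + d[m][j])
    return d

def k_median(n, wedges, clients, facilities, k):
    """Return (best connection cost, best facility set) by enumeration."""
    d = all_pairs(n, wedges)
    best = (float("inf"), None)
    for S in combinations(facilities, k):
        cost = sum(min(d[c][f] for f in S) for c in clients)
        best = min(best, (cost, S))
    return best

# Path 0-1-2-3 with unit weights; every vertex is both client and facility.
cost, opened = k_median(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1)],
                        [0, 1, 2, 3], [0, 1, 2, 3], k=1)
```

On the unit-weight path, opening facility 1 (or 2) is optimal with total connection cost 4.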

Tight Approximation Bounds for Maximum Multi-Coverage (1905.00640v1)

Siddharth Barman, Omar Fawzi, Suprovat Ghoshal, Emirhan Gürpınar

2019-05-02

In the classic maximum coverage problem, we are given subsets T_1, …, T_m of a universe [n] along with an integer k, and the objective is to find a subset S ⊆ [m] of size k that maximizes the number of covered elements, C(S) = |∪_{i∈S} T_i|. It is well known that the greedy algorithm for this problem achieves an approximation ratio of 1 − 1/e and there is a matching inapproximability result. We note that in the maximum coverage problem, if an element is covered by several sets, it is still counted only once. By contrast, if we change the problem and count each element as many times as it is covered, then we obtain a linear objective function, C^(∞)(S) = Σ_{i∈S} |T_i|, which can be easily maximized under a cardinality constraint. We study the maximum ℓ-multi-coverage problem, which naturally interpolates between these two extremes. In this problem, an element can be counted up to ℓ times but no more; hence, we consider maximizing the function C^(ℓ)(S) subject to the constraint |S| = k. Note that the case ℓ = 1 corresponds to the standard maximum coverage setting and ℓ = ∞ gives us a linear objective. We develop an efficient approximation algorithm that achieves an approximation ratio of 1 − ℓ^ℓ e^{−ℓ}/ℓ! for the ℓ-multi-coverage problem. In particular, when ℓ = 1 this factor is 1 − 1/e, and as ℓ grows the approximation ratio behaves as 1 − 1/√(2πℓ). We also prove that this approximation ratio is tight, i.e., establish a matching hardness-of-approximation result, under the Unique Games Conjecture.
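The capped objective is easy to state in code. Below is a sketch of the natural greedy for ℓ-multi-coverage (names and the toy instance are mine; the paper's result is the analysis of what ratio is achievable in this regime, not this particular heuristic): each element contributes up to ℓ times, and each step adds the set with the largest capped marginal gain.

```python
# Greedy heuristic for the l-multi-coverage objective C^(l).
from collections import Counter

def greedy_multicover(sets, k, l):
    """sets: list of element sets; pick k indices greedily maximizing C^l.
    Returns (chosen indices, achieved capped-coverage value)."""
    chosen, counts = [], Counter()
    def gain(i):
        # capped marginal gain of adding sets[i]
        return sum(min(l, counts[e] + 1) - min(l, counts[e]) for e in sets[i])
    for _ in range(k):
        best = max((i for i in range(len(sets)) if i not in chosen),
                   key=gain, default=None)
        if best is None:
            break
        chosen.append(best)
        counts.update(sets[best])
    return chosen, sum(min(l, c) for c in counts.values())

chosen, value = greedy_multicover([{1, 2, 3}, {1, 2}, {4}], k=2, l=2)
```

With ℓ = 2 the second set is still worth taking after the first, because elements 1 and 2 each count twice.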

Online Circle Packing (1905.00612v1)

Sándor P. Fekete, Sven von Höveling, Christian Scheffer

2019-05-02

We consider the online problem of packing circles into a square container. A sequence of circles has to be packed one at a time, without knowledge of the following incoming circles and without moving previously packed circles. We present an algorithm that packs any online sequence of circles with a combined area not larger than 0.350389 of the square's area, improving the previous best value of π/10 ≈ 0.31416; even in an offline setting, there is an upper bound of π/(3 + 2√2) ≈ 0.5390. If only circles with radii of at least 0.026622 are considered, our algorithm achieves the higher value 0.375898. As a byproduct, we give an online algorithm for packing circles into a 1 × b rectangle with b ≥ 1. This algorithm is worst-case optimal for b ≥ 2.36.
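For intuition about the online constraint, here is a naive online placement sketch (emphatically not the paper's algorithm, which packs circles into structured horizontal strips; names and the grid resolution are mine): scan a coarse grid of candidate centers in the unit square and commit each arriving circle to the first spot where it fits without overlapping previously packed circles, never moving earlier circles.

```python
# Naive online circle placement in the unit square via grid scan.

def try_place(placed, r, steps=200, tol=1e-9):
    """Try to place a circle of radius r; on success append (x, y, r)
    to placed and return True, else return False."""
    for i in range(steps + 1):
        for j in range(steps + 1):
            x = r + i * (1 - 2 * r) / steps
            y = r + j * (1 - 2 * r) / steps
            # non-overlap against all committed circles (tol allows touching)
            if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2 - tol
                   for px, py, pr in placed):
                placed.append((x, y, r))
                return True
    return False

placed = []
results = [try_place(placed, 0.25) for _ in range(4)]
```

Greedy commitment is exactly what hurts: three radius-0.25 circles are placed, but their positions then block a fourth that an offline algorithm (using the four corners) would fit easily.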

An output-sensitive algorithm for the minimization of 2-dimensional String Covers (1806.08131v2)

Alexandru Popa, Andrei Tanasescu

2018-06-21

String covers are a powerful tool for analyzing the quasi-periodicity of 1-dimensional data and find applications in automata theory, computational biology, coding, and the analysis of transactional data. A cover of a string s is a string c for which every letter of s lies within some occurrence of c. String covers have been generalized in many ways, leading to k-covers, λ-covers and approximate covers, and were studied in different contexts such as indeterminate strings. In this paper we generalize string covers to the context of 2-dimensional data, such as images. We show how they can be used for the extraction of textures from images and for the identification of primitive cells in lattice data. This has interesting applications in image compression, procedural terrain generation, and crystallography.
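The 1-dimensional notion being generalized is simple enough to check directly. A minimal sketch (function name is mine): c is a cover of s if every position of s lies inside some occurrence of c in s.

```python
# Check the 1-D string cover property by marking positions inside occurrences.

def is_cover(c, s):
    """Return True iff every letter of s lies within some occurrence of c."""
    covered = [False] * len(s)
    for i in range(len(s) - len(c) + 1):
        if s[i:i + len(c)] == c:
            for j in range(i, i + len(c)):
                covered[j] = True
    return all(covered)
```

For example, "aba" covers "ababa" (the overlapping occurrences at positions 0 and 2 jointly cover every letter), while "ab" does not: the final "a" lies in no occurrence of "ab".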


