
Parallel Construction of Triangulated Plane Graphs

José Fuentes-Sepúlveda, Meng He, Leo Ferres and Norbert Zeh

Introduction

Triangulated plane graphs have been used to represent polytopes, geographic maps, microchip layout and design, software engineering diagrams and surface meshes in computer graphics, among other applications. In practice, triangulated plane graphs can be large, as in VLSI (very large-scale integration) circuits or TIN (triangulated irregular network) surfaces. Thus, the design of space-efficient representations of triangulated plane graphs becomes useful.

A graph G=(V,E), with |V|=n vertices, |E|=m edges and f faces, is a triangulated planar graph if G is planar and the addition of any edge to G results in a nonplanar graph. A triangulated planar graph with a particular drawing or embedding is a triangulated plane graph. Triangulated plane graphs are also known as maximal plane graphs. Notice that a triangulated plane graph with n vertices has 3n-6 edges and 2n-4 faces, and all its faces are triangles.
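
These counts follow from Euler's formula; the short derivation below is standard and not specific to this work:

\begin{align*}
n - m + f &= 2 && \text{(Euler's formula for a connected plane graph)}\\
3f &= 2m && \text{(every face is a triangle and every edge borders two faces)}\\
m &= 3n - 6, \quad f = 2n - 4 && \text{(substituting } f = 2m/3 \text{)}
\end{align*}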

A common approach to construct succinct representations of triangulated plane graphs is to decompose them into a set of trees and subgraphs, represent these as parentheses sequences, and then apply ideas from succinct ordinal trees to support operations in optimal time. Operations of interest are the computation of the degree of a vertex (degree) and the adjacency test of two vertices (adjacency). Usually, the decomposition is achieved using canonical orderings [1, 2, 3, 4], realizers [5, 6] or orderly spanning trees [7]. In [8], the authors show that canonical orderings, realizers and orderly spanning trees are equivalent on triangulated plane graphs. We adopt the canonical ordering approach, but extend our results to realizers, which support more operations.
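
To illustrate the parentheses-sequence idea, the sketch below turns a rooted tree into a balanced parentheses sequence with a DFS that writes '(' on entering a vertex and ')' on leaving it. This is a minimal C illustration; the tree layout and function names are our own assumptions, not the exact encoding used in this work.

#include <stdlib.h>

/* Assumed tree layout: first_child[v] and next_sibling[v] list the children
 * of v in left-to-right order; -1 means "none". */
typedef struct {
    int n;                 /* number of vertices */
    int *first_child;
    int *next_sibling;
} tree_t;

/* Append the balanced parentheses sequence of the subtree rooted at v.
 * Recursive for clarity; an explicit stack avoids deep recursion on large trees. */
static void bp_encode(const tree_t *t, int v, char *out, int *pos) {
    out[(*pos)++] = '(';                                    /* enter v */
    for (int c = t->first_child[v]; c != -1; c = t->next_sibling[c])
        bp_encode(t, c, out, pos);
    out[(*pos)++] = ')';                                    /* leave v */
}

/* A tree with n vertices yields a sequence of exactly 2n parentheses. */
char *tree_to_bp(const tree_t *t, int root) {
    char *out = malloc(2 * t->n + 1);
    int pos = 0;
    bp_encode(t, root, out, &pos);
    out[pos] = '\0';
    return out;
}

Succinct ordinal tree structures store such a 2n-bit sequence plus o(n) bits of extra index so that navigation queries on the tree run in constant time.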

In this work, we provide a parallel algorithm to construct succinct representations of a triangulated plane graph given its canonical ordering. Additionally, we adapt our algorithm to compute succinct representations based on realizers.
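
As a rough illustration of the kind of parallelism involved (a sketch only: the function, the offset/length arrays and the per-vertex callback are hypothetical and do not reproduce the algorithm described in the paper), once each vertex knows from the canonical ordering where its parentheses go, the output sequence can be filled in independently per vertex with Cilk:

#include <cilk/cilk.h>   /* cilk_for; compile with the GCC Cilk branch and -fcilkplus */

/* Hypothetical sketch: offset[v] and len[v] give the position and length of
 * vertex v's contribution to the output sequence (e.g. obtained by a parallel
 * prefix sum), and symbol(v, k) returns the k-th character written for v. */
void fill_sequence(int n, const int *offset, const int *len,
                   char (*symbol)(int v, int k), char *seq) {
    cilk_for (int v = 0; v < n; v++) {
        for (int k = 0; k < len[v]; k++)
            seq[offset[v] + k] = symbol(v, k);   /* '(' or ')' */
    }
}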


The Code

The code for this work is available on GitHub.


Experiments

We implemented our algorithm in C and compiled it using GCC 4.9 with optimization level -O2 and the -ffast-math flag. There are no publicly available implementations that construct succinct representations of triangulated plane graphs, so we compared our parallel algorithm against its sequential version. All parallel code was compiled using the GCC Cilk branch. We tested our algorithm on six inputs, generated as described below.

Each input was generated in four stages. In the first stage, we used the function rnorm of R to generate random (x, y) coordinates. The only exception was the dataset worldcities, which corresponds to the coordinates of 2,243,467 unique cities in the world. The dataset containing the coordinates was created by MaxMind and is available here. The original dataset contains 3,173,959 cities, but some of them share the same coordinates; we selected the 2,243,467 cities with unique coordinates to build our dataset worldcities. In the second stage, we generated the Delaunay triangulation of the coordinates from the first stage. The triangulations were generated using Triangle, a piece of software dedicated to the generation of meshes and triangulations (our triangulations were generated with the options -cezCBVPNE). In the third stage, we generated the maximal plane graph and the canonical ordering of the Delaunay triangulation computed in the second stage. Both the graph and the canonical ordering were computed using the Boost library: the graph was generated with the function make_maximal_planar and the canonical ordering was computed with the function planar_canonical_ordering. Finally, in the fourth stage, we generated the canonical spanning tree of each maximal plane graph. We repeated each experiment five times and recorded the median time.

The experiments were carried out on a machine with four 16-core AMD Opteron® 6278 processors clocked at 2.4GHz, with 64KB of L1 cache per core, 2MB of L2 cache shared between two cores, and 6MB of L3 cache shared between 8 cores. The machine had 189GB of DDR3 RAM, clocked at 1333MHz. Running times were measured using the high-resolution (nanosecond) C functions in time.h. Memory usage was measured using the tools provided by malloc_count.
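
For reference, nanosecond-resolution timing with time.h is typically done as in the minimal sketch below; the clock choice and helper name are our assumptions rather than a quote from the benchmark code.

#include <stdio.h>
#include <time.h>

/* Elapsed wall-clock time in seconds between two timespec samples. */
static double elapsed_seconds(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);   /* unaffected by system clock adjustments */
    /* ... run the construction algorithm here ... */
    clock_gettime(CLOCK_MONOTONIC, &end);
    printf("elapsed: %.9f s\n", elapsed_seconds(start, end));
    return 0;
}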



This work was supported by the doctoral scholarship of CONICYT (21120974) and in part by the Emerging Leaders in the Americas scholarship programme.


Contact us at: jfuentess@udec.cl
We would like to thank Free DNS and HighCharts for their services