
On Leveraging GPUs for Security: discussing k-anonymity and pattern matching

Recent GPU improvements have led to the increasing use of these hardware devices for tasks unrelated to graphics. With the advent of GPGPU, programmers have increasingly taken advantage of GPUs, trying to speed up a wide variety of algorithms through parallelization. To make GPUs easier to program, graphics card manufacturers have developed parallel computing frameworks such as CUDA, by NVIDIA, and OpenCL, by the Khronos Group. Privacy and security problems can also benefit from parallel computing resources.

In our work we contributed to making GPUs more useful for security, showing that significant performance gains are possible with this hardware component.
Furthermore, Aparapi allowed us to extend Java's "Write Once, Run Anywhere" paradigm to parallel computing.
To reach this goal we studied the k-anonymity problem, emphasizing both its mathematical aspects and the need for fast computation of results, in order to preserve the privacy of the respondents in the many datasets collected all over the world.
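As a minimal illustration of the idea behind micro-aggregation-based k-anonymity (this is my own sketch, not the thesis code, and it uses one-dimensional data for brevity): each group of at least k records is replaced by the group centroid, so no record can be distinguished from the other members of its group.

```java
import java.util.Arrays;

// Sketch: enforce k-anonymity on sorted numeric data by replacing each
// consecutive group of at least k records with the group mean.
public class KAnonymityDemo {

    static double[] microAggregate(double[] data, int k) {
        double[] out = data.clone();
        int n = data.length;
        int start = 0;
        while (start < n) {
            int end = start + k;
            // If fewer than k records would remain, absorb them into this group.
            if (n - end < k) end = n;
            double sum = 0;
            for (int i = start; i < end; i++) sum += data[i];
            double mean = sum / (end - start);
            for (int i = start; i < end; i++) out[i] = mean;
            start = end;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] ages = {21, 22, 23, 40, 41, 42};
        System.out.println(Arrays.toString(microAggregate(ages, 3)));
        // → [22.0, 22.0, 22.0, 41.0, 41.0, 41.0]: each value occurs at least k = 3 times
    }
}
```

The released values lose some precision (the SSE mentioned later measures exactly this information loss), which is the price paid for indistinguishability.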
We performed an in-depth study of the actual issues involved in parallelizing algorithms on GPUs. As a consequence, we learned a great deal about the benefits and caveats of this approach.
In particular, we focused on the micro-aggregation technique, investigating two algorithms, MDAV and HyperCubes. We improved their performance and also proposed a hybrid version of the MDAV algorithm.
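For readers unfamiliar with MDAV, its core heuristic can be sketched in a few lines. The following is my own illustrative one-dimensional version, not the thesis implementation (the real algorithm works on multivariate records with Euclidean distances): repeatedly take the record farthest from the centroid of the remaining data, group it with its k-1 nearest neighbours, then do the same for the record farthest from it.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the MDAV (Maximum Distance to Average Vector) heuristic on 1-D data.
public class MdavSketch {

    static List<List<Double>> mdavGroups(List<Double> records, int k) {
        List<Double> remaining = new ArrayList<>(records);
        List<List<Double>> groups = new ArrayList<>();
        while (remaining.size() >= 2 * k) {
            double centroid = mean(remaining);
            double r = farthestFrom(remaining, centroid); // most distant record
            groups.add(extractGroup(remaining, r, k));    // r plus its k-1 nearest
            double s = farthestFrom(remaining, r);        // record farthest from r
            groups.add(extractGroup(remaining, s, k));
        }
        groups.add(new ArrayList<>(remaining));           // leftovers form the last group
        return groups;
    }

    static double mean(List<Double> xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.size();
    }

    static double farthestFrom(List<Double> xs, double ref) {
        double best = xs.get(0);
        for (double x : xs)
            if (Math.abs(x - ref) > Math.abs(best - ref)) best = x;
        return best;
    }

    // Remove and return the k records nearest to the seed.
    static List<Double> extractGroup(List<Double> remaining, double seed, int k) {
        remaining.sort((a, b) -> Double.compare(Math.abs(a - seed), Math.abs(b - seed)));
        List<Double> group = new ArrayList<>(remaining.subList(0, k));
        remaining.subList(0, k).clear();
        return group;
    }

    public static void main(String[] args) {
        List<Double> data = new ArrayList<>(List.of(1.0, 2.0, 3.0, 10.0, 11.0, 12.0, 20.0, 21.0, 22.0));
        System.out.println(mdavGroups(data, 3));
    }
}
```

The distance computations inside each iteration are independent across records, which is what makes the algorithm a natural candidate for GPU parallelization.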
We have seen that parallelization is very helpful in improving the performance of algorithms that contain parallelizable portions of code. Furthermore, GPUs make parallelization very effective, even though several issues related to the use of graphics cards remain.
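The observation that only the parallelizable portions pay off is Amdahl's law in essence: with parallel fraction p and n processors, the ideal speedup is 1 / ((1 - p) + p / n). A quick numeric illustration (the figures here are my own, not measurements from the thesis):

```java
// Amdahl's law: the serial fraction of a program bounds the achievable speedup,
// no matter how many processors (or GPU cores) are thrown at the parallel part.
public class Amdahl {

    static double speedup(double parallelFraction, int processors) {
        return 1.0 / ((1.0 - parallelFraction) + parallelFraction / processors);
    }

    public static void main(String[] args) {
        // Even with 1024 processors, a 10% serial portion caps the speedup near 10x.
        System.out.println(speedup(0.9, 1024));
    }
}
```

This is why the thesis's focus on the heavily data-parallel parts of MDAV and HyperCubes is where GPU acceleration can actually pay off.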
In conclusion, we collected a large amount of high-quality experimental data that can be used to better model actual GPU usage. These GPU-collected results are particularly interesting, as the trend for future GPUs and CPUs will see:
1. a widening of the performance gap, and
2. a unified address space, which will improve scaling.

Lessons learnt
Our work allowed us to make some observations about Aparapi and parallelization.
Despite several issues, such as the lack of support for kernels with multiple entry points, Aparapi has proven to be a very powerful tool that allows developers to fully exploit the power of GPUs.
We learned how to improve performance by avoiding the main bottleneck of GPU usage: the data transfer between CPU and GPU. We also learned how to achieve the best performance on different hardware architectures by tuning the chunkSize value.
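The chunking idea can be sketched in plain Java. This is an illustrative sketch under my own assumptions, not the thesis code: the helper name and the squaring workload are made up, and the GPU-specific calls are described only in comments (in Aparapi one would mark the kernel explicit with setExplicit(true) and move each chunk with put()/get() so that data crosses the CPU-GPU boundary only when needed).

```java
// Sketch: process a large array in tunable chunks, mimicking how work is split
// so each host-to-device transfer stays bounded and can be tuned per architecture.
public class ChunkedProcessing {

    static void squareInPlace(float[] data, int chunkSize) {
        for (int start = 0; start < data.length; start += chunkSize) {
            int end = Math.min(start + chunkSize, data.length);
            // In the GPU version, this is where the chunk would be copied to the
            // device, the kernel executed over [start, end), and the result
            // copied back; here a plain loop keeps the example runnable.
            for (int i = start; i < end; i++) data[i] = data[i] * data[i];
        }
    }

    public static void main(String[] args) {
        float[] xs = {1f, 2f, 3f, 4f, 5f};
        squareInPlace(xs, 2);
        System.out.println(java.util.Arrays.toString(xs));
        // → [1.0, 4.0, 9.0, 16.0, 25.0]
    }
}
```

A chunk size that is too small wastes time on transfer overhead, while one that is too large can exceed device memory; the optimum depends on the hardware, which is why chunkSize is worth tuning per architecture.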
In conclusion, parallelizing on GPUs is not trivial: the tools for developing parallel code are still maturing.

Future work
Having devised a hybrid version of the MDAV algorithm, we plan to extend our approach to allow the micro-aggregation of larger datasets. In practice, we will overcome GPU memory limits by partitioning the dataset to be micro-aggregated and computing the micro-aggregation of different subsets on different GPUs. In this approach it is important to pay attention to how data are distributed across the GPUs, in order to avoid increasing the SSE. One solution to this issue is to totally order the records in the dataset, by one or two components, before partitioning it.
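The partitioning step described above can be sketched as follows. This is my own illustrative version (the names partition and numGpus are mine, and the data is one-dimensional): totally order the records by the chosen component, then split the sorted dataset into contiguous slices, one per GPU, so that each slice contains records that are close to each other and independent micro-aggregation is less likely to inflate the SSE.

```java
import java.util.Arrays;

// Sketch: totally order records by one component, then split into
// contiguous partitions, one per GPU.
public class PartitionForGpus {

    static double[][] partition(double[] records, int numGpus) {
        double[] sorted = records.clone();
        Arrays.sort(sorted); // total order by the chosen component
        double[][] parts = new double[numGpus][];
        int base = sorted.length / numGpus;
        int rem = sorted.length % numGpus;
        int pos = 0;
        for (int g = 0; g < numGpus; g++) {
            int size = base + (g < rem ? 1 : 0); // spread the remainder evenly
            parts[g] = Arrays.copyOfRange(sorted, pos, pos + size);
            pos += size;
        }
        return parts;
    }

    public static void main(String[] args) {
        double[][] parts = partition(new double[]{5, 1, 9, 3, 7, 2}, 2);
        System.out.println(Arrays.deepToString(parts));
        // → [[1.0, 2.0, 3.0], [5.0, 7.0, 9.0]]
    }
}
```

Because the slices are contiguous in the ordering, groups formed within a slice stay compact, which is the property the SSE rewards.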

Chapter 1 Introduction

In recent years the need to solve complex problems that require large computing resources in ever shorter time has arisen. Some of these, in the scientific field, are: weather forecasting, seismic simulations, the simulation of chemical reactions, and studies on the human genome [1]. All of them belong to the set of "Grand Challenge Problems". As can be noted, solving these problems within strict time limits is of great interest to many areas of scientific research.
Other relevant problems exist in the computer science and business fields: database management and data mining [2], search engine development on the web [3], medical diagnostics, advanced graphics [4], virtual reality [5], network broadcasting, highly complex information security [6] and data confidentiality [7].
In the past, solving these problems used to take large amounts of time, due to the very large computing capacity needed to solve them. Now, thanks to the development of parallel computing, larger and cheaper resources are available that can be used to address such problems. Parallel computing potentially allows:
• solving larger problems than before;
• improving computing time;
• reducing costs (by splitting work among multiple cheaper processors).
Despite the hopes placed in parallel computing, some problems, requiring super-polynomial algorithms, are still far from being solved in acceptable time. However, parallelization can help reduce computing time and speed up the availability of results.

Master's thesis (Tesi di Laurea Magistrale)

Faculty: Mathematical, Physical and Natural Sciences

Author: Leonardo Jero

75 pages.



Available in PDF; consultation is exclusively in digital format.