You can always find my latest publications via my Google Scholar page.

But I also keep my own records, so here they are:



Importance of Data Distribution on Hive-based Systems for Query Performance: An Experimental Study


IEEE BigComp ’20
(Acceptance Rate: 20.8%)

SQL-on-Hadoop systems have been gaining popularity in recent years. One popular example is Apache Hive, the pioneer of SQL-on-Hadoop systems. Hive sits on top of the big data stack as the application layer. Besides the application layer, the Hadoop ecosystem is composed of three main layers: storage, resource management, and the processing engine. Demand from industry has led to the development of new, efficient components for each layer. As the ecosystem has evolved over time, Hive has adopted different execution engines as well. Understanding the strengths of each component is essential in order to exploit the full performance of the Hadoop ecosystem. Therefore, recent works in the literature study the importance of each layer separately. To the best of our knowledge, the present work is the first that focuses on the performance of the combination of the storage layer and the execution engine. In this work, we compare Hive's query performance using three different execution engines, MR, Tez and Spark, on skewed and well-balanced data distributions through the full TPC-H benchmark. Our results underline the importance of data distribution at the storage layer for the overall job performance of SQL-on-Hadoop systems and empirically show that an even distribution improves performance by up to 48% compared to a skewed distribution. Moreover, the present paper provides insightful findings by identifying particular SQL query cases that certain processing engines handle exceptionally well.
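
For anyone who wants to repeat the engine comparison on their own cluster, Hive's execution engine can be switched per invocation. Below is a minimal sketch (assuming a configured Hive CLI and a TPC-H database named tpch; both are hypothetical names here, not the paper's actual setup) that times the same query under MR, Tez and Spark:

import subprocess
import time

# Any benchmark query works; a simple aggregation over lineitem is used as an illustration.
QUERY = "SELECT l_returnflag, COUNT(*) FROM tpch.lineitem GROUP BY l_returnflag;"

for engine in ("mr", "tez", "spark"):
    start = time.time()
    # --hiveconf overrides hive.execution.engine for this invocation only.
    subprocess.run(
        ["hive", "--hiveconf", f"hive.execution.engine={engine}", "-e", QUERY],
        check=True,
    )
    print(f"{engine}: {time.time() - start:.1f}s")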

Full PDF

Link: AuthorsVersion.pdf
Link 2: AuthorsVersion.pdf

————————————–

HaRD: a heterogeneity‐aware replica deletion for HDFS


Journal of Big Data (2019) 6:94
Q1 in Information Systems

The Hadoop distributed file system (HDFS) is responsible for storing very large datasets reliably on clusters of commodity machines. HDFS takes advantage of replication to serve data requested by clients with high throughput. Data replication is a trade-off between better data availability and higher disk usage. Recent studies propose different data replication management frameworks that alter the replication factor of files dynamically in response to the popularity of the data, keeping more replicas for in-demand data to enhance the overall performance of the system. When data gets less popular, these schemes reduce the replication factor, which changes the data distribution and leads to an unbalanced data distribution. Such an unbalanced data distribution causes hot spots, low data locality and excessive network usage in the cluster. In this work, we first confirm that reducing the replication factor causes unbalanced data distribution when using Hadoop’s default replica deletion scheme. Then, we show that even keeping a balanced data distribution using WBRD (the data-distribution-aware replica deletion scheme we proposed in previous work) performs sub-optimally on heterogeneous clusters. In order to overcome this issue, we propose a heterogeneity-aware replica deletion scheme (HaRD). HaRD considers the nodes’ processing capabilities when deleting replicas; hence it stores more replicas on the more powerful nodes. We implemented HaRD on top of HDFS and conducted a performance evaluation on a 23-node dedicated heterogeneous cluster. Our results show that HaRD reduced execution time by up to 60% and 17% compared to Hadoop and WBRD, respectively.
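
HaRD itself lives inside the NameNode's replica-deletion logic, so it is not reproduced here, but the replication-factor reduction it reacts to can be triggered with standard HDFS tooling. A minimal sketch, assuming a running HDFS and a hypothetical dataset path:

import subprocess

# Lower the replication factor of a (hypothetical) cold dataset to 2.
# With vanilla HDFS, the NameNode chooses which excess replicas to delete;
# that choice is exactly what HaRD replaces with a heterogeneity-aware one.
subprocess.run(
    ["hdfs", "dfs", "-setrep", "-w", "2", "/data/cold/clickstream"],
    check=True,
)

# Inspect where the remaining replicas ended up.
subprocess.run(
    ["hdfs", "fsck", "/data/cold/clickstream", "-files", "-blocks", "-locations"],
    check=True,
)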

Full PDF

Link: AuthorsVersion.pdf
Link 2: AuthorsVersion.pdf


Towards a Better Replica Management for Hadoop Distributed File System


IEEE BigData Congress ’18
(Acceptance Rate: 18%)

The Hadoop Distributed File System (HDFS) is the storage of choice when it comes to large-scale distributed systems. In addition to being efficient and scalable, HDFS provides high throughput and reliability through the replication of data. Recent work exploits this replication feature by dynamically varying the replication factor of in-demand data as a means of increasing data locality and achieving performance improvement. However, to the best of our knowledge, no study has been performed on the consequences of varying the replication factor. In particular, our work is the first to show that although HDFS deals well with increasing the replication factor, it experiences problems with decreasing it. This leads to unbalanced data, hot spots, and performance degradation. In order to address this problem, we propose a new workload-aware balanced replica deletion algorithm. We also show that our algorithm successfully maintains the data balance and achieves up to 48% improvement in execution time when compared to HDFS, while only creating an overhead of 1.69% on average.
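
The toy sketch below only illustrates the general idea behind distribution-aware deletion, picking excess replicas from the most loaded nodes so the cluster stays balanced; it is not the algorithm from the paper, and all names in it are made up:

from collections import Counter

def pick_replicas_to_delete(block_locations, node_block_counts, excess):
    """Toy illustration: drop replicas from the most loaded nodes first.

    block_locations   - nodes currently holding a replica of this block
    node_block_counts - Counter of total blocks stored per node (cluster-wide)
    excess            - number of replicas that must be removed
    """
    ranked = sorted(block_locations, key=lambda n: node_block_counts[n], reverse=True)
    victims = ranked[:excess]
    for node in victims:
        node_block_counts[node] -= 1  # keep the running balance up to date
    return victims

# Hypothetical example: replication factor lowered from 3 to 2 for one block.
counts = Counter({"node-a": 120, "node-b": 80, "node-c": 95})
print(pick_replicas_to_delete(["node-a", "node-b", "node-c"], counts, excess=1))
# -> ['node-a']  (the most loaded node gives up its copy)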

Full PDF

Link: AuthorsVersion.pdf
Link 2: AuthorsVersion.pdf


Investigation of Replication Factor for Performance Enhancement in the Hadoop Distributed File System


ACM/SPEC International Conference on Performance Engineering ’18 PABS
(Acceptance Rate: 33%)

The massive growth in the volume of data and the demand for big data utilisation has led to an increasing prevalence of Hadoop Distributed File System (HDFS) solutions. However, the performance of Hadoop, and indeed HDFS, has some limitations and remains an open problem in the research community. The ultimate goal of our research is to develop an adaptive replication system; this paper presents the first phase of the work – an investigation into the replication factor used in HDFS to determine whether increasing the replication factor for in-demand data can improve the performance of the system. We constructed a physical Hadoop cluster for our experimental environment, using TestDFSIO as well as real-world and synthetic datasets (NOAA and TPC-H, queried through Hive) to validate our proposal. Results show that increasing the replication factor of the ‘hot’ data increases the availability and locality of the data, and thus decreases the job execution time.
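
As a rough illustration of the 'hot data' idea on a stock cluster (the adaptive system itself is not shown, and the warehouse path below is hypothetical), the replication factor of an in-demand table can be raised by hand before re-running the workload:

import subprocess

HOT_PATH = "/user/hive/warehouse/tpch.db/lineitem"  # hypothetical warehouse path

# Raise the replication factor of the in-demand table from the default 3 to 5
# and wait (-w) until the extra copies are in place before re-running queries.
subprocess.run(["hdfs", "dfs", "-setrep", "-w", "5", HOT_PATH], check=True)

# Confirm the new replication factor of the table's files.
subprocess.run(["hdfs", "dfs", "-stat", "%r", HOT_PATH], check=True)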

Full PDF

Link: AuthorsVersion.pdf
Link 2: AuthorsVersion.pdf