Understand Hadoop Ecosystem in Depth


Hadoop

Hadoop is an open-source project of the Apache Software Foundation. It is a framework written in Java, originally developed by Doug Cutting in 2005. It was created to support distributed processing for Nutch, the text search engine. Hadoop uses Google's MapReduce and Google File System technologies as its foundation.

Features of Hadoop

  1. It is optimized to handle massive quantities of structured, semi-structured and unstructured data using commodity hardware.
  2. It has a shared-nothing architecture.
  3. It replicates its data onto multiple machines so that if one goes down, the data can still be processed from another machine that stores a replica.
  4. Hadoop is designed for high throughput rather than low latency: it handles massive quantities of data in batch operations, so response times are not immediate.
  5. It complements Online Transaction Processing and Online Analytical Processing. However, it is not a replacement for an RDBMS.
  6. It is not good when work cannot be parallelized or when there are dependencies within the data.
  7. It is not good for processing small files. It works best with huge data files and data sets.

Versions of Hadoop

There are two versions of Hadoop available:

  1. Hadoop 1.0
  2. Hadoop 2.0

Hadoop 1.0

It has two main parts:

1. Data Storage Framework

It is a general-purpose file system called Hadoop Distributed File System (HDFS).

HDFS is schema-less.

It simply stores data files and these data files can be in just about any format.

The idea is to store files as close to their original form as possible.

This in turn gives business units and the organization much-needed flexibility and agility, without them having to worry in advance about what can be implemented on the data.
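
As a minimal sketch, the following Java snippet writes a raw file to HDFS through Hadoop's FileSystem API. The NameNode address and the path /data/raw/events.log are hypothetical, and it assumes the Hadoop client libraries are on the classpath; in practice the address usually comes from core-site.xml.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode address; normally picked up from core-site.xml.
            conf.set("fs.defaultFS", "hdfs://namenode:8020");

            try (FileSystem fs = FileSystem.get(conf);
                 OutputStream out = fs.create(new Path("/data/raw/events.log"))) {
                // HDFS imposes no schema: the file is stored byte-for-byte in its original form.
                out.write("2023-01-01,login,user42\n".getBytes(StandardCharsets.UTF_8));
            }
        }
    }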


2. Data Processing Framework

This is a simple functional programming model initially popularized by Google as MapReduce.

It essentially uses two functions, MAP and REDUCE, to process data.

The "Mappers" take in a set of key-value pairs and generate intermediate data (which is another list of key-value pairs).

The "Reducers" then act on this input to produce the output data.

The two functions work in isolation from one another, which enables the processing to be distributed in a highly parallel, fault-tolerant and scalable way.
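
A minimal word count sketch in Java illustrates the two roles: the Mapper emits an intermediate (word, 1) pair for every word it sees, and the Reducer sums those values for each word. The input and output paths are assumed to be passed on the command line, and the job needs the Hadoop MapReduce client libraries on the classpath.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Mapper: takes (offset, line) pairs and emits intermediate (word, 1) pairs.
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private final static IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reducer: receives (word, [1, 1, ...]) and emits (word, total count).
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }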

Limitations of Hadoop 1.0

  1. The first limitation was the requirement of MapReduce programming expertise.

  2. It supported only batch processing, which, although suitable for tasks such as log analysis and large-scale data mining projects, is pretty much unsuitable for other kinds of projects.

  3. One major limitation was that Hadoop 1.0 was tightly coupled computationally with MapReduce, which meant that the established data management vendors were left with two options:

    1. Either rewrite their functionality in MapReduce so that it could be executed in Hadoop or

    2. Extract data from HDFS and process it outside of Hadoop.

Neither option was viable, as both led to process inefficiencies caused by data being moved in and out of the Hadoop cluster.

Hadoop 2.0

In Hadoop 2.0, HDFS continues to be the data storage framework.

However, a new and separate resource management framework called Yet Another Resource Negotiator (YARN) has been added.

Any application capable of dividing itself into parallel tasks is supported by YARN.

YARN coordinates the allocation of subtasks of the submitted application, thereby further enhancing the flexibility, scalability and efficiency of applications.

It works by having an ApplicationMaster in place of the JobTracker, running applications on resources governed by a new NodeManager.

The ApplicationMaster is able to run any application, not just MapReduce.

This means it supports not only batch processing but also real-time processing. MapReduce is no longer the only data processing option.
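
As a rough illustration of this decoupling, the sketch below uses the YARN client API to list whatever applications the ResourceManager is currently tracking, each with its application type (MAPREDUCE, SPARK, TEZ and so on). It assumes the Hadoop YARN client libraries are available and that a yarn-site.xml pointing at a real cluster is on the classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.ApplicationReport;
    import org.apache.hadoop.yarn.client.api.YarnClient;

    public class ListYarnApplications {
        public static void main(String[] args) throws Exception {
            // Assumes yarn-site.xml on the classpath points at the cluster's ResourceManager.
            Configuration conf = new Configuration();
            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(conf);
            yarnClient.start();
            try {
                // Each entry is an independent application (MapReduce, Spark, Tez, ...)
                // whose ApplicationMaster negotiated containers from YARN.
                for (ApplicationReport app : yarnClient.getApplications()) {
                    System.out.println(app.getApplicationId() + "  "
                            + app.getApplicationType() + "  " + app.getName());
                }
            } finally {
                yarnClient.stop();
            }
        }
    }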

Advantages of Hadoop

It stores data in its native form. There is no structure imposed while keying in or storing data. HDFS is schema-less. It is only later, when the data needs to be processed, that structure is imposed on the raw data.

It is scalable. Hadoop can store and distribute very large datasets across hundreds of inexpensive servers that operate in parallel.

It is resilient to failure. Hadoop is fault tolerant. It replicates data diligently, which means that whenever data is sent to any node, the same data also gets replicated to other nodes in the cluster, thereby ensuring that in the event of node failure there will always be another copy of the data available for use.

It is flexible. One of the key advantages of Hadoop is that it can work with any kind of data: structured, unstructured or semi-structured. Also, the processing is extremely fast in Hadoop owing to the "move code to data" paradigm.

Hadoop Ecosystem

Following are the components of Hadoop ecosystem:

HDFS: Hadoop Distributed File System. It simply stores data files as close to their original form as possible.

HBase: It is Hadoop's database, a distributed column-oriented store, and it supports structured data storage for large tables (a short usage sketch follows this list).

Hive: It enables analysis of large datasets using a language very similar to standard ANSI SQL, which implies that anyone familiar with SQL should be able to access data on a Hadoop cluster.

Pig: It is an easy-to-understand data flow language, and it helps with the analysis of large datasets, which is very much the order of the day with Hadoop. Pig scripts are automatically converted to MapReduce jobs by the Pig interpreter.

ZooKeeper: It is a coordination service for distributed applications.

Oozie: It is a workflow scheduler system to manage Apache Hadoop jobs.

Mahout: It is a scalable machine learning and data mining library.

Chukwa: It is a data collection system for managing large distributed systems.

Sqoop: It is used to transfer bulk data between Hadoop and structured data stores such as relational databases.

Ambari: It is a web based tool for provisioning, managing and monitoring Hadoop clusters.
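
As a brief sketch of how the HBase entry above is used from code, the following writes and reads back one cell with the HBase Java client. The table name "users", the column family "info" and the row key are hypothetical; it assumes the table already exists and that hbase-site.xml is on the classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("users"))) {
                // Write one cell: row key "user1", column family "info", qualifier "name".
                Put put = new Put(Bytes.toBytes("user1"));
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
                table.put(put);

                // Read the same cell back.
                Get get = new Get(Bytes.toBytes("user1"));
                Result result = table.get(get);
                byte[] value = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
                System.out.println(Bytes.toString(value));
            }
        }
    }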

Hive

Hive is a data warehouse infrastructure tool to process structured data in Hadoop. It resides on top of Hadoop to summarize Big Data and makes querying and analyzing easy.

Hive is not

  1. A relational database

  2. Designed for Online Transaction Processing (OLTP).

  3. A language for real-time queries and row-level updates.

Features of Hive

  1. It stores schema in a database and processed data in HDFS.

  2. It is designed for OLAP.

  3. It provides an SQL-like language for querying called HiveQL or HQL (see the sketch after this list).

  4. It is familiar, fast, scalable and extensible.
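
As a rough sketch of HiveQL in practice, the example below runs a simple aggregate query through the Hive JDBC driver. The table "page_views", the HiveServer2 URL and the credentials are hypothetical; it assumes a running HiveServer2 and the hive-jdbc driver on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQlExample {
        public static void main(String[] args) throws Exception {
            // Registers the Hive JDBC driver (usually automatic with JDBC 4).
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            // Hypothetical HiveServer2 endpoint and database.
            String url = "jdbc:hive2://hiveserver:10000/default";
            try (Connection conn = DriverManager.getConnection(url, "hive", "");
                 Statement stmt = conn.createStatement();
                 // HiveQL reads like standard SQL; Hive turns it into cluster jobs.
                 ResultSet rs = stmt.executeQuery(
                         "SELECT page, COUNT(*) AS views FROM page_views GROUP BY page")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                }
            }
        }
    }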

Hive Architecture

The following components are contained in Hive Architecture:

  1. User Interface: Hive is a data warehouse infrastructure that enables interaction between the user and HDFS. The user interfaces that Hive supports are Hive Web UI, the Hive command line, and Hive HDInsight (on Windows Server).

  2. Metastore: Hive chooses respective database servers to store the schema or metadata of tables, databases, columns in a table, their data types, and HDFS mappings.

  3. HiveQL Process Engine: HiveQL is similar to SQL for querying schema information in the Metastore. It is one of the replacements for the traditional approach of writing a MapReduce program: instead of writing MapReduce in Java, we can write a query, and Hive processes it as a MapReduce job.

  4. Execution Engine: The conjunction of the HiveQL Process Engine and MapReduce is the Hive Execution Engine. The execution engine processes the query and generates the same results as MapReduce would, using a flavor of MapReduce internally.

  5. HDFS or HBase: The Hadoop Distributed File System or HBase are the data storage techniques used to store data in the file system.


Source: Stack Overflow