Monday 14 November 2016

HISTORY OF HADOOP

We are all familiar with Google, the dominant search engine of the web world. As Google grew through the late 1990s, it faced an explosion of data and had to work out how to store and process it at scale. After years of work on the problem, Google published its answer to the storage question in 2003: the Google File System (GFS), a technique for storing huge volumes of data across clusters of machines. In 2004 it followed up with a second technique, called MapReduce, for processing that data. So where GFS is a technique to store huge amounts of data, MapReduce is a technique to process it. The catch was that Google only described these techniques in white papers; it never released the implementations. Later, Yahoo, one of the largest players in web search at the time, built open implementations of these ideas: HDFS (Hadoop Distributed File System), based on the GFS design, in 2006, followed by MapReduce in 2007.
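To make the MapReduce idea concrete before moving on, here is a minimal word-count sketch using Hadoop's Java MapReduce API: the map phase emits a (word, 1) pair for every word it sees, and the reduce phase sums the counts for each word. This is only an illustrative example; the class name and the command-line input/output paths are placeholders.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts collected for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Input and output HDFS paths are taken from the command line.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}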

Before understanding Hadoop and its core components (HDFS and MapReduce), we first need some background on Big Data.

Big Data:

We are living in a world of data: everywhere we look, there is data, so the important questions are how to store it and how to process it. What exactly is Big Data?
Big Data is data whose volume exceeds the storage capacity and processing power of conventional systems. In other words, Big Data is a collection of data sets so large and complex that it becomes difficult to capture, store, process, retrieve, and analyse them with traditional database management techniques.
