Data today is generated by many different sources: sensors, CCTV cameras, online shopping sites, airlines, the hospitality industry, social networks such as Facebook, Twitter, and LinkedIn, blogs, and so on.
There are many real-world examples of such huge data. Facebook generates more than 500 TB of data in a single day, and the New York Stock Exchange generates more than 1 TB of data per day. These are just a few examples of Big Data.
Of all the data in the world today, roughly 90 percent was generated in the last two years alone; the remaining 10 percent accumulated over all the years since these systems were first introduced.
In fact, big data is about more than just the “bigness” of the data. Its key characteristics, coined by industry analysts, are the “Three V’s”: volume (size), velocity (speed), and variety (type). Since we are generating so much big data, we must be able to process all of it in a reasonable amount of time. Over time, however, data volumes have grown far faster than the processing speed available to keep up with them.
So our processing power must scale to match our big data, and Hadoop was introduced as a solution to exactly this problem. Hadoop is designed to store huge datasets and process them in parallel, in far less time than a single machine could.
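To give a feel for how Hadoop processes data, here is a minimal sketch of the classic MapReduce word-count job in Java. The class names and the command-line input/output paths are only illustrative; the Mapper, Reducer, and Job classes come from the standard org.apache.hadoop.mapreduce API.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map step: emit (word, 1) for every word in an input line.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce step: sum the counts for each word across all mappers.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
                              Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

On a real cluster, Hadoop splits the input files into blocks stored in HDFS and runs one mapper per split in parallel across the machines, which is how it can work through terabytes of data far faster than any single machine.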