- The Data Revolution
- Traditional Data Systems
- The Modern Data Architecture
- Industry Transformation
- Summary
The Modern Data Architecture
Existing data architectures are being pushed to the breaking point by the volume of data, the velocity of data ingestion, and the variety of data they need to process and store. Industry analysts predict that up to 80% of new data will be semi-structured and unstructured data (video, pictures, audio, documents, emails, and so on) coming from clickstreams, social media sentiment, machine sensors, server logs, RFID, and GPS (geographic) sources. More than 3ZB of data existed in 2013, and predictions of 40ZB by 2020 are not considered exaggerations.
The Modern Data Architecture (MDA) adds Hadoop to existing enterprise data platforms to relieve this data pressure. A Hadoop cluster can serve as a combined ingestion, storage, and compute grid. Hadoop, NoSQL, and in-memory solutions are becoming the new data components that make up a big data platform. The enterprise data landscape is evolving to support relational databases, enterprise data warehouses, NoSQL databases, in-memory solutions, and Hadoop, and organizations are blending these solutions to leverage what each platform does best.
Organizing data into a single data source, or just a few, allows a richer set of questions to be asked of the data, and adding correlation sources increases confidence in the answers while reducing risk. Different names are associated with these single-source platforms; data refinery, enterprise data hub, and data lake are popular ones. Each platform is similar: all use Hadoop, NoSQL, and various Apache frameworks to deliver data solutions, and different organizations use the different names to refer to single-source big data platforms that leverage a Hadoop software distribution. The terms are cross-pollinating with new variations, and new terms such as the data marshal yard are rising as well. What they have in common is the ingestion of data from all types of data sources, rapid data movement, and the flexibility of working quickly with schema-less data through schema-on-read capability.
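To make schema-on-read concrete, here is a minimal PySpark sketch (the clickstream path and field names are hypothetical): the raw JSON is landed untouched, and a structure is declared only at the moment the data is read for analysis.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("schema-on-read").getOrCreate()

# Schema-on-read: no schema was enforced when the raw clickstream
# JSON landed on the platform; the structure is declared only now,
# as the data is read for analysis.
click_schema = StructType([
    StructField("user_id", StringType()),
    StructField("url", StringType()),
    StructField("event_time", TimestampType()),
])

# Hypothetical landing path in the big data platform
clicks = spark.read.schema(click_schema).json("/data/raw/clickstream/")
clicks.createOrReplaceTempView("clicks")
spark.sql("SELECT url, COUNT(*) AS hits FROM clicks GROUP BY url").show()
```

If tomorrow's analysis needs different fields, only the schema declaration changes; the raw data on disk is never rewritten.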
Data warehouses are considered more rigid, with schema-on-write, ACID compliance, and much less data movement; they usually transform data into well-defined formats before it enters the warehouse. Big data platforms load the data first and then transform it as needed, based on the analytics and data usage. This is important to understand. Data analysts and scientists often spend 80% of their time finding and preparing data and only 20% analyzing it; a big data platform can significantly change this ratio, allowing data insight to occur faster. We introduced the data refinery and the data lake earlier, but all the definitions are worth describing together. All of them assume centralizing the data into a single source; they simply emphasize different areas.
- A big data refinery is a data platform that can store, transform, and process polystructured data sources. Refining the data creates new insights by leveraging all the different types of data sources. A data refinery controls more tightly which data can be ingested into it.
- A data lake is one way to store and process data in its native format: a single place to land all data and do analysis regardless of toolset. Putting data from different sources together allows data mashups and correlation to occur. A data lake has more flexibility in allowing new types of data in for exploration, but that does not mean any data at all should go in; an indiscriminate lake becomes swamped and loses veracity. The term data lake is more popular with the Hortonworks distribution, but it is a concept and is not tied to any distribution.
- An enterprise data hub, also referred to as a data lake, is a big data platform in which Hadoop is the central data platform, with data flowing in and out of other data platforms in a hublike architecture. The enterprise data hub is more popular with the Cloudera distribution; again, it is a concept that does not have to be tied to a distribution.
- A data marshal yard describes a big data platform with the emphasis on data movement, similar to a railroad marshal yard where trains move in and out of a central location.
It is incredibly painful, inefficient, and expensive for organizations to have data take many hops as it is stored, transformed, and then analyzed. The data lake can be used as a data ingestion, analytics, and/or compute grid. It can absorb much of the ETL processing done by data warehouses and be used to offload data from a data warehouse, which lets the warehouse be right-sized and stay at a fixed size. If a data warehouse can store only 6 months of data and the volume of data keeps growing, moving older data to a data lake relieves the growth pressure on the warehouse, as the sketch below illustrates. Letting the big data platform handle data ingestion also frees compute cycles for analytics rather than ingestion.
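As a hedged illustration of that offload pattern (the JDBC URL, table name, credentials, and 6-month retention window are all hypothetical; a tool such as Apache Sqoop plays a similar role in practice), a periodic PySpark job might copy aged rows out of the warehouse and land them in the lake as Parquet:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, expr

spark = SparkSession.builder.appName("warehouse-offload").getOrCreate()

# Hypothetical warehouse connection; only rows older than the
# warehouse's 6-month retention window are pulled out.
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://warehouse:5432/edw")
          .option("dbtable", "sales.orders")
          .option("user", "etl_user")
          .option("password", "etl_password")
          .load())

aged = orders.filter(col("order_date") < expr("date_sub(current_date(), 180)"))

# Land the aged rows in the data lake; the warehouse can then purge
# them and stay at a fixed size while the lake keeps full history.
aged.write.mode("append").parquet("/data/lake/orders_archive/")
```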
A data lake enables business units to land data once, in a single place. Backups, analytics, joins, security, downstream reporting, data science, and data ingestion can all be performed in one system. Hadoop's distributed file system can easily store data in any form, so data can be kept in its raw form. Hadoop is a schema-on-read platform, so a schema does not need to be applied to new data right away; data can be integrated when it is needed. Data lakes take data science to a new level, with data mashups allowing a 720° view of a customer. A 360° view of the customer is an older term that refers to having a complete view of the customer; a 720° view adds a second 360° view built from unstructured and semi-structured data. This allows cross-silo and cross-channel analysis, as sketched below. Businesses such as banks, credit card companies, insurers, retailers, health care, financial services, telcos, gaming, and Internet companies all need this capability.
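As a sketch of such a cross-silo mashup (the paths, table layouts, and field names are hypothetical), structured CRM records curated in the lake can be joined with semi-structured social sentiment read directly from its raw landing area:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg

spark = SparkSession.builder.appName("customer-720").getOrCreate()

# Structured side: curated CRM records, the traditional 360-degree view.
crm = spark.read.parquet("/data/lake/crm/customers/")

# Semi-structured side: raw social sentiment, read schema-on-read,
# supplying the second 360 degrees.
sentiment = spark.read.json("/data/raw/social_sentiment/")

# Cross-silo join: both sources landed once in the lake, so the
# mashup is a single query instead of a multi-system data hop.
view_720 = (crm.join(sentiment, "customer_id")
               .groupBy("customer_id", "segment")
               .agg(avg("sentiment_score").alias("avg_sentiment")))
view_720.show()
```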
Organizational Transformation
Transforming an organization is very difficult. Transformation includes bringing in vendors and external consultants who recommend different software, tools, and methods for building a big data environment. Internal employees must learn new approaches and new technologies. Business units must find the right use cases, know which questions they want to ask, and know what insights they need to address business challenges. Business units must also have confidence that moving their data into a big data platform is the right thing to do. Companies must become learning organizations and adapt to the rate of innovation in open source. Organizations must find data analysts and data scientists with the skill to ask the right questions of the data and uncover new insights. Finding the right blend of open and closed software is important, and minimizing the political, territorial, and technical silos built up around data over the years takes time. At the same time, there is a war for talent, with organizations competing for a small pool of qualified people; the supply of skilled talent is nowhere close to keeping up with demand. Competing in today's digital environment requires a company to use data as a corporate asset and a competitive advantage.
Successful data-driven IT organizations need teams with skills equivalent to being able to build and change a plane while flying it. Hadoop is maturing and evolving quickly, demonstrating the power of open source innovation. Everyone understands that change is required, but most are not prepared for the speed of that change, and organizations are challenged to absorb and adapt to it. Success with Hadoop requires a new way of thinking as well as a sense of urgency. Organizations now want batch processing, interactive queries, and real-time queries from their big data platforms. This requires building the right combination of software frameworks, tools, in-memory software, distributed search, and NoSQL databases around Hadoop, while leveraging existing software from proprietary software firms.
Organizations need to greatly reduce data silos and centralize data more efficiently for better correlation and analytics. The analytical systems that maximize business value are data repositories that allow data from multiple sources of different types to be correlated to find new data patterns with significantly increased accuracy. The world of relational databases and data warehouses, which requires deleting, ignoring, aggregating, and summarizing data because of the high cost of storage, is a losing formula for descriptive and predictive analytics; it is the detailed data that contains the golden insights for success. Big data platforms bring together the components required for fast and accurate analytics: low-cost storage, schema-on-read, linearly scalable platforms, supercomputer-class platforms that leverage large numbers of spinning disks on commodity hardware, and highly parallelized processing frameworks, one of which is sketched below. Hadoop is the platform that enables all these important components to come together into a single data repository.
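As a minimal sketch of one such highly parallelized framework, the classic MapReduce word count can be written as a Hadoop Streaming job in Python (the input and output paths are illustrative, and the exact streaming jar location varies by distribution):

```python
#!/usr/bin/env python
"""Word count as a Hadoop Streaming job (illustrative paths).

A possible launch command, assuming the streaming jar that ships
with the distribution:
  hadoop jar hadoop-streaming.jar \
    -input /data/raw/serverlogs -output /data/out/wordcount \
    -mapper "python wordcount.py map" \
    -reducer "python wordcount.py reduce" \
    -file wordcount.py
"""
import sys

def mapper():
    # Runs in parallel across the cluster, one task per input split.
    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word.lower())

def reducer():
    # Streaming sorts map output by key, so all counts for a given
    # word arrive consecutively at a single reducer.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rstrip("\n").split("\t", 1)
        if word != current and current is not None:
            print("%s\t%d" % (current, count))
            count = 0
        current = word
        count += int(n)
    if current is not None:
        print("%s\t%d" % (current, count))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

Each mapper task processes its own block of the distributed file, which is how a large number of commodity nodes can work on one dataset at the same time.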
The following are some of the key goals for big data:
- Be able to make business decisions faster than your competition with a higher degree of confidence and less risk.
- Increase the type and number of questions you can ask of your data for more business insight and value.
- Increase the level of efficiency and competitiveness of an organization.
- Create an environment that provides new business insight through data.
Saving money by using commodity hardware is also important, but making sure the business results are achieved should be a higher priority. A key part of being successful with big data is transforming into a learning organization. Time needs to be invested in educating the business units on what big data is and how it benefits the business unit as well as the organization. Most companies do not do this well, and they end up dragging the business unit across the finish line. Worse, poor education can significantly delay projects and push the benefits of big data out by months or even years. There is a lot to big data platforms such as Hadoop, NoSQL, and the ecosystem surrounding them, so the technical teams must be educated as well. The problem is that the traditional training organizations rely on does not always build the type of knowledge, skill, understanding, and expertise the internal teams need. Rather than starting from which training classes your teams can take, look at the evolution and growth of big data, the skill set that needs to be created, and how quickly the technology is moving, and then decide how best to build those skills within your organization.