Companies of all sizes and sectors are joining the big data movement, along with the data scientists and big data solution architects who support them. With the big data market on track to nearly double in size by 2025 and the amount of user data still growing, now is an excellent time to become a big data scientist.
Today, we’re going to get you started on your big data journey and walk you through the core concepts, uses, and tools that any aspiring data scientist needs.
What is big data?
Big data refers to large collections of data that are so complex and vast that they cannot be interpreted by humans or traditional data management systems. When properly analyzed using modern tools, these vast amounts of data provide businesses with the information they need to make informed decisions.
New software developments have recently made it possible to use and track large datasets. Much of this user information may appear meaningless and unrelated to the human eye. However, big data analytics tools can track relationships between hundreds of data types and sources to provide useful business intelligence.
All big datasets have three defining properties, known as the three Vs:
- Volume: Big datasets should include millions of low-density, unstructured data points. Big data companies can store tens of terabytes to hundreds of petabytes of user data. With the advent of cloud computing, companies now have access to zettabytes of data! All data is saved regardless of apparent importance. Big data experts argue that sometimes the answers to business questions can lie in unexpected data.
- Velocity: Velocity refers to the speed at which big data is created and processed. Big data is received, analyzed, and interpreted in rapid succession to provide the most up-to-date results, and many big data platforms even record and interpret data in real time.
- Variety: Big datasets combine many different types of data in the same unstructured store. Traditional data management systems use structured relational databases that hold specific data types with established relationships to other data types. Big data analytics programs draw on many different types of unstructured data and look for correlations across all of them, which often provides a more complete picture of how each factor relates to the others.
Correlation versus causation
Big data analysis only finds correlations between factors, not causal relationships. In other words, it can determine whether two things are related, but it cannot determine whether one causes the other.
Data analysts must decide which data relationships are valid and which are just random correlations.
History of big data
The concept of big data has been around since the 1960s and 1970s, but at the time organizations lacked the means to collect and store that much data.
Practical big data began to take shape around 2005, when developers at organizations such as YouTube and Facebook realized just how much data their platforms were generating in day-to-day operation.
Around the same time, new storage frameworks and systems such as Hadoop and NoSQL databases enabled data scientists to store and analyze larger datasets than ever before, and open-source frameworks such as Apache Hadoop and Apache Spark provided an ideal platform for big data to grow.
Big data continues to evolve and more companies are recognizing the benefits of predictive analytics. Modern approaches to big data use Internet of Things (IoT) and cloud computing strategies to record more data from around the world and machine learning to create more accurate models.
While it is difficult to predict what the next big data advance will be, it is clear that big data will continue to become more scalable and efficient.
What is big data used for?
Big data applications are useful throughout the business world, not just in technology. Here are some examples of using big data:
- Product Decision Making: Companies like Netflix and Amazon use big data to design products around future product trends. They can use combined data on past product performance to predict which products consumers will want before they want them, and they can use pricing data to determine the optimal price for their target customers.
- Testing: Big data allows you to analyze millions of error reports, hardware specifications, sensor readings, and past changes to recognize points of failure in a system before they occur. This helps service technicians prevent problems and costly system downtime.
- Marketing: Marketers collect big data from previous marketing campaigns to optimize future advertising campaigns. By combining data from retailers and online advertising, big data can help optimize strategies by discovering subtle preferences for ads with certain image types, colors, or word choices.
- Healthcare: Medical professionals use big data to identify drug side effects and detect early signs of illness. For example, imagine there is a new disease that strikes people quickly and without warning, but many affected patients reported headaches at their last annual checkup. Big data analysis would flag this as a clear correlation, whereas a human reviewer might miss it because the cases are scattered across time and location.
- Customer Experience: Big data is used by post-launch product teams to gauge customer experience and product perception. Big data systems can analyze large datasets from social media mentions, online reviews, and product testimonials to better understand what problems customers are experiencing and how well a product is perceived.
- Machine Learning: Big data has become an important part of machine learning and artificial intelligence technologies because it offers a huge pool of data to draw from. Machine learning engineers use large datasets as varied training data to build more accurate and robust prediction systems.
How does big data work?
On its own, big data cannot provide the business intelligence that many companies are looking for. You will need to process the data before it can provide useful information.
This process includes 3 main steps:
1. Data ingestion
In the first stage, data enters the system in huge quantities. This data is of many different types and cannot yet be organized into any usable schema. The repository at this stage is called a data lake, because all of the data is pooled together and cannot yet be told apart.
Your company's system must have the processing power and storage capacity to handle this volume of data. On-premises storage is the most secure option, but it can easily be overwhelmed by the sheer amount of data involved.
Cloud computing and distributed storage are often the key to efficient ingestion: they let you spread storage across multiple machines and databases in a system.
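As a rough illustration of this step, the sketch below appends a batch of raw records to a data lake directory on HDFS, Hadoop's distributed file system. The path and the example records are hypothetical, and a real pipeline would typically ingest data continuously rather than from a hard-coded array.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RawEventWriter {
    public static void main(String[] args) throws Exception {
        // Connect to the cluster's distributed file system (HDFS).
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical data lake location for raw, unprocessed events.
        Path rawFile = new Path("/datalake/raw/events-" + System.currentTimeMillis() + ".jsonl");

        // Write incoming records exactly as they arrive; no schema is imposed at this stage.
        try (FSDataOutputStream out = fs.create(rawFile)) {
            String[] incoming = {
                "{\"userId\": 42, \"action\": \"view\"}",
                "{\"userId\": 7, \"action\": \"purchase\"}"
            };
            for (String record : incoming) {
                out.writeBytes(record + "\n");
            }
        }
    }
}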
2. Data analysis
Next, you need a system that automatically cleans and organizes the data; data at this scale and frequency is far too large to organize manually. Popular strategies include setting criteria that rule out erroneous data, or building in-memory analytics that continuously fold new data into the ongoing analysis. Essentially, this step is like picking up a stack of documents and sorting them until everything is structured.
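To make the "rule out erroneous data" strategy concrete, here is a minimal sketch of a record filter. The comma-separated record format, field names, and validity rules are assumptions made for illustration; they are not part of any particular big data platform.

import java.util.List;
import java.util.stream.Collectors;

public class RecordCleaner {
    // A record is kept only if it passes every basic sanity check.
    // The expected layout (userId, action, amount) is a hypothetical example.
    static boolean isValid(String[] fields) {
        if (fields.length != 3) return false;       // wrong number of columns
        if (fields[0].isEmpty()) return false;      // missing user ID
        try {
            double amount = Double.parseDouble(fields[2]);
            return amount >= 0;                     // negative amounts are treated as errors
        } catch (NumberFormatException e) {
            return false;                           // amount is not a number at all
        }
    }

    // Filter a batch of raw comma-separated lines down to usable records.
    static List<String[]> clean(List<String> rawLines) {
        return rawLines.stream()
                .map(line -> line.split(","))
                .filter(RecordCleaner::isValid)
                .collect(Collectors.toList());
    }
}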
At this point you will have raw findings, but not yet a plan for what to do with them. For example, a ride-sharing service may find that more than 50% of users will cancel a trip if the arriving driver is stopped for more than one minute.
3. Data-driven decision making
In the final step, you interpret the raw results to form a concrete plan. As a data scientist, your job will be to analyze all the results and create an evidence-based proposal on how to improve the business.
In the ride-sharing example, you might decide that the service should route drivers so that they keep moving, even if the trip takes slightly longer, in order to reduce customer frustration. Alternatively, you could offer users an incentive to wait for their driver to arrive.
Either option is acceptable, because the big data analysis alone cannot tell you which aspect of this interaction needs to change to improve customer satisfaction.
Big data terminology
Structured data
This data has some predefined organizational properties that make it easier to search and analyze. The data is backed by a model that defines the size of each field: its type, its length, and the restrictions on what values it can take. “Units produced per day” is an example of structured data, since each entry has a well-defined product-type field and a number-produced field.
Unstructured data
This is the opposite of structured data. It has no predetermined organizational property or conceptual definition. Unstructured data makes up the bulk of big data. Some examples of unstructured data are social media posts, phone transcripts, or videos.
Database
An organized set of stored data that can contain both structured and unstructured data. Databases are designed to maximize the efficiency of data retrieval. Databases come in two types: relational and non-relational.
Database management system
Usually, when talking about databases like MySQL and PostgreSQL, we are talking about a system called a database management system. A DBMS is software for creating, maintaining, and deleting multiple separate databases. It provides peripheral services and interfaces for end user interaction with databases.
Relational database (SQL)
Relational databases consist of structured data stored as rows in tables. The table's columns follow a specific schema that describes the type and size of data each column can contain; think of the schema as a blueprint for every record, or row, in the table. Relational databases require structured data, and the data must have logical relationships to each other.
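For example, the “units produced per day” data mentioned earlier could be defined by a schema like the one below. The table name, column names, and connection string are hypothetical, and the sketch assumes a standard JDBC driver is available on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ProductionSchema {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; any relational database with a JDBC driver works similarly.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/factory", "user", "password");
             Statement stmt = conn.createStatement()) {
            // The schema fixes each column's type, size, and allowed values up front.
            stmt.execute(
                "CREATE TABLE daily_production (" +
                "  product_type   VARCHAR(50) NOT NULL," +
                "  units_produced INTEGER     NOT NULL CHECK (units_produced >= 0)," +
                "  production_day DATE        NOT NULL" +
                ")");
        }
    }
}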
Non-relational database
Non-relational databases do not have a rigid schema and contain unstructured data. The data inside has no fixed logical relationship with other data in the database and can be organized differently depending on the needs of the company. Some common types include key-value stores (Redis, Amazon DynamoDB), column stores (HBase, Cassandra), document stores (MongoDB, Couchbase), graph databases (Neo4j), and search engines (Solr, Elasticsearch, Splunk). Most big data is stored in non-relational databases because they can hold many different types of data.
Data lake
A repository of data stored in raw form. As with water in a lake, all the data is mixed together, and no individual collection can be used until it has been separated out from the rest. Data in a data lake does not yet need a specific purpose; it is saved in case a use for it is discovered later.
Data warehouse
A repository for filtered and structured data with a predefined purpose. It is essentially the structured equivalent of a data lake.
Big data technologies
Finally, we’ll look at the main tools that modern data scientists use to build big data solutions.
Hadoop
Hadoop is a robust, distributed, and scalable data processing platform for storing and analyzing huge amounts of data. It allows many computers to be networked together for inexpensive storage and computation over huge datasets.
The lure of Hadoop lies in its ability to run on cheap, off-the-shelf hardware, while its competitors may need expensive hardware to do the same job. It is also open source. Together, these qualities make big data solutions accessible to everyday businesses and to people outside of high tech.
Hadoop is sometimes used as a general term referring to all the tools in the Apache data science ecosystem.
MapReduce
MapReduce is a programming model used across a cluster of computers to process and generate big data sets with a parallel, distributed algorithm. It can be implemented on Hadoop and other similar platforms.
A MapReduce program contains a map procedure that filters and sorts data into a usable form. Once the data has been mapped, it is passed to the reduce procedure, which summarizes the trends in the data. Many computers in the system can perform this process at the same time to quickly process data from the raw data lake and produce useful results.
The MapReduce programming model has the following characteristics:
- Distributed: MapReduce is a distributed framework consisting of clusters of commodity hardware that run map or reduce tasks.
- Parallel: Map and reduce tasks always run in parallel.
- Fault-tolerant: if a task fails, it is transferred to another node.
- Scalable: it can scale to arbitrarily large problems. As the problem grows, more machines can be added to keep processing time reasonable; the framework scales horizontally rather than vertically.
Mapper class in Java
Let’s see how we can implement MapReduce in Java.
We will first use the Mapper class provided by the Hadoop package (org.apache.hadoop.mapreduce) to create the map operation. This class maps input key/value pairs to a set of intermediate key/value pairs. Essentially, the mapper performs parsing, projection (selecting the fields of interest from the input), and filtering (removing uninteresting or garbled records).
For example, we'll create a mapper that takes a list of cars and emits each car's make along with a count of 1: the list Honda Pilot, Honda Civic would produce (Honda, 1), (Honda, 1).
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CarMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // We can ignore the key (the byte offset of the line) and work only with the value.
        // The make is the first word of each line, e.g. "Honda Pilot" -> "honda".
        String make = value.toString().split(" ")[0];
        // Emit (make, 1); these pairs are later sorted and grouped by key for the reducers.
        context.write(new Text(make.toLowerCase()), new IntWritable(1));
    }
}
The most important part of this code is the context.write call, where we output key/value pairs that are later sorted and combined by the reducers.
Don't confuse the key and value we write with the key and value passed into the map(...) method. The key we write is the make of the car. Since each occurrence of the key represents one physical count of a car of that make, we output 1 as the value. The key type we emit must be both serializable and comparable (a WritableComparable), while the value type only needs to be serializable (a Writable).
Reducer class in Java
Next, we implement the reduce operation using the Reducer class provided by Hadoop. The reducer consumes the output of the Mapper and returns the total number of cars for each make.
The reduce work is divided among one or more reducer nodes for faster processing, and all values for a given key (make) are handled by the same node.
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class CarReducer extends Reducer<Text, IntWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // Sum the individual counts emitted by the mappers for this make.
        long sum = 0;
        for (IntWritable occurrence : values) {
            sum += occurrence.get();
        }
        // Emit (make, total number of cars of that make).
        context.write(key, new LongWritable(sum));
    }
}
The loop inside reduce iterates over every value received for the same key and accumulates the total in the sum variable.
Mapper and Reducer are the foundation of many Hadoop solutions. You can extend these basic forms to handle huge amounts of data or reduce them to highly specialized summaries.
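To tie the two classes together, a Hadoop job driver specifies which mapper and reducer to run and where the input and output live. The sketch below is one minimal way to configure such a job; the class name and the assumption that input and output paths arrive as command-line arguments are ours, not part of the original example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CarCountJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "car count by make");
        job.setJarByClass(CarCountJob.class);

        // Wire in the mapper and reducer defined above.
        job.setMapperClass(CarMapper.class);
        job.setReducerClass(CarReducer.class);

        // The mapper emits (Text, IntWritable); the reducer emits (Text, LongWritable).
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // Input and output locations come from the command line, e.g. /data/cars /data/car-counts.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}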