Introduction to Data Pipelines - CLEANING DATA WITH PYSPARK - PowerPoint PPT Presentation



SLIDE 1

Introduction to Data Pipelines

CLEANING DATA WITH PYSPARK

Mike Metzger

Data Engineering Consultant

SLIDE 2

CLEANING DATA WITH PYSPARK

What is a data pipeline?

A set of steps to process data from source(s) to final output
Can consist of any number of steps or components
Can span many systems
We will focus on data pipelines within Spark

SLIDE 3

CLEANING DATA WITH PYSPARK

What does a data pipeline look like?

Input(s): CSV, JSON, web services, databases
Transformations: .withColumn(), .filter(), .drop() (see the sketch below)
Output(s): CSV, Parquet, database
Validation
Analysis
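As a quick illustration of the transformation stage, here is a minimal sketch; the DataFrame columns ('age', '_c4') are hypothetical examples, not from the slides:

# A minimal sketch of pipeline transformations; the column names
# 'age' and '_c4' are hypothetical
df = spark.read.csv('datafile.csv.gz', header=True)
df = df.withColumn('age', df.age.cast('int'))   # add / modify a column
df = df.filter(df.age > 0)                      # keep only valid rows
df = df.drop('_c4')                             # remove an unneeded column
df.write.parquet('outdata.parquet')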

SLIDE 4

CLEANING DATA WITH PYSPARK

Pipeline details

Not formally defined in Spark
Typically all the normal Spark code required for the task:

from pyspark.sql.types import StructType, StructField, StringType
from pyspark.sql.functions import monotonically_increasing_id

schema = StructType([
    StructField('name', StringType(), False),
    StructField('age', StringType(), False)
])
# Note: the schema must be set before load()
df = spark.read.format('csv').schema(schema).load('datafile')
df = df.withColumn('id', monotonically_increasing_id())
...
df.write.parquet('outdata.parquet')
df.write.json('outdata.json')

SLIDE 5

Let's practice!

CLEANING DATA WITH PYSPARK

SLIDE 6

Data handling techniques

CLEANING DATA WITH PYSPARK

Mike Metzger

Data Engineering Consultant

SLIDE 7

CLEANING DATA WITH PYSPARK

What are we trying to parse?

Incorrect data:
    Empty rows
    Commented lines
    Headers
Nested structures:
    Multiple delimiters
Non-regular data:
    Differing numbers of columns per row
Focused on CSV data

width, height, image
# This is a comment
200 300 affenpinscher;0
600 450 Collie;307 Collie;101
600 449 Japanese_spaniel;23

SLIDE 8

CLEANING DATA WITH PYSPARK

Stanford ImageNet annotations

Identifies dog breeds in images
Provides a list of all identified dogs in each image
Other metadata (base folder, image size, etc.)
Example rows:

02111277 n02111277_3206 500 375 Newfoundland,110,73,416,298
02108422 n02108422_4375 500 375 bull_mastiff,101,90,214,356 \
    bull_mastiff,282,74,416,370

SLIDE 9

CLEANING DATA WITH PYSPARK

Removing blank lines, headers, and comments

Spark's CSV parser:
    Automatically removes blank lines
    Can remove comments using an optional argument

df1 = spark.read.csv('datafile.csv.gz', comment='#')

Handles header fields
    Defined via argument
    Ignored if a schema is defined

df1 = spark.read.csv('datafile.csv.gz', header=True)
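These options can also be combined in a single read call, for example:

# Strip '#' comments and treat the first row as a header in one call
df1 = spark.read.csv('datafile.csv.gz', comment='#', header=True)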

SLIDE 10

CLEANING DATA WITH PYSPARK

Automatic column creation

Spark will:
    Automatically create columns in a DataFrame based on the sep argument
        df1 = spark.read.csv('datafile.csv.gz', sep=',')
    Defaults to using ,
    Can still successfully parse if sep is not in the string
        df1 = spark.read.csv('datafile.csv.gz', sep='*')
    Stores data in a single column defaulting to _c0
    Allows you to properly handle nested separators (see the sketch below)
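One common follow-up, sketched here as an assumption rather than taken from the slides: once every row lands in _c0, the real delimiters can be handled manually with split().

from pyspark.sql.functions import split

# Read with a separator that never appears, so each row lands in _c0
df1 = spark.read.csv('datafile.csv.gz', sep='*')

# Hypothetical follow-up: break _c0 apart on the actual delimiter
df1 = df1.withColumn('fields', split(df1['_c0'], ' '))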

SLIDE 11

Let's practice!

CLEANING DATA WITH PYSPARK

SLIDE 12

Data validation

CLEANING DATA WITH PYSPARK

Mike Metzger

Data Engineering Consultant

SLIDE 13

CLEANING DATA WITH PYSPARK

Definition

Validation is:
    Verifying that a dataset complies with the expected format (a short sketch follows)
    Number of rows / columns
    Data types
    Complex validation rules
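For instance, basic format checks can be written directly against the DataFrame; the expected values below are hypothetical:

# Hypothetical expectations for the dataset's format
expected_rows = 10000
expected_columns = 5

# Verify row and column counts before further processing
if df.count() != expected_rows:
    raise ValueError('Unexpected number of rows')
if len(df.columns) != expected_columns:
    raise ValueError('Unexpected number of columns')

# Inspect data types via the schema
print(df.dtypes)   # list of (column name, type) pairs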

SLIDE 14

CLEANING DATA WITH PYSPARK

Validating via joins

Compares data against known values
Easy to find data in a given set
Comparatively fast

parsed_df = spark.read.parquet('parsed_data.parquet')
company_df = spark.read.parquet('companies.parquet')
verified_df = parsed_df.join(company_df,
    parsed_df.company == company_df.company)

This automatically removes any rows with a company not in company_df (join() performs an inner join by default)!

SLIDE 15

CLEANING DATA WITH PYSPARK

Complex rule validation

Using Spark components to validate logic:
    Calculations
    Verifying against an external source
    Likely uses a UDF to modify / verify the DataFrame (see the sketch below)
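As an illustration, a rule check might look like the following; the column name (total) and the rule itself are hypothetical:

from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

# Hypothetical rule: a valid total must be present and positive
def check_total(total):
    return total is not None and total > 0

udfCheckTotal = udf(check_total, BooleanType())

# Flag each row, then keep only those passing the rule
df = df.withColumn('is_valid', udfCheckTotal(df.total))
df = df.filter(df.is_valid)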

SLIDE 16

Let's practice!

CLEANING DATA WITH PYSPARK

SLIDE 17

Final analysis and delivery

CLEANING DATA WITH PYSPARK

Mike Metzger

Data Engineering Consultant

SLIDE 18

CLEANING DATA WITH PYSPARK

Analysis calculations (UDF)

Calculations using UDF

from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType

def getAvgSale(saleslist):
    totalsales = 0
    count = 0
    for sale in saleslist:
        # Each sale entry holds two sale amounts at indexes 2 and 3
        totalsales += sale[2] + sale[3]
        count += 2
    return totalsales / count

udfGetAvgSale = udf(getAvgSale, DoubleType())
df = df.withColumn('avg_sale', udfGetAvgSale(df.sales_list))

SLIDE 19

CLEANING DATA WITH PYSPARK

Analysis calculations (inline)

Inline calculations

df = spark.read.csv('datafile')
df = df.withColumn('avg', (df.total_sales / df.sales_count))
df = df.withColumn('sq_ft', df.width * df.length)
df = df.withColumn('total_avg_size',
    udfComputeTotal(df.entries) / df.numEntries)

SLIDE 20

Let's practice!

CLEANING DATA WITH PYSPARK

SLIDE 21

Congratulations and next steps

CLEANING DATA WITH PYSPARK

Mike Metzger

Data Engineering Consultant

SLIDE 22

CLEANING DATA WITH PYSPARK

Next Steps

Review the Spark documentation
Try working with data on actual clusters
Work with various datasets

SLIDE 23

Thank you!

CLEANING DATA WITH PYSPARK