Join in pyspark with example


Requirement

You have two tables named A and B, and you want to perform all types of joins in Spark using Python. This will help you understand how joins work in PySpark.

Solution

Step 1: Input Files

Download files A and B from here and place them into a local directory.

Files A and B are comma-delimited files; please refer to the illustrative sample below.
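As a purely hypothetical illustration of the format (two comma-delimited columns matching the (id, type) schema used later, with ids 1, 3, and 4 common to both files), the contents could look like the rows below; your downloaded files may differ.

A.txt (hypothetical):
1,t1
2,t2
3,t3
4,t4
5,t5

B.txt (hypothetical):
1,x1
3,x3
4,x4
6,x6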

I am placing these files into the local directory ‘sample_files’.

to see local files
 
cd sample_files
ls -R *

Step 2: Loading the Files into Hive

To load the files into Hive, let’s first put them into an HDFS location using the commands below.

put into hdfs
 
hadoop fs -mkdir -p bdps/sample_files/A
hadoop fs -mkdir -p bdps/sample_files/B
hadoop fs -put A/A.txt bdps/sample_files/A/
hadoop fs -put B/B.txt bdps/sample_files/B/

 

You can check the files in HDFS using the command below.

check files in hdfs
 
hadoop fs -ls -R hdfs://sandbox-hdp.hortonworks.com:8020/user/root/bdps/sample_files/

Now let’s create two Hive tables, A and B, one for each file, using the commands below:

hive table creation
 
CREATE SCHEMA IF NOT EXISTS bdp;
CREATE EXTERNAL TABLE IF NOT EXISTS bdp.A
(id INT,
type STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 'hdfs://sandbox-hdp.hortonworks.com:8020/user/root/bdps/sample_files/A';
CREATE EXTERNAL TABLE IF NOT EXISTS bdp.B
(id INT,
type STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 'hdfs://sandbox-hdp.hortonworks.com:8020/user/root/bdps/sample_files/B';

Let’s check whether the data populated correctly using the commands below:

check data of hive tables
 
select * from bdp.A;
select * from bdp.B;

Step 3: Loading Tables into PySpark

Now enter the PySpark shell using the command below.

pyspark shell
 
pyspark

Note: I am using Spark version 2.3.
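The pyspark shell already provides a ready-made SparkSession named spark. If you run the same code as a standalone script instead, here is a minimal sketch to create an equivalent Hive-enabled session (the app name is arbitrary):

# Only needed outside the pyspark shell, which creates `spark` for you.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("join-example")
         .enableHiveSupport()   # lets spark.table() see the Hive tables
         .getOrCreate())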

Use the commands below to load the Hive tables into DataFrames:

load table into dataframe
 
A = spark.table("bdp.A")
B = spark.table("bdp.B")

Check the data using the commands below:

check dataframe data
 
A.show()
B.show()
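You can also verify the column types with printSchema(); expect id as an int and type as a string. And if you prefer not to go through Hive at all, a minimal alternative sketch is to read the HDFS files directly into DataFrames (paths assume the layout from Step 2; the column names are supplied manually because the files have no header row):

A.printSchema()

# Alternative (sketch): skip Hive and read the delimited files directly.
A = spark.read.csv("bdps/sample_files/A", inferSchema=True).toDF("id", "type")
B = spark.read.csv("bdps/sample_files/B", inferSchema=True).toDF("id", "type")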

 

Let’s understand each type of join one by one.

A. Inner Join

Sometimes it is required to have only the common records from two datasets. We have two tables, A and B, and we are joining them on the key column id.
So only those records whose id matches a record in the other dataset will appear in the output; the rest will be discarded.

Use the command below to perform the inner join.

 
 
inner_df = A.join(B, A.id == B.id)

 

Use the command below to see the output:

 
 
inner_df.show()



As you can see, only the records that share an id (1, 3, and 4) are present in the output; the rest have been discarded.
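Note that joining with A.id == B.id keeps both id columns in the result. If you prefer a single id column, a small variation (just a sketch, not the only way) is to join on the column name instead:

# Joining on the column name keeps a single, de-duplicated id column.
inner_df = A.join(B, "id")
inner_df.show()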

B. Left Join

This type of join is performed when we want to look something up in another dataset; a common example is fetching an employee’s phone number from another dataset based on the employee code (a concrete sketch follows the output below).
Use the command below to perform the left join.

 
 
left_df = A.join(B, A.id == B.id, "left")

Use the command below to see the output:

 
 
left_df.show()

Now we have all the records of the left table A and the matching records of table B.
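As a concrete version of the phone-number lookup mentioned above, here is a small self-contained sketch using hypothetical data (the emp and phones DataFrames are made up for illustration and are not part of the downloaded files):

# Hypothetical lookup: enrich employees with phone numbers.
emp = spark.createDataFrame([(101, "Amit"), (102, "Sunita")], ["emp_code", "name"])
phones = spark.createDataFrame([(101, "555-0101")], ["emp_code", "phone"])

# The left join keeps every employee; a missing phone comes back as null.
emp.join(phones, "emp_code", "left").show()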

C. Right Join

This type of join is performed when we want all the data from the look-up table along with only the matching records of the left table.

Use the command below to perform the right join.

 
 
right_df = A.join(B, A.id == B.id, "right")

Use the command below to see the output:

 
 
right_df.show()

Now we have all the records of the right table B and the matching records of table A.
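A right join is simply a left join with the two tables swapped, so the sketch below should return the same rows as right_df (column order aside):

# Equivalent to A.join(B, A.id == B.id, "right"), with A and B swapped.
right_df2 = B.join(A, B.id == A.id, "left")
right_df2.show()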

D. Full Join

When we need all the matched and unmatched records out of two datasets, we can use a full join. All data from the left as well as the right dataset will appear in the result set; non-matching records will have null values in the respective columns.
Use the command below to perform the full join.

 
 
full_df = A.join(B, A.id == B.id, "full")

Use the command below to see the output:

 
 
full_df.show()

Now we have all the matched and unmatched records in the output.
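A full join also makes it easy to isolate the records that did not match on either side. For example, a quick sketch:

# Rows where either side failed to match carry a null id on that side.
full_df.filter(A.id.isNull() | B.id.isNull()).show()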

 

Wrapping Up

Joins are important when you have to deal with data spread across more than one table. In real life we receive files from many sources that are related to each other, so to get meaningful information from these datasets we need to perform joins and work with the combined result.

We are happy to help you; don’t forget to subscribe. Keep learning.
