How to calculate Rank in dataframe using scala with example

Requirement:

You have the marks of all the students of a class, and you want to find the rank of each student using Scala.

Given:

A pipe-separated file which contains the roll number and marks of students. Below are sample values:

roll_no|marks
101|389
102|412
103|435

Please download the sample file marks.txt.

 

Solution :

Step 1: Loading the raw data into a Hive table

As you can see, our raw data is in a pipe-separated file. I am keeping the raw file in the class8 directory, which was just created. Use the commands below:

put file into local directory
 
  1. cd bdp/projects/
  2. mkdir class8
  3. cd class8
  4. cat marks.txt

Now I am putting this file into HDFS, after creating the stu_marks directory in HDFS.

Use the commands below:

 

put local file into hdfs
 
  1. hadoop fs -mkdir -p bdps/stu_marks
  2. hadoop fs -put marks.txt bdps/stu_marks/
  3. hadoop fs -ls bdps/stu_marks/

Now we will load this file into a Hive table. Please refer to this post if you have trouble loading a file into a Hive table. You need to give pipe (|) as the delimiter.

I have created one Hive table, named "bdp.class8_marks", on top of this data.

It has two columns, and both of them are of INT type.

Please use the commands below, one by one, to create a Hive table on top of it.

 

Create hive table
 
  1. CREATE SCHEMA IF NOT EXISTS bdp;
  2. CREATE EXTERNAL TABLE IF NOT EXISTS bdp.class8_marks
  3. (roll_no INT,
  4. ttl_marks INT)
  5. ROW FORMAT DELIMITED
  6. FIELDS TERMINATED BY '|'
  7. STORED AS TEXTFILE
  8. LOCATION 'hdfs://sandbox-hdp.hortonworks.com:8020/user/root/bdps/stu_marks';

As you can see above, the LOCATION I gave here is the path where my HDFS file is present.

Step 2: Loading the Hive table into Spark using Scala

First, open the Spark shell using the command below:

Run spark shell
 
  1. spark-shell

Note: I am using Spark version 2.3.

 

Once the CLI is open, use the command below to load the Hive table:

Load hive table in spark
 
  1. val stu_marks = spark.table("bdp.class8_marks")

Here you can see that stu_marks is the DataFrame which contains the data of the Hive table. You can view the data using the show command:

to see the data
 
  1. stu_marks.show()

Step 3: Assign rank to the students

Now let's come to the actual logic to find the rank of the students.

Please use the commands below to import the required functions:

imports
 
  1. import org.apache.spark.sql.expressions.Window
  2. import org.apache.spark.sql.functions.rank

 

assigning rank
 
  1. val ranked = stu_marks.withColumn("rank", rank().over(Window.orderBy($"ttl_marks".desc)))

In the above command, I am using the rank function over marks. Since we want a higher rank for a student who has scored higher marks, we order by desc.
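To make the ordering concrete, here is a minimal plain-Scala sketch of what rank() computes. The tie at 412 (roll 104) is a hypothetical extra row added for illustration, not part of the sample file: equal marks share a rank, and the next rank is skipped, just like SQL RANK().

```scala
// Plain-Scala sketch of RANK() semantics over (roll_no, marks) pairs.
// Roll 104 is a hypothetical extra row added to show how ties behave.
val marks = List(101 -> 389, 102 -> 412, 103 -> 435, 104 -> 412)

val sorted = marks.sortBy { case (_, m) => -m }       // order by marks desc
val rankedStudents = sorted.map { case (roll, m) =>
  // rank = 1 + number of students with strictly higher marks
  (roll, m, 1 + sorted.count { case (_, other) => other > m })
}

rankedStudents.foreach(println)
// 103 gets rank 1; 102 and 104 tie at rank 2; 101 gets rank 4 (rank 3 is skipped)
```

This is the same ordering that rank().over(Window.orderBy($"ttl_marks".desc)) produces on the DataFrame.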

The command below will give you the expected result, in which the rank of each student is assigned against their roll number.

to see the final output
 
  1. ranked.show()

You can save the result as per your requirements .

Wrapping Up:

Here we have understood how the rank function works and a simple use case of this function. We can assign rank partition-wise as well; for that, you have to use partitionBy in the over clause.
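Partition-wise ranking means the rank restarts inside each group. In Spark the window would look something like Window.partitionBy($"class").orderBy($"ttl_marks".desc); note that the class column here is a hypothetical assumption, since the sample file has only roll number and marks. A plain-Scala sketch of the same idea:

```scala
// Hypothetical data: (class, roll_no, marks). The "class" column is an
// assumption for illustration only; the sample file has just roll_no and marks.
val rows = List(("A", 101, 389), ("A", 102, 412), ("B", 201, 450), ("B", 202, 430))

// Rank within each class, highest marks first -- mimicking
// rank().over(Window.partitionBy($"class").orderBy($"ttl_marks".desc))
val rankedByClass = rows
  .groupBy { case (cls, _, _) => cls }                 // one group per partition key
  .flatMap { case (_, grp) =>
    val sorted = grp.sortBy { case (_, _, m) => -m }   // order by marks desc within the class
    sorted.map { case (cls, roll, m) =>
      (cls, roll, m, 1 + sorted.count { case (_, _, other) => other > m })
    }
  }
  .toList
  .sortBy { case (cls, _, _, r) => (cls, r) }

rankedByClass.foreach(println)
// Each class gets its own rank 1, independent of the other classes.
```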

Rank can also be used if you want to find the nth rank holder: you can filter based on the required rank. If you are looking for the same code in Python instead of Scala, please read this blog post.
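As a plain-Scala sketch of that filter (in Spark the equivalent on the ranked DataFrame would be something like ranked.filter($"rank" === n)):

```scala
// (roll_no, marks, rank) tuples, using the ranks the sample data produces.
val rankedRows = List((103, 435, 1), (102, 412, 2), (101, 389, 3))

val n = 2                                             // the rank we want to find
val nthRankHolder = rankedRows.filter { case (_, _, r) => r == n }

println(nthRankHolder)                                // the student(s) holding rank n
```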

Don't forget to subscribe to our blog.


