Hadoop and Spark live project in bigdata domain - Central Area
Tuesday, 30 May 2017
Item details
Urban area:
Central Area, Central Region
Offer type:
Offer
Price:
S$ 300
Item description
Note: This is an actual project we delivered for a USA-based credit-card payment-gateway processing company (based in Dallas, TX, USA, in the same business as MasterCard and Visa) serving 2,000+ small banks and credit unions.
Project Name: Unified Payment Gateway Data Analytic System (AGILE Methodology) - Visa and MasterCard Domain
Technologies: Hadoop, Spark, Kafka, MapReduce, Apache Sqoop, Apache Hive, Apache Pig, Linux shell scripting/Java, Oracle DB.
Production Deployment Environment: Apache Hadoop, Hortonworks HDP, Pivotal HD, Cloudera CDH
Scope: 15 use cases covering the full data analytics and Hadoop platform.
Description: The project is executed in 4 phases:
- Data Filtering/Preprocessing Module = filters raw data (XML, JSON, flat files) and moves it to a common datastore (data lake)
- Data Migration Module = moves data into the Hadoop cluster (HDFS/Hive, Sqoop, Kafka)
- Data Analytics Module = MapReduce jobs and offline job execution (MR jobs, Hive HQL scripts, Spark RDDs)
- Data Visualization System = visual representation of the data in charts (Tableau, QlikView, JFreeChart)
We will build all four systems from scratch to gain every possible insight into these modules.
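To give a flavour of phases 1 and 3, here is a minimal, stdlib-only Python sketch of the idea: heterogeneous raw records (JSON and pipe-delimited flat-file lines) are normalized into a common record format, then aggregated with a MapReduce-style map/shuffle/reduce. The field names (`merchant`, `amount`) and the pipe delimiter are illustrative assumptions, not the actual project schema; the real project runs this at scale on Hadoop/Spark.

```python
import json
from collections import defaultdict

# Hypothetical raw inputs: the real project ingests XML, JSON, and flat files
# from a landing zone. Field names here are illustrative only.
json_records = ['{"merchant": "M1", "amount": 120.0}',
                '{"merchant": "M2", "amount": 75.5}']
flat_records = ["M1|40.0", "M2|10.0", "M1|5.0"]

def preprocess(json_lines, flat_lines):
    """Phase 1 sketch: normalize each source into a common record format."""
    records = []
    for line in json_lines:
        rec = json.loads(line)
        records.append({"merchant": rec["merchant"], "amount": float(rec["amount"])})
    for line in flat_lines:
        merchant, amount = line.split("|")
        records.append({"merchant": merchant, "amount": float(amount)})
    return records

def map_reduce(records):
    """Phase 3 sketch: MapReduce-style total spend per merchant."""
    # Map: emit (merchant, amount) key-value pairs
    pairs = [(r["merchant"], r["amount"]) for r in records]
    # Shuffle + Reduce: group pairs by key and sum the amounts
    totals = defaultdict(float)
    for merchant, amount in pairs:
        totals[merchant] += amount
    return dict(totals)

records = preprocess(json_records, flat_records)
print(map_reduce(records))  # → {'M1': 165.0, 'M2': 85.5}
```

The same map/shuffle/reduce shape carries over directly to an MR job, a Hive `GROUP BY`, or a Spark `reduceByKey`.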
What you will receive:
1) Requirements documents and project design documents.
2) Code for each module (XML files, scripts, database queries, etc.).
3) Actual interview questions from each module.
Duration: 30 hours (5 weeks, daily sessions)
Medium: GoToMeeting
Email: Onlinetraining2011@gmail.com (preferred)
Skype: onlinetraining2011 (preferred)
Phone: +91 8308204692
LinkedIn group: http://www.linkedin.com/groups/Online-Hadoop-Training-4838165
My profile: www.linkedin.com/pub/kamal-a/65/2b2/2b5