Job Overview:
- Looking for a tech-savvy Data Engineer to design, develop and support ETL interfaces of a big data marketing technology platform built on AWS; understand the existing landscape, document the pipelines and optimize them for best performance.
- Interact with business and marketing users, data scientists and other developers.
- Preference for candidates with exposure to data science (model creation and execution).
- Knowledge of ETL tools such as AWS Glue and Spark is good to have.
- Strong communication and interpersonal skills
Responsibilities and Duties:
- Create new data pipelines using Python, PySpark and related technologies.
- Build proofs of concept (POCs) using data-related technologies.
- Support and maintain the existing big data stack.
- Participate in customer discussions around requirement gathering and related topics.
- Take ownership of requirement gathering, development, test-case creation, testing and deployment of ETLs.
Experience:
- Overall experience of 6-8 years, including 4-6 years of strong experience building and optimizing ETLs and data pipelines using SQL, Hive, Python and PySpark.
- 2+ years of experience with AWS big data solutions such as EMR (or Hadoop/Hive), Glue, PySpark and Redshift.
- 1+ years of experience with AWS services such as S3, EC2, RDS, EMR and Redshift.
- Experience building and optimizing RDBMS/big data pipelines, architectures and data sets.
Primary Skills:
PySpark, Python, AWS Glue, Hadoop (Hive), SQL
Good to have Skills:
AWS, Snowflake, AWS Databricks
Location: Bangalore
Shift Timings: Flexible to work the UK shift (1:30 PM to 10:30 PM IST) and provide on-call support on weekends (rotational)
CTC: INR 14 LPA