Big Data Lead

Job Description

Location: Nanded City, Pune

Reports to: Development Manager

Purpose of Position:

  • The purpose of this position is to work with the Big Data technical team and key stakeholders across business units on proposed strategies and solutions, and to provide technical leadership to the Big Data development, QA testing, and support teams in preparing design artifacts and implementing the Big Data framework and solutions

  • The Lead Developer is a senior-level position in the Product division of FIGmd. The role technically manages a group of team members and contributes to multiple aspects of Big Data implementation, including planning, design, development, unit testing, deployment, and release management, supporting the successful delivery of a particular product stream

Major Responsibilities:

  • Architecting, designing, building, and supporting mission-critical Big Data applications

  • Exploring new technology trends and leveraging them to simplify the Big Data ecosystem

  • Driving innovation and leading internal and external teams to implement forward-looking Big Data solutions that deliver on business needs

  • Guiding teams to improve development agility and productivity

  • Resolving the team's technical roadblocks and mitigating potential risks

  • Delivering system automation by setting up continuous integration/continuous delivery pipelines

  • Acting as a technical mentor to the team, bringing them up to speed on the latest data technologies, and promoting continuous learning

Competencies:

General Skills:

  • Team Player, Flexible

  • Strong written communication skills

Technical/Domain Skills:

  • Good knowledge of build tools (Maven, sbt)

  • Knowledge of version control systems such as Git and SVN (Git preferred)

  • Knowledge of Application Servers

  • Knowledge of Cloud Technologies like AWS

  • Knowledge of microservices

Education:

  • Graduate engineer or equivalent qualification with 7+ years of successful experience in a recognized, global IT services / consulting company

Work Experience:

  • Experience owning systems end-to-end, with hands-on involvement in every aspect of software development and support: requirements discussion, architecture, prototyping, development, debugging, unit testing, deployment, and support

  • Hands-on experience implementing data lakes using Lambda/Kappa architectures and design patterns

  • 5+ years of hands-on experience in one or more modern object-oriented programming languages (Java, Scala, Python), including the ability to code in more than one language. Our engineers work across several of them, sometimes simultaneously

  • 4+ years of hands-on experience in architecting, designing and developing highly scalable distributed data processing systems

  • 4+ years of hands-on experience in implementing batch and real-time Big Data integration frameworks and/or applications, in private or public cloud, preferably AWS, using streaming and ETL technologies

  • 5+ years of hands-on experience with the Big Data stack, i.e. Apache Spark, Hadoop, YARN, Hive, Oozie, and NoSQL stores such as Cassandra and HBase

  • Good experience with Hadoop distributions such as EMR, Cloudera, or Hortonworks

  • 3+ years of experience managing teams of 8-10 members, including estimation and project planning

  • Excellent presentation, documentation, communication, and influencing skills, including the ability to present and influence technology direction for stakeholders in a business context

Note:

The Job Description is subject to change from time to time, as per the requirements of the Company and the competencies / qualifications you may acquire in the future.

Apply Now