Big Data Engineer
Role & Responsibilities:
- All-round experience developing and delivering large-scale business applications on both scale-up systems and scale-out distributed systems.
- Responsible for the design and development of applications on a Big Data platform.
- Implement complex algorithms in a scalable fashion. Core data-processing skills with tools such as Hive/Impala are essential.
- Write MapReduce or Spark jobs for implementation, and build Java-based middle-layer orchestration between components of the Hadoop/Spark stack.
- Work closely with product and analytics managers, user-interaction designers, and other software engineers to develop new offerings and improve existing ones.
Desired Skills & Experience:
- B.Tech or Master’s degree in Computer Science or an equivalent discipline from a reputed university.
- 1–3 years’ experience building software or web applications in object-oriented or functional programming languages. The specific language matters less than a focus on writing clean, well-designed, and scalable MapReduce code.
- Experience with Big Data technologies such as Hadoop, Hive, Spark, or Storm
- Experience with streaming technologies such as Kafka, Spark Streaming, and Flink
- Experience with scalable systems, large-scale data processing, and ETL pipelines
- Experience with SQL and relational databases such as Postgres or MySQL
- Experience with NoSQL databases such as DynamoDB and CloudSearch, or open-source alternatives like Cassandra, HBase, Solr, and Elasticsearch
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test-Driven Development)
- Experience building and deploying applications on both on-premises and AWS cloud-based infrastructure.
To apply, send your CV to email@example.com