"- Strong in Spark Scala pipelines (both ETL & Streaming)
- Proficient in Spark architecture
- At least 1 year of experience migrating MapReduce processes to the Spark platform
- 3 years' experience in design and implementation using Hadoop and Hive
- Should be able to optimize and performance-tune Hive queries
- Experience in Java is a must
- Worked on designing ETL & Streaming pipelines in Spark Scala."
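The MapReduce-to-Spark migration bullet above usually starts with the classic word-count job. As a rough illustration of the map/reduce shape involved (Spark itself is not assumed here, so `java.util.stream` stands in for the RDD API; in Spark Scala the same pipeline would be `flatMap` → `map` → `reduceByKey`), a minimal sketch with a hypothetical `WordCountSketch` class:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical illustration only: the word-count job that is typically
// the first candidate when migrating MapReduce code. java.util.stream
// stands in for Spark's RDD API, which is not on the classpath here.
public class WordCountSketch {
    static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                // "map" phase: split each line into lowercase words
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(w -> !w.isEmpty())
                // "reduce" phase: group identical words and count them
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = wordCount(List.of("spark scala", "spark streaming"));
        System.out.println(counts.get("spark")); // prints 2
    }
}
```

The same grouping-and-counting shape carries over almost line for line to a Spark Scala `reduceByKey` or a DataFrame `groupBy(...).count()`.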
"- Provide expertise in Kafka brokers, ZooKeeper, KSQL, Kafka Streams (KStreams), and Kafka Control Center.
- Provide expertise and hands-on experience with Kafka Connect using Schema Registry in a very high-volume environment (~10 million messages).
- Provide expertise and hands-on experience with the AvroConverter, JsonConverter, and StringConverter.
- Provide expertise and hands-on experience with Kafka connectors such as MQ connectors, Elasticsearch connectors, JDBC connectors, FileStream connectors, and JMS source connectors, as well as tasks, workers, converters, and transforms.
- Provide expertise and hands-on experience building custom connectors using Kafka core concepts and APIs.
- Strong skills in in-memory applications, database design, and data integration.
- Strong Java background with Spring modules and Spring Boot.
- Strong working experience in Unix environments."
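The converter bullets above refer to Kafka Connect's converter layer, which turns record values into bytes on the way to the broker and back again on the way out (the real interface, `org.apache.kafka.connect.storage.Converter`, exposes `fromConnectData`/`toConnectData` with schema arguments). Since the Kafka libraries are not assumed here, a minimal stdlib sketch with a hypothetical `StringRoundTrip` class shows only the byte ↔ value contract that a StringConverter-style converter must honor:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the converter contract: value -> bytes -> value
// must round-trip losslessly. This toy class is NOT the Kafka Connect
// Converter interface, only its essential shape.
public class StringRoundTrip {
    // serialize: value -> wire bytes (analogous to fromConnectData)
    static byte[] toBytes(String value) {
        return value == null ? null : value.getBytes(StandardCharsets.UTF_8);
    }

    // deserialize: wire bytes -> value (analogous to toConnectData)
    static String fromBytes(byte[] bytes) {
        return bytes == null ? null : new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] wire = toBytes("order-42");
        System.out.println(fromBytes(wire)); // prints order-42
    }
}
```

Null handling matters in practice: Kafka tombstone records carry a null value, so a converter that throws on null breaks log-compacted topics.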