Career Center

Big Data, Hadoop Engineer

Location: Pleasanton, CA
Posted On: 05/18/2020
Requirement Code: 40630
Requirement Detail

Local candidates preferred.

Required:
1. 4+ years of hands-on development, deployment, and production support experience in a Big Data environment.
2. 4-5 years of programming experience in Java, Scala, Python, Solr, and HBase.
3. Proficiency in SQL, relational database design, and methods for data retrieval.
4. Hands-on experience with Cloudera Distribution 6.x.
5. Must have experience with the Spring framework, Web Services, and REST APIs.
6. Project experience in Query Processing Language (QPL), a search-engine-independent technology for advanced query processing, is highly desirable.

Technical Knowledge and Skills:

Consultant resources shall possess most of the following technical knowledge and experience:

• Project experience in Query Processing Language (QPL), a search-engine-independent technology for advanced query processing, is highly desirable.

• 4+ years of hands-on development, deployment, and production support experience in a Big Data environment.

• 4-5 years of programming experience in Java, Scala, and Python.

• Proficiency in SQL, relational database design, and methods for data retrieval.

• Knowledge of NoSQL systems such as HBase or Cassandra.

• Hands-on experience with Cloudera Distribution 6.x.

• Hands-on experience creating and indexing Solr collections in a SolrCloud environment.

• Hands-on experience building data pipelines using Hadoop components: Sqoop, Hive, Solr, MapReduce, Impala, Spark, and Spark SQL.

• Must have experience developing HiveQL and UDFs for analyzing semi-structured/structured datasets.

• Must have experience with the Spring framework, Web Services, and REST APIs.

• Hands-on experience ingesting and processing various file formats such as Avro, Parquet, SequenceFiles, and text files.

• Must have working experience with data warehousing and Business Intelligence systems.

• Expertise in Unix/Linux environments, including writing scripts and scheduling/executing jobs.

• Successful track record of building automation scripts/code using Java, Bash, Python, etc., and experience with the production support issue-resolution process.

• Experience building ML models using MLlib or other ML tools.

• Hands-on experience with real-time analytics tools such as Spark, Kafka, or Storm.

• Experience with graph databases such as Neo4j, TigerGraph, or OrientDB.

• Agile development methodologies.

Apply Now
SAICON Ranked #142

  • America's Fastest Growing Companies