Operation Engineer - Big Data - Python Technologies

Tummyfill Solutions | Experience: 3-7 years | Notice period (maximum): 60 days | Relevant experience: 3 years | Mandatory skills: Python, Linux, Hadoop, Spark, Hive, HBase | Domain: E-commerce

Tummyfill Solutions · Bengaluru · ₹ NA · 3-7Y · Full Time
Job Description
  • Experience: 3-7 years
  • Notice period (maximum): 60 days
  • Relevant experience: 3 years
  • Mandatory skills: Python, Linux, Hadoop, Spark, Hive, HBase
  • Domain expertise: E-commerce / online-internet commerce

Our client is looking for an Operation Engineer - Big Data Technologies. The client provides an in-house Hadoop platform that is developed and managed by their operation engineering team.

Responsibilities:
- Managing a production cluster of 1500+ nodes, plus clusters with 500+ nodes.
- Providing Hadoop as a service to customers from different teams.
- Acting as system engineer for a 30-petabyte Hadoop cluster, with responsibilities including:
- Complete infrastructure maintenance through on-call support and development.
- System automation with Python/Bash and Ansible.
- Performance and availability management to meet SLAs.
- Repairing and performing installation of Hadoop software.
- Adding and configuring nodes.
- Monitoring cluster health and troubleshooting.
- Managing the metadata databases.
- Managing and reviewing backups.
- Moving data efficiently between clusters using Distributed Copy (DistCp).
- Restoring data for project-specific requirements.
- Rebalancing HDFS.
- Regularly scheduling statistics runs and automating them.
- Quota administration, with notification in case of space problems.
- Moving/shifting nodes, roles, and services to other nodes.
- Performing cluster/node migrations (new hardware and/or new OS version and/or new Hadoop version) with OS or Hadoop tools, or manually.
- Processing events from Hadoop logs, taking corrective measures and involving the relevant teams where necessary.
- Tuning by changing settings (e.g. HBase, Hive, etc.).

Required skills:
- Operation Engineer or Lead with 3-7 years of experience in Big Data technologies.
- Experience with the Hadoop ecosystem: HDFS, HBase, Hive, YARN, Spark, ZooKeeper, Kafka, Storm.
- 3+ years' experience in systems administration, engineering, or operations.
- Experience configuring, managing, or troubleshooting systems at scale (cloud, virtualization, distributed networks, colocation, etc.).
- Linux operating system skills.
- Bachelor's degree or equivalent work experience.
- Experience with other databases such as MySQL, Redis, Hazelcast.
- Experience in any of the scripting languages: Python, Perl, Bash.
- Linux admin with experience in distributed cluster operations.
- Experience managing Hadoop clusters.
- Attitude toward automation; experience in workflow and operations automation.
- Strong analytical, troubleshooting, and problem-solving skills.
- Contributions to open-source projects are a plus.
- A certification is a must (Linux certification, Big Data certification, Red Hat System Administration certification, Apache Hadoop certification).

Interview process: Linux, scripting, and problem solving.

Required skills: Python, Hadoop, HDFS, HBase, Hive, YARN, Spark, Storm, Kafka, scripting, Linux.
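The quota-administration duty above ("notification in case of space problems") can be sketched as a small script. This is a minimal, illustrative Python example, not the client's actual tooling: it parses text in the column layout of `hdfs dfs -count -q` (QUOTA, REM_QUOTA, SPACE_QUOTA, REM_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME) and flags directories at or above a usage threshold; the sample data and the 90% threshold are assumptions.

```python
# Minimal sketch of a quota-notification check (illustrative only).
# Field layout follows `hdfs dfs -count -q` output:
#   QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME

def quota_alerts(count_q_output: str, threshold: float = 0.9) -> list:
    """Return (path, usage_ratio) for dirs at or above `threshold` of space quota."""
    alerts = []
    for line in count_q_output.strip().splitlines():
        fields = line.split()
        if len(fields) < 8:
            continue  # skip malformed lines
        space_quota, rem_space_quota, path = fields[2], fields[3], fields[-1]
        if space_quota in ("none", "inf"):
            continue  # no space quota set on this directory
        quota = int(space_quota)
        used = quota - int(rem_space_quota)
        if quota > 0 and used / quota >= threshold:
            alerts.append((path, used / quota))
    return alerts


# Hypothetical sample in the `hdfs dfs -count -q` column layout:
# /data/logs has a 10 TB space quota with 1 TB remaining (90% used);
# /data/tmp has no space quota set.
sample = """
       none     inf   10995116277760   1099511627776      12     340   9895604649984 /data/logs
       none     inf            none             inf        3      80     12884901888 /data/tmp
"""

for path, usage in quota_alerts(sample):
    print(f"WARNING: {path} at {usage:.0%} of space quota")
```

In practice a cron job would feed the script live output (e.g. via `subprocess` running the HDFS CLI) and send the warnings to the relevant team's alerting channel rather than printing them.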