Job Details


Hadoop Infrastructure Engineer - Elite Tech Firm

Job Info:


Category: Development, Infrastructure
Company Description: Leading global provider of data, news and analytics
Salary: Highly Competitive, Depending on Experience
Position Type: Permanent
Location:
Job Number: 8549

Job Description:



Many of our client's most prevalent applications require real-time data analysis and business intelligence. Instead of using off-the-shelf applications, they use Hadoop and its ecosystem to provide a large-scale data platform with low-latency SLAs and large storage capabilities. This platform has revolutionized the way they manage and analyze data in a distributed environment.

About the Team:

We design software and hardware systems to support low-latency/high-volume requests, security, fault tolerance/high availability and easy customization. Our Hadoop Infrastructure Platform is built to fully automate deployment and operations using Chef; it is developed and open sourced on GitHub. With hundreds of applications depending on our platform, we are looking to grow our Hadoop Infrastructure team. That's where you come in.

We'll trust you to:

  •  Evaluate Hadoop projects across the ecosystem and extend and deploy them to exacting standards (high availability, big data clusters, elastic load tolerance)
  •  Develop automation, installation and monitoring of Hadoop ecosystem components in our open source infrastructure stack, specifically HBase, HDFS, MapReduce, YARN, Oozie, Pig, Hive, Tez, Spark and Kafka
  •  Dig deep into performance, scalability, capacity and reliability problems to resolve them
  •  Create application patterns for integrating internal application teams and external vendors into our infrastructure
  •  Troubleshoot and debug Hadoop ecosystem run-time issues
  •  Provide developer and operations documentation to educate peer teams

You'll need to have most of the following:

  •  Experience building out and scaling a Hadoop-based or UNIX-hosted database infrastructure for an enterprise
  •  Proven experience with Hadoop infrastructure or a strong and diverse background of distributed cluster management and operations experience
  •  Experience writing software in a continuous build and automated deployment environment

We'd love to see:

  •  DevOps or System Administration experience using Chef/Puppet/Ansible for system configuration, or quality shell scripting for systems management (error handling, idempotency, configuration management)
  •  In-depth knowledge of low-level Linux, UNIX networking and C system calls for high performance computing
  •  Experience with Java, Python or Ruby development (including testing with standard test frameworks and dependency management systems, knowledge of Java garbage collection fundamentals)
  •  Experience or exposure to the open source community (a well-curated blog, upstream accepted contribution or community presence)
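As a concrete illustration of the "quality shell scripting" point above (error handling, idempotency, configuration management), here is a minimal, hypothetical sketch; the paths and property name are invented for the example and are not part of the role's actual stack:

```shell
#!/bin/sh
# Hypothetical sketch of idempotent config management in shell.
set -eu  # error handling: abort on any failed command or unset variable

CONF_DIR="/tmp/demo_hadoop_conf"       # illustrative path, not a real deployment
CONF_FILE="$CONF_DIR/hdfs.conf"
DESIRED="dfs.replication=3"            # illustrative property

# mkdir -p is idempotent: safe to re-run whether or not the directory exists
mkdir -p "$CONF_DIR"

# Only rewrite the config when its content would actually change,
# so repeated runs converge to the same state without spurious updates
if [ ! -f "$CONF_FILE" ] || [ "$(cat "$CONF_FILE")" != "$DESIRED" ]; then
  printf '%s\n' "$DESIRED" > "$CONF_FILE"
  echo "updated"
else
  echo "unchanged"
fi
```

Run twice, the script reports "updated" then "unchanged" — the convergence behavior configuration tools like Chef build on.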


All qualified candidates are encouraged to apply by submitting their resume as an MS Word document, including a cover letter with a summary of relevant qualifications that clearly highlights any special or relevant experience.


Please Note: All inquiries will be treated with the utmost confidentiality. Your resume will not be submitted to any client company without your prior knowledge and consent.


Contact Recruiter
sarah.chimino@andiamogo.com
Senior Technical Recruiter
Andiamo Partners | 90 Broad Street, Suite 1501, New York, NY 10004

