Mid-Level Cloud Administrator w/ Hadoop and Accumulo

Location: Annapolis Junction, MD
REQUIRED SKILLS/ABILITIES:
  • TO BE CONSIDERED FOR THIS POSITION YOU MUST HAVE AN ACTIVE TS/SCI W/ FULL SCOPE POLYGRAPH SECURITY CLEARANCE (U.S. CITIZENSHIP REQUIRED) 
Seeking a Mid-Level Cloud Administrator to augment the existing Operations team for a large analytic cloud repository. A successful candidate must have experience as a system administrator for large Hadoop- and Accumulo-based clusters and a strong background in troubleshooting operational system issues as they arise.
 
•         Required - At least two (2) years of experience managing and monitoring large Hadoop clusters (>200 nodes).
•         Required - At least two (2) years of experience in the implementation and technical support of multi-platform, multi-system networks, including those composed of Cisco and UNIX- or Linux-based hardware platforms, to encompass diagnosing network performance shortcomings and designing and implementing performance improvements.
•         Required - At least two (2) years of experience implementing network solutions for complex, high-performance systems composed of UNIX- or Linux-based hardware platforms.
•         Required - Demonstrated ability to work with open-source NoSQL products that support highly distributed, massively parallel computation needs, such as HBase, CloudBase/Accumulo, BigTable, etc.
•         Required - Demonstrated work experience with the Hadoop Distributed File System (HDFS).
•         Highly Desired - Demonstrated work experience with Hadoop cloud technology.
•         Highly Desired - Experience with Bash, Python, and configuration management tools (e.g., Puppet, Chef).
•         Highly Desired - Experience in the planning, design, development, implementation, and technical support of multi-platform, multi-system networks, including those composed of Cisco and UNIX- or Linux-based hardware platforms, to encompass diagnosing network performance shortcomings and designing and implementing performance improvements.
•         Highly Desired - Strong Linux command-line skills.
•         Desired - Demonstrated work experience interfacing with hardware teams.
•         Desired - Demonstrated knowledge of analytical needs and requirements, query syntax, data flows, and traffic manipulation.
SPECIALIZED EXPERIENCE:
•         Required - Hadoop/Cloud Developer Certification or a comparable cloud system/service certification. Six (6) months of experience administering or implementing cloud technology will be accepted as a substitute for certification.
•         Required - At least two (2) years of experience writing software scripts using scripting languages such as Perl, Python, or Ruby for software automation.
•         Desired - Technical experience and knowledge of peer-to-peer distributed storage networks, peer-to-peer routing, and application messaging frameworks.
•         Desired - Significant experience provisioning and sustaining network infrastructures, including developing, operating, and managing networks required to operate in a secure PKI-, IPsec-, or VPN-enabled environment.
Education: N/A
 
NOTE: A degree in Communications, Computer Science, Mathematics, Accounting, Information Systems, Program Management, or a similar discipline will be considered a degree in a technical field.
 
Please learn more about us at http://www.onyxpoint.com