Required – Shall have at least eight (8) years' experience in software development/engineering, including requirements analysis, software development, installation, integration, evaluation, enhancement, maintenance, testing, and problem diagnosis/resolution.
Required – Shall have demonstrated experience working with open-source NoSQL products that support highly distributed, massively parallel computation needs, such as HBase, CloudBase/Accumulo, Bigtable, etc.
Required – Shall have demonstrated work experience with the MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc.
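For reference, the MapReduce programming model named above can be illustrated outside Hadoop itself. The sketch below is a minimal, assumed word-count example in plain Python (not the Hadoop API): a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group intermediate (key, value) pairs by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts collected for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# Hypothetical input corpus for illustration only.
docs = ["big data big compute", "big table"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 3, 'data': 1, 'compute': 1, 'table': 1}
```

In Hadoop the same map and reduce functions would run in parallel across HDFS blocks, with the framework performing the shuffle between the two phases.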
Required – Shall have demonstrated work experience with the Hadoop Distributed File System (HDFS).
Required – Shall have demonstrated work experience with serialization formats such as JSON and/or BSON.
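As a minimal sketch of the serialization experience described above, the following uses Python's standard-library `json` module for a lossless encode/decode round trip. BSON is not shown because it requires a third-party library (e.g., the `bson` module shipped with pymongo); the record contents are assumptions for illustration.

```python
import json

# Hypothetical record of the kind exchanged between distributed services.
record = {"id": 42, "name": "sensor-7", "readings": [3.1, 2.7]}

# Serialize the record to a JSON string.
encoded = json.dumps(record)

# Deserialize back into a Python object; the round trip is lossless
# for JSON-representable types (dicts, lists, strings, numbers, bools).
decoded = json.loads(encoded)
assert decoded == record
```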
Required – Shall have demonstrated work experience in the requirements analysis and design of at least one object-oriented system.
Required – Shall have demonstrated work experience developing solutions integrating and extending FOSS/COTS products.
Required – Shall have at least three (3) years' experience in software integration and software testing, to include developing and implementing test plans and test scripts.
Required – Shall have demonstrated technical writing skills and shall have generated technical documents in support of software development projects.
Required – Experience developing and deploying: data-driven analytics; event-driven analytics; sets of analytics orchestrated through rules engines.
Required – In addition, the candidate shall have demonstrated work experience in at least four (4) of the following desired characteristics:
• Apache Hadoop
• Apache Accumulo
• Apache Zookeeper
• Apache NiFi
• Java Programming
• Linux operating system monitoring and tuning
• Linux operating system level virtualization
• Committer/contributor to an open-source application
• Agile development experience
• Highly Desired – Familiarity with the microservices software development technique and container orchestration (e.g., Kubernetes)