What you’ll do:
- Work with team members to operationalize data pipelines and supporting cloud infrastructure
- Collaborate with external data producers and consumers to obtain and provide data through interfaces such as REST APIs and S3
- Provide day-to-day support for deploying Python-native data pipelines and perform data engineering tasks that enable data brokering and exchange capabilities
- Provide Tier 2/3 troubleshooting and incident resolution support for data pipelines in Production
What you’ll need to succeed:
- Active TS/SCI with poly required
- 4+ years of proven experience in data engineering, with expertise in designing, developing, and maintaining data ingestion, transformation, and loading pipelines and components
- Demonstrated experience in designing and deploying data pipelines leveraging AWS cloud infrastructure across multiple classification domains (e.g., IL5 to IL6+)
- Experience with Infrastructure-as-Code (IaC) tools, including Terraform, CloudFormation, or Ansible, to automate deployment of data pipeline cloud infrastructure
- Understanding of Risk Management Framework (RMF) security principles and hands-on experience implementing security controls for data pipelines in cloud environments
- Strong scripting and programming skills in languages such as Go, Python, and Bash
- Experience with data pipeline tools and technologies such as NiFi, Hadoop, HDFS, and Kafka; experience implementing data pipelines in the Cloudera Data Platform environment is highly preferred
- Strong communication skills, with the ability to clearly convey complex technical concepts