Responsibilities
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver tailored data solutions.
- Develop efficient data processing and transformation workflows to support analytics and reporting needs.
- Design, build, and maintain data pipelines and automated extract, transform, load (ETL) processes using languages such as Python and R and environments such as Jupyter Notebooks.
- Integrate disparate structured and unstructured data from APIs, databases, and cloud storage into unified datasets using ETL patterns, frameworks, and query techniques.
- Implement processes for data cleaning, transformation, and validation to ensure data accuracy, consistency, and compliance with security and privacy policies.
- Develop dashboards, visualizations, and analytical products leveraging QuickSight or other mission-approved tools to support operational decision-making.
- Provide full lifecycle assistance in deploying, optimizing, and maintaining complex code with data processing routines running in development, test, and production environments.
- Optimize code through advanced algorithmic concepts to facilitate more efficient data processing.
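As a rough illustration of the ETL work described above, the following is a minimal sketch of an extract, transform, load pipeline in Python using only the standard library. All names here (the `RAW_CSV` sample, the `payments` table, the column names) are hypothetical, chosen for the example; a real pipeline would pull from APIs, databases, or cloud storage instead of an inline string.

```python
import csv
import io
import sqlite3

# Hypothetical raw extract; in practice this would come from an API,
# a database query, or a cloud storage object.
RAW_CSV = """id,name,amount
1, Alice ,100.5
2,Bob,
3,Carol,42
"""

def extract(raw: str) -> list[dict]:
    """Parse raw CSV text into row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Clean and validate rows: strip whitespace, drop rows missing an amount."""
    cleaned = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:
            continue  # validation step: skip incomplete records
        cleaned.append((int(row["id"]), row["name"].strip(), float(amount)))
    return cleaned

def load(records: list[tuple], conn: sqlite3.Connection) -> None:
    """Load validated records into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments (id INTEGER, name TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone())
# → (2, 142.5): the row with a missing amount was dropped during validation
```

The same extract/transform/load separation scales up when each stage is swapped for a production component (API client, dataframe transformations, warehouse loader) while the validation logic stays testable in isolation.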
Work Arrangement
Hybrid
Team Structure
Interdisciplinary
Additional Information
- This federal position requires a Public Trust clearance.
- Candidates must be U.S. citizens to be eligible.